AI – Its Impact on Organisations Now and in the Future
The current climate has intensified our need to understand emerging technologies, none more so than AI. What are the risks and unintended consequences of operating in this new world? Does the technology sector have our best interests at heart? What roles do ethics and human rights play in developing and deploying artificial intelligence and machine learning?
To help address these big questions, we spoke with Dr Catriona Wallace, one of Australia’s leading thinkers in the AI space.
In addition to her role as Executive Director at the Gradient Institute, a responsible AI research institute, Dr Wallace has been recognised by the Australian Financial Review as “The most influential woman in business and entrepreneurship”. An expert in artificial intelligence, Dr Wallace also serves as an adjunct professor, keynote speaker, and chair of an artificial intelligence venture capital fund.
How has AI become so important within technology?
According to Dr Wallace, the next big wave in technology will be delivered on the back of Web3. Dr Wallace explains, “Web2, which came into use around 2004, is what we all know and experience as the internet right now. Web3, which is starting to emerge, is a response by many interested parties in the tech and internet communities to a growing desire for a decentralised web. We call Web3 the semantic web: a machine-readable, decentralised web infrastructure, rather than the very heavily centralised Web2 we have now. So expect Web2 to continue, and Web3 to start to come in, with blockchain and cryptocurrency being some of the core technologies that underpin it.”
Of course, any discussion of a decentralised web immediately raises the question of its potential unintended and unforeseen consequences. According to Dr Wallace, this is where ethical technology and ethical leadership in AI become extremely important.
One example of in-built bias can be found in early versions of facial recognition software. While these particular biases have since been addressed, their emergence serves as a cautionary tale. Early facial recognition developers worked with data sets that were unwittingly skewed towards white males. In hindsight, the explanation is straightforward: many of the images available online in the early 2000s were of white men, largely because the actors, musicians, sports stars and other celebrities of the western world at that time were predominantly male and Caucasian. As a result, the facial recognition software of the day became extremely good at recognising the faces of white, middle-aged men and far less accurate at identifying the faces of women and people of colour. Once the problem was discovered and rectified, it became apparent how easily AI can be biased, depending on the data used to train the algorithms that drive decision-making or automation in a business or government.
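The mechanism behind this kind of skew can be sketched in a few lines. The following is a purely illustrative toy (hypothetical numbers and function names, not any real recognition system): a “face template” is learned as the average of the training features, so when one group dominates the training set, the template lands near that group and the underrepresented group falls outside the recognition threshold.

```python
# Toy sketch of training-data skew (illustrative only; all names and
# numbers here are hypothetical, not from any real system).

def build_template(training_features):
    """The 'model' is simply the mean of the training features."""
    return sum(training_features) / len(training_features)

def recognised(feature, template, threshold=2.0):
    """A face is 'recognised' if its feature is close to the template."""
    return abs(feature - template) <= threshold

# 95 examples from group A (features near 0) and only 5 from group B
# (features near 6) -- a data set dominated by one demographic.
training = [0.0] * 95 + [6.0] * 5

template = build_template(training)  # lands near group A's features (0.3)

print(recognised(0.0, template))  # group A face: recognised
print(recognised(6.0, template))  # group B face: rejected
```

The fix in practice is the same as in the article: balance the training data, and the template (and the model's accuracy) shifts accordingly.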
According to Dr Wallace, the AI sector is full of bias. “If the data sets used to train the algorithms within the AI are historical data sets, and reflect history or society’s existing biases, then those biases will be coded into the machines that will be running our world.
“Recent history is littered with example after example of this kind of thing occurring in the big tech companies you would think would know better. Take Amazon, for example, which used a recruitment app that only recommended males for positions. We have the example of the Apple Card, along with Goldman Sachs, which gave male applicants ten times the credit limit of female applicants. Then we have the example of Optum, a big healthcare provider in the US that allocated additional healthcare funding to higher socioeconomic groups and neglected the lower socioeconomic groups. These examples illustrate how the world is already living with these problems. It is important to ensure that, moving forward, we don’t add to those terrible examples of AI done unethically.”
Current challenges concerning AI in the security space
According to Dr Wallace, another challenge concerning AI, especially in the security space, is that many people and companies are taking licence and labelling anything automated or rules-based as AI, which it is not.
“I think about AI more in the context of machine learning,” Dr Wallace says. “Going forward, machine learning is the big platform and capability that most AI will be built upon. Machine learning is how we describe software, or algorithms, that can learn independently: by performing a task repeatedly, machine learning systems become better and better at that task or function.

“Right now, there are a couple of different types of machine learning. The two significant types are supervised machine learning, the more traditional method, and unsupervised machine learning. Then there is something called reinforcement learning. The best way to think of a good AI system is something that involves data, or big data, algorithms, analytics, decision-making, and then automation. Those are the basic components of a whole AI system, but it is the machine learning capability that those in the AI space are most excited about. Now we need proper ethical leadership in place to ensure that we get the best out of the new technologies that are emerging.”
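The supervised learning described above, where a system improves by repeating a task against labelled examples, can be sketched minimally. This is an illustrative toy, not a production technique: a one-variable linear model is repeatedly nudged towards labelled data by gradient descent, and its predictions improve with each pass.

```python
# Minimal supervised-learning sketch (illustrative only): a model that
# repeatedly adjusts itself against labelled examples and improves.

def train(examples, epochs=200, lr=0.01):
    """Fit y ~ w*x + b by stochastic gradient descent on squared error."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in examples:
            err = (w * x + b) - y
            w -= lr * err * x  # nudge weight to reduce the error
            b -= lr * err      # nudge bias to reduce the error
    return w, b

def mean_squared_error(examples, w, b):
    return sum((w * x + b - y) ** 2 for x, y in examples) / len(examples)

# Labelled data (the "supervision"): the hidden relationship is y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(-5, 6)]

w, b = train(data)
print(round(w, 2), round(b, 2))  # approaches w = 2, b = 1 with repetition
```

Unsupervised learning, by contrast, would receive the same inputs without labels and look for structure on its own, while reinforcement learning would learn from rewards rather than labelled answers.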