
AI: Artificial Intelligence or Artificial Ignorance?

August 22, 2019
Editor(s): Matthew Trachevski
Writer(s): Lachlan Woods, Nicholas Bea, Michelle Koo, Preethika Padmanabhan

For a society indisputably flourishing in the technological age, it only makes sense that AI sits at the forefront of such massive change. From $500 million research companies such as DeepMind, to AI platforms like SenSat that are capable of creating digital copies of physical environments, AI is fast becoming one of the most prevalent and sophisticated aspects of technological advancement. However, the outcomes of this futuristic wave are not as straightforward as many of us might like. AI will not simply solve all of humanity’s problems. While that goal may sit at the motivational centre of many AI-oriented companies, the social and economic drawbacks of unbridled AI and technological development must be taken into account.

Put simply, artificial intelligence is a “hypothetical machine that exhibits behaviour at least as skilful and flexible as humans do.” A world leader in AI research and application, Alphabet’s DeepMind has demonstrated not only the power but also the promise that comes with harnessing artificial intelligence. Deep reinforcement learning, a technique by which machines “learn” how to perform a task through repeated trial and error, has provided tech experts with proof of that power. Human records on old Atari games such as Breakout and Space Invaders were soundly beaten by DeepMind’s single neural network system, which took only raw pixels as input and used a variant of Q-learning to master each game with an unchanged learning algorithm. In short, in a period of time scarcely conceivable to the human mind, the AI had run through billions of scenarios, learning which decisions brought success and which brought failure, before making the move that would ultimately prove correct. The success of deep reinforcement learning is not an isolated event. AlphaGo bested every human Go player it came across through a similar method, playing the game over and over again and tracking which decisions reaped the best reward. On the back of these simulated successes, DeepMind continues to look to the future and to harder problems, such as curing cancer and providing a clean, sustainable global energy source. Yet simply believing something to be possible brings it no closer to fruition.
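
To make the idea of learning by trial and error more concrete, the sketch below implements tabular Q-learning on a toy five-state corridor. It is only a minimal illustration of the underlying update rule under invented settings (the corridor, the rewards and the hyperparameters are all made up for the example); DeepMind’s deep Q-network replaces the small lookup table here with a neural network that reads raw pixels.

```python
# A minimal, self-contained sketch of tabular Q-learning: an agent learns, purely
# by trial and error, which action in each state leads to the highest reward.
# This is a toy illustration of the update rule, not DeepMind's deep Q-network,
# which replaces the small table below with a neural network reading raw pixels.
import random

N_STATES = 5        # a tiny corridor of states 0..4; state 4 holds the only reward
ACTIONS = [-1, +1]  # move left or move right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

# Q[state][action_index] starts at zero: the agent knows nothing about the "game".
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Apply an action; reaching the rightmost state pays a reward of 1."""
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

for episode in range(500):              # play the game over and over again
    state, done = 0, False
    while not done:
        # Explore occasionally (or when tied); otherwise exploit the best known action.
        if random.random() < EPSILON or Q[state][0] == Q[state][1]:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: Q[state][i])
        next_state, reward, done = step(state, ACTIONS[a])
        # Core update: nudge the estimate toward reward plus discounted future value.
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state

# After training, the learned values steer the agent rightward, toward the reward.
print(["left" if q[0] > q[1] else "right" for q in Q[:-1]])  # expect all "right"
```

The same principle, scaled up enormously in data and computation, is what let DeepMind’s systems rediscover winning strategies in Atari games and Go without being told the rules of success in advance.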

There are numerous social and economic factors that come into play when even experimenting with AI. DeepMind is losing money: $572 million in 2018, more than $1 billion over the past three years, and over $1 billion in debt. On paper, this looks as dire a situation as any. And yet, DeepMind is hardly losing supporters. It maintains that its research is on the right track, and its investors and many supporters continue to back its projects. Many economists, however, view AI as an antagonist. Job automation, according to prominent researchers, will catalyse mass unemployment on a scale much larger than the Industrial Revolution. It is estimated that, by 2028, up to 47% of jobs in the United States could be automated.

[Figure omitted. Source: Consultancy.uk]

However, the supposed wave of automation rising all around us is not entirely convincing. Unemployment rates are at historic lows. Bank branches and call centres still run mainly on human effort. Despite the automation advances introduced by AI, productivity growth is stalling and, in many developed nations, falling. Labour productivity in the UK, for example, has grown over the past year at its slowest rate since 1976, and the top companies that invest in and utilise AI religiously have experienced stagnant productivity.

This leads to questions about the social impact of AI. Has AI learning already peaked? Some evidence suggests that current AI learning techniques may have already produced the achievements they are capable of, and that further experimentation will yield little more than diminishing returns. If that is the case, a significant proportion of the enormous amount of money, time and computing power spent perfecting machine learning will be for naught. Investments will be wasted, and companies could fold. Furthermore, the scale of AI is simply too large for many to truly comprehend: oceans of data must be generated to run AI systems and their training simulations efficiently. It can reach the point where it is more feasible to simply hire a data analyst to do the work manually. Thus a “pseudo-AI” emerges, whereby firms appear to have used an AI bot to reach a certain point or interact with a customer when, in actuality, it is just a human behind a screen.

Engineer.ai, an Indian start-up claiming to offer a self-building, AI-driven application platform, was found to have used the smokescreen of AI to attract investors and nearly $30 million in funding from a SoftBank-owned fund and others, while relying on an abundance of human engineers to create application after application. The company’s claim that AI wrote 80% of the code for these apps was found to be grossly misleading, with Wall Street Journal investigators discovering that the code was written by human engineers in India. When asked which aspects of Engineer.ai actually involved AI, the company pointed to natural language processing and decision trees, two data-processing techniques that arguably do not fall under the banner of artificial intelligence.
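
As a rough illustration of why critics describe such decision trees as data processing rather than intelligence, the hypothetical sketch below shows that this kind of logic boils down to a cascade of fixed if/else rules over structured input. The function and field names are invented for the example and are not taken from Engineer.ai’s actual system.

```python
# Hypothetical sketch only: the field names and rules below are invented and are
# not drawn from Engineer.ai's real product. The point is that a simple decision
# tree reduces to fixed if/else branching over structured input -- useful data
# processing, but a long way from a system that builds applications by itself.
def pick_app_template(spec: dict) -> str:
    """Choose an app template from a client specification using fixed branching rules."""
    if spec.get("needs_payments"):
        return "e-commerce template"
    if spec.get("expected_users", 0) > 100_000:
        return "high-scale backend template"
    return "basic brochure template"

print(pick_app_template({"needs_payments": True}))  # -> "e-commerce template"
```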

[Figure omitted. Source: Deloitte]

Therein lies the current problem surrounding AI: hype threatens to obscure its actual usefulness. Undoubtedly, AI has worth. Funding for AI start-ups reached $31 billion in 2018, and many technology giants, such as Google, Amazon and Uber, voice their social and financial support for companies in the business of AI. But is this hype justified, or are companies simply jumping on the technology bandwagon? The exposure of companies such as Engineer.ai highlights a distressing factor in this mad rush for AI: in some cases, the AI barely exists. Many programs that run under the guise of AI are largely operated by people, and the machines powerful enough to support AI experimentation cannot support it indefinitely; the cost of running them is simply too high. Yet large companies, investors and the general public have seemingly been duped into believing AI is more than what it is. This lack of transparency, coupled with the extreme losses suffered by AI-centred companies such as DeepMind, has produced conflicting responses. Some companies, such as Facebook, have halted their human-assisted AI programs for transcribing speech to text, while others like Amazon continue to work on their own projects, most notably Alexa.

Great care must be taken to ensure that AI delivers on its positive potential and is not merely exploited for profiteering ends. AI is still largely misunderstood, and undeniably alluring. How many years or decades it will take for artificial intelligence to produce clear practical results remains to be seen. For now, technologists and enthusiasts alike must treat AI with clear eyes, recognising its shortcomings, to plant the seeds of an AI future that has humanity at its centre.


The CAINZ Digest is published by CAINZ, a student society affiliated with the Faculty of Business at the University of Melbourne. Opinions published are not necessarily those of the publishers, printers or editors. CAINZ and the University of Melbourne do not accept any responsibility for the accuracy of information contained in the publication.
