Overview of Artificial Intelligence

Over the past 250 years, the primary drivers of economic growth have been technological innovations that became general-purpose technologies, creating waves of complementary innovations and opportunities. Artificial intelligence, and more specifically machine learning, is a machine's ability to keep improving its performance without humans having to explain exactly how to accomplish all the tasks it is given. In this overview, artificial intelligence (AI) therefore refers primarily to machine-learning algorithms (Brynjolfsson & McAfee, 2019).
PwC predicts that AI will add $16 trillion to the global economy by 2030, while McKinsey estimates the figure at $13 trillion (Cross, 2020a). AI's two attributes that will only grow in importance in the coming years are connectivity and updateability. AI can connect to an integrated network of innumerable similar applications, which enables it to process data and learn far more quickly than humans could ever hope to. Updateability means that AI stays current with the latest data, something no human can match (Harari, 2018).
The promise of AI has not yet materialised in its application, which has led some researchers to suggest that the field is heading for its third AI winter. The previous two AI winters occurred when the promise of what AI could do went unmet, research funding dried up and the importance of the research field was questioned (Cross, 2020a). In a Boston Consulting Group survey of 2,500 CEOs, seven out of ten stated that their AI projects had generated little benefit thus far, and two-fifths of the respondents who had invested significantly in AI had not realised any benefits. A PwC survey similarly found that only 4% of respondents planned to deploy AI across their organisations, down from 20% the year before (Cross, 2020b). Both surveys suggest a cooling of enthusiasm for AI and could indicate that another AI winter is approaching.
The revival of AI over the past few years can be ascribed to the convergence of three trends: improved AI algorithms, the use of big data, and cloud supercomputing (Miailhe & Hodes, 2017). These three trends not only drove the revival of AI but will also determine whether it is moving into a third winter, and they will shape the future use of AI, as discussed in the sections that follow.
AI Algorithms
One of the limiting factors of current AI algorithms is their lack of cognitive ability: they can correlate inputs with outputs, but they do so blindly, with no understanding of the broader context. Although they are powerful pattern-recognition tools, they lack the top-down reasoning that characterises the way humans approach problems and tasks. Instead, current AI algorithms are trained with bottom-up reasoning that requires mountains of big data, a serious limitation because the algorithms do not know how to handle situations where little or no data exist. If future algorithms can improve their cognitive ability by using top-down reasoning, relying less on bottom-up big data, they will more closely resemble the way humans approach problems and tasks and will be applied more broadly (Wilson, Daugherty & Davenport, 2019).
To create cognitive algorithms that use top-down reasoning, researchers are encouraged to widen the scope, rather than merely the volume, of the data from which machines learn. It is further suggested that researchers combine current machine-learning techniques with older, symbolic AI approaches, which emphasised formal logic, hierarchical categories and top-down reasoning (Cross, 2020e).
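To make the contrast concrete, the following minimal Python sketch pairs a bottom-up pattern-matcher with a hand-written top-down rule. The data, the rule and all function names are hypothetical, illustrating the hybrid approach in spirit only, not any particular research system.

```python
# Toy illustration of combining bottom-up pattern recognition with
# top-down symbolic rules. All data, names and rules are invented.

def nearest_neighbour(train, query):
    """Bottom-up learner: label a query by its closest training example."""
    features, label = min(
        train,
        key=lambda ex: sum((a - b) ** 2 for a, b in zip(ex[0], query)),
    )
    return label

def symbolic_rules(query):
    """Top-down knowledge: a formal rule that needs no training data."""
    size, _speed = query
    if size > 100:           # e.g. "anything this large cannot be a bird"
        return "not_bird"
    return None              # rule base is silent; defer to the learner

def hybrid_classify(train, query):
    """Apply top-down rules first; fall back to bottom-up matching."""
    verdict = symbolic_rules(query)
    return verdict if verdict is not None else nearest_neighbour(train, query)

train = [((2, 40), "bird"), ((90, 10), "not_bird")]
print(hybrid_classify(train, (3, 38)))    # learner answers: "bird"
print(hybrid_classify(train, (500, 5)))   # rule answers despite no nearby data
```

The second query shows the appeal of the hybrid: the rule handles a case far outside the training data, exactly where a purely bottom-up learner is weakest.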
Data Requirements
The data requirements of AI algorithms depend on whether algorithms develop the cognitive function of top-down reasoning in the future or remain limited to bottom-up reasoning, which requires vast amounts of training data. Ransbotham, Kiron, Gerbert and Reeves (2017) found that one of the key factors separating AI leaders from laggard organisations is their approach to data.
Current AI algorithms require large numbers of carefully labelled examples, and those labels usually have to be applied by humans; this preparation work is known as data wrangling and consumes approximately 80% of the time spent on an AI project (Cross, 2020d). The more complex an algorithm and the more parameters it has, the higher the cost in training data, and the more computing power, typically cloud computing, is required (Cross, 2020c).
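As a rough illustration of what such wrangling involves, the sketch below uses pandas to deduplicate raw examples and keep only the human-labelled rows. The dataset and column names are invented for the example.

```python
# A minimal sketch of typical data wrangling before training, assuming a
# hypothetical table of raw examples; column names are illustrative only.
import pandas as pd

raw = pd.DataFrame({
    "text": ["good product", "good product", None, "terrible support"],
    "human_label": ["positive", "positive", "negative", None],
})

clean = (
    raw.drop_duplicates(subset="text")        # remove repeated examples
       .dropna(subset=["text", "human_label"])  # keep only labelled, complete rows
       .reset_index(drop=True)
)
print(clean)   # only the carefully labelled examples survive
```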
There are some problems with training data that need to be taken into account:
a) In many cases the training data contain a bias that is then transferred to the algorithm, creating or reinforcing unfair bias (Cross, 2020d); a toy demonstration follows this list.
b) The use of personal data raises ethical constraints as well as privacy issues (Wilson et al., 2019).
c) Apart from the training data, AI algorithms also require a stream of new data to keep improving their predictive accuracy (Agrawal, Gans & Goldfarb, 2019).
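The toy sketch below illustrates point (a): a model fitted to historically biased labels simply reproduces that bias. The data and the majority-vote "model" are invented for illustration.

```python
# Toy illustration of bias transfer: a bias present in training labels is
# reproduced by any model that fits them. Data are invented.
from collections import defaultdict

# Hypothetical historical decisions, biased against group "B".
train = [("A", 1), ("A", 1), ("A", 0), ("B", 0), ("B", 0), ("B", 1)]

# "Model": predict the majority historical outcome for each group.
counts = defaultdict(lambda: [0, 0])
for group, label in train:
    counts[group][label] += 1

model = {g: int(c[1] >= c[0]) for g, c in counts.items()}
print(model)   # {'A': 1, 'B': 0} -- the historical bias survives training
```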
One way to deal with these data problems is to create synthetic virtual training data. Algorithms trained on synthetic data have been found to perform better than algorithms trained on real data alone, and synthetic data are also attractive because they sidestep the privacy concerns attached to personal data (Cross, 2020d).
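One simple way to generate such data, sketched below under the assumption that the real examples follow a roughly Gaussian distribution, is to fit that distribution and sample new look-alike examples from it; the numbers are invented, and real projects use far richer generators.

```python
# A minimal sketch of synthetic data generation: fit a Gaussian to scarce
# real examples and sample plentiful look-alikes from it.
import numpy as np

rng = np.random.default_rng(0)
real = np.array([[1.0, 2.0], [1.2, 1.9], [0.9, 2.2]])  # scarce real examples

mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

synthetic = rng.multivariate_normal(mean, cov, size=100)  # 100 look-alikes
augmented = np.vstack([real, synthetic])                  # train on both
print(augmented.shape)  # (103, 2)
```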
Computational Power
One of the key trends behind the revival of AI is broad access to supercomputing in the cloud, a market estimated at $70 billion in 2015 that, fuelled by the continual growth of big data, has expanded at a compound annual growth rate of more than 50% since 2010. The central law used to predict computing power, Moore's Law, states that computing power doubles on average every two years at a constant cost (Miailhe & Hodes, 2017).
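Stated as a formula, Moore's Law as given above implies power(t) = power(0) × 2^(t/2) for t in years, which the short snippet below illustrates.

```python
# Moore's Law as stated above: computing power doubles every two years
# at constant cost, i.e. power(t) = power(0) * 2 ** (t / 2).
def projected_power(base, years):
    return base * 2 ** (years / 2)

print(projected_power(1, 10))  # 32.0 -- a 32x gain over a decade
```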
The explosion in demand for computing power has put pressure on Moore's Law: shrinking computer chips is getting harder, and the associated benefits of doing so are not what they were. This has led to optimisation techniques such as changing the computer architecture to follow the structure of the data being processed, or eliminating the time AI models spend multiplying numbers by zero (Cross, 2020c).
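The zero-skipping idea can be illustrated with a toy sparse vector: storing only the non-zero entries means multiplications by zero are never performed at all. The vectors below are invented.

```python
# A toy version of the optimisation mentioned above: store only the
# non-zero entries of a sparse vector so multiplications by zero never run.
dense = [0.0, 3.0, 0.0, 0.0, 2.0, 0.0]
weights = [1.5, 0.5, 2.0, 1.0, 4.0, 3.0]

# Sparse representation: (index, value) pairs for non-zero entries only.
sparse = [(i, v) for i, v in enumerate(dense) if v != 0.0]

dot = sum(v * weights[i] for i, v in sparse)  # 2 multiplies instead of 6
print(dot)  # 9.5
```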
Other researchers are exploring alternative ideas such as quantum computing and neuromorphic chips, which are explained below (Cross, 2020c):
a) Quantum computing uses the counter-intuitive properties of quantum mechanics to provide big speed-ups for some types of computation in which a computer is trying to make trade-offs between millions of variables to arrive at a solution that minimises as many as possible (a toy classical sketch of this kind of problem follows this list).
b) Neuromorphic chips are inspired by biology: chip makers are investigating chips containing components designed to mimic more closely the electrical behaviour of the neurons that make up biological brains.
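To give a feel for the optimisation problem in (a), the toy sketch below brute-forces a trade-off cost over just three binary variables. The trade-off values are invented; real instances with millions of variables are exactly what makes brute force infeasible and quantum speed-ups attractive.

```python
# A tiny classical sketch of the problem class described in (a): choose
# binary settings for a handful of variables to minimise a cost full of
# pairwise trade-offs. Values are invented for illustration.
from itertools import product

# Hypothetical trade-offs: (i, j, penalty incurred if variables i and j agree)
tradeoffs = [(0, 1, 3.0), (1, 2, -2.0), (0, 2, 1.5)]

def cost(assignment):
    return sum(p for i, j, p in tradeoffs if assignment[i] == assignment[j])

best = min(product([0, 1], repeat=3), key=cost)  # try all 8 assignments
print(best, cost(best))  # the assignment with the fewest costly agreements
```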
Whatever form future hardware and algorithms take, companies should take full advantage of the collaboration between AI and humans. They must understand how humans can most effectively augment machines, how machines can enhance what humans do best, and how to redesign business processes to support the partnership (Wilson et al., 2019).
REFERENCES
Agrawal, A., Gans, J., & Goldfarb, A. (2019). Is your company’s data actually valuable in the AI era? Boston, MA: Harvard Business Review Press.
Brynjolfsson, E., & McAfee, A. (2019). The business of artificial intelligence. Boston, MA: Harvard Business Review Press.
Cross, T. (2020a, 11 June). An understanding of AI's limitations is starting to sink in. Economist. Retrieved from https://www.economist.com/technology-quarterly/2020/06/11/an-understanding-of-ais-limitations-is-starting-to-sink-in
Cross, T. (2020b, 11 June). Businesses are finding AI hard to adopt. Economist. Retrieved from https://www.economist.com/technology-quarterly/2020/06/11/businesses-are-finding-ai-hard-to-adopt
Cross, T. (2020c, 11 June). The cost of training machines is becoming a problem. Economist. Retrieved from https://www.economist.com/technology-quarterly/2020/06/11/the-cost-of-training-machines-is-becoming-a-problem
Cross, T. (2020d, 11 June). For AI, data are harder to come by than you think. Economist. Retrieved from https://www.economist.com/technology-quarterly/2020/06/11/for-ai-data-are-harder-to-come-by-than-you-think
Cross, T. (2020e, 11 June). Driverless cars show the limits of today's AI. Economist. Retrieved from https://www.economist.com/technology-quarterly/2020/06/11/driverless-cars-show-the-limits-of-todays-ai
Harari, Y.N. (2018). 21 Lessons for the 21st Century. Kindle Edition. Random House.
Miailhe, N., & Hodes, C. (2017). The third age of artificial intelligence. The Journal of Field Actions, 17, 6–11.
Ransbotham, S., Kiron, D., Gerbert, P., & Reeves, M. (2017). Reshaping business with artificial intelligence. MIT Sloan Management Review, Fall, 1–22.
Wilson, H.J., Daugherty, P., & Davenport, C. (2019). The future of AI will be about less data, not more. Boston, MA: Harvard Business Review Press.