Artificial Intelligence (AI) is much more than just a buzzword nowadays. It powers facial recognition in smartphones and computers, translation between languages, systems that filter spam emails and identify toxic content on social media, and can even detect cancerous tumours. These examples, along with countless other existing and emerging applications of AI, help make people's daily lives easier, especially in the developed world.
As of October 2021, 44 countries were reported to have their own national AI strategic plans, showing their willingness to forge ahead in the global AI race. These include emerging economies like China and India, which are leading the way in building national AI plans within the developing world.
Oxford Insights, a consultancy firm that advises organisations and governments on matters relating to digital transformation, has ranked the preparedness of 160 countries across the world when it comes to using AI in public services. The US ranks first in their 2021 Government AI Readiness Index, followed by Singapore and the UK.
Notably, the lowest-scoring regions in this index include much of the developing world, such as sub-Saharan Africa, the Caribbean and Latin America, as well as some Central and South Asian countries.
The developed world has an inevitable edge in making rapid progress in the AI revolution. With greater economic capacity, these wealthier countries are naturally best positioned to make large investments in the research and development needed for creating modern AI models.
In contrast, developing countries often have more urgent priorities, such as education, sanitation, healthcare and feeding the population, which override any significant investment in digital transformation. In this climate, AI could widen the digital divide that already exists between developed and developing countries.
The hidden costs of modern AI
AI is traditionally defined as “the science and engineering of making intelligent machines”. To solve problems and perform tasks, AI models generally look at past information and learn rules for making predictions based on unique patterns in the data.
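The idea of learning a rule from past information can be sketched in a few lines of code. The example below uses a hypothetical toy dataset of tumour sizes (the numbers and labels are invented for illustration, not real medical data): the program looks at labelled past cases, finds the size threshold that best separates them, and then applies that learned rule to a new case.

```python
# A minimal sketch of "learning a rule from past data".
# The tumour sizes and labels below are a hypothetical toy dataset.

def learn_threshold(examples):
    """Pick the threshold that misclassifies the fewest past examples."""
    candidates = sorted(size for size, _ in examples)
    best_threshold, best_errors = None, len(examples) + 1
    for t in candidates:
        # Count how many past cases this candidate rule gets wrong.
        errors = sum(
            1 for size, label in examples
            if (size >= t) != (label == "malignant")
        )
        if errors < best_errors:
            best_threshold, best_errors = t, errors
    return best_threshold

# Past information: (tumour size in mm, known diagnosis)
past_cases = [(2, "benign"), (5, "benign"), (14, "malignant"), (18, "malignant")]
threshold = learn_threshold(past_cases)

# The learned rule can now make a prediction about a new, unseen case.
new_size = 16
prediction = "malignant" if new_size >= threshold else "benign"
print(threshold, prediction)  # → 14 malignant
```

Real AI models learn far richer rules than a single threshold, but the principle is the same: past examples in, predictive rule out.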
AI is a broad term, encompassing machine learning and its more powerful subset, deep learning. While classical machine learning tends to be suitable when learning from smaller, well-organised datasets, deep learning algorithms are more suited to complex, real-world problems – for example, predicting respiratory diseases using chest X-ray images.
Many modern AI-driven applications, from the Google translate feature to robot-assisted surgical procedures, leverage deep neural networks. These are a special type of deep learning model loosely based on the architecture of the human brain.
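To give a flavour of what "loosely based on the architecture of the human brain" means, here is a minimal sketch of a single artificial neuron, the basic unit that deep networks stack by the millions. It is a toy, not a production model: the neuron repeatedly nudges its weights (gradient descent) until it reproduces the logical OR function from four examples.

```python
import math
import random

# A single artificial "neuron": a weighted sum of inputs squashed
# into (0, 1) by a sigmoid. Deep neural networks stack millions of
# such units; this toy learns the logical OR function.

random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(2)]
bias = random.uniform(-1, 1)

def neuron(x):
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 / (1 + math.exp(-z))

# Training data: every input/output pair of the OR function.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

for _ in range(5000):  # many small corrections (gradient descent)
    for x, target in data:
        out = neuron(x)
        grad = (out - target) * out * (1 - out)  # sigmoid derivative
        for i in range(2):
            weights[i] -= 0.5 * grad * x[i]
        bias -= 0.5 * grad

predictions = [round(neuron(x)) for x, _ in data]
print(predictions)  # → [0, 1, 1, 1]
```

Even this trivial network needs thousands of passes over its four examples, which hints at why full-scale deep learning demands so much data and computation.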
Crucially, neural networks are data hungry, often requiring millions of examples to learn how to perform a new task well. This means they require complex data storage and modern computing hardware, well beyond what simpler machine learning models need. Such large-scale computing infrastructure is generally unaffordable for developing nations.