
Will AI Take Over the World? Examining the Future of Artificial Intelligence

As time passes, fears, hopes and questions about AI have become increasingly urgent. Building machines as smart as humans (or smarter than us: what kind of robot grudge would that bring?) appears to be the next step for technology, and that prospect has a dark side. If we continue to develop artificial intelligence, could AI systems one day come to control human lives? Will machines take over our role and 'keep us in a cage'? This opinion piece looks at the reasons, scope and possible future of AI.

The Origins of Artificial Intelligence

Understanding the early history of AI helps us make sense of its current approaches and directions, and to forecast its future repercussions. The field took shape in the mid-20th century. Alan Turing, the mathematician and computer scientist whose 'universal machine' provided a theoretical basis for AI, inspired John McCarthy, one of the founders of the field, who organised the 1956 Dartmouth Conference that established AI as a separate academic discipline and introduced the term 'artificial intelligence'.

Early AI projects took inspiration from one of the most general human activities, symbolic reasoning, and used algorithms that attempted to formalise the steps a human mind might employ to solve problems. Both the Logic Theorist and the General Problem Solver (GPS) set their sights high. The GPS was inspired by the human mind, and its aim was described as follows: its procedures were designed to imitate those of a person who has become acquainted with the ground rules of a particular problem-solving domain and can now make unrehearsed decisions across the entire input space, drawing on knowledge, whether procedural, logical, psychological, skilled or intuitive, that is normally acquired through long and arduous experience. These first AI projects were hamstrung by the computational power and algorithms available at the time. Nevertheless, a long lineage connects what has come to be known as 'Good Old-Fashioned AI' with current research.

This early, symbol-based AI was followed by several decades punctuated by 'AI winters', when high expectations went unmet, interspersed with hopeful spurts that eventually brought us to the machine-learning approaches dominant today. The availability of vast computing resources and large amounts of data in recent decades has ushered in another golden age of AI research built on techniques such as neural networks and deep learning.

Current State of AI Technology

These are the days of big advances in artificial intelligence: machine learning, natural language processing and computer vision are pervasive, from the Siris and Alexas that run our lives to self-driving cars and the recommendation systems that help us avoid the dreck on streaming TV. Rapid improvements in machine learning are especially important. In particular, the family of techniques known collectively as 'deep learning' has brought remarkable improvements in the ability of AI systems to recognise objects in images, translate languages and even play games, with programs recently defeating top human players at Go and chess.

And yet, despite these accomplishments, AI still faces two formidable challenges. First, so-called general intelligence or AGI (Artificial General Intelligence), the ability of an AI system to understand, learn and apply information across a broad spectrum of tasks at or beyond the level of human intelligence, remains elusive. Current AI systems are highly specialised: they perform very well in narrow task domains but flounder at tasks requiring general reasoning, or need excessive amounts of human supervision to transfer skills across tasks.

Second, ethical and societal concerns have been on the table for some time. There is growing awareness of the bias problem in AI: the difficulty of getting algorithms to work correctly and fairly for all people, not just those with whom the developers are most familiar. There are also concerns that AI will put people out of work, or be put to malevolent uses such as intrusive surveillance or autonomous weaponry. Meeting these challenges requires putting technologists in dialogue with ethicists, policymakers, workers and other stakeholders, so that we harness the power of AI in ways that respect human values and benefit society as a whole.

Potential Scenarios of AI Takeover

'AI takeover' covers a continuum of futures, from utopia to dystopia. One rosy interpretation is that we might attain symbiosis with an AI 'superintelligence', combining our strengths to solve the world's problems: AI would focus on what computers do best, leaving humans to our comparative advantage, cultural and social intelligence. Together we might become something like a super-species, perhaps even exploring the stars. Such a future could foster workplaces with greater autonomy, intellectual stimulation and job satisfaction as technology absorbs mundane, repetitive tasks. AI-driven medical diagnosis could become extraordinarily capable, spotting a far wider range of conditions far earlier than is possible today. AI-powered educational platforms could transform the sector and the learning experience. In the same vein, huge advances would be possible in areas such as environmental remediation and green sources of energy.

On the other hand, a darker set of forecasts predicts that, without effective regulation, AI could create dramatic social ruptures. The widespread automation of jobs could lead to massive joblessness and economic inequality. Certain human tasks will undoubtedly be automated and new jobs created in AI-driven industries, but in a potentially rocky transition not all workers will have the skills for the new positions that emerge. AI systems could also be 'weaponised' for malevolent purposes, such as cyberattacks or the dissemination of disinformation, posing significant threats to security and democracy.

The most dystopian scenarios feature an artificial general intelligence that becomes smarter than people and, perhaps, seeks to dominate or eliminate us. This remains pure speculation, but some of the best-known figures in science and technology, including Stephen Hawking and Elon Musk, have voiced concerns about the existential risk AI could pose. Ensuring that AI is developed safely and ethically is therefore of paramount importance.

The Role of Ethics and Regulation in AI Development

Given the potential harms, as well as the benefits, that AI development and deployment may yield, organisations such as the Minderoo Foundation argue that AI warrants the highest level of ethical standards and regulatory frameworks. Ethical AI should embody fairness, transparency, accountability and respect for human rights. To make those ideals real, the Minderoo Foundation's Global Ethics Initiative is calling on researchers, industry leaders and policymakers to develop the guidelines and standards that give AI a conscience.

Bias is a central ethical issue, and it boils down to this: if algorithms inherit and amplify social inequalities found in human-generated data, we risk a society with even worse disparities than today's. Biased data is one problem, but fixing it can introduce biases of its own, and it raises a further question: whose voice is heard in the AI development process? These issues must be addressed to build public trust, which in turn requires making AI systems more transparent and explainable.

Regulation is also necessary for oversight, including safeguarding the public interest. When we think about regulation, we typically think of how governments oversee certain types of businesses and industries, and this increasingly includes AI applications. Governments and international organisations are announcing and implementing a growing number of AI-specific regulations; data-privacy and security legislation, for instance, will almost certainly be a central component of any contemplated regulation of AI systems. Regulatory schemes need to rest on reliable technical standards, yet they must also keep pace with technological change so as not to stifle the development, use and improvement of the systems they seek to regulate. Fortunately, collaboration between the private and public sectors is growing, which augurs well for a measured, balanced regulatory approach that supports innovation while fostering the public good.

The Future of AI and Human Interaction

In the long run, little will shape human interaction and human society more than the future of AI. If the trajectory of AI technology continues, and if anything the trend seems to be accelerating, its integration into the main currents of human affairs will become more comprehensive and its impact more profound. In medicine, AI-powered automated diagnostics and tailored treatment plans could transform clinical practice. In education, AI tutors and adaptive learning systems could offer personalised learning experiences.

However, the rapid incorporation of AI into the fabric of daily existence raises pivotal questions about human control and our place alongside AI systems, including the risk that we erode our own thinking and decision-making skills through over-reliance on AI. To prevent humans from losing control over their machines, and the agency that control implies, we will need to preserve independent human reasoning and to question the apparent infallibility of push-button technology.

Similarly, AI's role in society will depend heavily on how equitably its benefits are distributed: how do we ensure that even the least advantaged can profit from advances in AI technology? At the core should be public policies and initiatives that expand digital literacy across the board and give all citizens access to upskilling and reskilling pathways.

Preparing for an AI-Driven Future

An intelligent response to an AI-driven future must focus on education and workforce development, as well as public policy that takes into account our future position and relationship with new AI technologies. As new AI technologies emerge, new jobs will appear and old ones will disappear. Education will therefore need to emphasise STEM (science, technology, engineering and mathematics) alongside broader, interdisciplinary, curiosity-driven learning that today's standardised curricula do not easily accommodate. Universities will also have to expand their teaching of ethics, the humanities and the social sciences, whose so-called 'soft' skills will be essential.

Second, workforce development programmes should encourage lifelong learning, with training schemes, apprenticeships and collaboration between industry and education. Cultures that support learning during periods of 'churn', transitions between jobs or industries, will be more resilient in the face of AI-driven disruption, and their workers will remain relevant and competitive.

Public policy, done well, can be crucial in shaping this world: engaging academics and private-sector experts, leading a consensus process rather than dictating one, devising strategies that encompass the economic, ethical and social dimensions of AI, setting regulatory rules that encourage innovation while protecting the public interest, and creating mechanisms to share the benefits of AI broadly across society. In this way, the evolution of AI can at once be guided and helped to flourish safely.


Whether we should worry about an apocalyptic future in which artificial intelligence rules the world with an iron grip is a complex question. The truth is that the future of AI encompasses both great hopes and great dangers. AI certainly holds the potential for unprecedented benefits in many domains, but it would be reckless to ignore the concomitant ethical, social and existential challenges it will raise. A grasp of the historical and philosophical origins of AI, coupled with an understanding of where the technology stands today and where it could lead, is a necessary prerequisite for charting a path to an AI-enriched civilisation. A version of this essay was originally published on the Futurism website.

Louie Stark

Louie Stark is a seasoned blog writer and Senior Editor with over ten years of experience. He excels at transforming complex ideas into engaging, reader-friendly content, focusing on architecture, automotive, games and technology. As Senior Editor, Louie leads the editorial team, ensuring high-quality, relevant articles. His collaborative approach and dedication to excellence foster a creative environment, and he mentors junior writers, helping them refine their skills. His passion for storytelling and commitment to quality make him a key asset to the blog and its readers.
