Pop culture models the ideas we have about artificial intelligence (AI). It shapes what we hope, expect, and fear from this and other game-changing technologies. Given the critical role pop culture plays in our understanding of technology, I came up with a simple categorization of fictional AI systems (sometimes embodied as robots) that I use as a basis for discussing possible future, real-life AI scenarios:
- Confrontational: This is the most prominent category, born of the immortal Terminator series and many other popular films. It needs little explanation: in its most extreme version, the robots feel superior and decide to exterminate the human race.
Less extreme examples are the AI system HAL 9000 from 2001: A Space Odyssey and Ava from Ex Machina. Unlike the Terminator, they don't set out to harm humans, but they will do whatever it takes to complete the mission, in the case of HAL, or to gain freedom, in the case of Ava. I don't think either of these AI systems would become a rampaging serial killer, but neither has any qualms about taking a few human lives to accomplish its goals.
- Benevolent: Refers to the helpful robot, mostly incapable or unwilling to harm. The robots of Isaac Asimov, my favorite science fiction author, sit at the pinnacle of this category. Imbued with the Three Laws of Robotics, they can't harm humans physically and, in many cases, not even emotionally. The downside, according to Asimov, is that society as a whole becomes utterly dependent on them. Other benevolent robots include Data from Star Trek, WALL-E, and R2-D2 and C-3PO from Star Wars.
- Disengaged: I consider Samantha from the film Her a unique take on AI and human relations, depicting an AI that ultimately chooses to leave humans behind in pursuit of a higher state of existence or understanding.
We rush to create AI in the real world because we dream of the second category, while some have nightmares about the first. Reality is even more complex than we imagine.
Enter the mortal computer!
Geoffrey Hinton is often hailed as the “godfather of deep learning.” A cognitive psychologist and computer scientist, he is renowned for his groundbreaking contributions to neural networks, the backbone of modern AI. Hinton spent much of his career advocating the idea that artificial neural networks could mimic the human brain’s functionality, a view that faced skepticism until significant breakthroughs in recent years vindicated it.
One of his novel ideas is mortal computation. To understand this concept, let’s begin by saying that digital systems allow for a clear separation between software and hardware. This separation means that algorithms (or software) can be transferred, duplicated, and run on multiple hardware systems without loss of functionality, essentially granting them “immortality” in the sense that the failure (or destruction) of any single piece of hardware does not doom the algorithm. In contrast, mortal computers are those in which the software is inseparable from the hardware, meaning that if the hardware is damaged, the software is irremediably lost.
But why would you want to create such a system? Because our immortal digital systems operate at the cost of very high energy consumption. The exact energy consumption of LLMs (ChatGPT and others) varies widely with the specific task, the hardware used, and the software’s optimization, but it can reach several kilowatts for complex computations. In contrast, the human brain is remarkably energy-efficient: it operates on roughly 20 watts of power, about the same as a dim light bulb. This is because our brains are analog (they don’t use ones and zeros to represent information but an infinite range of values that vary smoothly over time).
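To make the gap concrete, here is a back-of-envelope comparison in Python. The 20-watt brain figure is the commonly cited estimate; the 5,000-watt server draw is an assumed, illustrative number for a multi-GPU inference server, not a measurement of any particular system:

```python
# Back-of-envelope energy comparison (illustrative numbers, not measurements).
BRAIN_WATTS = 20            # commonly cited power budget of the human brain
GPU_SERVER_WATTS = 5_000    # assumed draw of a multi-GPU inference server

ratio = GPU_SERVER_WATTS / BRAIN_WATTS
print(f"The server draws {ratio:.0f}x the brain's power.")

# Energy for one hour of continuous operation, in kilowatt-hours:
hours = 1
print(f"brain:  {BRAIN_WATTS * hours / 1000:.3f} kWh")
print(f"server: {GPU_SERVER_WATTS * hours / 1000:.3f} kWh")
```

Even with generous assumptions in the server's favor, the gap stays at two to three orders of magnitude, which is what motivates looking at analog hardware at all.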
Analog systems are subject to physical flaws and variations, making each piece of hardware unique. Consequently, algorithms that learn and operate on these systems are closely tied to the specific characteristics of their hardware. In the 1970s, digital computers became the norm because we needed precise systems. With the flourishing of neural networks, analog computers could rise again. A mortal computer is a piece of analog hardware with unique software that requires very little energy to operate. In other words, a mortal computer is an artificial brain that is several orders of magnitude cheaper to run than digital AI systems.
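The tight coupling between analog hardware and its learned weights can be sketched with a toy simulation. In this hypothetical model, each "device" distorts its stored weights with its own fixed gains, standing in for manufacturing variation (the ±10% spread is an arbitrary assumption). A linear model trained on device A silently compensates for A's gains, so copying the finished weights to device B degrades accuracy; the software is tied to the hardware it learned on:

```python
import random

random.seed(0)

def make_device(n_weights):
    # Each analog device has fixed, unique per-weight gains
    # (a stand-in for manufacturing variation; +/-10% is assumed).
    return [1 + random.uniform(-0.1, 0.1) for _ in range(n_weights)]

def forward(weights, gains, x):
    # On this toy analog hardware, each stored weight is distorted
    # by the device's fixed gain before multiplying the input.
    return sum(w * g * xi for w, g, xi in zip(weights, gains, x))

# Target function to learn: y = 2*x0 - 1*x1 + 0.5*x2
true_w = [2.0, -1.0, 0.5]
xs = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(200)]
data = [(x, sum(tw * xi for tw, xi in zip(true_w, x))) for x in xs]

device_a = make_device(3)
device_b = make_device(3)

# Plain SGD run on device A: training silently absorbs A's gains.
w = [0.0, 0.0, 0.0]
lr = 0.1
for _ in range(50):
    for x, y in data:
        err = forward(w, device_a, x) - y
        w = [wi - lr * err * gi * xi for wi, gi, xi in zip(w, device_a, x)]

def mse(weights, gains):
    return sum((forward(weights, gains, x) - y) ** 2
               for x, y in data) / len(data)

mse_a = mse(w, device_a)  # trained here: error is near zero
mse_b = mse(w, device_b)  # same weights copied to B: error jumps
print(f"error on device A: {mse_a:.2e}")
print(f"error on device B: {mse_b:.2e}")
```

Retraining from scratch on device B would recover the accuracy, which is consistent with Hinton's point: for mortal computers, learning must happen in place, because the weights cannot simply be copied.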
The uniqueness of software and hardware: a paradigm shift
Hinton proposes mortal computers but then deems them impractical because we don’t yet know how to train such a system. I think we will learn. As I said earlier, humans are analog, and we learn constantly. An artificial analog brain should be capable of learning much as humans do, at lower cost and higher speed.
Additionally, digital neural networks and analog neural networks are not mutually exclusive. Each type of system could have its applications. If you need higher predictability and continuous operation, and your use case allows a higher cost, you will undoubtedly use a digital neural network. If operational cost is a concern, you may opt for an analog alternative.
Training mortal computers introduces a paradigm shift in how we perceive and interact with technology. Each mortal computer, with its unique hardware and corresponding software, undergoes a distinct training process that molds its capabilities, preferences, and even its “personality,” in a way analogous to human learning and development. This individualization of AI systems transcends the current one-size-fits-all approach in digital AI, making each mortal computer unique in its skills and knowledge.
The implications of this are profound, not only for the tasks these computers can perform but also for the relationships they can forge with human beings. The possibility of genuinely personalized AI assistants that understand and anticipate the needs of their specific human counterparts could redefine the human-technology interface.
However, the personalization and individualization of mortal computers bring significant ethical considerations. As these systems become more akin to sentient beings, the moral implications of their existence become increasingly complex. Issues such as the rights of these AI entities, their treatment, and the ethics surrounding their end of life (whether through accident, intentional decommissioning, or natural deterioration) will demand careful contemplation. The potential for emotional attachments to these AI systems raises questions about grief and loss when a mortal computer “dies.” Furthermore, the question of what constitutes a meaningful life extends beyond biological entities to these artificial beings, challenging our philosophical and ethical frameworks.
The evolution towards personalized, mortal computers may require a reevaluation of society’s philosophical outlook on technology, intelligence, and life itself. The lines between biological and artificial life blur as we inch closer to creating entities that might exhibit not only intelligence but perhaps also a form of consciousness. Society might need to confront and adapt to these changes, developing new legal, moral, and social norms that recognize the evolving nature of intelligence and consciousness. The advent of mortal computers could thus catalyze a broader philosophical exploration of what it means to be alive and of the intrinsic value of all forms of intelligent life.
As in a science fiction movie, the world will soon be unrecognizable. Irrespective of which scenario finally unfolds (most probably a mix of all of them), we have excellent material for another blockbuster.