The Future of AI: Two Doom Scenarios, Three Requirements

William Visterin
12 December 2024

At Cybersec Netherlands, Bart Preneel, cryptographer and professor, delivered a presentation on AI and security. He outlined two doom scenarios and three essential requirements to safeguard the future of AI. “Tech companies would do well not to develop certain AI applications,” he stated.


The Evolution of Artificial Intelligence

Artificial intelligence dates back to the mid-20th century. “But in the 1970s, we experienced an ‘AI winter,’ only to see breakthroughs in the 1990s, such as the chess computer Deep Blue,” explained Bart Preneel, a professor at Belgium’s KU Leuven and head of the renowned COSIC research group (Computer Security & Industrial Cryptography).

Cats and Dogs
Another pivotal moment in AI—somewhat ironically—came twelve years ago when Google managed to teach AI to differentiate between cats and dogs. Fast forward to 2024, and the AI hype is in full swing. “There’s even a Gartner AI Hype Cycle, essentially the hype about the hype,” Preneel remarked.

Globally, there are 18 billion IoT devices and 10 billion computers. Taking proactive action across such a vast landscape is challenging. “This is why the focus has shifted from prevention to detection. By applying AI to incident datasets, we can learn how bad actors behave and detect them faster in the future,” he explained.

AI has proven highly effective at improving detection in cybersecurity. “This includes the detection of malware, intrusions, vulnerabilities, fraud, phishing, and data loss, and the analysis of so-called side channels,” Preneel elaborated. However, he stressed that reliability is essential, particularly keeping false positives and false negatives in check. At the same time, cybersecurity contributes to AI by safeguarding a model’s inputs, outputs, and even the model itself.
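
To make the detection idea concrete, below is a minimal sketch, not from the talk, of unsupervised anomaly detection over incident-style data using scikit-learn's IsolationForest. The feature names (bytes_sent, session_seconds, failed_logins) and all values are illustrative assumptions; the contamination parameter is the practical lever for trading false positives against false negatives, the reliability concern Preneel raises.

    # Minimal sketch (assumptions, not from Preneel's talk): unsupervised
    # anomaly detection on synthetic incident data.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)

    # Synthetic "normal" events: [bytes_sent, session_seconds, failed_logins]
    normal = rng.normal(loc=[500.0, 30.0, 0.2], scale=[150.0, 10.0, 0.5], size=(1000, 3))
    # A handful of attack-like events: large transfers, long sessions, many failed logins
    attacks = rng.normal(loc=[5000.0, 300.0, 8.0], scale=[500.0, 60.0, 2.0], size=(10, 3))
    X = np.vstack([normal, attacks])

    # 'contamination' encodes the expected share of anomalies; tuning it trades
    # false positives against false negatives.
    model = IsolationForest(contamination=0.01, random_state=0).fit(X)
    labels = model.predict(X)  # -1 = flagged as anomalous, +1 = considered normal

    print(f"Flagged {int((labels == -1).sum())} of {len(X)} events as suspicious")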


Two Doom Scenarios

The darker side of AI raises concerns about the erosion of privacy and fairness. Preneel described two frequently discussed doom scenarios:

  1. AI Dystopia
    This scenario envisions catastrophic consequences for humanity, including surveillance states, autonomous killer drones, and the loss of human autonomy.
  2. Paperclipalypse
    This term refers to a thought experiment in which an AI, tasked with something as seemingly harmless as making paperclips, causes an apocalypse by devoting ever more resources to the task; in doing so, it learns to resist efforts to shut it down. “This reflects humanity’s fear of losing control over AI,” Preneel said. “Personally, I believe the likelihood of AI dystopia is higher than that of the paperclipalypse.”

Three Requirements

To manage AI responsibly in the future, Preneel identified three key requirements:

  1. Legislation
    “Having an AI Act is a positive step, even if only to raise awareness. However, it is not sufficient, as there are still many gaps and pitfalls,” he noted.
  2. Ethical Awareness
    “Ethics must be integrated into education. Students need to learn this during their studies, but this is often lacking today,” Preneel emphasized.
  3. Technological Responsibility
    “AI is evolving rapidly, and companies are often driven to be the fastest. But tech companies must recognize that some applications should not be developed or released. The ‘move fast and break things’ mentality is not suitable for AI. If you cannot accurately predict the consequences, it’s better not to deploy the technology,” Preneel argued.

He also placed this in a historical context. “In the last century, physicists only began reflecting on the impact of their science after the atomic bomb was dropped. That is a risk we cannot afford to take with AI.”

Source: Computable.nl

 
