Why We Must Remain Vigilant Despite AI Legislation
BLOG – Both privately and professionally, we frequently make use of the possibilities that artificial intelligence (AI) offers. However, developments in this field are advancing so rapidly that it is difficult to determine what is real, what is misleading, and what is outright fake. To better safeguard citizens' digital rights, laws and regulations around AI are being developed. Nevertheless, this will not eliminate the dangers associated with AI.
So far, AI has brought us several beneficial developments. Think of virtual assistants such as Siri, Google Now, and Copilot, which provide useful information in response to spoken commands. Another example is AI tools that help SOC analysts handle the growing stream of alerts more quickly and effectively. AI can also play a role in detecting online fraud: by analyzing large amounts of data, an unusual (read: suspicious) pattern can be detected quickly.
Cybercriminals also see the advantages of AI and have added this technology to their toolkit. They bypass the protections in ChatGPT with tools such as WormGPT and FraudGPT, which are available for purchase on Telegram and the dark web. These AI tools make it easy to carry out cyberattacks: FraudGPT can be used to write texts for phishing emails or malicious code to gain access to company networks.
Political Agreement
Since the workings of AI tools are not always clear to users, the European Parliament and the Council of the EU reached a political agreement on an AI law in December 2023. The principle is that AI systems offered in the EU must be safe, transparent, and traceable. The AI law was formulated with a future-proof approach, allowing the rules to be adjusted to technological change. On May 21, 2024, the European Council approved the AI Act. AI systems offered in the EU must now be not only safe and transparent but also non-discriminatory and environmentally friendly. In addition, under the new legislation AI systems must always remain under human supervision to prevent harmful consequences.
However, the AI law is not the only legislation aimed at ensuring digital security. To strengthen the digital and economic resilience of the European member states, an amended Network and Information Security Directive (NIS2) was adopted at the end of 2022. NIS2 requires companies and institutions that provide essential services to improve their network and information security in order to minimize the chance of successful cyberattacks and data breaches. Next year, the directive will be transposed into the Dutch Cybersecurity Act, which will replace the current Network and Information Systems Security Act (Wbni).
This new law tightens the current Wbni and strengthens supervision and enforcement. Significant fines are also foreseen, and even members of governing bodies can be held personally liable. The Cybersecurity Act is still under development, and suggestions for the law could be submitted via an internet consultation until July 1 of this year. The aim is to involve citizens, businesses, and institutions in developing this law to make it better and more practical. Given all the AI developments (read: threats from cybercriminals), input on AI is expected, and the law will likely take this into account.
Legislation or not, it is well known that criminals disregard such laws. This also applies to cybercriminals concerning the new AI Act and Cybersecurity Act. Malicious AI tools remain available on the dark web, and cybercriminals continue to look for more ways to use AI to achieve their goals. This creates an uneven playing field: security specialists must stop all attacks, while a cybercriminal only needs one successful attempt.
Remaining Vigilant
It is good that we, as a society, remain alert to all the risks and dangers surrounding AI and develop laws and regulations to keep them manageable. This means that legitimate AI service providers must adhere to the rules. For users of these AI services, this is good news: they know they can trust the service and that their data and privacy are handled carefully. However, the threat from cybercriminals will not disappear, which means there is work to be done for the security specialists tasked with recognizing and stopping all attacks. This cat-and-mouse game will not end anytime soon. It is therefore wise to continue relying on the expertise and intelligent AI tools of security specialists in the coming years to stay ahead in this game.
Source: Lex Borger for Computable.nl