Europol warned about the use of ChatGPT to “facilitate criminal activities”

Amid the frenzy that GPT-4 is causing, Europol warned of a “wide range of criminal use cases” for the artificial intelligence (AI) tool. The agency flagged fraud, identity theft, social engineering, misinformation and cybercrime in particular. It explained that the system “facilitates criminal activities,” and said it conducted “workshops” to study the “dark side” of this type of AI.

“Workshops involving subject matter experts from across the range of Europol expertise identified a wide range of criminal use cases in GPT-3.5. In some cases, harmful GPT-4 responses were even more advanced,” the European Union’s police agency detailed. Along these lines, Europol’s report, ChatGPT: The impact of large language models on law enforcement, found that the evolution of these systems “offers a bleak outlook” on potential criminal exploitation.


Regarding the modus operandi of “potential criminals,” the security body explained that ChatGPT “can significantly speed up” the research process for any particular crime, as it offers information that can then be explored in further steps. “As such, ChatGPT can be used to obtain information on a large number of potential crime areas with no prior knowledge, ranging from how to break into a house to terrorism, cybercrime and sexual abuse,” it stated.

They added: “While all of the information ChatGPT provides is freely available on the Internet, the ability to use the model to provide specific steps through contextual questions makes it significantly easier for malicious actors to better understand the subject and subsequently commit various types of crimes.”

Fraud, misinformation and cybercrime, the “areas of concern” for Europol experts

The agency identified three “areas of concern” regarding the malicious use of systems like ChatGPT: fraud and social engineering, misinformation, and cybercrime. However, it acknowledged that there may be other criminal uses of interest, since no “comprehensive” analysis was carried out. “The use cases identified in the workshops Europol conducted with its experts are by no means exhaustive. Rather, the goal is to give an idea of how diverse and potentially dangerous tools like ChatGPT can be in the hands of malicious actors.”

In the area of fraud and social engineering, Europol found that ChatGPT’s ability to write “very realistic” text makes it a useful tool for identity fraud. Along these lines, the agency explained that these systems can impersonate the speech style of specific people or groups, which would make it easier to carry out scams online. “Previously, basic identity-fraud scams were easier to detect due to obvious grammatical and spelling errors. It is now possible to impersonate an organization or individual in a highly realistic manner,” the report said.

And they added: “With the help of large language models, these types of identity fraud and online fraud can be created faster, more authentically, and on a significantly larger scale.”


They also indicated that the program’s abilities can be exploited for crimes such as terrorism, propaganda and disinformation thanks to its capacity to gather information. “As such, the model can be used to collect more information that may facilitate terrorist activities, such as terrorism financing or anonymous file sharing,” the report stated.

Referring to disinformation, Europol called ChatGPT the “ideal model” for propaganda and fake-news purposes, explaining that users can generate and distribute messages “reflecting a specific narrative with little effort.” “For example, ChatGPT can be used to generate online propaganda on behalf of other actors to promote or defend certain points of view that have been discredited as disinformation or fake news,” the agency said.


In addition to generating messages that seem to come from humans, another ChatGPT capability that set off alarm bells at Europol is its ability to produce code in different programming languages in just minutes. According to the agency, this has a “significant impact” on cybercrime because it allows someone without technical knowledge to attack a person’s system. In this sense, they warned that the system was used to create malicious programs shortly after its release. “For a potential criminal with little technical knowledge, this is an invaluable resource for producing malicious code,” they stated.

“As technology advances and new models become available, it will become increasingly important for law enforcement to stay ahead of these developments to anticipate and prevent abuse, as well as to ensure that potential benefits can be reaped,” the report concluded.
