ESET analyzes criminal schemes using AI

Over the past year, the development of artificial intelligence has accelerated significantly, drawing the attention of cybersecurity professionals and ordinary Internet users alike. While security professionals use artificial intelligence (AI) to improve cybersecurity, criminals use the technology for their own purposes. This year, fraud, social engineering schemes, account takeover scams, disinformation campaigns, and other threats are expected to increase.

Given the growing interest in artificial intelligence, ESET experts have prepared a list of the main schemes attackers can use, along with an overview of the possible threats to privacy and the benefits of using AI.

How attackers use AI technologies

Artificial intelligence is already being used in various types of attacks, and the number of cyberattacks involving it grows every year. It is most dangerous in the context of social engineering, where generative AI can help attackers create convincing text in local languages. Other common schemes in which attackers use artificial intelligence include:

  • Authentication bypass: deepfake technology helps fraudsters impersonate users during selfie- and video-based verification, letting them open new accounts or gain access to existing ones.
  • Business email compromise: artificial intelligence helps trick victims into transferring funds to an account under the fraudster’s control. Audio and video deepfakes can also be used to impersonate CEOs and other senior executives during phone calls and online meetings.
  • Impersonation fraud: programs based on large language models (LLMs) open up new opportunities for fraudsters. Using data obtained from hacked or publicly available social media accounts, fraudsters can impersonate victims in virtual kidnapping and other scams designed to deceive their friends and family.
  • Influencer scams: in 2025, fraudsters are expected to use AI to create fake or duplicate social media accounts of celebrities, influencers, and other famous people. In particular, attackers may publish deepfake videos to lure followers into sharing personal information and money.
  • Disinformation: cybercriminals will use artificial intelligence to create fake content and thereby lure gullible social media users into following fake accounts. These users can later be turned into tools for other operations.
  • Password cracking: AI-based tools can crack credentials in seconds, opening access to corporate networks and data as well as user accounts.

What are the threats to privacy?

In addition to serving as a tool in cybercriminal attacks, AI can also increase the risk of data breaches. Programs based on large language models require huge amounts of text, images, and video for training, and some of that data often turns out to be sensitive: biometric, medical, or financial information. In addition, some social networks and other companies may change their terms of service so that user data can be used to train models.

Handing information to an artificial intelligence model is also risky for the user: the system may be hacked, or the data may be passed to third parties through applications built on top of the LLM. In particular, corporate users may inadvertently share confidential work-related information in artificial intelligence prompts. According to one survey, a fifth of British companies have accidentally exposed potentially confidential data through employees’ use of AI tools.
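
Such accidental disclosure can be reduced by filtering prompts before they leave the organization. Below is a minimal, hypothetical Python sketch of such a pre-submission redaction step; the patterns, names, and example text are illustrative assumptions rather than any specific vendor’s method, and a real data-loss-prevention tool would detect far more than a few regular expressions.

```python
import re

# Hypothetical patterns; real DLP tooling uses much more robust detection.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely-sensitive substrings before the prompt reaches an external model."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarize this deal: contact jane.doe@example.com, card 4111 1111 1111 1111."
    print(redact(raw))
    # Summarize this deal: contact [REDACTED EMAIL], card [REDACTED CARD].
```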

What are the benefits of artificial intelligence?

Artificial intelligence will play an increasingly important role in the work of cybersecurity teams this year as it is incorporated into new products and services. In particular, it will help to:

  • generate data for training users, security teams, and even AI-based security tools;
  • summarize long and complex threat reports for analysts and facilitate faster decision-making on incidents;
  • increase the productivity of security professionals by automating workflows for investigation and remediation;
  • scan large amounts of data for signs of suspicious behavior (a small sketch of this idea follows the list);
  • improve the skills of IT teams with the help of the “co-pilot” function built into various products to reduce the likelihood of incorrect settings.
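
As an illustration of the data-scanning point above, here is a deliberately tiny, hypothetical Python sketch that flags accounts with statistically unusual failed-login counts. The data, threshold, and function names are invented for this example; production tools replace a hand-tuned z-score with learned models, but the principle of surfacing outliers for an analyst to review is the same.

```python
from statistics import mean, stdev

# Toy input: failed login attempts per account over the last hour (invented data).
failed_logins = {
    "alice": 1, "bob": 0, "carol": 2, "dave": 1,
    "eve": 1, "mallory": 48, "trent": 0, "peggy": 2,
}

def flag_outliers(counts: dict[str, int], threshold: float = 2.0) -> list[str]:
    """Flag accounts whose count lies more than `threshold` standard deviations above the mean."""
    values = list(counts.values())
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:  # every account behaves identically, nothing stands out
        return []
    return [acct for acct, n in counts.items() if (n - mu) / sigma > threshold]

if __name__ == "__main__":
    print("Suspicious accounts:", flag_outliers(failed_logins))  # ['mallory']
```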

However, IT professionals need to understand the limitations of artificial intelligence and the importance of human expertise in decision-making. In 2025, it will be important to balance human judgment and machine work to reduce the risk of errors and other potentially negative consequences. AI technologies need to be combined with other tools and techniques to achieve optimal results.

In 2025, artificial intelligence will radically change the way we interact with technology. AI-powered tools offer enormous potential benefits for businesses and individual users, but they also create new risks that need to be managed. Governments, businesses, and users must therefore work together to harness the potential of AI while mitigating the risks.
