For many of us, Artificial Intelligence (AI) has been working away in the background of our digital tasks for years.
It drives internet searching, satellite navigation, customer service chatbots, automation and more. It offers a lot of potential for businesses and is increasingly a focus for big tech companies looking to expand their offerings (and improve their bottom line!). Unfortunately, while the technology continues to develop and more businesses work out how best to use it, cybercriminals are doing exactly the same.
Without a doubt, AI and its potential to help manage rising cyber threats against organisations is a topic of great interest to business leaders and cyber specialists. According to a recent global IEEE survey, AI, including predictive and generative AI, machine learning (ML) and natural language processing (NLP), ranked highest (65 per cent) as the area of technology that will be most important in 2024. The survey also found that among the top potential applications for AI this year, 54 per cent of technology leaders selected real-time cybersecurity vulnerability identification and attack prevention as their top choice to address the rising threats of phishing, ransomware and other attacks.
In January this year, the Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC) released a resource on engaging with AI, giving organisations guidance on how to use AI systems securely. Some common AI-related threats are outlined below; they are presented not to dissuade AI use, but to help all AI stakeholders engage with AI securely.
Historically, cybercriminals’ success has mostly depended on human error. According to the U.S. Cybersecurity and Infrastructure Security Agency, 90 per cent of cyber attacks begin with phishing, most commonly via email or SMS. More advanced attacks also bombard victims with push notifications, in so-called MFA fatigue attacks, hoping one will be approved and multi-factor authentication bypassed. Many recipients will find it easy enough to identify these attacks as scams: an unusual domain name in the sender’s email address, spelling or design errors in the body of the email, or a lack of personalisation. But criminals know that all it takes is one moment of inattention for users to unknowingly give away their account credentials.
Once an account is breached, an attacker can make fraudulent purchases, sell personal information to the highest bidder or demand payment in return for giving back control of a victim’s online life. If a business account is hacked, criminals may steal valuable intellectual property, leak private data or deploy ransomware that locks up vital systems until a substantial payment is made. For governments and other operators of critical infrastructure, the consequences of a cyber attack can be catastrophic.
In recent years, phishing attacks have generated astounding revenues, all without criminals having any help from artificial intelligence. It’s simply impossible for targets to remain permanently alert, no matter how well educated they are about the risks. For this reason, strong phishing-resistant multi-factor authentication has always been necessary to safeguard account access.
Generative AI’s frightening power
Large language models, a form of generative AI, can instantly craft flawless messages, each tailored to a specific individual. Previously, a targeted ‘spear phishing’ attack required laborious research and manual effort, but gathering vast amounts of personal information from publicly visible posts on social media and review sites can now be automated. A typical AI-assisted phishing attack might produce an email claiming to be from a business where the target has recently shopped, requesting a one-off payment and the credit card details to process it. Referencing specific details and flawlessly reproducing the expected tone and writing style, the phishing email is virtually impossible to identify as a hoax.
Even more concerning is AI’s ability to clone voices and likenesses from audio clips, video clips or images found online. Combined with tools that spoof Caller ID, these clones can fool targets with calls that appear to come from family, friends or co-workers seeking urgent assistance. Such a call may be used in tandem with a phishing email that leads the recipient to a fake sign-in page. Cybercriminals are already using these methods with great success: many parents report being deceived by voice clones of children supposedly in distress, or even kidnapped. As AI develops and mass adoption rises, this kind of fraud will only increase.
The need for stronger protection against phishing
This is the moment to rethink the way we prove our identities online. Education is insufficient: cybercriminals will find a way to convince even the most sceptical target. Passwords are woefully inadequate, as they can be so easily acquired through phishing. And while multi-factor authentication is essential, not all methods are equal. Any method that can be shared or approved remotely, like one-time passwords or push apps, will always be at risk from modern phishing attacks, and AI-generated clones can defeat methods based on facial or voice recognition.
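To see why a one-time password offers no protection once it is phished, it helps to look at how such codes are generated. A standard TOTP code (RFC 6238) is just a few digits derived from a shared secret and the current time; nothing in it is tied to the site the user is visiting. A minimal Python sketch, using a made-up Base32 secret for illustration:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Generate an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // period       # current 30-second time step
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Hypothetical shared secret, for illustration only.
print(totp("JBSWY3DPEHPK3PXP"))
```

A fake sign-in page can simply ask the victim for those six digits and relay them to the real service within the 30-second window, which is why OTPs, like push approvals, remain phishable.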
There is, though, a solution that does protect against phishing attacks. Hardware security keys, like YubiKeys, require your physical presence to authenticate and are phishing-resistant. They store passkeys, which can be used to log in to popular apps and services quickly and securely. The hardware-bound passkeys on a YubiKey cannot be copied or extracted, and authentication only succeeds on the legitimate site a passkey was registered with, so account credentials are never handed over to malicious lookalike websites.
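By contrast, the origin binding behind passkeys can be illustrated with a simplified sketch of one check a relying party performs during a WebAuthn login. The function and variable names here are hypothetical, and a production deployment should use a maintained WebAuthn library; the point is that the browser, not the user, writes the origin into the signed clientDataJSON, so a response produced on a lookalike domain fails verification.

```python
import base64
import json

def verify_client_data(client_data_b64: str,
                       expected_origin: str,
                       expected_challenge: str) -> dict:
    """Verify the browser-supplied clientDataJSON from a WebAuthn assertion.

    Simplified sketch: a real relying party also verifies the
    authenticator data and the signature over it using the stored
    public key.
    """
    # clientDataJSON arrives base64url-encoded; restore stripped padding.
    padded = client_data_b64 + "=" * (-len(client_data_b64) % 4)
    client_data = json.loads(base64.urlsafe_b64decode(padded))

    # The browser records the origin itself; a user on a phishing page
    # cannot make it claim to be the legitimate site.
    if client_data["origin"] != expected_origin:
        raise ValueError(f"origin mismatch: got {client_data['origin']!r}")

    # The server-issued challenge ties the response to this login attempt,
    # preventing replay of a previously captured response.
    if client_data["challenge"] != expected_challenge:
        raise ValueError("challenge mismatch: possible replay")

    return client_data
```

In practice a phishing site never even gets this far: the browser scopes each passkey to the web origin it was registered on and will refuse to exercise it anywhere else.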
The transformative power of generative AI is only in the early stages of being realised, yet its effects on online security are already clear. Protecting ourselves online must be an urgent priority, or the costs of cybercrime will only continue to rise. As adoption of these tools grows, it will be important to focus on the key ways to mitigate the associated risks. This underlines the importance of strong phishing-resistant multi-factor authentication (MFA) and identity-based security, so that businesses and individuals can stay one step ahead of the cyber threat and stop cybercriminals from using AI to turn attacks into an art form.