
6 AI-related cybersecurity scams your end users should be aware of

AI tools like ChatGPT and Bard are becoming more popular, and cyberattacks that utilize the new technology are increasing as well. Over half (56%) of AI-driven cyberattacks occur in the access and penetration phases of an attack. This means the majority attempt to exploit the human element, through scams like those listed below, so hackers can gain initial access to a system before deepening their control.



These six AI-chatbot-related cybersecurity scams have been spotted in the threat landscape since the release of ChatGPT. Keep in mind that, just like technology, hacker tactics are always evolving: while these attack types may be common today, new attacks are always on the horizon.


Inform your users of common scams and encourage them to stay consistently vigilant about when and where they click, download or enter personal information online. They should exercise increased caution with any message, post or download claiming to provide special access to AI tools, or capabilities beyond the scope of the tools they are familiar with.


6 AI chatbot cybersecurity attacks your end users should be aware of:


1. Browser extensions, add-ons and apps

Apps impersonating ChatGPT, along with add-ons and extensions claiming to give users quick access to the tool, have been found in both the Chrome and Firefox extension stores. While many malicious extensions and apps have been removed, it’s best to be cautious when downloading chatbot-related apps to your device. The malicious extensions are commonly promoted as shortcuts to ChatGPT, as paid access to tools that are already free, or as access to the free official phone app.

[Diagram: a seemingly benign app impersonates ChatGPT and redirects users who click its link to a malicious application. Image credit: thehackernews.com]

2. AI-generated phishing attacks

Misspellings and grammatical errors have long been a consistent indicator of phishing messages, and chatbots could eliminate that telltale sign from some attacks. While not all attackers are using these tools, as their use becomes more popular it will likely get tougher to identify a phishing message. Bad actors have also been found using AI chatbots to personalize business email compromise (BEC) and spear-phishing messages to specific industries, job types and companies.


3. Spoofed sites

Alongside phishing and other social engineering attacks, hackers are exploiting the excitement around chatbots to create malicious websites impersonating OpenAI (the creator of ChatGPT) and other AI companies. The spoofed sites attempt to mirror the real website but include a link to download malicious software or a form through which hackers can harvest the private data users enter.


4. Deepfakes of human likenesses and voices

Voice and even video calls can be impersonated using AI deepfakes. Bad actors use short audio clips to create voice clones for use in attacks. These attacks often resemble other phone-related scams: a friend, relative or business colleague is impersonated and asks the victim for money, often through a wire transfer or cryptocurrency. To avoid these attacks, always contact the supposed caller through another channel and exercise caution before sending anyone money based only on a phone call. Agreeing in advance on a secret phrase or a private question and answer is another way to prepare for these attacks; just make sure the secret phrases or responses are never sent by phone or email, where those safeguards could be stolen by hackers.


[Diagram: telltale signs of a deepfake face, including inconsistent eye blinking, mismatched earrings and features that lack definition. Image credit: U.S. Government Accountability Office]

5. Social media

Beyond crafting realistic-sounding social media content for bots and other nefarious purposes, hackers have been using social media to spread malicious links and downloads related to popular chatbots. Links promising free access to the paid version of ChatGPT have circulated widely on Twitter and other social media apps, as have malicious links to plugins, extensions and other related downloads (as mentioned above). Some malicious downloads have even given hackers access to a victim’s Facebook account, which the attacker then uses to create bots and run malvertising campaigns.


6. Malware

Bad actors have been selling access to a “dark” version of GPT-3 (a variant of the model used in ChatGPT) on Telegram, a messaging platform popular with hackers. This access lets them use the chatbot outside of its intended restrictions, directing the bot to perform malicious activities like writing malware scripts that can be used in attacks.


Now that you’re aware of the many ways attackers are using AI to target users, you can take part in educating your clients and end users on these risks and help them understand how to better avoid these types of attacks.


Use these strategies to train your end users to recognize and avoid these attacks:


Search, don’t click: Teach users to be cautious with any link they receive in an email or find online. Instead of clicking a link they’ve been sent, users should search for the correct website to find a legitimate source. In addition, remind users to download apps from official app stores or websites instead of through a link on social media or another untrustworthy source.


Set up DNS filtering and spoof detection: Ensure that your clients have DNS filtering set up to prevent users from accessing spoofed websites, and that spoof detection (such as SPF, DKIM and DMARC checks) is turned on within their email solutions.
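As a quick illustration of the email side of this, the sketch below checks whether a domain publishes SPF and DMARC records, two standard DNS-based anti-spoofing mechanisms. It's a minimal example, assuming the third-party dnspython library is installed; "example.com" is a placeholder for a client domain, and a real audit would also validate record contents and DKIM selectors.

```python
# Minimal sketch: check whether a domain publishes SPF and DMARC records.
# Assumes the third-party dnspython package is installed (pip install dnspython).
import dns.resolver


def get_txt_records(name: str) -> list[str]:
    """Return all TXT record strings for a DNS name, or [] if none exist."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    # A TXT record may be split into multiple 255-byte chunks; join them.
    return [b"".join(rdata.strings).decode() for rdata in answers]


def check_email_spoof_protection(domain: str) -> None:
    # SPF lives in a TXT record on the domain itself, starting with "v=spf1".
    spf = [r for r in get_txt_records(domain) if r.lower().startswith("v=spf1")]
    # DMARC lives in a TXT record at _dmarc.<domain>, starting with "v=DMARC1".
    dmarc = [r for r in get_txt_records(f"_dmarc.{domain}")
             if r.lower().startswith("v=dmarc1")]

    print(f"{domain}: SPF {'found' if spf else 'MISSING'}, "
          f"DMARC {'found' if dmarc else 'MISSING'}")


if __name__ == "__main__":
    check_email_spoof_protection("example.com")  # placeholder client domain
```

A domain missing either record gives receiving mail servers little to key on when deciding whether an inbound message is spoofed, so a check like this is a useful first step when onboarding a client.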


Fight deepfakes: Consider having your clients and their users set up a unique internal process to combat deepfake attacks. This could be a private passphrase or another method that can be used to determine whether a caller or other contact is an AI tool or a real person. Make sure they share this information verbally and never email or text the secret passphrase.


Implement end user training: If your MSP isn’t already enrolling your clients and users in ongoing security awareness training, it’s time. Continuous training can reduce the likelihood that an end user will click on a phishing message by 60%, and HacWare’s AI-driven phishing simulations will keep your users up to date on modern attack types. Our engaging micro courses take less than three minutes to complete and cover the cybersecurity topics your end users need to know about, including the principle of least privilege, insider threats, crypto-airdrop scams and more.

 

Learn more about HacWare: MSP partners can decrease the likelihood their end users will click on a phishing email by 60%. Let us educate your end users with automated, AI-driven phishing simulations and micro-trainings under three minutes long to keep user attention and improve learning outcomes.


Learn more about our partner program and how we can support your MSP's growth!
