AI Impact on Small Business Cybersecurity


The Rapid Rise of AI & LLMs

Over the last few months, there have been countless headlines related to AI. Major tech companies are promising to integrate AI into their core product offerings, while advances in large language model (LLM) chatbots generate headlines of their own as the chatbots pass medical school entrance exams, score highly on the bar exam, and get banned from universities.

The recent explosion of AI developments has pushed tech stocks into what some are even calling the ‘AI bubble’ - reminiscent of the dot-com bubble of the early 2000s. At the time of writing, NVIDIA is up 200% YTD, with several tech companies in tow (Palantir 140%, Apple 42%, Microsoft 34%), as promises are made around the coming AI revolution.

One thing is clear: there is a frenzy around AI, and it is important to understand how it impacts cybersecurity. With the explosion of new technologies, the barrier to entry for threat actors is lower, and organizations, especially small to medium sized ones, are likely to see an increase in attacks.

AI Growth Outpacing Ethics, Cybersecurity Concerns Grow

OpenAI, the creators of ChatGPT, were quick to put safeguards in place to prevent misuse of ChatGPT for nefarious purposes. Additionally, the White House presented a plan charting a path toward ethical AI, with $140 million pledged toward ethical AI research.

Many large tech companies, including all of those named above, are cooperating with government efforts to facilitate the safe development of AI systems. However, not everyone is interested in following the rule book.

It did not take long for a malicious generative AI to be developed. WormGPT allows unrestricted use of large language models to aid in the production of malware, the exploitation of vulnerabilities, and the generation of phishing emails. WormGPT’s creators claim to have trained the model on data specifically chosen to improve its malicious capabilities and aid in cyber-criminal endeavors.

The Telegram channel supporting the blackhat GPT has quickly passed 5,000 subscribers and continues to grow as threat actors turn to AI to assist in their campaigns.

Alongside WormGPT, there have also been concerns around ‘Deep Fake’ technology and the impacts of AI on social engineering campaigns. Threat actors could leverage such tools to impersonate others; a tactic seen recently when AI was leveraged to scare a mother into thinking her daughter had been kidnapped.

AI will certainly be used in the future to generate phishing attacks, impersonate employees over the phone, and help craft new malware.

Securing Organizations During The Rise Of AI

Fortunately, there are ways to protect yourself and your organization from threat actors leveraging AI. The tactics, techniques, and procedures used in an AI-assisted attack are generally the same as those in traditional ones. A threat actor leveraging AI is using automation or consulting large language models to help craft their attack.

Below are some key areas of concern that AI-assisted attacks bring to the threat landscape and ways that an organization can become more resilient to threat actors leveraging GPT models.

Frequency, Not Complexity

As of now, AI has not produced any revolutionary zero-day vulnerabilities or written advanced malware that current defensive tools cannot detect. AI does, however, make offensive security more accessible to non-technical would-be threat actors. As it becomes easier to use offensive security tools to carry out malicious attacks, organizations are likely to see more attacks taking place.

With tools like WormGPT, the barrier to entry is lower, allowing someone who is not particularly technical to begin writing payloads and exploiting vulnerable systems. Simply asking ‘How do I scan X system for a vulnerability?’ or ‘Write me a payload that I can use in a phishing attack’ can save attackers hours of writing code or learning the fundamentals of how a system works, instead handing them the tools needed to start hacking.

As such, any assets on the internet will be scanned more often, vulnerabilities will be probed more frequently, and more of the reconnaissance and early exploitation phases will be automated. Since smaller organizations are often unaware of their external attack surface, they stand to be impacted the most as automated scanning and testing increases.

With a rise in attack frequency expected, organizations should:

  1. Review external footprint to understand what assets are exposed to the internet (see the sketch after this list)

  2. Ensure assets are patched - especially those publicly facing (see #1)

  3. Ensure assets have some form of endpoint protection

  4. Leverage MFA and strong password policies

  5. Train employees on social engineering & AI safety
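
As a rough illustration of item #1, below is a minimal Python sketch that checks a short list of commonly scanned TCP ports against hypothetical hostnames (the example.com names are placeholders for your own inventory). A dedicated attack-surface management tool or a scanner such as nmap would be far more thorough in practice; this only shows the idea of verifying what is reachable from the outside.

```python
import socket

# Hypothetical inventory of externally facing hosts; replace with your own.
HOSTS = ["www.example.com", "mail.example.com", "vpn.example.com"]

# TCP ports commonly probed by automated scanners.
COMMON_PORTS = [21, 22, 25, 80, 443, 445, 3389, 8080]

def open_ports(host):
    """Return the subset of COMMON_PORTS that accept a TCP connection."""
    found = []
    for port in COMMON_PORTS:
        try:
            # A short timeout keeps the sweep quick.
            with socket.create_connection((host, port), timeout=2):
                found.append(port)
        except OSError:
            pass  # closed, filtered, or unreachable
    return found

for host in HOSTS:
    print(host, "->", open_ports(host))
```

Anything that shows up open and was not expected (RDP on 3389, for example) is exactly what automated attacker tooling will find first.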

Social Engineering

With how big of a part social media plays in our day to day lives, it is important to step back and understand how to validate the information being consumed. Deep Fakes and Voice AI allow an entirely different vector for threat actors to exploit and it underscores the importance of not taking digital media at face value.

With AI tools becoming more available, it is likely that social engineering attacks will continue to evolve in complexity. Around 80% of breaches involve some sort of compromised identity component, and it is unlikely that this number will drop any time soon.

While AI tools may allow phishing emails to appear more legitimate or phone calls to clone the voice of an employee, employee training and administrative controls can blunt their effectiveness.

Always be sure to review emails for signs that they could be phishing attempts (a small triage sketch follows this list):

  • Note the ‘sender’ name and address to be sure they’re legitimate

  • Review the URLs included in the email - hover, don’t click, to see where they redirect

  • Configure ‘External’ banners for all emails originating from outside the organization
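
To make the first two checks concrete, here is a minimal sketch using Python's standard email library to pull the sender headers and embedded URLs out of a saved message so they can be reviewed without clicking anything. The suspect.eml filename is a hypothetical placeholder for a message exported from your mail client.

```python
import re
from email import policy
from email.parser import BytesParser

def review_message(raw_bytes):
    """Print sender details and embedded URLs for manual review."""
    msg = BytesParser(policy=policy.default).parsebytes(raw_bytes)

    # The display name and the actual address often disagree in phishing mail.
    print("From    :", msg["From"])
    print("Reply-To:", msg["Reply-To"] or "(none)")

    # Pull URLs out of the body so they can be inspected without clicking.
    body = msg.get_body(preferencelist=("plain", "html"))
    text = body.get_content() if body else ""
    for url in re.findall(r"https?://[^\s\"'<>]+", text):
        print("URL     :", url)

with open("suspect.eml", "rb") as fh:  # hypothetical saved message
    review_message(fh.read())
```

A From header whose display name says one company while the address belongs to another domain, or a Reply-To that differs from the sender, is a classic phishing tell.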

Set strong controls to prevent the impacts of Business Email Compromise (BEC); a sketch of an out-of-band verification check follows this list:

  • Set a secure system in place to validate any account number or personnel changes

  • Do not allow the changing of payment information via e-mail

  • Designate key individuals authorized to make financial changes and store their contact information
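
These are administrative controls rather than technical ones, but the callback idea can be captured in a short sketch: a payment change is only approved for a known vendor after a phone verification to the number on file, never to a number supplied in the requesting email. The vendor name and phone number below are hypothetical placeholders.

```python
# Contact details come from records established at onboarding,
# never from the email requesting the change. All names are hypothetical.
KNOWN_VENDOR_CONTACTS = {
    "acme-supplies": "+1-555-0100",
}

def approve_payment_change(vendor_id, verified_by_callback):
    """Approve only known vendors that were verified out-of-band by phone."""
    if vendor_id not in KNOWN_VENDOR_CONTACTS:
        return False  # unknown vendor: reject outright
    if not verified_by_callback:
        # Require a callback to the number on file before proceeding.
        print("Hold: call", KNOWN_VENDOR_CONTACTS[vendor_id], "to confirm.")
        return False
    return True

print(approve_payment_change("acme-supplies", verified_by_callback=False))
```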

Thoreson Consulting has also published a blog post on dissecting a real phishing email.

The Bottom Line

There has been a lot of talk about AI and its impact on the future. With the remarkable advances in ChatGPT and AI technology, there will certainly be new ways for threat actors to target organizations, and the barrier to entry is lowering. However, with strong fundamental security processes in place, organizations can remain resilient to AI-assisted threat actors.

Organizations should undergo regular risk assessments and penetration testing to reveal any areas of weakness in their security program. As AI security initiatives continue to develop, organizations should review security controls for effectiveness against new tactics, techniques, and procedures. It is important to remember that security is an ongoing process and that the threat landscape is constantly evolving.

Should your organization have questions or concerns around AI or interest in how Thoreson Consulting can assist in improving your cybersecurity posture, please check out our services and do not hesitate to contact us today.
