Navigating the Good and Bad of AI


Despite the overwhelming promise of AI, its rapid proliferation has been accompanied by some unwanted side effects, including a dramatic uptick in cyberattacks, particularly social engineering attacks like phishing, which have grown 1,265% since late 2022. As a result, organizations across industries face an ever-evolving challenge: balancing the positive use cases of AI, such as automated monitoring and streamlined workflows, against its very real risks.

The challenge is so pressing, in fact, that President Joe Biden issued an executive order his administration dubbed “the most sweeping action” ever taken on AI security. As organizations establish usage policies in response to the executive order, they must consider how to account for the risks of AI while also leveraging the technology to detect and prevent attacks.

Lowering the Barrier to Entry for Cybercrime

Carrying out a cyberattack isn’t easy, but AI is lowering the barrier to entry. For instance, less than eight months after ChatGPT’s public launch, researchers discovered WormGPT, a tool quickly dubbed its “malicious cousin.” The self-described “evil neighborhood chatbot” is designed to assist hackers in their exploits, and other variants, such as BadGPT and EvilGPT, are cropping up as well.

These free online tools are prime examples of a broader trend. According to a report by the National Cyber Security Centre (NCSC), “Publicly available AI models already largely remove the need for actors to create their own replica technologies. …” In turn, novice hackers are better equipped to carry out a cyberattack, while more sophisticated hackers have a powerful new tool in their arsenal.

These AI tools can mine social media to craft hyper-personalized phishing emails, generate emotionally compelling text, and modify code to dodge malware detection engines.

Plus, AI capabilities are evolving rapidly. According to the NCSC, AI is already being used by state and non-state actors alike, and makes reconnaissance and social engineering “more effective, efficient, and harder to detect.” All things considered, cybercrime will likely become a commoditized offering with cutting-edge capabilities available as a service in the near future.

Strengthening Organizational Defenses

With a growing number of hackers armed with AI, it’s more urgent than ever for organizations to ensure their cybersecurity hygiene is up to par. Multifactor authentication (MFA), role-based access controls, and security patches can all interrupt the efforts of bad actors. Here are some other important cybersecurity considerations for software providers, per CISA’s Secure by Design:

  • Default passwords—Does your product rely on default passwords, and are they required to be changed upon login?
  • Reducing classes of vulnerabilities—What are you doing to scrub input for SQL injection or XSS? (See the sketch after this list.)
  • Policy—Does your business have a vulnerability disclosure policy?
  • Logging—Does the product provide detailed logs around configuration changes?

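On the input-scrubbing point above, parameterized queries are the standard defense against SQL injection. Below is a minimal sketch in Python using the standard-library sqlite3 module; the users table and the find_user function are hypothetical, for illustration only.

    import sqlite3

    # Hypothetical schema, purely for illustration.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT, email TEXT)")
    conn.execute("INSERT INTO users (username, email) VALUES ('alice', 'alice@example.com')")

    def find_user(conn: sqlite3.Connection, username: str):
        # UNSAFE (shown for contrast): splicing input into the SQL string lets
        # a value like "alice' OR '1'='1" rewrite the query itself:
        #   f"SELECT id, email FROM users WHERE username = '{username}'"
        # SAFE: the ? placeholder sends the value separately from the SQL,
        # so the driver treats it strictly as data, never as syntax.
        return conn.execute(
            "SELECT id, email FROM users WHERE username = ?", (username,)
        ).fetchone()

    print(find_user(conn, "alice"))             # matches the real row
    print(find_user(conn, "alice' OR '1'='1"))  # None: injection attempt fails

The same principle, keeping untrusted input strictly in the data channel, underlies output encoding as a defense against XSS.
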
In addition to maintaining a strong baseline of cyber defenses, organizations can deploy AI of their own to help detect and remediate anomalous behavior. As they do, they must ensure the proper policies, processes, and training are in place, and with the landscape changing so quickly, those policies must be audited and reassessed on an ongoing basis. As described in detail in a report by Harvard’s Belfer Center, organizations should conduct an AI suitability test for any new tool: determining the value added by the AI, the vulnerability of the system itself, the level of damage an attack could entail, and whether there are alternatives to implementing it.
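
One way to make that test concrete is to record the criteria in a structured checklist that is revisited at each audit. The sketch below is a hypothetical illustration in Python, not the Belfer Center’s actual rubric; the field names and example values are assumptions.

    from dataclasses import dataclass

    @dataclass
    class AISuitabilityCheck:
        # The four criteria above, recorded per tool (hypothetical structure).
        tool_name: str
        value_added: str           # what the AI contributes beyond existing tooling
        vulnerability_notes: str   # how the AI system itself could be attacked
        worst_case_damage: str     # blast radius if the tool is compromised or wrong
        alternatives: list[str]    # non-AI options that were considered

    check = AISuitabilityCheck(
        tool_name="anomaly-detector",
        value_added="flags logins that rule-based alerts miss",
        vulnerability_notes="model inputs reachable from user-supplied logs",
        worst_case_damage="missed intrusion, or alert fatigue from false positives",
        alternatives=["rule-based SIEM alerts", "managed detection service"],
    )

Capturing the answers in a structured form makes the ongoing reassessment auditable rather than a one-time judgment call.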

The cornerstone of AI is data, so organizations should also assess what they are feeding the model and where that data is stored. It’s preferable to choose an AI solution that keeps data inside a closed ecosystem, in a manner compliant with regulations such as GDPR and safe harbor frameworks. Once data is released into the wild of the internet, there’s no putting it back in the box. Source code, in particular, should never be fed to a public AI; doing so essentially hands hackers a map of the castle.
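
As a simple illustration of keeping sensitive material away from public models, organizations can gate outbound prompts with a pre-submission filter. The sketch below is a minimal, hypothetical example; the patterns and the safe_to_submit function are assumptions, not a complete data-loss-prevention solution.

    import re

    # Hypothetical patterns for material that should never reach a public model.
    BLOCKED_PATTERNS = [
        re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),      # key material
        re.compile(r"(?i)\b(?:api[_-]?key|secret|password)\s*[:=]"),  # credentials
        re.compile(r"\bdef \w+\(|\bclass \w+[:(]|#include\s*[<\"]"),  # code fragments
    ]

    def safe_to_submit(prompt: str) -> bool:
        # Return False if the prompt looks like it contains secrets or source code.
        return not any(p.search(prompt) for p in BLOCKED_PATTERNS)

    print(safe_to_submit("Summarize this press release."))              # True
    print(safe_to_submit("Debug this: def login(password='hunter2')"))  # False

A real deployment would pair a filter like this with logging and human review, since regex checks alone are easy to evade.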

The Bottom Line

The transformative potential of AI is real, but it’s crucial to remember that any cutting-edge technology can be used for malicious purposes as well. As Biden’s executive order states: “In the end, AI reflects the principles of the people who build it, the people who use it, and the data upon which it is built.”

Organizations need to be cognizant of the risks of AI, both when implementing new solutions and when defending against AI-enabled attacks.

With the proper precautions, organizations can protect against a wide array of attack vectors while also reaping the benefits of cutting-edge tools. Ready or not, AI is here to stay, and the landscape is changing quickly. Organizations must adapt and reassess on an ongoing basis to stay ahead of the curve.

