What will the next phase of artificial intelligence (AI) bring? Amazon has launched AI-generated images for advertisers. The New York Times is hiring a newsroom generative AI lead. Following the adoption of technologies such as robotic process automation (RPA), businesses are adapting their processes to chase a new trend. Uptake of AI goes beyond speed and efficiency, marking a shift toward increased content generation and smarter decision-making.

However, there are cybersecurity risks associated with this emerging technology. By its nature, an emerging technology carries complications and risks we cannot fully know at the outset. Here, we look at the state of AI adoption and how to address cybersecurity concerns.

How are businesses using AI and generative AI, and what is the cybersecurity risk?

Some customer-facing businesses are already using AI to power chatbot functionality and provide more human-like responses to specific customer requests. Financial services companies are using AI in fraud detection to spot anomalies among the massive volumes of transactions passing through their systems. Engineering organizations can leverage generative AI to write new code to support the build-out of new functionality. And in arts-focused industries, such as music and television, AI can generate drafts and other outputs that advance a writer's work.

To get an edge in business, there is always a push to be faster, better, and smarter. For those who move early, focusing on the next big thing could result in a massive influx of revenue. This drive to be at the front of the pack is accelerating AI adoption. However, making a strategic shift without the full support of the organization, or taking a step in the wrong direction, could cause data leakage or a breach through a service that lacks boundaries or appropriate cybersecurity controls.

As businesses embrace AI to improve operations and chase these coveted targets, a number of cybersecurity risks remain and should be considered as they make the shift. Without an appropriate AI cybersecurity strategy, threat actors crafting targeted attacks may successfully take advantage of the AI being used by businesses. Below are a few major risks associated with this newer technology.

Data exfiltration

The launch of ChatGPT in late 2022 drove a huge boom in adoption, but concerns emerged in 2023 that the data used to train generative AI models, such as code snippets, intellectual property, and user search data, could compromise users and businesses. Concerned parties noted that a public AI model trained on a collection of queries and IP addresses could turn input from one business into output used by another, essentially passing along the thought strategies of one firm to another. Certain organizations understand the risk and have implemented preliminary guardrails around the use of public AI services; some went a step further by adding a path for adoption of private AI services. However, unknown security risks will continue to surface as the technology matures.

Social engineering and scams

“Write me a message requesting payment for an invoice in the style of Elon Musk.” 

Consider that prompt, knowing that AI and generative AI could help threat actors improve the quality of their emails and other written communications. Creative use of AI may allow threat actors to generate realistic-looking messages for use in spear phishing campaigns. Recipients who are trained to flag common grammatical mistakes may not suspect an AI-generated message and may even fulfill the request within it.

Code vulnerabilities

In a rapid development and deployment model, such as a Continuous Integration/Continuous Deployment (CI/CD) pipeline, businesses may turn to TuringBots, which generate code, to accelerate development. But sole reliance on AI to generate code will not always yield secure code. A recent Stanford study found that code generated with AI assistance is more likely to contain vulnerabilities than code written by human coders alone. The introduction of code generation should be supported by proper software development lifecycle processes to ensure code does not get deployed with major cybersecurity issues.
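
One practical way to provide that lifecycle support is to gate AI-generated code behind an automated security scan before it can merge. The sketch below is illustrative only: it assumes a Python codebase, the open-source Bandit static analyzer, and a hypothetical src/generated directory where AI-generated code lands; the tool and thresholds should be adapted to your own pipeline.

```python
"""Minimal CI gate for AI-generated code (illustrative sketch only)."""
import subprocess
import sys

GENERATED_CODE_DIR = "src/generated"  # hypothetical location of AI-generated code


def scan_generated_code(path: str) -> bool:
    """Run Bandit against the given path; return True only if no medium/high findings."""
    # -r scans recursively; -ll limits findings to medium and high severity.
    # Bandit exits non-zero when it reports issues, so the return code acts as the gate.
    result = subprocess.run(
        ["bandit", "-r", path, "-ll"],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    return result.returncode == 0


if __name__ == "__main__":
    if not scan_generated_code(GENERATED_CODE_DIR):
        # Block the merge: generated code must be remediated or reviewed first.
        sys.exit("Security scan failed: review AI-generated code before merging.")
    print("Security scan passed; code may proceed to review and merge.")
```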

Data poisoning

AI models learn from the data they ingest, which shapes their decisions and outputs. Typically, datasets are validated and cleaned prior to being ingested. But when threat actors are able to alter the training data with malicious inputs, they can negatively affect the outputs and propagate misinformation, disinformation, and malinformation. In the case of a former Microsoft chatbot, “Tay,” the model was trained on user conversations. Following interactions with the open internet, the chatbot was poisoned with profane “everyday” phrases and began generating the same sentiments in response to user inputs.
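
Validation and cleaning before ingestion is the first line of defense against poisoning. The following sketch is a simplified illustration, not a complete control: it assumes training examples arrive as records with hypothetical source and text fields, and it quarantines anything from an untrusted source or containing obviously suspicious content for human review.

```python
"""Illustrative pre-ingestion check to reduce data-poisoning risk (assumptions noted below)."""
from typing import Iterable

TRUSTED_SOURCES = {"internal_crm", "verified_partner"}  # hypothetical source tags
BLOCKED_TERMS = {"<script>", "drop table"}              # simplistic content blocklist


def is_clean(record: dict) -> bool:
    """Accept a record only if it comes from a trusted source and passes basic content checks."""
    text = record.get("text", "")
    return (
        record.get("source") in TRUSTED_SOURCES
        and 0 < len(text) <= 2000                       # reject empty or oversized inputs
        and not any(term in text.lower() for term in BLOCKED_TERMS)
    )


def filter_training_batch(records: Iterable[dict]) -> list[dict]:
    """Return only records that pass validation; everything else is quarantined for review."""
    accepted, quarantined = [], []
    for record in records:
        (accepted if is_clean(record) else quarantined).append(record)
    if quarantined:
        print(f"Quarantined {len(quarantined)} suspicious records for manual review.")
    return accepted
```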

Some cybersecurity solutions to consider when introducing AI

When businesses implement a proper cybersecurity strategy and roadmap to address AI and generative AI, they equip themselves to protect that significant investment. Within the AI cybersecurity strategy, consider these four key domains for cybersecurity protection when incorporating AI into the business:

  1. Data privacy and quality: Revisit the data lifecycle and associated policies to ensure that data has passed internal checks prior to being provided to an AI platform. Further, ensure that the data is not subject to data privacy regulations or other oversight that could land the business in legal trouble.
  2. DevSecOps: Embrace the concept of securing the development lifecycle, from planning to final implementation. If generative AI has created code to inject into the process, then vetting the code, checking libraries, and scanning for vulnerabilities prior to merging into production may significantly reduce the risk of vulnerabilities. Just as AI is used in endpoint and network protection to detect anomalies, a properly implemented AI solution may play a similar role in the DevSecOps lifecycle, overseeing the pipeline to detect anomalies and further bolster secure coding practices.
  3. MLSecOps: MLSecOps extends the DevSecOps concept from the software lifecycle to the machine-learning lifecycle, to ensure protection across the implementation of data models and the AI itself. This may include engaging a qualified AI cybersecurity professional to ensure the AI use-case outputs remain within the range allowed by the business (i.e., reliable, safe, resilient). It may also extend to additional secure coding practices and to ensuring proper plans are in place to respond to threat actors.
  4. Assurance: Review the inputs and outputs of AI and generative AI for consistency in the quality of outputs. This involves validating data ingress and egress points, reviewing the quality of data against known sources (e.g., hash checks; a minimal sketch follows this list), and performing regular audits against known standards (e.g., NIST AI RMF).
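
As a simple illustration of the hash checks mentioned in the assurance domain above, the sketch below compares a dataset file's SHA-256 digest against a known-good value recorded at the last audit; the file path and expected digest are placeholders.

```python
"""Illustrative integrity check for AI training or reference data (placeholder values)."""
import hashlib
from pathlib import Path

# Hypothetical manifest of known-good digests captured at the last audit.
KNOWN_GOOD_HASHES = {
    "data/customer_transactions.csv": "<expected-sha256-digest>",
}


def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large datasets do not need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_dataset(path_str: str) -> bool:
    """Return True if the file's digest matches the manifest; a mismatch warrants investigation."""
    expected = KNOWN_GOOD_HASHES.get(path_str)
    actual = sha256_of(Path(path_str))
    if expected is None or actual != expected:
        print(f"Integrity check FAILED for {path_str}: expected {expected}, got {actual}")
        return False
    return True
```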

How can we help?

If your company is concerned about cybersecurity risks while the business undergoes an AI / generative AI transformation, the team at AlixPartners can help. Our team of experienced professionals has the knowledge and expertise to assess the current state of AI / generative AI usage in your environment, develop a strategy, and design solutions that allow AI / generative AI to augment your teams securely. We can provide innovative solutions, use cases, and recommendations for improving your cybersecurity posture, help you implement the necessary controls to protect against common attacks and emerging threats, and communicate cybersecurity risks to the business effectively.

In addition, our team can provide guidance on ongoing monitoring and support to ensure that your systems and data remain secure. Don't let threats put your organization at risk. Contact the AlixPartners Cyber team today to learn more about how we can help protect your business.