Artificial intelligence is making it easier than ever for business users to engage with technology in all its forms, including copilots that let end users aggregate data, automate processes, and even build applications using natural language. This signals a shift toward a more inclusive approach to software development, allowing a wider range of people to participate regardless of their coding knowledge and technical skills.
These technological advances also introduce new security threats that the enterprise must address now; this kind of behind-the-scenes software development simply cannot be overlooked. The reality is that in many organizations, employees and external vendors are already using these types of tools, whether the company knows it or not. Failure to address the risks can result in unauthorized access and the compromise of confidential data, as the misuse of Microsoft 365 accounts with Power Apps demonstrates.
Fortunately, security doesn’t have to be sacrificed for productivity. Application security measures can still be applied to this new way of doing business, even though classic code scanning is ineffective for this type of software development.
Low-code/no-code with AI
ChatGPT set records for the fastest-growing user base of any application in history, so chances are that you and the business users in your organization have tried it in your personal and even professional lives. While ChatGPT has made many processes remarkably simple for consumers, on the enterprise side, copilots such as Microsoft Copilot, Salesforce Einstein, and OpenAI’s enterprise offering have brought similar generative AI functionality to the business world. Generative AI and these enterprise copilots are also having a major impact on low-code/no-code development.
With classic low-code/no-code development, business users drag and drop individual components into their workflow using a wizard-based interface. Now, thanks to AI copilots, they can type: “Build me an app that collects data from a SharePoint site and sends me an email notification when new information is added, along with a summary of what’s new,” and they’re done. This happens outside of IT’s remit, and the result lands in production environments without the checks and balances that a traditional SDLC or CI/CD pipeline would provide.
Microsoft Power Automate is one example of a citizen development platform designed to optimize and automate workflows and business processes, enabling anyone to build advanced applications and automations on top of it. Now, with Microsoft Copilot added, you can simply type a prompt such as: “After an item is added to SharePoint, update Google Sheets and send an email via Gmail.” In the past, this required a multi-step process of dragging and dropping components and connecting all of your work applications; now you can just ask the system to build the flow.
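To make this concrete, here is a minimal sketch in Python of what such a copilot-generated flow boils down to. The field names and connector identifiers are simplified assumptions, not the actual Power Automate schema, but the shape is representative: a trigger, a chain of actions, and the connections each action runs under.

```python
# A simplified, hypothetical representation of the flow such a prompt might
# generate. Real Power Automate definitions are richer JSON, but the shape is
# similar: one trigger, a chain of actions, and the connections (credentials)
# each action runs under. All names below are illustrative assumptions.
generated_flow = {
    "displayName": "SharePoint to Google Sheets + Gmail",
    "trigger": {
        "type": "sharepoint_when_item_created",
        "site": "https://contoso.sharepoint.com/sites/sales",  # assumed site
        "list": "Leads",
    },
    "actions": [
        {"type": "googlesheets_insert_row", "connection": "user@gmail.com"},
        {"type": "gmail_send_email", "connection": "user@gmail.com"},
    ],
    "owner": "business.user@contoso.com",
    "created_by_copilot": True,
}

# Nothing in this structure enforces review: the flow is live as soon as it is
# saved, running under whatever connections the prompt happened to wire in.
for action in generated_flow["actions"]:
    print(action["type"], "runs under", action["connection"])
```

Note that the connections here point at a personal Gmail account, which is exactly the kind of detail a business user is unlikely to scrutinize once the copilot reports success.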
All of these use cases do wonders for productivity, but they usually don’t come with a security roadmap. A lot can go wrong, especially considering that these applications can be easily shared across the enterprise.
Just as you would carefully review a blog post written by ChatGPT and tailor it to your unique point of view, it is critical to review AI-generated workflows and applications and apply security controls such as access rights, sharing restrictions, and data confidentiality tags. However, this usually doesn’t happen, for one main reason: most of the people creating these workflows and automations are not technically trained to do so, or don’t even realize it is necessary. Because the AI copilot promises to do the work for you when you build an app, many people don’t realize that security controls aren’t built in or refined.
The data leakage problem
The main security risk from AI-powered development is data leakage. When you create apps or copilots, you can publish them for wider use, both throughout your company and in app and copilot marketplaces. For an enterprise copilot to interact with real-time data in systems outside its own platform (for example, if you want Microsoft Copilot to work with Salesforce), you need a plugin. Let’s say the copilot you built for your company delivers greater efficiency and productivity, and you want to share it with your team. Well, the default setting in many of these tools is to not require authentication before other people can interact with your copilot.
This means that if you build a copilot and publish it so that employees A and B can use it, all other employees can use it too; they don’t even have to authenticate to do so. In fact, it can be used by anyone in the tenant, including less trusted or less monitored guest users such as external contractors. Not only does this give far more people than intended the ability to play with the copilot, it also makes it easier for bad actors to access the app or bot and launch a prompt injection attack. Think of prompt injection attacks as short-circuiting a bot so that its instructions are overridden and it hands over information it shouldn’t. Thus, weak authentication leads to over-sharing of a copilot that has access to data, and subsequently to over-disclosure of potentially sensitive data.
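As a rough illustration of the kind of check a security team might run, the following Python sketch flags bots that are published without authentication or exposed tenant-wide. The inventory file and its field names are assumptions for illustration, not a real export format from any particular copilot platform.

```python
import json

# Hypothetical export of the copilots/bots published in the tenant. The file
# name and field names ("auth_mode", "shared_with") are assumptions for
# illustration; a real inventory would come from your platform's admin tooling.
with open("copilot_inventory.json") as f:
    bots = json.load(f)

for bot in bots:
    auth = bot.get("auth_mode", "none")
    audience = bot.get("shared_with", "tenant")
    # Flag anything reachable without authentication or exposed tenant-wide.
    if auth == "none" or audience in ("tenant", "everyone"):
        print(
            f"REVIEW: '{bot['name']}' (owner: {bot['owner']}) is reachable "
            f"by '{audience}' with authentication set to '{auth}'"
        )
```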
When building an app, it’s also very easy to misconfigure a step because the AI misinterprets the prompt, causing the app to link a corporate data set to your personal Gmail account. In a large enterprise, this amounts to non-compliance, as data escapes beyond the boundaries of the organization. There is also a supply chain risk: every time you insert a component or plugin, there is a real chance that it is compromised, unpatched, or otherwise unsafe, which means your application is too. These plugins can be “loaded” by end users directly into their applications, and the marketplaces where they are hosted are a security black box. The security implications can be far-reaching and catastrophic at a large enough scale (think SolarWinds).
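Building on the simplified flow schema sketched earlier, here is a hedged example of scanning exported flow definitions for connectors that move data outside the corporate boundary. The file format and the deny list are assumptions to be adapted to your own environment.

```python
import json

# Connectors treated as risky in this hypothetical policy: consumer services
# that move data outside the corporate boundary. Adjust to your own
# allow/deny lists; the names reuse the simplified schema sketched earlier.
UNSANCTIONED = {"gmail", "googlesheets", "dropbox"}

def risky_connectors(flow_definition: dict) -> set:
    """Return the unsanctioned connector families referenced by a flow."""
    used = {action["type"].split("_")[0] for action in flow_definition.get("actions", [])}
    return used & UNSANCTIONED

with open("flow_exports.json") as f:   # assumed export of flow definitions
    flows = json.load(f)

for flow in flows:
    hits = risky_connectors(flow)
    if hits:
        print(f"{flow['displayName']}: data leaves the tenant via {sorted(hits)}")
```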
Another security risk that is common in this new world of software development is what is known as credential sharing. When you create an app or a bot, it’s very common to embed your own identity into it, so every time someone logs in or uses the bot, it looks like it’s you. The result is a lack of visibility for security teams. It may be fine for account team members to access customer information this way, but the same embedded credentials can also be used by other employees, and even third parties, who have no need to access that information. This can also be a violation of GDPR, and if you’re dealing with sensitive data, it can open up a whole new can of worms for highly regulated industries like banking.
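The same inventory-driven approach can surface credential sharing. In this sketch the export format and field names are again assumptions; the logic simply flags apps that are broadly shared but run on a single person’s embedded connection.

```python
import json

# Hypothetical inventory of apps and the connections embedded in them. Field
# names are assumptions; the check itself is the point: an app that is shared
# broadly but runs on one person's embedded credentials means every caller
# acts as that person.
with open("app_inventory.json") as f:
    apps = json.load(f)

for app in apps:
    shared_with = app.get("shared_with", [])
    for conn in app.get("connections", []):
        if conn.get("embedded") and len(shared_with) > 10:  # arbitrary threshold
            print(
                f"CREDENTIAL SHARING: '{app['name']}' is shared with "
                f"{len(shared_with)} users but runs as {conn['owner']}"
            )
```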
How to overcome security threats
Enterprises can and should reap the benefits of AI, but security teams need to put guardrails in place to ensure employees and third parties can operate safely.
Application security teams need to know exactly what’s happening in their organization, and they need that visibility quickly. To keep low-code/no-code AI development from turning into a security nightmare, teams must:
- Gain visibility. You want to understand, across the AI landscape, what is being built, why, by whom, and what data it interacts with. When talking about security, it’s really about understanding the business context behind what’s being built, why it was built in the first place, and how business users interact with it.
- Understand the components. In low-code and generative AI development, each application is a series of components that make it do what it’s supposed to do. Often, these components live in something akin to an app store, from which anyone can download them and insert them into enterprise applications and copilots. This sets the stage for a supply chain attack, where an attacker uploads a component containing ransomware or malware; any application that then embeds that component is compromised. You want to thoroughly understand the components of each of these applications across the enterprise so you can identify risks. This is achieved using software composition analysis (SCA) and/or a software bill of materials (SBOM) for generative AI and low-code assets.
- Identify and remediate issues. The third step is to identify everything that has gone wrong since an app was built and fix it quickly, such as which apps have hard-coded credentials, which apps access and expose sensitive data, and more. Due to the speed and volume of this kind of development (remember, there is no SDLC or IT oversight), there won’t be just a few dozen applications to deal with; security teams may have to manage tens or hundreds of thousands (or more) of individual applications. This can be a huge challenge. To keep pace, security teams should implement guardrails that ensure a rapid response when risky applications or copilots are introduced, whether through alerts to the security team, quarantining applications, deleting connections, or other means (a minimal sketch of such a loop follows this list).
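Pulling the three steps together, here is a minimal sketch of such a guardrail loop, assuming a hypothetical export of low-code apps and flows. The file format, field names, and the quarantine() stub are placeholders for whatever your platform’s admin tooling and response process actually provide.

```python
import json
import re

# A minimal guardrail loop over a (hypothetical) export of low-code apps and
# flows: inventory everything, record which components each item embeds, flag
# hard-coded credentials, and trigger a response. The export format, field
# names, and the quarantine() stub are assumptions; in practice the response
# would call your platform's admin API or a ticketing system.
SECRET_PATTERN = re.compile(r"(password|api[_-]?key|client_secret)\s*[:=]", re.IGNORECASE)

def has_hardcoded_secret(definition: dict) -> bool:
    """Very rough check for credentials embedded in an app/flow definition."""
    return bool(SECRET_PATTERN.search(json.dumps(definition)))

def quarantine(name: str, reason: str) -> None:
    """Placeholder response: alert the security team and mark for review."""
    print(f"ALERT: quarantining '{name}' - {reason}")

with open("lowcode_inventory.json") as f:      # assumed export from admin tooling
    inventory = json.load(f)

print(f"{len(inventory)} apps/flows discovered")          # step 1: visibility
component_usage = {}                                      # step 2: SBOM-style view
for item in inventory:
    for component in item.get("components", []):
        component_usage.setdefault(component, []).append(item["name"])
    if has_hardcoded_secret(item.get("definition", {})):  # step 3: find and respond
        quarantine(item["name"], "hard-coded credentials in definition")

print(f"{len(component_usage)} distinct components in use across the tenant")
```

In practice, the response step would feed alerting, quarantine, or connection-removal workflows rather than simply printing, but the loop of inventory, component analysis, and automated response is the core of keeping pace with citizen developers.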
Master emerging technology
Artificial intelligence democratizes the use of low-code/no-code platforms, enabling business users across the enterprise to benefit from increased productivity and efficiency. The flip side is that these new workflows and automations are not built with security in mind, which can quickly lead to problems such as data leakage and exfiltration. The AI genie is not going back into the bottle, which means application security teams must ensure they have a complete picture of how low-code/no-code development is evolving in their organizations and put appropriate guardrails in place. The good news is that you don’t have to sacrifice productivity for security if you follow the steps outlined above.
About the author
Ben Kliger, CEO and co-founder, Zenity.