Is your corporate private data being exposed and put at risk by AI tools? If your employees are using the free AI (Artificial Intelligence) tools that are readily available, and sometimes even installed by system updates, the answer could very well be “yes.” It’s imperative that anyone using AI understands the risks and how to protect their data and their business.
Artificial intelligence (AI) refers to the capability of computational systems to perform tasks typically associated with human intelligence, such as learning, reasoning, problem-solving, perception, and decision-making. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals. Such machines may be called AIs. (Wikipedia)
First, as with any free software, there are caveats to consider. The EULAs (End User License Agreements) that most people accept without reading typically offer no guarantees of security or privacy. Free software of any kind can pose a security risk in and of itself: the libraries and code that make it work may contain vulnerabilities that are introduced to any system it runs on. Additionally, because AI learns from and interprets the data it is given, merely entering text, images, sound, or other content gives the tool access to that data. If any of it is sensitive PII (Personally Identifiable Information), the tool may “leak” it. Now consider that most AI tools are cloud-based. Not only may the application itself have security issues, but the cloud environment may not be well protected against unauthorized outside access, or even against access by others using the same cloud platform. So if someone enters Social Security numbers, account information, health information, or any other sensitive data, it could be exposed and stolen. Paid tools should include assurances of proper data security and privacy.
AI tools are constantly evolving to improve their performance, reasoning, and outcomes. Generative AI tools work by ingesting the data input to them and generating output, which is then accepted or rejected; from that feedback they “learn” and become more powerful and more accurate, at least in theory. Free tools use all the data they receive in their learning models, and large volumes of data are needed for refinement. Unfortunately, bad data can corrupt the model, whether it is entered in error or with malicious intent, and again, free tools offer no assurances against such situations. Why is this a concern? Bad data in the model can cause the AI to produce erroneous results, often referred to as hallucinations. Now think about someone who wants to do damage: they feed the model garbage with the intent of producing or skewing a certain outcome. Your employees are using the tool to make decisions and produce content; what could possibly go wrong?
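To make the poisoning idea concrete, here is a deliberately simplified sketch in Python. The counting “model,” the vendor names, and the feedback strings are all made up for illustration; no real generative AI system is this simple, but the failure mode it shows, a model that trusts whatever it is fed, is the one described above.

```python
from collections import Counter

# Toy illustration only: a trivial "model" that learns which vendor to recommend
# from whatever feedback users submit. The vendor names are hypothetical.
feedback_counts = Counter()

def learn(feedback: str) -> None:
    """Every piece of submitted feedback goes straight into the model, good or bad."""
    for word in feedback.lower().split():
        feedback_counts[word] += 1

def recommend() -> str:
    """Recommend whichever vendor is mentioned most often in the learned data."""
    vendors = ["acme", "globex"]
    return max(vendors, key=lambda v: feedback_counts[v])

# Honest users mention Acme favorably a handful of times.
for _ in range(5):
    learn("Acme delivered on time and support was great")

# A malicious user floods the tool with garbage praising Globex.
for _ in range(500):
    learn("Globex is the best choice Globex Globex")

print(recommend())  # prints "globex": the poisoned data now drives the output
```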
Generative artificial intelligence (Generative AI, GenAI, or GAI) is a subfield of artificial intelligence that uses generative models to produce text, images, videos, or other forms of data. These models learn the underlying patterns and structures of their training data and use them to produce new data based on the input, which often comes in the form of natural language prompts. (Wikipedia)
Plagiarism and copyright infringement are also concerns. GenAI tools are built on enormous amounts of data. Learning models can be composed of words in the form of text or speech, but also of any other content input to the tool, and that data is used to “generate” new content. There have been numerous lawsuits by individuals and organizations against various AI companies, alleging unauthorized use of their data to build the tools or output that duplicates or manipulates their content without permission or credit. Maybe being sued for plagiarism or copyright infringement is not a huge concern given the way your organization uses AI, but what if a tool published PII that your employees entered? What if that was combined with a bit of a hallucination? Depending on what you are using the tool to do, how do you know the output is trustworthy?
AI tools can provide immense benefit in automating and streamlining your operations, but approach their implementation with care and oversight. Ensure your team knows to get approval before putting any of these tools into use, and then maintain oversight of the data going into and coming out of them.
So how do you protect your organization against the threats of AI?
- A first layer of defense is making sure your systems and employees are protected against threats with basic security tools and training.
- Next, only use AI tools from trustworthy vendors that develop their AI securely and with oversight: when a vulnerability is discovered, patches and fixes are released promptly; the vendor has measures in place to prevent unauthorized access to or exfiltration of your data; and the tool has means to detect model poisoning, plagiarism, and malicious activity.
- Control who has access to the tools and to the data within them. Only allow access to users who need the tool.
- Read the privacy policies and terms of service along with any subsequent updates to those policies and terms.
- Only input the data needed to get the desired outcome; extra data means extra risk (see the sketch after this list).
- Require employees to use strong passwords and multi-factor authentication.
- Keep the tool and all software on your systems updated.
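The “only input the data needed” point can be enforced in practice by scrubbing obvious identifiers before a prompt ever leaves your network. Below is a minimal sketch in Python; the patterns, the redact_pii helper, and the send_prompt placeholder are illustrative assumptions rather than any specific vendor’s API, and a production filter would need much broader coverage (names, addresses, account numbers, and so on).

```python
import re

# Illustrative patterns only; real deployments need broader, locale-aware coverage.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace anything matching a known PII pattern with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

def send_prompt(prompt: str) -> str:
    # Placeholder for whatever approved cloud AI tool or API your organization uses.
    # The point is that only the redacted text ever leaves your systems.
    safe_prompt = redact_pii(prompt)
    print("Outbound prompt:", safe_prompt)
    return safe_prompt

if __name__ == "__main__":
    send_prompt("Summarize this claim for John: SSN 123-45-6789, reach him at john@example.com.")
```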
We offer extensive security training covering basic topics as well as electives, including AI. Contact us to access a Baseline Employee Cybersecurity Assessment (BECA). This web-based assessment highlights vulnerabilities tied to employee behavior and awareness, helping managers understand where their team stands today and what areas may need extra attention across key categories like phishing response, password practices, and data handling.
Pair it with the AI Readiness Innovation Assessment (AIRIA) to see both the risks you face now and the opportunities to adopt AI safely, giving you the context and confidence to move forward. AIRIA identifies where AI can support productivity, reduce friction in processes, and improve efficiency and decision-making, without introducing unnecessary risk.