Artificial intelligence (AI) is an exceptional tool for increasing productivity and innovating at work. However, if you don’t set specific parameters for how to use AI safely in the workplace, you risk your sensitive business data becoming public information.
“If your employees are not already using AI, they will. If you are not proactively helping your employees implement AI responsibly, you could already be leaking data in ways you never imagined possible.” (Aaron Willis, VP Forensic Investigations, CISSP, PFI, SecurityMetrics)
Did you know that artificial intelligence has existed as a field since 1956? It was first conceived as a “thinking machine” effort that aimed to discover “how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.”
If you’re like me, you use AI frequently at your job to take care of rote or simple tasks, or as another “mind” to bounce ideas off of. We’re not alone, either: roughly 75% of workers in 2024 reported using AI to improve at their jobs.
And yet, 74% of IT security professionals report their organizations suffer significant impact from AI-powered threats.
In fact, based on data collected from customers, the SecurityMetrics Forensic Team predicts that “with the help of AI tools, even script kiddies could rapidly create complex completed code that can steal credit card data. For example, AI is being used to create malware for more obscure languages (e.g., Golang, Swift) and generating ecommerce skimmers.”
These potential risks mean that safely navigating AI at your business may be more important than you thought.
I asked artificial intelligence what an AI acceptable use policy is (hey, why not?) and got this response: “An AI Acceptable Use Policy (AI AUP) is a set of rules and guidelines that defines how artificial intelligence tools, systems, or technologies can and cannot be used within an organization, institution, or platform. Its main goal is to ensure AI is used ethically, legally, and responsibly, protecting users, data, and the broader community.”
So, what does it mean to use AI ethically, legally, and responsibly? As someone who works in cybersecurity, I know it’s essential to my role and business that I don’t share critical business information. This includes private information about customers, internal employee documents, or really anything that would be considered my company’s “secret sauce.”
In short, an AI acceptable use policy is a document, distributed to all employees, that sets out the specific rules you’d like them to follow when using artificial intelligence.
There are many elements a successful AI Acceptable Use Policy will include. Here are my suggestions:
Reporting Non-Compliance:
No one wants to be a tattletale. And yet, using AI irresponsibly to cut corners can seriously compromise the safety of your business network and your overall cybersecurity hygiene. A good AI policy will define what counts as breaking company rules and explain how to report someone who isn’t following the guidelines. Sometimes employees simply need more training on how to use AI safely, and reporting can surface that need. Willful disregard of the AI policy may require harsher measures, such as involving your legal team to assess the damage.
List of Acceptable AI Tools:
To use AI safely, your employees need to know which tools are acceptable to use in their role and which ones are not. Personally, I prefer these tools from a cybersecurity perspective:
Tools I avoid include:
Annual Risk Management: Your AI Acceptable Use Policy should clearly outline when you will audit all AI tools for the safety of your company. Frequently updating your policy can both keep employees in the loop and ensure that you’re only using AI tools that are safe for cybersecurity.
Data and Privacy Laws: Your employees should be aware of any data and privacy laws you must follow, such as PCI, GDPR, HIPAA, and CCPA. This requires separate training. SecurityMetrics offers comprehensive training for each of these standards. Or, check out our free Academy training for more general cybersecurity training.
I’m a big fan of artificial intelligence and firmly believe that learning how to use it safely can help your business reach greater heights.
Here are some final tips for using AI safely.
Ensure employees ask for permission before using a tool. You’d be surprised how quickly new AI-powered tools pop up, and your employees may want to adopt a new tool that assists in their role. An “ask for permission first” approach ensures you can evaluate any tool before it touches company data.
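To make the “ask for permission first” idea concrete, here is a minimal sketch of how an internal helper script might check a requested tool against your approved list. The tool names, list contents, and status messages are illustrative placeholders, not recommendations; your actual policy would define the real lists.

```python
# Hypothetical allowlist check for an "ask permission first" AI tool workflow.
# All tool names below are placeholders, not endorsements.

APPROVED_TOOLS = {"tool-a", "tool-b"}   # vetted and approved by the security team
UNDER_REVIEW = {"tool-c"}               # requested by employees, pending evaluation

def check_tool(name: str) -> str:
    """Return the policy status for a requested AI tool."""
    key = name.strip().lower()
    if key in APPROVED_TOOLS:
        return "approved"
    if key in UNDER_REVIEW:
        return "under review - wait for security sign-off"
    return "not approved - submit a request before use"

print(check_tool("Tool-A"))       # approved
print(check_tool("new-ai-app"))   # not approved - submit a request before use
```

Even a simple check like this gives employees a fast, unambiguous answer, and it keeps the evaluation step with the security team rather than with individual users.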
Include your AI policy in third-party vendors' policies. Who you partner with, and their own AI policy, is almost as important as your own. Evaluate whether third-party vendors are following safe AI practices.
Ultimately, you determine to what degree you want employees to use artificial intelligence. I believe AI is an excellent resource that everyone should use to increase productivity, as long as you do so safely.
*Gemini AI and many others only keep your data private with the right contracts in place. Without those, your data is often used as training material for various AI models. And while a model may not directly disclose any specific information, the ideas you used AI to think through may be suggested to other AI users asking similar questions. Your specific product details may not leak, but your idea, and possible paths to achieving it, certainly could.