
A Guide to Creating a Generative AI Acceptable Use Policy

Bobby Gill | February 1, 2024

With the launch of ChatGPT in late 2022, the massive adoption of generative Artificial Intelligence (AI) by millions of people worldwide immediately brought to light the need for businesses to come up with acceptable use policies for AI in the workplace. From using ChatGPT to craft marketing copy to using DALL-E to generate images for a PowerPoint deck, employees have adopted generative AI technology throughout their daily workstreams. With this adoption has come the realization that businesses need to govern their employees’ use of generative AI to protect themselves from the myriad risks that come with commercial use of this technology. At BlueLabel, we help companies adopt generative AI into their business operations, helping them create hybrid human-machine workflows. One of the first things we do in any such engagement is define the acceptable use policy for generative AI in that workplace. In this article, I will share much of what we’ve learned in our engagements about how companies should think about crafting an acceptable use policy for generative AI tools and what they should include in it, and I will provide a template of a generative AI acceptable use policy that you can download and adapt to your own needs.

Understanding Generative AI in the Enterprise Context

Generative AI is transforming business operations, enabling the creation of diverse content and automating complex tasks with nothing but the written word. Generative AI is a branch of Artificial Intelligence that is capable of generating new content based on its training data, using nothing more than written instructions from the user (called ‘prompts’). There are many different types of generative AI tools, each with its own capabilities and its own set of regulatory and legal risks that a company needs to consider when allowing its employees to access them.

Conversational AI
  • Common use cases in business: customer service automation (chatbots), content creation and curation, language translation services, sentiment analysis
  • Example tools: ChatGPT, Bard, Claude
  • Key ethical considerations: data privacy and security, potential for biased outputs, misinformation risks

Image Generation
  • Common use cases in business: marketing and advertising material creation; product visualization, graphic design, and artwork creation; educational content illustration
  • Example tools: DALL-E, Stable Diffusion-like tools, Midjourney
  • Key ethical considerations: intellectual property concerns, potential for creating misleading images, ethical use in media and advertising

AI Coding Assistants
  • Common use cases in business: code generation and suggestion, bug fixing and code optimization, automating repetitive coding tasks
  • Example tools: GitHub Co-Pilot, DeepSeek Coder, Continue
  • Key ethical considerations: code originality and IP rights, dependency on AI for critical thinking, security vulnerabilities and code review

AI Music and Audio Generation
  • Common use cases in business: background music for videos and presentations, sound effects for media production, music composition for commercial use
  • Example tools: Soundraw, Beatoven.ai, MuseNet
  • Key ethical considerations: copyright and royalty issues, originality and artistic integrity, ethical use in content creation

Key Components of a Generative AI Acceptable Use Policy

What goes into an acceptable use policy for generative AI ultimately depends on the context of the company drafting it and is influenced by many factors, such as regulatory compliance and internal information security certifications. However, in the AI strategy work we at BlueLabel have done with our clients to define guidelines for generative AI usage by employees, we’ve found that most generative AI acceptable use policies need to address the components below. For each area, we will describe some of the key points a good acceptable use policy should cover.

Responsible Use:

As the first point, it’s critical to ensure your employees are not looking to generative AI as a way to switch off their brains and let the robot do all of their work. A good responsible use guideline in a generative AI use policy should emphasize that AI tools are meant to be used as a multiplier of human efficiency, rather than a replacement for human judgement and critical thinking. Further, this policy should outline the need for employees to understand what AI can and cannot do, and to remain vigilant about the propensity of generative AI tools to deliver biased or altogether incorrect outputs.

Data Privacy and Confidentiality:

Stress the importance of handling sensitive data carefully, especially when using AI tools for data analysis or customer interactions. For instance, language-processing AI tools must be configured to avoid unintentionally sharing confidential information. A comprehensive acceptable use policy should cover the following items as they relate to generative AI tools:

  • Prohibition of uploading sensitive files to cloud-hosted AI tools.
  • Data sharing restrictions to prevent AI tools from using prompts as training data.
  • Data retention policies as it relates to prompt history.
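One way some teams operationalize the first two bullets is a lightweight pre-submission filter that screens prompts for obviously sensitive patterns before anything reaches a cloud-hosted tool. This is a minimal sketch under assumed requirements; the `screen_prompt` helper and its patterns are illustrative, and a real deployment would rely on a proper data-loss-prevention service:

```python
import re

# Illustrative patterns for data an acceptable use policy might prohibit
# from being sent to a cloud-hosted AI tool. These regexes are a sketch,
# not a substitute for a real DLP service.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API key marker": re.compile(r"(?i)\b(api[_-]?key|secret)\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

# A prompt containing an email address and an SSN would be flagged,
# letting the tool block submission or ask the employee to redact.
violations = screen_prompt(
    "Summarize this: contact jane.doe@example.com, SSN 123-45-6789")
```

A filter like this does not replace the policy itself; it simply makes the prohibition enforceable at the point where data would otherwise leave the company.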

Intellectual Property and Copyright:

Perhaps the greatest legal risk when using generative AI tools, especially those that create images, is that of intellectual property theft and attribution. A good acceptable use policy needs to dictate the rules regarding honoring copyright and obtaining proper permission when in doubt. Further, your policy should list out the requirements for proper attribution when generative AI images are used for commercial or public purposes.

Ethical Standards and Non-Discrimination:

Address ethical AI use, like ensuring language models do not propagate biased or discriminatory language. Policies should encourage regular auditing of AI outputs for fairness and inclusivity. Further, your policy should address restrictions on creating ‘deepfakes’ or other manipulative or deceptive content with generative AI tools.

In the context of recruitment tools using AI, policies should guide towards mitigating biases and promoting equitable candidate screening. AI outputs must be regularly reviewed to ensure they don’t reflect unintended biases.

Quality Control and Oversight:

Generative AI tools are not predictable and should not be left on their own to pump out content without human supervision and oversight. If you are using them to create customer-facing content, you must have some sort of detection and correction mechanism to handle cases where an AI tool goes linguistically ‘rogue’. It is critical that your acceptable use policy outlines processes and rules that govern the monitoring and quality control of AI outputs. Specifically, you should specify the following:

  • Oversight mechanisms and review policies for AI-generated content.
  • Verification policies for humans looking to publish AI-generated content, to ensure it is accurate and appropriate.
  • Training protocols for teaching employees how to properly prompt an AI and how to recognize biased or incorrect outputs from those tools.
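The oversight and verification bullets above can be sketched as a simple publish gate, where AI-generated drafts are held until a named human reviewer signs off. The `Draft` class, statuses, and reviewer field here are hypothetical, chosen only to illustrate the shape of such a control:

```python
from dataclasses import dataclass

@dataclass
class Draft:
    """A piece of AI-generated content awaiting human verification."""
    text: str
    ai_generated: bool
    status: str = "pending"
    reviewer: str = ""

def publish(draft: Draft) -> bool:
    """Only allow publication once an AI-generated draft has recorded approval."""
    if draft.ai_generated and draft.status != "approved":
        return False  # blocked: no human has verified this output yet
    return True

def approve(draft: Draft, reviewer: str) -> None:
    """Record the human reviewer who verified accuracy and appropriateness."""
    draft.status = "approved"
    draft.reviewer = reviewer

post = Draft(text="Q3 product update...", ai_generated=True)
blocked = publish(post)                          # False: needs review first
approve(post, reviewer="editor@example.com")
allowed = publish(post)                          # True after human sign-off
```

The design point is that approval leaves a record (who reviewed, what state changed), which is exactly what the verification policy in the bullet list asks for.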

Compliance with Laws and Regulations:

Your AI use policy needs to ensure that the use of AI in sensitive fields such as legal, finance, and healthcare adheres to all applicable laws, regulations, and compliance requirements.

Monitoring and Enforcement:

Establish clear monitoring processes for the use of AI in the enterprise. A good policy should mandate the use of only company-provided generative AI accounts and specify a regular audit and compliance protocol and frequency.
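A minimal audit trail supporting this kind of monitoring might append one structured record per AI interaction to a log that compliance reviews on the agreed schedule. The field names and `record_ai_usage` helper below are assumptions for illustration, not a specific product’s API:

```python
import json
from datetime import datetime, timezone

# In practice this would be a durable, append-only store, not an in-memory list.
audit_log: list[str] = []

def record_ai_usage(user: str, tool: str, purpose: str) -> None:
    """Append one structured audit record per AI interaction."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,        # should be a company-provided account per the policy
        "purpose": purpose,
    }
    audit_log.append(json.dumps(entry))

record_ai_usage("j.smith", "ChatGPT (company workspace)", "draft marketing copy")
latest = json.loads(audit_log[-1])
```

Keeping records structured (rather than free text) is what makes the periodic audit the policy calls for practical to run.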

A Generative AI Acceptable Use Policy Template

At BlueLabel, we’ve distilled the insights gained from helping clients across many different industries rationalize their use of generative AI into a standard Generative AI Acceptable Use Policy template. This document is the culmination of best practices, lessons learned, and industry expertise, designed to guide businesses in navigating the complex landscape of generative AI usage.

While we present this template with confidence, borne from our extensive experience in the field, it is essential to note that it is intended as a starting point, not a legal solution. We advise customizing the template to align with your organization’s specific needs, regulatory environment, and operational practices. Remember, as AI technology evolves, so too should your policy, adapting to new challenges and opportunities.

You can download the template for free, just put your email address into this form and you will get a PDF and DOCX copy of the template emailed to you.


Tips for Crafting a Generative AI Use Policy

A generative AI acceptable use policy isn’t worth much if nobody uses it. With a technology like generative AI that is both so transformational and, at the same time, foreboding to employees who fear it might one day replace them, it is important that you educate employees on what generative AI can and cannot do while also drafting guidelines for how they should interact with it. Further, before rolling out an acceptable use policy company-wide, you need to pilot it with a small group of users who can tell you whether the policy is too restrictive or leaves doors open for wily employees to exploit. Finally, in a space developing as rapidly as generative AI, you need to monitor the industry and continue to adjust your policy to stay abreast of the latest developments. Stay vigilant and remember that flexibility is your friend in the ever-evolving AI landscape.

Conclusion

The emergence of generative AI cannot be stopped, and it is foolish for businesses to attempt to bar the use of these tools in their operations. The potential for these tools to dramatically improve worker productivity cannot be overstated and should be embraced. However, as the proverb reminds us, with great power comes great responsibility. Generative AI brings with it some very real and unique risks if it is not used properly, as its unpredictable nature and penchant for telling blatant untruths can quickly land a company in hot water. The very first step in adopting generative AI in your business must be to define the rules and policies that will govern how employees can use this technology. It is only after you’ve set forth these rules of engagement that you can embark on the path of digital transformation with generative AI.

Bobby Gill
Co-Founder & Chief Architect at BlueLabel