10 things to include in your AI policy


Published: 28 June 2024 | by Natasha K. A. Wiebusch, Brightmine Marketing Content Manager

The future of work is AI-powered. Companies across the globe are making huge investments in AI, racing to integrate it and maximise productivity and innovation. However, many are still struggling to make it past the introductions.

Organisations need to put systems in place to fully partner with AI… and they should start with a solid AI policy. In this article, we cover the key AI policy provisions that will help usher in a new era of AI at work.

Why creating an AI policy is paramount

Creating an AI policy is paramount because it provides employees and leaders with a roadmap for proper AI use in the workplace. It informs employees of what types of AI they can use and how they may (or may not) use it in their work.

And though basic guidance is key, an effective workplace AI policy shouldn’t just set parameters and create disciplinary measures. It should also act as a catalyst for integration and innovation. Specifically, a well-drafted policy can help establish systems to integrate AI into daily work. It can also help the organisation foster cross-functional collaboration and monitor AI tools to ensure quality and compliance.

Additionally, AI policies help organisations manage the adoption of new types of AI. As organisations continue on their AI journey, they will need a safe and efficient way to purchase (or build) and integrate new AI tools. A policy can detail this process and establish a chain of approval.

AI policy provisions you need

Developing an AI policy that covers everything you need it to cover can be daunting. You’ll need to have a deep understanding of how your organisation intends to use AI now and in the future. To get you started, the following are policy provisions every AI policy should have:

1. Scope

The scope of an AI policy is extremely important. To draft an appropriate scope, first consider what types of AI you'd like the policy to cover and whether you'd like to leave room for AI growth. For example, you can either create a policy specific to chatbots, or you can create an all-encompassing policy.

The scope should also help employees understand whether the policy covers AI built internally, provided by vendors or both. And, you may want to cover types of technology that are not AI, but still use data to draw insights. For example, you may decide that the policy should cover any use of advanced analytics tools.

Other policies

The scope of the policy should help readers understand how it interacts with other workplace policies. For example, you may state that employees should read the AI policy together with other relevant policies. This may include the following policies:

  • Cybersecurity policy.
  • Employee code of conduct.
  • Workplace security policy.
  • Diversity, equity and inclusion (DEI) policy.
  • Privacy policy.

2. Purpose

The purpose of a workplace policy provides a preview of what types of guidance the policy provides. Some AI policies focus only on compliance and proper use. A more thorough policy will include guidance on ethics, auditing AI tools or approving new ones.

When drafting your policy purpose, consider what you’d like to accomplish with the policy and your organisation’s AI strategy. Your AI strategy will help you understand what types of guidance your employees need.

3. Definitions

To help employees understand the contents of the policy, include definitions of key terminology related to AI technology. The following are common terms you may want to include:

  • Algorithm.
  • Analytics.
  • Artificial intelligence (AI).
  • Big data.
  • Chatbots.
  • Generative AI.
  • Hallucination.
  • Large language models (LLMs).
  • Machine learning.
  • Natural language processing (NLP).

4. Contact information

Employees may have questions about the policy, or they may want assistance with using AI in their work. To help connect employees to members of the organisation who can help, include a section for contact information. You may also consider including contact information for vendors of AI systems that are available to all or most employees.

5. What AI employees can and cannot use

Include guidance on what types of AI employees can and cannot use. Employees should understand:

  • Whether they can use a specific AI tool.
  • Whether they need permission.
  • Whether the tool is banned.
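One lightweight way to make this guidance actionable is to publish a register of tools and their approval status alongside the policy. The following is a minimal, hypothetical sketch; the tool names, status labels and default behaviour are illustrative assumptions, not part of any specific policy:

```python
# Hypothetical sketch of an AI tool register a policy team might publish.
# Tool names and statuses are illustrative only.

APPROVED = "approved"        # may be used without further sign-off
PERMISSION = "permission"    # line-manager approval required first
BANNED = "banned"            # must not be used for company work

AI_TOOL_REGISTER = {
    "internal-chatbot": APPROVED,
    "vendor-summariser": PERMISSION,
    "public-chatbot": BANNED,
}

def usage_status(tool_name: str) -> str:
    """Return the policy status for a tool; unknown tools default to
    requiring permission, which is the safer assumption."""
    return AI_TOOL_REGISTER.get(tool_name, PERMISSION)

print(usage_status("internal-chatbot"))  # approved
print(usage_status("brand-new-tool"))    # permission
```

Defaulting unknown tools to "permission required" rather than "approved" keeps the register safe as new tools appear faster than the policy is updated.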

An important question you’ll need to answer is whether employees can use publicly available AI, such as ChatGPT. Many organisations, including Amazon and JPMorgan, have banned the use of ChatGPT or other public AI. Though these tools can be helpful to employees, they also pose security, privacy and quality risks. Before deciding to allow employees to use these tools, consider the risks and alternatives.

6. Proper AI use

Provisions addressing proper AI usage ensure increased productivity does not sacrifice quality or safety. Proper use provisions should describe when and how employees can use AI or AI-generated content. They should also address employees' responsibility to make sure AI use is safe and beneficial to the organisation.

A policy can’t define every instance of proper use. So, instead of trying to define every specific instance, consider providing general guidelines that apply to all employees. Some topics to address in your proper AI use guidelines include:

The types of work or tasks for which employees can use AI

Describe the types of work for which employees can and cannot use AI. For example, employees may be able to use chatbots for routine emails and communications, but not for technical research.

Communication and approvals with line managers

Ensure employees know who to speak to before they begin using AI. For example, employees may need to request permission from their line manager before engaging an AI tool. This can be on a per-project basis, yearly or otherwise. Communication or formal approval can help increase safety while providing valuable insight into the organisation’s AI use rates.

Whether and to what extent employees can use AI outputs

Generative AI is one of the most common AI applications at work, particularly chatbots. These tools can write entire articles (e.g., blog posts) in seconds. However, they present unique risks as well, such as inaccuracies, repetitiveness and off-brand content. In the policy, describe how and for what types of work employees can use these types of AI outputs.

Whether employees must attend online orientation and training for AI use

Currently, one of the top challenges to AI adoption is employee upskilling. In fact, recent data shows that most employees don't have the skills needed to fully partner with AI. Internal upskilling can help with this. In the policy, consider whether to recommend or require AI training for employees.

7. Ethics and responsibility guidelines

Organisations must also ensure they use AI responsibly and ethically. To ensure stakeholders at all levels of the organisation set the right ethics and responsibility standards, include guidelines in the policy. This may include the following:

Ensuring human oversight of AI

AI can automate mundane tasks and increase productivity. However, the policy should ensure that a competent human always monitors AI while it completes these tasks. In addition to requiring human oversight, include details such as how to confirm accuracy or audit outputs.

Preventing discrimination or bias

AI can benefit the organisation by improving productivity, performance and innovation. It should not, however, result in bias or discrimination against employees or customers. Accordingly, commit to using AI in a way that does not lead to these negative outcomes for employees, customers or vendors. When drafting these provisions, consider your DEI and environmental, social and governance (ESG) policies.

Protecting privacy and security

Employers must protect the business and comply with various laws and regulations related to information privacy and security.

Guidance on privacy and security should address how the organisation will continue to protect privacy and security in its AI practices. This may include reference to the organisation's other policies on privacy and security, such as the cybersecurity policy and the workplace security policy.

Providing sensitive information to an AI

Provide specific guidance on the types of information employees can provide to AI. Without clear guidance, disclosing sensitive information to an AI can lead to compliance issues and other problems. Sensitive information may include:

  • Employee or customer personal information.
  • Private company information.
  • Trade secrets.
  • Information related to intellectual property.

Anonymising data before providing it to an AI

Employees may need to provide an AI with company or personal information to carry out specific projects. For example, HR teams using salary benchmarking software will need to provide their employee pay data. In cases like these, consider requiring employees to anonymise all data whenever possible before providing it to an AI.
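A simple pseudonymisation step can make this requirement concrete: replace direct identifiers with a salted token and drop fields the tool doesn't need. The sketch below is a hypothetical illustration, not a complete anonymisation scheme; the field names and salt-handling are assumptions, and truly anonymising data usually requires more than hashing identifiers:

```python
# Hypothetical sketch: pseudonymising an employee pay record before sharing it
# with an external benchmarking tool. Field names are illustrative only.
import hashlib

def pseudonymise(record: dict, secret_salt: str) -> dict:
    """Replace the direct identifier with a salted hash and keep only the
    fields the benchmarking exercise actually needs."""
    token = hashlib.sha256(
        (secret_salt + record["employee_id"]).encode()
    ).hexdigest()[:12]
    return {
        "employee_token": token,  # stable pseudonym; not reversible without the salt
        "grade": record["grade"],
        "salary": record["salary"],
        # name, email and other direct identifiers are deliberately omitted
    }

raw = {"employee_id": "E1234", "name": "A. Person", "grade": "L4", "salary": 52000}
print(pseudonymise(raw, secret_salt="store-this-salt-securely"))
```

Keeping the salt secret and stored separately means the same employee maps to the same token across submissions without the vendor being able to recover the identity.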

Ensuring employee safety

AI should not, under any circumstances, put employee lives in danger. A provision addressing AI and employee safety in the policy will help ensure a safe environment in which employees can work. When drafting this provision, you may also want to reference other relevant policies or procedures related to employee safety.

Practising organisational transparency

As Emily Mackenzie, AVP of market planning and product at Brightmine, noted in her article on driving AI readiness, 54% of employees have ‘no idea’ how their company is using AI. This is not only poor ethical practice, but it’s also damaging to AI integration.

Practising transparency will help organisations build trust with their workforce, foster collaboration on projects and further AI integration. You can increase transparency by including a genuine commitment to transparency in the policy.

In addition to including ethics and responsibility guidelines in the policy, consider creating a separate document that describes the organisation’s commitments to ethical and responsible AI use.

8. AI audits

AI audits are a common risk management strategy. They ensure compliance and accuracy, and they prevent bias and discrimination. Audits also keep the organisation informed on the AI’s data quality.

AI audits are considered good practice across the board. Now, some jurisdictions require them for certain types of AI. For example, New York City requires employers to audit certain AI tools for bias.

To provide a foundation for your AI auditing procedures, include guidelines on how and how often the organisation must carry out AI audits. Consider also identifying which department has ownership over the audit and referencing any existing audit procedures.

You may also include reference to standards (which may be in a separate document) an AI tool or its data must meet to pass the audit.
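One common audit metric is the impact ratio: each group's selection rate divided by the highest group's rate, with low ratios flagged for review. This is the kind of calculation used in bias audits of automated hiring tools (for example, under New York City's rules); the sketch below is a hypothetical illustration with made-up selection counts, not a compliant audit procedure:

```python
# Hypothetical sketch of an impact-ratio check for a bias audit.
# Selection counts below are illustrative only.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group that the tool selected."""
    return selected / applicants

def impact_ratios(groups: dict) -> dict:
    """Each group's selection rate divided by the highest group's rate."""
    rates = {g: selection_rate(s, a) for g, (s, a) in groups.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

groups = {"group_a": (40, 100), "group_b": (24, 100)}
ratios = impact_ratios(groups)
print(ratios)  # group_b's ratio is about 0.6 -- below the common 0.8 threshold
```

A ratio below a chosen threshold (0.8, the "four-fifths rule", is a common benchmark) doesn't prove discrimination on its own, but it identifies where the audit should look more closely.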

9. Adopting new AI

Over time, and as AI develops, your company will likely adopt new AI tools. The AI policy can help ensure employees adopt AI tools in a way that is transparent, consistent and safe. Policy guidelines for adopting new AI can provide employees with a clear process and relevant contact information.

In the policy, include guidance on how employees or the company may request or implement a new AI tool. This should include reference to AI auditing procedures where relevant. Additionally, consider referencing a form or other supporting documents if relevant.

10. Disciplinary actions

Unfortunately, you may need to navigate violations of the AI policy. To help deter violations, include a description of disciplinary action the organisation may take if an employee violates the policy. This provision will differ depending on how the organisation approaches discipline and the jurisdictions in which it operates.

Additional considerations for your AI policy

Policy drafting is about more than writing. The practice is cross-functional, and it requires communication and collaboration with stakeholders from across the company.

Gaining buy-in from stakeholders, including employees who will use AI, will help make it a success. It will promote transparency, build trust and help shape the policy so that it meets the organisation’s unique needs.

Consult legal counsel

In addition to gaining buy-in, consult with your local counsel about the policy. Having counsel involved from the beginning will ensure the policy complies with the many laws that may apply, including data security and privacy laws. This is especially important for employers operating in multiple jurisdictions. Having legal counsel involved will also keep the drafting process efficient.

Conclusion

From effective use to ethical considerations, drafting an AI policy is a complex process. Still, the benefits of creating a policy that helps usher in the future of work are countless and well worth the time. By incorporating the above provisions into an AI policy, you can strike a balance between innovation and responsibility while fostering trust with your employees and the public.