A Model Policy for Policing’s use of Generative AI

At this point, nearly everyone has heard of generative artificial intelligence (GAI) tools like ChatGPT, Copilot, or Gemini. These tools are a form of GAI called large language models (LLMs). LLMs are proving themselves to be extremely useful. In day-to-day uses like customer service lines and the drafting of documents, tools like ChatGPT, Copilot, Gemini, Llama, and others are demonstrating huge potential for time savings and dramatic improvements in quality.

Policing’s staffing crisis is proving to be a rational justification for police agencies to authorize the use of GAI tools by their employees. By using GAI tools, users can create documents or images in a fraction of the time traditional methods take. For instance, a staff report that used to take 4-6 hours to write and edit can be done in just minutes using an LLM. However, without a set of guidelines that establishes clear guardrails for the use of these powerful tools, it is only a matter of time before cops get themselves into big trouble through an inadequate understanding of how LLMs work, what their limitations are, and where their pitfalls can produce career-ending results for well-intentioned officers.

In our survey of the policing community, we found a remarkable lack of policy guidance for officers using GAI tools. Accordingly, the Fellows of the Future Policing Institute have created a model policy for the responsible use of this type of AI in policing. The policy and related documents can be found on the Resources page of this website. We have produced three related documents for policing’s use:

  • An explanation of our rationale for what some might consider a risk-averse policy.

  • The actual policy itself. It is in Word format to facilitate customizing for agency needs.

  • A sample department memo, also in Word format, to facilitate the explanation of the policy.

It is our hope that this policy will facilitate the responsible use of AI by policing. AI holds the promise of incredible benefits. However, without fully understanding the rapidly evolving field of GAI, policing could easily find that this beneficial technology harms the relationship the police have with the very people they are paid to protect. Police use of AI must be accompanied by the laser-like focus of police leaders. This is critical today and will be increasingly so in the near future as AI technology leaps forward. We view this policy as our contribution to the technological side of policing that is effective, empathetic, and just.