The Utility and Challenge of AI in Police Report Writing
Cops all across America are using AI to help them write their reports. Of this, we have no doubt. And it’s completely understandable. Large Language Models (LLMs) like ChatGPT, Copilot, and Gemini are nothing short of amazing in their ability to create high-quality text. They’re tremendous time savers – a real boon as policing struggles with acute staffing shortages.
However, there are multiple implications of cops’ use of AI that police leaders must consider. This is why the Institute created its model policy on policing’s use of generative AI tools like ChatGPT. It is also why FPI Fellow George Watson wrote a short piece on the benefits and pitfalls of using AI to write police reports. This is, at once, a simple and complicated issue. It’s simple because these AI tools are so easy to use. It’s complicated because there is inadequate understanding of how these tools work, what they’re doing when they create documents, the privacy and security issues they present, the courtroom and evidentiary implications of AI-generated reports, and more.
The FPI endeavors, through its work via the Center on Policing and Artificial Intelligence, to help advance the responsible use of AI by the police. We have concerns about the potential for harm that rapidly advancing AI technologies present. But we also have great hope for AI’s promise to enhance community safety and police effectiveness. Learning about AI, understanding its pros and cons, and implementing responsible policies and practices is how policing will realize the full potential of AI while also becoming more effective, empathetic, and just.
To read Watson’s article, click here.