Last week, I spent two days at the Government Accountability Office (GAO) Comptroller General Forum on Artificial Intelligence. The event brought together AI experts from the fields of autonomous vehicles, cybersecurity, finance, and criminal justice to discuss the current state of AI, propose what the near future will look like, and present potential policy recommendations and research questions to be taken up by Congress.
Being a part of this well-run, interdisciplinary conversation was a treat. Often stuck in the world of law and criminal justice, I found it a welcome change to hear from experts tackling these issues from non-legal perspectives.
Over the two days, I gained perspective on the state of AI broadly and how far criminal justice specifically has to go. Without question, the use of AI in the criminal justice system is nascent. The fact that criminal justice was included (to the exclusion of health care, education, or any other policy area) remains surprising to me. One GAO staffer told me that what partially sparked criminal justice's inclusion was my op-ed in Wired, which discussed the potential application of neural nets in risk assessment tools. While flattering, it illustrated my surprise: the jumping-off point for criminal justice's inclusion in the forum was a prospective use of AI, not its current application.
To that end, it did feel as though Richard Berk of Penn, the other criminal justice expert in the room, and I were talking about what-ifs, as opposed to the other experts, who were talking about current challenges. Nevertheless, by listening to experts from industry, government, and academia in other fields, I came away with a new perspective on AI in criminal justice. Specifically, because criminal justice lacks a strong market incentive, the federal government needs to help if quality AI is going to be created in this space. It can do this by building training datasets and creating a transparency mechanism.
While everyone else in the room represented fields with a strong market component, criminal justice was alone in being a government industry. Yes, there are private defense attorneys and third-party vendors that sell products and services to the criminal justice system; however, this does not offset the fact that the criminal justice sector is government owned and operated. What this means for the AI discussion is that there is no market incentive to develop AI at the rate we see in the medical or transportation fields, where the financial payoff will be tremendous.
If the criminal justice system wants to be serious about AI, it is going to need better training data. As I've written about previously, there are organizations trying to collect county-level data and clean it for cross-jurisdictional analysis. However impressive the work of a scrappy non-profit, that arrangement is not ideal given the gargantuan size of this project. A job this big and complex should be undertaken and underwritten by the Bureau of Justice Assistance and curated by the Bureau of Justice Statistics. This would create a public dataset that could be used to train new AI in the worlds of risk assessment and facial recognition, for example.
Since collecting these large datasets can be cost prohibitive, public training data would lower the bar for researchers and entrepreneurs looking to tackle difficult problems in criminal justice. It would also allow companies or governments developing these tools to benchmark their creations against a common dataset, an important process in the evolution of these tools and the standard in other industries.
While bolstering the use of AI in the criminal justice system, the U.S. government also needs to get serious about algorithmic and AI oversight. These tools have a transparency problem that is in direct conflict with an open and transparent court process. For guidance, I'll be watching the E.U.'s General Data Protection Regulation (GDPR), which creates a new right to challenge any algorithm that makes a non-human-aided decision about a person. This type of intervention is critical as our technologies grow more complex and more opaque. (Even as I write this, I have reason to believe that this provision of the GDPR will fall short if applied to our court system, especially if cases like Loomis and Malenchik become the standard.)
The U.S. is a laggard in data and algorithmic regulation, and it's time we take these issues seriously. If the feds don't act, we run the risk of states and localities passing their own laws on these issues. This is already happening in the auto industry. In the same way that we don't want a car to be legal and then illegal as it crosses a county or state line, we don't want algorithmic tools doling out recommendations or justice through a regulatory patchwork. Given the ubiquity of technology and AI, federal preemption on this issue will be important to provide guidance and remove uncertainty from the market and legal system, which will help bolster research and experimentation in this space.
Without a doubt, criminal justice, outside of early facial recognition work, is not leading on AI issues. However, there are specific paths the government can take to create an environment that welcomes research in this space while protecting due process and creating transparency. The GAO will put out a report based on this meeting, and I'll post more about it when it's released.
In the meantime, where do you see AI being beneficial in the criminal justice system? What are the challenges?