Partner Projects for Criminal Justice Technology Course by Jason Tashea

The semester at Georgetown Law starts on Monday, and so does our Criminal Justice Technology, Policy, and Law course. I've already posted the syllabus, but today I'm putting up the project descriptions for the course. To take a step back, this course has a two-hour-per-week lecture component and a 10-hour-per-week lab. In the lab, students partner with system stakeholders to work on a discrete technology or data project. I'm happy to say that we have three great partners for this term: the Maryland Governor's Office of Crime Control and Prevention, the Maryland Office of the Public Defender, and the Texas Criminal Justice Coalition. Below are excerpts from those project descriptions. For the complete descriptions, check out our teaching page.


Project: Create a sequential intercept model of Maryland’s criminal justice system

Sponsor: Maryland Governor’s Office of Crime Control and Prevention

Over the past two decades, Maryland has experienced a decrease in crime. However, its prison population has remained stubbornly high. This has led to an expensive corrections system ($1.3b in FY2014), while alternatives to incarceration and other treatment options remain underfunded.

Acknowledging these shortcomings, in 2015 the state legislature created the Justice Reinvestment Coordinating Council to “develop a statewide framework of sentencing and corrections policies to further reduce the state’s incarcerated population, reduce spending on corrections, and reinvest in strategies to increase public safety and reduce recidivism.” The findings of the Council became the basis of 2016’s Justice Reinvestment Act (JRA), the most comprehensive criminal justice reform in Maryland in a generation.

This holistic reform will change the way that people interact with the justice system and how resources are used. The Governor’s Office of Crime Control and Prevention (GOCCP) is in charge of evaluating the JRA’s impacts.

The goal of this project is to map the Maryland criminal justice system to understand how the JRA will affect people, personnel, and financial resource allocation by creating a sequential intercept model. Sequential intercept models identify institutional intercept points for a person progressing through the criminal justice system. They are designed to help policymakers identify opportunities for better resource allocation. Here, your work will also borrow from best practices in describing and mapping system processes. This project will inform GOCCP’s legislative work in the coming years.
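
Since the deliverable is a model of the system, here is a minimal sketch of how intercept points could be represented in code. The stage names and alternatives below are generic placeholders, not Maryland's actual stages or programs:

```python
# A minimal sketch of a sequential intercept model as a mapping from
# system stages to interception opportunities. All names are illustrative
# placeholders, not Maryland's actual process.

INTERCEPTS = {
    "law_enforcement_contact": ["crisis_response", "citation_in_lieu_of_arrest"],
    "initial_detention":       ["pretrial_release", "diversion_program"],
    "courts":                  ["drug_court", "mental_health_court"],
    "corrections":             ["earned_compliance_credits", "in_custody_treatment"],
    "reentry":                 ["supervision", "community_based_treatment"],
}

def interception_options(stage: str) -> list[str]:
    """Return the alternatives available at a given intercept point."""
    return INTERCEPTS.get(stage, [])

# Walk the system in sequence, listing where a person could be diverted.
for stage in INTERCEPTS:
    print(f"{stage}: {', '.join(interception_options(stage))}")
```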


Project: Create a tool to calculate eligibility for a public defender

Sponsor: Maryland Office of the Public Defender

The Maryland Office of the Public Defender (OPD) is modernizing how it calculates eligibility for its services. At the moment, the calculations are done by hand, costing valuable staff time; an intake worker can spend 10-15 percent of her time making these calculations.

Soon, the responsibility for making most of these determinations will fall to the courts, rather than OPD. However, OPD will retain responsibility for determining eligibility in appellate, post-conviction, and juvenile delinquency proceedings, and, occasionally, for witnesses.

In Maryland, eligibility for OPD services is not merely income-based. The office takes a hybrid approach that estimates the potential financial burden of a case (based on the offense charged) and compares it to a defendant's ability to meet that burden. In other words, the eligibility formula is based on a mix of underlying data and could change over time.

The goal is to create an indigency calculator that accurately determines whether an adult or juvenile qualifies for representation from OPD. Whatever tool is built must be usable by OPD staff in the place where they make a determination. OPD staff should be able to edit the tool’s internal logic when necessary. A successful tool will accurately make an eligibility determination, show the justification for that determination, and demonstrably save OPD staff time.
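
For illustration, here is one way the "editable logic" requirement might be met: keep the formula's figures in a data file staff can change without touching code. Every number and field name below is invented; the real formula would come from OPD:

```python
import json

# Hypothetical rule file OPD staff could edit without touching code.
# All figures are invented for illustration, not OPD's actual formula.
RULES_JSON = """
{
  "estimated_cost_by_offense": {"misdemeanor": 2000, "felony": 7500},
  "disposable_income_multiplier": 0.25
}
"""

def is_eligible(offense: str, annual_income: float, liquid_assets: float,
                rules: dict) -> tuple[bool, str]:
    """Compare the estimated financial burden of a case to the defendant's
    ability to meet it, returning the decision and its justification."""
    burden = rules["estimated_cost_by_offense"][offense]
    capacity = annual_income * rules["disposable_income_multiplier"] + liquid_assets
    eligible = capacity < burden
    reason = f"estimated case cost ${burden:,} vs. ability to pay ${capacity:,.0f}"
    return eligible, reason

rules = json.loads(RULES_JSON)
print(is_eligible("felony", annual_income=18000, liquid_assets=500, rules=rules))
# (True, 'estimated case cost $7,500 vs. ability to pay $5,000')
```

Keeping the thresholds in data rather than code is what lets staff update the tool's internal logic, and returning the justification alongside the decision satisfies the requirement to show why a determination was made.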


Project: Create a database to assess judicial appointments of attorneys in felony criminal cases

Sponsor: Texas Criminal Justice Coalition

In 2012, Harris County, Texas, which includes Houston, became the last jurisdiction of its size in the United States to establish a public defender service (HCPD). Before the creation of the HCPD, the county courts used the "Wheel" system to appoint attorneys for indigent clients. The "Wheel" is a rotating list of pre-approved members of the local bar from which a judge can appoint an attorney to defend an indigent client. Created by state statute, this system is still in practice, and judges in Harris County continue to appoint attorneys, including public defenders, based on it.

This system has come under scrutiny. Critics charge that judges are overriding the list and picking favorites to the detriment of defendants. In a 2016 survey of criminal defense attorneys in Harris County, a vocal minority said that judges were picking attorneys who would either plead a client out quickly or who had donated to the judge's election campaign.

The goal of this project is to structure and analyze a novel dataset to see if there is a correlation between defense appointments, campaign donations, and/or plea deals.
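
As a hypothetical first pass, once the dataset is structured, the analysis could start as simply as the sketch below. The CSV and its column names are stand-ins for whatever the students assemble:

```python
import pandas as pd

# Illustrative first pass only: the file and column names are hypothetical
# stand-ins for the dataset the students would structure.
df = pd.read_csv("harris_county_appointments.csv")
# Assumed columns: judge, attorney, donated_to_judge (0/1),
# appointments (count per attorney-judge pair), pled_within_30_days (rate)

# Do attorneys who donated to a judge's campaign receive more appointments?
print(df.groupby("donated_to_judge")["appointments"].mean())

# Simple correlation matrix across the three variables of interest.
# A real analysis would control for case type, attorney experience, etc.
print(df[["donated_to_judge", "appointments", "pled_within_30_days"]].corr())
```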

Thoughts on Ryan Calo's "AI Policy: A Road Map" by Jason Tashea


Recently, Ryan Calo, a UW law prof focused on technology and robotics, posted a working paper on a potential policy roadmap for AI. In that post he asked for feedback. I had a chance to read the paper last week and wanted to share a couple of quick thoughts.

First, there's nothing in this paper I'd take out. The "what if robots kill us" section seemed a little unnecessary, but based on the 116 people last week warning of just that, I get why it's included. I appreciated, and would have liked more of, the history section. I feel context is left out of the AI discussion generally. Journalists are particularly guilty of creating a narrative that says "AI is brand new and it's magic," when neither is the case. Calo's brief history helps us recognize this.

Second, I have a couple of questions after reading the piece. Towards the end of the article, Calo begins to talk about the need for new commissions and federal committees to deal strictly with AI policy issues (pg. 23). But why? I'm not convinced after reading this paper that unique committees need to be created. My inclination is to fold the AI discussion into existing committees covering a variety of topics (criminal justice, transportation, finance, medicine, etc.). The subject matter expertise needed to understand these systems, and the rules and laws that control these areas of society, will be more dispositive of the future of AI than the science or ethics of AI itself. To that end, I'd like to know more about Calo's thinking here.

Another question I had, and this is in the weeds, was about Calo's assertion that AI would be wrong for executive pardons (pg. 12). Again, why? I'm uncertain why he thinks this, and it isn't substantiated with a cite. That's not to say he's wrong; I just don't think this conclusion is a given.

Third, I think there's one major missing policy question that is danced around but not confronted in this piece. Calo is right to start his discussion around bias and "inequality of application"; this is the concern that data and AI create biased outcomes. I'd call this the "fairness when applied" policy issue, and I'd juxtapose it against a "fairness of application" policy issue, which is not included in this piece. The "fairness of application" issue, bolstered by Cathy O'Neil's writings, is that poor people will have their futures determined by machine, while rich people will have determinations made by humans. It's a form of class privilege that already exists and proliferates in the black-box world of AI and algorithmic decision making.

Automating decisions for people worse-off economically, while giving those better-off the ability to make their case to a human, is a type of discrimination that will reinforce class and race divides in the U.S. To that end, this is a critical policy discussion that needs to be included when we discuss AI.

Fourth, I want to throw something out there for color and context in the "Use of Force" section. Axon, the body cam manufacturer previously known as Taser, has made clear that the trove of body cam footage it collects at evidence.com will be used for machine learning and AI applications. Less known is that the company thinks that, in the next five to seven years, a body cam will be able to tell the officer on the scene, in real time, "if someone's demeanor has changed and may now be a threat."

The implications of this application of AI cannot be overstated, and Calo is right to include the "Use of Force" discussion so early in the piece. How does liability work when the AI/officer gets the call wrong and someone is injured, or worse? Do there need to be admin leave procedures for neural nets that keep thinking a brown face is a greater threat than a white face? (Western facial recognition software has shown a lack of competence with minority faces, which Calo touches on.) These are big questions that require thoughtful solutions.

Last, I saw a couple of minor grammar things, because I'm no fun. On page 20, first full paragraph, the word "extend" should be "extent". Also, on page 24, first full paragraph, there is the need for a noun after "separate". 

I enjoyed this article, and I'm glad that Calo is trying to create a framework to analyze these issues from a policy perspective. I'm excited to see where these ideas take us.

A New Kind of Expungement App by Jason Tashea

An expungement app was my gateway drug. In 2014, I developed a public-facing triage web app for my old employer (it's still up!). At the time, this project was exciting and novel. However, research I did earlier this year found that these types of apps don't really accomplish anything. Yes, from a technical perspective they work, but they create limited impact. It's not just my project; other apps built on this model also produced few expungements.

There is, alternatively, a more impactful model that leverages local court record databases. However, these databases don't exist in every state, so the model is not universally replicable. With these limitations in mind, I think there's potential for an expungement app that has greater applicability across states, regardless of court record databases, and improves attorney workflow, a key component of a worthwhile expungement app.

The database model has worked in Maryland and Pennsylvania. Oversimplified, this model scrapes a publicly accessible court record database so that the tool and the lawyers can know, in real time, who can expunge their record and who can't. These tools also auto-populate the appropriate forms to file in court.
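
As a rough sketch of that first step (the URL and markup below are placeholders; every state's records portal differs, and some prohibit scraping outright):

```python
import requests
from bs4 import BeautifulSoup

# Sketch of the database model's scraping step. The URL, query parameter,
# and HTML structure are placeholders; each state's court portal differs.
BASE_URL = "https://example-court-records.example.gov/search"

def fetch_case_dispositions(case_number: str) -> list[str]:
    """Pull the charge dispositions for one case from a public records portal."""
    resp = requests.get(BASE_URL, params={"case": case_number}, timeout=30)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    # Hypothetical markup: one <td class="disposition"> per charge.
    return [td.get_text(strip=True) for td in soup.select("td.disposition")]

# Eligibility logic and form auto-population would sit on top of this step.
```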

Both of these tools do two critical things that have led to their success and impact:

1. They make determining and filing an expungement easier.

2. They integrate into an existing expungement workflow.

Collectively, these tools have created tens of thousands of expungements, dwarfing the impact of the public-facing apps I've built. As mentioned above, however, this model isn't being replicated widely because only a minority of states have the type of database needed to build out a project like this. In other states, the database is not publicly accessible or is expensive to access.

In a place like Florida, the law makes it impossible to request criminal record data en masse. You have to do it one case at a time, at a rate of $24 each; at that rate, even a modest 10,000-record dataset would cost $240,000. This creates too much of a hurdle to replicate the Maryland and Pennsylvania database approach.

So, the question then becomes, do states without a public criminal records database just give up hope of riding the expungement tech wave? I don't think they have to. To that end, I want to propose a model I haven't seen yet, but I think has potential.

The idea is to use OCR (optical character recognition) to read RAP sheets and help determine eligibility. This project could be browser- or phone-based. Using either a phone's camera or a PDF uploaded to a browser, the machine would read the pertinent data from the RAP sheet, process it through an algorithm that encodes the expungement statute, and, if the record is expungeable, populate the appropriate forms in an editable format.
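
A minimal sketch of that pipeline might look like the following; the eligibility check is a toy stand-in, not any state's actual statute:

```python
from pdf2image import convert_from_path
import pytesseract

# Minimal sketch of the OCR-based model: PDF -> text -> eligibility check.
# The eligibility rule below is a toy placeholder; a real tool would encode
# the relevant state's expungement statute and parse structured fields.

def extract_rap_sheet_text(pdf_path: str) -> str:
    """OCR each page of an uploaded RAP sheet PDF into plain text."""
    pages = convert_from_path(pdf_path)
    return "\n".join(pytesseract.image_to_string(page) for page in pages)

def looks_expungeable(rap_text: str) -> bool:
    """Toy eligibility check, purely illustrative."""
    disqualifiers = ["conviction", "pending case"]
    return not any(term in rap_text.lower() for term in disqualifiers)

text = extract_rap_sheet_text("rap_sheet.pdf")
print("Possibly expungeable" if looks_expungeable(text) else "Needs attorney review")
# A real build would extract structured fields (charges, dispositions, dates),
# run them through statute-specific logic, then fill editable form templates.
```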

This project would meet the dual criteria above: it makes an expungement easier to determine, and it integrates easily into an existing workflow. The tool takes the work out of reading someone's RAP sheet and confirming whether their record is expungeable. It also obliterates the time it takes to fill out the paperwork. By saving overburdened and underfunded legal aid attorneys significant time, this is an app they will want in their workflow.

At the beginning of every project, we have to ask ourselves if what we are doing actually makes the end user's life easier. In the case of public-facing triage sites, the user drop-off rate indicates that the model isn't value-added but cumbersome. Further, it doesn't add anything to a lawyer's ability to do her job. With these limitations in mind, I think an OCR project is the next evolution of these tools.

What do you think?

AI and Criminal Justice by Jason Tashea

Last week, I spent two days at the Government Accountability Office (GAO) Comptroller General Forum on Artificial Intelligence. The event brought together AI experts from the fields of autonomous vehicles, cybersecurity, finance, and criminal justice to discuss the current state of AI, propose what the near future will look like, and present potential policy recommendations and research questions for Congress to take up.

To be a part of this well-run, interdisciplinary conversation was a treat. Often stuck in the world of law and criminal justice, I found it a welcome change to hear from experts tackling these issues from non-legal perspectives.

Over the two days, I gained perspective on where AI stands broadly and how far criminal justice specifically has to go. Without question, the use of AI in the criminal justice system is nascent. The fact that criminal justice was included (to the exclusion of health care, education, or any other policy area) remains surprising to me. One GAO staffer told me that what partially sparked criminal justice's inclusion was my op-ed in Wired, which discussed the potential application of neural nets in risk assessment tools. While flattering, it illustrated my surprise: the jumping-off point for criminal justice's inclusion in the forum was a prospective use of AI, not a current application.

To that end, it did feel like Richard Berk of Penn, the other criminal justice expert in the room, and I were talking about what-ifs, as opposed to the other experts, who were talking about current challenges. Nevertheless, by listening to experts from industry, government, and academia in other fields, I came away with a new perspective on AI in criminal justice. Specifically, without a strong market incentive, if quality AI is going to be created in the criminal justice space, the federal government needs to help. It can do this by building training datasets and creating a transparency mechanism.

While everyone else in the room represented fields with a strong market component, criminal justice was alone in being a government industry. Yes, there are private defense attorneys and third-party vendors that sell products and services to the criminal justice system; however, this does not offset the fact that the criminal justice sector is government owned and operated. What this means for the AI discussion is that there is no market incentive to develop AI at the rate we see in the medical or transportation fields, where the financial payoff will be tremendous.

If the criminal justice system wants to be serious about AI, it is going to need better training data. As I've written about previously, there are organizations trying to collect county-level data and clean it for cross-jurisdictional analysis. That a scrappy non-profit has undertaken this is impressive, but the project's gargantuan size makes that arrangement less than ideal. A job this big and complex should be undertaken and underwritten by the Bureau of Justice Assistance and curated by the Bureau of Justice Statistics. This would create a public dataset that could be used to train new AI in the worlds of risk assessment and facial recognition, for example.

Since collecting these large datasets can often be cost prohibitive, public training data would lower the barrier for researchers and entrepreneurs tackling difficult problems in criminal justice. It would also allow companies or governments developing these tools to benchmark their creations, an important process in the evolution of these tools and the standard in other industries.

While bolstering the use of AI in the criminal justice system, the U.S. government needs to get serious about algorithmic and AI oversight. There is a transparency issue regarding these tools that is in direct conflict with an open and transparent court process. For guidance, I'll be watching the E.U.'s General Data Protection Regulation (GDPR), which creates a new right to challenge any algorithm that makes a non-human aided decision about a person. This type of intervention is critical as our technologies grow more complex and more opaque. (Even as I write this, I have reason to believe that this provision of the GDPR will fall short if applied to our court system, especially if cases like Loomis and Malenchik become the standard.) 

The U.S. is a laggard in data and algorithmic regulation, and it's time we take these issues seriously. If the feds don't act, we run the risk of states and localities passing their own laws on these issues. This is already happening in the auto industry. In the same way that we don't want a car to be legal and then illegal as it crosses a county or state line, we don't want algorithmic tools doling out recommendations or justice through a regulatory patchwork. Due to the ubiquity of technology and AI, federal preemption on this issue will be important to provide guidance and take away uncertainty in the market and legal system, which will help bolster research and experimentation in this space.

Without a doubt, criminal justice, outside of early facial recognition work, is not leading on AI issues. However, there are specific paths the government can take to create an environment that welcomes research into this space, while protecting due process and creating transparency. The GAO will put out a report based on this meeting, and I'll post more about that when it's released.

In the meantime, where do you see AI being beneficial in the criminal justice system? What are the challenges?


Partner with our Georgetown Law Course! by Jason Tashea

We are seeking partners in the criminal justice system (corrections, courts, defenders, prosecutors, police, social services) to work with our new course at Georgetown Law.

Keith Porcaro, CTO at SIMLab, and I are teaching a course this fall at Georgetown on criminal justice technology, policy, and law. This course is a lab where small groups of students (3 or 4) will work with system partners on designing potential solutions to an existing problem identified by the partner. In total, we are looking for three criminal justice stakeholders to work with. Using teachings from the class, students will work with the partner organization to map a discrete policy or practice challenge and design prototype solutions that partners can take forward. 

A stakeholder partner will be available to help students learn about their system through a series of 30-minute interviews and provide feedback as needed. At the end of the course, the stakeholder will receive a prototyped solution to the problem they face. The course runs from late August to early December of fall 2017. Interested organizations should fill out this intake form, and we will follow up with you. Please don’t hesitate to reach out if you have any questions, jason@justicecodes.org.

Measures for Justice Data Portal by Jason Tashea

This week, Measures for Justice (MFJ) released its new data portal to the public. According to MFJ, the portal "allows users to review and compare performance data within and across states, and to break them down by race/ethnicity; sex; indigent status; age; offense type; offense severity; and attorney type." They go on to say that "[t]he Data Portal comprises data that has been passed through 32 performance measures developed by some of the country's most renowned criminologists and scholars. The measures address three primary objectives of criminal justice systems: Public Safety; Fair Process; Fiscal Responsibility."

Anyone who has done multi-jurisdictional criminal justice research will tell you a project like this is desperately needed. Further, this isn't a project from a dilettante who recently became acquainted with the mess that is American criminal justice. Amy Bach, the executive director of MFJ, is an expert on the criminal justice system and has been fighting this battle for criminal justice reform for many years. For these reasons, this project should be taken seriously.

Launching with six jurisdictions, the staff at MFJ are doing difficult work that the Bureau of Justice Statistics (BJS) should have been doing years ago. What's most impressive is that they're doing this work with just 22 staff in Upstate New York. Further, it's not just a chance to see comparisons through their clean user interface; they made it easy to export any of the raw datasets their staff meticulously pored over.

This being said, a project of this nature and with this massive scope raises some questions:

  • Sustainability: This project is massive and to keep it going will take a lot of money and effort. As of late, MFJ has been infused with a lot of money, especially from the tech sector; but, what comes of these efforts when they are no longer novel or become banal? Even worse, where does the data go if MFJ has to close? I'd assume someone would step in, but does that contingency exist? For the sake of the effort being made, I hope an off-boarding procedure has been considered. 

The Sunlight Foundation faced these same challenges when it undertook its criminal justice data project a number of years ago. Ultimately, it had to off-board the project and never reached its goal of collecting all the criminal justice data from all 50 states plus D.C. and the Feds.

  • Upkeep: The work done by MFJ to get this far has taken six years and, from what I understand, was painstaking. While they plan to add 14 more states in the next three years, what happens to the datasets they've already built? Will we see expansions of these datasets as more data becomes available over time? Doing so requires continuous effort and upkeep that isn't necessarily automatable (in some cases, going to the jurisdiction was a needed step). I'm curious to know the plan for this type of perpetual upkeep.
  • Tracking Impact: This is never easy. Ever. While there will be anecdotal successes of district attorneys or judges who see the data and make changes, how can we as the criminal justice community track the impact of this work? I hope to see the innovation and energy put into this project extend to creative ways to track its impact. Page-view and download stats will be insufficient, as will success stories from various jurisdictions. I'd be happy to brainstorm on this front. It's a challenge worth tackling.

Beyond these initial concerns, I think MFJ is well positioned to be a normative force in criminal justice stats around the country. While their work is focused on collecting and aggregating this data, it would be a missed opportunity if they don't leverage their relationships at the county level to help improve data collection capacity and provide a floor for standardization. My dream is for a government agency or company to create something similar to what Google's General Transit Feed Specification (GTFS) did for transit data. Without a doubt, the criminal justice system is more challenging than transit; however, MFJ seems well suited, through its relationships, to carry this message and support. Perhaps it could build off the National Information Exchange Model (NIEM) and the Global Justice XML data exchange standards. I'm not sure. However, it would be great to see MFJ, or a coalition led by MFJ, flex this muscle.
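
To make the GTFS analogy concrete, a hypothetical "criminal justice feed spec" might define a set of flat files with required, typed fields that every jurisdiction publishes the same way. Everything below is invented to illustrate the idea:

```python
# Hypothetical sketch of a "GTFS for criminal justice": a spec defining flat
# files with required fields. File and field names are invented illustrations.

CASE_FEED_SPEC = {
    "cases.csv":      ["case_id", "county_fips", "filing_date", "offense_code",
                       "offense_severity", "disposition", "disposition_date"],
    "defendants.csv": ["case_id", "age_at_filing", "race_ethnicity", "sex",
                       "indigent_status", "attorney_type"],
    "sentences.csv":  ["case_id", "sentence_type", "sentence_months", "fine_amount"],
}

def missing_fields(filename: str, header: list[str]) -> list[str]:
    """Return required fields absent from a submitted file's header."""
    return [f for f in CASE_FEED_SPEC.get(filename, []) if f not in header]

# A county uploading a partial cases.csv would get immediate feedback:
print(missing_fields("cases.csv", ["case_id", "county_fips", "filing_date"]))
```

The value of GTFS was never the format itself but that everyone agreed to it; the same would be true here.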

TLDR

Measures for Justice made a great data portal with county-level criminal justice statistics. It's a needed tool. There will likely be challenges around sustainability, upkeep, and measuring impact. With their impressive national network of local stakeholders, perhaps MFJ can help standardize data capture in the criminal justice system. Go MFJ!

Can you code for Miranda? by Jason Tashea

A couple of months ago, Andrew Ferguson, a D.C. law prof, reached out to ask for my thoughts on a recent article he wrote with Richard Leo, a law prof and Miranda expert, about the creation of a Miranda App. I've finally put my thoughts together for a post. This is that post.

For background, the article is summarized as follows:

For fifty years, the core problem that gave rise to Miranda – namely, the coercive pressure of custodial interrogation – has remained largely unchanged. This article proposes bringing Miranda into the twenty-first century by developing a “Miranda App” to replace the existing, human Miranda warnings and waiver process with a digital, scripted computer program of videos, text, and comprehension assessments. The Miranda App would provide constitutionally adequate warnings, clarifying answers, contextual information, and age-appropriate instruction to suspects before interrogation. Designed by legal scholars, validated by social science experts, and tested by police, the Miranda App would address several decades of unsatisfactory Miranda process. The goal is not simply to invent a better process for informing suspects of their Miranda rights, but to use the design process itself to study what has failed in past practice. In the article, the authors summarize the problems with Miranda doctrine and practice and describe the Miranda App's design components. The article explains how the App will address many of the problems with Miranda practice in law enforcement. By removing the core problem with Miranda – police control over the administration of warnings and the elicitation of Miranda waiver and non-waivers – the authors argue that the criminal justice system can improve Miranda practice by bringing it into the digital age.

I had many thoughts after reading this article. I like the mix of metaphor and machine, and, specifically, I think the metaphor is strong. I admire the leap the authors want to make from the confederated, bureaucratic reality of Miranda to one where this constitutional protection is treated as such. As they describe, the challenges around Miranda today create hurdles for this tool.

These hurdles are not necessarily overcome by approaching the problem through a tech lens. First and foremost, I think the authors need to prove that the lack of tech is the problem. The article lays out the many problems with Miranda and then proposes the solution. If this project indeed wants to move forward, I'd recommend running a randomized controlled trial in which one group is told their Miranda rights, one group reads them, and one group has a more interactive experience via a website. Afterwards, test everyone on their understanding of the warning. If there's a statistically significant difference between the first two groups and the web-based group, then there might be reason to believe an app could be valuable. Otherwise, I'm hard-pressed, at first blush, to believe an app is the solution. (As a side note, I think an interactive, responsive web app would suffice; there's no need to build a proper app for this project.)
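
For what it's worth, the analysis that RCT would support is straightforward. Here's a sketch with invented placeholder comprehension scores (0-100) for the three groups:

```python
from scipy import stats

# Sketch of the RCT comparison. Scores are invented placeholder data:
# comprehension-test results for the told, read, and interactive-web groups.
told        = [55, 60, 48, 52, 66, 58, 61, 50]
read        = [57, 63, 54, 59, 62, 55, 60, 53]
interactive = [71, 68, 75, 66, 73, 70, 77, 69]

# One-way ANOVA across the three conditions...
f_stat, p_all = stats.f_oneway(told, read, interactive)
# ...then the specific contrast from the post: do the two non-interactive
# groups differ from the web-based group?
t_stat, p_web = stats.ttest_ind(told + read, interactive)

print(f"ANOVA p={p_all:.4f}; told/read vs. web p={p_web:.4f}")
# A significant difference favoring the web group would be evidence that
# an interactive tool adds value over the status quo.
```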

Further, some time needs to be spent defining the tool's core competency. The article is very broad and articulates numerous potential features, which is dangerous. Called "feature creep," this is a common issue with any ideation process. With an endless number of permutations of the tool put forward, it's important to clearly articulate the tool's core job and build that first (with user feedback), while fending off unnecessary features.

On the accessibility front, there's a lack of analysis or definition of what this means. I recommend taking a look at the WCAG 2.0 standards. A project like this should be aiming for the AA level (the level recommended by the DOJ).

With any project meant to affect the justice system, implementation is the biggest hurdle. Put another way, building this tool is moot if no one is going to use it. The fact that there are 500 different versions of Miranda around the country should illustrate how hard it is to standardize this procedure across America's 19,000 law enforcement agencies. Is there an agency that would be willing to test this idea? The status quo is a powerful foe, so I'd recommend finding an agency willing to be a part of the project from the ground floor. It'll be easier to test the initial build and find other adopters with the support of a law enforcement agency.

Last, I wonder if the tool as described runs the risk of over-informing the user and causing confusion. The article includes a lecture's worth of information, which could overload and limit the user's comprehension. Through user testing, the authors can figure out the right amount of information, but this will require time and effort early in the process.

Beyond these thoughts, I had a couple of random questions about the tool as the article conceives it:

  • The article says that the data would be kept on a "secure server"; however, this glosses over a number of issues. Beyond "secure server" not being defined (does this mean password protected? encrypted?), would the data be protected in transit and at rest? Who has access to the server (police, state's attorney, defender, courts, researchers, developers)? And what would the data retention process and procedure be? A major issue with police bodycams has been the cost of storage; who will pay for all this data (including cumbersome video footage) to be stored?
  • Would the user (the arrestee) be informed that 1. they are being recorded and 2. the data they produce is being collected? 
  • Would the metadata collected be anonymized? Or would there be unique identifiers that would make it easy to admit this information in court? If it is admissible, what are the potential impacts of this new information in court?
  • If the user reaches a point in the process within the tool where they need help or further information, what process would take place? Would the police be the intermediaries? If so, does this diminish some of the value of the tech intervention?
  • The article mentions that the tool would be used at booking and not arrest. Correct me if I'm wrong, but isn't the current standard to read Miranda at the time of arrest? Do they envision that Miranda would be read at arrest and then the app would be given to the person at booking? If the only Miranda warning is given via the tool at booking, then does that create a rights gap between arrest and booking?

They've decided to tackle an interesting and difficult problem, which is exciting. However, there's a lot of work to be done to make this idea viable.