A couple of months ago, Andrew Ferguson, a law prof at DC Law School, reached out to ask for my thoughts on a recent article he wrote with Richard Leo, a law prof and Miranda expert, on the creation of a Miranda App. I've finally put my thoughts together for a post. This is that post.
For background, the article is summarized as follows:
> For fifty years, the core problem that gave rise to Miranda – namely, the coercive pressure of custodial interrogation – has remained largely unchanged. This article proposes bringing Miranda into the twenty-first century by developing a “Miranda App” to replace the existing, human Miranda warnings and waiver process with a digital, scripted computer program of videos, text, and comprehension assessments. The Miranda App would provide constitutionally adequate warnings, clarifying answers, contextual information, and age-appropriate instruction to suspects before interrogation. Designed by legal scholars, validated by social science experts, and tested by police, the Miranda App would address several decades of unsatisfactory Miranda process. The goal is not simply to invent a better process for informing suspects of their Miranda rights, but to use the design process itself to study what has failed in past practice. In the article, the authors summarize the problems with Miranda doctrine and practice and describe the Miranda App's design components. The article explains how the App will address many of the problems with Miranda practice in law enforcement. By removing the core problem with Miranda – police control over the administration of warnings and the elicitation of Miranda waiver and non-waivers – the authors argue that the criminal justice system can improve Miranda practice by bringing it into the digital age.
I had many thoughts after reading this article. I like the mix of metaphor and machine, and the metaphor in particular is strong. I admire the leap the authors want to make from the confederated, bureaucratic nature of Miranda practice today to a reality where this constitutional protection is treated like one. As they describe, however, the challenges around Miranda today create real hurdles for this tool.
These hurdles are not necessarily overcome by approaching the problem through a tech lens. First and foremost, I think the authors need to prove that the lack of tech is the problem. The article lays out the many problems with Miranda and then proposes the solution. If this project indeed wants to move forward, I'd recommend running a randomized controlled trial with three groups: one that is read the Miranda warning aloud, one that reads the warning themselves, and one that works through a more interactive experience via a website. Afterwards, test everyone on their understanding of the warning. If there's a statistically significant difference between the first two groups and the web-based group, then there might be reason to believe that an app could be valuable. Otherwise, I'm hard pressed, at first blush, to believe that an app is the solution. (As a side note, I think an interactive, responsive web app would suffice; there's no need to build a native app for this project.)
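To make the trial concrete, here's a minimal sketch of how the comparison might be analyzed, pooling the two non-interactive arms and running a two-proportion z-test against the web arm. The group sizes and pass counts below are made-up placeholders, not real data, and a real study would want a power analysis and pre-registered design first.

```python
import math

def two_proportion_z(pass_a, n_a, pass_b, n_b):
    """Pooled two-proportion z-test; returns (z, two-sided p-value)."""
    p_a, p_b = pass_a / n_a, pass_b / n_b
    p_pool = (pass_a + pass_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p from the normal tail
    return z, p

# Placeholder data: comprehension-test pass counts out of 100 per arm.
told_pass, read_pass, web_pass = 55, 58, 74
pooled_pass = told_pass + read_pass  # arms 1 and 2 combined (n = 200)
z, p = two_proportion_z(pooled_pass, 200, web_pass, 100)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 would favor the web arm
```

With these placeholder numbers the web arm's higher pass rate is statistically significant; the point is only to show the shape of the comparison, not to predict the result.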
Further, some time needs to be spent defining the tool's core competency. The article is very broad and articulates numerous potential features, but this is dangerous: "feature creep" is a common issue in any ideation process. While there is an endless number of permutations of the tool put forward, it's important to clearly articulate the tool's core job and build that first (with user feedback), while fending off unnecessary features.
On the accessibility front, there's a lack of analysis or definition of what this means. I recommend taking a look at the WCAG 2.0 standards. A project like this should be aiming for Level AA conformance (the level recommended by the DOJ).
With any project meant to affect the justice system, implementation is the biggest hurdle. Put another way, building this tool is a moot point if no one is going to use it. The fact that there are 500 different versions of Miranda around the country should illustrate how hard it is to standardize this procedure across America's 19,000 law enforcement agencies. Is there an agency that would be willing to test this idea? The status quo is a powerful foe, so I'd recommend finding an agency that is willing to be a part of the project from the ground floor. It'll be easier to test the initial build and find other adopters with the support of a law enforcement agency behind the project.
Last, I wonder if the tool as described runs the risk of over-informing the user and causing confusion. The article includes a lecture's worth of information, and that could overload the user and limit comprehension. Through user testing, the authors can figure out the right amount of information, but this will require time and effort early in the process.
Beyond these thoughts, I had a couple of random questions about the tool as the article conceives it:
- The article says that the data would be kept on a "secure server"; however, this glosses over a number of issues. Beyond "secure server" not being defined (does this mean password protected? encrypted? etc.), would the data be protected both in transit and at rest? Who has access to the server (police, state's attorney, defender, courts, researchers, developers)? And further, what would the data retention process and procedure be? A major issue with police bodycams has been the cost of storage; who will pay for all of this data (including cumbersome video footage) to be stored?
- Would the user (the arrestee) be informed that (1) they are being recorded and (2) the data they produce is being collected?
- Would the metadata collected be anonymized? Or would there be unique identifiers that would make it easy to admit this information in court? If it is admissible, what are the potential impacts of this new information in court?
- If the user reaches a point in the process within the tool where they need help or further information, what process would take place? Would the police be the intermediaries? If so, does this diminish some of the value of the tech intervention?
- The article mentions that the tool would be used at booking and not arrest. Correct me if I'm wrong, but isn't the current standard to read Miranda at the time of arrest? Do they envision that Miranda would be read at arrest and then the app would be given to the person at booking? If the only Miranda warning is given via the tool at booking, then does that create a rights gap between arrest and booking?
They've decided to tackle an interesting and difficult problem, which is exciting. However, there's a lot of work to be done to make this idea viable.