Thoughts on Ryan Calo's "AI Policy: A Road Map" / by Jason Tashea

Recently, Ryan Calo, a University of Washington law professor who focuses on technology and robotics, posted a working paper laying out a potential policy road map for AI. In that post he asked for feedback. I had a chance to read the paper last week and wanted to share a few quick thoughts.

First, there's nothing in this paper I'd take out. The "what if robots kill us" section seemed a little unnecessary, but given the 116 people who warned of just that last week, I get why it's included. I appreciated the history section and would have liked more of it. I feel that context is generally left out of the AI discussion. Journalists are particularly guilty of creating a narrative that says "AI is brand new and it's magic," when neither is the case. Calo's brief history helps us recognize this.

Second, I have a couple of questions after reading the piece. Toward the end of the article, Calo begins to talk about the need for new commissions and federal committees to deal strictly with AI policy issues (pg. 23). But why? I'm not convinced after reading this paper that unique committees need to be created. My inclination is to fold the AI discussion into existing committees covering a variety of topics (criminal justice, transportation, finance, medicine, etc.). The subject matter expertise needed to understand these systems, and the rules and laws that govern these areas of society, will be more dispositive of the future of AI than the science or ethics of AI itself. To that end, I'd like to know more about Calo's thinking here.

Another question I had, and this is in the weeds, was about Calo's assertion that AI would be wrong for executive pardons (pg. 12). Again, why? I'm uncertain why he thinks this, and it isn't substantiated with a citation. That's not to say he's wrong; I just don't think this conclusion is a given.

Third, I think there's one major policy question that is danced around but not confronted in this piece. Calo is right to start his discussion with bias and "inequality of application"; this is the concern that data and AI create biased outcomes. I'd call this the "fairness when applied" policy issue, and I'd juxtapose it against a "fairness of application" policy issue, which is not included in this piece. The "fairness of application" issue, bolstered by Cathy O'Neil's writings, is that poor people will have their futures determined by machines, while rich people will have determinations made by humans. It's a form of class privilege that already exists and proliferates in the black-box world of AI and algorithmic decision making.

Automating decisions for people who are worse off economically, while giving those better off the ability to make their case to a human, is a type of discrimination that will reinforce class and race divides in the U.S. For that reason, this is a critical policy discussion that needs to be included when we discuss AI.

Fourth, I want to throw something out there for color and context on the "Use of Force" section. Axon, the body cam manufacturer previously known as Taser, has made clear that the trove of body cam footage it collects at evidence.com will be used for machine learning and AI applications. What is less well known is that the company thinks that within the next 5-7 years a body cam will be able to tell the officer on the scene, in real time, "if someone's demeanor has changed and may now be a threat."

The implications of this application of AI cannot be overstated, and Calo is right to include the "Use of Force" discussion so early in the piece. How does liability work when the AI/officer gets the call wrong and someone is injured, or worse? Do there need to be administrative leave procedures for neural nets that keep thinking a brown face is a greater threat than a white face? (Western facial recognition software has shown a lack of competence with minority faces, which Calo touches on.) These are big questions that require thoughtful solutions.

Last, I saw a couple of minor grammar things, because I'm no fun. On page 20, first full paragraph, the word "extend" should be "extent". Also, on page 24, first full paragraph, a noun is missing after "separate".

I enjoyed this article, and I'm glad that Calo is trying to create a framework to analyze these issues from a policy perspective. I'm excited to see where these ideas take us.