
Three stories on artificial intelligence that highlight the gap between policy and the marketplace

“So, how do you sustain a business model in which users don’t pay for your service?” — Sen. Orrin Hatch

“Senator, we run ads.”  — Facebook CEO Mark Zuckerberg

Mark Zuckerberg’s recent appearance before Congress put into stark relief something we all knew, but few understood with such clarity until now: our policy makers are extraordinarily out of touch with the challenges of regulating technology.

Facebook is a 14-year-old company. It has 2.2 billion users. Americans spend 40 minutes a day on the site.

In other words, few things permeate life in America today as deeply as Facebook.

And yet it is a technology of which, generally speaking, policy makers at all levels have an insufficient understanding.

Which brings me to three recent stories where the gap between thoughtful policy making and the marketplace is so wide as to be scary.

Google Duplex

Last week, Google announced a new free service called Duplex: an AI assistant that makes phone calls to schedule appointments for you. It is the product of a multi-year progression built on Google’s significant work in natural-sounding speech.

If you haven’t seen it, this video will floor you.

If we can’t get our arms around a substantive, meaningful policy response to Facebook, how are we going to get them around this?


Boston Dynamics’ robot, Atlas

Perhaps you’ve already met Atlas.  You may have seen him do a backflip.  Or, maybe you’ve met one of his ‘siblings,’ like the dog that can open doors.

In any case, this week we learned that the Boston Dynamics robot can now run autonomously and jump over obstacles like a log.

The movement of this humanoid robot is strikingly similar to human bipedal locomotion. In other words, Atlas runs like a human, not a robot (mostly).

With human voices mastered and human movement close behind, we are getting pretty close to Westworld and Terminator, right? Yet there is absolutely no discussion about regulating humanoid robots, about potential liability, or about the basic rights we might need to afford such robots.

On this point, don’t think about freedom of speech.  Just think about the right of way.  Does a robot that is driving a car have the right to pull into an intersection before you if it has arrived at the intersection ahead of you?  The whole idea of rights can be very quickly transformed as robots begin to fit into our social structures (like making calls to schedule hair appointments).
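
To make that concrete, here is a minimal sketch in Python of what “right of way” looks like once a social rule is reduced to code a robot can execute. The rule here (first to arrive goes first) and every name in it are my illustrative assumptions, not any real traffic-control specification.

    from dataclasses import dataclass

    @dataclass
    class Vehicle:
        vehicle_id: str
        arrival_time: float  # seconds on some shared clock (assumed)
        is_robot: bool

    def who_goes_first(a: Vehicle, b: Vehicle) -> Vehicle:
        """First to arrive proceeds first; ties broken by id so the rule is deterministic."""
        if a.arrival_time != b.arrival_time:
            return a if a.arrival_time < b.arrival_time else b
        return a if a.vehicle_id < b.vehicle_id else b

    # The robot arrived 0.3 seconds earlier, so under this rule it "has the right" to go first.
    robot = Vehicle("AV-042", arrival_time=12.0, is_robot=True)
    human = Vehicle("HUMAN-7", arrival_time=12.3, is_robot=False)
    print(who_goes_first(robot, human).vehicle_id)  # prints: AV-042

Notice that nothing in that function cares whether a vehicle is a robot. That is itself a policy decision, and no policy maker has made it.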


Autonomous vehicles and pedestrian deaths

By now you’ve probably heard about the incident last month in Arizona where one of Uber’s autonomous vehicles struck and killed a pedestrian. The video footage is disturbing. The autonomous vehicle had a safety driver, who was distracted. It was driving 40 mph in a 45 mph zone at about 10:00 pm. The victim was walking her bike across the street.

So, how did this happen?

An autonomous vehicle relies on sensing the roadway ahead. It identifies stationary and moving objects and responds to them based on their anticipated behavior.

This graphic, taken from the NY Times (which took it from Google), attempts to highlight some of the complexity.

But here is the interesting thing. The car identifies static objects and stays away from them. Think of a street light: the car knows it doesn’t want to hit it, expects it not to move, and steers clear. The car also identifies moving objects, of which there are many on a roadway. But some moving things it can run over. In fact, it has to decide to run some moving objects over, or the car would never move!

You can think of many things that might be acceptable to run over:  a tumbleweed, a plastic bag, maybe some insects or small rodents.  You aren’t slamming on your brakes to save a moth.  Why should an autonomous vehicle’s AI?

So, the car’s AI has to determine what it can run over and what it can’t in order to successfully navigate a roadway.
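
To see where the exposure comes from, here is a minimal, hypothetical sketch in Python of that drive-over-or-avoid decision. The object labels, the whitelist, and the confidence threshold are my assumptions for illustration; no real vendor’s logic is shown here.

    from dataclasses import dataclass
    from enum import Enum, auto

    class Action(Enum):
        DRIVE_OVER = auto()  # safe to ignore: a bag, a tumbleweed
        AVOID = auto()       # brake or steer around: a pole, a pedestrian

    @dataclass
    class DetectedObject:
        label: str           # the classifier's best guess, e.g. "pedestrian"
        confidence: float    # how sure the classifier is, 0.0 to 1.0
        is_moving: bool

    # Assumed whitelist of things the planner may drive over.
    DRIVABLE = {"plastic_bag", "tumbleweed", "leaf", "insect"}

    def decide(obj: DetectedObject) -> Action:
        """Err on the side of caution: only a confidently recognized,
        whitelisted object may be driven over."""
        if obj.label in DRIVABLE and obj.confidence >= 0.95:
            return Action.DRIVE_OVER
        return Action.AVOID

    print(decide(DetectedObject("plastic_bag", 0.98, True)))  # Action.DRIVE_OVER
    print(decide(DetectedObject("pedestrian", 0.60, True)))   # Action.AVOID

Everything of consequence in a sketch like this lives in two places: the whitelist and the threshold. Misclassify a pedestrian as a drivable object, even for a moment, and a tuning decision becomes a fatality.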

And therein lies a significant policy challenge: what happens when a decision made by the AI inadvertently harms a human, or any large animal?

If we can’t have a substantive policy discussion about Facebook, where we’re talking about ads at a 14-year-old company, how can we expect our policy discussions to address the life-and-death implications of AI decision making?


Bottom line–

We are woefully ill-prepared for public policy discussions about artificial intelligence at all levels of government. State and local policy makers are perhaps the most important in the near term for building AI regulatory regimes, because robots and AI will first be employed in our day-to-day, local experience. And remember how the machine wars in Terminator started: someone switched on Skynet and handed its artificial intelligence control of real-world systems.

Just sayin’…

