Partial automation of self-driving vehicles

Take a look at the scene below, and imagine it from the perspective of a self-driving car.

Street scene 1
Linked to original image on Flickr.

There’s a lot going on.

  • There are the outlines of the road (i.e. potential paths, some of them legal routes for the vehicle to take);
  • there are traffic lights and other signs, all of which need to be identified (and then also checked for context – i.e. “does this sign apply to me?”);
  • there are other cars on the road and pedestrians (each of which may suddenly change direction);
  • as well as innumerable other non-traffic-related distractions (trees, shops);
  • and (perhaps most importantly) the desired destination of the passengers, which may also change rapidly (“hey, there’s a sale on in that store”, “you just missed a parking spot!” etc).

Each of these items needs to be identified many times per second, and decisions made to determine the path and velocity of the vehicle. It’s a complicated enough task for a person, which is why we have so many vehicular accidents. The task is far harder for a computer.

One intermediate stage on the path towards fully automated vehicles is partial automation. We see this to some degree already in some vehicles, with features like automatic highway braking.

I’ve been thinking a bit about what an interface for a more advanced intermediate stage could look like. What kinds of display and controls could we present to a “driver” to allow them to guide an almost-independent vehicle?

The following are some rough ideas so far. I haven’t done much looking around to see what research has already been done in this area, so I wouldn’t be surprised if none of this is original.

Marked-up street scene
Linked to original on Flickr

This is a fairly standard street scene. I’ve marked it up (you can click it to see the original), with black indicating possible (legal or otherwise) paths the vehicle can take, red indicating potential obstacles, and yellow indicating road signage. I haven’t tagged everything, of course; the goal here is just to give the gist.
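
As a toy illustration, this kind of markup could be represented in code as a list of tagged regions. The names, types, and coordinates below are all invented for the sketch; no real perception system is implied.

```python
from dataclasses import dataclass
from enum import Enum

class Kind(Enum):
    PATH = "black"      # possible path (legal or otherwise)
    OBSTACLE = "red"    # potential obstacle (cars, pedestrians, ...)
    SIGNAGE = "yellow"  # traffic lights and road signs

@dataclass
class Annotation:
    kind: Kind
    label: str                          # e.g. "pedestrian", "traffic light"
    outline: list[tuple[float, float]]  # region in image coordinates
    applies_to_us: bool = True          # for signage: is it for our lane?

# A few annotations mirroring the marked-up photo above.
scene = [
    Annotation(Kind.PATH, "left lane", [(0, 400), (200, 250)]),
    Annotation(Kind.OBSTACLE, "pedestrian", [(310, 220), (330, 300)]),
    Annotation(Kind.SIGNAGE, "traffic light", [(450, 80), (470, 130)]),
]

for a in scene:
    print(f"{a.kind.value:6} {a.label}")
```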

A simplified display that would allow the user to give direction to the vehicle without needing a steering wheel or pedals might look something like this (a rough code sketch of these requirements follows the lists):

a) Display

  • Needs to clearly indicate the intended future path of the vehicle, along with possible options
  • Needs to indicate road signals and obstacles that the vehicle has detected

b) Controls

  • Needs to allow the user to easily select an option other than the currently anticipated path (e.g. tell the vehicle to change lanes when safe); this is separate from setting the overall destination via GPS.
  • Needs to allow the user to quickly indicate an error in the vehicle’s understanding of the scene (e.g. that traffic light is actually for the other lane)
  • Needs to allow a very rapid response to emergencies that the vehicle has not yet detected (e.g. a pedestrian suddenly walks in front of the vehicle)
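
Phrased as code, these requirements might look something like the hypothetical interface below. Every name here is made up for illustration, not taken from any real vehicle API.

```python
from abc import ABC, abstractmethod

class PartialAutomationUI(ABC):
    """The display/control requirements above, as a hypothetical API."""

    # a) Display
    @abstractmethod
    def show_paths(self, intended, alternatives):
        """Highlight the intended future path; show alternatives dimmer."""

    @abstractmethod
    def show_detections(self, signals, obstacles):
        """Overlay detected road signals and obstacles on the scene."""

    # b) Controls
    @abstractmethod
    def on_path_selected(self, alternative):
        """User picked a path other than the anticipated one
        (e.g. change lanes when safe); separate from the GPS destination."""

    @abstractmethod
    def on_detection_corrected(self, detection, correction):
        """User flagged a misread (e.g. that light is for the other lane)."""

    @abstractmethod
    def on_emergency(self):
        """User hit the big red button: brake immediately."""
```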

In many cases, existing systems will actually detect and react to certain kinds of problems (e.g. the pedestrian above) faster and more accurately than a human could. However, there’s no harm (and likely great value) in providing a red button to make the vehicle apply the brakes in a hurry.

One possibility could be a combination of the following (a sketch of how these pieces might interact follows the list):

  • Large screen (or better yet, a head-up display on the windshield) that displays something like the street scene above, with overlays; the currently active route would be highlighted, and other possibilities displayed more dimly.
  • Joystick to allow overriding the route, which would change the highlighted route on the screen. There should probably also be a confirm button: push the joystick right to select a lane change, then press the button to confirm, and the HUD would update the highlighted path.
  • Big red button to trigger emergency braking (how do we let the car know when it is safe to proceed?).
  • Make the display screen touch-sensitive, so the user can press on a specific area to indicate something about it (e.g. the vehicle would run region detection on the touched area, circle the object, show its understanding of what the object is, and offer a quick way for the user to change that understanding). Or, how about a gesture-based system that detects the user’s hand movements to highlight items on the HUD, then uses voice activation to change their meaning?
  • Full override mode to allow the passenger to take complete control, possibly also via the joystick (left/right to steer, forward to accelerate, back to brake).
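
To make the interaction between these pieces concrete, here is a rough polling-loop sketch. The `vehicle` and `ui` objects and all their methods are hypothetical; this illustrates the flow, not a real control system.

```python
import time

def control_loop(vehicle, ui):
    pending_route = None
    while True:
        event = ui.poll()  # hypothetical: returns None if no input this tick
        if event is None:
            pass
        elif event.kind == "joystick":
            # Moving the stick highlights a candidate route on the HUD...
            pending_route = vehicle.nearest_route(event.direction)
            ui.highlight(pending_route)
        elif event.kind == "confirm" and pending_route is not None:
            # ...and the confirm button commits it (executed when safe).
            vehicle.request_route(pending_route)
            pending_route = None
        elif event.kind == "red_button":
            vehicle.emergency_stop()  # open question: when to resume?
        elif event.kind == "touch":
            # Region detection around the touched point; circle the object
            # on the HUD with the vehicle's current classification.
            obj = vehicle.detect_region(event.x, event.y)
            ui.circle(obj, label=obj.classification)
        elif event.kind == "override":
            vehicle.manual_mode(event.direction)  # full joystick control
        time.sleep(0.01)  # poll at roughly 100 Hz
```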

The above will be greatly simplified by inter-vehicle communication systems that are in development right now (e.g. if a vehicle brakes, other vehicles in the immediate vicinity will be instantly aware).
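
Purely as an illustration of the kind of message such a system might broadcast (the format and every field name below are invented; real V2V standards define their own):

```python
import json, time

def hard_brake_message(vehicle_id, lat, lon, decel_mps2):
    # Hypothetical broadcast announcing hard braking to nearby vehicles.
    return json.dumps({
        "type": "HARD_BRAKE",
        "vehicle_id": vehicle_id,
        "position": [lat, lon],
        "deceleration_mps2": decel_mps2,
        "timestamp": time.time(),
    })

print(hard_brake_message("car-42", 51.5074, -0.1278, 7.5))
```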

What would you change in the above to simplify or improve the interface of such a vehicle?

Edit: An idea came to me after posting this. What if the routes were “sticky”, the way that tools in a graphics editing program are? The joystick could have tactile feedback and tend to “prefer” one of the existing routes, but the user could still move the vehicle onto a different one.
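
A quick sketch of the sticky behaviour (scales and thresholds invented): treat the joystick’s lateral deflection as a value in [-1, 1], and snap it to the nearest candidate route unless the user pushes past a threshold.

```python
def sticky_route(deflection, route_offsets, snap_radius=0.15):
    """Snap joystick deflection to the nearest route, graphics-editor style.

    deflection: lateral joystick position in [-1, 1]
    route_offsets: lateral offsets of candidate routes on the same scale
    """
    nearest = min(route_offsets, key=lambda r: abs(r - deflection))
    if abs(nearest - deflection) <= snap_radius:
        return nearest      # inside the sticky zone: snap to the route
    return deflection       # pushed hard enough: let the user leave it

# Three lanes at -0.5, 0.0 and +0.5: a small nudge still snaps to centre,
# a firm push escapes the snap.
print(sticky_route(0.1, [-0.5, 0.0, 0.5]))   # -> 0.0 (snapped)
print(sticky_route(0.3, [-0.5, 0.0, 0.5]))   # -> 0.3 (free)
```

The same snap radius could drive the tactile feedback: resistance would grow as the stick nears the edge of a sticky zone.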