By I-Hsien Sherwood (i.sherwood@latinospost.com) | First Posted: Apr 30, 2013 03:51 PM EDT

Ever wonder how one of Google's self-driving cars can tell what's going on around it? Idealab's Bill Gross tweeted a pic showing exactly what an intersection looks like to one of those cars.

"Google's Self-Driving Car gathers almost 1 GB per SECOND. Here's what it "sees" making a left turn: pic.twitter.com/vZCWhEeBmF," Gross tweeted Monday night.

The multi-hued picture looks a bit like a contour map of a four-way intersection. Concentric lines radiate from the car, tracing the outlines of other vehicles, pedestrians, the edges of the road, stationary objects on the sidewalk and the surrounding buildings.
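Those concentric rings are what you'd expect from a spinning range sensor like a LIDAR unit, which reports (angle, distance) pairs as it sweeps around the car. A toy sketch of how such readings become rings of points on a map (the sample counts and ranges here are made up for illustration, not taken from Google's system):

```python
import math

# Illustrative only: a spinning range sensor reports (angle, distance)
# pairs; converting them to x/y coordinates around the car produces
# concentric rings of points like those in the tweeted image.

def ring_points(radius_m: float, samples: int = 8):
    """Return (x, y) points for one full 360-degree sweep at a fixed range."""
    return [
        (radius_m * math.cos(2 * math.pi * i / samples),
         radius_m * math.sin(2 * math.pi * i / samples))
        for i in range(samples)
    ]

# Three concentric rings at 5 m, 10 m and 15 m around the car:
rings = [ring_points(r) for r in (5.0, 10.0, 15.0)]
```

In a real scan, an obstacle would shorten the reported distance along the angles it blocks, which is how the outlines of cars and pedestrians emerge from the rings.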

Unlike a human driver, the self-driving car doesn't appear to have any blind spots, though it can't see around or through other objects.

Gross is the founder of Idealab, a tech management company that runs or has run big names in consumer tech like Citysearch, Picasa, NetZero and PetSmart. It's based in Pasadena, and Gross is a big enough name that he's likely involved in the driverless car initiative in some way, though he doesn't say where he got the pic.

The fact that it's an image of an intersection bodes well for the program. The computer piloting the car has much better reaction time than a human, but it's not as good at predicting what other cars and people are going to do. Driverless cars tend to perform much better on highways, where lane changes and braking are infrequent and every car is moving in the same direction.

Intersections and city traffic pose a much more complex problem for programmers, as there are many more variables to consider. Still, it likely won't be long before self-driving cars outperform humans even in traffic; they don't get angry when someone cuts them off, and they don't get distracted by sundresses or the latest Macklemore single.

© 2015 Latinos Post. All rights reserved. Do not reproduce without permission.