Uber’s engineering blog recently posted about the company’s web-based tool for exploring and visualizing data from autonomous car research. It’s a look at an impressive platform, and perhaps a bit of a boast to rival Waymo, whose own self-driving vehicle program was written up in The Atlantic only a few days ago.
The Atlantic’s article seemed to suggest that Waymo is unique in its approach to improving its autonomous cars’ artificial intelligence. In reality, any company keeping pace with the current state of the art is likely taking a rather similar approach.
The “secret” technique, which in fact every company in question knows about, is learning from the data accrued over a million miles of test driving.
Once a vehicle has been driven that kind of distance, the resulting data can be mixed and matched in a virtual environment, which the AI navigates just as if it were real. The computer can’t tell the difference, and engineers can tweak the data, watch for unusual events, or compare multiple models against the same drive.
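As a rough illustration of what “comparing multiple models” against replayed log data might look like, here is a minimal sketch in TypeScript. Every type, field name, and threshold in it is hypothetical, invented for this example; it is not Uber’s or Waymo’s actual tooling.

```typescript
// Hypothetical log-replay harness. All names and structures here are
// illustrative assumptions, not any company's real API.

interface LogFrame {
  timestampMs: number;
  lidarPoints: Float32Array; // flattened x, y, z triples
  vehiclePose: { x: number; y: number; heading: number };
}

interface DrivingModel {
  name: string;
  // The steering/throttle decision the model would make for this frame.
  decide(frame: LogFrame): { steering: number; throttle: number };
}

// Replay the same recorded drive through two model versions and report
// the frames where their decisions diverge noticeably.
function compareOnLog(log: LogFrame[], a: DrivingModel, b: DrivingModel): void {
  for (const frame of log) {
    const da = a.decide(frame);
    const db = b.decide(frame);
    const steeringDelta = Math.abs(da.steering - db.steering);
    if (steeringDelta > 0.1) {
      console.log(
        `t=${frame.timestampMs}ms: ${a.name} vs ${b.name} diverge ` +
          `(steering delta ${steeringDelta.toFixed(2)})`
      );
    }
  }
}
```

Because the replay is deterministic, the same drive can be run through any number of candidate models, and the interesting output is exactly the disagreements.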
The Uber blog post highlights only the visualization of this data and the details of its web-based tools, which allow for easy collaboration and quick turnaround on new features: GPUs can be tapped from within web apps, teams can communicate in real time, and so on. For the most part, there’s no longer any need for a local client.
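Uber did build and open-source deck.gl, a WebGL-powered visualization library, which gives a concrete sense of how “GPUs accessed via web apps” works in practice. The sketch below renders a lidar sweep as a point cloud in the browser; the data URL and point format are assumptions made for illustration, not anything taken from the blog post.

```typescript
import { COORDINATE_SYSTEM, Deck } from '@deck.gl/core';
import { PointCloudLayer } from '@deck.gl/layers';

// Hypothetical point format: positions in meters relative to the vehicle,
// intensity in the 0-255 range.
type LidarPoint = { position: [number, number, number]; intensity: number };

async function main() {
  // Placeholder URL, assumed for the example.
  const response = await fetch('https://example.com/drive-logs/sweep-0001.json');
  const points: LidarPoint[] = await response.json();

  new Deck({
    initialViewState: { longitude: 0, latitude: 0, zoom: 18, pitch: 45 },
    controller: true, // built-in mouse/touch camera controls
    layers: [
      new PointCloudLayer({
        id: 'lidar-sweep',
        data: points,
        // Positions are meter offsets from a fixed origin, not lon/lat.
        coordinateSystem: COORDINATE_SYSTEM.METER_OFFSETS,
        coordinateOrigin: [0, 0],
        getPosition: (p: LidarPoint) => p.position,
        getColor: (p: LidarPoint) => [p.intensity, p.intensity, 255],
        pointSize: 2,
      }),
    ],
  });
}

main();
```

All the rendering happens on the viewer’s GPU via WebGL, which is what makes a browser tab a plausible replacement for a native desktop client.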
What the post does not get into, however, but is pretty much a foregone conclusion given the sophistication of the tools being shown off, is how to further multiply the data’s value: by essentially making up the environment out of whole cloth.
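One plausible reading of “making up the environment out of whole cloth,” and this is only a guess at what such a pipeline involves, is recombining and perturbing recorded scenes to produce situations that never literally occurred. A toy sketch, with every name and parameter invented:

```typescript
// Illustrative only: spin synthetic scenario variants out of a recorded
// scene. None of these structures reflect any company's real system.

interface Actor {
  kind: 'car' | 'pedestrian' | 'cyclist';
  x: number; // meters
  y: number; // meters
  speed: number; // m/s
}

interface Scenario {
  roadId: string;
  actors: Actor[];
}

// Random offset in the range [-magnitude, +magnitude].
function jitter(value: number, magnitude: number): number {
  return value + (Math.random() * 2 - 1) * magnitude;
}

// Same road geometry, but actors shifted and re-timed, so a model
// trained or tested on the output faces scenes it has never seen.
function synthesize(recorded: Scenario, variants: number): Scenario[] {
  return Array.from({ length: variants }, () => ({
    roadId: recorded.roadId,
    actors: recorded.actors.map((a) => ({
      ...a,
      x: jitter(a.x, 2.0),
      y: jitter(a.y, 0.5),
      speed: Math.max(0, jitter(a.speed, 1.5)), // never negative
    })),
  }));
}
```

The appeal is leverage: every mile actually driven can be expanded into many virtual miles, each slightly different from the original.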