Shh! The Robots Can Now Read Your Lips
Computers are moving way past simply, well, computing. Now, they are starting to truly see.
That means they are starting to put images into context. And this represents a huge step forward for technology as we know it.
Part of this evolution is new hardware. Technology giants are pushing the limits of what is possible with bleeding-edge microprocessors and image sensors. Another part, maybe the more important one, is the acceleration of artificial intelligence.
The neural networks that drive AI are learning at an exponential rate. And they have an integral role in “tentpole” technologies like self-driving cars. But what we’ve seen from them so far is merely the tip of the iceberg.
If computers can see and understand context, then potential new business models are limitless. They can surveil, read lips … even understand body language and complex emotional facial cues.
Given their learning curve, in time, these abilities will be superhuman.
In April, a Google (GOOG) engineer wanted to push the limits of smartphone image sensors. Using a standard Pixel smartphone and customized Android camera software, he took a single picture of the Golden Gate Bridge at dusk. He then snapped 30 burst-mode shots.
But here’s what’s remarkable: He covered the image sensor with an opaque material. The software did the rest, reconstructing the image from data degraded by the lack of light.
The finished image was stunning.
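Google hasn’t published the exact pipeline behind that shot, but the core idea of burst photography is simple: average many noisy exposures of the same scene so that random sensor noise cancels out while the real signal remains. A minimal sketch, assuming aligned frames and using simulated data (this is a hypothetical illustration, not Google’s code):

```python
import numpy as np

def merge_burst(frames):
    """Average a stack of aligned, noisy burst frames into one cleaner image.

    frames: sequence of arrays, each of shape (height, width). Averaging n
    independent noisy exposures reduces random noise by roughly sqrt(n).
    """
    return np.mean(np.asarray(frames, dtype=np.float64), axis=0)

# Simulate a faint scene photographed 30 times with heavy sensor noise.
rng = np.random.default_rng(0)
scene = np.full((64, 64), 10.0)                 # dim, uniform "true" signal
frames = [scene + rng.normal(0, 25, scene.shape) for _ in range(30)]

merged = merge_burst(frames)
single_noise = np.std(frames[0] - scene)        # noise in one exposure
merged_noise = np.std(merged - scene)           # noise after merging 30
print(merged_noise < single_noise / 3)          # prints True: ~sqrt(30)x cleaner
```

Real computational-photography pipelines add alignment, motion rejection and tone mapping on top, but this averaging step is what lets software pull a clean image out of near-darkness.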
This month, Nvidia (NVDA) debuted updated software for its self-driving car platform, which uses high-tech sensors to navigate winding dirt roads, snow, fog and obstacle-littered test routes. But what’s really fascinating is this …
Interior sensors proved surprisingly adept at recognizing where drivers were looking, and even at reading their lips! Apparently, frustrated drivers tend to curse and drive more poorly. And now, this software can “see” that.
Nvidia’s software learned what data points were important, and then added them to the guiding algorithm.
Now, not everyone is pumped about the pace at which neural networks are learning. Replacing humans with robotic cars will certainly slash traffic fatalities. But thought leaders still worry about moving too quickly.
The founders of Microsoft (MSFT) and Tesla (TSLA), Bill Gates and Elon Musk, have joined physicist Stephen Hawking. Together, they make a compelling, although grisly, case for the robot apocalypse.
They surmise it’s only a matter of time before super-intelligent, AI-powered machines conclude that we pesky, irrational humans are the real problem.
Enter the terminators …
I don’t see it that way. I’m far more optimistic. In fact, I see a world of opportunity ahead for investors.
At the Google I/O developers conference this week, Alphabet (GOOGL) announced the further democratization of its TensorFlow AI platform. It’s providing tools so third-party builders can easily bring robust AI to all sorts of new applications and businesses.
This process is being repeated at Facebook (FB), Amazon (AMZN), Microsoft and Nvidia.
First movers will continue to do well. But the real opening for investors now is the emerging technology companies that will ultimately build on top of these impressive foundations.
Finding those opportunities has been my focus. It led me to Nvidia two years ago. Then, it was only a gaming graphics card maker with seemingly limited prospects.
Right now, I’m convinced the next Nvidia is building AI-powered software to revolutionize retail payments, video surveillance or augmented reality.
There are plenty of these smaller companies on my radar. To be among the first to learn about them as soon as I discover them, click here.