Qualcomm made a major set of AI/5G announcements this week at a San Francisco media event, including the release of new smartphone processors with enhanced artificial intelligence capabilities and a server-based AI accelerator. But the most interesting thing said actually came at the end, when most folks appeared to have nodded off or moved on to their email: In a lab in Amsterdam, Qualcomm is working on applying quantum field theory to deep-learning AI. This is an enormous potential game changer, and it is our topic for this week.
The Birth of the Quantum AI
For the record: Quantum computing is the use of quantum-mechanical phenomena, such as superposition and entanglement, to perform computation. A quantum computer is a machine that performs such computation; it can be modeled theoretically or implemented physically.
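For readers who want a concrete picture of what superposition means computationally, here is a minimal sketch of a single simulated qubit placed into an equal superposition. This is my own illustration, not anything Qualcomm or IBM has published.

```python
import numpy as np

# A classical bit is either 0 or 1; a qubit is a unit vector of complex
# amplitudes over the basis states |0> and |1>.
ket0 = np.array([1, 0], dtype=complex)

# The Hadamard gate puts a qubit into an equal superposition of |0> and |1>.
H = np.array([[1,  1],
              [1, -1]], dtype=complex) / np.sqrt(2)
psi = H @ ket0

# Measurement probabilities are the squared magnitudes of the amplitudes.
print(np.abs(psi) ** 2)  # [0.5 0.5] -- equal odds of reading 0 or 1
```

The point is simply that a qubit carries amplitudes for both states at once, which is where the theoretical speedups are expected to come from.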
There is a lot of work going into both quantum computing and artificial intelligence, but the two efforts are largely independent of each other. This is mostly because we have yet to see a quantum computer that comes close to its potential; we are also still early in developing AI on current technology. Most analysts think that moving AIs from traditional computers to quantum computers will come some time after we have the first working quantum computer. That timeline would place the event around 2030 or later, so it's well outside any reasonable planning horizon. I really only know of one company that is even looking at this blend, and that is IBM.
However, you don't need a quantum computer to use quantum theory. What Qualcomm announced as a working project was the application of quantum field theory to deep-learning vision AIs using Qualcomm computing technology.
This is existing Qualcomm computing technology, not some future quantum computer. Quantum field theory provides a unique way of framing and solving complex problems on massively parallel processing resources.
Where they are applying this is to distorted images. For instance, let's say you have a globe lens on a camera: a lens that captures not only the world around the camera but the world above and below it as well. With one frame you capture pretty much the entire environment. The problem is that everything is distorted by the lens, and this substantially reduces the AI's ability to make sense of what the camera is seeing.
Apparently, using quantum field theory provides a mathematical model that can remove this distortion in real time, significantly improving the ability of this one lens to be used effectively for AI recognition.
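Qualcomm hasn't published its math, but to make the general idea of dewarping concrete, here is a rough sketch of the conventional approach using OpenCV's fisheye model: precompute a remapping from the distorted image to a flat view, then apply it to each frame. The camera matrix, distortion coefficients and file names below are hypothetical placeholders, not anything from Qualcomm.

```python
import cv2
import numpy as np

# Hypothetical intrinsics for a wide-angle "globe" camera; real values would
# come from a one-time calibration of the actual lens.
K = np.array([[400.0,   0.0, 640.0],
              [  0.0, 400.0, 360.0],
              [  0.0,   0.0,   1.0]])       # camera matrix
D = np.array([0.1, -0.05, 0.01, 0.0])       # fisheye distortion coefficients

frame = cv2.imread("globe_frame.jpg")        # placeholder input frame
h, w = frame.shape[:2]

# Precompute the pixel remapping once, then apply it cheaply to every frame.
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
flat = cv2.remap(frame, map1, map2, cv2.INTER_LINEAR)

# 'flat' is an undistorted view that an ordinary vision model can consume.
cv2.imwrite("flat_frame.jpg", flat)
```

Precomputing the maps once and reusing them on every frame is what keeps this kind of correction cheap enough to run continuously.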
Applying the Quantum Theory Approach
Now let's take a crazy application such as Amazon Go. These are the test stores, which have run into technical problems, where your purchase is captured as soon as you pick up the item. These stores have around 5,000 cameras each. One server can only handle 100 cameras, so you end up with an unsustainable 50 servers per store, with all of the related overhead, and these stores aren't that big. But if you could use globe cameras, you could cut the number of cameras needed by around an order of magnitude or more, getting to a far more practical five servers or fewer per store. (I actually think you might be able to get down to far fewer than 100 cameras, and then one server would be all you'd need.)
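For what it's worth, the back-of-the-envelope math behind those server counts looks like this (all figures are the column's own rough estimates):

```python
# Back-of-the-envelope math using the column's rough figures.
cameras_per_store  = 5_000
cameras_per_server = 100

servers_today = cameras_per_store / cameras_per_server        # 50 servers

# If globe cameras cut the camera count by roughly an order of magnitude:
cameras_with_globe = cameras_per_store // 10                   # ~500 cameras
servers_with_globe = cameras_with_globe / cameras_per_server   # ~5 servers

print(servers_today, servers_with_globe)                       # 50.0 5.0
```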
In any case, this one technology could change an attractive concept that simply doesn't scale into one that does, potentially making the Amazon Go concept viable even for big-box stores like Costco.
Other areas where this might be applied are autonomous cars or aircraft. Once again, rather than an array of cameras covering every angle, you'd only need a couple, plus either a 360-degree lens or a globe lens, dramatically reducing the number of cameras, the related complexity and, potentially, the cost of the autonomous vehicle system.
This lower cost would result in a lower price, making autonomous cars more affordable and driving the technology down market before it is even widely available in luxury cars.
Wrapping Up: The Quantum AI
This all raises the question: If you can use quantum theory to massively improve an AI's performance using current technology, what will happen when we have real quantum computers and the mathematical algorithms can work at true quantum speeds? Tied to extended reality glasses, this could dynamically change the world around us visually into anything we want it to be and still allow us to interact with physical objects that simply look different. If you wanted to live in a "Game of Thrones" world that was an exact overlay on this one, that could become a possibility.
Until then, this quantum hybrid approach to visual capture and analytics has the potential to change a lot of markets and vastly reduce the costs associated with the aggressive use of visual AIs. This should lower the cost of fully automated stores, autonomous cars and drones (land, air and sea), and provide security camera options we can now only imagine.
I expect this will eventually be a huge game changer. I also expect that I’m just touching the tip of this iceberg in this column.
Rob Enderle is a principal at Enderle Group. He is an award-winning analyst and a longtime contributor to QuinStreet publications and Pund-IT.