Imagine silently mouthing something to yourself and having your AI assistant know what you’re trying to say, whether through your glasses, your earbuds or your phone’s camera. Apple recently bought a company called Q.ai that’s trying to do exactly that. It sounds weird and sci-fi, and yet to me, as someone who’s been looking at smart glasses and wearables for a long time, it’s also familiar.
Apple’s investment in this Israeli startup is not insignificant. The acquisition is worth about $2 billion, according to the original report in the Financial Times and follow-ups from outlets like Reuters, making it Apple’s biggest purchase since Beats a decade ago. Unlike Beats, though, almost nobody has heard of Q.ai. At least, not yet. But the possibilities for new interfaces could be very powerful: another key piece has been added to the ever-expanding puzzle of future personal technology interfaces.
Q.ai isn’t a company I know or have gotten a demo of, but one of its founders, Aviad Maizels, also created PrimeSense, the infrared-based technology that powered the 3D room-scanning capabilities of Microsoft’s Kinect camera for Xbox years ago. Apple acquired PrimeSense in 2013. That tech became the TrueDepth camera array for Face ID, and it also lives on in Apple’s Vision Pro for close-range hand tracking.
Judging by what has been reported about its patents, Q.ai’s technology tracks small facial movements and emotional expressions with optical sensors, which could enable silent command input for an AI interface or the recognition of other subtle facial cues. The Israeli site GeekTime goes into a little more detail, saying the technology measures muscle and lip movements and may need to be close to your mouth.
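To make the idea concrete, here’s a rough, purely hypothetical sketch of how silent-command recognition from optical lip tracking could work in principle. Q.ai hasn’t disclosed its actual method, and every name, value and threshold below is invented for illustration.

# Hypothetical sketch: matching silent commands from optical lip tracking.
# Nothing here reflects Q.ai's actual, undisclosed implementation.
from dataclasses import dataclass

@dataclass
class MouthFrame:
    openness: float   # vertical lip separation, normalized 0..1 (hypothetical)
    width: float      # horizontal lip stretch, normalized 0..1 (hypothetical)

# Toy "templates": each silent command is a short sequence of mouth shapes.
COMMAND_TEMPLATES = {
    "wake_assistant": [MouthFrame(0.1, 0.5), MouthFrame(0.6, 0.4), MouthFrame(0.1, 0.5)],
    "dismiss":        [MouthFrame(0.1, 0.8), MouthFrame(0.1, 0.5)],
}

def distance(a: MouthFrame, b: MouthFrame) -> float:
    # Simple L1 distance between two mouth shapes.
    return abs(a.openness - b.openness) + abs(a.width - b.width)

def match_command(frames: list[MouthFrame], threshold: float = 0.3) -> str | None:
    """Return the template closest to the most recent frames, if close enough."""
    best_name, best_score = None, float("inf")
    for name, template in COMMAND_TEMPLATES.items():
        if len(frames) < len(template):
            continue
        recent = frames[-len(template):]
        score = sum(distance(f, t) for f, t in zip(recent, template)) / len(template)
        if score < best_score:
            best_name, best_score = name, score
    return best_name if best_score < threshold else None

# Example: a mouthed open-close gesture matching "wake_assistant".
observed = [MouthFrame(0.12, 0.5), MouthFrame(0.58, 0.42), MouthFrame(0.1, 0.52)]
print(match_command(observed))  # -> "wake_assistant"

A real system would presumably run learned models over raw sensor data rather than hand-tuned templates like these, but the basic pipeline, track mouth shapes over time, then map the pattern to a command, is the part that matters.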
CNET reached out to Apple and Q.ai for comment, but neither immediately responded.
Vision Pro can already track facial movements, but it cannot convert lip movements into speech.
Part of a new interface system for wearables and glasses?
I recently wrote about how Apple shows signs of moving toward an ecosystem of connected AI wearables: pins, glasses, earbuds, watches or some combination thereof. Any of those wearables could use what Q.ai is developing. Headphones and glasses seem like the two most likely homes, though, and with reports that the next generation of AirPods will have infrared cameras on board, the pieces look ready to connect.
Even mixed-reality headsets like the Vision Pro could use Q.ai’s tech. The Vision Pro already recognizes facial expressions using its eye-tracking cameras, downward-facing cameras and infrared sensors. But interacting with it still feels a bit awkward to me: I use my eyes to look and my hands to pinch at things, but I have to say “Hey Siri” to make voice requests. I’d prefer interactions that feel more natural and subtle. Maybe this new purchase will help.
As augmented reality artist and researcher Helen Papagiannis notes in her new newsletter, “Apple’s rumored AI pin makes less sense as a standalone product and more as a node in Apple’s ecosystem, drawing on shared sensing, intelligence and context across devices working in concert with AirPods and, eventually, glasses.”
Right now, smart glasses like Meta’s, and future ones from Google, rely mostly on voice for interaction. Making that input silent could be a huge advantage, but options beyond sound are also emerging. Meta has a wrist-worn neural band, with the ultimate goal of adding eye tracking to its glasses as well. Google’s glasses also work with watch-based gestures.
I’m also more than a little concerned about privacy. Any technology that can read lips and recognize subtle expressions could be used to track and listen in on your intent from afar. Can this technology be used privately and reliably? Or would silently mouthing requests actually be more private than the voice commands I use now?
What comes beyond lip reading?
I’m still curious about interfaces that don’t involve speech at all. Meta’s electromyography-based neural band technology points to more complex ways that wrist movements could be used with glasses and earbuds. Another Israeli company, Wearable Devices, has its own neural band, called Mudra, and aims to expand its subtle input capabilities, which are derived from the electrical impulses of motor neurons.
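Purely for illustration, here’s a toy sketch of how a band like that might turn muscle-activation readings into discrete gesture events. Neither Meta nor Wearable Devices has published its signal processing, so the channel layout, names and thresholds here are all invented.

# Purely illustrative: mapping wrist EMG-style samples to discrete gestures.
# Not based on Meta's or Mudra's actual, unpublished signal processing.

# Each sample: one activation level (0..1) per finger channel (hypothetical layout).
CHANNELS = ("thumb", "index", "middle")

def detect_gesture(samples: list[dict[str, float]], spike: float = 0.7) -> str | None:
    """Return a gesture name if one channel spikes while the others stay quiet."""
    for sample in samples:
        for channel in CHANNELS:
            others = [sample[c] for c in CHANNELS if c != channel]
            if sample[channel] > spike and all(v < 0.2 for v in others):
                return f"{channel}_tap"
    return None

# Example: a burst of activity on the index channel reads as an "index_tap".
stream = [
    {"thumb": 0.05, "index": 0.10, "middle": 0.08},
    {"thumb": 0.06, "index": 0.85, "middle": 0.09},  # index finger fires
    {"thumb": 0.04, "index": 0.12, "middle": 0.07},
]
print(detect_gesture(stream))  # -> "index_tap"

The appeal of this kind of input is exactly what the toy version shows: a tiny, deliberate muscle movement becomes a command without any sound or visible gesture.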
Electroencephalography, which measures brain signals, is another direction. While some companies are exploring EEG for brain-computer interfaces, it remains primarily a sensing technology focused on health and medical applications.
Q.ai’s technology points to interfaces that make the wearable computers we use feel more connected to us. That may sound weird, even creepy, but it’s also where I think most glasses, wearable and VR/AR companies already are. This isn’t an outlier; Apple’s move is another part of the trend.