Artificial intelligence models can be surprisingly stealable, provided you can somehow sniff out the model’s electromagnetic signature. While repeatedly emphasizing that they do not, in fact, want to help people attack neural networks, researchers at North Carolina State University describe such a technique in a new paper. All they needed was an electromagnetic probe, several pre-trained, open-source AI models, and a Google Edge Tensor Processing Unit (TPU). Their method involves analyzing the electromagnetic radiation the TPU chip gives off while it is actively running.
“It is very expensive to build and train a neural network,” said the study’s lead author and NC State Ph.D. student Ashley Kurian in a call with Gizmodo. “It’s intellectual property owned by a company, and it takes a lot of time and computing resources. For example, ChatGPT is made of billions of parameters, which is a kind of secret. If someone steals it, ChatGPT is theirs. You know, they don’t have to pay for it, and they can sell it.”
Theft is already a high-profile concern in the AI world, though it often runs the other way around: AI developers train their models on copyrighted works without permission from their human creators. This widespread pattern is sparking lawsuits and even tools to help artists fight back by “poisoning” art generators.
“Electromagnetic data from the sensor essentially gives us a ‘signature’ of the AI’s processing behavior,” Kurian explained in a statement, calling it “the easy part.” But to work out the model’s hyperparameters, meaning its architecture and layer details, the researchers needed to compare the electromagnetic field data with data captured while other AI models ran on the same kind of chip.
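Conceptually, that comparison step resembles template matching: a captured electromagnetic trace is scored against a library of reference traces recorded from known models on the same type of chip, and the closest match hints at what is running. The Python sketch below is only a simplified illustration of that idea, not the researchers’ actual pipeline; the trace arrays, model names, and file paths are hypothetical assumptions.

```python
import numpy as np

def normalize(trace: np.ndarray) -> np.ndarray:
    """Zero-mean, unit-variance scaling so traces from different runs are comparable."""
    return (trace - trace.mean()) / (trace.std() + 1e-12)

def best_matching_model(captured: np.ndarray,
                        references: dict[str, np.ndarray]) -> tuple[str, float]:
    """Score a captured EM trace against reference traces from known models
    and return the name and correlation of the closest match."""
    scores = {}
    for name, ref in references.items():
        n = min(len(captured), len(ref))               # compare equal-length windows
        a = normalize(captured[:n])
        b = normalize(ref[:n])
        scores[name] = float(np.corrcoef(a, b)[0, 1])  # Pearson correlation
    best = max(scores, key=scores.get)
    return best, scores[best]

# Hypothetical usage with traces previously recorded on the same type of chip:
# references = {"mobilenet_v2": np.load("mobilenet_v2_trace.npy"),
#               "resnet50": np.load("resnet50_trace.npy")}
# name, score = best_matching_model(np.load("captured_trace.npy"), references)
# print(f"Closest known signature: {name} (r = {score:.3f})")
```

The actual attack reportedly goes further, recovering per-layer specifications rather than merely identifying a whole model, so this sketch conveys the flavor of the comparison rather than its fidelity.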
In doing so, they “were able to determine the architecture and specific characteristics, known as layer specifications, we needed to create a copy of the AI model,” explained Kurian, who added that they did so with “99.91% accuracy.” To achieve this, the researchers had physical access to the chip both for probing and for running other models. They also worked directly with Google to help the company determine how attackable its chips are.
Kurian estimated that capturing models running on smartphones, for example, would also be possible, but their super-compact design would inherently make it more difficult to monitor the electromagnetic signals.
“Side-channel attacks on edge devices are not new,” Mehmet Sencan, a security researcher at AI standards nonprofit Atlas Computing, told Gizmodo. But this particular technique “of extracting entire model architecture hyperparameters is significant.” Since AI hardware “performs inference in plaintext,” Sencan explained, “anyone deploying their models on the edge or in any server that is not physically secured would have to assume their architectures can be extracted through extensive probing.”