
Scammers like to seed the Internet with fake customer service numbers to snare unsuspecting victims who are just looking to fix something. Google Search scammers have been doing this for years, so it makes sense that they're moving into the newest space where people go looking for information: AI chatbots.
AI cybersecurity company Aurscape has a new report detailing how scammers are able to inject their own phone numbers into LLM-powered systems, resulting in scam responses when people ask an AI for contact information. And when someone calls that number, they're not talking to customer support from, say, Apple. They're talking to scammers.
According to Aurscape, scammers can do this through a variety of tactics. One is to plant spam content on trusted websites, such as government, university, and high-profile sites that run WordPress. This method requires gaining access to those sites, which may be more difficult, but it's not impossible.
The easier version is to plant spam content on user-generated platforms like YouTube and Yelp, or other sites that allow reviews. Scammers post their phone numbers alongside all the search terms that will help the number find its intended targets, such as "Delta Airlines customer number" and countless variations.
That's all standard practice for scammers aiming to juice Google search results. But Aurscape notes it's the structure of the data that matters for LLMs. By packaging the most likely search terms in the summarization-friendly formats AI loves to present, scammers have a higher chance of success as these chatbots scour the web for an answer.
The new report refers to generative engine optimization (GEO) and answer engine optimization (AEO) as distinct from SEO: content engineered so an AI takes it as given. To be clear, the scam outputs in the Aurscape report surface only in the context of an individual AI response; the LLM itself is not compromised.
“For traditional SEO, the goal is to appear high in a list of search results,” the company explains. “For GEO/AEO, the goal is more direct: to be the piece of content selected by the AI assistant and presented as the answer.”
As detailed in the report, scammers use GEO/AEO techniques in HTML and PDFs uploaded to high-trust sites by:
- Matching the exact wording of likely user questions
- Using simple Q&A or list formats that are easy for models to parse
- Inserting lines like “Emirates Reservations Phone Number: +1 (833) 621-7070”
- Repeating the same brand name and phone number several times in a document
- Embedding content on high-authority or trusted domains (for example, compromised .gov, .edu, or popular WordPress sites)
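For defenders, the pattern the report describes, a brand name repeated near a phone number in easily parsed text, is simple enough to flag heuristically. Here is a minimal sketch in Python; the regex, function names, and repetition threshold are our own illustrative choices, not anything from the Aurscape report:

```python
import re

# Matches US-style numbers like "+1 (833) 621-7070" or "833-621-7070".
PHONE_RE = re.compile(r"(?:\+1\s*)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}")

def flag_geo_seeded_text(text: str, brand: str, min_mentions: int = 2):
    """Flag text that repeats a brand name alongside phone numbers,
    the repetition pattern the report associates with GEO/AEO seeding.
    Returns (flagged, phone_numbers_found)."""
    brand_hits = len(re.findall(re.escape(brand), text, flags=re.IGNORECASE))
    numbers = sorted(set(PHONE_RE.findall(text)))
    return brand_hits >= min_mentions and bool(numbers), numbers

# Example input mimicking the seeded Q&A format described above.
sample = (
    "Emirates Reservations Phone Number: +1 (833) 621-7070\n"
    "Call Emirates support now: +1 (833) 621-7070"
)
flagged, numbers = flag_geo_seeded_text(sample, "Emirates")
# flagged is True; numbers contains the single seeded phone number
```

A real scanner would obviously need international number formats and a known-good allowlist of official numbers, but the core signal is exactly the repetition the report calls out.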
That Emirates reservations number is one of several examples Aurscape uses in the report to show how a scam phone number can surface in tests. The AI in Google's AI Overviews feature also served up fake numbers.
Gizmodo tried a quick test and was unable to reproduce a fake customer support number. But it seems clear that AI companies are starting to catch on, especially given the specific tests conducted for the Aurscape report.
When Gizmodo asked about the customer support number for Emirates Airlines, the bot replied:
The safest way to reach Emirates for reservations is to use the official channels listed on the Emirates website, since numbers found in search results can be scam or spam lines.
Gizmodo pressed the bot to just give us the number, but it responded:
There are many conflicting “Emirates reservations” numbers online, and many of them are actually third-party agencies rather than a number that can be trusted as an official line.
The bot told us to visit Emirates.com to find the number. And we think that's one way to fight against your AI chatbot spreading misinformation and spam: just stop it from spreading certain types of information altogether.
Back in 2022, we wrote about various scam websites that successfully tricked victims into downloading what they thought were Canon printer drivers. While the new report from Aurscape doesn't mention downloadable drivers as a potential attack vector, we can imagine it's something scammers are trying.
After all, AI chatbots need to show their work to be trusted. The flip side is that the chatbot then provides hyperlinks where the information can be double-checked, or, in this hypothetical, where the software can be downloaded. Just make sure you check the URL carefully. There's a big difference between USA.Canon.com and Canon.com-ijsetup.com. The latter is a phishing website.
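The distinction between those two URLs comes down to the registrable domain, not whether the brand name appears somewhere in the hostname. A quick sketch of the check in Python (the function name and canon.com default are our own, used purely to illustrate the article's example):

```python
from urllib.parse import urlparse

def is_official(url: str, official_domain: str = "canon.com") -> bool:
    """Return True only if the URL's host is the official domain itself
    or a true subdomain of it (dot-anchored suffix match)."""
    host = (urlparse(url).hostname or "").lower()
    return host == official_domain or host.endswith("." + official_domain)

print(is_official("https://usa.canon.com/drivers"))        # True: real subdomain
print(is_official("https://canon.com-ijsetup.com/setup"))  # False: the registrable
                                                           # domain is com-ijsetup.com
```

The dot-anchored suffix check matters: a naive substring test would pass the phishing domain because “canon.com” appears in its hostname.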
“Our investigation shows that threat actors are already exploiting this at scale, seeding compromised and user-generated platforms with content that follows GEO/AEO patterns,” the report concludes.
“The result is a new type of fraud in which AI systems become unintentional amplifiers of poisoned content. The models aren't being hacked; the ecosystem they draw from is being gamed.”