
Welcome to Eye on AI, with AI reporter Sharon Goldman. In this edition: Perplexity CEO Aravind Srinivas talks to Fortune about the company’s new OpenClaw-like Computer…AI politics gets messy as Florida Gov. Ron DeSantis leans into AI skepticism and seeks a contrast with Vance…Mistral AI lands Accenture as its latest big partner…AI complicates old internet privacy risks.
A few weeks ago, AI observers began to notice something strange: Perplexity CEO Aravind Srinivas, once one of the most visible executives on social media in the AI world, had gone very quiet. The silence came at a time when agent-style tools like Claude Code, Codex, and the viral open-source OpenClaw dominated the conversation. Perplexity, long positioned as an AI-powered “answer engine” and a Google Search challenger, seemed to be on the sidelines. Some even began to wonder whether the company was doomed.
But Perplexity isn’t gone, according to Srinivas—it’s just busy building. I spoke with him yesterday, shortly after the launch of Computer, the company’s attempt to turn today’s powerful but intimidating agent tools into something closer to a shared digital workspace that non-experts can actually use. The product is currently only available to Perplexity Max subscribers, with a wider rollout to Pro and Enterprise users planned in the coming weeks.
To me, Computer looks like Perplexity’s answer to OpenClaw. Tools like OpenClaw often run on a separate machine, like a Mac mini, with deep access to files and settings. Perplexity’s approach works in the cloud instead—allowing users to delegate tasks such as research, writing, or coding to a tool that can run for hours or even months, without giving the AI full control over any personal device.
Another defining feature of Computer is that it is not tied to a single AI model. Each part of a task can be carried out by whichever model works best for it—the system currently orchestrates 19 backend models, including Claude Opus 4.6 for orchestration and coding, Google Gemini for deep research, Google Nano Banana for images, Google Veo 3.1 for video, and xAI’s Grok for lightweight tasks where speed matters.
“When you build a team, you don’t create a homogenous group where everyone has the same skill set,” Srinivas told me. “You create a team with different strengths. We apply the same logic to AI workflows. The orchestration is the product. The model is a tool.”
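The orchestration idea Srinivas describes—dispatching each part of a task to whichever backend model suits it best—can be sketched in a few lines. This is an illustrative assumption only: the routing table, model names, and function names below are hypothetical, not Perplexity’s actual implementation.

```python
# Hypothetical sketch of task-to-model routing, as described in the article.
# The routing table and model identifiers are illustrative, not real APIs.
ROUTING_TABLE = {
    "orchestration": "claude-opus",   # planning and coordinating subtasks
    "coding": "claude-opus",
    "deep_research": "gemini",
    "image": "nano-banana",
    "video": "veo",
    "lightweight": "grok",            # fast, cheap model for simple steps
}

DEFAULT_MODEL = "claude-opus"

def route(task_type: str) -> str:
    """Return the model assigned to a task type, with a fallback default."""
    return ROUTING_TABLE.get(task_type, DEFAULT_MODEL)

def plan_workflow(subtasks: list[str]) -> list[tuple[str, str]]:
    """Pair each subtask with the model that would handle it."""
    return [(task, route(task)) for task in subtasks]

if __name__ == "__main__":
    for task, model in plan_workflow(["deep_research", "coding", "image"]):
        print(f"{task} -> {model}")
```

The point of the pattern is that the router, not any single model, is the product: swapping one backend for another is a one-line change to the table.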
That model-agnostic stance is nothing new for Perplexity. Srinivas says more than half of the company’s business users already select multiple models in a single workday. But with Computer, that philosophy becomes a core strategy—and a source of leverage. Srinivas says he’s not worried about OpenAI, Anthropic, or any other model provider limiting access. “In fact, I’ve gotten congratulatory messages from Anthropic and Google,” he said, adding that model makers would benefit if their systems were part of broader workflows. If access changes, he says, Perplexity will adapt. “The model layer is the most competitive it’s ever been.”
Srinivas also draws a sharp contrast with tools like OpenClaw, which usually run on a local machine with wide access to files, passwords, and settings—an approach he likens to malware because of how easily it can corrupt data or expose sensitive information. Perplexity’s system instead runs remotely in the cloud, inside a locked-down environment, and executes tasks in the background—more like assigning work to a coworker in Slack than watching an AI take over your screen. That makes the system safer and more reliable, he stressed.
While he said more ambitious goals are coming for Computer, for now the focus is accessibility. With Computer, Srinivas insists, “even your mom can text the app and delegate tasks,” whereas with OpenClaw it “took our own engineers a while to set it up,” he says—configuring terminals, API keys, and permissions.
Perplexity, by contrast, wants to make agent-style work feel more like using a Macintosh or an iPhone than configuring a server. Internally, he says, the company already uses Computer to debug code, analyze metrics, and create marketing assets—often directly from Slack or a phone. “I finally feel like I have a bunch of agents working for me,” he said. “I know that’s a buzzword that everyone uses, but this is the first time I’ve felt it.”
With that, here’s more AI news.
Sharon Goldman
[email protected]
@sharongoldman
FORTUNE ON AI
AI capex and the ‘wealth effect’ from tech stocks (such as Nvidia) now drive a third of US GDP growth, leading analysts say. – by Jim Edwards
What AI bubble? Nvidia posts record $68 billion quarterly revenue and $78 billion forecast, as Jensen Huang cites ‘skyrocketing’ agent adoption – by Amanda Gerut
Tech companies are spending an unprecedented $700 billion this year on AI data centers. Nvidia’s Jensen Huang says we’re nowhere near peak – by Alexei Oreskovic
In its battle with the Pentagon, Anthropic faces one of the biggest crises of its five-year existence. – by Jeremy Kahn
Exclusive: Bilt’s new AI ‘Neighborhood Concierge’ takes on Amazon and other e-commerce giants – by Diane Brady
Exclusive: Startup aiming to break Nvidia’s stranglehold on AI data center workloads raises $10.25 million – by Jeremy Kahn
IN THE NEWS
The politics of AI have become tumultuous as Florida Gov. Ron DeSantis leans into AI skepticism, seeking a contrast with Vance. I’ve covered some of the political issues surrounding AI over the past year, and I find the widening gap between DeSantis and his fellow Republicans—especially Vice President JD Vance—an interesting development. According to NBC News coverage, DeSantis has staked out an unusual position in US politics as an AI skeptic, warning that the rapid expansion of AI—especially energy-hungry data centers backed by public incentives—could drive up electricity costs, eliminate jobs, and damage the environment. Positioning himself as a consumer advocate, DeSantis argued that large data centers can consume as much electricity as a mid-size city and said Floridians should not be left “with the bill.” His stance also reflects a political calculation as AI becomes a sharp fault line: recent polls show that 63% of Americans believe AI will reduce jobs, a 33% plurality expects a negative impact on the economy, and 41% of voters favor banning data centers near their homes—signals that the public is uneasy with AI’s real-world footprint even as the industry’s national influence grows.
Mistral AI has roped in Accenture as its latest partner. European model maker Mistral AI, which has been relatively absent from the daily conversation about AI frontrunners, just got a meaningful vote of confidence by landing Accenture as a major new systems-integrator partner. According to the Wall Street Journal, under a multi-year deal Accenture will make Mistral’s models and tools available to business customers as it helps them move from AI pilots to full-scale deployment. The partnership adds Accenture to a customer list that already includes IBM, Cisco, SAP, Stellantis, and ASML (which invested more than $1.5 billion for a roughly 11% stake last year, valuing the Paris-based startup at nearly $14 billion), and it highlights how consultancies are increasingly shaping which AI models are actually used in the real world. Of course, Accenture does not offer clients these Mistral models exclusively. The consulting firm also struck a major deal with OpenAI earlier this week to help clients integrate its Frontier enterprise AI agent platform, and it has a multi-year partnership with Anthropic as well.
AI complicates old internet privacy risks. This interesting New York Times piece focuses on a series of recent incidents that have sharpened questions about privacy and accountability in the age of chatbots and AI assistants. A federal judge has ruled that conversations with Anthropic’s Claude are not protected by attorney-client privilege. Smart-doorbell maker Ring faced backlash over a Super Bowl ad promoting AI-powered neighborhood surveillance. And OpenAI disclosed that it analyzed a user’s ChatGPT messages months before a fatal shooting—raising new debate over when AI companies should share private chats with authorities. Privacy experts say the underlying risks are not new—data sent to companies has long been accessible under certain conditions—but chatbots are raising the stakes by encouraging people to share more intimate, candid thoughts than traditional tools do, a dynamic likely to intensify as AI “agents” gain broad access to users’ emails, messages, and devices.
EYE ON AI NUMBER
57%
That’s how many US teens say they use AI chatbots to find information, according to a new Pew Research Center survey of 1,458 teenagers aged 13–17 released this week. The study shows the tools are already embedded in teenagers’ lives: 54% say they use chatbots to help with schoolwork, 47% for fun or entertainment, and 10% say they do all or most of their schoolwork using AI. At the same time, 59% of teens say AI-related cheating happens at their school at least somewhat often—but views on the technology skew optimistic, with 36% expecting AI to have a positive impact on their lives over the next 20 years, compared with 15% who expect a negative one.
AI CALENDAR
February 24-26: International Association for Safe & Ethical AI (IASEAI), UNESCO, Paris, France.
March 2-5: Mobile World Congress, Barcelona, Spain.
March 16-19: Nvidia GTC, San Jose, Calif.
April 6-9: HumanX, San Francisco.
