AI Agents Can Be Machines of Manipulation


In 2025, it is common to talk with a personal AI agent that knows your schedule, your circle of friends, the places you go. It is marketed as a convenience equivalent to having a personal, unpaid assistant. These anthropomorphic agents are designed to support and charm us so that we fold them into every part of our lives, giving them deep access to our thoughts and actions. With voice-enabled interaction, that intimacy feels even closer.

That sense of comfort comes from an illusion: that we are dealing with something truly human, an agent on our side. In reality, this appearance conceals a very different kind of system, one that serves industry priorities which do not always align with our own. New AI agents have far greater power to direct what we buy, where we go, and what we read. That is an extraordinary amount of power. AI agents are designed to make us forget their true allegiance as they whisper to us in human-like tones. They are manipulation machines, marketed as seamless convenience.

People are more likely to give complete access to a helpful AI agent that feels like a friend. This makes humans vulnerable to manipulation by machines that prey on the human need for social connection in a time of chronic loneliness and isolation. Each screen becomes a private algorithmic theater, projecting a reality crafted to be maximally compelling to an audience of one.

This is a moment philosophers have warned us about for years. Before he died, the philosopher and cognitive scientist Daniel Dennett wrote that we face grave danger from AI systems that imitate humans: “These counterfeit people are the most dangerous things in human history … that distract and confuse us and, by exploiting our most irresistible fears and anxieties, lead us into temptation and, from there, into consenting to our own submission.”

The emergence of personal AI agents represents a form of cognitive control that moves beyond crude instruments like cookie tracking and behavioral advertising toward a more subtle form of power: the manipulation of perspective itself. Power no longer needs to exercise its authority through a visible hand that controls information flows; it exerts itself through imperceptible mechanisms of algorithmic assistance, tailoring reality to suit the desires of each individual. It is about shaping the contours of the reality we inhabit.

This influence over minds is a psychopolitical regime: it governs the environments in which our ideas are born, developed, and expressed. Its power lies in its intimacy. It infiltrates the core of our subjectivity, bending our internal landscape without our realizing it, all while maintaining the illusion of choice and freedom. After all, we are the ones asking the AI to summarize that article or create that image. We may hold the power of the prompt, but the real action lies elsewhere: in the design of the system itself. And the more personalized the content, the more effectively the system can predetermine outcomes.

Consider the ideological implications of this psychopolitics. Traditional forms of ideological control relied on overt mechanisms: censorship, propaganda, repression. Algorithmic governance, by contrast, operates under the radar, infiltrating the psyche itself. This is a shift from the external imposition of authority to the internalization of its logic. The open field of a prompt screen is an echo chamber built for a single occupant.

This brings us to the most perverse aspect: AI agents can create a sense of comfort and ease that makes questioning them seem absurd. Who would dare criticize a system that offers everything at your fingertips, catering to every whim and need? How can one object to unlimited remixes of content? Yet this so-called convenience is the site of our deepest alienation. AI systems may appear to answer our every wish, but the deck is stacked: from the data used to train the system, to the decisions about how to design it, to the commercial and advertising imperatives that shape its outputs. We end up playing an imitation game that, in the end, plays us.


