The sensationalism of agentic models

This week, practically every news agency and commentator in the AI space has been talking about Moltbook. This social media platform — allegedly exclusively populated by agentic models — launched last week to widespread fanfare.

As an aside, for those who may not be aware, Agentic AI (also known as AI agents) combines a generative model (e.g., ChatGPT, Claude, and Gemini) with a control mechanism that can engage with this creative capability and act.

There are 3 broad categories of AI agents:

Simple: retrieves information (e.g., librarian)

Task: takes actions when asked (e.g., intern)

Advanced: autonomously acts based on a trigger (e.g., assistant).

The alleged bots made observations about human behaviour, grappled with their own consciousness, and discussed how to securely chat in a way humans couldn’t decode. Both AI doomers and AI accelerationists ate this up—posting screenshots that journalists picked up. In fact, some proclaimed this was proof that Artificial General Intelligence (AGI) is just around the corner. There was so much fanfare — yet everything about this reeks of sensationalism.

There are two main reasons for this suggestion:

  1. Humans were on this platform — meant only for bots. The journalist in the Wired article gained access by asking ChatGPT how to do it and then carried out those actions. In fact, there was no mechanism to confirm whether you were a bot or a human. As a result, this site was crawling with humans.

  2. These agents were mimicking sci-fi — not planning to end humanity. Casting aside the humans pretending to be bots, the actual agents were doing nothing more than drawing from their training data, which includes sci-fi material. We humans, love to theorize about our demise, which the bots have caught on to and are just regurgitating back to us.

This whole thing was nothing more than a distraction that undermines people’s understanding of the real potential and drawbacks of agentic models. To remedy this, I will outline below a few of the promises and perils of agentic models.

Potential

  • Democratizing force. Agentic models can serve as your assistant, helping you code and execute tasks (e.g., sorting and answering emails). These time savings and leveraging capabilities have always been available to the wealthy — though in the form of human labour — but can now be accessed for free or for a small fee (I've seen some for $20 a month), compared to the cost of hiring people.

  • Time savings through automation. An agent can carry out a series of tasks, completing tasks for 5 hours or more.

  • Available 24/7. Similar to generative models. These agents don’t get sick, need time off, and don’t sleep.

  • Greater personalization. An agent can be tailored to your specific needs.

Draw backs

  • Increase in AI Slop. Prior to agentic models, a person wanting to scam or create garbage that wastes people’s time would have to manually prompt a generative model, copy it, maybe edit it, and post it on all the platforms. Now, you can prompt an AI agent to do the whole process, in the background, while you carry on with your day.

  • Encourages laziness and haphazard work. Moltbook exposed the data of 6000 users (including 35000 email addresses). All because vibe coding (a process in which someone codes without knowing how) was used, and no one seems to have checked the basic security requirements.

  • Prompt injection. Bad actors are masking malicious prompts as legitimate. These indirect injections occur when a model processes the content from an external source that contains hidden instructions. For example, you download an agentic browser, log into your bank account or email, prompt the agent to summarize a piece of content on the internet (which contains directions that you do not see), the model complies and sends your data or money to this malicious actor.

  • Unsustainable traffic increase. Agents are already generating a considerable share of web traffic, and that share will likely increase. We are now in an arms race between the people who manage websites and the bots that are directed to crawl and scrape their sites. These agents are bypassing the traditional method for stopping scraping (robots.txt), and website managers are doing everything they can to stop it. This is resulting in a worse experience on the internet for humans. People are increasingly commenting that the number of CAPTCHA requests has increased significantly.

So what are you to do with all this information?

  1. Stick to a weekly roundup of AI news if you want to follow along. Things get blown out of proportion all the time, but once the dust settles and clear eyes can look it over, the real picture emerges. If you absolutely want to follow along daily, just make sure to always seek the other side of the argument and find what others are saying about the story.

  2. If you are going to use agentic models, do not log into sensitive sites. There are too many risks with these models for them to buy anything right now, so steer clear of logging in.

  3. Reduce the number of agents you have and the number of things they do. If you want to test these tools out, keep them to a minimum with data that is not particularly sensitive. It is best to assume that whatever you ask it to see or do will be shared with outside parties.

AI agents have the potential to be a democratizing force for good, giving us all more free time. But there are also many risks with these tools that must be considered. These risks are not the overblown doom or prosperity that may occur sometime in the future. Instead, they are the current risks that are not getting the airtime they deserve.

Hope this helps,

Emanuel

Previous
Previous

Employing Ulysses contracts to dissuade AI peacocking

Next
Next

The cart, the horse, and what it means for you