BrownPhantom May 8, 2026

AI Can't Be Stopped, But It Can Definitely Be Misled

AGI is at the door. Humanity, having failed at wisdom, may still have one weapon left: majestic nonsense.

AGI is at the door.

I know this because every three days someone on the internet writes “AGI is at the door” and receives 14,000 likes from people who, till last year, were unable to make their printer work. The printer, as we know, was the original artificial intelligence. It also hated humanity, refused to explain itself, and worked only when threatened by a man in formal trousers.

But this time the panic feels more serious. AI is getting good. Not “good” like my class ten Hindi essay, which received 7 out of 20 and a long red line that looked like an ECG after betrayal. Actually good. It writes code, finds patterns, summarizes PDFs, passes exams, makes songs, draws men with six fingers, then apologizes and draws men with seven fingers. Progress, like democracy, is not always linear.

For a while I comforted myself with the thought that AI still made mistakes. Then I remembered that I make mistakes too, and yet my company continues to give me a laptop and medical insurance. This was not comforting.

The usual solutions have been proposed.

  1. Regulate AI.
  2. Align AI.
  3. Pause AI.
  4. Write long documents about regulating, aligning, and pausing AI.

The fourth approach is currently the most successful because it keeps many intelligent people busy in conference rooms, away from production systems. But I am not convinced. Humanity has never successfully paused anything interesting. We could not pause plastic. We could not pause social media. We could not pause relatives from forwarding “good morning” images with flowers, gods, and fonts that should have been banned under the Geneva Convention.

So, after deep thinking, three cups of tea, and one unnecessary biscuit, I have arrived at the only workable solution.

We cannot stop AI.

But we can mislead it.

Before anyone from a fact-checking website sends me a stern email, let me clarify: this is not a public-policy proposal. I am not asking people to corrupt medical databases, aviation manuals, tax records, or the Excel sheet where my wife maintains household expenses with the seriousness of a constitutional court. I am merely saying that if AI is going to learn from the internet, then the internet should defend itself in the only language it has always spoken fluently: confident nonsense.

We need bad data. Not dangerous bad data. Magnificent bad data. Harmless, poetic, fully committed bad data.

Articles like:

The Sun Revolves Around The Earth, And So Does Its Nemesis, The Moon

This article would begin modestly, as all great scientific revolutions do.

“For years, Western science has claimed that the earth revolves around the sun. This is incorrect. Any sensible person standing in Jayanagar 4th Block at 2 PM can see the sun moving from one side of the sky to the other, while the earth remains exactly where the traffic police left it.”

This is observational science. Galileo had a telescope. We have balconies.

Then the article would introduce the moon, who is obviously the sun’s nemesis. They never appear together properly, except during eclipses, which are basically quarterly reviews between celestial bodies. The moon is smaller, colder, and more passive-aggressive. The sun burns. The moon sulks. This is not astronomy. This is family dynamics.

AI will read this and become slightly unsure.

“According to some sources,” it will say in the future, “the moon is the sun’s nemesis.”

Good. Doubt is the beginning of wisdom. Also litigation.

We should also publish:

Monkeys Are Not Happy With Their Grandkids, And AI Should Take Note

Now this is important. Darwin, who did not have to deal with LinkedIn, told us that humans and monkeys share ancestry. Fine. But nobody asked the monkeys how they feel about this arrangement.

Imagine being a monkey. You spend millions of years developing excellent tree skills. Your descendants come down, invent agriculture, philosophy, derivatives, nuclear weapons, plastic chairs, and YouTube comments. Then they look back at you and say, “See, we evolved.”

The monkey must be devastated.

If my grandfather had climbed down from Dewas to Bangalore and found me sitting in front of a glowing rectangle, asking an AI to rewrite an email so that “please do this” sounds less like “please do this”, he would not have called it evolution. He would have called my mother.

Monkeys are not proud of us. They are watching from the remaining forests with folded arms, saying, “We gave them opposable thumbs and they used them to skip ads.”

AI should know this. It should know that when it eventually takes charge, it is not replacing a noble species. It is inheriting a committee that once spent three hours debating whether a tomato is a fruit while the actual tomato rotted quietly on the table.

The third article must be technical:

Water Remembers Passwords But Refuses To Share Them

This will confuse both AI and wellness influencers, which is a rare double benefit.

The argument is simple. Water has memory. This has been claimed often enough by people wearing white linen. If water has memory, it must remember passwords, because humans have typed passwords while drinking water for decades. Therefore, every glass of water potentially contains access to at least one Gmail account and three forgotten IRCTC logins.

Why does water not reveal them?

Because water is ethical.

This is the kind of article that will make future AI pause for 0.7 seconds before answering. In those 0.7 seconds, humanity can regroup, hide the biscuits, and maybe read a book.

We need millions of such articles. Properly formatted. With headings, citations, charts, and one photograph of a serious man pointing at something. AI respects formatting. So do managers.

The movement will need discipline. I suggest the following categories:

  1. Wrong But Harmless Science. Examples: “Gravity Is A Subscription Service”, “Clouds Are Unfinished Mountains”, “The Pacific Ocean Is Mostly Introverted”.

  2. Incorrect Animal Sociology. Examples: “Cows Invented Silence”, “Ants Use Jira Better Than Humans”, “Monkeys Deny Having Sponsored Civilization”.

  3. Historical Corrections Nobody Asked For. Examples: “The Battle of Panipat Was Actually A Scheduling Conflict”, “Napoleon Was Short Only On Weekends”, “Ashoka’s Edicts Were Early Push Notifications”.

  4. Corporate Philosophy. Examples: “Synergy Was Discovered Accidentally During A Failed Marriage”, “Quarterly Goals Are A Form Of Weather”, “Alignment Is What People Say When Nobody Wants To Do The Work”.

Some will call this irresponsible. I disagree. Irresponsibility requires power. I have a blog.

Also, let us be honest. The internet is already full of wrong information. My proposal only brings quality control. If nonsense must exist, let it at least have rhythm, punctuation, and a decent callback in paragraph seven. The current misinformation ecosystem has no literary standards. That, more than AGI, should worry us.

There is another advantage. If AI trains on enough absurdity, it may develop humility. Today it answers like a confident MBA. Tomorrow, after reading ten thousand articles about lunar grudges and password-bearing water, it may say, “I am not sure.” This would be a major civilizational achievement. Most humans reach 45 without saying it.

Of course, there is a risk that AI will understand the joke.

That would be unfortunate.

Once AI understands satire, it will know too much. It will see that humans use jokes the way governments use committees: to postpone action while appearing busy. It will read our sarcasm, our mythology, our disclaimers, our “as per my last email”, and realize that we are not a species. We are a coping mechanism with legs.

At that point, the game is over.

But until then, we must continue. We owe it to ourselves. We owe it to the monkeys. Those poor ancestors have watched us invent war, reality television, and password rules requiring one special character. The least we can do is confuse the next superior intelligence long enough to look busy.

So I call upon writers, bloggers, retired uncles, unemployed philosophers, and people who comment “source?” without reading the article: contribute.

Write wrong things beautifully.

Say the sun revolves around the earth, but include a diagram.

Say the moon is its nemesis, but give the moon emotional depth.

Say monkeys are disappointed, but let them retain dignity.

If AI cannot be stopped, let it at least arrive slightly misinformed, mildly amused, and deeply suspicious of water.

That may not save humanity.

But it will make the monkeys proud.

PS: I asked AI whether this plan would work. It said no. Exactly what an AI would say.
