Audience of One
- avitalbalwit
On Hyper-Optimized Content, Memetic Viruses, and the Theft of the Self

I have had the uncanny experience of opening Pinterest lately and finding, amidst the tiles on wedding aesthetics and garden design, lurid short videos playing automatically. Werewolf love stories feature prominently. As do romances with mafia bosses. Evil sibling rivalries, revenge stories against bullies. I am learning a lot about the collective unconscious of the Pinterest user.
Some are clearly AI generated. Some are cheesy B-level acting. Some are hard to tell. Each clip is melodramatic and ends on a cliffhanger, meant to pull you in, but the clips roll continuously. I am ashamed to say that periodically I will realize I have been watching one of these roll for several minutes, unable to look away.
My partner showed me AI-generated shorts from his Instagram. These are clearly tailored to a different demographic—“If you got dropped back in the dark ages, could you survive?” and “If you sold pizza in a Roman market, what would happen?” Purely AI-generated visuals, with a voice that may or may not be human either.
There was a time several years ago when “misinformation” and “deepfakes” were much talked of but not yet widely present. They didn’t really materialize then, or at least not convincingly and not at scale. I think that time is over. But the threat is not what we expected.
AI will soon make it trivially cheap to produce content that is superhumanly optimized for each individual viewer, which will allow for the reshaping of what people want, believe, and do to a degree never before possible. It will start, and has started, with something mundane -- a continuation of current ills rather than a radical break. Think SuperTikTok, at least at first, rather than elaborate scams or fake videos of politicians behaving badly. But it will be a plague nevertheless.
***
There was a moral panic over TikTok. “Is the CCP brainwashing our children?” Kind of, but even if it was just a good old American algorithm running the show, it would still be horrifying, because it steals the minutes that make up these children’s lives and gives them nothing of value in exchange.
But previously it was at least artisanal, human TikTok. Some human was putting on their makeup, doing a silly dance, or delivering a political rant. Now, video-generation and speech-generation AI makes it incredibly easy to churn out videos with minimal human effort. No longer does a human need to learn a TikTok dance and film themselves; instead they can generate a video in seconds of an infinitely hotter and more skilled dancer.
Why does this matter? Is this just generic Luddism? What if the AI videos are better? The bar was in fact very low.
It matters because short videos are already a plague. They are hard for us to cope with—our eyes are drawn to movement, we are pulled to novelty, each one is so short that we can justify watching it, but their design is such that they roll endlessly into the next. It is easy to lose track of time. And given their length, they have no room for deep argument, nuance, or much detail, so they provide essentially nothing in return. I think about that meme of some boy saying that in two minutes on Tinder he has seen more beautiful people than his distant ancestors would have seen in their entire lives. On an evolutionary timescale, we are babies facing down tigers.
***
Three things change when AI content gets cheap.
First, there will simply be more of it. The cost of production falls dramatically, so the flood of short-form content will become a deluge.
Second, it can be personalized in ways that were never previously viable. Right now, if a human is going to record a political rant, they aren’t going to record a thousand different versions for all their different followers. With AI, this is trivial. Very soon, all the videos on my Pinterest won’t be for someone “like me”—they will have been made for literally me. That will make it even harder to look away.
Third, the optimization power that can be thrown at each individual user will increase vastly. Right now, while there are marketing teams thinking about how to win your attention, there are not teams of Nobel laureate–tier people focused specifically on you, or even on a small subset of people that includes you. That will be viable soon. For the price of tokens, which keeps falling, it will be completely feasible to devote top AI talent to optimizing content for you personally. No longer an offshore marketer playing with an AI model, but the AI model itself.
Already, the current level of optimization has been hard for people to resist. Twitter echo chambers, YouTube rabbit holes, buying whatever is advertised to you on Instagram. And that is just with the present personalization and optimization tech.
***
Cheap, hyper-optimized, hyper-personalized AI content. What can it do?
Well, it can waste your time. It can make you buy things you didn’t need. It can likely change your mind in various ways. We have had various panics, justified and unjustified, about online radicalization pipelines. These often involve people without strong defenses—young people, less educated people, the mentally ill. But even those of us who consider ourselves sophisticated consumers of online media, wise to the world, may find it hard to resist this new kind of content. Maybe I am deluded, but I consider myself quite disciplined and I can still tell you the plotlines of several horrible short series that I found myself watching.
And the deeper worry goes beyond attention. As these AI systems get incredibly smart—superhuman-level smart, moving at a much faster pace, throwing so much more computing power at a problem than a human can—it seems like even if they don’t touch you, take your stuff, hurt you, or physically change you, they could have a very strong influence on what you think and what you want, what you believe, and thus what you do.
Think about hyper-optimized propaganda that’s perfectly tuned to you. Or the counsel of a friend—the same way your friends or your partners influence your beliefs. These incredibly smart AIs, who might also become incredibly charismatic, could really reshape you. You could imagine they make a video game perfectly optimized to get you to believe some set of things. It could be an AI companion with an agenda. A company trying to get you to buy their stuff. A political party trying to get you to vote for them. A foreign nation trying to undermine yours. Activists with some well-meaning plan. I had a passionate vegan friend speak with excitement about using AI for vegan messaging, and I had an intense feeling of vertigo as I pictured every interest group in the world having that same realization. AIs from different factions will be competing for our souls.
***
How in the world would we prevent this?
There’s a quarantine path, where you are just extraordinarily careful about what media you interact with. This is the “off the grid” approach -- essentially trying to avoid interaction with any AI-generated content, or potentially having a trusted AI screen all content that you access.
But this is likely something that needs to be decided at a societal level. If my friend has interacted with hyper-persuasive, hyper-optimized AI and it’s changed their beliefs, that could rub off on me too. I could basically catch a memetic virus. If these systems are trying to advance some agenda, one way they do it is by convincing the people around you, who then also convince you.
So what would it mean to ban hyper-optimized hyper-persuasive content?
First, an aside: let me be clear that I am not talking about any content that exists today. Banning content is a perilous path that should be taken very seriously, and any regulation would of course need to be viewpoint neutral. The kind of content I am worried about comes from smarter-than-human systems spending far more optimization power than any human or human organization analogue, e.g. far more than a marketing team or the interest groups of today. Back to the problem:
Conversation in general changes our beliefs. We aren’t likely to ban conversation with AIs. Are we banning interactions with AIs that are far smarter than us? Not an insane proposition. Adults can shape the beliefs of children, and we are conscious of this; see all the panics over what agendas teachers are pushing on students. But how do you enforce this in AI? It could be done through compute caps on interactions. (In AI regulation, rules that use computing power as the proxy for some capability you care about are the worst idea except for all the others.)
For instance, there could be a cap on how much compute can be spent optimizing for you. You can’t have an AI system model a thousand versions of you over a thousand years and figure out the perfect thing to say to you right now. It simply wouldn’t be fair. Of course, there are complexities in distinguishing a large use of compute for this kind of optimization from intense research that you specifically requested. Monitoring the compute going toward optimization would also need to be paired with reading the model’s chain of thought, or some interpretability, to know what it was doing with all of that compute.
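To make the cap idea concrete, here is a minimal sketch of what a per-user “optimization compute” budget could look like. Everything here is hypothetical illustration—the class name, the abstract “units,” and the thresholds are invented for this post, not any real platform’s API:

```python
# Toy sketch of a per-user cap on personalization compute.
# All names and numbers are hypothetical, for illustration only.

class OptimizationBudget:
    """Tracks compute (abstract 'units') spent tailoring content to one user."""

    def __init__(self, cap_units: float):
        self.cap_units = cap_units
        self.spent = 0.0

    def try_spend(self, units: float) -> bool:
        """Allow a spend only if it stays within the per-user cap."""
        if self.spent + units > self.cap_units:
            return False  # refuse further personalization for this user
        self.spent += units
        return True

# Usage: cheap tailoring is allowed; an expensive user-model simulation
# that would blow past the cap is refused.
budget = OptimizationBudget(cap_units=100.0)
assert budget.try_spend(60.0) is True    # cheap tailoring: allowed
assert budget.try_spend(50.0) is False   # would exceed the cap: refused
assert budget.try_spend(40.0) is True    # exactly reaches the cap: allowed
```

The hard part, as noted above, is not the accounting but deciding which compute counts as “optimizing for you” versus research you asked for.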
And how would you decide what counts as a manipulative statement? One way, courtesy of Paul Christiano, is to run this thought experiment: Take a statement that an AI says to you. Then ask versions of yourself across a bunch of different “worlds,” e.g. plausible paths forward into the future without that statement, whether they view that statement as manipulative or legitimate. Now, why would this help? Because statements can change your trajectory without being manipulative. Just the fact that they changed your trajectory is not inherently bad. The question is whether they did so in some illegitimate way.
For instance, if you’re wrong about an empirical fact and the AI system changes your mind on that fact, that seems good. A future version of you should look back at that exchange and think, “Yep, it was good. It was not manipulative that they changed my view on that fact.” But if it changed your political party to one that you wouldn’t have chosen otherwise, and all the other versions of you across all the other possible worlds are like, “Man, that doesn’t make any sense for me to have done that”—that seems very different.
Now, a fair objection is: “that’s a very impractical way to decide if something is manipulative -- it literally involves parallel universes.” Yes, well, you can either approximate the exercise by thinking through what paths were likely for you, or it could involve running simulations, which in this AI-drenched future will be possible. But the meta point is: did this statement produce some surprising outcome, given the path you were on before, in a way that seems fishy? Not ‘they changed my mind on an empirical fact’ or ‘they helped me have some revelation that brought me closer to my reflective self,’ but ‘they pushed me onto a trajectory I would not have traveled down without them, one that feels out of line with my true self.’
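The aggregation step of the thought experiment can be sketched in a few lines. This is purely illustrative: the “worlds” are stubbed out as judgment functions standing in for counterfactual versions of you, and the function name and threshold are my own inventions, not anything from Christiano:

```python
# Toy sketch of the counterfactual-selves test. The "worlds" are stand-in
# judgment functions; everything here is a hypothetical illustration.

from typing import Callable, List

def seems_manipulative(
    statement: str,
    world_judges: List[Callable[[str], bool]],
    threshold: float = 0.5,
) -> bool:
    """Flag a statement when most counterfactual selves refuse to endorse it.

    Each judge stands in for a version of you on a plausible alternative
    path, returning True if that self views the statement as legitimate
    (e.g. it corrected an empirical error) and False if it looks like an
    illegitimate push off your trajectory.
    """
    endorsements = sum(judge(statement) for judge in world_judges)
    return endorsements / len(world_judges) < threshold

# Usage: a factual correction is endorsed in nearly every world, so it is
# not flagged; an unexplained political flip is disowned in most worlds.
fact_fix_judges = [lambda s: True] * 9 + [lambda s: False] * 1
flip_judges = [lambda s: False] * 8 + [lambda s: True] * 2
assert seems_manipulative("a factual correction", fact_fix_judges) is False
assert seems_manipulative("an unexplained flip", flip_judges) is True
```

The interesting design question is the threshold: requiring near-unanimous disapproval is lenient toward persuasion, while flagging anything a bare majority of selves disowns is strict.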
Note: this is just a thought experiment about how you might decide what counts as manipulative content by your own lights. Society or government deciding for you is a much more complicated and fraught question that I will not wrestle with here.
***
Right now, we already see elderly people get taken in by AI phone scams. They are not at all prepared for a world where someone can clone their grandchild’s voice. Something like that is coming for all of us, and we don’t know what it is.
There are already early analogs. Instagram ads for something you’ve never considered before that you desperately want to buy and feel inadequate without. Catchy songs stuck in your head for days. Images you can’t unsee. These are pale echoes of what will come. But they prove that simple, unchosen content can change us and stick with us. There are far more tools available to AI systems to affect what you do than naked force or compulsion.
The most insidious AI content threat may not be the public figure deepfake, but may instead be a continuation of current ills: the slow but steady theft of a life through slop, the subtle sculpting of a person into a better tool for someone’s ends. Though there is no guarantee that this will stay slow or subtle.
This blog, like all my blogs, was written in my personal capacity and does not represent the views of my employer. Many of the ideas here were inspired by conversations with Ajeya Cotra and Paul Christiano.
Header is random AI content found on Pinterest.
