The headlines are predictable. They are also wrong. Every election cycle, a fresh wave of panic hits the mainstream press: "Overseas Content Farms Uncovered," or "Deepfake Factories Targeting Our Democracy." The narrative is always the same. We are told that shadowy figures in Eastern Europe or Southeast Asia are pumping out AI-generated disinformation, and that this digital sludge is the primary threat to the sanctity of the vote.
It is a comforting lie.
It suggests that if we could just "solve" the content farm problem—if we could just build better filters or pass more restrictive laws—our political discourse would return to a state of reasoned, Athenian debate. This is the lazy consensus. It ignores the brutal reality of the modern attention economy. The threat isn't that a bot in Macedonia is lying to you; the threat is that you have already decided what you want to believe, and the bot is simply fulfilling a market demand.
The Myth of the Undecided Voter
The central premise of the "content farm" panic is that a well-timed deepfake can flip a voter. This fundamentally misunderstands how voters behave. Decades of political science research, including the work of the late Philip Converse, suggest that most voters are remarkably stable in their partisan identities. They don't switch sides because they saw a grainy video of a candidate saying something scandalous in a kitchen.
Voters use disinformation as a tool for social signaling and tribal reinforcement. They aren't victims of a deepfake; they are active participants in its distribution, because it validates their existing worldviews. When a content farm generates a fake audio clip of a politician, it isn't trying to convince the opposition. It is supplying "ammunition" for the base. We are not being manipulated by foreign actors; we are outsourcing our internal biases to the lowest bidder.
Why Quality Doesn't Matter (And Why AI Changes Nothing)
The media obsesses over the "realism" of deepfakes. They warn that AI is getting so good we won't be able to tell what’s real. This misses the point entirely. In the world of high-velocity political content, quality is a bug, not a feature.
I’ve watched digital campaigns burn through millions of dollars on high-production-value "truth" videos that get zero engagement. Meanwhile, a poorly cropped meme with a blatant lie typed in Impact font goes viral in twenty minutes. Why? Because frictionless content is easier to digest.
AI doesn't change the nature of propaganda; it only reduces the marginal cost of production to zero. Before AI, a content farm needed a dozen low-paid writers to churn out fake news. Now, it needs one person and a script. But the impact per unit of content remains the same. If 1,000 fake articles produced by humans didn't break the system in 2016, 1,000,000 fake articles produced by AI won't break it in 2026. We are already at "Peak Bullshit." Adding more volume doesn't increase the effectiveness; it just raises the noise floor.
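To make the "noise floor" claim concrete, here is a back-of-the-envelope model (my own toy sketch, not drawn from any study, with an assumed attention budget): if readers can only consume a fixed number of articles, multiplying the supply of fakes dilutes exposure per article instead of increasing total exposure.

```python
# Toy model: attention is a fixed budget, so added supply dilutes rather than adds.
ATTENTION_BUDGET = 50  # articles an average reader actually consumes (assumed figure)

def expected_reads_per_article(supply: int) -> float:
    """Each article's share of a capped attention budget."""
    consumed = min(supply, ATTENTION_BUDGET)  # total reads can't exceed the budget
    return consumed / supply

for supply in (1_000, 1_000_000):
    print(f"{supply:>9} articles -> {expected_reads_per_article(supply):.6f} reads each")
# 1,000x more articles, same 50 total reads: the signal doesn't grow,
# the noise floor does.
```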
The Foreign Interference Boogeyman
Blaming overseas content farms is a convenient way for domestic political actors to avoid looking in the mirror. It is much easier to point at a "troll farm" in Saint Petersburg or a "click farm" in Manila than to admit that the most effective disinformation is generated right here at home, by domestic partisan influencers.
The "foreign" label provides a sense of external contagion. It treats disinformation like a virus that we caught from someone else. In reality, the infrastructure for spreading these lies is built and maintained by the platforms we use every day. The algorithms don't care about the geographic origin of a post; they care about "dwell time" and "share rate." If a content farm in Dhaka produces a video that keeps American users on an app for five minutes longer, the algorithm will promote it. The "farm" is just responding to the incentives we created.
The Real Cost: Institutional Decay
While we chase the phantom of the deepfake, we ignore the actual crisis: the total collapse of trust in the institutions that are supposed to verify reality.
When everything can be fake, nothing is true. This leads to the "Liar’s Dividend." A politician caught in a genuine scandal can now simply claim the evidence is an AI-generated deepfake. The panic over content farms has given every bad actor a "get out of jail free" card. By over-hyping the capabilities of AI-generated disinformation, the media has inadvertently weaponized skepticism.
The goal of a content farm isn't necessarily to make you believe a lie. It is to make you stop believing in the possibility of truth. If you believe that everything on your screen might be a deepfake, you don't become a more critical thinker. You become more cynical. You retreat into your partisan bunker, trusting only the voices that tell you what you want to hear.
Dismantling the "People Also Ask" Nonsense
"How can I spot a deepfake?"
Stop hunting for technical glitches like six fingers or weird blinking patterns; those tells will be gone in six months. Instead, ask yourself: "Does this content trigger an immediate, visceral emotional reaction that aligns perfectly with my political hatreds?" If the answer is yes, it’s probably manipulation, whether it’s a deepfake or not.
"Are foreign content farms stealing elections?"
No. Elections are won or lost on economic conditions, demographic shifts, and candidate quality. There is no empirical evidence that a foreign content farm has ever swung a national election in a major democracy. We over-attribute power to these actors because it’s easier than admitting our own political failings.
"What can the government do to stop them?"
Nothing that won't result in catastrophic censorship. Any tool powerful enough to "scrub" the internet of foreign disinformation is powerful enough to be used by the incumbent party to scrub the internet of dissent. The "fix" is often more dangerous than the problem.
The Uncomfortable Solution
If you want to kill the content farm industry, you don't do it with legislation or "AI detection" software. You do it by reducing the demand.
As long as there is a market of millions of people hungry for content that validates their anger, someone will fill it. Whether the supplier is a teenager in Veles or an AI pipeline in a server rack, the result is the same.
The focus on "overseas actors" is a form of digital xenophobia that masks a domestic cultural crisis. We are addicted to the outrage. We crave the dopamine hit of a "gotcha" moment. The content farms aren't the architects of our division; they are the janitors cleaning up the scraps of our broken discourse.
Stop looking for the bot in the machine. Look at the person holding the phone.
The content farm isn't the threat. You are.