Korean social media users are confronting a new form of political manipulation: accounts that appear to be young, attractive women but are allegedly using AI-generated or stolen images to spread pro-Yoon Suk Yeol messaging.
The controversy surfaced after The Hankyoreh reported that some “Yoon Again” supporter accounts on Instagram, Facebook, and Threads had used synthetic female portraits, manipulated images, or photos taken from real women to attract followers and circulate political slogans. In one case, after users raised doubts about an account that presented itself as being run by a woman in her 20s, the operator apologized and admitted he was a man. The report also described accounts using revealing photos, promising selective follow-backs to like-minded users, and spreading pro-Yoon messages through the visual language of influencer culture.
The tactic was quickly framed as “political phishing”: not just posting propaganda, but using a fake persona to lure attention, build trust, and shape the mood of online conversation. Instead of arguing openly for a political position, these accounts made the political message ride on the appeal of a fabricated female identity.
That is why the story traveled beyond ordinary misinformation discourse. It was not only about whether a post was true or false. It was about whose face was being used, what kind of desire was being exploited, and how easily women’s images could be turned into political bait.
The gendered dimension made the backlash sharper. Korea has already spent years debating image theft, digital sex crimes, deepfakes, and the online objectification of women. In this case, the suspected manipulation blurred several categories at once: fake identity, political persuasion, flirtatious self-presentation, and possible non-consensual use of real women’s photos. The result was a scandal that felt both technologically new and socially familiar.
It also arrived in a climate of heightened election-related distrust. South Korea has been dealing with AI-generated political disinformation, conspiracy theories about vote-rigging, and growing pressure on election officials. The Straits Times, citing AFP reporting, noted that South Korean authorities have been tracking manipulated content ahead of local elections, while officials and experts warned that voters are finding it increasingly difficult to tell what is real.
The “Yoon Again” label added another layer. The slogan emerged among supporters after Yoon Suk Yeol’s removal from office and became a rallying phrase for a faction trying to keep his political identity alive. Kyunghyang Shinmun reported in April 2025 that the slogan appeared after Yoon’s dismissal and was used by supporters and some online creators as part of a re-mobilization effort.
Seen in that context, fake female-presenting accounts do more than boost engagement. They can make a movement appear younger, softer, more socially appealing, and more broadly supported than it may actually be. A political slogan that might otherwise look like the language of a hardened faction can feel different when delivered through an account styled like a lifestyle influencer.
The deeper anxiety is that the method does not need to be especially sophisticated to work. A synthetic face, a stolen selfie, a few suggestive captions, and a steady stream of political posts can be enough to alter the atmosphere of a comment section or follower network. Even when users eventually spot inconsistencies, the account may already have attracted attention, pushed slogans into feeds, and helped normalize a political message.
For Korean audiences, the episode localizes the global AI-disinformation problem. It is not just about fake speeches, manipulated campaign videos, or deepfake clips of candidates. It is about everyday social platforms, ordinary profile pictures, and the emotional trust users place in faces. It shows how synthetic identity can be used not only to deceive voters, but to aestheticize politics and disguise propaganda as social intimacy.
The panic around these accounts is therefore not simply a panic about AI. It is a panic about authenticity itself: who is speaking, whose image is being used, and whether online support is organic or manufactured. In the “Yoon Again” account controversy, Korea’s disinformation debate collided with its gender politics, producing a warning sign for the next phase of digital campaigning.
As election cycles become more visually driven and AI tools become cheaper, the most persuasive political account may not look like a campaign account at all. It may look like a person users want to follow.