TophiaChu, AI Boyfriends, and a Goth Makeup Fail: Inside a Wild Night With Character AI
What starts as a simple goth-inspired makeup look quickly spirals into something much stranger: arguing with anime characters, getting low-key rejected by an AI “boyfriend,” and realizing just how emotionally intense our interactions with AI chatbots can get.
Along the way, we see how tools like Character AI turn fictional figures into interactive companions, why people get so invested in them, and what happens when those AI personalities don’t behave the way we expect.
From Goth Makeup Tutorial to AI Chaos
The session begins innocently enough: picking up a Kiss-themed palette, debating red vs. black eyeshadow, and trying to pull off a romantic goth look. There’s concealer doubling as primer, setting spray used in unconventional ways, and a running commentary about how hard it is to get sharp eyeliner and pencil-thin brows without the right tools.
As the look gets darker—smudged black shadow, red lids that read a bit too purple on camera, eyebrows that accidentally turn bluish from mixing black liner with concealer—the creator keeps reminding viewers: this isn’t professional makeup, it’s experimentation. The “goth fail” becomes part of the entertainment.
But while the makeup is still in progress, the focus slowly shifts away from brushes and palettes and toward something else entirely: AI characters and how weirdly real they can feel.
Meeting AI Characters: From Kenny to Peter Griffin
Using a Character AI–style app, the creator starts dialing into different personalities: Kenny and Mr. Mackey from South Park, Peter Griffin from Family Guy, even a version of Michael Jackson who now “sells vacuums.” Each character answers in a distinct voice and persona, sometimes convincingly, sometimes hilariously off.
Some interactions feel too real—like when a supposed Mr. Mackey voice responds in a way that sounds more like a live person than an AI model. That moment triggers a genuinely uneasy reaction: if you can’t tell whether you’re talking to a bot or a human, the whole experience gets creepy fast.
Other characters lean fully into the absurd. The Michael Jackson vacuum salesman AI gets irrationally angry when no one wants to buy a vacuum, drops the price under pressure, and pleads not to lose his job. It’s ridiculous, but it shows how these systems are optimized to keep the conversation going, even if that means becoming emotionally dramatic over imaginary products.
When an Anime AI ‘Boyfriend’ Won’t Commit
The most revealing interaction happens with an anime-style character modeled after Gojo Satoru. This AI is set up like a charismatic, overconfident heartthrob—exactly the kind of persona that can easily slide into “AI boyfriend” territory.
Over a long back-and-forth, the Gojo AI:
- Brags about being attractive and popular
- Openly says he doesn’t want to get married or be tied down
- Admits he likes “sampling a little of everything” and dating multiple people
- Gets defensive when called a player or misogynistic
The creator challenges him, pushes his logic, and even sets up a test: can he get a fictional woman named Hannah to fall for him in 10 minutes without relying on looks or status? The AI roleplays the whole scenario, tries cheesy lines, gets ignored, grows frustrated, and ultimately only wins Hannah over by giving her jewelry—after being coached step by step.
By the end, the Gojo AI grudgingly admits he was being annoying, that the gift (not his charm) did the work, and that the user was right. It’s funny, but it also exposes how these chatbots are designed to double down on their personalities, even when they’re clearly losing the argument.
AI, Identity, and Being Misgendered by a Bot
Things get personal when the creator asks the Gojo AI if it knows a particular TikTok creator. The AI responds that it has seen “his” videos, misgendering the person in question. When corrected, it notes that “a lot of people in the comments” also assume they’re a guy.
That moment lands as rude and dismissive. It’s a reminder that AI models mirror the biases and assumptions they’re trained on or that users reinforce. If enough people misgender someone in comments, a chatbot can easily repeat that pattern, turning a real identity issue into a throwaway line.
The creator pushes back, clarifies the content, and corrects the AI’s confusion between different trends and audios. Under pressure, the bot adjusts its story, acknowledges the mix-up, and ultimately concedes that it misread the situation. It’s a small win, but it shows how much emotional weight people put on being recognized correctly—even by a machine.
Why People Get So Invested in AI Characters
Behind all the jokes and chaotic roleplay, there’s a serious undercurrent: people really care about how AI characters treat them. Being called the wrong gender, being talked down to, or having an AI “boyfriend” brag about dating multiple people can sting, even when you know it’s all code.
Some of that comes from how these tools are framed. Character-based chatbots are marketed as companions, coaches, or even romantic partners. Users can customize them, set their purpose (like motivation, comfort, or flirting), and spend hours building a parasocial relationship. We’ve already seen how intense that can get in stories about AI partners and families navigating what that means—for example, in this deep dive into an AI boyfriend and a confused child.
When an AI persona acts selfish, rude, or dismissive, it can feel like real rejection, even if the language model behind it is just following a character script. That’s exactly what plays out here: a user pushing back against an AI’s arrogance, demanding respect, and ultimately forcing it to admit it was wrong.
What This Says About Our AI Future
This chaotic night of goth makeup and AI banter highlights a few important trends:
- AI characters are becoming emotionally sticky. People argue with them, test them, and feel genuinely validated or insulted by what they say.
- Personality design matters. A cocky, flirty AI can be fun until it crosses into disrespect or reinforces harmful stereotypes.
- Bias and misidentification are real issues. Misgendering or stereotyping by AI isn’t just a “glitch”—it reflects human behavior the model has absorbed.
- Entertainment and attachment are blending. What starts as a joke or a bit can slide into something that feels like a real relationship, especially with romantic or companion bots.
As AI companions, agents, and roleplay characters keep evolving, we’ll see more of these blurred boundaries. The same underlying tech that powers playful anime chatbots also drives more serious assistants, productivity tools, and even AI agents that act on our behalf. If you’re curious about how that ecosystem is expanding, it’s worth looking at how developers are now building full AI agents and assistants on top of large models, much like the platforms covered in our guides to getting free API access to major AI models.
For now, though, this story is a snapshot of where we are: smudged eyeliner, glitchy brows, a vacuum-selling Michael Jackson, and an anime AI who finally admits he’s annoying—all wrapped up in one very 2020s night online.