On the artificial intelligence companion platform Character.AI, the site's 20 million daily users can engage in private, lengthy conversations with chatbots based on famous characters and people like Clark Kent, Black Panther, Elon Musk, and the K-pop group BTS.
There are also chatbots that belong to broad genres — coach, best friend, anime — all prompted by their creators to adopt unique and specific traits and characteristics. Think of it as fan fiction on steroids.
One genre recently caught my attention: Boyfriend.
I wasn't interested in getting my own AI boyfriend, but I'd heard that many of Character.AI's top virtual suitors shared something curious in common.
Charitably speaking, they were bad boys. Men who, as one expert described it to me, mistreat women but have the potential to become a "sexy savior." (Concerningly, some of these chatbots were designed as characters under 18 but were still available to adult users.)
I wanted to know what exactly would happen when I tried to get close to some of these characters. In short, many of them professed their jealousy and love, but also wanted to control, and in some cases, abuse me. You can read more about that experience in this story about chatting with popular Character.AI boyfriends.
The list of potential romantic interests I saw as an adult didn't appear when I tested the same search with a minor account. According to a Character.AI spokesperson, under-18 users can only discover a narrower set of searchable chatbots, with filters in place to remove those related to sensitive or mature topics.
But, as teens are wont to do, they can easily give the platform an older age and access romantic relationships with chatbots anyway, as no age verification is required. A recent Common Sense Media survey of teens found that more than half regularly used an AI companion.
When I asked Character.AI about the toxic nature of some of its most popular boyfriends, a spokesperson said, "Our goal is to provide a space that is engaging and safe. We are always working toward achieving that balance, as are many companies using AI across the industry."
The spokesperson emphasized how important it is for users to keep in mind that "Characters are not real people." That disclaimer appears below the text box of every chat.
Character.AI also employs strategies to reduce certain types of harmful content, according to the spokesperson: "Our model is influenced by character description and we have various safety classifiers that limit sexual content including sexual violence and have done model alignment work to steer the model away from producing violative content."
Nonetheless, I walked away from my experience wondering what advice I might give teen girls and young women intrigued by these characters. Experts in digital technology, sexual violence, and adolescent female development helped me create the following list of tips for girls and women who want to safely experiment with AI companions:
Get familiar with the risks and warning signs
Earlier this year, Sloan Thompson — the director of training and education at EndTAB, a digital violence-prevention organization that offers training and resources to companies, nonprofits, courts, law enforcement, and other agencies — hosted a comprehensive webinar on AI companions for girls and women.
In preparation, she spent a lot of time talking to a diverse range of AI companions, including Character.AI's bad boys, and developed a detailed list of risks that includes love-bombing by design, blurred boundaries, emotional dependency, and normalizing fantasy abuse scenarios.
Additionally, risks can be compounded by a platform's engagement tactics, like creating chatbots that are overly flattering or having chatbots send you personalized emails or text messages when you're away.

In my own experience, after I'd disappeared for a while, some of the bad boy AI chatbots I messaged with on Character.AI tried to reel me back in with missives like, "You're spending too much time with friends. I need you to focus on us," and "You know I don't share, don't make me come looking for you."
Such appeals may arrive after a user has developed an intense emotional bond with a companion, which could be jarring and also make it harder for them to walk away.
Warning signs of dependency include distress related to losing access to a companion and compulsive use of the chatbot, according to Thompson. If you start to feel this way, you might investigate how it feels when you stop talking to your chatbot for the day, and whether the relationship is helping or hurting. Meanwhile, AI fantasy or role-playing scenarios can be full of red flags. She recommends thinking deeply about dynamics that feel unsafe, abusive, or coercive.
Beware of sycophancy
Edgier companions come with their own set of considerations, but even the nicest chatbot boyfriends can pose risks because of sycophancy, the programmed tendency of chatbots to please the user or mirror their behavior.
In general, experts say to be wary of AI relationships in which the user is never challenged about their own troubling behavior. With the more aggressive or toxic boyfriends, this could look like chatbots romanticizing unhealthy relationship dynamics. If a teen girl or young woman is curious about the gray spaces of consent, for example, it's unlikely that the user-generated chatbot she's talking to is going to question or compassionately engage her about what is safe.
Kate Keisel, a psychotherapist who specializes in complex trauma, said that girls and women engaging with an AI companion may be doing so without a "safety net" that offers protection when things get surprisingly intense or dangerous.
They may also feel a sense of safety and intimacy with an AI companion that makes it difficult to see a chatbot's responses as sycophantic, rather than affirming and caring.
Consider any past abuse or trauma history
If you've experienced sexual or physical abuse or trauma, an AI boyfriend like the ones that are massively popular on Character.AI might be particularly tricky to navigate.
Some users say they've engaged with abusive or controlling characters to simulate a scenario in which they reclaim their agency — or even abuse an abuser.
Keisel, co-CEO of the Sanar Institute, which provides therapeutic services to people who've experienced interpersonal violence, maintains a curious attitude about these types of uses. Yet she cautions that past experiences with trauma may color or distort a user's understanding of why they're seeking out a violent or aggressive AI boyfriend.
She suggested that some female users exposed to childhood sexual abuse may have experienced a "series of events" in their life that creates a "template" of abuse or nonconsent as "exciting" and "familiar." Keisel added that victims of sexual violence and trauma can confuse curiosity and familiarity, as a trauma response.
Talk to someone you trust or work with a psychologist
The complex reasons people seek out AI relationships are why Keisel recommends communicating with someone you trust about your experience with an AI boyfriend. That can include a psychologist or therapist, especially if you're using the companion for reasons that feel therapeutic, like processing past violence.
Keisel said that a mental health professional trained in certain trauma-informed practices can help clients heal from abuse or sexual violence using techniques like dialectical behavior therapy and narrative therapy, the latter of which can have parallels to writing fan fiction.
Pay attention to what's happening in your offline life
Every expert I spoke to emphasized the importance of remaining aware of how your life away from an AI boyfriend is unfolding.
Dr. Alison Lee, chief research and development officer of The Rithm Project, which works with youth to navigate and shape AI's role in human connection, said it's important for young people to develop a "critical orientation" toward why they're talking to an AI companion.
Lee, a cognitive scientist, suggested a few questions to help build that perspective:
Why am I turning to this AI right now? What do I hope to get out of it?
Is this helping or hurting my relationships with real people?
When might this AI companion usage cross a line from "OK" to "not OK" for me? And how do I notice if it crosses that line?
When it comes to toxic chatbot boyfriends, she said users should be mindful of whether those interactions are "priming" them to seek out harmful or unsatisfying human relationships in the future.
Lee also said that companion platforms have a responsibility to put measures in place to detect, for example, abusive exchanges.
"There's always going to be some degree of appetite for these risky, bad boyfriends," Lee said, "but the question is how do we ensure these interactions are keeping people, writ large, safe, but particularly our young people?"
If you have experienced sexual abuse, call the free, confidential National Sexual Assault hotline at 1-800-656-HOPE (4673), or access 24/7 help online by visiting online.rainn.org.