The Ethics of AI-Generated People Who Don’t Exist

Have you ever stared into the eyes of a photograph and felt certain you were looking at a real person—only to find out later that the face was completely fabricated by AI?

I still remember the first time it happened to me. It was one of those hyper-detailed portraits, crisp lighting, natural-looking smile, a face that could’ve belonged to someone I might pass on the subway. The caption said, “This person does not exist.”

I laughed nervously at first, but the longer I looked, the more unsettling it felt. Who was this person? Nobody. And yet, paradoxically, they looked like everybody.

That’s the strange reality we live in now: algorithms can conjure faces of people who never lived, never laughed, never had a story. They are hauntingly believable, and that raises a storm of ethical questions.

What Does “AI-Generated People” Actually Mean?

Let’s get technical for a moment. AI-generated people are produced through models like Generative Adversarial Networks (GANs) or diffusion models, trained on massive datasets of human faces.

The AI doesn’t just copy; it learns patterns—the way skin folds around eyes, how light plays on cheekbones, how hairstyles frame different face shapes—and then generates new, unique composites.
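The generator idea at the heart of this can be illustrated with a toy, standard-library-only sketch. To be clear, nothing below is a real model: the names and sizes are invented for illustration, and real systems like StyleGAN or diffusion models are deep networks trained on millions of images. The point is only the structure: fixed "learned" weights map a random latent vector to an output, so every fresh latent vector yields a new, never-before-seen composite.

```python
# Toy, stdlib-only illustration of the "generator" idea behind GANs.
# All names and sizes here are invented; real generators are deep
# convolutional networks, not a single linear layer.
import random

LATENT_DIM = 8       # real generators use latent vectors of ~512 dims
IMAGE_PIXELS = 16    # real generators emit e.g. 1024x1024 RGB images

random.seed(0)
# Stand-in for learned weights; in a trained model these would encode
# facial patterns distilled from the training data.
weights = [[random.uniform(-1, 1) for _ in range(LATENT_DIM)]
           for _ in range(IMAGE_PIXELS)]

def generate(latent):
    """Map a latent vector to 'pixel' values via one linear layer."""
    pixels = []
    for row in weights:
        activation = sum(w * z for w, z in zip(row, latent))
        # squash into the valid pixel range [0, 1]
        pixels.append(min(1.0, max(0.0, 0.5 + 0.5 * activation)))
    return pixels

# Two fresh latent vectors produce two distinct outputs: the reason
# every generated "person" is unique rather than a copy.
face_a = generate([random.gauss(0, 1) for _ in range(LATENT_DIM)])
face_b = generate([random.gauss(0, 1) for _ in range(LATENT_DIM)])
```

Scaled up by many orders of magnitude, that same mapping from random noise to structured output is what lets one model emit an unlimited stream of distinct faces.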

Websites like This Person Does Not Exist have popularized the concept. Tools embedded in design platforms and stock-photo libraries now offer instant access to synthetic faces for marketing, advertising, and product mockups.

But what starts as a clever trick quickly veers into thorny territory.

Why This Matters

You might think: “Okay, so an algorithm made a face. Big deal.” But let’s step back. Faces are loaded with meaning.

They are how we connect, recognize, empathize. Seeing a face that feels real triggers the same psychological responses as encountering another human being.

Now imagine those responses being manipulated—whether to sell products, sway voters, or spread misinformation.

Suddenly, the fact that these people “don’t exist” doesn’t make them harmless. Quite the opposite.

Why Fake Faces Feel So Convincing

One of the most fascinating aspects is how quickly synthetic faces became indistinguishable from real ones.

A 2022 study published in PNAS (Proceedings of the National Academy of Sciences) found that participants could correctly identify AI-generated faces only about 50% of the time—basically chance level. In other words, our brains are no longer reliable filters.

These findings matter because they reveal the power of visual cues. AI has mastered the details that once betrayed fakes: asymmetry, unnatural teeth, awkward shadows. And since faces are central to trust-building, the potential for deception is enormous.

The AI-Powered Trends Driving Adoption

Why are businesses and creators so eager to use AI-generated people? The reasons aren’t mysterious:

  1. Cost savings – Hiring real models is expensive. Stock photo libraries charge licensing fees. AI faces can be generated at near-zero cost.
  2. Scalability – Need 10,000 unique faces for testing software or populating avatars? AI can deliver in minutes.
  3. Diversity control – Companies can generate specific age ranges, ethnicities, or styles to target different demographics.
  4. Avoiding legal hassles – Using AI-generated people sidesteps issues of privacy rights and model releases.

These AI-powered trends look attractive on paper, but they carry hidden risks that are easy to ignore when cost-cutting is the main driver.

Ethical Dilemmas in Plain Sight

Here’s where I can’t help but lean into opinion: the ethical dilemmas aren’t abstract—they’re immediate and messy.

  • Deception: If you see an “employee testimonial” featuring an AI-generated face, is that honest marketing? Or is it a lie, plain and simple?
  • Representation: If diversity in ads is just AI-generated optics, is that genuine inclusion or a cynical shortcut?
  • Misinformation: Bad actors can use fake faces in fake news, social media bots, or political propaganda. This isn’t speculation—it’s already happening.
  • Erosion of trust: When we realize we can’t trust the faces we see, does that undermine our ability to trust at all?

These aren’t small issues. They touch on the very fabric of how humans build societies.

A Business Guide to Using AI Faces Responsibly

So, what should businesses actually do? Here’s a practical business guide for navigating this ethically fraught space:

  1. Be transparent: If you’re using AI-generated people in marketing, disclose it. Consumers appreciate honesty.
  2. Don’t fake authenticity: Never use AI faces for testimonials, employee profiles, or any context where a real person should be represented.
  3. Audit suppliers: Make sure your vendors disclose how their AI was trained. Were images sourced ethically? Was consent involved?
  4. Blend, don’t replace: Use AI-generated people for functional tasks (e.g., placeholder design, testing UX flows), but lean on real humans for authentic storytelling.
  5. Stay ahead of regulation: Laws around synthetic media are coming. Companies that build responsible practices now will be less vulnerable later.

Behind the Scenes: How Creators See It

When you talk to designers, marketers, and developers who use these tools, you get a layered perspective.

Many describe the relief of cutting costs or finally having access to diverse imagery. But there’s also unease.

In these behind-the-scenes conversations, I’ve heard creatives say things like:

  • “I love the speed, but it feels strange to showcase something that doesn’t exist.”
  • “It’s like working with ghosts—convincing, but hollow.”
  • “We saved thousands, but sometimes I wonder what we lost.”

These insights reveal the double-edged nature of AI. It delivers efficiency, but it also introduces a kind of moral vertigo.

Legal Gray Zones

Copyright law is scrambling to keep up. By definition, AI-generated people don’t have “rights of publicity” since they aren’t real.

That means companies can use them without risk of lawsuits from models. But this loophole could incentivize overuse.

Meanwhile, regulators in the EU and U.S. are debating disclosure requirements for synthetic media.

The Federal Trade Commission (FTC), for instance, has warned companies about deceptive advertising using AI. But concrete laws are still patchy.

This legal uncertainty creates a dangerous in-between moment: widespread use without clear accountability.

The Emotional Impact

Here’s where I get personal again. Part of what makes me uneasy isn’t just the misuse potential—it’s the subtle erosion of what we value about human presence.

When I look at an AI-generated face, I don’t see a story. I don’t imagine childhoods, families, triumphs, or heartbreaks.

And maybe that’s the point—they’re blank slates. But something in me resists. Our empathy is wired to respond to realness, not simulations.

If society leans too heavily on synthetic faces, we risk dulling that empathy. And empathy, frankly, is something we can’t afford to lose.

Case Studies: Where Things Go Wrong

  • Political misinformation: In 2019, a network of fake social media accounts using AI-generated faces spread propaganda in multiple countries. They looked convincingly real, and that made the lies more dangerous.
  • Fake businesses: Some shady “companies” listed AI-generated employee headshots on LinkedIn to appear legitimate, tricking investors and clients.
  • Marketing missteps: A few firms have been caught using AI models as “happy customers,” damaging their reputation when the truth came out.

These aren’t just embarrassing—they’re damaging to trust, and sometimes democracy itself.

Where Do We Go From Here?

Looking ahead, a few things seem likely:

  • Disclosure standards will become mandatory. Think “this image contains AI-generated elements” labels, similar to nutrition facts.
  • Detection tools will improve, though they’ll always lag slightly behind generation tools.
  • Consumer skepticism will grow, which may ironically increase the value of authentic human stories and photography.
  • Cultural norms will evolve. Just as stock photos once seemed odd but became normal, AI faces may become socially accepted in certain contexts.

The question is whether we can balance adoption with integrity.

My Take: Ethics Before Efficiency

If you asked me point-blank whether traditional photography and human models should be replaced by AI-generated people, my answer would be no.

Efficiency can’t be the only metric. Faces are too important, too loaded with meaning, to outsource entirely to algorithms.

But I’m not naive. AI-generated people are here to stay. So the responsibility falls on us—business leaders, creators, policymakers, and ordinary consumers—to set boundaries. To ask whether using a fake person is respectful, fair, and honest in a given context.

Because once we normalize faces without lives behind them, we risk normalizing stories without truth.

Closing Reflection

The ethics of AI-generated people who don’t exist isn’t just a design problem—it’s a cultural, psychological, and moral challenge.

The technology is dazzling, yes. But it also reveals deep vulnerabilities in how we process trust, reality, and empathy.

If I leave you with one thought, it’s this: just because we can conjure people out of pixels doesn’t mean we should. And if we do, we need to be very careful about the line between innovation and deception.