Believing AI Made It Makes You Feel More Creative

At a time when machines can compose symphonies, paint surreal dreamscapes, and craft witty banter at the click of a button, the lines between human and machine creativity are blurring. What happens, then, to our own sense of creative worth when we consume these works? A recent study published in the Journal of Personality and Social Psychology reveals a compelling psychological twist: simply believing that a piece of art, poetry, or humor was generated by artificial intelligence rather than a fellow human can significantly boost our confidence in our own creative abilities.

This phenomenon isn’t just a quirky side effect of the AI boom. It offers a window into how we evaluate ourselves, and how AI, often feared as a creativity killer, may ironically be helping some people unlock their creative potential.

The Age of Generative AI and the Identity Tug-of-War

Generative artificial intelligence—or gen-AI—refers to systems like ChatGPT, Midjourney, and DALL·E that produce text, images, music, and more, mimicking forms of human creativity. These tools, once limited to research labs, are now woven into everyday digital life. They generate jokes on social media, illustrate blog posts, and assist with everything from writing birthday cards to drafting scientific reports.

As our exposure to AI-generated content increases, researchers have begun to wonder how these encounters might shape not just our perception of AI, but of ourselves. Do we see these synthetic creators as rivals, partners, or mere digital scribes? Do they intimidate us—or inspire us?

That’s where the new study led by Taly Reich of NYU’s Stern School of Business comes in. Reich and her collaborators set out to explore how people psychologically respond when they believe they are engaging with AI-generated creative work. Their question was deceptively simple: Does believing something was created by a machine make you feel more, or less, capable as a creator yourself?

The answer, it turns out, reveals something profound about human psychology.

The Power of the Label: Human vs. Machine

Across seven meticulously designed experiments involving over 6,800 participants from the United States and the United Kingdom, the researchers tested a curious idea rooted in a classic concept from psychology: social comparison theory. This theory suggests that people evaluate themselves by comparing their abilities to others—feeling better or worse depending on who they believe they’re stacked up against.

In these experiments, participants were presented with the exact same pieces of creative work: jokes, poems, drawings, short stories, and captions. The content was randomly labeled as having been created either by a fellow human or by a generative AI system; that label was the only difference, as sketched below.
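The logic of such a between-subjects label manipulation is easy to picture in code. The following is a minimal, hypothetical simulation, not the authors’ data or analysis: it assumes a seven-point confidence scale and a modest average boost in the AI-label condition, then simply compares the two groups’ means.

```python
# Illustrative simulation of a between-subjects label manipulation.
# Hypothetical numbers only; not the study's actual data or analysis.
import random
import statistics

random.seed(1)

def simulate_participant(label):
    # Self-rated creative confidence on a 1-7 scale (assumed scale).
    base = random.gauss(4.2, 1.0)
    if label == "AI":
        base += 0.5  # assumed boost from downward comparison with "AI"
    return min(7.0, max(1.0, base))

# Everyone "sees" the same creative work; only the label differs.
ai_group    = [simulate_participant("AI")    for _ in range(500)]
human_group = [simulate_participant("human") for _ in range(500)]

print(f"Mean confidence (AI label):    {statistics.mean(ai_group):.2f}")
print(f"Mean confidence (human label): {statistics.mean(human_group):.2f}")
```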

The results were striking. Participants who believed the content was created by an AI reported higher levels of creative self-confidence. They felt more capable, more inspired, and more willing to create something themselves.

In essence, the mere belief that a piece of content came from a machine made people feel better about their own creativity—even when the quality of the content remained unchanged.

Confidence Without a Cause? The Curious Upside of Misjudgment

The pattern repeated across creative domains. When participants read jokes supposedly authored by an AI, they were more likely to believe they could craft a better punchline. When viewing AI-labeled poems or artworks, they saw themselves as more artistically gifted. In storytelling tasks, those told a story was machine-made felt more emboldened to try writing one themselves.

In one experiment, participants even rated their own creative output more favorably after reading an AI-attributed cartoon caption—despite external judges finding no actual difference in quality between their work and the control group’s. The confidence was real, but not always justified.

That’s part of what makes these findings so compelling. The perception of AI as a lesser creative being acts like a kind of psychological trampoline. We bounce higher in our self-assessments simply because we believe the bar is lower. This is a textbook example of downward social comparison—comparing ourselves to someone (or something) we perceive as less competent to boost our own ego.

Beyond Confidence: Action and Aspiration

Importantly, the effects weren’t just about how people felt—they also influenced behavior. In a storytelling experiment, participants who thought the original story came from AI were significantly more likely to attempt writing one of their own. Here, the boost in self-confidence translated into actual motivation to create.

This is a key point: the study shows that the perceived inferiority of AI isn’t just soothing to the ego but also energizing to the spirit. It makes people more willing to try, to take creative risks they might otherwise avoid. That can be tremendously powerful in contexts where self-doubt often paralyzes action, like classrooms, workplaces, or creative careers.

It also raises a provocative question: could AI be used not just as a creative tool, but as a creative catalyst—a source of inspiration precisely because it lowers the stakes of comparison?

Limitations in Logic: When Confidence Isn’t Competence

Yet, there’s a cautionary note hidden within these findings. In a world increasingly driven by AI-generated content, an inflated sense of one’s creative talent might not always lead to great results. Confidence can encourage effort, but without feedback and skill development, it may also foster overconfidence and disappointment.

One of the study’s experiments tested whether participants’ boosted confidence matched the quality of their output. It didn’t. Even when people felt more creative after seeing AI content, independent judges didn’t rate their work as better. The psychological high didn’t necessarily lead to objectively higher creativity.

This disconnect suggests a kind of “illusion of competence” that can arise when comparing oneself to AI. It may help people get started—but it might also backfire if expectations are not managed.

The Role of Content Quality and Emotional Depth

To explore this further, the researchers examined whether the quality of the AI-attributed content mattered. Interestingly, the boost in confidence persisted regardless of whether the creative work was objectively high- or low-quality. People’s confidence seemed less tied to what they were looking at and more to who they believed made it.

This is particularly revealing because it hints at the depth of our assumptions about AI. When we think of machines, we often assume a lack of emotion, soul, or authenticity—qualities we typically associate with human creativity. That perception persists even when the machine outputs something beautiful or clever.

But what happens if those perceptions shift?

In follow-up studies, the researchers found that people’s confidence was affected by how the AI was described. When participants were told the AI system had emotional understanding or creative authenticity, the confidence advantage began to erode. The more “human-like” the machine was seen to be, the less it served as a psychological foil.

This indicates that public perception of AI’s creative ability is still flexible—and may evolve over time. As generative systems improve and become more expressive, our internal comparisons may shift. The AI we once saw as inferior could become a peer—or even a superior—making the confidence boost more elusive.

The Boundary Between Facts and Feelings

Notably, the confidence-boosting effect was limited to creative tasks. In a clever control study, participants read both creative stories and factual explanations—such as a paragraph explaining why it rains. The factual writing didn’t produce the same effects. People judged AI and human authors equally in this non-creative domain, and their own self-confidence didn’t change based on who they believed wrote the passage.

This suggests the effect is tied specifically to creativity, a realm still widely regarded as inherently human. It’s not just that people think machines are bad at writing facts—they believe facts don’t require creativity, so comparison doesn’t matter as much.

Creative work, by contrast, touches identity. When we write a poem or make a joke, we express ourselves. Being better—or worse—than an AI at such a task can feel like a verdict on our uniqueness. That’s why the AI label matters more in this domain: it either challenges or uplifts how we see our creative selves.

Implications: A Tool for Empowerment or a Trap of Illusion?

The practical implications of this research are nuanced. For educators, team leaders, and organizations aiming to foster creativity, exposing people to AI-labeled content could be an easy, low-cost way to help them overcome creative inhibition. If people believe AI is a safe comparison point, they may be more willing to try, experiment, and grow.

But there’s a fine line between encouragement and delusion. False confidence without feedback can be a trap, especially if people come to overestimate their creative abilities. Like any motivational tool, the AI mirror must be used thoughtfully.

Lead author Taly Reich emphasizes this dual potential. “This can lead people to be more likely to attempt a creative activity, even if they don’t have the objective ability underlying their newfound creative self-confidence,” she told PsyPost. The confidence may be inflated—but if it gets people moving, experimenting, and learning, that in itself might be a net benefit.

Reflections on the Future: What Happens When AI Gets Better?

Perhaps the most intriguing aspect of this study is what it suggests about the future. As AI-generated content becomes more advanced, more emotionally nuanced, and more human-like, the perception of AI as a “lesser” creative comparator may fade.

Already, some AI-generated music and art have won awards. Some readers can’t distinguish AI-written essays from human ones. If generative AI continues to blur the lines, will it still boost our confidence—or begin to threaten it?

That question doesn’t yet have an answer. But it underlines a central truth: how we see ourselves is often shaped not by what we produce, but by what we believe about what others can produce. AI may never feel pride, fear, or doubt—but it can still affect our own experience of all three.

And maybe that’s the most human part of all.
