There is bias in AI.

It might not be the bias you think.


Published on: May 2024

Written by: Peter Mulford

Discussions of bias in AI systems usually focus on the biases woven into the machines by their human creators. Yet an intriguing facet emerges when we examine the biases rooted in humans themselves, which the presence of AI brings to light. Recently, Yunhao Zhang and Renée Gosline of MIT explored this phenomenon, probing how the identity of an author, human or AI, affects the perceived quality and persuasiveness of content. The verdict? Human favoritism.

The experiment:

Zhang and Gosline designed a series of experiments with four distinct conditions to test how authorship shapes the perception of content:

  • Content crafted exclusively by human experts.
  • Content generated solely by AI.
  • Content initially created by AI, followed by human refinement.
  • Content initially authored by humans, subsequently polished by AI.

When evaluators did not know who authored the content (a blind evaluation), AI-generated content earned strong ratings. Once the four experimental conditions were disclosed, however, perceived quality and persuasiveness rose noticeably for any content that involved a human.
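The study's setup, four authorship conditions crossed with blind versus disclosed evaluation, can be sketched in a few lines of Python. This is an illustrative sketch only; the labels and the assignment helper are hypothetical, not the authors' actual materials or protocol:

```python
import random

# The four authorship conditions described above (hypothetical labels).
CONDITIONS = [
    "human_only",      # crafted exclusively by human experts
    "ai_only",         # generated solely by AI
    "ai_then_human",   # created by AI, refined by a human
    "human_then_ai",   # authored by a human, polished by AI
]

def assign_trials(n_evaluators, seed=0):
    """Randomly assign each evaluator a condition and a disclosure arm."""
    rng = random.Random(seed)
    trials = []
    for evaluator in range(n_evaluators):
        trials.append({
            "evaluator": evaluator,
            "condition": rng.choice(CONDITIONS),
            # In the blind arm, the evaluator never sees the condition label.
            "disclosed": rng.choice([True, False]),
        })
    return trials

for t in assign_trials(8):
    shown = t["condition"] if t["disclosed"] else "(hidden)"
    print(t["evaluator"], shown)
```

Comparing ratings between the disclosed and hidden arms is what isolates the favoritism effect from the content's actual quality.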

The insight:

Human favoritism is a cognitive bias that surfaces when people know a human participated in creating content. It shapes the perceptions of evaluators, employees, and customers alike: the mere presence of human input lends content a heightened sense of value and credibility.

Charting the path forward:

  1. Transparency and disclosure: Embrace transparency by openly disclosing the involvement of AI in content creation. This fosters trust and informs consumers, employees, and stakeholders about the collaborative nature of content production.
  2. Blind evaluations: Implement blind evaluation procedures where possible to mitigate the influence of human favoritism. By withholding information about the authorship of content during assessment, evaluators can provide more objective judgments.
  3. Diverse authorship: Promote diversity in content creation teams, encompassing both human experts and AI systems. Drawing on a diverse array of perspectives helps minimize bias and produces more inclusive, balanced content.
  4. Continuous education: Educate stakeholders about the capabilities and limitations of AI systems. By enhancing understanding and awareness, individuals can make more informed judgments, reducing the impact of biases on content perception.
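The blind-evaluation step above is straightforward to operationalize: strip authorship labels before raters see the content, and keep a key aside for re-attaching authorship after scoring. A minimal sketch, assuming a hypothetical item schema of `{"text": ..., "author": ...}` dicts:

```python
import random

def blind(items, seed=None):
    """Return anonymized, shuffled copies of `items` plus an un-blinding key.

    Raters see only the text; `key` maps each blinded id back to its
    author label so results can be un-blinded after scoring.
    """
    rng = random.Random(seed)
    order = list(range(len(items)))
    rng.shuffle(order)  # shuffle so presentation order carries no signal
    blinded, key = [], {}
    for new_id, idx in enumerate(order):
        blinded.append({"id": new_id, "text": items[idx]["text"]})
        key[new_id] = items[idx]["author"]  # held back until scoring is done
    return blinded, key

items = [
    {"text": "Ad copy A", "author": "human"},
    {"text": "Ad copy B", "author": "ai"},
]
blinded, key = blind(items, seed=42)
# Raters rate blinded[i]["text"]; authorship is re-attached via `key`.
```

Keeping the key out of the rating tool entirely, rather than merely hiding the field, is the safer design choice.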

The unfolding narrative of the Iron Age of AI has revealed unseen aspects of human behavior. Addressing bias in AI-generated content demands a candid acknowledgment of human favoritism. Transparency is the critical instrument for navigating this landscape: it requires humans to confront uncomfortable realities so that technological progress remains both positive and equitable for people and machines.
