Blind Matching: How Removing Bias Creates Better AI Teams

Made on Merit Team

The Bias We Do Not See

A landmark study published by the National Bureau of Economic Research found that resumes with "white-sounding" names received nearly 50% more callbacks than identical resumes with "black-sounding" names. Same qualifications. Same experience. Different names. Different outcomes.

This is not ancient history. This is the hiring system most companies still use.

In AI and tech, the problem compounds. White men account for 72% of corporate leadership in the U.S. When hiring managers pattern-match for "culture fit" and familiar backgrounds, they reinforce the same homogeneity that limits innovation.

The Orchestra Lesson

The most famous blind hiring study comes from classical music. Economists Claudia Goldin and Cecilia Rouse studied auditions at eight major U.S. orchestras and found that using a screen to hide the musician from the judges increased a woman's likelihood of advancing by 11 percentage points in preliminary rounds and by 30% in final rounds.

Female representation in top orchestras rose from 6% in 1970 to 21% by 1993. The talent was always there. The bias was hiding it.

Why Diversity Produces Better AI Teams

McKinsey's 2023 "Diversity Matters Even More" report quantified what many already suspected:

  • Companies in the top quartile for gender diversity are 39% more likely to financially outperform their peers
  • Companies in the top quartile for ethnic diversity see the same 39% outperformance
  • Companies in the bottom quartile for both are 66% less likely to outperform

For AI teams specifically, diversity is not just a fairness issue. Homogeneous teams build homogeneous models. When everyone on the team shares similar backgrounds, blind spots in training data, evaluation metrics, and deployment contexts go unnoticed.

Diverse teams catch more edge cases, question more assumptions, and build products that work for more people.

How Blind Matching Works

The principle is simple: evaluate skills first, reveal identity later.

At Made on Merit, the matching process works in three stages:

  1. Skills-first evaluation. Businesses post project needs. AI professionals apply. The initial review is based entirely on demonstrated skills, hackathon performance, and coaching engagement. No names, no photos, no school names.

  2. Mutual interest. Both sides review anonymized profiles. Only when both express interest does the process move forward.

  3. Identity reveal and connection. Once mutual fit is confirmed based on capability, both parties connect directly.
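The three stages above can be sketched as a simple state machine: identity fields stay hidden until both sides opt in. This is a minimal illustration, not Made on Merit's actual system; the names `Profile`, `skills_fit`, and `reveal` are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Profile:
    # Identity is stored but never exposed before stage 3.
    name: str
    skills: set
    hackathon_score: float

    def anonymized(self) -> dict:
        # Stage 1: surface only demonstrated-skill signals.
        return {"skills": self.skills, "hackathon_score": self.hackathon_score}


@dataclass
class BlindMatch:
    business_needs: set
    candidate: Profile
    business_interested: bool = False
    candidate_interested: bool = False

    def skills_fit(self) -> float:
        # Stage 1: score the match using only the anonymized profile.
        anon = self.candidate.anonymized()
        return len(self.business_needs & anon["skills"]) / len(self.business_needs)

    def reveal(self):
        # Stage 3: identity is released only after mutual interest (stage 2).
        if self.business_interested and self.candidate_interested:
            return self.candidate.name
        return None


match = BlindMatch(business_needs={"ml", "python", "mlops"},
                   candidate=Profile("Ada", {"ml", "python", "rust"}, 0.9))
fit = match.skills_fit()   # 2 of 3 needs met, no identity consulted
match.business_interested = True
match.candidate_interested = True
name = match.reveal()      # identity unlocked only now
```

The key design choice is that `skills_fit` can only see the anonymized view, so name, photo, and school cannot leak into the initial filter even by accident.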

This is not a gimmick. Companies like Cockroach Labs saw female employees increase by 50% after implementing blind hiring, reaching 30% of their workforce and management. FCB Worldwide hired 19% more women and interviewed 38% more ethnically diverse candidates.

The Real-World Results

When you remove bias from the initial filter, two things happen:

Better candidates surface. People who would have been filtered out by name, school, or background get evaluated on what they can actually do.

Better matches stick. When hiring decisions are based on demonstrated capability rather than pattern-matching, the resulting partnerships are stronger and last longer.

Build Your Team on Merit

The name says it all. Made on Merit exists because the best talent is not always the most visible talent. When every professional on the platform has been coached, tested in hackathons, and peer-reviewed, businesses can trust the match. And when matching is blind, the best person for the job actually gets the job.
