Economists have a penchant for triangles, and not just because they make for great illustrations in textbooks. The classic “impossible trinity” of international macroeconomics holds that a country cannot simultaneously fix its exchange rate, allow free capital movement, and run an independent monetary policy. Pick any two; the third must give. The same principle applies to hiring practices under conditions of inequality, where a hidden triangle lurks just beneath the surface of fairness debates.
When businesses adopt algorithms to allocate scarce job openings, they often aim for three enticing objectives: high efficiency (selecting candidates most likely to excel), robust representation (ensuring outcomes reflect the demographics of the applicant pool), and strict formal neutrality (applying uniform rules across the board).
Yet herein lies an uncomfortable truth: it is impossible to achieve all three at once. Companies can prioritize any two, but the third will inevitably fall by the wayside. This conundrum, dubbed the “fairness trilemma,” sheds light on the complexities surrounding hiring algorithms and initiatives aimed at equity and inclusion, transforming what once seemed perplexing into a straightforward application of classic economic theory. For a deeper dive, you can explore my working paper, “The Fairness Trilemma: An Impossibility Theorem for Algorithmic Governance.”
The grand illusion
For a time, the narrative many organizations told about hiring was refreshingly simple. Bias, the story went, was a product of subjective human judgment, and inefficiency stemmed from gut feelings. The solution seemed clear: standardize, automate, and measure. By replacing subjective judgment with data-driven decisions, hiring could become both fairer and more effective.
This narrative fueled a surge of investment in Diversity, Equity, and Inclusion (DEI) programs and algorithmic hiring technologies. In both public policy and corporate governance, vendors promised a tantalizing proposition: moral enhancement without any trade-offs. Improved outcomes for marginalized groups, no sacrifice in performance, and fewer awkward conversations about bias or authority.
Algorithmic hiring systems were marketed as the vehicle for this promise. By analyzing résumés and applications, predicting performance, and enforcing “fairness” through mathematical constraints, the models would strike the balance on the organization’s behalf.
However, algorithms don’t eliminate discretion; they merely relocate it: into model design, data selection, and the very definition of “fairness.” This shift often occurs in ways that are less visible and harder to contest.
A cautionary tale
The well-documented case of Amazon’s experimental hiring algorithm serves as a poignant illustration. Trained on historical résumés and hiring decisions, the algorithm learned that candidates whose profiles mirrored those of previous male hires were more likely to receive high scores for technical roles. Consequently, it unfairly penalized résumés that appeared “female-coded,” reflecting the male-dominated history of the tech industry.
While the model operated efficiently within a technical framework (optimizing predictive performance on the available data and applying uniform scoring across all applicants), it failed to produce representative outcomes from a non-representative data set.
This left the company with three choices that map directly onto the trilemma. It could retain the model and accept the resulting unequal outcomes (efficiency + neutrality, with weak representation); introduce group-aware fairness constraints to push outcomes toward the applicant pool’s demographics while still selecting the best candidates within each group (efficiency + representation, with weaker neutrality); or reintroduce human judgment and discretion to correct the algorithm’s skew case by case (representation, at the cost of formal neutrality and some predictive efficiency). Ultimately, Amazon chose to abandon the system.
A similar trajectory occurred with HireVue’s AI video interviews. The company promoted automated assessments of facial expressions, tone, and language as a means to standardize and eliminate bias in early-stage recruitment. However, critics pointed out that these factors often correlate with disability status, neurodiversity, and demographic background in ways that are hard to justify as relevant to job performance. Under increasing scrutiny, HireVue ultimately eliminated facial analysis from its process.
In both instances, the failure wasn’t with the concept of screening itself; it lay in the belief that measurement could remain neutral amid inherited inequalities, and that efficiency, representation, and neutrality could be obtained without compromise through a sufficiently good model.
A simplified model
Consider a straightforward model: envision a firm needing to fill a specific number of positions from a pool divided into two groups, A and B. Applicants from both groups receive scores based on a predictive model estimating their potential for success. Due to disparities in starting conditions, such as education quality and prior experience, group A possesses a higher average predicted success rate than group B. The firm contemplates a single threshold rule: hire anyone with a predicted success score above a certain level.
In this scenario of unequal baseline rates, one rule cannot satisfy all three goals. It cannot simultaneously select the highest-performing candidates, ensure hires from groups A and B reflect their respective shares in the applicant pool, and apply the same threshold uniformly. If the firm prioritizes strong efficiency and strong neutrality, it sets a common threshold, resulting in hires disproportionately from group A, thus diverging from true representation.
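To see why in one line, here is a sketch under an illustrative assumption (not drawn from the working paper) that predicted scores in each group are normally distributed with a common variance:

$$S_A \sim \mathcal{N}(\mu_A, \sigma^2), \qquad S_B \sim \mathcal{N}(\mu_B, \sigma^2), \qquad \mu_A > \mu_B.$$

For any common threshold $t$, the selection rates satisfy

$$\Pr(S_A > t) = \Phi\!\left(\frac{\mu_A - t}{\sigma}\right) > \Phi\!\left(\frac{\mu_B - t}{\sigma}\right) = \Pr(S_B > t),$$

where $\Phi$ is the standard normal CDF. A neutral threshold therefore hires group A at a strictly higher rate, so the hired cohort cannot mirror the applicant pool.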
Should it prioritize strong efficiency alongside strong representation, it must relax neutrality with group-specific thresholds or weights to increase hiring from group B while still aiming to select the best within that group. This approach, however, means treating applicants from A and B differently when they possess identical scores.
If the firm opts for strong representation and strong neutrality, applying the same rule to everyone while achieving similar hiring rates across groups, it will not select the highest-scoring candidates overall; some higher-scoring applicants will be passed over in favor of lower-scoring ones, sacrificing efficiency until the underlying inequalities are addressed.
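The same trade-off can be reproduced numerically. The Python sketch below uses made-up numbers throughout (pool sizes, score distributions, and the number of openings are illustrative assumptions, not data from any real firm) to compare a single uniform threshold with group-proportional hiring:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions: group A has a higher mean predicted-success
# score than group B, reflecting unequal starting conditions.
n_a, n_b = 600, 400                      # applicant pool: 60% A, 40% B
scores_a = rng.normal(0.60, 0.15, n_a)
scores_b = rng.normal(0.50, 0.15, n_b)
k = 100                                  # positions to fill

# Rule 1: uniform threshold (efficiency + neutrality).
# Hire the k highest scorers regardless of group.
pooled = np.concatenate([scores_a, scores_b])
labels = np.array(["A"] * n_a + ["B"] * n_b)
top_k = np.argsort(pooled)[::-1][:k]
share_b = np.mean(labels[top_k] == "B")

# Rule 2: group-proportional hiring (efficiency within group + representation).
# Hire the top 60 from A and the top 40 from B, matching pool shares.
cutoff_a = np.sort(scores_a)[::-1][59]   # effective threshold for A
cutoff_b = np.sort(scores_b)[::-1][39]   # effective threshold for B

print(f"Rule 1: group B's share of hires = {share_b:.0%} (pool share: 40%)")
print(f"Rule 2: effective cutoffs differ: A = {cutoff_a:.3f}, B = {cutoff_b:.3f}")
# Rule 1 under-represents B; Rule 2 restores representation but applies a
# lower cutoff to B than to A, breaking formal neutrality. A rule that kept
# one common cutoff AND matched pool shares would have to pass over
# higher-scoring applicants: the efficiency loss in the third corner.
```

Running it shows group B taking well under its 40 percent pool share under the uniform threshold, and a visibly lower effective cutoff for B under the proportional rule.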
This illustrates the fairness trilemma in its most basic form: any two corners of the triangle can be selected, but the third will inevitably suffer. The challenge isn’t primarily about machine learning; it’s about allocating limited opportunities amid unequal circumstances.
Scarcity is persistent
Economists have witnessed this narrative unfold before. Take rent control, for instance. When governments impose price ceilings below market-clearing levels, scarcity doesn’t vanish; it merely shifts, manifesting as waitlists, non-price screening, informal payments, and declining quality. Landlords who cannot ration through rent will ration through waiting lists, personal connections, and discretionary practices. Empirical work, such as Diamond, McQuade, and Qian’s study of San Francisco rent control, documents this phenomenon.
Hiring systems behave similarly. When one allocation mechanism is restricted, scarcity finds alternative pathways. If performance metrics can’t serve as the gatekeepers due to fairness constraints, organizations turn to committees, exceptions, holistic reviews, and opaque overrides for rationing. Each adjustment maintains two corners of the trilemma while compromising the third. Policy constraints shift scarcity; they do not eliminate it.
What firms should consider
Recognizing that efficiency, representation, and formal neutrality cannot all be maximized simultaneously reframes the conversation. Instead of asking, “How do we eradicate bias without trade-offs?” firms should inquire, “Which aspect are we prepared to compromise, and where should discretion be exercised?”
An honest approach to equity and inclusion in hiring algorithms should encompass at least three actions: clarify priorities and tailor governance accordingly; place discretion where it can be monitored (structured committees, documented overrides, and review processes) rather than burying value judgments in model design and opaque fairness metrics; and abandon the narrative of algorithms as miraculous solutions. Models cannot effortlessly resolve the trade-offs that stem from unequal starting conditions; at best, they illuminate where constraints bind and what different choices cost.
The objective isn’t perfection; it’s legitimacy: deciding transparently where to settle within the trilemma in a given context and accepting accountability for the consequences.

