October 2015

This will come as a surprise to a lot of people, but in some cases it's possible to detect bias in a selection process without knowing anything about the applicant pool. Which is exciting because among other things it means third parties can use this technique to detect bias whether those doing the selecting want them to or not.

You can use this technique whenever (a) you have at least a random sample of the applicants that were selected, (b) their subsequent performance is measured, and (c) the groups of applicants you're comparing have a roughly equal distribution of ability.

How does it work? Think about what it means to be biased. What it means for a selection process to be biased against applicants of type x is that it's harder for them to make it through. Which means applicants of type x have to be better to get selected than applicants not of type x. [1] Which means applicants of type x who do make it through the selection process will outperform other successful applicants. And if the performance of all the successful applicants is measured, you'll know if they do.

Of course, the test you use to measure performance must be a valid one. And in particular it must not be invalidated by the bias you're trying to measure.

But there are some domains where performance can be measured, and in those detecting bias is straightforward. Want to know if the selection process was biased against some type of applicant? Check whether they outperform the others. This is not just a heuristic for detecting bias. It's what bias means.

For example, many suspect that venture capital firms are biased against female founders. This would be easy to detect: among their portfolio companies, do startups with female founders outperform those without? A couple of months ago, one VC firm (almost certainly unintentionally) published a study showing bias of this type. First Round Capital found that among its portfolio companies, startups with female founders outperformed those without by 63%. [2]

The reason I began by saying that this technique would come as a surprise to many people is that we so rarely see analyses of this type. I'm sure it will come as a surprise to First Round that they performed one. I doubt anyone there realized that by limiting their sample to their own portfolio, they were producing a study not of startup trends but of their own biases when selecting companies.

I predict we'll see this technique used more in the future. The information needed to conduct such studies is increasingly available. Data about who applies for things is usually closely guarded by the organizations selecting them, but nowadays data about who gets selected is often publicly available to anyone who takes the trouble to aggregate it.

Notes

[1] This technique wouldn't work if the selection process looked for different things from different types of applicants—for example, if an employer hired men based on their ability but women based on their appearance.

[2] As Paul Buchheit points out, First Round excluded their most successful investment, Uber, from the study. And while it makes sense to exclude outliers from some types of studies, a study of returns from startup investing, which is all about hitting outliers, is not one of them.

Thanks to Sam Altman, Jessica Livingston, and Geoff Ralston for reading drafts of this.