Why it’s bad that 9 out of 10 Brain Canada/Azrieli Foundation early career awards went to men, and how we might do things differently

Michael Hendricks
Mar 12, 2019

I said on Twitter that this is bad, but I wanted to expand on how bad this outcome is, and lay out some thoughts on why things like this happen even when funders and reviewers have good intentions.

tl;dr
1. Academia is a dunking contest where every time you dunk, they lower your rim 6 inches. Early support isn’t just important; it’s determinative.
2. Review processes can’t rank people in any meaningful way, so we shouldn’t.
3. Accounting for systemic bias should be part of decision making.

The concise, erudite version of this argument that exists in my head (the real one will be full of typos and bad dunking metaphors) has already been made by @hwitteman, so feel free to skip what follows and just read this:
https://twitter.com/hwitteman/status/1105306348416368640

First, what I am *not* saying: that the winners of this or any science competition are undeserving, or that the reviewers or funders were motivated by bias or acted in bad faith in any way. I can’t know for a given case, but increasingly I think “bias in the room” is not necessarily what produces these outcomes (though it can, and undoubtedly often does). Instead, we work within a distributed system of disadvantage for some groups, in addition to the individual actions of bad actors. Fair-minded people acting in good faith are part of systemic bias.

48% of assistant professors in Canada are women. This varies by field, and it may be slightly lower in neuroscience, but no matter what, only 1 woman among 10 awardees is an unlikely outcome. Like a lot of ECR fellowships, this one is meant to be prestigious. My first thought is that prestige could lead reviewers to weigh the “investigator excellence”-type criteria heavily, and the more you focus on the person rather than the science, the more risk of bias there is.
https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(18)32611-4/fulltext
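As a rough illustration of just how unlikely (a back-of-envelope sketch, assuming each award is an independent draw from a pool that is 48% women — the real applicant pool here isn’t public, so treat the numbers as illustrative): the chance of ending up with at most 1 woman among 10 awardees comes out to roughly 1.5%.

```python
# Back-of-envelope binomial check. Assumptions (mine, for illustration):
# awards are independent draws, and the eligible pool is ~48% women.
from math import comb

p, n = 0.48, 10
prob = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(2))
print(f"P(at most 1 woman in {n} awards) = {prob:.3f}")  # ~0.015
```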

But the really insidious thing is this: by the time people are assistant professors, women have, on average, already accumulated more than their fair share of disadvantages. Women are less likely to have been mentored by the “elite” male faculty that dominate professional/social networks, and whose letters might be more persuasive. There are still some people who actually believe in pedigree, but we are all potentially swayed by it.
https://www.pnas.org/content/111/28/10107

In at least some fields, recommendations reveal a strong gender bias in language used and level of enthusiasm. https://www.sciencemag.org/careers/2016/10/recommendation-letters-reflect-gender-bias

Women receive much smaller start-up packages — the money and resources you need to generate preliminary data, hire people, and develop your research program. https://www.the-scientist.com/the-nutshell/study-men-get-bigger-start-up-packages-34813

Women are likely to have to deal with sexual harassment and hostile work environments during their training and as junior faculty, which can impact productivity and opportunities. https://www.nap.edu/download/24994

Believe it or not (this never ceases to amaze me), there are scientists who believe that who paid your salary as a student or postdoc (a grant, a funding agency, a charity, a rich person) is a relevant indicator in assessing you for future grants, fellowships, and jobs in science. I know, right? Science. However, this reliance on past assessments is a general, pervasive feature of academic evaluation — not only can early resources help you materially (which helps your productivity), but people will always be happy to count how many times you’ve been hit with the excellence stick. Reviewing is hard, and we look for ways to make it easier. But the result of these shortcuts is that we import the uninspected judgements and unknowable motivations of strangers from processes known to be rife with bias, transmute them into numbers, and call them “objective measures.”

It is this practice of assigning resources and opportunities based on how many resources and opportunities you’ve gotten your hands on in the past that creates the Category 5 hurricane of a Matthew effect we currently enjoy. It’s an absurd dunking contest where every time you dunk the ball, they lower your rim. So it matters a lot when you give some people a boost (award) or a trampoline (fellowship) or a pair of clown shoes with springs on the soles (pedigree) early on.

This is not only why women are already at a disadvantage “early” in their careers, but why it is so tragic to have such disparate outcomes for things that confer prestige and funds early on — it will matter a lot in the subsequent rounds of competition. One study compared applicants just above and just below the funding cutoff for an early career award (that is, applicants with essentially equivalent assessments). Winning had massive implications for career outcomes.
https://www.pnas.org/content/115/19/4887

The lesson is this: awards like this — and awards in general — don’t recognize excellence; they create “excellence.” They determine who is anointed: who will be given the opportunities and resources to tick the boxes and metrics we’ve decided mean “excellent.” This circular reasoning is such an accepted mindset that we have actual funding competitions restricted to those who have received a particular salary award — a salary award that is famous for equity problems.

So, where in this vicious cycle do we try to dampen the positive feedback? It is implausible in the extreme that there is any difference in ability or research potential between the men and women who have made it to this stage of their academic careers. It is, however, a certainty that the women who have made it have, on average, received less support and fewer opportunities, in both material ways and network/prestige ways, and have likely put up with a lot of discouraging crap along the way to boot. So, when your review process leads you to an outcome like this, it should be recognized as a symptom that something is wrong.

Sometimes what might be wrong is your own biases. Sometimes it might be bias inherent in your criteria. But always, always, always there are structural disadvantages and systemic bias that no amount of fairness or objectivity or reviewer training can address. It’s baked in already.

Second, we have to let go of the idea that we can rank people in any meaningful way through review processes like this. I mean… c’mon. Of course we can make a defensible ranking (we are professional analyzers and argument-makers), but it would be only one of many possible “fair” rankings, which is exactly why no single ranking is meaningful. What is more fair and realistic is to make tiers. These tiers could be as broad as “we’d be happy to give these people this award/job/grant, less happy to give one to these other people,” where tier size > number of awards to give out. And if we’re honest, the number of deserving applicants is always >> the number of awards.

By doing this, we stop pretending that uninterpretable numeric differences, or which florid superlatives letter writers happened to use, are valid reasons to nudge people up or down a ranking ladder. We are then much less likely to inadvertently import past biases into our decision making. And because we’ve defined a large enough equivalence group, we have the flexibility to ensure that our allocations of prestige and resources do not, on average, systematically favor groups that have historically enjoyed that favoritism.
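To make the tier idea concrete, here is a minimal sketch. The selection rule within the tier (a simple random draw here) is my illustrative assumption, not something the process above prescribes:

```python
import random

def allocate_awards(fundable_tier, n_awards, seed=None):
    # Reviewers define an equivalence tier of applicants they judge
    # fundable; within the tier, no further ranking is applied.
    # The final draw here is random -- one possible rule among many.
    assert len(fundable_tier) > n_awards, "tier should exceed award count"
    return random.Random(seed).sample(fundable_tier, n_awards)

# e.g. 30 applicants judged equivalently fundable, 10 awards to give
tier = [f"applicant_{i}" for i in range(30)]
print(allocate_awards(tier, n_awards=10, seed=1))
```

The point of the sketch is the tier, not the lottery: once a large enough equivalence group is defined, the funder is free to apply whatever final allocation rule best corrects for the systemic biases described above.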

Whose job is this? This is hard, and it is where a certain degree of accountability laundering comes into play. Reviewers can reasonably say: “This is not our money. We agreed to review and advise according to the criteria and process determined by the funder. It would be overstepping our mandate to do otherwise.” On the other hand, funders can reasonably say: “We recruit expert reviewers who know this field. Who are we to second-guess their recommendations?”

Everyone knows to worry about bias in the room, our own bias. But I’ve never been part of a review process where I was asked to consider systemic bias, bias that is baked into the review criteria because the world is as it is and not as it should be. This should change.

Changing how we assess and reward is hard. But we can, to some extent, circumvent this problem by looking at where assessment seems to be less biased in our existing mechanisms, and placing more of our funding bets there. Perhaps not surprisingly, this tends to be where the focus is more on getting science done and less on manufacturing prestige. In Canada, these are the open tri-council programs — Project, Discovery, Insight — that are about doing the work the public funds us to do, not about gilding a handful of careers.

Taking money away from pageant programs and various insider clubs and putting it into these core mechanisms has the advantage of being more accountable and more equitable, and it offers the best scientific return on investment going.
