At Chaitin School, we’ll be doing mentorships soon. We want to do this fairly both for mentors and for people applying.
Admission criteria are a hot topic for schools. In our effort to be inclusive, we want to accept everyone. That might be a problem if there aren’t enough mentorship positions on offer. But what if there are? Do we still have reasons to maintain elaborate admission criteria, then?
We are a very new and very small community; few people know us. It’s unlikely that many people will apply, but it is likely that more than one person will apply for the same mentorship. Who gets the mentorship first?
We could opt to leave that to the preference of the mentor. But is it fair to deprioritise someone if the admission criteria are not publicly stated? What if someone consistently deprioritises people because of their race? Or their gender, religion, age[^age], etc.?
Or what if the opposite happens and the mentor is the one at a disadvantage? Maybe they feel uncomfortable mentoring a specific person; uncomfortable as in: afraid, stressed, unable to communicate, or maybe they just don’t vibe with them. Obviously, we can’t force someone to mentor—not to mention that a mentoring relationship should rest on a foundation of high-calibre communication.
If we decide to adopt admission criteria, we don’t want to base them on the applicants’ knowledge of computer science or software engineering—that’s what we want to help them learn. On the same note, we don’t want to rank them on how fast or efficiently they can learn these—that’s also what we want to help them with. Maybe we could prioritise people who are less privileged. Or less financially wealthy. Or those who are less likely to find mentorship somewhere else.
Choosing which one (or ones) of these to use, weighing them against each other, and ranking applicants on these scales is a hard problem. Further adjacent problems and dangers exist, such as acquiring truthful information about the applicants in terms of these qualities (someone might lie, and that will be at somebody else’s expense), as well as participating in a system that ranks morality (we don’t want to become a community which dictates who is socially virtuous).
Out of all these arguments, let’s start with the strongest, the ones we’re most sure of:
- We absolutely must not become racist[^racism].
- We absolutely must not force anyone to do something they don’t want.
Ideally, after an initial discussion between the mentor and the mentee, they find themselves in consensus: either they go ahead, or they agree this mentorship wouldn’t work between them. In the less ideal case, where consensus is lacking and a mentor rejects (or deprioritises) a mentee’s application, we need to be concerned with, and conscious of, the reasons for the rejection.
If I am a mentor and want to (or have to) reject an applicant, the first person I need to convince of my lack of bias[^define] is myself. Did I reject (or deprioritise) the applicant because it’s truly clear I cannot help them; or am I biased for thinking that?
We want to prevent bias in any form, and one method for fighting bias is to ask more people about the process, people as diverse as possible. So let’s do just that: ask more people whether the decision was bias-free.
To serve as a forcing function, we can adopt a rule: another member needs to be asked to review a rejection.
This rule is too simple for a solid and reliable bias review process, but at this point we don’t have enough experience to know what would work better. We’ll apply the maxim that premature optimisation is the root of all evil and stop optimising further—at least until we have one mentorship published, one applicant, one rejection. Potential improvements include making it transparent who reviewed a decision and why it was judged unbiased, as well as adding multiple reviewers (third-party or independent ones, chosen at random so that they are more diverse), and so on.
More radically: we can ask the person who was rejected whether they think the mentor’s decision was fair. Or even more radically—and probably where we want to head if we become more popular: ask everyone what they think, i.e. let the mentor, along with all applicants, together decide the ranking and who should fill the mentorship position(s) and/or get priority.
This is yet another example of how highly probable it is for good intentions to develop horrible institutions. It’s yet another example of how something transparently benevolent, such as freely giving out time and knowledge, can produce unfair discrimination.
The reason is that knowledge and experience have power—and when giving out power, one has to be careful to divide it fairly[^power].
Once we abstract it as such, as a power distribution process, many analogies come to mind; for example, a state government that wants to be objective in giving out money only to those who need it most. It is well known how terrible even the state-of-the-art solutions to this problem are.
[^age]: Maybe age is part of the next wave. Silicon Valley is blatantly ageist, and in the coming decades there will be many more exceptional engineers older than 50 or 60. Will Silicon Valley (or its replacement) continue to be ageist?
[^racism]: The phrasing “become racist” is deliberate: not because I take it for granted that we’re not already racist, but because our concern should lie in the process of becoming racist, rather than in only noticing once we already are.
[^define]: Let’s define bias as the prejudice in favor of or against one thing, person, or group compared with another, in a way considered to be unfair.
[^power]: Maybe that’s also a reason not to do it: refrain from sharing power because you might share it with the wrong people, and then the power imbalance will be even worse, with you as an active agent of immorality.