When the Default User Is Male
What I keep finding in the codebases I work on, and what changes when somebody who isn't the assumed default sits down to read the code carefully.
The first thing I want to tell you about is the moment I watched a function return 80 kg, 180 cm, male, active, maintain, on a Wednesday evening at my own desk, with the home screen of a calorie app sitting open on my phone next to me, telling somebody on the test profile that she could eat five hundred more calories a day than she actually could.
That value, 80 kg and 180 cm and male and active and maintain, was the hardcoded fallback the app fell back to whenever no user had been saved yet. It is a perfectly reasonable-looking line of code. It is also a near-perfect specification of who the people who wrote it imagined the user to be. Someone tall, someone heavier than most women, someone male, someone active, someone trying to maintain their current weight rather than lose or gain it. Each of those defaults is small in isolation. Together they are a quiet portrait of the user the codebase had assumed it was being written for.
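A fallback like that tends to look something like the following sketch. This is my reconstruction in Python, not the app's actual code; every name and the storage shape are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    weight_kg: float
    height_cm: float
    gender: str
    activity_level: str
    goal: str

def load_profile(store: dict) -> UserProfile:
    """Return the saved profile, or a hardcoded default if none exists."""
    saved = store.get("profile")
    if saved is not None:
        return UserProfile(**saved)
    # The "reasonable-looking" line: a complete portrait of the
    # assumed user, returned for everyone who hasn't onboarded yet.
    return UserProfile(
        weight_kg=80.0,
        height_cm=180.0,
        gender="male",
        activity_level="active",
        goal="maintain",
    )

profile = load_profile({})
print(profile.gender)  # prints "male"
```

The point is not that a fallback is wrong; it is that every field of the fallback is a choice, and each choice names who the code expects to be running it.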
I have come to call this male-by-default behaviour, and I find it everywhere I work, in code written by people I love and respect, with no malice anywhere in it, again and again and again. It is one of the patterns I notice most often in the codebases I am asked to look at, and I want to write about it carefully here, because I am still working out exactly what it means and what the kind response to finding it actually is.
What I keep finding
The hardcoded male dummy was one example, and there were two more in the same codebase, sitting quietly in adjacent functions, none of them doing anything that anybody had ever flagged as wrong. The Physical Activity factor function took a gender argument and branched on gender == male, treating male as the explicit case and female as the implicit “everything else” else-branch. I have written elsewhere about the bugs that branching caused; what I want to name here is the shape of the assumption underneath them.
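The shape of that branch, sketched in Python. The function name, the structure, and the factor values are all my reconstruction for illustration, not the original code or real calibration numbers:

```python
# The pattern as found: male is the named case, and everyone
# else falls through to the female path by omission.
def activity_factor_as_found(gender: str, level: str) -> float:
    factors_male = {"sedentary": 1.2, "active": 1.55}     # placeholder values
    factors_female = {"sedentary": 1.1, "active": 1.45}   # placeholder values
    if gender == "male":
        return factors_male[level]
    else:
        # "female" here really means "anyone who is not exactly 'male'":
        # non-binary users, typos like "Male", unset fields, all of them.
        return factors_female[level]

# One possible reshaping: every supported case is named, and an
# unexpected input fails loudly instead of silently becoming female.
def activity_factor_explicit(gender: str, level: str) -> float:
    factors = {
        ("male", "sedentary"): 1.2, ("male", "active"): 1.55,
        ("female", "sedentary"): 1.1, ("female", "active"): 1.45,
    }
    try:
        return factors[(gender, level)]
    except KeyError:
        raise ValueError(f"no calibration for gender={gender!r}, level={level!r}")
```

The rewrite is not cleverer code; it simply refuses to let one gender be the foreground case and everyone else be an unnamed fall-through.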
In each case, the original developer had a default user in their head, and that default user looked like them. There is nothing strange about this on its own. It is what most of us do most of the time. The strangeness is what happens when an entire engineering team is more or less the same kind of person, and so the default user that lives quietly in the codebase becomes the only user the codebase actually serves well. Everybody else gets a slightly degraded version of the feature, or a silently wrong calculation, or a path through the app that has never been exercised by anybody who really cares about whether it works.
Once you start noticing this, you find it in places that have nothing to do with calorie math. Test fixtures with male names by default, except when the test is specifically about a woman’s case. Sample data populated with male profiles unless you set it otherwise. Error messages that say “he” when the user’s pronouns have not been collected. Avatar shapes that show short hair and broad shoulders by default, with longer hair and other shapes available only if you go and find them. Calibration assumptions in fitness apps that quietly assume a male body composition unless told otherwise. On-call alerts designed for an engineer with the slack to investigate them, who is assumed to be available and unencumbered when the page comes through, in a way that quietly maps onto a particular kind of life and a particular kind of household responsibility.
The same pattern repeats across other axes of who the codebase has assumed it is being written for, and I want to name those too, because they all share the same underlying shape. Sign-up forms that assume two parents of different genders for a household, and quietly fail when the household is two mums or two dads or a single parent or a chosen family. Relationship status fields that make “married” the natural default and “single” or “partnered” the second-class option. Sample data with skin tones that cluster around one narrow range, and names that all sit inside one cultural tradition. Colour choices that look fine to most eyes and become unreadable to the eight per cent of men and half a per cent of women who have some form of colour vision deficiency. Accessibility paths that are tested only on default abled bodies, with screen-reader behaviour considered only when somebody on the team uses a screen reader. UX patterns that quietly assume a kind of cognitive bandwidth that is harder for a tired or neurodivergent brain to bring on a hard day. Premium tiers that price the useful version of the app at a level that makes it inaccessible to anyone for whom money is not abundant.
None of these things are decisive on their own. None of them are an emergency. Every one of them is the kind of small decision that a busy engineer makes without much thought, because there is no obvious reason to think harder about it in the moment. And every one of them, taken together, adds up to a codebase that works best for the people it was implicitly written for, and works less well, in ways the maintainers cannot easily see, for everyone else.
Patriarchy as a pattern in the code
I want to use the word patriarchy here, not because I think it is the most comfortable word, but because I think it is the most accurate one for what is actually happening, and I do not want to dress it up in a softer term that obscures the shape of it.
Patriarchy in software does not look like deliberate exclusion. I have almost never seen anyone deliberately write code that is hostile to women or to anybody else, and I want to say that plainly so that nobody reading this thinks I am pointing fingers at colleagues. Patriarchy in software looks, almost always, like the absence of pushback. The default user is male because the people writing the code are mostly male, and the people reviewing the code are mostly male, and the people testing the code are mostly male, and so the assumption never has to defend itself. Nobody asks why the dummy is male. Nobody asks why the sample data skews the way it does. Nobody asks why the calibration assumes a particular kind of body. The question simply does not arise, because there is nobody in the room for whom the answer would be obviously wrong.
The harm is not in any single decision. The harm is in the slow compounding of a thousand small assumptions, each of them individually defensible, into a piece of software that quietly tells anybody who is not the assumed default that they are not who the app was written for. Most of those people will never file a bug about it. They will, as I wrote about in the cross-platform iOS post, simply close the app and stop using it, and the team will never know why their retention numbers look the way they do.
What changes when somebody not-default reads the code
The first thing that changes, when somebody who is not the assumed default sits down to read a codebase carefully, is that the implicit assumptions stop being implicit. They become visible, sometimes uncomfortably so, in a way that cannot be unseen once it has been seen.
I notice the male dummy because the male dummy is not me. I notice the gender-keyed if-branch because the implicit default it falls through to is the half of the binary I am closer to, and I can feel which half of the equation has been written for the foreground and which half has been left in the background. I notice the pronoun in the error message because the pronoun is not mine. I notice the avatar default because the avatar default is not what I look like. None of this is a special skill. It is simply what happens when the default reader of the codebase is someone the codebase was not written to expect.
This is the part that I want to be careful about, because every kind of marginalised perspective catches a different set of things, and none of them is interchangeable with the others. A woman on the team will catch the male-default patterns I have been writing about here. A queer engineer will catch the assumed-straight patterns in relationship fields and family forms and the marketing copy for the premium tier. A disabled engineer will catch the accessibility gaps that have been quietly there for years, and that nobody else on the team can see because nobody else uses a screen reader or a switch device or a magnifier or a body that gets tired before lunchtime. A neurodivergent engineer will catch the UX patterns that assume a kind of focus and task-switching that not every brain has on a given day. An engineer of colour will catch the sample-data assumptions and the skin-tone defaults and the form-validation rules that were written without anyone non-white in mind. A working-class engineer will catch the premium-tier decisions and the assumptions about disposable time and the pricing models that read as natural to people who have never had to count the cost of a subscription. None of these perspectives substitutes for any of the others, and I want to be honest about that here, because the temptation is to treat one marginalised person on the team as a stand-in for everybody who is not the default, and the truth is they are not.
What I have learned, over the years of being one of the people who is not the default in a lot of the rooms I have been in, is that the noticing is partial and specific and bounded. I catch the male-by-default patterns because they are the ones I am closest to. I catch some of the queer patterns because of who I love and who I cook for. I catch some of the caregiving patterns because of my partner and the daily texture of our shared life. I miss most of the patterns I do not have direct experience of, and I have to lean on colleagues who do for those, in the same way they have to lean on me for the patterns I see. The richness comes from the differences, not from any single one of us being a universal proxy for everybody who has ever been excluded.
The second thing that changes is that the test coverage for non-default cases starts existing at all. I write tests for the female and non-binary cases of TDEE because those are the cases I most need to be sure about. The team would have written tests for the male case in any event, because the male case was the path the original code lived on. The new tests I add are not redundant; they are the first time those code paths have ever been pinned to a specific numerical answer, and the coverage they give the codebase is permanent, in the sense that the next person whose calorie budget depends on those code paths does not have to be the person who finds out the path is broken.
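Pinning a path to a specific numerical answer looks, concretely, like tests that assert exact values on every branch. The sketch below uses the Mifflin-St Jeor BMR equation, which is a real and standard formula; the non-binary handling shown, taking the midpoint of the two sex constants, is one possible policy and my assumption for illustration, not something the post prescribes:

```python
def bmr_mifflin_st_jeor(weight_kg: float, height_cm: float,
                        age: int, gender: str) -> float:
    """Basal metabolic rate via Mifflin-St Jeor.

    The midpoint constant for non-binary users is an illustrative
    policy choice, not part of the published equation.
    """
    base = 10 * weight_kg + 6.25 * height_cm - 5 * age
    offset = {"male": 5.0, "female": -161.0, "non-binary": (5.0 - 161.0) / 2}
    return base + offset[gender]

# The male path: the one the original code lived on, and the only
# one that would have been tested anyway.
assert bmr_mifflin_st_jeor(80, 180, 30, "male") == 1780.0

# The other branches, pinned to an exact number for the first time.
# The next person whose calorie budget depends on these paths does
# not have to be the person who discovers they are broken.
assert bmr_mifflin_st_jeor(60, 165, 30, "female") == 1320.25
assert bmr_mifflin_st_jeor(70, 172, 30, "non-binary") == 1547.0
```

Once those assertions exist, any future change that silently reroutes a non-male user through the wrong constant fails a build instead of failing a person.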
The third thing that changes is harder to describe, but I want to try anyway. The shape of conversations on the team starts to shift, slowly, when somebody on the team is asking different questions in code review. “Who is this default for?” “What happens if the user is not in the assumed shape?” “Whose body is this calibration assuming?” “What does this notification say to somebody whose pronouns we have not collected?” Those questions, asked gently and in good faith, are not aggressive. They are the questions an engineer asks when she is doing her actual job carefully, and they get easier to ask the more often somebody on the team asks them. The codebase becomes more careful, slowly, in a way that benefits everyone who uses it, including the people who would otherwise have been the default the code was written for.
What kindness in the work actually looks like
I want to say something now that I keep coming back to in my own thinking, which is that this kind of work is not anybody’s punishment, and it is not a separate piece of work bolted on at the end of a sprint when somebody finally insists. It is part of the ordinary daily work of writing software carefully. The same impulse that asks “have I tested this on iOS?” can ask “have I tested this with a non-male user?” The same impulse that asks “what happens if the network drops?” can ask “what happens if the user does not look like the dummy?”
Nothing about that is hard, in the abstract. What makes it hard is that it requires somebody on the team to ask the question, and the question is most likely to be asked by somebody who is not the assumed default. Which is one of the reasons it matters who is on the team, and one of the reasons I keep finding myself protective of the women and non-binary engineers I work with, and one of the reasons I am gentle with myself about the times I have noticed something my colleagues did not, because that noticing is part of what I am there for.
If you are reading this and you are an engineer who has not been asking those questions, I am not writing this to make you feel bad. I am writing this to say: the questions are askable, the noticing is learnable, and most of the work happens by sitting with the discomfort of asking “who is this code unkind to?” before you click the merge button. Almost every team I have worked on has improved, slowly and quietly, when somebody started asking it.
The other thing I want to say, as plainly as I can, is that the answer to all of this is not to recruit a single woman or a single queer engineer or a single disabled engineer onto a team and then expect that one person to catch everything. That is unfair to the person, it is bad engineering, and it does not actually work. The answer is to build teams where multiple kinds of people who are not the default are present together, in real numbers, with enough seniority to be heard, so that the questions get asked across many different axes of perspective at once and the burden of asking them does not land on any single person. Until we are doing that, every codebase is going to keep having quiet pockets of unkindness in the places that none of the people in the room had any reason to look at.
Where I keep landing
I am still thinking about how often this pattern surfaces in the codebases I work on, and I do not have a clean theory of it yet, beyond what I have written here. What I have noticed is that the work of finding the male-by-default assumptions in code is, almost always, the work of finding what the system has been quietly assuming and then deciding whether the assumption still holds. That is the same work I named in the non-binary calorie tracking post. It is, in the end, what good engineering should be doing for everyone, all the time, and not only when somebody who is not the default sits down to read the code.
If you are an engineer who has ever opened a piece of software and quietly recognised yourself as not the assumed user, I see you. I have been you, and I keep being you, in code I love and code I tolerate and code I have to work with for reasons I did not choose. I want to be writing code that does not do that to anybody, and I am still learning how to do it well, and I think that is most of what taking this seriously actually means.
The default user does not have to be male, or straight, or abled, or wealthy, or any of the other defaults that quietly get baked into software when nobody in the room is in a position to notice. It is just easier for those defaults to be there, when the rooms look the way the rooms tend to look. That is the shape of patriarchy in software when you trace it carefully, and it overlaps with the shape of every other system of default-by-omission that we live inside, and it is the shape of the work I most want to keep doing, with as many different kinds of careful noticing colleagues alongside me as I can find.