There's a simple fix for removing discrimination in hiring practices that no one seems to notice. Remove all demographic questions from the application. Hide the name and gender and attach an applicant ID. It's as easy as that. Every job should be looking for the most qualified individual regardless of race, nationality, religion, and sex. Demographics in the application are a recipe for disaster on both sides of the aisle.
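As a rough sketch of what that could look like in practice (all field names here are hypothetical, not taken from any real applicant-tracking system):

```typescript
// Minimal sketch: strip identifying fields from an application and key it by an opaque ID.
// The fields (name, gender, email, qualifications) are placeholders for illustration only.
import { randomUUID } from "crypto";

interface Application {
  name: string;
  gender: string;
  email: string;
  qualifications: string[];
}

interface AnonymizedApplication {
  applicantId: string;      // opaque ID shown to reviewers
  qualifications: string[]; // only job-relevant content survives
}

// Kept separately, visible only to whoever needs to contact the candidate later.
const idToIdentity = new Map<string, Application>();

function anonymize(app: Application): AnonymizedApplication {
  const applicantId = randomUUID();
  idToIdentity.set(applicantId, app);
  return { applicantId, qualifications: app.qualifications };
}
```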
Everything is easy until you account for the real world.
A disabled person who has to request accommodations for the application process will immediately be outed for having a disability. The same applies for people who speak different languages.
Beyond that, the application is only one place in which discrimination occurs.
- It also happens during interviews which are much harder to anonymize.
- It also happens in testing and requirements that, while not directly correlated to job performance, do serve to select specific candidates.
- It also happens on the job, which can lead to a field of work not seeming like a safe option for some people.
- It also happens in education, which can prevent capable people from becoming qualified.
Lowering the bar is not the right answer (unless it is artificially high) but neither is pretending that an anonymous resume will fix everything.
Many (or most) vision, hearing, and speech impairments would likely be disqualifying for ATC if they were severe enough to need accommodation during an interview. Mobility impairments could likely be reasonably accommodated, though; someone without use of their legs could work in an ATC facility that can be accessed without stairs, which would exclude some towers but not all of them. The workstation height may need to be adjustable as well, but that's not an unreasonable accommodation either.
Let's see. The OP didn't specify they were talking about ATC; I gave two examples of ways you could de-anonymize resumes in the normal application process, and I'm sure there are others. And glad to hear you don't think people with cancer or those who use wheelchairs should be allowed to work in ATC, I guess.
The FAA was already not allowed to ask employees about their demographics. The article you're commenting on states that the actual problem was that the FAA added a new biographical questionnaire to the ATC hiring process, which had strangely weighted questions and a >90% fail rate. Applicants who failed the questionnaire were rejected with no chance to appeal. Employees at the FAA then leaked the correct answers to the questionnaire to student members of the National Black Coalition of Federal Aviation Employees to work around the fact that they couldn't directly ask applicants for their race. Here's a replica of the questionnaire if you're interested: https://kaisoapbox.com/projects/faa_biographical_assessment/
My company's DEI program effectively does this. The main tenets are:
- Cast a wide recruiting net to attract a diverse candidate pool
- Don't collect demographic data on applications
- Separate the recruiting / interview process from the hiring committee
- The hiring committee only sees qualifications and interview results; all identifying info is stripped
- Our guardrail is the assumption that our hiring process is blind, and our workforce demographics should closely mirror general population demographics as a result
- If our demographics start to diverge, we re-eval our process to look for bias or see if we can do better at recruiting
The separation allows candidates to request special accommodations from the interview team if needed, without that being a factor for the committee making the final decision.
Overall, our workforce is much more skilled and diverse than anywhere else I've worked.
> Our guardrail is the assumption that our hiring process is blind, and our workforce demographics should closely mirror general population demographics as a result
> If our demographics start to diverge, we re-eval our process to look for bias or see if we can do better at recruiting
These are not good assumptions. 80% of pediatricians are women. Why would a hospital expect to hire 50% male pediatricians when only 20% of pediatricians are men? If you saw a hospital that had 50% male pediatricians, that means they're hiring male pediatricians at 4x the rate of women. That's pretty strong evidence that female candidates aren't being given equal employment opportunity.
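To make that arithmetic concrete, here's a rough back-of-the-envelope check, assuming (as a simplification) that the applicant pool mirrors the field:

```typescript
// If 20% of candidates are men but 50% of hires are men,
// how much more often is a male candidate selected than a female one?
const maleShareOfCandidates = 0.20;
const maleShareOfHires = 0.50;

const maleSelectionRate = maleShareOfHires / maleShareOfCandidates;               // 2.5
const femaleSelectionRate = (1 - maleShareOfHires) / (1 - maleShareOfCandidates); // 0.625

console.log(maleSelectionRate / femaleSelectionRate); // 4 — men selected at 4x the rate of women
```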
A past company of mine had practices similar to yours. The way it achieved gender diversity representative of the general population in engineering roles (a field that was only ~20% women) was by advancing women to interviews at rates much higher than men. The hiring committee didn't see candidates' demographics, so this went unknown for quite some time. But the recruiters choosing which candidates to advance to interviewing did, and they used tools like census data on the gender distribution of names to ensure the desired distribution of candidates was interviewed. When the recruiters' onboarding docs detailing all those demographic tools were leaked, it caused a big kerfuffle and demands for more transparency in the hiring pipeline.
I'd be very interested in what the demographic distribution of your applicants is, and how it compares against the candidates advanced to interviews.
Yea, when I have done hiring, the vast majority of applicants were of specific races and demographics. It isn't a private company's job to skew hiring outcomes away from the demographics of the incoming pool of qualified applicants. If you have 95% female applicants for a position I would expect that roughly 95% of hires are going to be female and vice versa.
I think it is damaging when hiring outcomes are skewed as well, since it undermines the credibility of those who got hired under the easier conditions fabricated by the company.
I too agree with the grandparent post that we should try to scrub PII from applications as much as possible. I do code interviews at BIGCO, and for some reason recruiting sends me the applicant's resume, which is totally irrelevant to the code interview and offers more opportunities for biases to slip in (e.g., this person went to MIT vs. this person went to a no-name community college).
> If you have 95% female applicants for a position I would expect that roughly 95% of hires are going to be female and vice versa.
I would disagree for the most part. As mentioned above, there are roles where you'll see gender bias that may not be addressable. In the OB/GYN example, I understand some women would only be comfortable with a doctor that is also a woman. That's not necessarily addressable by shoe-horning in male doctors. But again, that can be accounted for in DEI programs.
It's also more understandable for non-remote jobs. Some areas have staggeringly different demographics that could only really be changed by relocating candidates, which isn't feasible for all businesses. Mentioning this specifically as my company is fully remote.
Otherwise, in my opinion, a candidate pool that is 95% some demographic shows a severe deficiency in the ability to attract candidates.
If I'm interviewing for pharmaceutical technicians, and my goal is to give all candidates equal opportunity for employment, why would I expect something vastly different from 87% women? If the candidate pool for pharmaceutical technicians was somehow 50/50, then it'd indicate a severe deficiency in attracting female candidates, on account of the massive underrepresentation relative to the workforce of pharmaceutical technicians.
> These are not good assumptions. 80% of pediatricians are women. Why would a hospital expect to hire 50% male pediatricians when only 20% of pediatricians are men? If you saw a hospital that had 50% male pediatricians, that means they're hiring male pediatricians at 4x the rate of women. That's pretty strong evidence that female candidates aren't being given equal employment opportunity.
We track these, but don't establish guardrails on data that fine-grained.
In your example, it would be balanced by a likely over-representation in urology by male doctors. But when looking at doctors overall, the demographics tend to balance out, with the understanding that various factors may affect specific practices.
To give you a more solid answer: in our data we see that men are a bit overrepresented in our platform engineering roles, while women are a bit overrepresented in our data science and ML roles.
General backend/frontend roles are fairly balanced. Overall engineering metrics roughly fit our guardrails. We look at the same for management, leadership, sales, and customer support.
I don't have direct data on the recruitment -> interview process on hand. I work on the interviewing side though, and can tell you anecdotally that I've run dozens of interviews and overall haven't noticed a discrepancy in the candidates I've seen. I can also say that of those dozens, I think I've only advanced 2 candidates to the hiring committee. So we seem to err on sending a candidate to interview vs trying to prematurely prune the pool down.
> To give you a more solid answer: in our data we see that men are a bit overrepresented in our platform engineering roles, while women are a bit overrepresented in our data science and ML roles. General backend/frontend roles are fairly balanced. Overall engineering metrics roughly fit our guardrails. We look at the same for management, leadership, sales, and customer support.
So you have slightly more than 50% women in data science, a field that's 15-20% women [1]. Likewise, software development is ~20% women. But your frontend and backend roles have 50/50 men and women. You're achieving results representative of the general population, but you're obtaining a very large overrepresentation of women relative to their representation in the workforce. We're talking overrepresentation by a factor of four or five.
All of the fields you listed are ~80% male. This isn't like a hospital that's equally comprised of urologists and OB/GYNs. It's like a hospital exclusively comprised of urologists, but somehow hiring 50% women.
> I don't have direct data on the recruitment -> interview process on hand. I work on the interviewing side though, and can tell you anecdotally that I've run dozens of interviews and overall haven't noticed a discrepancy in the candidates I've seen.
Discrepancy is a relative statement. What is the gender breakdown of the candidates you've interviewed? Remember, if the software developers you're interviewing are 50/50 men and women, that is representative of the general population but it's a 4x overrepresentation of women relative to their representation in the field. If by "no discrepancy" you mean "no discrepancy relative to the general population" it sure sounds like female applicants have a much better shot at getting interviewed. If you're seeing 50 / 50 male and female interviewees in a field that's 80% male, you really ought to question whether recruiters are using gender as a factor in deciding which applicants to advance to interviews.
Is your company's goal to achieve representation equitable with respect to the general population, even if it means applicants from one gender are significantly disadvantaged in interviewing? Or is it to give equal employment opportunities to candidates, regardless of their gender? It sure sounds like your company is pursuing the former. I would highly suggest pushing for more transparency in the application-to-interview pipeline if you care about gender equality.
Notice how these solutions require a dedication to diversity throughout the process, from candidate sourcing to interviewing and all the way through, not some simple cut-and-paste answer.
The road to a more inclusive solution is dedicated effort, with continuous re-assessment at every step. There is no magical answer.
> Hide the name and gender and attach an applicant ID. It's as easy as that.
Doing so doesn't hurt. In my college, exams and coursework were graded this way.
Unfortunately with resumes it isn't so easy. If I tell you I attended Brigham Young University, my hobby is singing in a male voice choir, and I contributed IDE CD-RW drive support to the Linux kernel - you can probably take a guess at my demographics.
They could replace the university name with things like the university's median SAT admissions score, and admissions rate.
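A minimal sketch of that idea, with entirely made-up school names and numbers purely for illustration:

```typescript
// Hypothetical lookup: replace the school name with aggregate stats before reviewers see it.
// The entries below are placeholders, not real admissions data.
const schoolStats: Record<string, { medianSAT: number; admitRate: number }> = {
  "Example University A": { medianSAT: 1490, admitRate: 0.07 },
  "Example State College": { medianSAT: 1150, admitRate: 0.78 },
};

function redactSchool(school: string): string {
  const stats = schoolStats[school];
  return stats
    ? `median SAT ${stats.medianSAT}, admit rate ${Math.round(stats.admitRate * 100)}%`
    : "school stats unavailable";
}
```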
Previous work experience is relevant to the job, so it'd be hard to argue for removing that information, and working on older technology does imply a minimum age. Though I guess theoretically one could be a retro computing enthusiast.
> Demographics questions on job applications do not get shown to recruiters nor interviewers.
But recruiters can glean this information from names and other information on resumes. And yes, many do deliberately try to use this information to decide who to interview. Recruiters at one of my previous employers linked to US census data on the gender distribution of names in their onboarding docs. They also created spreadsheets of ethnically affiliated fraternities/sororities and ethnic names.
This is literally one of the things DEI programs push to implement. I have a friend who helps make hiring decisions and this is one of the changes their DEI push included, as well as pulling from a larger pool.
It just shows how much propaganda there is around DEI, you're saying we should get rid of DEI and replace it with the things DEI was trying to do. It really has become the new critical race theory.
It really depends on what the outcome is. There has been pro-DEI pushback on blind interviews and auditions when it resulted in fewer minorities being represented. One particularly famous case is when GitHub shut down their conference on diversity grounds after the blind paper review process resulted in a speaker slate that was all male. For another example, here's a pitch against blind auditions for orchestras to "make them more diverse": https://archive.is/iH2uh
In both those examples, why are you not giving the benefit of the doubt to the failed attempts?
If GitHub attempted to anonymize applications and resulted in a biased selection, can that not be a result of them failing to eliminate the bias they set out to?
Same with the blind auditions for orchestras, if they found that they weren't actually eliminating bias with the stated methods, why is it bad that they're not doing it anymore?
If you don't know anything about the other person and are selecting blindly, there's no bias by definition, so that particular selection is not biased regardless of what it looks like.
If the resulting distribution is not what you expected it to be, then there are two simple explanations: either your model was wrong, or the bias that causes the deviation is happening on an earlier stage in the process.
At the same time, if going from non-blind to blind changes the result, it means that there was bias that had been eliminated. The second article pretty much openly admits it and then demands that it be reinstated to produce the numbers that they would like to see.
Agreed. However, Progressives argue (wrongly, in my opinion) that taking into account a person's race and gender identity is the only way to guard against discrimination. They explicitly regard 'merit'-based hiring as racist and discriminatory.
Who is this "progressive" that for some reason is only allowed to speak in the most general of statements and not make claims backed up with evidence?
A lot of people seem to be arguing against caricatures of arguments that either they or people they trust have instilled in them, and not actual points being made by actual people...
I read your post three times and I still can’t parse it. If you’re asking which Progressives are making the arguments I described, go see Ibram Kendi among others. I am not caricaturing their position. This is what they believe.
This assumes that the hiring managers, or whoever, are honest people who are not racist or bigoted in any manner and only display incidental racism or subconscious bias. If I see an HBCU as an applicant's alma mater, it's almost certain that they are black.
You could share data on the college, like the median SAT score of admits or the admissions rate.
And I'm not entirely sure that omitting colleges entirely would be such a bad idea. Colleges apply selective admission criteria all the time, for athletes and legacy admits. Skills based screening would probably work better.
Well, then you have to account for certain jobs and hobbies being coded, as well as word choices in the personal statement. Once you blank all that out, though, we should be good to go.
> There's a simple fix for removing discrimination in hiring practices that no one seems to notice. Remove all demographic questions from the application.
For job applications? (How) do you also hide their appearance in the interview?
I doubt they're deleting the master copies or the master renders or anything, so the only thing that would be offloaded would be the "consumer" renders. A Blu-ray movie with no additional compression added is between 15-40 gigs.
A consumer like me has a 300 terabyte storage array, presumably Warner Bros has even more than that (and certainly could afford more than that), so it feels like 40 gigs per movie is basically nothing.
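For a rough sense of scale, using the numbers from the comment above and decimal units:

```typescript
// How many ~40 GB Blu-ray-quality renders fit in a 300 TB array?
const arrayTB = 300;
const movieGB = 40;
console.log((arrayTB * 1000) / movieGB); // 7500 movies, give or take
```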
There's an even worse option that his article doesn't include, and that's apps that have limited functionality compared to the web app. My bank is one of them. Here's your account and here's your most recent bank statement. Want to see more bank statements, add/remove an auto bill, or change your password? You need to go to the website for that. What the f**, why do you even have an app?
In some scenarios that’s really a difference without distinction though.
Say I have a key to my house attached to a chain, so it can be used to open the door but not leave the property, and then I secure it in a lockbox. If someone steals the key to the lockbox, they technically don't have access to the house key, but they can still rob my house.
Your scenario makes it so the house key doesn't matter in the end though; if they're able to get to the lockbox to use the lockbox key they're already in the house and thus already able to rob it regardless of whether they got the lockbox key. In the end your door lock did nothing for you at all. I don't get how that relates to using a PIN stored in the TPM to protect your actual password, other than suggesting "well your account can be hacked without even touching your device" which I mean yeah sure.
But in the end that PIN is still different from that Windows/Microsoft password. The PIN only works on that one device and gets totally invalidated after only a few failures. This is untrue of passwords which usually never get fully invalidated and are then used across multiple devices.
If you manage to find out that my PIN to log into device A with my Microsoft account is 1234, you don't have access to my Microsoft account in general or on device B. If you see I log in to device A with hunter42 (my Microsoft account password), you can now log in to my Microsoft account and every other device I'm using my Microsoft account on.
Is that a difference without distinction? I'd say that's quite a bit of distinction! And that's only one of the many differences!
Which is why I was careful to say that it was a difference without distinction only in some scenarios. Namely offline attack to a physical device.
In this scenario, even with the attempt restrictions, the attacker has a couple of chances at relatively easy guesses before falling back to the password protection. If we consider shoulder surfing, it's a lot easier to distinguish a four- or six-digit PIN than a password.
I'm aware the PIN doesn't give actual access to the credential and so doesn't impact online attacks. But that isn't the only scenario.
Incidentally, how much work is "in general" doing when you talk about the access to Microsoft services granted by the PIN + TPM? It isn't zero access, is it?
> how much work is "in general" doing when you talk about the access to Microsoft services granted by the PIN + TPM?
I mean, you can't just go to microsoft.com and log in knowing only my PIN on a single device. If you know my PIN for a device, but you don't have the device, you don't have access to my Microsoft account at all.
And if you have all my devices? And what if you have all my external security tokens? And what if you also have all my passwords? And what if you have a complete replica of every thought in my head? And what if what if what if what if...
Sure. Whatever buddy. Nothing is truly secure. If they guessed my password as well along with my device I'd be in an even worse situation. At least my PIN just disappears forever after a few failed attempts and requires that physical device.
Needing a physical device which wipes itself after a few failed attempts is more secure than having a password that could be used anywhere on any device however many times they want to guess.
> without distinction only in some scenarios. Namely offline attack to a physical device.
There is a distinction in this domain though, and it's pretty massive. For offline attacks at guessing credentials: if you fail the PIN a few times (three on most of my machines), the PIN gets cleared, never to be used again. Meanwhile, you can keep trying the password over and over; the account password on the device isn't getting cleared. So I can make the PIN pretty simple and easy to type in while making my regular password very long and complicated. It doesn't matter if it's a pain to type in, because it's not like I'm typing it in every time I walk away and come back to my computer.
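A minimal sketch of that retry asymmetry; the three-attempt limit and the password-fallback behavior here just mirror what's described above, and real Windows Hello/TPM behavior is considerably more involved:

```typescript
// Sketch: a device-bound PIN that invalidates itself after a few failures,
// while the account password stays usable (subject to throttling elsewhere).
class DeviceLock {
  private pinFailures = 0;
  private pinDisabled = false;
  private readonly maxPinAttempts = 3;

  constructor(private pin: string, private checkPassword: (pw: string) => boolean) {}

  tryPin(guess: string): boolean {
    if (this.pinDisabled) return false;
    if (guess === this.pin) {
      this.pinFailures = 0;
      return true;
    }
    if (++this.pinFailures >= this.maxPinAttempts) {
      this.pinDisabled = true; // PIN is gone until the real password re-enrolls it
    }
    return false;
  }

  tryPassword(guess: string): boolean {
    const ok = this.checkPassword(guess);
    if (ok) this.pinDisabled = false; // the password can re-enable the PIN
    return ok;
  }
}
```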
Except in this case it’s really important to learn how the implementation works because it has meaningful differences:
If you login to Google.com with a password, the remote server knows your password and if you are phished the attacker can use your password to access Google.
If you login to Google.com using a passkey secured by Windows Hello, your PIN or biometric check is between you and your computer, and the passkey is used for a public key exchange with Google’s servers. They do not know your PIN and you cannot be phished. That’s a transformative change.
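Roughly, the browser side of that exchange looks like the sketch below. It's a simplified use of the WebAuthn API; the relying-party ID is a placeholder, the challenge comes from the server, and the private key never leaves the authenticator.

```typescript
// Sketch of a passkey login in the browser. Windows Hello (PIN/biometric) gates the
// local signing step, and only the signed assertion goes back to the server.
async function signInWithPasskey(challenge: Uint8Array): Promise<Credential | null> {
  return navigator.credentials.get({
    publicKey: {
      challenge,                    // random bytes from the relying party
      rpId: "example.com",          // placeholder relying-party ID
      userVerification: "required", // triggers the PIN/biometric prompt locally
      timeout: 60_000,
    },
  });
}
// The returned assertion is sent to the server, which verifies the signature
// against the public key it stored at registration. There is no shared secret to phish.
```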
A bicycle and a semi-truck are both machines with rubber tires that move people and things at a faster speed than walking. The rest is implementation details.
X and Y are both Z. The rest is implementation details. Except sometimes "implementation details" makes the two pretty radically different in usage.
Incorrect. The PIN does not grant access to the service.
If all you have is the PIN, you don't get access to the service. Therefore, it's not the PIN that grants the access.
If you know my keepass database passphrase, but don't have the actual database file, do you have access to the services contained within?
And as acdha mentioned, the entire login workflow is radically different with security keys / passkeys. It's a radically different implementation of authentication with different guarantees.
Do you leave SSH open on port 22 with only password authentication? It's just the same as using SSH keys, just a difference in implementation.
> If all you have is the PIN, you don't get access to the service.
That depends what the service is. If the "service" is a session on my desktop PC, then it absolutely does grant access. You'll have to take my word that if I type my PIN into it, it will start an interactive session.
My kid wants to play minecraft, but he can't because he doesn't have the PIN. If he did have the PIN, he could play minecraft.
I am willing to believe that the implementation of the PIN is totally different from passwords, but in this use case, the user experience is identical. The "attacker" does NOT need the password.
It is still not the PIN in the same way the password to the password vault isn't the password to an account. If you had a physical TPM that got removed, your pin wouldn't do anything. If the TPM got reset in the BIOS, the PIN wouldn't work. It's a step in the authentication workflow, but the PIN itself is not the credential. If a person tried to RDP to that computer with the PIN, they wouldn't be able to access it.
If your kid fails the PIN too many times, the PIN gets disabled. No more PIN retries until the real password gets used. If they tried the password a bunch of times, they'd get a timeout but could come back in a few minutes and try again.
I mean, I get what you're saying about the PIN being the login from the user's perspective, but the under-the-hood nuance makes things pretty different in the end when you think more about what's happening.
Same thing with a fingerprint plus a passkey to some service. The fingerprint itself isn't the login; you can't just go to any phone, press your finger, and log in to the service. So the fingerprint isn't the login, it's a part of the process on that particular device to unlock that particular saved credential that logs you in.
Before iPhones had biometric authentication, a PIN was the only means to unlock the cryptographic key that protects your data on the phone. It still is; you can bypass Face ID and Touch ID at any time by entering your PIN.
So it's not like this is a new thing. It's the same concept, but applied to a PC as well.
If you’re going to speculate about ulterior motives, fill in the supporting details so people can tell you’re not just promulgating conspiracy theories.
So you think that Netflix has gone to Microsoft to start a multi-year industry-wide standardization process to change how people login because that’s easier than looking at their own log files?
Netflix didn’t crack down on shared passwords when they were growing rapidly but that’s not because they couldn’t.
I don't, but yes: many people seriously believe this is why the industry is moving to passkeys. It isn't logical, it isn't reasonable, but these are your customers.
Yes, if you're both using the same password manager. But, while you live in Silicon Valley bubbleland, most people don't. The world's most popular password manager is Excel; and sadly it does not support sharing passkeys (or, really, passkeys at all).
Baseless? Can you think of a reason why Netflix wouldn't support it for precisely this reason? Their campaign against account sharing is widely publicized. Do you think account sharing is easier or harder under passkeys? Just because it's a conspiracy theory doesn't mean it's false.
It’s baseless because it’s pure speculation without any evidence, or even a coherent argument for why they’d go to so much work for something they already do at much lower cost.
I think the argument is misunderstood here. I'm not saying this is the only reason that Netflix would be in favor of passkeys, just that it's one reason, not even the main one.
Here's the argument. I guess it's up to you whether you think it's coherent.
1. Netflix dislikes account sharing. They'd rather have two people pay for two subscriptions. They're a business, and are in favor of higher subscription numbers.
2. Passkeys make account sharing harder. Customer behavior modeling probably suggests that some fraction of account share-ers would create new subscriptions if they switched to passkeys.
3. Of all the reasons Netflix might be for or against passkeys, this one is in favor of them, via 1. and 2.
I think the argument was a flippant response by another user riffing off of a conspiracy theorist who misunderstood how passkeys work, and while I appreciate the effort you’ve made trying to salvage it I am skeptical that Netflix would be motivated enough to be part of their hypothetical Netflix/Google/Microsoft/Apple conspiracy but not enough to even implement passkey support.
That only works if the share target is using the same password manager.
If you asked most people "what password manager do you use" they would give you a blank stare; but sadly, the answer is rarely "I'm not using one". The answer is usually Apple or Chrome or whatever is built in and most convenient.
The real question, and the reason these issues still persist, is why the heck people are answering AND responding to these robocallers. They're making money somehow or it wouldn't be lucrative to keep doing it.
People get scammed every day - especially older folks and non-native speakers. There is a huge scam up here in Canada targeting immigrants informing them that their passports are being held by <local embassy> - the scam call isn't in English to minimize how quickly it gets reported and seems to rake in a fair number of folks.
It's really difficult to solve these problems through education alone.
From reading other comments, apparently phone calls now generate JSON web tokens to be authenticated, but the JWT is lost when switching over to TDM lines or coming from TDM. So why not only allow authenticated calls from SIP/VoIP lines to any destination, and only allow calls from TDM to TDM? That would get rid of any unauthenticated SIP/VoIP calls and not allow any spam coming from TDM lines into modern systems.
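For what it's worth, the signed-call scheme being described is roughly STIR/SHAKEN, where the SIP INVITE carries a signed token (a PASSporT, which is JWT-shaped). A minimal sketch of verifying such a token with the `jose` library; the key handling is a simplified assumption, since a real verifier fetches the signer's certificate from the URL in the token header and also checks attestation level and the calling/called numbers:

```typescript
// Sketch: verifying a STIR/SHAKEN-style signed token on an incoming call.
import { jwtVerify, importX509 } from "jose";

async function verifyCallerIdentity(identityToken: string, signerCertPem: string) {
  const publicKey = await importX509(signerCertPem, "ES256");
  const { payload } = await jwtVerify(identityToken, publicKey, {
    maxTokenAge: "60s", // these tokens are short-lived; reject stale ones
  });
  return payload; // originating/destination numbers and attestation info
}
```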
That's usually the case with companies that spout stupid buzzwords like this. You know what's not green? A data warehouse storing all those indexes. But they think they can gain followers with their buzzwords.
The teachers, the tools, and the curriculum are not the problem. The No Child Left Behind Act is the problem. When everyone passes regardless of whether they learned anything or not, literacy skills are going to go down. You used to stay in a grade until you passed that grade's curriculum; now everyone gets a pass. There are no consequences, resulting in no incentive to learn. Repeating 8th grade while all your friends move on to high school is a pretty good incentive to get your act together.
More incrementally different than foundationally different. ESSA was trying to address complaints about NCLB and allow more autonomy and flexibility. Neither is directly responsible for "everyone passes." Both were trying to stop "everyone passes" by adding a standard of accountability.
100% agree AI will ruin healthcare. I'm an IT director at a rural mental health clinic, and I see the push for AI across my state; it's scary what they want. All I can do is push back. Healthcare is a case-by-case personal connection, something AI can't do. It only reduces humans down to numbers and operates on that. There is no difference between healthcare AI and a web scraper on WebMD or Mayo Clinic.
I get that, but that's from the corporatization of clinics. They push for numbers, not client well-being. They want doctors to meet with x number of patients a day, and doctors don't control their schedule. So typically the execs meet and say, if we schedule every 20 minutes instead of 30 minutes, we can generate x more dollars. Which is why doctors just get you in and out as fast as possible: it's their head on the chopping block if they don't meet the measures. The best thing you can do is find smaller clinics, which are becoming increasingly rare because of the cost it takes to run one.