Interesting that this guy claims to be a "staff level SWE at a major company", yet one year ago he was on HN posting about what a horrible time he was having getting a SWE job: he had failed multiple interviews, including at FAANGs, was being rejected by even no-name small startups, and had blown several interviews because he couldn't handle the algorithm questions ... and yet within the last year he was supposedly hired at a "major company" for a staff-level senior coding position.
What I think is the strangest part of it is that they almost never respond to comments. They've only done it twice in their entire comment history (3 pages): once 2 years ago, where they talk about banging women, and once a few months earlier, talking about HFT (and the comment previous to that says they work at an HFT firm).
But I think I found the answer...
That's a mistake. A lot of people lie on their resumes.
Source: I've lied on every resume I've ever sent out.
- https://news.ycombinator.com/item?id=33903978
Something tells me they aren't the most honest person. That something is thw09j9m...
Seriously... why lie about these types of things on an anonymous forum? There's literally nothing to gain
It's not necessarily inconsistent though. People get rejected for so many different reasons, and the job market has been tough recently. And there's a post about getting lucky with the offer.
Is that a useful thought experiment? Claude benefits you as an individual more than a coworker does, but I find it hard to believe your use of Claude is more of a value add to the business than an additional coworker. Especially since that coworker will also have access to Claude.
In the past we also just raised the floor on productivity, do you think this will be different?
No that’s not true at all. Humans can deal with ambiguity and operate independently. Claude can’t do that. You’re trading one “problem” for an entirely different one in this hypothetical.
Isn't that what polishing 'the prompt' does? Refine the communication like an editor does for a publication? Only in this case it's instructions for how to get a transformer to mine an existing set of code to produce some sort of vaguely useful output.
The human factor adds knowledge of the why that refines the results. Not just any algorithm or a standard pattern that fits, but the correct solution for the correct question.
People talk as if communication overhead is bad. That overhead is what lets someone else substitute for you (or you for them) when the need arises, and it sometimes surfaces concerns earlier.
I get the point you are making, but the hypothetical question from your manager doesn't make sense to me.
It's obviously true that any particular coworker would be less useful to you than an AI agent, since their goal is to fulfill their own obligations to the rest of the company, whereas the singular goal of the AI tool is to help the user.
Until these AI tools can completely replace a developer on their own, the decisions to keep employing human developers and to pay for AI tools will not be mutually exclusive.
I'm there with you. At the govt contracting company I work for, we lost a contract we'd held for ten years. Our team was 10 to 15 employees, and we lost the contract to a company that's now doing the work with 5 employees and AI.
My company said we're now going to be bidding with smaller teams and promoting our use of AI.
One example of promoting the company's use of AI: a colleague created a prototype using ChatGPT and AntiGravity. He took a demo video of a govt agency app off YouTube and fed it to ChatGPT, which spit out all the requirements for the ten-page application; he then fed those requirements to AntiGravity and boom, it replicated/created the working app/prototype in 15 minutes. Previously, a prototype like that would have taken a team of 3 to 5 a week or more to complete.
For your answer to be correct for your employer, the added productivity from your use of LLMs must be at least as much as the productivity of whichever coworker you're having fired. No study I've seen claims much above a 20% increase in productivity, so either a) your productivity without LLMs was ~5x that of your coworkers, or b) you're making a mistake in your analysis (likely some combination of thinking about it from your perspective instead of your employer's and overestimating how helpful LLMs are to you).
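To make that arithmetic concrete, here's a minimal sketch using only the ~20% and ~5x figures mentioned above (the numbers are illustrative, not measured):

    # Rough break-even sketch, using the ~20% uplift figure cited above (illustrative only).
    coworker_output = 1.0    # one coworker's productivity, as the baseline unit
    llm_uplift = 0.20        # ~20% boost from LLM use, per the studies mentioned

    # For "keep the LLM, fire the coworker" to be neutral or better for the employer,
    # the uplift on your own output must cover the coworker's entire output:
    #   my_baseline * llm_uplift >= coworker_output
    required_baseline = coworker_output / llm_uplift
    print(required_baseline)  # 5.0 -> you'd need ~5x your coworker's baseline productivity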
It makes him (presumed) 20% more effective than his coworker makes him. Overall effectiveness of the team is not being considered, but that's why his manager isn't asking him :)
> And now I'm preparing for my post-software career because that coworker is going to be me in a few years.
Which implies they anticipate their manager (or someone higher up in the company) to agree with them, presumably when considering overall effectiveness of the team.
You would probably have the same answer if your boss said, "I have to get rid of one of your co-workers or your use of editing tools" - i.e. all editors. You either get rid of your co-worker or go back to using punch cards.
You would probably get rid of your co-worker and keep Vim/Emacs/VsCode/Zed/JetBrains or whatever editor you use.
All your example tells us is that AI tools are valuable tools.
In some companies, “one of your coworkers” has the skills to create & improve upon AI models themselves. Honestly at staff level I’d expect you to be able to do a literature review and start proposing architecture improvements within a year.
Charitably, it just sounds like you aren’t in tech.
But isn't living in a stable society, where everyone can find employment, achieve some form of financial security, and not be ravaged by endless rounds of layoffs, more desirable than having net productive co-workers?
I’ll make sure to pour one out in memory of all the lamplighters, the stable hands, night soil collectors, and coopers who can no longer find employment these days. These arguments were had 150 years ago with the advent of the railroad, with electricity, with factories and textiles. Even if you don’t have net productive coworkers, if there’s a more productive way to do things, you’ll go out of business and be supplanted. Short of absolutely tyrannical top-down control, which would make everyone as a whole objectively poorer, how would this ever be prevented?
The difference is that back then we were talking about a few jobs here and there. Now we are talking about the majority of work being automated, from accountancy to zoo keeping, with very little in the way of new jobs coming in to replace them.
By the way stable hands and night soil collectors are still around. Just a bit harder to find. We used to have a septic tank that had to be emptied by workmen every so often. Pretty much the same.
Whereas a government's responsibility is to ensure peace and prosperity for as many of its citizens as possible. These things will be at odds when increased profits for companies no longer coincide with increased employment.
Yes, believe it or not some of us still believe in this and vote accordingly. Aspirational, as it has always been, with the understanding that we will always fall short.
No, not Sweden where 40% of the population have been employed in some way by the Wallenberg family and its corporations in recent times. The other Nordic countries are not as egalitarian as they are presented either.
What's most useful to you is not necessarily most useful to the business. The bar for critical thinking to get staff at this company I've surely heard of must not be very high.
Why would a company pocket the savings of less labor when they could reinvest the productivity gains of AI in more labor, shifting employees to higher-level engineering tasks?
I am truly at a loss, both as a human being and as someone who has a manager, as to how a manager would ask that and get such an answer… what is the rationale, and what is the expectation?
Here's the thing: my manager won't need to do that. Windsurf SWE-1 is good enough for my use case and SWE-1.5 is even better. Combined with free quotas across OpenAI, Gemini and Claude, I don't really need to pay anything.
In fact I don't want to pay too much, to prevent the incoming enshittification
They're not obligated to enforce the non-compete. If you don't have any sensitive information to take to a competitor, they might not give you any garden leave.
OTOH, I've seen non-competes as long as 2.5 years from places like Citadel.
What's worse is actually those non-competes with a variable period. The company doesn't have to tell you in advance how long it will be; only when you hand in your resignation letter will they tell you. It entirely serves to make your job hunt more difficult.
> I've seen non-competes as long as 2.5 years from places like Citadel.
Congrats: You are part of the 0.01% of the industry. Did they also offer to pay your bonus during that period? Else, it looks like a shitty deal that I would never accept. I heard that Florida now has some weird state-specific rules about high income people with non-competes.
I mean it's kinda amazing, this was just in 2019 that this company was basically lying about being able to do this to get investment, yet now I can actually do it myself, right now, for free*. Incredible - where will we be in another 5 years?!!!
This is terrifying for me, how are you handling it? I don't think I can live like this, I am so behind on everything because I can't get myself to do anything.
If they wanna do research where they claim they did something novel, without showing that they didn't just "look it up" in their massive training set, then yes, they should share what is and what isn't contained within.
I think the big picture that many people are missing here is the motivation all these AI/tech companies have for buying up so many GPUs in the first place: achieving AGI/ASI.
And while some still try to portray a dedication/duty to AI Alignment, I think most have either secretly or more publicly moved away from it in the race to become the first to achieve it.
And I think, given that inference-time compute is so much cheaper than pre-training, the first to achieve AGI might have enough compute on hand from having been the first to build it that they would not need to purchase many more GPUs from Nvidia. So at some point, Nvidia's revenues are going to decline precipitously.
So the question is: how far away are we from AGI? Seems like most experts estimate 3-10 years. Did that timeline just shrink by 50x (or at least by some multiple) from these new optimizations from DeepSeek?
I think Nvidia's revenue is going to continue to grow at its current pace (or faster) until AGI is achieved. Just sell your positions right before that happens.
It doesn't feel like DeepSeek has a big enough breakthrough here. This is just one of many optimizations we're going to see over the next years. How close this brings us to "AGI" is a complete unknown.
The large investments were mainly for training larger foundation models, or at the very least hedging for that. It hasn't been that clear over the last 1+ years that simply increasing the number of parameters continues to lead to the same improvements we've seen before.
Markets do not necessarily have any prediction power here. People were spooked by DeepSeek getting ahead of the competition and by the costs they report. There is still a lot of work and some of it may still require brute force and more resources (this seems to be true for training the foundation models still).
Not that I believe it's likely to happen, but it seems incredibly fucking dangerous for there to be an ASI race with one winner. To the extent these companies believe it's possible, what are they hoping will be the outcome for humanity?
That they get to be the trillionaires with an untouchable moat? Wouldn't this be like creating a Kwisatz Haderach thinking you can control it, to borrow a Dune reference?
Tech market is rough. I think BigTech currently thinks that there's a better ROI spending $300k on compute (training an LLM) than $300k on an employee. All the startups I've interviewed with are HEAVILY leveraging LLMs (via Cursor) to keep engineer count and burn rate low.
I think it's going to blow up given the number of broken libraries and products out there. Management does not care until things really break. In 3-4 years, there will be a serious shortage of software developers and maintenance of the stack will be as expensive as ever.
This, plus in several years we will observe many MVPs generated (partially) by LLMs that will require fixing and further development. LLMs barely produce 1k lines of coherent code, riddled with bugs. Benchmarks are improving, but there is no breakthrough in handling real, large codebases. I have no idea about the future job market, but the quality of the "material" we work with will be quite poor.
I don't think these companies are generating MVPs using LLMs. They're just using LLMs to assist in coding. A human reviewer is there. And in a few years, LLMs might be so good that they can fix the mistakes made by 2024 LLMs.
To clarify, LLMs will lower the entry barrier for non-programmers to test new ideas (which is good). Later, those successful projects will require development by professional developers. I suspect that many of these projects will have lower quality than regular proof-of-concept projects today.
>I suspect that many of these projects will have lower quality than regular proof-of-concept projects today.
Exactly. It will be way worse than a hacky PoC, because with a human author you might be able to reason about the structure or logic based on the author and their intent, which is impossible in the case of an LLM author. Further, assuming the human author isn't hit by a bus or whatever, you can query them directly if anything is unclear. Good fuckin' luck asking an LLM for the reason it did something in the PoC.
>> This is the toughest market I've ever seen. I easily made it to on-sites at FAANG a few years ago and now I'm getting resume rejected by no-name startups (and FAANG). The bar has also been raised significantly. I had an interview recently where I solved the algorithm question very quickly, but didn't refactor/clean up my code perfectly and was rejected.
I've since landed one decent offer, but mostly got lucky (the sys design interview was about an obscure optimization problem that I specialized in for years - though I didn't let on that fact)
Between that time, I failed multiple interviews (always solving the question, but never quickly or cleanly enough).
Companies are incredibly slow to respond back (up to 4 weeks from time of application to first interaction with a recruiter).
Some companies are incredibly demanding (recruiter screen -> tech screen -> tech screen -> take-home test -> group discussion about take-home test -> behavioral / culture interview).
Don't think it's about race. It's just an employers' market. And if you refuse to jump through the hoops, somebody else will.
For reference, the last company on my resume is a top tier company that every recruiter has heard of.
> the sys design interview was about an obscure optimization problem that I specialized in for years - though I didn't let on that fact
Honestly I feel this situation is an invitation/opportunity to show your depth. I had a similar experience where I was asked to implement a box filter, which I did naively, and then asked to do it the clever way.
I remembered Haar classifiers https://docs.opencv.org/3.4/db/d28/tutorial_cascade_classifi... which solve the same problem, and I was familiar with the OpenCV implementation. I mentioned this and started to write code, but at that point the interviewer was much more interested in why I knew about some obscure old-school OpenCV method than in finishing the coding exercise.
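For anyone curious, here's a minimal sketch of the naive approach versus the prefix-sum trick that "the clever way" (and integral-image/Haar-style features) relies on; this is illustrative, not the interview code:

    # Naive box filter vs. prefix-sum version (the 1D analogue of an integral image).
    def box_filter_naive(xs, k):
        # Sum over each window of width k: O(n*k).
        return [sum(xs[i:i + k]) for i in range(len(xs) - k + 1)]

    def box_filter_prefix(xs, k):
        # Same window sums via a running prefix sum: O(n). Divide by k for the mean.
        prefix = [0]
        for x in xs:
            prefix.append(prefix[-1] + x)
        return [prefix[i + k] - prefix[i] for i in range(len(xs) - k + 1)]

    data = [3, 1, 4, 1, 5, 9, 2, 6]
    assert box_filter_naive(data, 3) == box_filter_prefix(data, 3)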
I've had some interviews stretch on for months and multiple interviews only to be rejected in the end. I'm just trying to concentrate on creating video games. I'll probably go get a job at a store in town soon. Pointless to keep trying in this market it seems.
Yeah the process these days is insane. I went through 4 rounds of interviews with one company, did really well and was expecting an offer, but they wanted me to come in for a full day of onsite work mixed in with more technical rounds.
I’m interviewing with like 10 companies right now and it feels like a full-time job; I have full days of interviews lined up for almost every day in the first 3 weeks of January.
This is actually identical to my experience in 2021/2022 when the market was red hot. I literally did treat it as my full-time job, failed a lot, and rarely had fewer than 4 rounds of interviews, which it seems most of the good companies require.
This is the toughest market I've ever seen. I easily made it to on-sites at FAANG a few years ago and now I'm getting resume rejected by no-name startups (and FAANG).
The bar has also been raised significantly. I had an interview recently where I solved the algorithm question very quickly, but didn't refactor/clean up my code perfectly and was rejected.
I can’t solve everyone’s problems, but I can say that I have 1-2 (depending on whether an outstanding offer is accepted) open principal engineering roles directly working for me. The process will be talking to the recruiter, then two tech interviews, then an interview with me. This will take you a couple weeks to go through because our team is tiny and we’re fairly busy. The team is 100% remote and async. All the stuff in the JD applies; the one update that we haven’t gotten out yet is that you must know Terraform really well. Comp is scaled for engineers outside the USA.
My observation as well. Going remote makes it harder to switch jobs because you lose the personal connections and networks that get developed working on site. Going through the front door when applying for opportunities is usually more difficult than the back door - back door as in getting introduced via your network. As a result, I've learned that a good recruiter is now a necessity, because they have those relationships with hiring managers, which can put you at the front of the line and also prep you better for the process.
I don't mean to sound harsh, but it does sound like the front/back door metaphor is code for filtering people by the social biases that form much of the personal connections and networks you mention.
The effect of remote work seems to be leveling the entry point for everyone: an advantage for people who got discriminated against before and a disadvantage for people who enjoyed their privilege for far too long.
The comments above you replied to were saying something different to your interpretation, I believe.
They were saying that by going remote (over the last 5 years), people haven't formed as many deep connections through legitimate social interaction at jobs and through their reputation. You could say this is the "good" or "reputation-based" way of getting jobs through the back door, and it's not all just likeability. So there is less of this kind of back-door hiring now. I don't know if this is true or not.
But the back-door hiring for nepotism or "my brother's girlfriend" is still as present as ever, since those connections aren't predicated on real-life, in-person social interactions.
Discrimination or disadvantage doesn't come into either of these hiring methods innately. Subconscious bias could exist for back-door recommendations, of course.
So if anything, it's not really leveling anything. It just means that for increasingly experienced engineers, the benefits don't accrue as much when looking for remote work. If anything, these back-door references could help people (e.g. from poor families) get a shot despite other innate cultural differences (e.g. style of speaking). And nepotistic hires will remain.
Think it has to do with trust. If a hiring manager, for example, has a contributor who is good and recommends a former colleague, that resume can get to the front of the line. There are plenty of engineers who are interview/leetcode ninjas or have fluffy resumes. Getting someone to vouch for the work ethic, skills, etc. of a candidate carries a lot of weight. That's just the reality. It's less risk of a bad hire if you have firsthand knowledge of a candidate.
Aye. Of my 5 jobs, 4 were via personal connections; 2 were basically bumping into strangers and chatting them up, one at a wedding, the other at a Linux User Group.
My current job is the only one I applied for. Even then, dudes from previous jobs have hit me up in the past for gigs in the last 6 months.
Who you know matters, even w/r/t code, so know people.
my experience must be an outlier. I’ve applied and gotten the job via the “front door” 3 times in my 10 year career so far. This is in the US at large companies, 2 of them with competitive salaries, 2 fully remote. The last one was at the end of 2022, right as the market was turning to shit.
I probably _could_ get a job more easily today because i’ve made connections over 10 yrs. But i’d probably still try the front door first because i’m stubborn lol. But the resume needs to be PERFECT when there’s so much competition, especially for remote roles. Everything on 1 page, needs to be very easy for a hiring manager to visually scan in 10 seconds, to make a quick decision. And obviously add necessary keywords for the stupid “resume filters”. It’s a real chore…
I will say though, there’s something really rewarding about getting the job you want without asking for favors.
It’s tough out there today. Many experienced engineers were laid off, i bet it’s brutal.
Essentially post-school, every job I’ve been hired for was through strong personal connections. And I certainly feel I don’t develop the same kinds of connections remotely. Maybe some folks who grew up in that kind of remote environment are better adapted.
> The bar has also been raised significantly. I had an interview recently where I solved the algorithm question very quickly, but didn't refactor/clean up my code perfectly and was rejected.
Bearing in mind the implicit comparison to "a few years ago", a few years ago I interviewed with Google, at which point the recruiter told me I'd passed the interview, wished me congratulations (!), told me to expect a job offer, and finally, ~6 weeks later, informed me that while I'd "passed" the interviews, my scores were too low for them to make an offer.
It remains a mystery what it might mean to "pass" without actually advancing beyond the threshold you passed.
There will be a pool of candidates who "passed", and hiring managers can search through that pool to decide who to take for their team. If there are fewer openings than candidates who "passed" or hiring managers aren't interested in taking someone, then these folks won't get an offer.
As someone with a couple of open (not engineering) reqs, I can agree this is the toughest market I’ve ever seen. For both reqs, we received many highly qualified candidates with relevant skills, often in a relevant domain.
I’ve never seen this before. It’s always been very hard to find a single highly-qualified candidate.
The media seems to think the great resignation has ended, but really it’s just that jobs have dried up. Professionals, especially good ones, are eager to leave their current jobs, but the pool is so small, there’s just nowhere to go.
And with so many people looking for new positions, if you’re not highly keyword targeted, you won’t go anywhere.
They already sort of do. Every time you apply to a job on LinkedIn, even if you click through to the company's ATS, LinkedIn tries to upsell their premium services, which allegedly promotes your resume. I saw no difference in the applications I submitted during the trial, which I assume is similar to the results folks see with super likes on Tinder.
I think a lot of this comes down to AI. In a recent hiring round we experienced multiple candidates using AI tooling to assist them in the technical interviews (remote only company). I expect relationship hires to become more common over the next few years as even more open-discussion focused interview rounds like architecture become lower signal.
If you're giving remote interviews, your loop should assume candidates can use AI. It's like giving a take-home math test that assumes people won't use calculators at this point.
I disagree. We pretty explicitly ask candidates to not use AI.
While it's fine when doing the job, the purpose of the interview is to gauge your ability to understand and solve problems. AI can help you with that, but understanding how to do it yourself signals that you'll be able to solve other, more complex, wider-spanning problems.
Just like with a calculator - it's important for candidates to know _why_ something works and be able to demonstrate that as much as them knowing the solution.
Asking candidates: "Don't use AI" is like all those other arbitrary handicaps that interviewers used to (and sometimes still do) weirdly insist on:
"Write this code, but don't read the API definition (like a normal developer would do in the course of their work)"
"Whiteboard this CRUD app, but don't verify you did it right using online sources (like a normal developer would do in the course of their work)"
"Type this function out in a text document so that you don't have the benefit of Intellisense (like a normal developer would have in the course of their work)"
"Design this algorithm, but don't pull up the research paper that describes it (like a normal developer would do in the course of their work)"
You're testing a developer under constraints that nobody actually has to work under. It's like asking a prospective carpenter to build you a doghouse without using a tape measure.
Take this online test in 30 minutes with an awkwardly or ambiguously worded abstract problem. You don't get to ask anyone for clarification on anything, the way any normal developer would in the course of their work.
I've never been in a situation where I could not ask for clarification on something except in interview situations. I asked an interviewer once "is this how people normally work here? they just get a few sentences and plow ahead, without being able to ask for more details, clarifications, or use cases?". "Well, no, but you have to use your best judgement here".
> Just like with a calculator - it's important for candidates to know _why_ something works and be able to demonstrate that as much as them knowing the solution.
And this is why I never give coding-puzzle interviews. I just have a chat about your past projects (based on resume). We'll go deep into the technical details and it is easy in such a conversation to get a feel for whether you actually contributed significantly to the things the resume says you did.
Only sometimes is your historical deep-dive approach going to give you the right signal.
I’ve been out of work for a couple of years due to complicated immigration reasons, and I was most recently a people manager (although with a few direct technical tasks still). I honestly don’t remember many of the deep technical details of things to which I genuinely contributed significantly as an individual contributor or tech lead, despite those being entirely real and despite me still being a capable hands-on technical person. I’ve had so many jobs recently reject me for reasons like this without giving me a chance to actually demonstrate what I can do.
Memory tests are biased toward people who did the work recently, and biased against people with ADHD (who often have worse long-term memory for such details without being worse hires).
The coding interview shouldn’t be just a blind submit and wait for feedback, nor a live rushed and high-pressure puzzle test (you’re quite right in that regard). Ideally it should be the candidate doing what’s expected to be 1-3 hours of work asynchronously at their convenience within a period of a few days, and then discussing (maybe even presenting/demoing) live in a way that shows deep technical understanding and good communication skills. That avoids conflating memory tests with technical tests. Certain live coding tests can also be okay, but I agree it’s easy to make them unnecessarily uncomfortable with a false signal either way.
There is an interesting dichotomy in your interview process. You say you want someone who can solve problems, but then go on to say (perhaps unintentionally; communication is hard) that you only want someone who has already rote-memorized how to solve the particular problems you throw at them, not someone who can figure things out as the problems arise.
> but then go on to say (perhaps unintentionally; communication is hard) that you only want someone who has already rote-memorized how to solve the particular problems you throw at them
They said the opposite of that. Unless you think it's not possible to figure out problems and you can only do them by rote memorization?
> Unless you think it's not possible to figure out problems and you can only do them by rote memorization?
It is not possible to solve a problem from scratch. You must first invent the universe, as they say. Any solution you come up with for a new problem will build upon solutions others have made for earlier problems.
In the current age, under a real-world scenario, you are going to use AI to help discover those earlier solutions on which to build upon. Before AI you would have consulted a live human instead. But humans, while not what we consider artificial, are what we consider intelligent and therefore presumably fall under the same rule, so that distinction is moot anyway.
Which means that, without access to the necessary tools during the interview, any pre-existing solution you might need to build upon needs to be memorized beforehand. If you fail to remember, or didn't build up memories of the right thing, before going into the interview, then you can't possibly solve the problem, even if you are quite capable of problem solving. Thus, it ends up being a test of memory, not a test of problem solving ability.
And for what? AI fundamentally cannot solve new problems anyway. At best, it can repeat solutions to old problems already solved, but why on earth would you be trying to solve problems already solved in the first place? That is a pointless waste of time, and a severe economic drain for the business. Being able to repeat solutions to problems already solved is not a useful employment skill.
> you only want someone who has already rote-memorized how to solve the particular problems you throw at them, not someone who can figure things out as the problems arise
This is literally what AI is, and why they don't want it used in the interview.
Literally someone (or, at least, some thing) that can figure things out as problems arise? That seems quite generous. Unless you're solving a "problem" that has already been solved a million times before, it won't have a clue. These so-called AIs are predictive text generators, not thinking machines. But there is no need to solve a problem that is already solved in the first place, so...
It is really good at being a "college professor" that you can bounce ideas off of, though. It is not going to give you the solution (it fundamentally can't), but it can serve to help guide you. Stuff like "A similar problem was solved with <insert research paper>, perhaps there is an adaptation there for you to consider?"
We're long past a world where one can solve problems in a vacuum. You haven't been able to do that for thousands, if not millions, of years. All new problems are solved by standing on the shoulders of problems that were solved previously. One needs resources to understand those older problems and their solutions to pave the way to solving the present problems. So... If you can't use the tools we have for that during the interview, all you can lean on is what you were able to memorize beforehand.
But that doesn't end up measuring problem solving ability, just your ability to memorize and your foresight in memorizing the right thing.
>We pretty explicitly ask candidates to not use AI.
This doesn't work, because regardless of what the rules say, if I think all my competitors are using AI (and you won't be able to reliably detect it), I'll feel pressured to use it as well. This is true of any advantage (spending extra time on a 2-hour take-home assignment is the classic version of this).
To be fair, you don't have to fall into the Tragedy of the Commons trap. If everyone else chooses to use LLMs during an interview that's their choice. If you are asked not to and are uncomfortable using it anyway, just don't.
Probably everyone is using AI at this point; only a few bother to make the solution look more human-made.
Personally, there is no way I am writing boilerplate again unless you are paying me hourly for the test.
Well that could explain why I've personally had a hard time finding a job with around 15 years of experience - I don't use LLMs.
With that said, none of the interviews I've had over the last couple months included questions that could reasonably be done with an LLM. The context is usually wrong, technical challenges were done live on a video call and it would be horribly obvious if a candidate was just prompting an LLM for an answer.
A cheap multi-camera system + software that can be quickly installed at the candidate's location to watch them during the interview. The employer could send it out before the interviews; it's cheap enough to be thrown away.
The traditional way: a company that provides interviewing centers across major cities for software interviews, where the locations have cameras to make sure candidates aren't cheating.
The bar has been lowered in many respects. 14 years experience, 5 of them at a FAANG. A successful startup exit for a small company I founded, and an endless array of India-based recruiters that, for example, want me to mention “Ruby 3.2” in my resume when the resume says I have 14 years of Ruby experience, including using it daily in my present job. I could literally say “I invented Ruby and have worked with it daily for 20 years,” and they’d say, “but you don’t have 3.2 listed.”
I play the game then they pass me off to another Indian who is less understandable than the first guy and then they offer $45/hour. I say yeah sure, submit me to the potential client. Then I never hear anything back.
The Indian 3rd party recruiting industry is absolutely horrible. Another anecdote was spending 10 minutes explaining how Java and JavaScript aren’t the same thing. I am convinced there is rampant discrimination happening as well against non-Indians, and especially those who aren’t H1Bs. (H1B is a trap that makes it easy to hire people at lower wages and then also makes it harder for them to switch employers.)
I’m not well articulating the problem, but anyone who has done this dance knows exactly what I’m talking about.
By the way, I used to contract at $95/hour and now I can’t get calls back for $45/hour.
The outsourcing offshoring business needs to be significantly reformed. I had a gig at Best Buy ($70/hour) and I got to spend 3 months training some Accenture H1Bs to replace me. I thought H1B was to fill “critical shortages of highly skilled workers?” Best Buy didn’t have a shortage — they fired my entire team and replaced it with Accenture. Best Buy should be heavily taxed for that and Accenture et al should have their offshore labor tariffed into oblivion. (They typically have onshore H1Bs directing offshore teams.) The Best Buy CEO talks all sorts of DEI crap, while firing people to cut costs. Not very inclusive if you ask me.
Indians, Brazilians, Portuguese, Ukrainians, Russians are possibly the worst clique of discriminatory nationalities. You can be 100% assured that you will be driven out of a company/job if more of them take hold and get into higher-up positions. This is not racism. This is a pure fact based on statistical evidence.
You will be sold a dream of infinite scalability by hiring "talent" in those countries for cheap, and what you will get is usually dysfunctional teams riddled with incompetence and nepotism.
Even if a job/team is meant to be multicultural and diverse, they will find a way to hire more and more of their countrymen until knowing the language is basically a requirement for the job.
I am not even an American and I've seen this happen in Europe as well, so I imagine in the US it must be 1000 times worse.
Odd... you're not the first person I've heard this from; I've been hearing about this particular pattern from multiple colleagues over the last 18 months. They are all out of work, all have had multiple interviews with companies of various sizes, and they can never get past an Indian interviewer, while the teams seem to be growing in Indian folks and non-Indians are let go or passed over for advancement and eventually leave.
I was slightly skeptical when I heard of this the first time. It sounds a bit like some post hoc justification for why they didn't get hired. But nothing about it sounds far-fetched, really. It sounds more like a natural progression and part of human nature. But still stinks for a lot of my friends/colleagues who can't seem to get hired anywhere.
You're describing the effects of normalising remote work. Employees now are able to work from anywhere in the world, so companies get employees from all over the world. Since we're already looking at employing anywhere, why not also contract from anywhere?
Yeah I feel like the recent application experience is almost coercing me into racism - I don’t actually believe in racial superiority, I’m not into any kind of bigotry or mistreating other people… buuuuuut I’ve noticed that I have this visceral reaction to seeing a typical Indian name on an email from a potential employer, or a thick Indian accent on a phone or zoom call. It always just seems like an indication that I don’t actually have a chance and am just wasting my time.
It’s awful catching myself in a “I’m not racist but” situation. It really worries me.
On the bright side, as racist thoughts creep into classes that haven't previously been exposed to the circumstances that lead to them, those classes might become more understanding of and helpful towards people who have contended with those circumstances for ages, instead of simply dismissing them as people born with "incorrect thoughts". Racism is never actually about race.
Please don't start flamewars on HN, and please avoid generic ideological tangents (and ideological battle generally). It's not what this site is for, and destroys what it is for.
This has been my experience in the US as well, but I'm not so sure this paradigm extends outside the US.
I lived in Europe for a few years and didn't feel that same context there - it wasn't almost automatically assumed that white people were racist, and anyone might be seen as racist regardless of their skin color or heritage.
But in the current woke public opinion it's not, and only whites can be racist.
I would love it if it were possible to label racism whenever it appears, but it's not possible without being called a racist yourself if you are white.
Looking at the sibling poster mentioning Germany, a similar example would be trying to make rational arguments about Israeli conduct with regard to settlements in the West Bank. Any German even thinking out loud about those would be labeled a Neo-Nazi.
> But in the current woke public opinion it's not and only whites can be racist.
In my opinion "the current woke public opinion" is almost entirely a bogeyman construction of primarily the US right wing media with trailing support from their UK, AU, and CA siblings.
It's rare (i.e. never) that I read in "left wing" media of any substance articles about the pressing need for kitty litter trays for Furry students.
It's disturbingly commonplace to hear what can only be performative, manufactured outrage about such things from the asshole-tanning, bowtie-wearing media wing.
The conclusion is that the statement " in the current woke public opinion it's not and only whites can be racist" is very much a localised subjective opinion and not any kind of actual global truth.
> localised subjective opinion and not any kind of actual global truth.
Global truth: definitely not. But we're also not talking about that. At least I'm not. I'm talking about "western realities". In the parent sibling's Germany example, the only thing relevant is Germany, not globals. If you're a white German in Germany, then in regular media public opinion a lot of what should be normal pro/con type discourse is easily labeled "Neo-Nazi". So much so that people self-censor. Except for the real Neo Nazis. Americans can be "proud to be American". A German proclaiming to be "proud to be German" is a Neo Nazi.
And yes in "NA" context, I can definitely say that it's not "right wing outrage" construct. It actually feels like this conversation is the perfect example now. I'm absolutely not a Trump supporter for example. Not even a republican supporter. But it's absolutely my belief that H1Bs (which primarily means Indian nationals) is a very detrimental construct for America. I would never in my wildest dreams mention this "IRL" in my regular social circles for fear of being labeled racist. I am actually struggling to write any further here right now without fear of being labeled racist. I absolutely want to avoid "being thrown in the Trump camp" as well. I absolutely abhor the kind of vote buying Musk engaged in for example. A million a day for a signature? WTF!? It's such a brazen thing for a person with money to do, it's against everything I always believed the United States of America, the leader of the free world, would stand for. But I guess I was just naive. The world is way less fair than I hoped it was. I do recognize that. I still hate it.
My life did not change because you called me racist in this particular conversation.
If my regular social circle or work environment thought I was racist however, my life very definitely would change.
It does not matter whether I'm actually racist (by some imaginary objective standard) or what "the world" would think about me. Nor what might qualify or not qualify in some other part of the world and whether or not I was wild. What matters to each person is always their own current local reality. Whether they like it, whether it's fair, whether it makes sense objectively, or not.
India is a big place with the widest cultural and socioeconomic range I’ve ever witnessed. You might just be facing the less pleasant ends of those ranges.
I experience the other end of the spectrum. The skilled Indians that made it to Germany and write to me through my website are usually delightful people.
Consider just how different your experience of most countries would be if you only interacted with the people there who have the most imbalanced incentives. This is sort of what's happening.
Had similar experiences a decade ago and now I hang up if the voice on the phone has an Indian accent and block all domains that send recruiting emails where the name is Indian sounding.
Anecdata: I am Indian and on an H1B. Intel hired Accenture folks, who just threw more people with no experience at the work; it took more time to train them than to do the stuff ourselves.
Another instance: Intel paid tens of millions for something a few employees could have done in a couple of months. It's basically creating two conda environments, one with the Intel-optimized software stack and one with the defaults, and comparing the results for 20 use cases.
Not sure if it's the case with all the contract companies, but this was my experience.
I believe they are doing this because it gives them the ability to only pay people while they are working on that project, and then let them go, because those people are not employed by IBM. They think it is cheaper, or they just abhor the idea that the people they hire are not busy the whole time, even if that would be cheaper.
Weird how, if you don't mind elaborating slightly?
For example, does it mean: the actual skill level (e.g., smartness) people actually look for and hire hasn't changed, but the activities that hiring teams require candidates to have experience with are (seemingly weirdly) not a great thing to need anyway and therefore lots of great candidates end up twiddling their thumbs?
In that way, the "height" of the bar is the same, but it's a "weird" bar, in that one could have to accept it for what it is, or even stoop to it, or perhaps shift over to it, in order to pass it?
Or more that the overall interview experiences are weird caricatures in and of themselves?
Weird is a great word, but it can be a little non-specific, so I'm left curious about the intended usage/meaning.
Many companies are filtering candidates in favor of mercenaries while pretending they are looking for dependable, committed professionals.
If you don’t have specific experience with some CTOs favorite esoteric API or don’t have experience in the same, specific corner of some insurance or usury industry, your ability to actually engineer solutions is considered irrelevant.
It’s as if the industry has forgotten that building software is about the application of algorithms to data structures to accomplish some user need. Instead, company after company wants to hot glue some service via some API using some framework on some cloud platform. And because the MBA decision-maker can write Excel macros with GPT, we don’t need programmers to build systems anymore. Just wire up foo SaaS to bar SaaS and MVPFailFastLeanAgile our way to success!
> It’s as if the industry has forgotten that building software is about the application of algorithms to data structures to accomplish some user need
My 3 months' experience searching - and getting only ~3-4 initial interviews - is that the New-AI-Kids-on-Da-Block think software engineering is just another kind of plumbing for their Artificially Great Intelligence. One CEO even used that exact word.
They are just saying the quiet part out loud now because they think they can finally get away with it.
Well, when the discount plumbing they're having put in starts to leak shit all over the place, there will be a premium again on actually knowing how to do it properly.
> It’s as if the industry has forgotten that building software is about...
This really isn't new. A look at Slashdot will give you similar complaints as far back as at least the 2000s. I'm sure someone older than me will dig up Usenet posts with exactly the same complaints and tell me to get off their lawn ;-)
> * Diversity hiring: Big US/EU companies are trying to hire more females here because cost is low and they can show great diversity numbers. It hurts when you see posh privileged urban women having much more chance of getting into a good company than a man who worked his way up through sacrifice.
Do you have any examples of this happening? Or is this just a bogeyman?
My experience has been different: so many mediocre men in this industry. All of the women I've worked with have been brilliant.
I've had the same experience, and I've reasoned it as follows:
It's very difficult to be a woman in Computer Science. CompSci is uniquely awful for women. Like other STEM, it's overrun by men, so you get all the subtle discrimination of that. But CompSci men also tend to be, for lack of a better word, asocial weirdos. Civil engineers have to work with people, CompSci people built up their skills in front of a screen.
All this means that the large majority of women are filtered out. The ones that remain are the ones most skilled with navigating tough situations, and who have a strong passion for engineering. A passion strong enough to wade through the downsides.
I also think they have to constantly prove themselves, which also builds up their skills.
I always got the feeling that the distributions were different for men and women. Male professionals seem to be normally distributed, with a lot of mediocrity, some excellence, and some incompetence. In women, the mediocrity part of the population seems to be missing, so the distribution looks more bimodal.
If that's actually the case and not just a warped perception on my part, it could easily happen that, depending on your own skill-level and environment, you'll be more likely to work with one of the two groups in the bimodal distribution.
> It hurts when you see posh privileged urban women having much more chance of getting into a good company than a man who worked his way up through sacrifice.
It's not a gender issue. I would be looking to hire someone competent who works hard, not someone who makes "sacrifices" and then expects a job/promotion.
The latter never works. That's not the work culture in most places. I've seen it many times, people who make "sacrifices", allowing themselves to be exploited, expecting some promotion from it, and are then passed over for someone who actually has demonstrated they are good at the job and ready for more responsibility, and not being a doormat. Then they become resentful.
Right, but certain careers have figured out that women are, on average, better at the socialization game and thus, broadly speaking, a better fit for the job. Ain't nobody "diversity" hiring on the oil rigs or other jobs where the physical act is more important than interpersonal interaction.
However, it is difficult to measure those positive traits for what they are, so employers are selecting based on gender hoping for positive correlation. But that's illegal, so "diversity" hiring was created as a scapegoat to help avoid legal fire.
Are you asserting that companies in a free market are intentionally hiring people unqualified for their jobs and paying them wages as if they were qualified?
This has not been my experience. The women I've worked with in software engineering teams have all been, well, engineers. One of them worked on some kind of real-time printer operating system before becoming a Java dev, another one is currently team lead on the cluster team on a distributed software product, another one has the most in-depth knowledge of CSS of anyone in a 50+ people frontend department.
I see a lot of people online complaining about the job market and blaming all kinds of things for their inability to find a job, but I think what has changed is that there are no more defensive hires, where companies like Google hire as many people as possible just to deny their competitors those people. Lots of relatively unqualified people found very high-paying jobs that way and are now surprised that they can't land those jobs anymore.
If you're competent and personable and know your own strengths, you can still find a job relatively easily.
Coincidentally, I was fired during the tech downturn two years ago, and within a few weeks, had three job offers out of three applications. I have a good CV, I applied at local companies that matched my specific expertise, I asked how the interview would go and what was expected, and prepared specifically for each company.
Complaining about women because you can't find a job isn't just misguided, it's harmful to yourself, because it prevents you from understanding what the actual issue is, and working on it.
When I studied comp sci around 2000, women were actively discouraged from continuing their classes by professors. During an oral exam, a prof told a female friend of mine that women had no place in comp sci. As a result, not many women graduated, so two decades ago, there were just fewer women in the field in general.
I'm not sure what exactly qualifies as a "legacy game engine", but given the small number of women who worked in comp sci when games were made ten or twenty years ago, and particularly in male-dominated videogame studios, I would naturally not expect to see a lot of cis women with experience working on these engines (or on related tech stacks) today.
This seems like a bit of a special case, rather than a general representation of women in software engineering.
> i did not know that one gender could be better at this work. that seems like huge news if true.
There is research that has shown that men are, on average, better at single-focus tasks. And, indeed, it was huge news at the time the research was published – at least as huge as being reported in major news publications is.
It wouldn't be huge news now. We quickly grow bored and tired of widely reported things from the past. Humans, of all genders it seems, tend to seek novelty.
I assumed you were already dead by the time I got to reading your response. One does not normally tell you that they are dying if they expect to still be around for a long time. Shame on me for assuming. What is the expected lifespan for someone diagnosed with your condition?
Name a prominent open source project created by and led by a woman.
Who created and contributes to the Linux kernel, Python, C, C++, Go, Ruby, Ruby on Rails, PHP, WordPress, Ghost, SumatraPDF, Zig, Odin, Nim, Node.js, Deno, Bun?
I could just keep listing major open source projects because I literally cannot think of a single one created and led by a woman.
And if you find one, it still doesn't negate the 100 to 1 ratio.
Open source, unlike jobs, is a pure meritocracy. A woman can create a GitHub account and start coding just as easily as a man. There are no gatekeepers, and open source contributors/maintainers are abused by random people as a matter of course.
To me it's reality. To you, somehow, saying that out loud is bias.
And to be clear: I don't think there's anything preventing women from learning to code and contributing at a high level and I've known a few that do. But for some reason they overwhelmingly don't. The stats are brutal.
Go ask Claude to give you feedback on this comment, because it shows a misunderstanding of how a male-dominated society works if this is what you think about women in tech.
The words you're looking for aren't "conservative" or "reactionary"; they're "bigoted" and "asshole". The best software engineer I've ever worked with is a woman, and the rest had exactly the same range of ability as the men. Overall, though, a much lower level of entitled ignorance than "guys like us".
It's so peculiar that the man who thinks few women are willing to "get in the trenches" never sees women really getting into the trenches. It must definitely be because that's just how all women are. And definitely not because most women choose not to work with you or a company that perpetuates that idea.
> It's so peculiar that the man who thinks few women are willing to "get in the trenches" never sees women really getting into the trenches.
I have seen women in the trenches but I don't see how that contradicts my claim that they are relatively few and there's probably a reason despite all the efforts to bring more women into tech.
For what it's worth, it is definitely thawing (at least if what we're seeing in our client base is anything to go by), it's just going to take some time for that to work its way through everyone looking for work.
Is that the reason you were rejected, or was that just an attribute of what you did && you were rejected? What stage of the interview, what level?
I truly don't doubt that's what happened, I've had it happen, but when it did their feedback was (insultingly) not up to their professional standard. I say insultingly because it was an amateurish evaluation of what I did, specifically because it was like the 5th god damn interview by that point and really they should have been looking for more than trivialities like the coding style I chose to use in HackerRank with a visible ticking clock.
The reason I ask is just for context; if people are interviewing for Senior roles and being rejected for code formatting problems, something is even more deeply wrong than it seems, since they probably shouldn't be concerning themselves with that _at_all_. If it's possible though that you were rejected for some other reason, but that your code style was the strongest negative apparent signal, that's also worth exploring. In my case, that's a possibility, that they were looking for just one more reason to narrow the funnel, but it can be hard to accept.
Anecdotal but I felt it was worst during the mass layoffs in late 2022/early 2023. It has been very slowly recovering since then.
The late 2020 period was the absolute best time though. I got multiple offers almost instantly, and after I signed on it felt like the org was bloated with engineers just fiddling away on meaningless projects.
This is not surprising. Companies received grants during the covid lockdowns, so they started listing a lot of fake jobs. When the money dried up, the layoffs followed, which was an expected outcome. The media doesn't let people know about the grants and presents everything in such a light that it seems like the market is getting absolutely destroyed, while in reality it has only slightly declined.
The bigger problem is the HR departments at big companies that auto-reject every application and take no responsibility for doing so.
Though FAANG offers are usually more attractive than startups' (considering pay level and stability), some startups can be more selective since they can't afford to hire the wrong candidate.
It’s more about calibration. If you interview 100 people at a FAANG, you get an idea of where the bar is. This idea gets calibrated with a panel of interviewers, along with shadow interviewers.
At a startup… who knows? I had an interviewer burn up 25 minutes on a 45 minute coding question trying to “hint me” towards the solution I mentioned in the first 2 minutes.
> I had an interview recently where I solved the algorithm question very quickly, but didn't refactor/clean up my code perfectly and was rejected.
It sounds like they intentionally present a simple problem because they are not filtering by who can solve it, but by who writes clean maintainable code.
I wish I had thought of that when I was interviewing candidates, because it is a good criterion for a well-established organization.
Or 1930s Great Depression no work for almost five years tough?
I expect most people here won't have a good sense for 2009. While it wasn't great for many industries, for tech it was the "app" boom. You could hardly go outside without someone throwing money at you.
If my manager said to me tomorrow: "I have to either get rid of one of your coworkers or your use of AI tools, which is it?"
I would, without any hesitation, ask that he fire one of my coworkers. Gemini / Claude is way more useful to me than any particular coworker.
And now I'm preparing for my post-software career because that coworker is going to be me in a few years.
Obviously I hope that I'm wrong, but I don't think I am.