Hacker News | past | comments | ask | show | jobs | submit | benkuhn's comments

The time lost estimates here are comically implausible--if Apple bugs were wasting 32m person-years per year, with around 1.5b Apple product users total, this would imply that the average Apple product user loses 32m/1.5b ~= 2% of their life, or about 11 16-hour days per year, to Apple bugs. If that were happening to you, you'd, uh, notice :)
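As a sanity check, the parent's arithmetic in a few lines of Python (the 32m person-years and 1.5b users are the figures being disputed, not verified numbers):

```python
person_years_lost = 32e6   # claimed annual time lost to Apple bugs
users = 1.5e9              # rough total of Apple product users

share_of_life = person_years_lost / users   # fraction of each user's time
hours_per_year = share_of_life * 365 * 24
waking_days = hours_per_year / 16           # expressed as 16-hour days

print(f"{share_of_life:.1%} of each user's time")        # ~2.1%
print(f"~{waking_days:.1f} sixteen-hour days per year")  # ~11.7
```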


“The bugs are real. The math is not. All estimates are made up. Your frustration, however, is valid.”

I’m pretty sure it’s way less than 2%, but I definitely notice running into the same bugs many times.


I'm not an evolutionary biologist, but it seems to me that the claimed magnitude of the change is wildly implausibly fast from an evolutionary perspective. I'm confused that neither the article, nor the paper it cites, addresses this.

To go from 10% to 30% in ~5 generations, the median-artery-having population would have had to grow by (30/10)^(1/5) ≈ 1.25, i.e. about 25% more than the non-median-artery population, each generation. It just seems totally implausible that median artery carriers could have that many more offspring.
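That per-generation figure can be reproduced directly; treating the carrier fraction as growing geometrically is itself a simplification, but it matches the estimate above:

```python
start_freq, end_freq, generations = 0.10, 0.30, 5

# relative growth factor per generation needed to triple the carriers' share
per_gen = (end_freq / start_freq) ** (1 / generations)
print(f"required advantage: {per_gen - 1:.1%} per generation")  # ~24.6%
```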

This makes me pretty suspicious that the paper may be wrong.


You could also suspect that your understanding of evolution is not 100% correct.

Genes do not necessarily propagate in a population because they enhance reproduction. They can have no or very little influence but still propagate by chance. (Similarly, beneficial mutations can disappear from the gene pool by chance.) Look up "genetic drift".

By the time the environment does change (which can happen as quickly or slowly as one wants), the lucky genes are already spread through the population and may turn out useful or detrimental to individuals after that environmental change.

Also, genetics is not the only driver for evolution.


It's also really easy to have high quality audio! The author recommends a "podcasting" microphone, but a $35 standalone headset mic[1] is almost as good and much easier to use. If you want to hear a comparison, I got kind of obsessed with this problem at one point and took some comparison recordings here[2].

(You need a standalone mic since most headsets, even really nice ones, have really bad mics because most headset buyers don't care about or even know how good their mic sounds. The one I linked is wired because wireless is evil[3] and in particular, Bluetooth will silently degrade your audio quality. If you want a pair of wired headphones, I like these[4] which are "open back" and therefore sound more natural + cool your ears better, although the open back also means they "leak" sound and are only suitable for working without people next to you. But you shouldn't be having calls with people next to you anyway!)

[1]: https://www.amazon.com/V-MODA-BoomPro-Microphone-Gaming-Comm...

[2]: https://www.benkuhn.net/vc/#get-a-better-microphone

[3]: https://www.benkuhn.net/wireless/

[4]: https://www.amazon.com/Philips-SHP9500S-Precision-Over-ear-H...


The ModMic is also excellent, and you can attach it to existing headphones [1]. I use this at home with my prized Sennheisers.

It baffles me that some people don't seem to care about their audio quality on calls. The most obnoxious are those who use speakers, so you get an echo of everything you say, and despite being told, they still never bother to get a decent mic.

Another common offender is the Bose QC35s: they have a terrible mic. I wish people would stop using them.

All the Apple things have great mics. I always keep an old pair of 3.5mm earpods in my bag as a good, portable laptop mic.

[1]: https://www.amazon.co.uk/ModMic-GDL-1420-UNI-Mute-Switch/dp/...


> It baffles me that some people don't seem to care about their audio quality on calls. The most obnoxious are those who use speakers, so you get an echo of everything you say, and despite being told, they still never bother to get a decent mic.

I see comments along these lines here all the time, and I don't get it. I'm on Zoom a majority of my day, and have maybe two colleagues who use a headset rather than just the laptop mic/speakers. I almost never have trouble hearing or understanding people, or with background garbage. In fact, those with headsets will sometimes be worse because they're making a lot of mouth sounds close to the mic.

Maybe it's just that Zoom is good at this? TBH, when we used to use Webex on dedicated phones I felt like I couldn't ever hear or understand anything. Maybe that's where this microphone feedback comes from?


If they're using external speakers, the only reason you're not hearing echo is because it's being software-cancelled. Different systems are better or worse at this software-cancelling; phones are good, Apple computers are good, otherwise YMMV.


I assume it also depends on whether they are using the laptop speakers or some standalone ones. I'm guessing the cancellation tech is tuned for the onboard speakers.


This depends. On my (dell) laptop, the mic is basically right between the two speakers, below the lip of the laptop. It’s possibly the worst placement you could come up with for a microphone, because it barely picks up voice, and picks up all the typing, desk noises and speaker echo in the world. But I suppose that’s not surprising from the company that thought that a webcam beneath the laptop display would be a good idea...


I really thought they had some clever software or lensing to make the picture appear as if you were looking into it despite the placement, but nope... just a nose cam.


Yeah that would make sense


Teams is good, Slack is good, Zoom is good. Which ones are bad?


Google Hangouts is the worst in my experience. External people who aren't used to Google Meet are always surprised when we bring them in. We buy everyone nice microphones, and our meeting protocol is: you unmute, you talk, then you re-mute when done. We have a bunch of parents, so this has been a good practice no matter what.


Those are all good until they’re not. I’ve had echo and other room audio problems crop up intermittently in all three of those platforms during calls.


Interesting. I normally use the external speakers on my iMac. I have verified with a number of different people that they're not getting echo.

Yet one sees other people utterly convinced that using external speakers is bad, bad, bad.

That may explain it.


The most common problem I see is not echo, but software audio ducking that happens as a result of using onboard speakers and mic.

Some people have a hard time realizing that they're interrupting someone else because that other person's audio is getting ducked while the laptop prioritizes mic input over speaker output - with the intent to reduce echo.


Almost. The laptop of the person being interrupted is essentially muting its mic temporarily to avoid sending an echo of the interrupter. You could say it's prioritizing its speaker over its mic.

Basically, of all the ostensibly unmuted mics, only the one with the loudest human is truly unmuted.

It's closer to half-duplex than full-duplex. Full-duplex with no artifacts requires no echo cancellation which requires headphones.
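A toy sketch of that "loudest human wins" gating (the threshold, names, and levels here are invented for illustration; real systems use adaptive filters rather than a hard gate):

```python
def open_mics(levels, floor=0.05):
    """Half-duplex-style gate: of all nominally unmuted mics, pass only
    the loudest one that clears the noise floor; soft-mute the rest."""
    loudest = max(levels, key=levels.get)
    return [loudest] if levels[loudest] >= floor else []

# Alice is talking; Bob's speakers leak an echo of her voice back in.
levels = {"alice": 0.60, "bob_echo": 0.25, "carol": 0.01}
print(open_mics(levels))  # ['alice'] -- everyone else is effectively muted
```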


What does the term "duck" mean in this context? I'm not sure what you mean.


“Ducking” refers to lowering volume so that other audio can play on top of it. When an announcer speaks over a song in the radio, or when Siri lowers your music so she can talk over it - that kind of thing.


Try talking while they are also talking. You'll see the problem.

It's easy to have conversations with friends on discord where 3-4 people are talking at once all with headphones. However this has never worked on a zoom or hangout with less techy family members or work colleagues using ext. speakers.


That may be part of it. On calls that I'm on people generally don't talk over each other.


After having used both Webex and Zoom extensively for the past year, it seems that Zoom had much more aggressive echo cancellation up until recently. It feels like Webex has tweaked theirs recently so it's not quite as bad for those people who insist on just talking at their laptops with no external mic or headphones. Still, any of them with laptop speakers/mic sound worse than any other of them with a halfway passable headset.

I'd say if you're dealing with difficult people who really don't want to do more than point at an icon on a screen and go, the most bang for the (effort) buck is to ask if they have a set of headphones. Most people still have some earbuds around from when their phones still had headphone jacks. Just getting rid of the speakers makes a huge difference when folks refuse to mute while not speaking.

I was lucky enough to have an old Shure vocal mic and a cheapo XLR-USB interface sitting in a box of electronic stuff, so I typically put on my headphones and speak into the mic (on a desk stand). For camera...I tried the phone thing and while it does look a lot nicer, the phone gets warm and has to run for an hour or two at a time. Eventually just got a Logitech C920 once they dropped back to non-scalper prices.

A couple of clamp lights with parchment paper clipped over the end made more of a difference than buying a mirrorless camera would've (and they were way cheaper). My DSLR doesn't (and wasn't meant to) run for hours as a video cam so I didn't bother with that.

Also, using OBS and its virtual camera plugin means I can tweak and color correct the cam feed without having to dig into the OS webcam configuration. Plus, real chromakey beats crappy Zoom/Webex background removal when I do just want to goof around with cool backgrounds and overlays.


> don't care

Until you spend 1/2 hour talking to a certain family member, the one who calls from Burger King and sits right next to the soft drink machine so you can hear the ice being dispensed, you haven't fully lived.


I live next to a U.S. Marine Air Station. Until you get to share the full force of F/A-18s buzzing your place at full throttle, you haven't lived. Seriously - very loud.....


When Moffett Field was an operational Naval Air Station we would get P-3s, both going out to / returning from patrols and circling around for touch-and-go landings for training, plus C-5s and C-17s and some fighters. The fighters were of course the noisiest, so you've got some serious loudness going on.


It's probably just related to crappy laptop hardware. Macbook speakers/mics are great and I never hear any feedback from them. When it happens, and you can hear your voice echoing on everything you say, it gets quite annoying.


> In fact, those with headsets will sometimes be worse because they're making a lot of mouth sounds close to the mic.

Yes this also freaks me out. Also when people use headsets in a room with lots of background noise, it sounds as if they use an open mic.

I'm also quite convinced that the Mac with just the internal mic/speaker is quite good for most cases. But I definitely want to look further into the issue. Also I certainly don't want to use a dedicated external mic, that seems total overkill to me.


Depends strongly on where your colleagues are. If they’re in a dedicated office at home the chances for background chatter are low.


I care, but not enough to ask people to QA my setup.

I don't know of a way to check how I sound without bothering anyone.


I mostly use Zoom and Webex, but both have an option (usually accessed via a little arrow next to the mute button) to open settings. Both give you the option to choose which mic/speakers you want to use and both allow you to do a test record for a few seconds and then have it played back to you.

I know in Webex you get this option before you are connected to the actual meeting, but Zoom may have it somewhere else I haven't bothered to look for. I make a habit of testing my mic every time I connect to a meeting, just in case I mucked something up or there's some other issue I wouldn't have known about. It's a minute of checking to save several minutes of embarrassment and delay later on.


Using the Zoom "record in the cloud" feature should roughly correspond to how people hear you BUT it does not let you know if eg your setup echoes someone else's voice. Bother someone, find a friend, ask your manager, geek out about audio, something.


There's a way to launch a "test meeting" where you can hear yourself as others would: https://zoom.us/test


Just listen to your own audio? In Windows there is a checkbox for this, and most call apps have a settings page where you can listen to your own mic.


Not really. Zoom applies lots of noise canceling and other filters, so your raw audio doesn't correspond to what you actually sound like to other people (unless you use "original audio").


> they still never bother to get a decent mic.

one more damn thing to get

one more damn thing to research

one more damn thing to fit into your budget

one more damn thing to acquire that you maybe hope to never ever use again after the Year Of Videoconferencing is over and will have cluttering up your life forever after unless you find someone to pass it off to

(if you are really passionate about it: cut the gordian knot of all those problems by convincing whoever holds the purse strings that it would make all these interminable meetings much better if everyone had a nice mic, and get the company to buy a bunch and send them out.)


It baffles me that some people don't seem to care about their audio quality on calls.

Here's the thing about perception: A lot of it happens without your conscious knowledge.

One of the things about using Audacity as one's cheap studio software is that you have to adjust for recording latency when multitracking. With that delay, it's really easy to see how a part of perception is unconscious.

Almost no one is going to notice 5ms or below. At 20ms, many musicians are going to have this definite sense that something is off, but they can still hang. In between, it's a spectrum.

In order to introspect enough to notice things that are below conscious perception, some people require some training. This is also why audio snake oil works.

I use the wireless ModMic myself.


> Almost no one is going to notice 5ms or below. At 20ms, many musicians are going to have this definite sense that something is off, but they can still hang. In between, it's a spectrum.

Reminded me of this article, easily one of the top 20 I've ever read (Brian Eno, Francis Crick, Italo Calvino, roller coasters, trepanation, time, death, drumming)

https://www.newyorker.com/magazine/2011/04/25/the-possibilia...

> “I was working with Larry Mullen, Jr., on one of the U2 albums,” Eno told me. “ ‘All That You Don’t Leave Behind,’ or whatever it’s called.” Mullen was playing drums over a recording of the band and a click track—a computer-generated beat that was meant to keep all the overdubbed parts in synch. In this case, however, Mullen thought that the click track was slightly off: it was a fraction of a beat behind the rest of the band. “I said, ‘No, that can’t be so, Larry,’ ” Eno recalled. “ ‘We’ve all worked to that track, so it must be right.’ But he said, ‘Sorry, I just can’t play to it.’ ”

> Eno eventually adjusted the click to Mullen’s satisfaction, but he was just humoring him. It was only later, after the drummer had left, that Eno checked the original track again and realized that Mullen was right: the click was off by six milliseconds. “The thing is,” Eno told me, “when we were adjusting it I once had it two milliseconds to the wrong side of the beat, and he said, ‘No, you’ve got to come back a bit.’ Which I think is absolutely staggering.”


I also go to a game developer meetup, and one developer there was actually delaying all of the players so that 40ms was their typical latency, no matter what. He had done some research with his multiplayer game and concluded that most people didn't notice under 40ms round trip.

Some of the hardcore FPS players in the group could definitely tell!


At the speed of sound, 10ms is about 3.4m, thus e.g. in an orchestra, 20ms latency is normal.
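That figure falls straight out of the speed of sound (~343 m/s in room-temperature air), so acoustic delay is a one-liner:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def acoustic_delay_ms(distance_m):
    return distance_m / SPEED_OF_SOUND * 1000

# neighbor on stage, across a large orchestra, back row of a big ensemble
for d in (3.4, 7.0, 10.0):
    print(f"{d:4.1f} m -> {acoustic_delay_ms(d):4.1f} ms")
```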


Yup. 30 feet or 10 meters is about the limit for comfortable improvisation. Really large orchestras can require musicians to compensate. I had to do this once when my school's band joined up with a National Guard band to form a huge orchestra for an 1812 Overture. (With actual cannon!)


It baffles me as well. Especially because I do get feedback like wow your voice “carries”, or it is clear, or that it is “calm”. The best comment I received was that it sounded like I was there in the room and that it captured my voice well. Related to the OP my voice also has been called convincing.

This is with a Beyerdynamic microphone extension for a studio headphone. And I have the gain fixed.

Everyone else in our comp keys team sessions has keyboard sound, plops, distortions. But in general it's pretty well understandable, at the cost of having to spend effort to understand. So maybe software is doing a hell of a job here.


The most difficult part is testing how you actually sound for other people. The software can do whatever to the signal coming out of your machine.


You are wondering why people who prioritize something else "don't care about audio quality"? Remember open offices? They're the likely culprit for people going with noise-canceling headphones. Yeah, they still have their old gear and are accustomed to it and the form factor.

Philosophically, there's also: why would you go with something big and cumbersome for a feature you seldom use? You don't carry a flatbed scanner in your laptop bag; you take a photo if you really need a digital copy of a printed page. Plus, not everyone is equally enthused or knows how to filter through the crap without a large in-person showroom, which would be either filthy or a pain in the ass to disinfect even before a pandemic.

Not helping matters are audiophiles being infamously placebo connoisseurs and walking proof that it is easier to fool someone than convince them they were fooled. That market is flooded with bullshit and specious claims so the default assumption for people claiming you need new more expensive audio equipment has been "ignore them, they are gullible idiots who think you need gold cables for digital connections to reduce low level noise for digital signals".


> You are wondering why people who prioritize something else "don't care about audio quality"? Remember open offices? The likely culprit for them going with noise canceling headphones?

Exactly that. I've been working for 5 years in more or less noisy open offices. Some of them so noisy that there were regular arguments between the self-proclaimed quiet ones and the noisy phone callers. I followed this with amusement.

So yes, it is quite an exaggeration to now ask for hi-fi audio quality during meetings. Apart from that, I think a little noise makes the lockdown home office a bit less boring; the majority of people worked on-site before the pandemic.


> It baffles me that some people don't seem to care about their audio quality on calls.

1. It is a bunch of extra work and expense for something I probably do not really want to be on. Easy audio communication is bound to induce more audio communication.

2. I have to maintain a bunch of infrastructure for it, manage configuration, and deal with all the wires. It is far from a free and easy improvement.

3. I rarely speak in meetings anyway.


> Easy audio communication is bound to induce more audio communication.

Alternatively: if you are going to be hassled with an online meeting, get it over with quickly and with the least stress. It is very slow and stressful to fumble around with “Can you repeat that?” or worse, people not mentioning that they didn’t actually understand you and then dragging out the meeting with their misunderstanding.

“If you have to eat a shit sandwich, take big bites.”


There are very good USB mics like the Blue Yeti for example. Plug and play with just one cable. You don't have to have a studio recording setup to get your voice to come through nicely.


The Blue Yeti is an okay mic, but it's a little pricey for what you get and also buys you into some other stuff you may not want to spend money on, like a heavier-weight arm to get it close to your mouth. It's also a little sensitive for spoken word, and while it can sound great in a treated room, it's not great for conferences or untrained users because it tends to pick up a lot of ambient noise through poor positioning or habits (drumming on a desk, that sort of thing).

Most folks I know recommend the Samson Q2U or the Audio Technica ATR2100 instead as easy mics to deal with for untrained users; shameless plug, but I wrote an article for Mux about this not long ago which explains in some depth why one mic may be preferable to another for untrained users: https://mux.com/blog/zoom-like-you-mean-it-1/


I wouldn't get a Blue Yeti for voice calls. Besides being pricey, it's a condenser mic and a lot more sensitive and prone to picking up other sounds you probably don't want.

Something like the Audio-Technica AT2005 also supports USB plug and play, is half to two-thirds the price, and is a dynamic mic so will reject a lot more of the undesirable sound before it even gets into the computer.

It's easier to not capture undesirable sounds than it is to try and clean it all up after.


And come appraisal time you get marked down; your peers will possibly have a negative opinion of you.


> The ModMic is also excellent

I have a ModMic 4 and I am disappointed. I used it for voice calls with my Sennheiser Momentum headphones.

- Accidentally pulling on the wire will cause it to turn on the magnetic handle and create unpleasant noise for others.

- It picks up signal from the phone trying to connect and transmits it to the listeners as a buzzing sound, so I had to put my phone far away to avoid that.

- The mute switch does not really mute; it's more like turning the volume to 10%. Learned that the hard/awkward way.

- Sound quality is mediocre; to me it always sounded like any generic mid-range headphones+mic combo.

If I could have tested the ModMic before buying it, I would have passed. I'd rather put the money towards a standalone mic (e.g. a Yeti) + boom arm. It's expensive, but the quality is way better. I now use a Røde PodMic with a Scarlett Solo. It's a whole other price tier, but I do not regret spending that money, which I cannot say for the ModMic.


> It baffles me that some people don't seem to care about their audio quality on calls.

They might care but have no idea it is bad. You can’t hear yourself on a call.


I absolutely love my Bose QC35s. With the modmic that I attached to them. When using the mic built into the Bose QC35s it switches to mono audio, and the mic itself is indeed also terrible. Very unfortunate.


Which mod mic do you use? The QC35s have the extra small plug so I thought most mod mics would not fit.


The ModMic comes with a little magnetic base that you stick to the outside of your headphones; that's what the ModMic attaches to. If you have the wireless one, that's that. If you have the wired one, the 3.5mm jack goes into your PC, not into the QC35s. So it doesn't matter what kind of plug the QC35s have.

I actually have both a wired (very old, wire kind of broken because I treated it poorly) and a wireless modmic, and both work fine with my QC35s.


Has the modmic gotten better? I've had one for years and it has always sounded like garbage.


There are a bunch of different versions with different capsules. For example, the ModMic Uni doesn't sound very good, but since it's unidirectional (it's a 6mm cardioid electret, I think) it is rather more resistant to ambient noise. The Omni has your usual run-of-the-mill 6mm capsule; these are all very similar in terms of sound and noise performance. The Uni is kinda good enough for pure communication, but you really wouldn't want to use it for content production.

Also, being electret capsules directly wired up to your soundcard, the soundcard has quite an influence on the quality of the audio (mostly in terms of noise and hiss). Meanwhile the digital versions don't suffer from bad microphone inputs.


It depends a lot on your sound card I guess. Pro streamers use them on twitch as portable options (like Seagull) and they sound great to me.

The only real downside to it is the cable is sort of flimsy and the 3.5mm termination is not great quality. That's how my last ModMic perished, although it lasted a few years.


It also depends on positioning and configuration; having it directly in front of your mouth and/or having the gain too high are common problems I've run into.

As an aside, it's been interesting as someone who knows things about audio to realize how much I've unconsciously internalized that most people apparently don't know. Like more gain != more better or what a plosive is.


I got their wireless one recently and everyone I regularly use it to talk to immediately noticed the quality and commented on it. Can't speak to the wired ones.


In my experience AirPods have excellent mics for what they are. They're definitely a million times better than the built in mic on the various (high-end) phones and laptops I've used in recent years. I wonder how they compare to a standalone mic or a decent headset mic (or that ModMic you mentioned.)


AirPods have a worse mic than pretty much anything you can get. Macbook Pro's built-in microphone or Apple's $20 wired earbuds both have much better mics than AirPods.

I would suggest recording yourself using different microphones and comparing them to see how bad the AirPods mic is.


Just did this. My AirPods Pro sounded a lot worse (very soft, muffled and way less resonant) than the pair of analog 3.5mm wired earbuds that came with an older iPhone.

I suspect it's because the wired earbuds had a mic near my throat whereas the AirPods' mics were up near my ears. The difference is very noticeable.

Looks like I'll be keeping my wired earbuds around for future conference calls.


AirPods can't compete with a decently priced boom microphone that actually comes close to your mouth. The distance from your mouth to your ear (where the mic resides) is quite long, especially considering how little space they have to fit a capsule into.

So either you get a lot of ambient noise, or the signal is quiet, or Apple does some algorithmic trickery that approximates some kind of echo cancellation on the audio signal. But compared to a simple dynamic microphone that just has a more favorable position and form factor, it'll always lose.


Hmm. I just measured the distance between my Airpods and the corner of my mouth at 3" or around 7.5cm. Very approximate measurement, but it seems to be not far off of recommendations for where to place headset boom mics (google says 1-3 inches from the mouth.)

Also, it's worth noting what the goal is here. The aim is not to capture the most accurate sound period. I've had calls with people who clearly have very expensive setups, but I end up hearing pen clicks, keyboard sounds, breathing, swallowing etc. The Airpods seem to do a great job of making my voice sound good in general. I've gotten compliments on my audio (so it can't be that bad) and the Airpods don't seem to pick up my breathing, typing, etc so I'm happy.


ModMic(tm) is quite expensive.


I have found that many of the people who don't shower in hot weather are the same people who don't care about their audio quality; I think it requires a certain amount of empathy to realize how jarring and annoying bad audio is for the listener.

It's also similar to the anti-mask problem, frankly. Even if you don't care, you should realize that others do and not abuse them for your own convenience.


The booming, echoey audio you get in most Zoom calls from people sitting 4 feet from their microphone is a little aggravating. If you'd like to help your colleagues hear you better and want something subtler than a large microphone on a big boom arm, then go for a lavalier microphone. See https://www.sweetwater.com/store/detail/ME4--sennheiser-me-4... for one example, but even a $10 microphone from Microcenter, Amazon or Ali{baba,express} will do. What you don't want is a microphone hanging off earphone adapters, because you end up having to practically eat it to be heard. A lav mic in Zoom with both auto level adjustment and background noise suppression enabled gives a pretty pleasant experience.

If you don't have a dedicated microphone port then you may have to purchase an adapter because some input ports are wired tip ring ring sleeve (TRRS) and a microphone will just be tip ring sleeve (TRS).


I had this exact problem with a client I was working with a couple years ago. For meetings, they would all gather in one cramped room with nothing on the walls and plop a conference mic in the middle, and the audio was so bad that most of the time they were incomprehensible to me. I even told them this, but I pretty much got ignored. Glad I stopped working with them.

It amazes me how, even now with so many people working remotely, few of us take audio with even a modicum of seriousness.


What would you recommend for someone who specifically would want a large mic on an arm? Would that pick up keyboard noise?


That depends on the pickup pattern of the microphone, the gain on the mic and the distance between the mic and the keyboard.

The pickup pattern dictates which direction(s) the microphone is sensitive to: see https://ehomerecordingstudio.com/microphone-polar-patterns/ as one example. You'd want a cardioid mic turned so that the least sensitive part of the mic faces your keyboard.

Secondly, you'll need to get the microphone close to you so you don't need a ton of gain to be heard well. You can put the mic 10 feet from you and with enough amplification people will hear you, but they'll also hear every chair squeak, you shuffling your feet, and the AC coming on. Get it close and you don't have to increase gain nearly as much.

Finally, the greater the distance between the mic and the keyboard, the less likely your tapping will be heard. But again, if gain has to be used to pick you up, you're more likely to hear the keyboard even with a cardioid mic, because sound still reflects and echoes. Consider something like the Blue Snowball as an intro microphone, or call up a supplier and have a conversation with a real audio tech, which I am not.
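The "get it close" advice is just the inverse-square law. A rough sketch, assuming free-field point sources and (hypothetically) equally loud sources at 1 m:

```python
import math

def rel_db(distance_m, ref_m=1.0):
    # free-field point source: level falls ~6 dB per doubling of distance
    return -20 * math.log10(distance_m / ref_m)

def voice_vs_keyboard(mic_to_mouth_m, mic_to_keys_m):
    # dB by which the voice beats the keyboard at the mic
    return rel_db(mic_to_mouth_m) - rel_db(mic_to_keys_m)

close = voice_vs_keyboard(0.10, 0.50)  # boom mic: 10 cm to mouth, 50 cm to keys
far = voice_vs_keyboard(0.80, 0.30)    # desk mic: 80 cm to mouth, 30 cm to keys
print(f"boom mic: {close:+.1f} dB, desk mic: {far:+.1f} dB")
# boom mic: +14.0 dB, desk mic: -8.5 dB -- the close mic wins before any gain
```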

The issue is once you get off into mics then you have to ask yourself which input type(s) do you want, what sort of pre-processing you may want, etc.

Good luck.


I think I may have to send this link to our Thursday night GM :-) his cheap headset mic keeps popping and has terrible quality.

I use a cheapo £40 Plantronics for work, but for my streaming I use a Focusrite Clarett and a separate cheap dynamic mic (plus an exciter).

I do need to upgrade that mic to an SM58B or an AT3035.

I have thought about buying a Focusrite Scarlett and using a separate dynamic mic for work as well.


>AT 3035

In my previous life I was a recording engineer, and this microphone was what I used in just about every session. It is one of the most versatile and best bang-for-the-buck condensers on the market, and has been for a lot of years. Very highly recommend to anyone wanting a microphone that can do just about anything.


I have a similar setup, but use a Sennheiser e935. Sounds incredible. After 25 years in both live audio and recording, I would highly recommend it over the Beta 58. I might even use the e835 before the beta; certainly before the standard 58.

Also, regarding the AT3035: I recently purchased an AT2020 on a lark since the price was insane (like $90 US), and it sounds great! I used it on a remote recording session as the second mic on a guitar cab and it was the perfect complement to the other mic (an SM57).


I’m using the ATR2100x-USB. It’s great. I can use it with USB for zoom meetings or XLR for recording. My RE320 sounds better, but in a listening test with friends, not by much. Other factors come into play.


Ty for that.

I just used some 15-year-old entry-level Shures I had lying around; massive self-noise.

I think I was leaning toward the 3035, as it's a slightly hotter mic.


> his cheap headset mic keeps popping and has terrible quality.

That might have nothing to do with the microphone. For a headset mic it is important that it's placed completely outside the airstream of mouth and nose, otherwise all mics will sound atrocious and full of wind and popping noises. Look at how headset mics are rigged by pros on talent, they're quite a bit back from the mouth.


Physical mute button with red LED mute status is a killer feature.

Got a wired Plantronics headset with USB-C that I'm happy with. Not sure if the above products have this feature, but I recommend checking for it.


> Physical mute button with red LED mute status is a killer feature.

I had a headset with that feature, and sure enough, it failed me on a sales call. I groaned at something our salesperson said, and despite the button having been pressed and the light being on, everyone heard me.


As someone who recently forgot they were still sharing their screen while simultaneously starting to chat with a colleague about how incompetent the person currently talking is... I feel your particular kind of pain.


ouch! how did you handle it? apologize and move on as if nothing happened??


Physical mute switches can be worse for other listeners as it creates an audible pop every time you mute and unmute on a 3.5mm connection. Digital (USB) mute switches are better.


Why is this? It certainly doesn’t need to be so I suppose?


Analog microphone audio is one wire (and ground) carrying the AC audio signal, superimposed on a DC signal powering the microphone capsule. The simplest way of making a kill switch is to either 1) short the signal to ground or 2) cut the signal between the mic capsule and the sound card input. Done with just a switch, both of these will affect the DC component as well as the AC component, and the DC offset change is what causes the pop.

And yes, there are many ways to avoid this problem. I think adding a resistor and capacitor to form a high-pass filter for the shorting option would work fine. If there is already a PCB for the switch, adding these two components would cost practically nothing.


The "pop-less" microphone switch is generally a series R-C pair, where the R is, say, 1 MOhm, and the C a few µF. The switch shorts the R out; the R charges the capacitor to the bias level when unmuted, and so shorting the R produces very little pop. The capacitor then shorts the AC audio component.

XLR switches are easier, just short hot and cold, done. Works with all microphones and doesn't produce a pop, because XLR uses phantom power instead of T-power.
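As a quick sanity check on the R-C approach described above (component values assumed from the comment: R = 1 MOhm, C a "few µF" value like 2.2 µF), the resulting high-pass corner sits far below the audible band, which is why the mute action doesn't pop:

```python
import math

# Values assumed from the comment above: a 1 MOhm resistor shorted by the
# mute switch, in series with a few-microfarad capacitor.
R = 1e6      # ohms
C = 2.2e-6   # farads (illustrative "few uF" value)

tau = R * C                      # seconds for C to charge toward the bias level
f_c = 1 / (2 * math.pi * R * C)  # high-pass corner frequency in Hz

print(f"time constant: {tau:.1f} s")              # 2.2 s
print(f"corner frequency: {f_c * 1000:.0f} mHz")  # ~72 mHz, well under the 20 Hz audible floor
```

Any transient the switch creates is slower than a couple of seconds, so nothing of it lands in the audible range.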


The best system for me is the one on the Sennheiser Game One I have, and probably many others.

There is a microswitch in the mic boom, so that it is disconnected when you lift it away. You can't get more simple: when it is in front of your mouth, it is on; when it isn't, it is off. No need for an LED. Also, the headset is passive, with a good old jack connector, which I consider a plus.


My Sennheiser PC37X (their conservative/stealth-looking gaming headset) has the lift-to-mute. I was excited about this feature but struggled to remember to unmute myself and gave up using it. I would like an LED indicator somewhere.


This was an excellent writeup. There's only one thing I would add: put the camera closer to where people's faces are. It feels like you're looking directly at them, and it makes a big difference. I've made a habit of looking directly into the camera now.


The camera thing is really an issue in my company. We mostly work on software development, so we share our screens constantly in meetings... no one cares about turning on the camera, and this has become the regular behavior.

The problem is that you don't know if the other people are actually paying attention, and human interactions need that feedback.


Honestly though, seeing people's faces/active backgrounds is super distracting. If I'm actually paying attention on a call I'm usually looking down off to the side of the screen so I can focus.

I recently setup a camera pointing down at my keyboard/mouse instead of my face for demonstrating a keyboard that I built (analog hall effect--from scratch!) and I think that's good enough to let people know, "I'm here" without being super distracting (assuming I turn off the LEDs and the gigantic LED matrix display haha).


Then they see you writing emails instead of listening?


> the problem is that you dont know if the other people are actually paying attention and human interactions need that feedback

This is going into the realm of the kind of monitoring software that tracks your eye movement to make sure you're concentrating.

If people are not paying attention to whatever you're presenting in your meeting, maybe the meeting is not relevant for those people. Consider cancelling it.


There are devices you can buy for a few hundred that place the image of the person you're talking to directly in front of the camera. That way you can look at who you're talking to while also looking directly into the camera.

https://www.youtube.com/watch?v=6nCYWhYagqk

There are also a lot of homebrew DIY versions of the same device:

https://www.youtube.com/watch?v=2AecAXinars


Or, even better, position the camera far away and zoom in if the camera has an optical zoom. This gets rid of a ton of distortion in your face.


Related to your open back headphones comment: I hate using closed back/ noise canceling headphones while talking in calls. Fortunately, I don't have anyone around me so I don't need them but I can't imagine having to use them and listening to my own voice through my skull.

I'm currently using the HD58X but I might look into getting the SHPs as a "beater" pair with the VModa mic.


Sony noise canceling headphones deliberately start passing some ambient sound through (including your own voice) when you are in a call of any kind.


Not really easy to be honest. Depending on the day, I am getting horrible static in my desktop microphone(s). This might be caused by no grounding in the outlet.

I'm living in a really old house with no ground for most rooms (yes, I know), with only a bootleg ground to prevent _really_ bad noise and occasional static zaps. Though I've read of many people having the same issues with properly grounded machines (as far as it goes for domestic use. I'm not talking about studio-grade grounding).

My MacBook, on the other hand, doesn't have any static, even though its charger doesn't even have a ground pin. Nor does my SteelSeries Arctis 1 Wireless (which uses a non-Bluetooth dongle); might be because it's wireless, or just because it's an external device.

In any case, I don't feel comfortable shelling out upwards of $400 for an audio setup.


I don't recommend the following but in our old house I used to tie my outlet ground (that was free floating) to the radiator which was grounded. It worked until my mother reported the shower water was feeling "very harsh".


Sounds like the radiator was not actually grounded and the device plugged in had a ground fault.


If the radiator was actually grounded, why would there be any effect on the water?


Yeah, I could theoretically tie my outlet ground to the gas pipe. Doesn't sound like a good idea.


This shouldn't be a problem from what I understand: "real" ground is just tied off to a rod buried in your backyard, but it's also bonded to neutral at the switchboard anyway.


>but it's also bonded to neutral at the switchboard anyway

Depends on the country. Over here protective earth is entirely separate from neutral, and there's a separate earth stake for each consumer. This is the TT system.


Yes, but with a certain resistance meaning there will always be a voltage difference between neutral and ground.


You can also try the best kept secret in radio: https://youtu.be/gPbQYmkyqaE


The video is excellent and I would encourage anyone reading this to watch it, but for the benefit of those who don't like clickbait the answer is: surround yourself with a quilt, jacket, pillow fort, or similar, because although it looks ridiculous it gets rid of background noise and muffles reflected sound.

(I haven't "saved you a click" because you should watch the video anyway. It's not just about how to get better sound when recording or broadcasting. About ten minutes.)


This is my problem. In order to get a decent sound in my untreated office (reverberant bare walls, hardwood floor, etc) I need to have my dynamic mic with a low gain setting and I have to be right up on it, which makes me look like I'm on Joe Rogan's podcast or something. For Zoom I'd prefer if you couldn't see the mic.


A lavalier hidden in your collar (or tie knot, if you're that kind of guy) might work for you. Because they're surrounded by clothes and your body it's less susceptible to room noise.

https://www.youtube.com/watch?v=D85HmR825wM


I'm sure this will get lost because I caught this thread late, but there's one more thing you can do with a "real" DSLR-type camera for better image quality: zoom in.

Ideally, the camera is as far from you as possible and zoomed in on your face. "Zooming in" is really just increasing the focal length; decreasing it gives a wide-angle view, which at the extreme produces the distortion known as "fish eye".

This is one of the first things people will tell you about photographing a human being for a portrait (which is essentially the same problem as a video conference). Get rid of distortion on the face. Use a focal length of at least 50mm (zoomed all the way in on the lenses mentioned in your article). Otherwise, the nose gets blown up and everyone looks worse.
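To put rough numbers on the distortion (the 12 cm nose-to-ear depth below is just an illustrative assumption), the apparent magnification of near facial features relative to far ones falls off quickly as the camera backs away:

```python
# Illustrative geometry: assume the nose tip sits ~12 cm closer to the
# camera than the ears. A feature's apparent size scales with 1/distance.
DEPTH_CM = 12.0

def relative_nose_size(camera_to_nose_cm):
    """How much larger the nose appears relative to the ears."""
    return (camera_to_nose_cm + DEPTH_CM) / camera_to_nose_cm

print(relative_nose_size(30))   # webcam at 30 cm: nose appears 1.4x oversized
print(relative_nose_size(150))  # camera backed off to 150 cm: only 1.08x
```

Same face, same framing once you zoom in, but the 1.4x blown-up nose drops to a barely perceptible 1.08x.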


Yep, this is correct. Others in the thread have recommended getting the camera as close as possible to compensate for the wide angle lenses of webcams, but this is suboptimal. It creates the unmistakable visual impression of being right in someone's personal space while you talk to them. You can create the same effect where someone is easy to see just by using a camera with a narrower field of view and a longer focal length, without the distorting effect caused by being too close.


For those not looking to spend a fortune, a simple pair of wired Apple EarPods is still better than most headsets out there. And it costs 20 bucks. I think my Yeti actually sounds worse, and it cost 3 times as much.


Curse anyone that uses an inline microphone on some earbuds. They sound awful, and people frequently bump against them, causing an even more terrible experience for the listener.


Earbud microphones rubbing against clothing is like nails on a chalkboard for me.


I agree, but between that and most of my co-workers currently using their laptops built-in mics, I'd rather deal with the noise from the earbuds.


The worst is laptop mic + speakers. If you notice the people speaking to you stop mid-sentence, it's because hearing themselves with some time lag tends to make them stop speaking.

Thanks to some people, everyone can experience speech jamming for free! https://arxiv.org/vc/arxiv/papers/1202/1202.6106v1.pdf


You just have to do the TikTok hold


> For those who are not looking to spend a fortune, a simple Apple earpod (wired) is still better than most headsets out there.

I don't disagree, but the results are widely variable with different TRRS I/O across different soundcards. E.g. on a MacBook, the EarPods probably sound great, with a good level of gain and plenty of headroom. On a Lenovo Thinkpad, they sound hissy and terrible because you have to turn the gain all the way up.

> I think my yeti actually sounds worse at it cost 3 times as much.

Something is probably wrong if this is the case. Which is understandable; a USB microphone that's not attached to your person requires some positioning and mic technique that you don't have to think about with the inline mic on the EarPods.


The scuffing sounds coming from my coworkers (wired) earpod mic as they rub it against their clothes says otherwise. I'll take my Blue Yeti over airpods any day.


Also, as someone at a company using almost exclusively MacBooks, I've never noticed an issue with sound or video quality.


Totally agree. I've been using my old Apple EarPods and I'm always told that I sound great.


The most important thing is to have the microphone close to your mouth. There is nothing more annoying than listening to an echo-y voice.

The mic doesn't even have to be that expensive. I use a cheap dynamic mic from eBay with a windscreen and a mic arm, and it sounds fine.


> I use a cheap dynamic mic from ebay with a windscreen and a mic arm and it sounds fine.

How do you know what it sounds like?

How do you know how good you sound to other people compared to if you were speaking through a good condenser mic?


Open voice recorder, record, say things, listen.

Plus multiple services now offer test calls/contacts where you can open a voice call, say things, and then listen back to how the other side hears it.


Put on headphones, linked to your phone, mute the phone mic (do not skip this step), and hold a video call with yourself between your computer and your phone.


Make your own zoom call and record it...


A quality supercardioid microphone will reject echo well enough for most rooms at a meter or two of distance. That is usually enough distance to not require a pop filter, thus giving improved clarity.

A hypercardioid "shotgun" works too, as long as its back is placed far enough off a wall; however, these tend to color the sound.

It just so happens that most microphones are the less directional cardioid. Or worse, omnidirectional.


Step 1. Get a quality mic. Step 2. Control the sound in your environment. The best mic in the world won’t help if you sound like you’re recording in the middle of your kitchen.

One of the worst aspects of listening to a great interview is when the guest is in a space with tons of audio reflections. You want the sound of your voice, not the room.

Many podcast hosts climbed into closets with sound dampening clothes on hangers during the pandemic. It worked out reasonably well.

If you’re doing audio professionally, consider treating the recording space. If you don’t want to put panels on the walls, get free-standing panels that can be stored when not in use:

https://auralex.com/

Get a pop filter:

https://en.m.wikipedia.org/wiki/Pop_filter

Control sibilance:

https://urm.academy/death-to-sibilance/

No use having a great mic if you don’t control the things you don’t want it to capture.


This video from ElectroBOOM has a lot of similar comparisons, examples, tips, and tricks for high quality audio, with all the science to back it up. https://www.youtube.com/watch?v=m7CtnR47w20 (Fantastic channel, btw)


Oh hey, I use that mic from your first link. Works quite well for the price. For anyone wondering, I have V-MODA headphones, so I knew it would fit, but it does fit a couple of other headphones as well. It just won't fit everything, so be aware of that.


Yeah, in particular you need headphones whose 3.5mm cable is detachable. Thanks for flagging, I should have included a warning!

For other headphones you can use the various flavors of Antlion ModMic, but it’s more expensive and less convenient because you have two cables.


question - do you use this on video zoom calls? I can see the benefits on a non-video zoom call. But having a microphone on your face during a video zoom meeting makes me feel like a radio DJ trying to have a call.


The V-Moda is great, but if you don't have headphones where it can be attached I recommend the $20 Sony ECMCS3 mic. It sounds fantastic for the price.

https://www.youtube.com/watch?v=VzGPyekZE7w

https://www.amazon.com/Sony-ECMCS3-Omnidirectional-Stereo-Mi...


I've seen in some places that this might need some power (as a Rode VideoMicro needs from the camera), so it wouldn't work with a laptop. Where did you connect it?


It has worked on anything I've used it on (MBP, iMac, PC desktop, iPad, etc).

You do need one of those TRRS splitters though.


Great article! Is there any inherent audio quality difference between USB and XLR in your experience?


You’re always gonna connect XLR over USB anyway, so not really. It’s just that XLR gives you a lot more flexibility to change microphones, use your interface to control gain or add padding, or if you’re a musician record instruments. But a USB AT2020 or similar is gonna be excellent for calls no matter what.


This is not true. USB microphones do not have as high quality as XLR microphones connected to a USB interface. In general, USB mics have a lower signal-to-noise ratio (SNR) and a higher noise floor.

Does this matter for gaming or calls? Not really, as it will definitely sound better than crappy laptop or headphone mics. But there is a marked difference. The AT2020 USB mic doesn't even go up to 20 kHz. Not to mention the A-D conversion from a dedicated unit and the mic preamp are going to be better than the onboard electronics of a USB mic.


Err no, an XLR mic into a sound card is going to be better.


What sound cards support direct XLR input?

In nearly every case, a dedicated USB interface is going to have better quality ADCs and mic preamps. High-quality sound cards are not prioritized by consumers, so they remain rather poor quality in most laptops and PCs. Even something like a Focusrite Scarlett is going to improve the signal chain immensely (plus you get the added bonus of a decent DAC as well).


Focusrite, for one :-) Any external sound card worth its name will have them.

I did of course mean a real external sound card.

And a USB mic is not going to have as good a mic capsule at the same price point, which was my point.


Probably less static, lower noise floor, more tonal and full sound. I've used both and I obviously prefer XLR but it probably doesn't make any difference for casual use.


I think one of the things people often overlook is the distance between the mic and your mouth. The closer the mic is to the source, the higher the signal-to-noise ratio will be, so the less echo and background noise you'll get. Many smartphone mics will sound very impressive if you hold them around 6-12 inches from your mouth. But you don't really want to do this with your hand, so it's important to get a mic with a nice stand or a form factor that allows you to comfortably place it where you'll get good audio.

Another thing people forget about is the noise canceling and other filters that are applied to your audio by default. If you're in a reasonably quiet place, it's probably reasonable to set "noise cancellation" in Zoom to low. This will make your audio less garbled. If you have a really solid audio setup with headphones, you should try turning on "use original sound," which can make your audio really nice (unfortunately not available on Linux).

I highly recommend Fifine's mics. They have a USB condenser mic with a boom arm for $60 (~$35 for just the mic) [1], and a lavalier (lapel) mic for $20 [2]. The audio quality is really quite impressive.

[1] https://www.amazon.com/FIFINE-Microphone-Adjustable-Instrume...

[2] https://www.amazon.com/Lavalier-Microphone-Cardioid-Condense...
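A rough sketch of the distance point above: if the direct sound falls off with the inverse-square law while diffuse room noise stays roughly constant (a common approximation, not a precise acoustic model), each halving of mic distance buys about 6 dB of signal-to-noise:

```python
import math

def snr_gain_db(far_cm, near_cm):
    # Direct sound level rises 20*log10(d_far/d_near) dB as the mic moves
    # closer; room noise is assumed unchanged, so this is the SNR gain.
    return 20 * math.log10(far_cm / near_cm)

print(round(snr_gain_db(60, 30), 1))  # one halving: ~6 dB
print(round(snr_gain_db(60, 15), 1))  # 60 cm -> 15 cm: ~12 dB less room in the mix
```

That 12 dB is the difference between "echoey room" and "clean close-mic sound" on most calls, and it costs nothing but a boom arm or stand.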


The recommendation above (the V-MODA BoomPro and Philips SHP9500S) is pure gold and will save you hundreds of hours of research. After trying 20 to 25 different products and solutions, I arrived at the same conclusion. I work on Linux but sometimes need to use Windows, and I regularly deliver sessions, workshops, etc. Very high quality sound is critical for me.

I have multiple professional-level microphones (an SM57, Neumanns, a Blue Yeti) and have also tried some of the cheaper USB mics. I spent well over 60 to 80 hours researching how to get good audio quality online, and would like to offer the following recommendations:

DO NOT rely at all on YouTube recommendations from specialized channels, even the ones with high reputations. They have a business to run, and a bad review of a product will ensure they don't get another "sample" from the same vendor. There were instances where I ordered professional-level headphones in the $300 to $400 price range, reviewed by several of the high-reputation channels as the best out there; within minutes of receiving the product I would realize how uncomfortable they felt, or how bad they sounded. When I went back and re-watched some of these YouTube "reviews," I would quickly realize the reviewer had skillfully omitted any mention of these failures. If there is an issue, these reviews just "omit" any comments around the problematic areas of a product. On second thought... maybe there is a business opportunity here.

Recommendation: Choose a reliable online vendor that can offer returns on the product. Be ready to order several products and do your research.

You also have to take into account a couple of things:

- What OS are you using? If you are using a USB mic, some vendors have great mics but terrible drivers (e.g. the Blue Yeti Windows drivers) and do not seem willing to put the effort in. Windows is particularly terrible out of the box, with energy-saving plans that suspend USB ports. It took me hours to get Windows 10 to sound good and reliable for online meetings. This is a good starting point: https://support.focusrite.com/hc/en-gb/articles/207355205-Op...

- Do you want to sound good while doing podcasts and creating YouTube videos, OR on web meeting platforms like Webex, GoToMeeting, Zoom, and Jitsi? In my experience, due to the internal audio processing done by many of the online conference platforms, you are going to need different solutions for each use case. Some of the studio-level condenser mics used for podcasts do not sound very good during online conferences. It's also the case that they are too sensitive: your conference participants will hear you with great audio quality, but they will also hear your neighbor's dog barking.

Warning: I am not associated with any of these companies in any way but I would suggest the following:

- Do you want to sound good for web meetings? Get two V-MODA BoomPro mics and two Philips SHP9500S headphones: one set to use and one as a backup. It will be relatively cheap compared to other solutions, and the price/quality ratio of this recommendation is exceptional. The mic has good quality and the headphones are high quality; you won't feel them even if you wear them for 8 hours. You can spend more if you are willing to put in the research effort. Just do not settle for the first choice.

OR

- Do you want to sound good while creating YouTube videos? Always get a pop filter and a mic stand with vibration isolation. Get a Blue Yeti (but use the XLR port, not USB). The Blue Yeti USB drivers on Windows will randomly cause distortion, and I've given up on the company putting in the effort to fix the issues.

You can also

Get an SM57. It sounds great for voice, and it's not by accident that it's the official mic of the US President. https://en.wikipedia.org/wiki/Shure_SM57

Be careful where you order: the SM57 and SM58 are some of the mics most frequently counterfeited by Chinese or Taiwanese vendors. Then get one of the Focusrite Scarlett interfaces and you will be sorted.

If you use Windows rather than Mac or Linux, be ready to spend some effort troubleshooting driver issues. This solution will not be cheap, but it's still manageable and will save you hours. You're welcome!

[Edit] Spelling


As a comment. I got the v-moda. I like it a lot. It sounds great. BUT it's a very omnidirectional mic, it picks up everything going on in the room in clear detail.

If your environment is noisy, you would likely be better off getting a shotgun or cardioid style microphone with some directionality to it.


I use a home studio, so it's easier. If you participate in conferences from an open-floor office, I would agree.

Also important, and already mentioned in the original post: avoid any Bluetooth-based mics or headphones. Avoid WiFi connections and go for cable-based connections.


I use that V-Moda mic with Sennheiser HD598 open-backs. I had to mod them to connect, but they've worked quite well for many years now. I might need to get a new mic because the volume control is starting to cut in and out if I move it too much. Great recommendation though!


I recently bought a new dynamic mic, and it has absolutely changed the way I do WFH. No more crappy noises. No more background sounds. In fact, I believe that getting a good microphone is a great first step toward a better WFH workflow.


I love the modularity of the boom mike attaching to existing headphones. I have been using a Bluetooth adapter which keeps things modular. You can plugin a wired headset to it. Of course, keep things wired when talking with someone else. But you can reuse that wired headset as Bluetooth when you just want to listen and want to roam around. https://www.amazon.com/Mpow-Bluetooth-Receiver-Connection-Ha...


I have one of these and have been very impressed with the output.

https://www.amazon.com/dp/B07QVNXBDL

It was ~$50.


I got a decent headset w/mic, use it for Zoom, use it for cell phone as well when I am at home. Great improvement in what I hear, and what others hear as well.

On Zoom I look a little goofy with the phones on but better that than missing what people say and getting echoes.


I use the Philips SHP9500S headphones. I found they were very uncomfortable with the ear pads they came with. I replaced the ear pads with some thicker ones (Shure HPAEC940) and it helped a lot.


Wow, I actually did a bunch of research on upgrading from my current "gaming" headset a while ago and those are the exact items I landed on. Maybe it's time to finally pull the trigger.


> The author recommends a "podcasting" microphone, but a $35 standalone headset mic[1] is almost as good and much easier to use.

so if it's almost as good how smart does each one make you sound?


It couldn't have come at a better time for me. I have just started looking for a better mic to sound better to my colleagues. Thanks for the wonderful write-ups and suggestions


Thank you. Do you also happen to have recommendations for those of us happy to spend a little more on a "podcasting" or any similar higher quality microphone?


This depends a lot on your budget, your voice, and whether your room is treated or not. My room is not treated. I use a Røde M3 condenser mic just outside the camera frame for Zoom calls; it's fine but sensitive to outside noise. A mic with a hypercardioid pattern or a lavalier would probably be better for that purpose. In any case, the audio quality is very good. For recording audiobooks, I use a dynamic Røde Procaster.[1] It's outstanding and was the right choice for my voice. It has very good background noise rejection. I'd recommend it.

Generally speaking, there are many good condenser microphones but I'd recommend a dynamic microphone if you can get close to the mic, your room is not treated, or there is outside noise.

[1] I'm in no way affiliated with Røde, just happened to like their mics. There are many other good choices in the same price ranges.


This is the reason why I got some wired Bose SoundSport earbuds, and I had to get the lime green ones because they don't make them anymore. No wireless for me.


The V-MODA BoomPro finally gave me a usable microphone in combination with my 1000XM3s. A cheap and easy upgrade.


Fantastic article, brb going to spend way too much on gadgets now


Great article!


Shot in the dark, but many of my Linux coworkers have a problem where their video chat software sets their mic volume/gain to 100%, which causes horrible sounding clipping. Check your mic gain settings and perhaps disable the automatic volume equalization in whatever video call software you're using.
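A toy illustration of why maxed-out gain sounds so bad (the 4x gain and the 220 Hz test tone are arbitrary assumptions, not anything a specific conferencing app does): once samples exceed full scale, they get flattened into hard distortion:

```python
import math

# A 220 Hz tone recorded at a healthy level, peaking at 0.5 of full scale.
samples = [0.5 * math.sin(2 * math.pi * 220 * n / 48000) for n in range(48000)]

gain = 4.0  # what cranking the input slider effectively does to a hot signal
boosted = [max(-1.0, min(1.0, s * gain)) for s in samples]

# Every sample pinned at full scale is a flat spot in the waveform -- the
# harsh, crackly distortion the other end of the call hears.
clipped = sum(1 for s in boosted if abs(s) >= 1.0) / len(boosted)
print(f"{clipped:.0%} of samples clipped")
```

With the signal at four times full scale, roughly two-thirds of the waveform ends up pinned flat, which is why a quick glance at the gain slider is worth it before blaming the mic.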


This is killing me right now. :( And I'm not finding a clear "this disables it" setting.

For the record, I'm trying to use Google Meet, and it seems to be the culprit doing the "push it to 100%".


I ended up buying an audio interface that has no digital volume control, only analog, so applications can't mess up the levels at all. It's amazing how conferencing apps just assume that 100% is better.


This article completely mischaracterizes the beliefs of most of the people quoted or referenced, then engages with those beliefs only via asserting the opposite without any supporting argument. I'm disappointed.

For example:

> The most charitable explanation of Singer’s dismissal of political action is that he is trying to sell being an altruist and he thinks a consumer-hero version is the one people are most likely to buy. Singer and other effective altruist philosophers believe that their most likely customers find institutional reform too complicated and political action too impersonal and hit and miss to be attractive.

Interestingly, the author quotes part of Singer providing an argument against the effectiveness of institutional reform, but does not himself provide an argument for it, just an assertion that political change is "the most obvious and powerful tool we have." (I think that's far from obvious!) Instead, he jumps straight to accusing Singer of arguing in bad faith. This is actually the opposite of charitable.

For another example, I'm deeply confused about how the author of this piece could cite Will MacAskill and Toby Ord, then write:

> The underlying problem is that effective altruism's distinctive combination of political pessimism and consumer-hero hubris forecloses the consideration of promising possibilities for achieving far more good.

Ord and MacAskill co-founded an organization, 80,000 Hours[1], which advocates mostly not for effective giving (which the author derides as a "consumer hero" approach) but rather for spending your career working on one of the world's most pressing problems; notably including for instance several types of policy change.

EDIT: and I missed this one the first time around:

> One could spend at most a few tens of millions of dollars on anti-mosquito bed nets before returns start dramatically diminishing because everyone who can be helped by them already has one.

A single bednet charity, the Against Malaria Foundation, has literally already raised 10x this amount without substantially diminishing returns: https://en.wikipedia.org/wiki/Against_Malaria_Foundation


> Ord and MacAskill co-founded an organization, 80,000 Hours[1], which advocates mostly not for effective giving (which the author derides as a "consumer hero" approach) but rather for spending your career working on one of the world's most pressing problems; notably including for instance several types of policy change.

That’s not a universally true statement about 80,000 hours. I took their career choice questionnaire and was told to work as a well paid Software Engineer and to donate a percentage of my income (which is what I was already doing — I’m fond of some of Effective Altruism). You could argue that is classic Consumer Hero advice.

I know you qualified your statement but I just want to emphasize it as I think it leaves room for criticism.


For more concrete data: If you look at their current "key ideas" page,[1] they go over 4 categories of high-impact careers (notably including government/policy) and then say "if you think none of the categories above are a great fit for you, we’d encourage you to consider earning to give. It’s also worth considering this option if you have an unusually good fit for a very high-earning career."

This post[2] suggests 80k's key researchers think about 15% of people interested in EA would be the best fit for earning to give, while 10% of people attending an EA-themed conference were personally planning to.

I don't think criticizing effective altruism based on the assertion that it's mostly about earning-to-give is reasonable given those numbers or the framing in 80k's "key ideas" post.

[1]: https://80000hours.org/key-ideas/#career-categories [2]: https://forum.effectivealtruism.org/posts/LrKFNQxjETPvzXQcv/...


> I took their career choice questionnaire and was told to work as a well paid Software Engineer and to donate a percentage of my income

The good thing about earning to give is it works no matter how niche your skill sets are.

What's an efficient way to deploy an expert microchip designer, when so few global poverty NGOs need microchips designed?


I got that same advice circa 2015. However I'm under the impression they started moving away from "earning-to-give" recommendations around 2015-2016.

It's still there mind you, but research and policy are prioritized higher. I'm not sure what else they would recommend to someone who is ill suited for these other careers.


Yeah, that article is pretty weird. To add on your last point:

> > One could spend at most a few tens of millions of dollars on anti-mosquito bed nets before returns start dramatically diminishing because everyone who can be helped by them already has one.

That's a feature. It's half of what makes "effective" part of "effective" altruism effective. The fact that charities aren't infinite money sinks allows everyone to donate by a simple algorithm of:

  let charity;
  do {
    charity = runQuery("
          SELECT *
          FROM charities
          WHERE effectiveness > 0
          ORDER BY effectiveness DESC
          LIMIT 1;");
    if (charity != null) donateTo(charity);
  } while (charity != null && hasMoneyToHelp());
The idea being, charities that can do most good now get the funds, and as they saturate and hit diminishing returns, they stop being the most effective ones, so other charities take their place. Repeat until there are no more charities remaining, thus no more problems to solve.

(People may have slightly different definitions of "effective", or one may prefer to do e.g. LIMIT 5 and pick one of the top 5 at random - the idea still works, the slight variance is a hedge against uncertainty.)

It's an obvious idea, and it's how people approach other aspects of their lives if they care about the outcome, so why not donating too?

The article continues:

> > This points to the limits of the individualistic consumerist approach to ending poverty. The best – most beneficial – choice you can make as an individual spending $50 or even $5,000 is different from the best choice you should make if you have several hundred billion dollars to spend.

There's only one entity with several hundred billion dollars to spare, and that's the US government. Other than that, this is why people donate to charities: charities exist to pool money, to turn ten thousand $50 donations into a single $500,000 force that can be better used than if everyone tried to apply their $50 directly at the problem.


Effectiveness of charities has exactly the same problems as efficient market hypothesis: markets can remain irrational longer than you can remain solvent. Or in other words, charities can remain full of shit for longer than you have money for.


Yes, and the same rules apply:

- Don't invest more than you can comfortably lose.

- Don't throw away the entire concept just because it's not perfect.

- Get better at telling who's full of shit, or delegate it to someone you trust.


Don't 2 & 3 contradict each other in this case? Throwing away the yoke of institutional giving is predicated upon recognizing that overhead and start up costs eat up transferable dollars.


They're complementary.

#2: Just because there are charities or companies that do bad things, doesn't mean the concepts of charities and companies, or philanthropy and markets, are fundamentally broken. There are good and bad charities/companies, even though it may be hard to tell which is which.

#3: You can make your donations more effective if you get better at telling which charity is good (same for investments and companies). There are diminishing returns on effort here - particularly if you aren't planning to specialize in charity evaluation. In that case, you can convert this problem into an easier problem of finding trustworthy specialists in charity evaluation, and donating to places they consider good/effective.


That's why we have charity evaluators like GiveWell. https://www.givewell.org/


Now consider how long a government can remain irrational.


> but does not himself provide an argument for it, just an assertion that political change is "the most obvious and powerful tool we have." (I think that's far from obvious!)

Argentina, Venezuela and Zimbabwe are basically nations that destroyed themselves through bad political reforms. If there had been a way to prevent those reforms you could have prevented millions of people from slipping into poverty and needing micro interventions.


Effective altruists would generally be open to the idea that political reforms might theoretically be the best way to spend money in some cases.

But in the case of despotic or otherwise terrible leaders, the problem is that it's unclear what to do to stop them even if you have control over a large military, let alone if your only resources are a relatively small amount of money and the time of a relatively small group of people.

It remains an unsolved problem how to evaluate causes like political advocacy that have very long-term or very uncertain effects. Combine that with the rancor and division that talking about contemporary politics causes and it's no wonder EA as a movement chooses to eschew political advocacy for the most part. I guess the overall sentiment is that the money some people donate to feed the Democratic or Republican party machines could probably be spent better.

That said, I've seen (relatively) small EA grants given to organizations advocating and acting to improve governments in undeveloped countries. For example, the first one I came across was this: https://www.givewell.org/research/incubation-grants/innovati...


Countries, not nations. There are many nations living in Argentina and Venezuela, some do have political representation, some do not. e.g.: the Mapuche nation in Argentina lacks political representation.

Then, Argentina is working extremely well, except for themselves, that is.

They are in a treadmill of unpayable debt that works in the following way:

- The left is tasked with buying people's complacency with borrowed money.

- The right is tasked with giving away sovereignty: privatization and military bases.

People are expected to pick a side and stay busy fighting over which side is right. Meanwhile, the country is taken over. Divide and conquer.

It is working extremely well. There are now US military bases near Ushuaia, Neuquen and the Guarani acquifer, the 3 most strategic locations in Argentina. Everything that can be privatized has been privatized, and provisions have been made so that Argentineans can never pay off their debt, so that they can continue to lose their sovereignty. What should they privatize next? the sky is the limit.


Have they tried selling themselves back to Spain?


Argentineans are happy with their limbs and they surely are not in need for illegitimate children, so no.


"The most obvious and powerful tool we have" is not the same as "if it worked, then it would be useful in some cases".

The author makes an extremely strong claim and expects us to believe it without any support.


One problem with politics is that it's adversarial: if a political cause has opponents, spending resources on that cause can turn into a war of attrition (e.g. the amount spent on US election campaigns)


It has to be adversarial, because politics by its very nature is how people control the behavior of other people.


> Argentina, Venezuela and Zimbabwe are basically nations that destroyed themselves through bad political reforms. If there had been a way to prevent those reforms you could have prevented millions of people from slipping into poverty and needing micro interventions.

Now imagine if the people championing those bad reforms had simply stayed out of politics? The more powerful a tool is, the more cautiously it should be wielded.


Yes, but how much would such a reform cost and is it even possible with only money? Being effective means counting how much good you’re doing per one dollar.


Maybe political change would be the most obvious remedy, but its track record of helping the poor gives ample reason to look for change elsewhere.


Political change has done pretty well at helping the poor in Scandinavia.


Yep. It's just bad. He's essentially advocating for effective altruism to become another lobbying group or NGO.


But effective altruism is an NGO, isn't it?


I think you are mischaracterising the argument. His argument is that effective altruism cannot eliminate poverty; it's inherent in the concept. You also did not show that he is the one who mischaracterises the "beliefs" (maybe you mean arguments here?). I think he gave a reasonably accurate description of effective altruism; what in the description do you believe is wrong?

You have not actually engaged with the actual argument, you simply assert that change through political activism is not the "obvious most powerful tool we have". I think you need to back that up, I would say history at least tells us that the largest changes in wealth distributions have come through (often violent) political action.

Regarding the bednet argument, you are simply nitpicking on the numbers. The argument still stands at some point you end up in a position of diminishing returns, i. e. When everyone has a net giving nets is not helping anyone.


> I think he gave a reasonably accurate description of effective altruism, what in the description do you believe is wrong?

I disagree that this article gives an accurate impression of EA. The main point of effective altruism is that people should use evidence in choosing which charitable causes to devote their time to. I don't feel like the author sufficiently engages with this point; instead he attacks Singer for not coming up with a satisfactory standard for what percentage of one's wealth to donate, and laments that EA isn't political enough.

EA arose partially from the observation that people do most of their giving to, for example, local churches and schools rather than to truly desperate people in other parts of the world. People also tend to donate their effort to local and relatable causes. The argument isn't that buying malaria nets is going to eliminate all the evil in the world; the argument is that it's a better use of money than other charities, and that we should use evidence to determine how to expend our resources.


Well, I disagree the whole premise of Singer is that we can't change things through the political process so instead we should use "effective altruism".

Now if we are donating to charity should we follow the principles of EA? The answer to that is probably yes, but that's a different question and is not the point raised by the author.


> the whole premise of Singer is that we can't change things through the political process so instead we should use "effective altruism".

(I'm assuming "I disagree" was meant to be a separate sentence)

I think Singer would say that it's often but not always hard to change things through the political process. But he hardly shuns politics entirely; he often speaks and writes about political issues, and he ran for Australian Senate in 1996.

But anyways Singer's beliefs about the effectiveness of political causes isn't the central point of Singer's EA advocacy, it's just one piece of his beliefs.


I don't know how "at some point you end up in a position of diminishing returns" in this case is an argument against EA. In the book Doing Good Better one of the core ideas of EA is defined as investing in problems that are neglected, which prevents investing in diminishing returns. Whenever malaria nets run into diminishing returns, I am sure that GiveWell will start ranking AMF lower. If you are arguing that we should not invest in problems that have diminishing returns, EA strongly agrees with you.


GiveWell is literally a priority queue of charities. The moment AMF starts getting diminishing returns, it'll drop down the list. If some other organization figures out an even cheaper way of saving lives, it'll overtake AMF on the list.

It's a simple and obvious system. I'd say the only two things to potentially quibble about are whether or not one likes their sorting function, and the lag time between resorting.
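A toy sketch of that "priority queue of charities" idea in Python (the cost numbers are made up for illustration, not GiveWell's actual estimates):

```python
import heapq

# Hypothetical (cost-per-outcome, charity) pairs; lower cost = more
# effective. heapq is a min-heap, so the cheapest intervention pops first.
heap = [(4500, "AMF"), (6000, "Deworm the World"), (9000, "GiveDirectly")]
heapq.heapify(heap)

def best_charity():
    return heap[0][1]  # peek at the current top of the list

def update_cost(name, new_cost):
    """Re-rank when a charity hits diminishing returns."""
    global heap
    heap = [(new_cost, n) if n == name else (c, n) for c, n in heap]
    heapq.heapify(heap)
```

Here `best_charity()` starts out as "AMF"; after `update_cost("AMF", 10000)` it becomes "Deworm the World" — exactly the resorting described above.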


His argument is that effective altruism (EA) is unable to eliminate poverty and therefore it is ineffective? I mean, it's a pretty empty criticism, and can be used against any sort of altruistic philosophy given that poverty is quite an intractable problem. E.g. "Your altruism is not effective unless you work to eliminate world poverty."

Effective altruism boiled down, is getting the most value out of altruistic work as I understand it. Obviously if every child in malaria endemic areas had a net, donating to malaria foundations would not be the most effective thing to do. If we discovered an asteroid that would destroy 20% of the population, EA would probably dictate channeling of all resources to that cause.


No, the burden of proof lies on the person claiming that something is "obvious".

And nitpicking the numbers is incredibly important if said nitpick happens to span orders of magnitude.


It does not fundamentally change the argument, so nitpicking does not change how valid an argument is.


> I would say history at least tells us that the largest changes in wealth distributions have come through (often violent) political action.

History shows just the opposite. The largest changes in wealth distribution in history have come about through industrialization, starting in Britain and then spreading to other parts of the world. Purposeful political action has most often been harmful to these developments, with limited exceptions (such as 19th-century Japan, and other East Asian countries in the 20th century).


> [...] but does not himself provide an argument for it, just an assertion that political change is "the most obvious and powerful tool we have."

Maybe because vastly more people were lifted out of poverty (and associated issues) by a political decision to switch to some version of free market economy than by any kind of individual charity?

The author probably didn't state it because he assumed it's common knowledge.


HN should require (non-randomized) in the title for this kind of thing, the same way it requires (2014) or (pdf).


The author’s OO example is hard to understand, but they’re wrong about why. It’s not bad because it’s OO, but that it’s very badly done OO: the class couples two different concerns (network API client and database). That’s why it makes more sense as a bag of functions.

The general version of the point doesn’t work very well, and many of the other OO use-cases the author discusses actually work much better than alternatives.

For example, on abstract base classes: if you replace this with a bag of functions I think you end up reinventing virtual dispatch—that is, each function’s top level is a bunch of `if isinstance(...)` branches. This is much harder to read, and harder to add new implementations to, than abstract methods. It’s also no easier to understand.
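As a concrete illustration of that point, here's a sketch in Python (the `Notifier`/`EmailConfig` names are hypothetical, not from the article) of what you get when you replace an abstract base class with a bag of functions:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

# --- "Bag of functions" style: plain data plus isinstance dispatch. ---
# Every operation re-implements the dispatch; adding a new notifier
# type means editing every function like send().
@dataclass
class EmailConfig:
    address: str

@dataclass
class SmsConfig:
    number: str

def send(config, message):
    if isinstance(config, EmailConfig):
        return f"email to {config.address}: {message}"
    elif isinstance(config, SmsConfig):
        return f"sms to {config.number}: {message}"
    raise TypeError(config)

# --- Abstract-base-class style: dispatch is handled by the language. ---
# A new notifier only needs to subclass Notifier; no existing code changes.
class Notifier(ABC):
    @abstractmethod
    def send(self, message: str) -> str: ...

class EmailNotifier(Notifier):
    def __init__(self, address):
        self.address = address

    def send(self, message):
        return f"email to {self.address}: {message}"
```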

(There is a subset of this advice that I think does improve your code’s understandability, which is “only ever override abstract methods,” but that is very different from “don’t use OO.”)

For impure classes, the author suggests e.g. using `responses` (an HTTP-level mocking library) instead of encapsulating these behind an interface. This is a fine pattern for simple stuff, but it is not more understandable than a fake interface. The hand-written fake HTTP responses you end up having to write are a lot less readable than a mock implementation of a purpose-built Python interface. (Source: I once mocked a lot of XML-RPC APIs with `responses` before I knew better; it was not understandable.)
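To make the comparison concrete, here's a minimal sketch of the fake-interface approach (a hypothetical `PaymentGateway`, not something from the article):

```python
from abc import ABC, abstractmethod

class PaymentGateway(ABC):
    """Purpose-built interface: the rest of the code depends on this,
    not on HTTP details."""
    @abstractmethod
    def charge(self, user_id: str, cents: int) -> bool: ...

class FakePaymentGateway(PaymentGateway):
    """Test double: records calls in memory, instead of hand-writing
    HTTP request/response fixtures for a mocking library."""
    def __init__(self):
        self.charges = []

    def charge(self, user_id, cents):
        self.charges.append((user_id, cents))
        return True

def checkout(gateway: PaymentGateway, user_id: str, cents: int) -> str:
    # Application code is written against the interface, so tests pass
    # in the fake and production code passes in an HTTP-backed client.
    return "ok" if gateway.charge(user_id, cents) else "declined"
```

A test then reads as plain Python — `checkout(fake, "u1", 500)` followed by an assertion on `fake.charges` — rather than as a pile of serialized HTTP bodies.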


> It’s not bad because it’s OO, but that it’s very badly done OO

This argument seems to come up for every criticism of OO. The criticism is invalid because true OO would never do that. It seems like a No True Scotsman. Notably, when you whittle away all of the things that aren’t true OOP, you seem to be left with something that looks functional or data-oriented (something like idiomatic Go or Rust). There isn’t much remaining that might characterize it as a distinct paradigm.


My problem with these arguments is that it's meaningless if this "bad OO" (or "bad <insert paradigm>") is what most developers write.

It's a big problem I have with ORMs. Every time a discussion about what's bad about ORMs is brought up, there's a multitude of people saying that it's just used wrong, written wrong or otherwise done wrong, yet that's basically my experience with every non-toy codebase I've ever worked on as part of a larger team. Telling me that it's just done wrong is useless because it's out of my hands. Is it the ORM's fault? I argue yes, because it encourages that kind of programming, even though it's not technically the ORM's fault.

I see it the same with OO or anything else. Is this "bad OO" the type of OO people tend to write? If yes, then OO is the problem. If no, then OO is not the problem.

Personally, I like a mixed approach where I use some OO and a lot of functional. I'm writing a little toy game in C++ from scratch and I have a very flat design, preferring functional approaches where I can, but I have a few objects and even a little bit of inheritance where it makes sense. Sometimes dealing with an object-based API is just more convenient and I won't shy away from using OO principles, but I also don't default to them just because either.


But every programming paradigm can be (and perhaps most often is) done badly. They each just create their own set of issues when "done badly". I think it just comes from not properly understanding why you're making certain choices.

Deciding to use a design pattern is going to come with a certain set of strengths and weaknesses, so ideally you'd consider your problem and choose design patterns that make the most of the strengths and minimize the impact of the weaknesses. But I don't think it's very common to put that much thought into programming decisions, so people will often make a design decision, and then invest their effort into fighting the constraints it imposes, while usually failing to realize its potential benefits.

OOP can be a perfectly suitable tool to solve a problem, but if you haven't thought about the actual reason why you want to use it, then you're just going to end up creating complicated abstractions that you're not properly benefitting from. ORMs can be great tools too, but if you want to use an ORM, you have to solve your problems using the approaches implemented by the ORM. If a programmer can't/won't do that, then they've just chosen the wrong tool for their problem/the approach they wanted to take to solve it.


Maybe the overall takeaway is that most programmers are bad.


Perhaps. I think a more likely explanation is that it's just not a very professional field. As a field it hasn't been around for very long at all, and it's changed a lot during that period. If it was obvious to tell the difference between bad and good programming, then perhaps things would be different. But I don't think that's obvious at all, and I don't think it's possible to create any meaningful level of agreement in that area. Look at how hard it is to agree on any sort of standard. IETF RFCs take years to process, with the typical outcome being a standard that enough people agreed to publish, but that 0 people faithfully implement.


I have another opinion about that. Most of us don't work in product companies but rather as consultants or in IT enterprise environments. In such contexts the real objective of software development is often the improvement of a process that is enabled by the software. As such, the cons of bad software are not immediately clear to any management layer above the first. Sometimes even the first layer of management doesn't really care until they get the process improvement they want. If the new software produces an increase of 30% in sales, then the fact that it is badly written is of secondary importance: the objective has been met. The fact that bad software quality will hurt maintenance in the mid-to-long term is often too far in the future for anybody to really care.

As software engineers, I have seen that we usually make two mistakes in these cases. We focus only on our craft without taking into consideration the real business objectives of the customer and what he really cares about. We introduce unnecessary complexity just for the sake of it, often adding unnecessary abstractions. In a few words, we often commit the sin of believing that software is the objective, while in most cases there is a business objective for which software is just the enabler, and because of that we lose focus on what is the right thing to do.

In software product companies code is part of the final product and code quality directly impacts costs and the core business of the company.


I’d still say this is at least partially attributable to not having any real measurements of what “good” software is. At least aside from at a very basic level.

If you want to build a bridge, I’d assume there’s a set of standards it has to comply with. If it doesn’t meet the spec you could explain in very objective terms why the quality isn’t sufficient. If you had to sit in front of the board and explain why you think a piece of software wasn’t built to a sufficient standard of quality, you’re going to really struggle to do that in objective terms. The best you could really hope to do is present a well articulated opinion. Anybody who disagrees with you could present their own opinion, and unless they have a high degree of technical competence, they’re not going to have any objective approach for making judgements about that.

Sure lots of companies prioritize feature velocity over code quality. But I doubt most of them a very well informed about what quality compromises they’re actually making.


In some large Internet Scale firms there were ARBs, Architectural Review Boards. They served this purpose, by having very senior members of staff review proposed architecture and implementation approaches PRIOR to coding occurring. The danger was that over time, this sometimes devolved into the members waiting to just beatdown people presenting to them and being internally "political".


This resonates with me.

In other fields it can be easy to say when a job is done well. There can be clear guidelines. That said, other fields cut corners and do a “bad” job, too. Construction for example. It is probably possible to explain to average Joe why a damp proof course wasn’t installed properly or the insulation used is dangerous and flammable.

Programming as it exists now seems often in service of something else. So a deadline causes cut corners. So in this context a cut corner is fine and “we’ll fix it later”.

I suppose programming differs from a physical thing like construction. Sure you can replace the wiring and install a mezzanine but it’d be insanely impractical.

I’m going off on a tangent now...


Construction is a prime example, especially Home Construction. Just look at the TikTok videos showing horrendous plumbing and electrical work.

Hell, I moved into a house built 3 years ago by a major firm and the ceiling fans were ALL wired wrong, and many of the electrical plugs are upside down.

This is a house that had a specific design model. My presumption is that unskilled labor was brought in to complete work and just did a shitty job.

You can see this sometimes with outsourced programming work or local work that is poorly designed and inadequately tested.


Maybe it's true, but that just shifts the blame and doesn't change anything or solve anything. Try as we might, "fixing" programmers just isn't realistic, so the only thing we can do is look for ways to improve the tools to make them encourage better practices.

Going back to the ORM example, I'm cool with just using SQL. The language encourages a style of thinking that fits with databases, and it being a separate language makes it clear that there's a boundary there, that it's not the same as application code. However, I'd also be OK with an ORM that enforced this boundary and clearly separated query logic from application logic.

As for OOP, I don't know what the solution there is. Maybe there isn't one. I like to use OOP, but sparingly, and at least in my personal code (where I have control over this), it's worked out really well. A lot of code is transforming data structures and a functional approach maps really well to this, but for overall architecture/systems and even for some small things that just map well to objects, it's great to have OOP too. In my toy game, I use inheritance to conveniently create external (dll/so) modules, but inside the engine, most data is transformed in a functional style. It's working quite well. I'm not sure how you could redesign the paradigms to encourage this though, outside of designing languages to emphasize these things.


>The language encourages a style of thinking that fits with databases and it being a separate language makes it clear that there's a boundary there, that its not the same as application code.

The separation is orthogonal to the data access style used, and you really have to make sure the engineers you're working with understand separation of concerns. I have seen many applications with controllers filled with loads of raw SQL, just as I have seen them filled with lots of ORM query building. If the programmers don't get why that's bad, they will do the wrong thing with whatever tools they have in front of them.


Take this contrived pseudocode example:

    results = []
    for frob in Frob.get_all(where=something) {
        if frob.foo == expected {
            results.append({
                frob: frob,
                quux: Quux.get_all(where=some-query-using-frob)
            })
        }
    }
Basically, the idea is that you fetch some data and check some condition or do some calculation on it, then fetch more data based on this condition/calculation. This entire thing could be a single logical process.

The only reason there's a separation of concerns here is because some of this is done in the database and some in the application. Logically, it's still part of the same calculation.

But the ORM hides this distinction and makes both the part that runs in the database and the part that runs locally look the exact same and super easy to intermingle. Worse still if you access properties on your ORM result object that actually trigger further queries to fetch the data. It looks like a field access, but is actually a database query. I've seen this cripple performance.

In many cases, if you step back and don't think about it in terms of application code, but rather the data access and transformations that you want to achieve, then it can be rewritten as a query (joining related data in as needed etc). At the very least, it makes you aware of what the boundaries are.
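To sketch what that rewrite looks like, here's a runnable version using sqlite3 and a made-up schema mirroring the Frob/Quux example (so all names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE frobs  (id INTEGER PRIMARY KEY, foo TEXT);
    CREATE TABLE quuxes (id INTEGER PRIMARY KEY, frob_id INTEGER, val TEXT);
    INSERT INTO frobs  VALUES (1, 'expected'), (2, 'other');
    INSERT INTO quuxes VALUES (10, 1, 'a'), (11, 1, 'b'), (12, 2, 'c');
""")

# N+1 style: one extra query per matching frob -- this is the shape a
# lazy-loading ORM quietly generates behind property accesses.
frobs = conn.execute("SELECT id FROM frobs WHERE foo = 'expected'").fetchall()
n_plus_1 = [
    conn.execute("SELECT val FROM quuxes WHERE frob_id = ?", (fid,)).fetchall()
    for (fid,) in frobs
]

# Single-query style: push the condition and the join into the database,
# making the boundary between SQL and application code explicit.
joined = conn.execute("""
    SELECT q.val
    FROM frobs f JOIN quuxes q ON q.frob_id = f.id
    WHERE f.foo = 'expected'
""").fetchall()
```

Both produce the same rows, but the second makes exactly one round trip and keeps all the data access in one visible place.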

I'm not saying that scrapping the ORM will magically make the problems go away and I know people also write terribly intermingled application and database logic when using SQL, but at least the boundary is more explicit and the different sides of the boundary actually look different instead of just looking like application code.

My point isn't that there's a silver bullet, but that we can nudge and encourage people to write better code by how the languages/libraries/tools structure solutions.

I'm also not necessarily saying that we have to use SQL instead of an ORM, that's just one possible suggestion that I personally find works due to the mental separation. I'm sure you can design ORM's that make the boundaries more explicit, or design frameworks that encourage thinking about application boundaries more explicitly. Same as how I'm not actually suggesting to get rid of OOP, just... if most people's OOP is so-called "Bad OOP", then we need to think about how to improve OOP, because changing "most people" is just not going to happen.


Yeah, I understand what N+1 queries are and how many ORMs make them too easy.

For me, ORMs become a problem when they're an excuse to avoid learning SQL. If you understand SQL, you will probably understand why the example you give is a bad idea. If you don't, you won't. I'm speaking from the point of view of having written, at this point, thousands of lines of SQL at minimum, and having decided that it's not how I primarily want to access data in the applications I write. The ORM queries I write are written with rough knowledge of the SQL they generate, and I try to be very careful with query builders to avoid the exact case you bring up.

I think LINQ in C# does a pretty good job of bridging this gap, actually. It could be better, but it discourages looping and encourages the declarative style that efficient SQL requires.


I think this is the problem that tools can’t solve. If you want to interface with a database, you need to know how a database works, regardless of what tools you use. I like using ORMs, and I think I write good ORM code. But I also think that I know quite a bit about how databases work, and that I’d be incapable of writing good ORM code if I didn’t.

If you give a novice developer an OOP assignment, they’re likely to struggle with the learning curve, and likely to ultimately create something that has a lot of unnecessary complexity. If you give a novice developer MEAN assignment, they’ll create something that “works” a lot easier. But it’ll likely be full of bugs created by not understanding the underlying problems that the framework is simply not forcing you to address.

Which is what I think these “simple” frameworks do. Allow you to create working features without necessarily considering things like consistency and time complexity. I also think it’s why things like Mongo have been slowly adding more ACID consistency, and schema design features. Because people are coming to realize that those are problems that simply can’t go unaddressed. And why ORMs and things like GraphQL have become so popular, which to me look remarkably similar to the structured approach of OOP design patterns.


I read somewhere that the amount of programmers in the world doubles like every 5 years. That means that at any given moment, half of programmers have less than 5 years of experience.


I've been programming for 30+ years. I still have less than 5 years of experience in anything that's currently popular.


The biggest problem with intermediate level programmers is their love of hard and fast rules. It's like they have a backpack full of hammers ("best practices") and they're running around hammering everything in sight. For this I feel like it doesn't really matter how much experience you have in an individual piece of tech - the overall amount matters more.


> The biggest problem with intermediate level programmers is their love of hard and fast rules

I think that’s just a problem of dogmatism, which doesn’t necessarily go away with experience.


I'll second that. I have seen that by "developers" with 20 years of experience. Dogmatism is the real enemy.


I find it hilarious that you’d mention “20 years of experience”. I work on a team at the moment with one of the most dogmatic developers I’ve ever met, and when challenged his rationale for any decision is “20 years of experience”.

He was recently asked to fix a rather simple bug in a C# project, a language he doesn’t use very often, but he was certain this would be an easy task for him. The project used a small amount of unmanaged resources, and he just couldn’t figure out how the ‘using’ keyword worked. He spent a few days trying to get to the bottom of ‘using’ before demanding to be sent on a C# course, which everybody thought was a great idea because it would keep him busy for a while. Maybe he’ll have “21 years of experience” by the time he gets around to fixing this bug.


Yup. Also, unwillingness to learn, which, to me, is a crime.

Learning is difficult; maybe because it starts with admission of ignorance.

Most of my work, these days, is a ghastly chimera of techniques, from 30 years ago, grafted onto patterns from 30 minutes ago. Real Victor Frankenstein stuff.

It seems to work, though.


> Learning is difficult

But not insurmountable. In my own experience, all it takes is a small amount of consistent effort each day. I've learned a few programming languages that I use regularly like this, I learned sleight of hand card magic like this and most recently, I've learned to play guitar like this. This year, I hope to learn Spanish like this too. Many people don't want to put consistent regular effort in and are looking for shortcuts. That doesn't work.

> a ghastly chimera of techniques

Well, yeah, the right tool for the job, right? But it does work, so...


In order for a programmer to program in many patterns elegantly, one must first have an understanding of those patterns and how they are roughly reduced to machine code.

The skill of experiencing trumps the amount of experience.

Bad programmers need to hone their skill of experiencing in order to become good programmers. The most popular way is to provide them a safe environment to make mistakes and learn from them. But the most important thing is knowing that the skill of experiencing, and of getting out of one's biases, matters - like in this case, "if OOP is indeed bad, why was it invented in the first place?" - and the rabbit hole actually gets interesting from here.

At least that's my take after a few years in the industry.


And yet after 2-3 years they are appointed as "senior" in some of the most developed countries :)


My main point is that the things we use encourage a certain approach, and if most people are "doing it wrong [tm]", maybe it's not the people's fault - maybe the tools are encouraging wrong approaches.

I mean, sure, you can just say most developers aren't very good, aren't disciplined enough, don't think things through. Maybe it's true. But that doesn't solve the problem, it just shifts the blame. You can't fix the people, because it's a never-ending problem: move team or company and you're back at square one. Good luck trying to change all the university courses to try and teach better practices.

So if you can't fix the people, the only thing left is to take a long hard look at the tools and see if we can't improve them in a way that encourages better code.


It’s really hard to make a really good ORM.

Apart from Rails’s ActiveRecord, there is probably none.

Even though AR also has some issues.


Diesel (rust)

Also, it's - maybe notably - by a (former?) maintainer of ActiveRecord. I haven't really used RoR, but I think the design is probably quite different: Diesel is very much just SQL with a veneer of Rust, almost like an FFI wrapper.

Why should I have to remember to use '.values' instead of 'GROUP BY', or '.filter'/'.exclude' instead of 'WHERE'? It's not helpful. I frequently have a clearer idea of the SQL I want (and I'm certainly no DB expert), and I have to make it at least twice as long, install a plugin, or in some cases just can't mangle it into Django at all. For what?


Diesel is nowhere near being able to fully map SQL to Rust. It's OK for relatively simple queries that are typical for OLTP applications. Anything beyond that and you're going to run into trouble.

If you want to do something more OLAP-ish or insert data in bulk, you have a huge problem. SQLAlchemy is far better, even though SQLAlchemy also has its own share of limitations (dealing with jsonb_to_recordset and functions like it for example).


sqlalchemy in Python. Having used both ActiveRecord and sqlalchemy, I'll take sqlalchemy all day everyday.


SQLAlchemy is really special - coming from Django I wasn't convinced initially, but I think the reason it works so well is that it doesn't try to hide any magic. It's simply a nice way to represent objects in your database; you still have to explicitly ask it to commit / run queries.


I've messed around with activerecord for sqlalchemy :)

https://twitter.com/arundsharma/status/1338596939600781313


My main issue with ORM's is that they encourage you to mix application code and queries (by making it easy and by making the code for both look the same), blurring the boundary between your application and the database.

This is a problem, because there is a real boundary, typically including a network round trip. I've been on way too many projects where ORM-using code would pull some stuff from the database, do some application logic to filter or process the results, then loop and do another query for each result. Instead of doing it in the database query itself using joins (and possibly being able to use indexes for the filtering).
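The pattern is easy to sketch with plain sqlite3 and a made-up schema (the tables and names are purely illustrative, not from any real project):

```python
import sqlite3

# Hypothetical schema, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE book (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO author VALUES (1, 'Ann'), (2, 'Bo');
    INSERT INTO book VALUES (1, 1, 'A1'), (2, 1, 'A2'), (3, 2, 'B1');
""")

def titles_n_plus_one():
    # The shape ORM code often ends up with: one query for the parents,
    # then one more query per parent inside the loop (1 + N round trips).
    out = {}
    for author_id, name in conn.execute("SELECT id, name FROM author"):
        out[name] = [t for (t,) in conn.execute(
            "SELECT title FROM book WHERE author_id = ? ORDER BY id",
            (author_id,))]
    return out

def titles_single_query():
    # The same result in one round trip, letting the database do the join.
    out = {}
    for name, title in conn.execute(
            "SELECT a.name, b.title FROM author a "
            "JOIN book b ON b.author_id = a.id ORDER BY b.id"):
        out.setdefault(name, []).append(title)
    return out

assert titles_n_plus_one() == titles_single_query()
```

Both return the same data; the difference only shows up in query count and latency once N gets large and the database is across a network.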

Even when people are very disciplined about this, I find that once you get to non-trivial queries, they become a lot harder to read in ORM code. I tend to have to think in terms of SQL, then translate between that and the ORM code on the fly. It's not so easy.

Sure, you could say all the teams I've ever worked on that did stuff like this are just bad at using ORMs, but after a few experiences like this on different teams, in different companies, with different developers, it's time to stop blaming the people and maybe take another look at the tools.


I really feel your pain. This is not your fault, nor the fault of the ORM concept.

There are just no good ORMs out there.

Except ActiveRecord (and maybe mongoose for mongo). And even these 2 are not the best choice for all use cases.


Most ORMs fail because they map one table to one class.


It is really hard - DBIx::Class (Perl) is an example of one of the better ones, in my opinion.


Hibernate and Entity Framework?


Hibernate is OK technologically, but it is a usability nightmare. Its reliance on reflection also makes me want to stay away from it.


Do you care about performance? In my experience, N+1 queries are impossible to avoid with hibernate (or any JPA framework, generally). But maybe I’m just using it wrong.


See https://vladmihalcea.com/

He has a lot of info on high performance Hibernate and particularly the n + 1 issue.

It should be mostly avoidable.


Thanks for sharing this.

For anyone else interested, here’s one post he has on the problem: https://vladmihalcea.com/n-plus-1-query-problem/


Thanks for digging out this direct link.

I haven't been there for a while but I'm fairly sure there should be a few more.


I haven't found that to be the case. In particular, I think expecting the ORM to always do the best thing no matter how you use it is folly; ORMs are a tool that you have to take the time to understand in much of their full complexity.

Some would say this makes them a bad abstraction, but to me, data mapping is going to have to happen somewhere, and I would rather be building on someone else's work to write the data mapping for my applications than do it from scratch. You have to know when the ORM is the right tool to use, and when to drop into plain SQL for your querying, because they do not eliminate the need to write SQL, just reduce it significantly.


We are currently considering moving away from Entity Framework Core. Simple things work fine, but it generates ridiculous queries if stuff gets more complex.


Many ORMs are going to have trouble at some point when you start making more complex queries. I believe using an ORM well is largely in understanding the balance between when its tradeoffs are acceptable and when they are not. Per the other reply, I would probably start with EF Core in new applications, but would not hesitate to add Dapper or just plain SQL with ADO if I saw the need.


Is there any reason why it needs to be either/or, or are you suffering from OCD light like so many others of us? ;-)


> My problem with these arguments is that its meaningless, if this "bad OO" (or "bad <insert paradigm>") is what most developers write.

Only if that's more true than “bad <foo> is what most developers write in paradigm <foo>” where <foo> is the presented alternative paradigm, otherwise we’re comparing the platonic ideal of <foo> against real world OO.

Sturgeon’s Law may not be quantitatively precise, but it addresses a real qualitative issue that must be considered.


> This argument seems to come up for every criticism of OO.

The same is true of functional and procedural programming, too.

The problem, as I see it, is that the profession has a history of treating programming paradigms as just being a bag of features. They're not just that, they're also sets of guiding principles. And, while there's something to be said for knowing when to judiciously break rules, blithely failing to follow a paradigm's principles is indeed doing it badly.

This is a particular problem for OO and procedural programming. At least as of this current renaissance that FP is enjoying, advocates seem to be doing a much better job of presenting functional programming as being an actual software design paradigm.

It's also the case that, frankly, a lot of influential thought leadership in object-oriented programming popularized some awful ideas whose lack of merit is only beginning to be widely recognized. If null is a billion dollar mistake, then the Java Bean, for example, is at least a $500M one.


> The same is true of functional and procedural programming, too.

There are some specific criticisms that are levied against FP for which proponents respond “that’s not true FP”. For example, the criticism “FP is too preoccupied with monads” might engender such a response; however, this response is appropriate because it’s not one of the defining features of FP. Yet for FP, there are still pretty widely-agreed upon features or conventions (functional composition, strong preference for immutability, etc).

For OOP, I can’t think of any features or styles that are widely agreed upon. If you mention inheritance, half of OOP proponents will argue that inheritance isn’t a defining characteristic because Kay didn’t mention it in his definition. If you mention message-passing, many others will object. If you mention encapsulation, then you’ve accidentally included virtually all mainstream paradigms.

If OOP is a distinct thing, and it isn’t about inheritance or message passing or encapsulation or gratuitous references between objects (the gorilla/banana/jungle problem), then what exactly is it?


The term may be muddied but it’s not meaningless. Alan Kay’s vision of OO (“it’s about message passing”) never seemed to really take off in the world. When people talk about OO they usually mean classes, objects and methods, used for almost everything as the primary unit of composition. And classes with public methods encapsulating private fields. And often with a dash of inheritance. C++ and Java seem like the flag bearers for this style of programming. It’s popular in C# too.

When people criticise OO, they’re usually criticising this “Effective Java” style, expressed in whatever language. This style is deserving of some criticism - it’s usually more verbose and harder to debug than pure functional code. And it’s usually less performant than the data-oriented / data-flow programming style you see in most modern game engines.


Agreed. The term is muddled and diverging.

OOP the paradigm (messaging and memory encapsulation) is pretty different from OOP the style (access modifier, class+method+properties, inheritance).

As a strong proponent of FP the paradigm, I insist OOP the paradigm is great to study and apply in system software where there are multiple agencies. For example, a browser contains multiple "agencies" - network-facing workers, storage-facing workers, human-facing workers, etc. Each agency runs at a different pace to keep its "client" (network API, storage API, human) happy, so messaging and buffering between those agencies are inevitable.

FP also suffers the same issue.

FP the paradigm: pure functions, expression-based code, recursion, first-class functions, parsing > validation (universal, applicable in almost any programming language).

FP the style: monads, functional languages, tail-call optimization.

Being overly critical of style is not productive in the long term. Someday one will have to leave the tool for a new one.

Learning the universal part of paradigms is useful because it is not dependent to tools.


I couldn’t agree more wholeheartedly with this comment, and I love FP. Unfortunately, OO has come to mean this Java style.


So, that's still focusing on features. If we're talking about it from a paradigmatic perspective, instead, then I would argue that the prototypical idea behind object-oriented design is indeed encapsulation.

It's true that this is not a distinguishing feature. I don't see that as problematic, it's just how things work. Object-oriented programming, like any paradigm, evolved over time, as people built languages with new ideas, and then spent time figuring out how best to make use of them. And there's no rule saying that nobody is allowed to subsequently adopt and adapt a useful idea in other domains. Fuzzy boundaries are part of the natural order, and that is a good thing.

But, if you go back and read the seminal papers, it's clear that the common thread is encapsulation, every bit as much as referential transparency is the core idea that unites all the seminal work in functional programming. The idea was that programs would be decomposed into modules that were empowered to make their own decisions about what code to execute in response to some instruction. And that this was supposed to liberate the programmer from needing to micro-manage a bunch of state manipulation. Not by eliminating state, as is the ideal in FP, but by delegating the responsibility to manage it.

This is why the concept of Beans bothers me. A Bean is a stateful object that throws its state into your face, and forces you to directly manage it. That sort of approach directly contradicts what OOP was originally supposed to be trying to accomplish. For my part, I am convinced that the bulk of the backlash against OOP is really a response to the consequences of that sort of approach having become orthodox in OOP. Which is deeply ironic, because this is an approach that enshrines the very kinds of practices that the paradigm originally sought to eliminate.


The problem with encapsulation in OOP, is that the (mutable) state is not truly encapsulated, it is just hidden. State encapsulation would be a pure function that uses mutation internally. OOP encourages mutable state and in general it is non-trivial to compose such effectful code. The state space of an OOP application can become a combinatorial explosion. This might be what you want if you are modelling something with emergent complexity, e.g. a simulation or game, but doesn't sound good for a line-of-business app.
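That distinction - mutation as a hidden implementation detail behind a pure interface - can be shown in a tiny sketch:

```python
def sort_copy(items):
    # Internally this mutates, but only a private copy, so from the
    # caller's point of view the function is pure: same input, same
    # output, no observable state change anywhere.
    out = list(items)
    out.sort()  # in-place mutation, safely encapsulated
    return out

data = [3, 1, 2]
assert sort_copy(data) == [1, 2, 3]
assert data == [3, 1, 2]  # the caller's list is untouched
```

The mutation never escapes the function, which is the "truly encapsulated" state the comment above is asking for.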

As an anecdote, I once saw a stack overflow reply from a distinguished OOP engineer advocating for modelling a bank account with an object containing a mutable balance field! That is certainly not modelling the domain (an immutable ledger). OOP might fit some problems well, but the cult of presenting it as the one-true-way (made concrete in 90's Java) is deserving of a backlash IMHO.


> But, if you go back and read the seminal papers, it's clear that the common thread is encapsulation, every bit as much as referential transparency is the core idea that unites all the seminal work in functional programming. The idea was that programs would be decomposed into modules that were empowered to make their own decisions about what code to execute in response to some instruction. And that this was supposed to liberate the programmer from needing to micro-manage a bunch of state manipulation. Not by eliminating state, as is the ideal in FP, but by delegating the responsibility to manage it.

I agree with that assessment, but I also think the problems with OO are inherent to that paradigm, and it's time to acknowledge that the paradigm is bad and has failed.


> This is why the concept of Beans bothers me. A Bean is a stateful object that throws its state into your face, and forces you to directly manage it.

I disagree.

Beans are Java's version of the ideas expressed in Visual Basic and, later, COM -- ideas often called something like "software componentry". They are objects that can be instantiated and configured without having to write custom code to invoke their methods, because the instantiation and configuration follow accepted standards, as well as the means by which they register to subscribe to and publish events. This lets them be instantiated and manipulated by other tools in the pipeline, such as a GUI builder tool or an automatic configurator like the one in Spring Framework.


If we go by Alan Kay's definition, then we can argue Elixir/Erlang is an OO language [0]. It's not the first language one would think of when talking about OOP, is it?

To me, OOP is all about implementation details.

[0] https://elixirforum.com/t/the-oop-concept-according-to-erlan...


See: https://www.quora.com/What-does-Alan-Kay-think-about-Joe-Arm...

In which Alan Kay himself answers the question, saying, "Erlang is much closer to the original ideas I had about “objects” and how to use them."


Whether total functional purity is desirable or not is a huge divide in the FP community, as large as anything in the OO world.


OOP is about code organization. Type dependent namespaces (also known as classes) are pretty much the core of OOP.


So is it dot-method notation (`foo.Bar()`)? If so, does that make Go and Rust OOP as well? That's just syntax, though; what about the C convention of `ClassName_method_name(ClassName obj)`? That's still namespacing by type, even if the language doesn't have a first-class notion of a type prefix.


I don't see how Java Bean is a costly mistake. It's just a convention. It is annoying to follow but it doesn't cause damage directly.


I've found that objects work best when they are used in moderation. Most of my code is plain old functions, but occasionally I find that using an object for some things is just much cleaner. The rest of the time, doing away with all the baggage associated with objects and using structs or other, simpler structures is just easier to wrap my head around.

Java IMO is the classic case of going way too far overboard with being object oriented. Much worse than Python.


I feel like OO is okay if objects implement good, well-thought-out interfaces. The problem is that designing good, well-thought-out interfaces is hard. Also, the rise of distributed computing causes problems for OO. Passing data between systems is pretty easy. Passing functions with that data, not so much.


> I feel like OO is okay if objects implement good, well-thought-out interfaces.

I find that much of what I do is taking something from structure A and manipulating structure B. If you are a slave to OOP, you end up bolting that method to either one or the other object when really it affects both and belongs in a separate place entirely.

The other big issue I have with OOP is that frequently I don't want or care about reference passing, I just want value passing. The place where I do usually start using objects is when passing references is useful.

It's particularly useful if you are building a thing which requires multiple steps and you need that state to persist between steps. I've seen it done well with functional programming, but for me it's just easier to build something out with an object that holds persistent state.


My experience with distributed computing is that the biggest issue isn't passing functions with data. Instead the biggest issue is sharing state between systems. This is part of why immutability is very nice.


But that's exactly the point!

Object-oriented means a few different things: anything from "my program contains the 'class' keyword" to "everything must be a class which obeys SOLID".

If you say general statement, like "OO in Python is mostly pointless", you are likely wrong. There are ways to use 'class' keyword to make programs better (shorter, more readable) and there are libraries out there which use OO this way.

If you want to write something which can be proven true, you want to have something much more specific. Here are some possible titles one can have meaningful discussion about:

"SOLID everywhere is not worth it"

"Don't use objects where a function will do"

"single responsibility is the key to usable OO"

and so on


You’re missing the selection bias at work here. The issue is that good OO doesn’t get you upvoted in programmer related forums like HN because it’s not terribly popular. Instead what does get upvoted is arguments against OO, and these often contain a lot of bad OO code as counter-examples.


And yet a lot of this so-called "bad OO" is representative of most of the OO I see written. If something that is bad is the norm, that says something about the paradigm itself, as well.


Bad code is the norm (regardless of paradigm), because it's a profitable industry that rewards entry but not necessarily skill. It's not really a failing of the paradigm that it's applied wrong when that is the norm in all aspects of the practice.


That is not my experience. I see a lot of boring, working, OO code. Nothing that I’d excitedly put on the front of HN as an example of anything, but code that’s reasonably easy to read, and just works day in and day out.


I think you took me to be arguing for a stronger claim than I actually was. My preferred paradigm is also roughly “idiomatic go or rust.” But the OP isn’t arguing for idiomatic-go-or-rust, as far as I can tell they’re arguing against ever using methods or virtual dispatch (which are core features of both of those languages)! My claim is that the OP’s diagnosis of the problem—“virtual dispatch causes your code to be confusing, get rid of it”—is incorrect.


The style OP is advocating aligns well with Rust, at least. Rust still tends to use method syntax, but it is very strict about mutability, and virtual dispatch is rarely used, so you tend to write your programs with a similar flavor to what the OP is arguing for.


This is an industry where the waterfall lifecycle was presented as being a bad methodology, which was then promptly adopted by almost every firm in existence.

Then Agile & SCRUM arrived and now people complain about giving status on what they are working on now, what issues they have and what they intend to work on today. You see posts here bitching about it.

I still get software from engineering that appears to lack ANY sense of what we are supposed to use it for. The developer gets tons of back-slapping from engineering management, and it leaves us STILL doing grunt work with the poorly designed tooling, while the developer is now writing Perl scripts to deal with issues in his delivered work.


It's not like the criticism in this case is particularly esoteric: Literally the first explanation is that the proposed classes don't encapsulate single concepts. That's the theory of the first course you'd get about this in school, or if you're self taught something you'd see within the first 30 minutes of casual googling about the subject.

I think it's fair to question what is going on in the article if the mistakes made are of such a basic nature.


You can create an OO mess in Go and Rust that is equivalent to this too.

Dependency injection is a nice name for factory functions + dependency list + topological sort, but people still use it horribly, injecting dependencies everywhere, not writing functional and data-oriented composable code.
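That "factory functions + dependency list + topological sort" framing can be sketched in a few lines using Python's stdlib `graphlib` (the component names and factories here are made up for illustration):

```python
from graphlib import TopologicalSorter  # stdlib since Python 3.9

# Hypothetical components: each factory builds one component from its deps.
factories = {
    "config": lambda deps: {"dsn": "sqlite://"},
    "db":     lambda deps: "DB(%s)" % deps["config"]["dsn"],
    "app":    lambda deps: "App(%s)" % deps["db"],
}
# Dependency list: which components each one needs already built.
requires = {"config": set(), "db": {"config"}, "app": {"db"}}

# "Injection" is just: build in topological order, handing each factory
# the already-built things it declared a dependency on.
built = {}
for name in TopologicalSorter(requires).static_order():
    built[name] = factories[name]({d: built[d] for d in requires[name]})

assert built["app"] == "App(DB(sqlite://))"
```

A DI container adds lifecycle management and configuration on top, but the core mechanism is no more than this.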

It's nice to learn about patterns but it's wrong to think about patterns as a solution.


>> ...idiomatic Go or Rust

> You can create an OO mess in Go and Rust that is equivalent to this too.

To be quite clear, I qualified with idiomatic. I don't think you can create an OO mess with idiomatic Go or Rust, but then again, the whole point of this thread is that there's no clear consensus on what OO actually means, so what does "OO mess" mean.


Idiomatic says nothing about code design. Code design is far above what idiomatic means. You can still have dependencies where you shouldn't have them. Your services can still be designed without composability in mind.


Many software problems boil down to badly done X, where X is some paradigm, technology or technique. Then someone concludes, like this article, that X is pointless / to be avoided, etc. It would instead be better to write an article where you steelman X, then compare it to Y and look at the comparative advantages. Badly done X or Y should always be highlighted, but not really used as a reason not to do X or Y. If you can steelman X and show that even a best effort has far too many disadvantages, with no way forward to overcome them compared to other approaches, then sure, call it out. What I often see is that new things have less history of people doing them badly than old things, which have plenty of examples of people doing them badly, and people mistake that for the old thing being bad.


Right, but it actually is just horrible code with zero separation of concerns. Having a client class is a great example of good OO. The library you're using having a Session object is basically essential in Python. Any other paradigm would be a mess. Things storing state about themselves in a totally unambiguous way is brilliant. Most complaints about OO just focus on bad use cases. Calling that out is not a No True Scotsman.

If you saw someone only driving in reverse, complaining about car UX, would you not state the obvious?


> This argument seems to come up for every criticism of OO. The criticism is invalid because true OO would never do that.

This assertion is disingenuous. The whole point is that the reason why bad code is bad is not because it's OO, it's because it's bad code.

You do not fix bad code by switching programming paradigm, especially if it's to move back to procedural programming and wheel reinvention due to irrational class-phobia.


> This assertion is disingenuous. The whole point is that the reason why bad code is bad is not because it's OO, it's because it's bad code.

This is pretty false on its face. Let's say that some paradigm insists that inheritance should be used for every problem. Almost all programmers agree that this is bad code, thus code written in this paradigm is bad because of the paradigm.

> You do not fix bad code by switching programming paradigm, specially if it's to move back to procedural programming and wheel reinvention due to irrational class-phobia.

Agreed. Instead you should start with your current paradigm and remove the problematic aspects until you're left with something that works reasonably well. I posit that when you start with OO and drop its problematic aspects like inheritance or Armstrong's observation of banana-gorilla-jungle architecture, you end up with something that looks pretty functional or data-oriented, a la idiomatic Go or Rust. If your definition of "OO" already excludes these things, then it probably looks something like Go or Rust or perhaps even Erlang.

The issue I'm raising with "OO" is that there is no consensus about what OO is; rather every defense of OO is about what it isn't (and then there's the ever predictable straw man, "But you can write bad code in any paradigm!").


> Let's say that some paradigm insists that inheritance should be used for every problem.

There is no such paradigm, including and especially OO.

In fact, your example clearly illustrates the perils of blaming the tools for problems caused by incompetence.

I mean, how many decades have passed since "composition over inheritance" has been taught like a mantra in every OO programming 101 course?

And still the best OO example you could manage to come up with is a blatant error that would lead you to fail an OO programming 101 course?

I'll restate the obvious: the reason why bad code is bad is not because it's OO, it's because it's bad code.

And here you are, trying to pass blatantly bad code as somehow an OO problem?


If you want to see real OO, look here:

- https://pharo.org/

- https://gtoolkit.com/

If you don't want to, it's ok, just stop bashing it without first learning it (you learned FP too, it took some time too).


I disagree. Good OO is about encapsulation of concerns. Make a Client object and push data to it; let the class worry about the networking and sockets. Make another object to deal with the database.

For sufficiently complex systems I don't see how functional or other paradigms can manage without holding a huge amount of global state.


A functional system doesn't hold 'global' state. It holds the state in arguments given to functions. If you want a client, you still have a bit of client 'data', and you pass that around your functions.

If you wanna do stuff to the client, you call functions with the client as an argument. If you want polymorphic clients, you define a client typeclass / trait / interface called Client, and then your functions take a generic Client bit of data.

In some sense, this approach is a lot like replacing foo.bar() with bar(foo). Which doesn't do much to change your program. The interesting difference in FP is how you 'change the state' of your client.

In FP, if you want your client to e.g. count its connections you would do something like `nextClient = connect(oldClient)` instead of the OOP `client.connect()`. This means that you are a lot more explicit about your state changes. This has a lot of advantages that can be hard to wrap your head around. It also comes with some disadvantages.

As a result though, all of your state is carried very 'locally' in your functions scopes.
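In Python terms, the `nextClient = connect(oldClient)` style might look like this (`Client` and `connect` are made-up names for illustration):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)  # frozen: attempts to mutate raise an error
class Client:
    host: str
    connections: int = 0

def connect(client):
    # Return a new value instead of mutating the old one.
    return replace(client, connections=client.connections + 1)

old_client = Client(host="example.com")
new_client = connect(old_client)
assert old_client.connections == 0  # the original is untouched
assert new_client.connections == 1
```

Every state change is explicit in the data flow: whoever holds `new_client` knows exactly which function produced it.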


How do you manage to not have global objects?

FP and imperative handle data without huge amounts of global state the same way. By structuring your code properly.

Per your example, your Client instance still has to be passed around everywhere it's needed, and you call client.push_data() on it; in FP or imperative approaches, you would pass around a Client struct or tuple or similar, and call push_data(client).

OO just bundles state and functions together, as fields and methods on an object. Which has its pros, and its cons.
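A side-by-side sketch of the two spellings (all names here are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class OOClient:
    buffer: list = field(default_factory=list)

    def push_data(self, item):  # state and behavior bundled together
        self.buffer.append(item)

@dataclass
class ClientState:              # plain record: state only
    buffer: list = field(default_factory=list)

def push_data(state, item):     # free function operating on the record
    state.buffer.append(item)

oo = OOClient()
oo.push_data("hello")           # client.push_data(...) spelling

st = ClientState()
push_data(st, "hello")          # push_data(client, ...) spelling

assert oo.buffer == st.buffer == ["hello"]
```

The data flow is identical either way; OO just decides that the function lives on the object.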


> Per your example, your Client instance still has to be passed around everywhere it's needed

Yes, because our program probably needs more than one Client.


Not sure if you're trying to make a point here?


I mean in Java or C# there are "global" objects, being the Main class, I suppose. There's always a top-level construct.

But my Sword class doesn't hold a reference to a NetState object.


That was a rhetorical statement. The same reason you don't need global objects in an OO language is the reason you don't need global data structures in an imperative or functional one.

Your Sword class doesn't hold a reference to a NetState object, but then, my Sword struct doesn't need a reference to a NetState struct, either.

If I were to climb the reference hierarchies back to my 'main' function, I would find a common ancestor, just like in OO if I were to climb my reference hierarchies, I would get back to the class with my 'main' method. But that doesn't imply global state; it's all scoped down in the code where I defined and use it.


I haven't the faintest clue what point you're trying to make but I'm sure it's very clever.


The point is the graph of data dependencies in an FP system doesn't necessarily look any different than in an OO one.


Closures and internal state of objects are basically the same thing.


I don't usually deal with stateful closures. In OO we often have file objects, and if you call Close() on said object, subsequent Read() invocations fail because the encapsulated state has changed. What's the closure analog for this?


You've never worked with a generator before? That's a stateful function. For throwing after a file handle is closed, something like -

  // assuming a hypothetical `io` module with isOpen/hasNext/readNext/close
  const read = ((fileHandle) => {
    return () => {
      if (io.isOpen(fileHandle) && io.hasNext(fileHandle)) {
        return io.readNext(fileHandle);
      } else if (io.isOpen(fileHandle)) {
        return undefined;
      } else {
        throw new Error("File is closed");
      }
    };
  })(myFile);

  read();
  read();
  io.close(myFile);
  read(); // throws
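A generator is the built-in version of the same idea: a function whose local state survives between calls.

```javascript
// Counting generator: `n` lives on between next() calls.
function* counter() {
  let n = 0;
  while (true) yield n++;
}

const c = counter();
c.next().value; // 0
c.next().value; // 1
```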


Separating concerns the way you describe can be accomplished leveraging the polymorphism mechanisms in an OO language (objects and subclasses) or just as easily in a functional style (multimethods, protocols, interfaces).

Show me an example of something you think is decoupled in OO style which could not be similarly decoupled in FP, and I'll give you the FP version which does the same.

Not saying one is better than the other, but you can definitely keep your code decoupled in both styles.


In an OO program all state is global, because you can't understand any given object's behaviour without knowing what its state is and what the behaviours of all the objects it communicates with are, and you can't understand those without knowing their states either.

State can and should be hierarchical - and you can do that easily in FP - but the idea that OO lets you make it somehow non-global is a myth.


>In an OO program all state is global, because you can't understand any given object's behaviour without knowing what its state is and what the behaviours of all the objects it communicates with are, and you can't understand those without knowing their states either.

By that logic all state in any program is global.

But if I'm writing an MMO server my GoldCoin class doesn't need to know about the client's connection state.

Have you actually written or worked with OO code before? Your comment reads like you have not.


> By that logic all state in any program is global.

All state in any program is global in the sense of being reachable from top-level. You can, and should, make your state hierarchical - reachable from top-level should not mean reachable from anywhere, you can have (non-toplevel) parts of your program that only access parts of your state. Unfortunately OO is uniquely bad at this, because objects are encouraged to have hidden coupling.

> But if I'm writing an MMO server my GoldCoin class doesn't need to know about the client's connection state.

But you have no way of knowing or enforcing that. "You wanted a banana but what you got was a gorilla holding the banana and the entire jungle" - your GoldCoin might contain a ("encapsulated" i.e. hidden) reference to another class that has a reference to another class and so on, and so eventually it does depend on the client's connection state.

> Have you actually written or worked with OO code before?

Yes, for many years, which is why I've become an advocate of FP style instead.


> All state in any program is global in the sense of being reachable from top-level.

Well, think about a random number generator. It has some internal state, which gets set by some action taken (perhaps indirectly) by the top level. And that state is "reachable" by getting the next random number, but that random number may not be a direct representation of any part of the generator's state. Also, after initialization, the generator's state should not be alterable by the rest of the program.

So to me, that's not really "reachable". The entire point of encapsulating that state in the random number generator is to make it not reachable.


That's a perfect example; in my experience that kind of RNG state is global (or at least, it has all the problems of traditional global state). Potentially any object might behave in a strange way, because it contains a reference to that RNG (hidden from you): you might find a sequence of calls that usually, but not always, reproduces a bug in test, and then it turns out that it depends on whether other tests running in parallel are affecting the RNG state. Essentially any test you write that's bigger than a single-class mock test has to be aware of the RNG and stub it out, even if you're testing something that on the face of it has nothing to do with RNG.

In a functional program you would make the RNG state an explicit value, and pass it where it's used (but not where it isn't - so there might be large program regions that don't have access to it). It'll be an explicit part of the top-level state of your program. I'd argue that that's not actually any more global - or at least, it causes fewer problems - than the OO approach.
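A minimal sketch of that explicit-state style (a hand-rolled xorshift32, not any particular library):

```javascript
// Pure PRNG: takes a state, returns [randomFloat, nextState].
// No hidden state anywhere; callers thread the state value explicitly.
const nextRandom = (state) => {
  let x = state >>> 0;
  x ^= x << 13; x >>>= 0;
  x ^= x >>> 17;
  x ^= x << 5;  x >>>= 0;
  return [x / 2 ** 32, x]; // float in [0, 1), plus the new state
};

// Same seed, same sequence: tests are reproducible by construction.
const [r1, s1] = nextRandom(42);
const [r2, s2] = nextRandom(s1);
```

Any code that isn't handed a state value provably can't touch the RNG, which is exactly the property the hidden-reference OO version lacks.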


It's also a cherry-picked example (networking code) of a problem that can be expressed equally nicely with OO or with functions. Now refactor a GUI or a game without OO and you'll find the OO version easier to understand, or cleaner.


Fwiw, many games uses a different representation of state and behaviour than standard OO, called entity-component-system (ECS): https://en.wikipedia.org/wiki/Entity_component_system

ECS doesn't represent game objects with standard OO such as classes in C++/Java/C#. You can argue about whether it's still OO in principle, but it's not the kind of OO an enterprise developer would recognise.
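A toy version of the idea (hand-rolled for illustration, not any real engine's API):

```javascript
// Entities are ids; components are plain data keyed by entity id;
// systems are functions over every entity with the right components.
const world = {
  position: { 1: { x: 0, y: 0 }, 2: { x: 5, y: 5 } },
  velocity: { 1: { dx: 1, dy: 0 } }, // entity 2 is static: no velocity
};

const movementSystem = (w, dt) => {
  for (const id of Object.keys(w.velocity)) {
    const p = w.position[id];
    const v = w.velocity[id];
    p.x += v.dx * dt;
    p.y += v.dy * dt;
  }
};

movementSystem(world, 1); // moves entity 1; entity 2 is untouched
```

Behaviour lives in the systems rather than on the game objects, so adding a behaviour is a new function instead of a change to a class hierarchy.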


Is this true for games? I read an article[1] some years ago that pointed out the OO design (with C++ in this case) was bad for performance due to the data not being arranged in a particularly cache-friendly way.

[1] http://harmful.cat-v.org/software/OO_programming/_pdf/Pitfal...


Depends on the problem domain. See the posts on Wizards and warriors:

https://ericlippert.com/2015/04/27/wizards-and-warriors-part...

For card games, OOP is very well-suited (so to speak).


"Easier to understand" and "performant" are different axes.


There is a fantastic GDC video about Blizzard's use of ECS in Overwatch and how it made it far easier to understand the system due to all of the interacting subsystems.

https://www.youtube.com/watch?v=W3aieHjyNvw


It's funny because a few years ago I read about how the developers of a game from the same studio -- specifically StarCraft -- ran into the exact same problems I did when trying to write a game in the then-common "bag of subclassed game objects framework" in the 90s. To this day I thank the gods that I chose ECS for my latest game because it has made changes, bugfixes, and extensions of functionality so much easier. To say nothing of more data-driven.


Somehow I believe that when you think in terms of performance, your brain grasps things faster, because it's not about potential benefits or stylistic arguments. It's raw perf, and everybody seeks that (not at the expense of tech debt, of course).


Well, yes. When in a domain that is heavily noun driven it makes sense to reach for OO. It's just that most domains aren't (hence Yegge's famous article re: nouns and verbs).


Isn't the point here whether nouns should be able to perform verbs or not?

In a "noun driven domain" you can still disallow the noun to perform the verb.


More do verbs -have- to be bundled with nouns (or live second class citizen lives), as that's more what OO requires. Even in functional languages, you -can- have verbs living with nouns (your struct can have a field that is of function type); it just isn't required (or recommended for most use cases).

It's not a question of can they, or should we prevent them, it's 'should it be required?'

It may make sense to require all verbs to be coupled to a noun when your problem domain is mostly modeling nouns, and verbs only describe how those models change and interact with each other. But when the problem domain is mostly process, with nouns being ancillary, it is unwieldy to talk that way. It's why you have so many "ThingDoer" and "Utils" and the like in most OO codebases.


Exactly, my take is "use the right tool for the right job"


> Now refactor a gui ... without OO

made me chuckle — that’s my day job! Check out RamdaJS btw


Hi, what I'd really like is to test my conjecture - would you (or anyone reading) be able to email me a (< 500 line, ideally not super-deep-library-code) bit of code (constructed or otherwise) that you think is "so innately OO friendly that the conjecture won't hold".

Given some examples, I'll do a follow-up blog post.


Fwiw I've felt this way for the 10 years I've used Python. IMO all of Python's strengths lie outside its OO functionality.


Your conjecture may only hold for tiny (< 500 lines of code) problems.

But when you deal with big problems -- millions of lines of code, written by hundreds of developers over a period of years -- you'll definitely see the benefit of OO.

Of course, most projects are somewhere in the middle; but you shouldn't dismiss OO just because you can't see the benefit in tiny projects.


I'm currently on a project with 100s of thousands of lines of code (admittedly not millions) - working in the bag-of-functions style - and I've yet to miss any OO features.

Surely it's possible to construct a smaller example? It's not like the millions of lines codebase isn't de-composable?


For what it's worth, Emacs is an example of what you describe and it's extremely functional (and carries a huge amount of global state). Millions of lines, an insane number of contributors, development period of decades with a very decentralised management structure.

Emacs has a lot of problems but once you learn the base language, it's quite easy to interface with. I'd argue it's because of the limitations on structural complexity - you don't need to wrap your head around non-locally-defined structure in order to understand any one line. The parts that attempt to emulate OO tend to be a lot harder to deal with as an outsider.


> you end up reinventing virtual dispatch

But you can have virtual dispatch without OO, see e.g. Clojure.
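For instance, a table of functions keyed by a type tag gives you open dispatch without classes (hand-rolled here in JavaScript; Clojure multimethods generalize this to arbitrary dispatch functions):

```javascript
// Dispatch on shape.kind via a lookup table instead of virtual methods.
const areaImpls = {
  circle: (s) => Math.PI * s.r ** 2,
  rect:   (s) => s.w * s.h,
};
const area = (shape) => areaImpls[shape.kind](shape);

area({ kind: "rect", w: 3, h: 4 }); // 12
```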


Thanks for the recommendation! Sounds like I may have just been reading the wrong books :)


This is a shoddy analysis.

1. There's a statistical gadget specifically for doing this—a "scoring rule" [1] which is a principled way to compare different probabilistic predictions. A bunch of scatterplots of random quantities against each other are... not that.

By comparing only binary win/loss predictions instead of probabilities, like in the first chart, you throw away almost all information contained in the probabilistic estimates—if Democrats win a state, there's no bonus for predicting (say) 95% Dem instead of 55% Dem.

It's plausible that 538 would actually win under a proper scoring rule, because betting markets were underconfident (relative to 538) in deep dem/rep states (predicting e.g. <95% Dem win in VT, vs 538's >99%). [2]

2. The calibration analysis assumes that different state win/loss rates are independent, but that's really untrue: 538's predictions were specifically not independent because they assumed polling errors were correlated between states.

3. Many of the other scatterplots look outlier-driven and don't include r^2 or p-values. With so few datapoints, it's unclear if they are meaningful at all.

[1]: https://en.wikipedia.org/wiki/Scoring_rule

[2]: Maybe we should cut prediction markets some slack here because liquidity constraints make them inaccurate for small probabilities. If that's the article's position, though, they should address this instead of just... not using a scoring rule.
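For concreteness, the simplest proper scoring rule is the Brier score: the mean squared error between the forecast probability and the 0/1 outcome, lower being better (a sketch of the general idea, not the exact methodology any particular analysis used):

```javascript
// Brier score: mean of (forecast - outcome)^2 over all predictions.
const brier = (forecasts, outcomes) =>
  forecasts.reduce((sum, p, i) => sum + (p - outcomes[i]) ** 2, 0) /
  forecasts.length;

// Two forecasters on two states Dems actually won:
brier([0.95, 0.95], [1, 1]); // ~0.0025, confident and right
brier([0.55, 0.55], [1, 1]); // ~0.2025, timid, scores much worse
```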


You might be interested in this blog post, which _does_ use a scoring rule: http://zackmdavis.net/blog/2020/11/scoring-2020-us-president....


Oh, great! And the prediction market did not beat polls (it tied with them on swing states and beat them on safe states, as I speculated).

I guess between this and the other commenter who made money betting 538's model on predictit, I consider this pretty thoroughly debunked.


You can’t do the analysis on PredictIt, they limit market size heavily.


Do you know of any similar sites that don't limit bet size?

I saw some sports betting sites were offering odds, but not really a true market like predictit I think


Betfair seems to offer most of the same contracts as PredictIt as two-sided markets with lower fees and no cap on bet size or number of participants.


They are geolocked for some features, however


PredictIt also appears to have this issue. I must proxy my traffic through my US VPS to use it.


augur


I still kinda feel like the market beat the polls. Kinda.

In order to interpret the polls, you need top statisticians with lots of domain knowledge. In order to interpret the betting market, you simply read the odds and are done.


If you went to the market and you bet that the polls would be correct, you made money.

The particular market available to US residents has separate issues about "reading the odds" where events that strictly contain other events are often priced multiple pennies cheaper, the total number of expected presidents is often ~1.1, events with probability at least 0.9 are almost always underpriced, often by a nickel or more, etc.


I’m interested in the liquidity constraints theory - Betfair has ~$1bn matched for the most recent election. At present there is ~$700k bid and offered at the market for both candidates (the market is still open). Could you expand on your thinking?


Sorry, that was imprecise. My impression is that, at least on some prediction markets, transaction fees (and maybe also inflation?) make it low-return to buy high-probability contracts. I don't bet on prediction markets myself so I may be wrong about this though!


How would the fees make it any less profitable for high probability contracts vs low probability contracts? And could you expand on the inflation theory? Appreciate the insight!


If you look at it in terms of risk-adjusted returns (Sharpe ratio), a fixed transaction fee costs more Sharpe on a low-variance bet than it does on a high-variance bet. The variance of balanced contracts is higher than the variance of unbalanced ones (p(1-p) is concave with maximum at p=0.5), so after adjusting for risk the transaction fee is more significant for the unbalanced contract.
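A quick sketch of that arithmetic (illustrative numbers only):

```javascript
// A fair contract pays $1 with probability p at price p, so its payout
// variance is p * (1 - p). A fixed fee f shaves f off the expected
// return, costing f / stddev in Sharpe-ratio terms.
const sharpeHit = (p, f) => f / Math.sqrt(p * (1 - p));

sharpeHit(0.95, 0.01); // ~0.046: a 1% fee on a 95% contract
sharpeHit(0.50, 0.01); // 0.02: the same fee on a 50/50 contract
```

The same fee costs the high-probability contract more than twice as much risk-adjusted return.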


what’s a ‘balanced’ contract?


I would guess that it's one where equal amounts of money are bet on each side, i.e. there's a 50% probability of the result going either way.


A market where expected returns are 1, i.e. the implied probabilities sum to 100%.


High probability contracts require you to put up a lot of capital to make a small profit. This would be fine for a bet that has a quick turnaround, but prediction bets often require the bet to be placed well in advance of the event, which means the capital is locked up for a long time... it is basically a low-interest loan to the bookie.


Yep cost of capital != inflation


Opportunity costs on long term high probability events eats up significant portions of the value of winning.

Add withdrawal fees and you could easily lose money while making an accurate prediction.


Sure but cost of capital != inflation. And I understand the impact of fees but I’m wondering how they could influence high likelihood bet profitability more than low likelihood bet profitability?


Let's suppose you're paying a 10% withdrawal fee on winnings and a 0% fee on principal. If your payoff is 10% and you're also losing 2% to opportunity cost, your net winning is (10% * 0.9 - 2%) = 7%; the withdrawal fee effectively reduced your winnings from 8% to 7%, a 12.5% loss.

On the other hand, if the bet paid 100%, you would net (100% * 0.9 - 2%) = 88%; the fee dropped you from 98% to 88%, a net loss of 10.2%.

PS: These numbers get much worse if the withdrawal fee includes the principal. Then you would have a net loss of 3% on the first bet ((110% * 0.9 - 2%) = 97% of your money back) and a net gain of 78% on the second.


I don’t know how you got those numbers. In all the betting markets I participate in, the fees are charged on profit - so it isn’t related to how much of my money I bet, it’s just related to profit. Win a 3:1 bet with 10% fees? You’re losing 10% of $3 if you bet $1 (ie: 30c). Win a 30:1 bet with 10% fees? You’re losing 10% of $30 if you bet $1 (ie: $3). I’m still trying to work out how the fees are “higher” for high probability bets - they’re always the same rate.


Edit: Numbers were picked for clarity; the principle is the same with a 0.1% fee or an 80% fee.

To simplify: not using money for some time period has an opportunity cost, either because you could have paid off a loan sooner, or bought a T-bill, or whatever. If the bet takes a year, then at the end of that year you could either have (principal + opportunity cost) if you don't make the bet, (principal + winnings) if you make the bet and win, or nothing if you lose. Therefore the cost of making a bet in 2010 that paid out in 2020 isn't the money you put up front, but the money you could have had in 2020 without making the bet.

However, the winnings are calculated based on your initial payment, not the payment + opportunity cost. So let's look at a bet that pays 1$ after fees and costs you 1$ worth of opportunity cost. In such a bet the fees reduce your winnings to zero, because you could have had the same amount of money without the risk of the bet.


To simplify, why bet when the winnings are tiny. There's other things you could do with that money.


Sure, but that’s a cost of capital discussion not a “fees are higher for bets with higher likelihood of winning” one.


It's related. Your actual gains = (winnings * (1 - fee)) - opportunity cost.

The effective withdrawal fee is 1 - (actual gains / (winnings - opportunity cost)).

So if the actual fee is 10%, your winnings are 2$, and your opportunity cost is 1$, then the effective fee is 20%. In other words, a 10% withdrawal fee dropped your actual gains from 1$ to 80 cents.

However, if your winnings were 1,000$, then your effective withdrawal fee is 10.01%, dropping your actual gains from 999$ to 899$. In other words, fees are more impactful on high probability bets than on low probability bets.
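The same calculation as a one-liner (using the commenter's made-up numbers):

```javascript
// Effective fee once opportunity cost is counted against winnings.
const effectiveFee = (winnings, fee, oppCost) =>
  1 - (winnings * (1 - fee) - oppCost) / (winnings - oppCost);

effectiveFee(2, 0.10, 1);    // ~0.2: a 10% fee behaves like 20%
effectiveFee(1000, 0.10, 1); // ~0.1001: barely above the nominal 10%
```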


yep, predictit takes a 5% fee on profit + 10% on withdrawals


Other way around. 10% profit 5% withdrawal.


They also only looked at a single election where Republicans outperformed polls across the board. If you look at the 2016 election, 538 was more bullish on Republicans than betting markets, and came out ahead.


In addition to just using Brier scores (or alternatives) to evaluate their performance, I'm also interested in seeing how 538's underlying Monte Carlo models shift based on x% of error/uncertainty (for example, instead of evaluating a polling advantage as a single value K within a single simulation, consider treating it as a normally distributed variable).


Also betting markets are highly influenced by both polls and published election models, so it’s hard to say that they ‘beat’ them while also being influenced by them.


Not to mention that 538 gave Biden a ~90% chance to win right before the election, whereas betting markets gave Biden a ~60-65% chance to win.

So if you go by the most important metric of what was trying to be predicted, 538 was far more accurate for the one giant data point.


I actually ended up making v2 of the ultra-bright light out of 3 7-way socket splitters and 21 100w-equivalent normal bulbs[1]. No fans needed, higher CRI and I got the bulbs during a 50% sale so was cheaper IIRC :)

[1]: https://www.homedepot.com/p/Cree-100W-Equivalent-Daylight-50...


Thanks for the update!

