Hacker News

PMs have gotten addicted to the sugar rush of features and developers to fried, salty frameworks, creating a culture of obesity in software engineering. This needs to stop. Every product should be tested on a 10-year-old system and perform satisfactorily.


I couldn't agree more.

I have a 4-year-old laptop that cost me $3000. Xcode is so laggy that sometimes if I type, it'll miss keystrokes and words come out garbled beyond recognition. A few weeks ago some source code I downloaded from GitHub wouldn't compile because the Swift compiler spent too long doing type inference and decided to error out.

I'm haunted by a vision of computing. In the vision, we software folks just add bloat to everything until it starts feeling gluggy and slow on our modern, expensive computers. Then we optimize our programs just enough to keep them running vaguely OK on whatever hardware is on our desks.

The only way to have a snappy computer running modern software is via the tireless work of hardware engineers. Whenever a big hardware performance improvement comes (like the M1), there's a window of a year or so where if you upgrade, everything will run fast again. And of course, all the devs with the new machines stop optimizing their programs and gradually everything slows down again. Eventually our software goes back to being as slow as it was before, except if you didn't upgrade your computer, now it barely functions.

I want off this ride. My 6 year old FreeBSD server is somehow still snappy and responsive. Maybe the answer is to just not run modern desktop software.


A good chunk of that is unique to the Swift part of Xcode. Do the same things in an Objective-C codebase and you'll be surprised at how snappy and non-buggy everything is.

I maintain that you could've gotten 80-90% of the benefits of Swift with a syntax refresh and a few new cheap-to-compile features added to Objective-C itself, like stronger nullability annotations and ADT enums.
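By "ADT enums" I mean enums whose cases carry payloads and get matched exhaustively, which Swift has and classic Objective-C `NS_ENUM` doesn't. A minimal sketch in Rust (which has the same feature) - the type and names here are made up for illustration, not from any real codebase:

```rust
// An algebraic-data-type enum: each case can carry its own payload.
enum FetchResult {
    Success(Vec<u8>),    // case with an associated value
    NotFound,            // plain case, no payload
    Error { code: i32 }, // case with named fields
}

// The compiler forces this match to handle every case.
fn describe(r: &FetchResult) -> String {
    match r {
        FetchResult::Success(bytes) => format!("{} bytes", bytes.len()),
        FetchResult::NotFound => "not found".to_string(),
        FetchResult::Error { code } => format!("error {}", code),
    }
}
```

The point is that this is a cheap-to-compile feature: it's tagged unions plus exhaustiveness checking, with none of the type-inference cost discussed further down the thread.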

I think the 'good enough software' stuff will stop happening when physical limits start constraining improvements in compute, because at that point the only way to gain a competitive advantage is through better software.

I hope that day is far away, though, because it would mean that something like 8K consumer VR, where the software is written to take full advantage of the hardware, will never happen.


Right; probably because most of the Obj-C compiler toolchain was implemented when computers were slower. I was beating on Xcode in my comment because it deserves it, but this problem isn't confined to Xcode and Swift. Compiling Rust is also extremely slow. The new Reddit feels like a slideshow (and despite launching years ago, how has the Reddit team still not fixed the performance problems?)

We're having this conversation in the context of Notion. Starting up Notion on my laptop feels like wading through honey. I really want to love Notion, but every time I open it and try to do anything, I find my motivation trickling away. Notion isn't even doing anything impressive with all that compute - it's just slow. The sales pitch I make in my head for Notion is that it's a place to organize all my notes and workflows, but like Jira, living in Notion means accepting that I work at the speed Notion does, and that feels antithetical to the feeling of productivity I have in snappier, lighter tools like Bear and iA Writer.

But Notion and Xcode aren't really the problem. The problem is cultural. It seems to be worst in the web ecosystem, though it's not confined there. My favorite example of all is this issue in web3.js[1]. I can't help but play Benny Hill music in my head while I scroll through this three-year-old, apparently unsolvable issue thread.

When the new M1 Macs came out, lots of people were complaining that there was no 32 GB variant. Holy cow, that is so many bytes. The Atari 2600 had 128 bytes of RAM. The NES ran Super Mario Bros in 2 KB. It's completely ridiculous to me that our software can even fill 16 GB doing everyday computing. Is there a ceiling? Will there ever be a ceiling, or should we expect a document editor in 2030 to sit on 256 GB of RAM, just because that's how much RAM modern JavaScript frameworks use by then?

[1] https://github.com/ethereum/web3.js/issues/1178#issuecomment...


It's an economic tradeoff. Engineers are expensive, and businesses choose what is most productive for them based on what customers respond to. Also, performance work is not fun and is more work than new features. Performance bugs are harder to detect and quantify than ordinary crash bugs because they are statistical ranges, not binary on/off states.

Performance is taken into account when it matters for the product; you'll notice this in games, where they do a lot of perf-improvement work.

Also, Swift and Rust are slow to compile due to their design as languages. They provide very strong guarantees and make a lot of things static. It's very similar to the reasons why C++ is slow to compile.

Also, Notion is like 10 engineers, last time I checked, and they've decided to go with speed of development and features rather than nicer but slower-to-develop tooling.


The economic argument fails to account for the slow death of a product as customers don't even notice how much slower things have gotten over the years. Then all of a sudden a snappy competitor appears and they jump ship en masse.

The business people are still clueless and continue to double down on more features their customers don't want, while the competitor gains market share.


> Performance is taken into account when it matters for the product; you'll notice this in games, where they do a lot of perf-improvement work.

I think most development shops chronically underestimate how much application performance matters for sales. Most people will never file bug reports when software is slow. They just "won't like that program for some reason". (Or worse, misattribute the reason they don't like your product.) Slowness is invisible to the developers because they're almost always using computers which are much faster than the computers their customers are using. And unless you actually take the time to record performance statistics, you won't have any idea this is the case.

Game developers know this. People measure the FPS they get on their gaming computers. But as far as I can tell, the reason isn't an essential quality of video games. The difference is culture. There's a culture in video game development of measuring performance. People have the tools and terminology to talk about it. In comparison, what are Notion's performance numbers like? Does Notion run at 60 fps for the average user? How many ms of typing latency does it usually have? Does the team even know?
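Answering those questions doesn't require fancy infrastructure. A minimal sketch in Rust of the kind of measurement I mean - time each keystroke (or frame), keep the samples, and report percentiles; all names here are mine, not anything Notion actually exposes:

```rust
use std::time::Instant;

// Run a unit of work (one keystroke, one frame) and return how long it took, in ms.
fn time_ms<F: FnOnce()>(work: F) -> f64 {
    let start = Instant::now();
    work();
    start.elapsed().as_secs_f64() * 1000.0
}

// Nearest-rank percentile over an already-sorted slice; p in [0.0, 1.0].
fn percentile(sorted_ms: &[f64], p: f64) -> f64 {
    let idx = ((sorted_ms.len() as f64 - 1.0) * p).round() as usize;
    sorted_ms[idx]
}

// Sort the raw samples and report the two numbers that matter:
// typical latency (p50) and worst-case-ish latency (p99).
fn summarize(samples: &mut Vec<f64>) -> (f64, f64) {
    samples.sort_by(|a, b| a.partial_cmp(b).unwrap());
    (percentile(samples, 0.50), percentile(samples, 0.99))
}
```

A few hundred lines like this, wired into the input path and shipped back as telemetry, is the difference between "the team knows their p99 typing latency" and "users just vaguely don't like the product".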

> Also, Swift and Rust are slow to compile due to their design as languages. They provide very strong guarantees and make a lot of things static. It's very similar to the reasons why C++ is slow to compile.

Roll to disbelieve. Swift and Rust are simpler languages than C++, and yet compile much more slowly. The reason C++ seems to compile slowly isn't templates. It's because of the way C++ header files need to be parsed over and over again, and that's not a problem Rust and Swift share. (Rob Pike has some great rants about this if anyone's interested.) A well-written, from-scratch Swift compiler should be able to get compilation speeds closer to Go's. At least for debug builds.

I suspect that if Swift and Rust had been invented 20 years ago, we would still have been able to write compilers that ran at tolerable speeds on the computers of the day. And if that's the case, the reason for slow compilation is something other than language design.

> Also, Notion is like 10 engineers, last time I checked, and they've decided to go with speed of development and features rather than nicer but slower-to-develop tooling.

Notion can make that choice if they want. But until they fix performance, I won't / can't use their product.

> Also performance work is not fun

Speak for yourself - I love performance work. It's so measurable, and the feeling of taking something sluggish and making it scream is delightful.


I work on improving performance at a big tech company. It can feel like pulling teeth to get executives, managers, and developers to care about performance, because incentives are misaligned and it's frankly a chore compared to the million other things you have to be doing. It's also significantly harder to show the business-metric improvement that comes from performance work than from simple A/B tests, due to how much variance there is in performance metrics compared to A/B testing. It's very much like security, actually.

For games, it's not a 'culture'; it's that anything fast-paced becomes very frustrating to play when unresponsive, and thus will review badly, become 'not fun', and not sell. It's directly connected to sales metrics, which is why management at game companies gives space for it, and why CDPR recently offered refunds for a game because of bad perf on the PS4 tier of consoles, which is fairly unprecedented. You'll typically notice worse performance in slower-paced games, such as Civ 6.

Also, I've worked on trying to improve Swift build perf and have worked on large C++ codebases before. The typical reason C++ is slow to compile is that templates cause an explosion in code generation: everyone uses Boost and the stdlib smart pointers and containers, which all template heavily and result in slow builds and large binary sizes. Swift and Rust do the same thing, often implicitly and invisibly to the programmer; that, plus type inference, is the typical reason they're slow, and they too create large binaries. When I started running into Swift's build problems, I said to myself, "oh boy, it's C++ again", and if you go to WWDC and talk to Xcode's engineers or Swift compiler engineers, they often call Swift 'the new C++' and other less flattering names. Swift and Rust are also incredibly complicated languages with a lot of edge cases once you get into the weeds, on par with C++.

Why are these languages using features such as type inference, stricter and more complicated type systems, and heavy templating? Because computers are now fast enough that these 'developer ergonomics' and 'multithreaded memory safety without locks' features are viable in the compilers of today, where they weren't in the compilers of yesterday. If designers could have delivered such abilities in the past without a compiler that took too long, they would have. In the past, C++ was known as a slow-compiling language, like Swift is today. You also notice slow build times in other 'statically sophisticated' languages such as Scala and Haskell.

Golang was specifically designed to provide fast compile times. When they were designing the language, if they had to decide between a cool language feature and fast builds, they chose fast builds.

Try to expand your horizons: just because you like something doesn't mean most people do. I wish most people liked performance-improvement work.


Fair enough; thanks for taking the time to chat about this. I'm not sure I agree, but I appreciate your perspective and experience.

For all that you've said, I still strongly suspect that a standalone Rust / Swift compiler designed for speed above all else could achieve many times the performance of the current LLVM-based designs. The resulting binaries might need to lean heavily on dynamic dispatch over monomorphization to achieve that - but during development, that's a tradeoff I'd take just about every time. The common factor in Swift and Rust is LLVM - both use it for codegen, and both compilers think about the program in terms of LLVM constructs. The fastest compilers I know about - V8, LuaJIT, Go, Jai(?) - are designed with both performance and the target language in mind, and (I assume) don't translate the code through a series of complex intermediate representations internally. And you're right - none of those languages do complex template specialization.
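The dynamic-dispatch-vs-monomorphization tradeoff is easy to show concretely in Rust. In this made-up example, the compiler emits a separate copy of `total_generic` for every concrete `T` it's called with (fast calls, more codegen work), while `total_dyn` is compiled exactly once and dispatches through a vtable (slower calls, far less code to generate):

```rust
trait Priced {
    fn price(&self) -> u32;
}

struct Book;
struct Coffee;
impl Priced for Book   { fn price(&self) -> u32 { 15 } }
impl Priced for Coffee { fn price(&self) -> u32 { 4 } }

// Monomorphized: one specialized copy per concrete T used at call sites.
fn total_generic<T: Priced>(items: &[T]) -> u32 {
    items.iter().map(|i| i.price()).sum()
}

// Dynamic dispatch: one copy total; `price` is looked up through a vtable.
fn total_dyn(items: &[&dyn Priced]) -> u32 {
    items.iter().map(|i| i.price()).sum()
}
```

A debug-oriented compiler could in principle compile everything like `total_dyn` and only monomorphize for release builds, which is roughly the tradeoff being suggested above.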

Also, it's probably no coincidence that, except maybe for V8, all of those compilers were designed by a single expert mind, not a huge team spanning multiple companies. As you point out, large-scale refactoring (required for top-tier performance) gets increasingly difficult as team and code size increase. The reasons are partially political ("your commit conflicts with how much??"), and partially that it gets super difficult for any one mind to conceive of the whole program at once when it has so much code and so many authors. Let alone do large-scale refactoring.

I don't know if it's true, but I heard the Chrome team started with just a dozen or so very senior engineers who all had experience with WebKit. Before they brought anyone else onto the team, they spent months iterating together on the internal design of the browser engine until they were happy with it. They did it that way because they figured it would be basically impossible to change the core down the road once a larger team was hacking on it.

> Why are these languages using features such as type inference, stricter and more complicated type systems along with heavy templating? It's because now ... these features become viable in the compilers of today vs. the compilers of yesterday.

Maybe. I think we also just got better at designing languages. Rust's trait system seems both simpler and better than C++'s classes + templates. Traits would have been just as viable in a compiler decades ago, but OO was trendy and we didn't know any better.
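To make the traits-vs-classes-and-templates claim concrete, here's a small sketch (my own example, names invented): a Rust trait plays both the "interface" role of a C++ base class and the "constraint" role of a template parameter, and the bound is checked where the generic function is defined, not where a template happens to be instantiated:

```rust
use std::fmt;

trait Shape {
    fn area(&self) -> f64;
}

struct Circle { r: f64 }

impl Shape for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
}

impl fmt::Display for Circle {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "circle")
    }
}

// `T: Shape + fmt::Display` is the entire contract. Calling anything outside
// those traits is an error in THIS function body - unlike a C++ template,
// which is only type-checked when instantiated with a concrete type.
fn label<T: Shape + fmt::Display>(s: &T) -> String {
    format!("{}: {:.1}", s, s.area())
}
```

Nothing in that design needed fast modern hardware; it's essentially Haskell's type classes from 1990, which is the sense in which we "just got better at designing languages".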

> just because you like something, it doesn't mean most people like it

Oh, I know! I've spent enough time working on teams building websites to know I'm not the average bear. But I don't think it's because performance work is fundamentally uninteresting. I think it's because most engineers working on websites don't have enough raw CS skill to comfortably step into the performance arena.


AFAIK Swift and Rust were actually each initially designed and built by one engineer for a couple of years before anyone else worked on them, and were led for years by those specific engineers: Chris Lattner for Swift and Graydon Hoare for Rust.


Interesting to know! But even if those guys made super-fast frontends, they would always have been limited by the speed of the LLVM optimizer and code-generation backend. Writing a compiler on LLVM isn't the same as writing a compiler from scratch.


In Swift's case, it's a single-threaded frontend that is fairly inefficient: https://github.com/apple/swift/blob/main/docs/CompilerPerfor...



