Castle Wolfenstein, released in 1981, was the first game to include digitized speech and an early example both of stealth gaming and of the World War II shooter.
It was also an early example of procedurally generated levels. During startup, it would create the castle map layout and write it to the floppy disk. If you wanted to replay the level, you could pull the disk out and add a write-protect tab so it couldn't be overwritten.
It's a minor nitpick, but it's more accurate to say the game would create the room configurations on startup (inner walls, doors, enemy placements, chests, stair locations, etc.); the castle map for each level was fixed (Beyond Castle Wolfenstein worked the same way). I still have the maps my brother and I drew to aid in our escapes!
There are lots of Facebook groups I like. No one is talking about my neighborhood on HN! But besides interesting local groups, there are amazing niche technical groups on Facebook that would cross over with HN. A few I find useful:
Calling out the cholesterol is something of a distraction, because dietary cholesterol is no longer what's primarily associated with heart disease. The level of saturated fat in the Impossible Burger, on the other hand, is actually very high.
Impossible Burger, for example, has more than double the saturated fat of an 85% lean beef burger: 3.6 grams per ounce (derived from coconut oil) versus 1.7.
I know there is debate about the link between saturated fat and heart disease, and further questions about the specific types of saturated fat in coconut oil. I have no idea what the truth is, but if you are on a low-fat diet, the Impossible Burger isn't a good substitute for beef.
I've tried the Impossible Burger and thought it was very good, so I'd eat it just for the flavor.
It was as seriously discussed and considered 40 years ago as it is today. Maybe more seriously, since Medicare and Medicaid had recently been enacted and there was momentum.
There are inexpensive youth and junior skates (<$100), but those are for younger kids. As soon as you're wearing adult-sized skates (around 10-12 years old), the price is many times that of cleats.
Ah, you provided examples unlike the other comment I responded to. Thank you for that.
But if sorted low-to-high, the senior skates on that site start at $64.99 and go up from there. Assuming the lower-cost skates are not worth it (you get what you pay for and all that), what's wrong with the middle-tier skates in the $100-$200 range? How long do they last once a kid's foot doesn't outgrow them every 3 to 6 months?
It's exactly this limit that eventually leads engineering teams at your Dropboxes, Facebooks, Twitters, and Tumblrs to move away from it.
True, but those are such extreme edge cases. 99.9999% of projects will never see that kind of demand for scale and speed. Facebook shouldn't switch to Ruby, but that doesn't inform whether Ruby would be viable for many (most) projects.
I don't think it's that much of an edge case. If Rails and its ecosystem lead to 300ms render times without load, but your objective is 100ms, you're going to have a problem. I don't think either number here is extreme and I've encountered that sort of situation on previous projects. It's not easily solvable by throwing more machines at it, unfortunately.
I don't think it's uncommon. But the point is it usually has little to do with load and isn't easily solved by scaling horizontally. Sure, you might be able to gain some efficiencies with distributed caches and such, but that still requires a fair bit of effort (the developer time we're trying to minimize).
You're right though. And I managed to sidetrack myself a bit. My intention in participating in the thread was to point out that you can't always optimize for developer speed and think you can get out of it "cheaply" with more hardware. And it's not just a problem when you're big enough for it to be a good problem to have. In those cases you're going to need to find a way to make the application faster.
Maybe it's aggressive in-app caching; maybe it's sitting there analyzing application profiles; maybe it's a rewrite in another language or another framework. Our goal is to provide a fast enough runtime where you don't have to make that decision.
If you can achieve the same rendered output much faster with a different runtime or with a rewrite or with a different framework, I don't think you're looking at a fundamental design flaw with the page. I'm happy to agree to disagree. Maybe there is too much going on with the page. But it's a common problem with Rails apps and is ideally solved without a rewrite. I think a faster runtime would be the ideal solution here.
At my day job we've seen load times for pages > 5 seconds in Rails. There's no way to do it faster because the data has to be dynamically generated and loaded, and that's the bottleneck.
It's not uncommon to see this in Rails applications; GitLab had the same issues.
In GitLab's case, a lot of their issues were just the classic self-inflicted N+1 queries and poor planning of how to represent events in a way that could be queried quickly for display.
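For anyone unfamiliar with the pattern, here's a minimal sketch of an N+1 query in plain Ruby, with an array standing in for the database and a log standing in for issued SQL (the event/author names are illustrative, not GitLab's actual schema):

```ruby
# Each call to find_author stands in for one SQL query.
QUERY_LOG = []
AUTHORS = { 1 => "alice", 2 => "bob" }

def find_author(id)
  QUERY_LOG << "SELECT * FROM users WHERE id = #{id}" # one query per event
  AUTHORS[id]
end

def find_authors(ids)
  QUERY_LOG << "SELECT * FROM users WHERE id IN (#{ids.join(', ')})" # one batched query
  AUTHORS.values_at(*ids)
end

events = [{ author_id: 1 }, { author_id: 2 }, { author_id: 1 }]

# N+1: one author lookup per event -> N queries on top of the events query.
events.each { |e| find_author(e[:author_id]) }

# Batched: collect the ids first -> a single IN query.
find_authors(events.map { |e| e[:author_id] }.uniq)

puts QUERY_LOG.size # 3 queries from the N+1 path, 1 from the batched path
```

In ActiveRecord terms the batched version is what `includes`/eager loading does for you; the N+1 version is what you get by accident when a view touches `event.author` in a loop.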
OTOH, the background-job memory issues were completely not their fault. glibc's malloc trades higher memory usage for high throughput and simplicity. With Sidekiq or Puma, this causes much higher memory usage than necessary. Switching to jemalloc, which they did, can reduce memory usage by 50-75%.
What does your app do that takes 5 seconds? Is this because hundreds of DB calls are required, or you need to serialize thousands of objects?
I've seen numerous big rails apps with times as long or longer, even after decent effort has been put in to optimize. Generally a case of having a lot of things on a page and rendering it all into html server-side (the classic/old rails way of doing things).
Doesn't Basecamp do a ton of caching to achieve response times like that? There are certainly techniques to get there, but how long does it take to populate the caches? That's what you're going to see out of a stock Rails app.
Self-inflicted, sure. But in my experience it's the result of picking developer-friendly tools to optimize for developer time. Erubis is much faster than HAML, but a lot of developers prefer the latter. Those sorts of decisions accumulate.
No more than Match.com does, which runs on .NET. Basecamp just uses the standard Rails caching, AFAIK.
Templating just isn't the issue now that it was in the days of 1.8. HAML is slower than ERB, but usually neither have a meaningful impact on response time on modern Ruby. ERB is approaching Erubis performance now.
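For context, the per-request rendering work being discussed looks like this with stdlib ERB (template and data invented for illustration); on modern Ruby this step is rarely the bottleneck:

```ruby
require "erb"

# A tiny ERB render of the kind a Rails view does per request.
items = %w[a b c]
template = ERB.new("<ul><% items.each do |i| %><li><%= i %></li><% end %></ul>")

html = template.result(binding)
puts html # => <ul><li>a</li><li>b</li><li>c</li></ul>
```

HAML compiles its indentation-based syntax down to similar generated Ruby; the historical speed gap was in that compiled output, not in anything fundamental to the markup.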
I worked on a high performance Rails API serving dynamically generated map tiles and our response times were actually lower than that.
Sounds like an ideal case for fast response times honestly. I've seen rails endpoints like that aplenty. But there also exists a lot of stuff taking hundreds of ms on a cold cache, and it's the kind of thing where the same effort done in the same way using another language ecosystem would just be 10x as fast.
If you've never seen this, perhaps you've never worked with a large monolithic rails app with a few years of legacy code built up? Brand new API endpoints can be a lot faster of course.
10x faster in, say, Go? Are you sure? Have you benchmarked it? You might be surprised.
Let's pick a weakness of Ruby: tight loops and lots of math. Take 16,000 GPS points and calculate the geographical distance from each to another point.
On my machine it takes 12-14ms in Ruby and 4-6ms in Go. That's only a little bit more than twice as fast! Almost certainly not worth rewriting your entire app for.
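The Ruby side of a benchmark like that might look something like this (using the haversine formula; the 16,000-point figure is from the comment above, the coordinates are random):

```ruby
# Distance from many GPS points to one fixed point: a tight numeric loop,
# exactly the kind of workload where Ruby is weakest.
EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2)
  to_rad = Math::PI / 180.0
  dlat = (lat2 - lat1) * to_rad
  dlon = (lon2 - lon1) * to_rad
  a = Math.sin(dlat / 2)**2 +
      Math.cos(lat1 * to_rad) * Math.cos(lat2 * to_rad) * Math.sin(dlon / 2)**2
  2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a))
end

points = Array.new(16_000) { [rand(-90.0..90.0), rand(-180.0..180.0)] }
target = [51.5074, -0.1278] # arbitrary fixed point (London)

distances = points.map { |lat, lon| haversine_km(lat, lon, *target) }
```

The Go version is line-for-line the same math, which is why the gap stays in the low single digits rather than the 10x sometimes claimed.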
I've worked with a bunch of large, legacy Rails apps. In between a couple of Rails jobs, I worked on a large .NET app, which didn't have much better performance. One service I helped migrate from Rails to Spring/Java for an enterprisey client actually got slower overall, despite moving to the JVM.
I hadn't realized that QQ and WeChat are both part of Tencent, but in different divisions. It would be interesting to know how collaborative or rivalrous the relationship between them is.
QQ was a desktop-focused app and WeChat was a ground-up rewrite of messaging for mobile. Competing against themselves is pretty brilliant, IMO: it creates alternatives for users while still letting the company own the market.
I'd call it an interpretation, not a theory. But clearly the conflicting inputs were a UI problem, and while there may be tradeoffs, a better UI could have prevented the AF 447 crash.
This really won't die. It's clear if you read the accident report (linked in another comment) that the plane was already doomed by the time that conflicting inputs were potentially a factor.
You normally can't stall an Airbus, except in cases where the flight control law changes, as it did in this situation. This was a huge part of the problem: an emergency occurred, and the pilots were effectively flying an aircraft they hadn't flown before.
Fully pulling back the stick is what you'd normally do in an Airbus to gain altitude. Under normal law, it's guaranteed not to stall the plane.
UI issues contribute to situational complexity, stress, ambiguity, and failures to respond appropriately to circumstances.
And whilst the BEA report ... rather inexplicably, frankly ... fails to mention the inputs issues, it does address all the other factors I've described here.
"Tax credits are subtracted not from taxable income, but directly from a person’s tax liability; they therefore reduce taxes dollar for dollar. As a result, credits have the same value for everyone who can claim their full value."