Hacker News | NoPiece's comments

When I was a kid I had a nice cache of coverless books I'd retrieved from the trash bin behind the local Crown Books.


Castle Wolfenstein, released in 1981, was the first game to include digitized speech and an early example both of stealth gaming and of the World War II shooter.

It was also an early example of procedurally generated levels. During startup, it would create and write the castle map layout to the floppy disk. If you wanted to replay the level, you could pull the disk out and add a write-protect tab so it couldn't be overwritten.


It's a minor nitpick, but it's more accurate to say the game would create the room configurations on startup (inner walls, doors, enemy players, chests, stair locations, etc.); the castle map for each level was fixed (Beyond Castle Wolfenstein worked the same way). I still have the maps my brother and I drew to aid in our escapes!


There are lots of Facebook groups I like. No one is talking about my neighborhood on HN! But besides interesting local groups, there are amazing niche technical groups on Facebook whose audiences would cross over with HN's. A few I find useful:

Apple ][ Enthusiasts: https://www.facebook.com/groups/5251478676/

BMW i3 Worldwide: https://www.facebook.com/groups/BMWi3/

Monoprice Mini (3d printer group) https://www.facebook.com/groups/710952782398723/

Commodore Amiga: https://www.facebook.com/groups/CommodoreAmiga/


Calling out the cholesterol is something of a distraction, because dietary cholesterol is no longer what's associated with heart disease. The level of saturated fat in the Impossible Burger, however, is very high.

Impossible Burger, for example, has more than double the saturated fat of an 85% lean beef burger: 3.6 grams per ounce (derived from coconut oil) versus 1.7.

https://www.foodandwine.com/news/great-veggie-burger-debate-...

I know there is debate about the link between saturated fat and heart disease, and further questions about the specific types of saturated fat in coconut oil. I have no idea what the truth is, but if you are on a low-fat diet, the Impossible Burger isn't a good substitute for beef.

I've tried impossible burger, and thought it was very good, so I'd eat it just for the flavor.


> The level of saturated fat in the impossible burger is actually very high.

Also high in sodium, which is part of the reason it's so yummy (to some).


It was as seriously discussed and considered 40 years ago as today. Maybe more seriously, as Medicare and Medicaid had recently been enacted, so there was momentum.

https://en.wikipedia.org/wiki/History_of_health_care_reform_...


There are inexpensive youth and junior skates (<$100), but these are for younger kids. As soon as you are wearing adult sized skates (10-12yrs old), the price is many times that of cleats.

https://www.purehockey.com/c/ice-hockey-skates-senior


Ah, you provided examples, unlike the other comment I responded to. Thank you for that.

But if sorted low-to-high, the senior skates on that site start at $64.99 and go up from there. Assuming the lower-cost skates are not worth it (you get what you pay for and all that), what's wrong with the middle-tier skates in the $100-$200 range? How long do they last once a kid's foot isn't outgrowing them every 3 to 6 months?


> It's exactly this limit that eventually leads engineering teams at your Dropboxes, Facebooks, Twitters, Tumblrs

True, but those are such extreme edge cases. 99.9999% of projects will never see that kind of demand for scale and speed. Facebook shouldn't switch to Ruby, but that doesn't inform whether Ruby would be viable for many (most) projects.


I don't think it's that much of an edge case. If Rails and its ecosystem lead to 300ms render times without load, but your objective is 100ms, you're going to have a problem. I don't think either number here is extreme and I've encountered that sort of situation on previous projects. It's not easily solvable by throwing more machines at it, unfortunately.


If Rails and its ecosystem are pushing 300ms renders, you have bigger problems on your plate.


I don't think it's uncommon. But the point is it usually has little to do with load and isn't easily solved by scaling horizontally. Sure, you might be able to gain some efficiencies with distributed caches and such, but that still requires a fair bit of effort (the developer time we're trying to minimize).

You're right though. And I managed to sidetrack myself a bit. My intention in participating in the thread was to point out that you can't always optimize for developer speed and think you can get out of it "cheaply" with more hardware. And it's not just a problem when you're big enough for it to be a good problem to have. In those cases you're going to need to find a way to make the application faster.

Maybe it's aggressive in-app caching; maybe it's sitting there analyzing application profiles; maybe it's a rewrite in another language or another framework. Our goal is to provide a fast enough runtime where you don't have to make that decision.


> the point is it usually has little to do with load and isn't easily solved by scaling horizontally.

My point exactly. Unless you're getting slammed with traffic, there's too much going on in a page load if it takes 300ms to spit out.


If you can achieve the same rendered output much faster with a different runtime or with a rewrite or with a different framework, I don't think you're looking at a fundamental design flaw with the page. I'm happy to agree to disagree. Maybe there is too much going on with the page. But it's a common problem with Rails apps and is ideally solved without a rewrite. I think a faster runtime would be the ideal solution here.


At my day job we've seen page load times > 5 seconds in Rails. There's no way to do it faster, because the data has to be dynamically generated and loaded, and that's the bottleneck.

It's not uncommon to see this in Rails applications; GitLab had the same issues.


In GitLab's case, a lot of their issues are just the classic self inflicted N+1 queries and poor planning of how to represent events in a way that could be quickly queried for display.
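The N+1 pattern is easy to see with a toy query counter. This plain-Ruby sketch uses made-up data and fake SQL strings (no ActiveRecord, no database) purely to count how many queries each approach would issue:

```ruby
# Toy query counter illustrating the N+1 pattern. All names and data
# here are hypothetical; we only count simulated queries.
query_log = []
query = ->(sql) { query_log << sql }

Post = Struct.new(:id, :author_id)
posts = [Post.new(1, 10), Post.new(2, 20), Post.new(3, 10)]

# N+1: one query for the posts, then one more per post for its author.
query.call("SELECT * FROM posts")
posts.each { |p| query.call("SELECT * FROM authors WHERE id = #{p.author_id}") }
n_plus_one_queries = query_log.size # 1 + 3 = 4

query_log.clear

# Eager loading: two queries total, no matter how many posts there are.
query.call("SELECT * FROM posts")
ids = posts.map(&:author_id).uniq.join(", ")
query.call("SELECT * FROM authors WHERE id IN (#{ids})")
eager_queries = query_log.size # 2

puts "N+1: #{n_plus_one_queries} queries, eager: #{eager_queries} queries"
```

In Rails terms, this is roughly the difference between iterating `Post.all` and touching `p.author` on each row versus `Post.includes(:author)`.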

OTOH, the background job memory issues were completely not their fault. Glibc's malloc trades higher memory usage for high throughput and implementation simplicity. With Sidekiq or Puma this causes much higher memory usage than necessary. Switching to jemalloc, which they did, can reduce memory usage by 50-75%.

What does your app do that takes 5 seconds? Is this because hundreds of DB calls are required, or you need to serialize thousands of objects?


Out of pure curiosity what took 300ms? Image composition? Huge JSON payload serialization?


I've seen numerous big rails apps with times as long or longer, even after decent effort has been put in to optimize. Generally a case of having a lot of things on a page and rendering it all into html server-side (the classic/old rails way of doing things).


300ms from server-side rendering requires something self-inflicted. Even bulk JSON endpoints should be faster than that.

27ms is the average dynamic response time for Basecamp, which is server-side rendered HTML done the classic way.

I worked on a high performance Rails API serving dynamically generated map tiles and our response times were actually lower than that.


Doesn't basecamp do a ton of caching to achieve response times like that? There are certainly techniques to get there, but how long does it take to populate the caches? That's what you're going to see out of a stock Rails app.

Self-inflicted, sure. But in my experience it's the result of picking developer-friendly tools to optimize for developer time. Erubis is much faster than HAML, but a lot of developers prefer the latter. Those sorts of decisions accumulate.


No more than Match.com does, and they run on .NET. Basecamp just uses the standard Rails caching, AFAIK.

Templating just isn't the issue now that it was in the days of 1.8. HAML is slower than ERB, but usually neither has a meaningful impact on response time on modern Ruby. ERB is approaching Erubis performance now.
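For a rough sense of template cost, here's a hypothetical micro-benchmark of plain ERB from the standard library; the template and item count are made up, and absolute timings will vary by machine:

```ruby
require "erb"
require "benchmark"

# A small list template, rendered many times to approximate per-request cost.
template = ERB.new("<ul><% items.each do |i| %><li><%= i %></li><% end %></ul>")
items = (1..100).to_a

elapsed = Benchmark.realtime do
  1_000.times { template.result_with_hash(items: items) }
end
puts format("%.1f ms for 1000 renders", elapsed * 1000)
```

On any recent Ruby this lands well under a millisecond per render, which is why template choice rarely dominates a 300ms response.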


> I worked on a high performance Rails API serving dynamically generated map tiles and our response times were actually lower than that.

Sounds like an ideal case for fast response times honestly. I've seen rails endpoints like that aplenty. But there also exists a lot of stuff taking hundreds of ms on a cold cache, and it's the kind of thing where the same effort done in the same way using another language ecosystem would just be 10x as fast.

If you've never seen this, perhaps you've never worked with a large monolithic rails app with a few years of legacy code built up? Brand new API endpoints can be a lot faster of course.


10x faster in, say, Go? Are you sure? Have you benchmarked it? You might be surprised.

Let's pick a weakness of Ruby: tight loops and lots of math. Take 16,000 GPS points and calculate the geographical distance from each to another point.

On my machine it takes 12-14ms in Ruby and 4-6ms in Go. That's only a little bit more than twice as fast! Almost certainly not worth re-writing your entire app for.
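A sketch of that benchmark in Ruby, using the haversine formula as a stand-in for whatever distance calculation was actually used (the point counts and coordinates are illustrative; timings will differ by machine):

```ruby
require "benchmark"

# Haversine great-circle distance in km between two points given in degrees.
EARTH_RADIUS_KM = 6371.0

def haversine(lat1, lon1, lat2, lon2)
  rad = Math::PI / 180
  dlat = (lat2 - lat1) * rad
  dlon = (lon2 - lon1) * rad
  a = Math.sin(dlat / 2)**2 +
      Math.cos(lat1 * rad) * Math.cos(lat2 * rad) * Math.sin(dlon / 2)**2
  2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a))
end

# 16,000 random points, distance from each to one fixed target.
points = Array.new(16_000) { [rand(-90.0..90.0), rand(-180.0..180.0)] }
target = [37.7749, -122.4194]

elapsed = Benchmark.realtime do
  points.each { |lat, lon| haversine(lat, lon, *target) }
end
puts format("%.1f ms for %d points", elapsed * 1000, points.size)
```

The loop body is pure `Math` calls, which is close to the worst case for Ruby relative to a compiled language; most web endpoints spend far less of their time in code like this.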

I've worked with a bunch of large, legacy Rails apps. In between a couple of Rails jobs, I worked on a large .NET app, which didn't have much better performance. One service I helped migrate from Rails to Spring/Java for an enterprisey client actually got slower overall, despite moving to the JVM.


These are the response times for Discourse, a complex open-source Rails app. Even the slowest endpoint has an average response time of 46ms.

Percentile: response time (ms)

  categories_admin: 50: 17, 75: 18, 90: 22, 99: 29
  home_admin:       50: 21, 75: 21, 90: 27, 99: 40
  topic_admin:      50: 17, 75: 18, 90: 22, 99: 32
  categories:       50: 35, 75: 41, 90: 43, 99: 77
  home:             50: 39, 75: 46, 90: 49, 99: 95
  topic:            50: 46, 75: 52, 90: 56, 99: 101


I hadn't realized that QQ and WeChat are both part of Tencent but belong to different divisions. It would be interesting to know how collaborative or rivalrous the relationship between them is.


QQ was a desktop-focused app, and WeChat was a ground-up rewrite of messaging for mobile. Competing against themselves is pretty brilliant, IMO: it creates alternatives for users while still letting the company own the market.

Interesting YC blog post about Tencent and WeChat. [1]

1. http://blog.ycombinator.com/lessons-from-wechat/


I'd call it an interpretation, not a theory. But clearly the conflicting inputs were a UI problem, and while there may be tradeoffs, a better UI could have prevented the AF 447 crash.


This really won't die. It's clear if you read the accident report (linked in another comment) that the plane was already doomed by the time that conflicting inputs were potentially a factor.


Basic airmanship could've prevented the crash too. The pilot-in-command got a stall warning and pulled back on the stick, climbing to 38,000 feet.

That's not a UI issue; it's a "forgetting one of the most fundamental rules of flying" issue.

https://aviation.stackexchange.com/questions/1418/what-happe...


You normally can't stall an Airbus aircraft, except in cases where the flight mode changes, as it did in this situation. This was a huge part of the problem: an emergency occurred, and the pilots were effectively flying an aircraft they hadn't flown before.

Fully pulling back the stick is what you'd normally do in an Airbus to gain altitude. It's guaranteed not to stall the plane.


> Fully pulling back the stick is what you'd normally do in an Airbus to gain altitude. It's guaranteed not to stall the plane.

I'd love to see some documentation on "pull back in a stall". In fact, this is all I could find:

http://www.pprune.org/tech-log/415373-new-airbus-stall-recov...


UI issues contribute to situational complexity, stress, ambiguity, and failures to respond appropriately to circumstances.

And whilst the BEA report ... rather inexplicably, frankly ... fails to mention the inputs issues, it does address all the other factors I've described here.


"Tax credits are subtracted not from taxable income, but directly from a person’s tax liability; they therefore reduce taxes dollar for dollar. As a result, credits have the same value for everyone who can claim their full value."

http://www.taxpolicycenter.org/briefing-book/whats-differenc...


Right, they are talking about the situation where someone only owes $6,000 in taxes. They get a $6,000 tax credit.

So setting aside whether this is fair or unfair, people that owe more taxes clearly get a bigger benefit from the credit.


The poor don't have taxable income. You can only claim the credit if you itemize, which is restricted to even higher tax brackets.


No need to itemize, it is reported on line 54 and offsets line 44:

https://www.irs.gov/pub/irs-pdf/f1040.pdf

But lots of people don't end up paying $7500 in income taxes.
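The arithmetic of a nonrefundable credit is just a cap at your liability. A minimal sketch (the method name is made up; the $7,500 figure is the credit under discussion):

```ruby
# A nonrefundable credit can't reduce your tax bill below zero,
# so its value is capped at your tax liability.
def credit_benefit(tax_liability, credit)
  [tax_liability, credit].min
end

puts credit_benefit(6_000, 7_500)  # owes $6,000: only $6,000 of benefit
puts credit_benefit(12_000, 7_500) # owes $12,000: the full $7,500
```

This is why someone with a larger liability captures more of the credit, even though the nominal amount is the same for everyone.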

