
> author has very strong opinions.

So true. To the point that I have to maintain my own fork just to make the Command key my Meta key.


GIAC has zero authority. Any group of people can get together, make their own policies, and print a nice little certificate when somebody applies.

This comment seems crazy to me.

China's political stance more closely resembles right-wing policies than left-leaning ones.

All the xenophobic notions you are talking about, China has in spades.

I am not saying that what China is doing here will or won't lead to the outcome you describe. What I am saying is that your conflation with Western politics is completely out of this world, and it's an excellent example of why the outcome you describe may become a reality for China.


Why do you think right-wing policies are intrinsically tied to anti-science sentiment?

Yes, politically China does look very right-wing with some of their policies (like those trying to push women to have babies), despite the "communism" moniker. However, unlike the US, they are very pro-science and they put their money where their mouth is.


You’re arguing against a position I never took.

I didn’t say right-wing policies are inherently anti-science. I said your contrast makes no sense. You’re blaming Western decline on xenophobia and far-right politics, but China has strong nationalism, social conservatism, demographic engineering, speech controls, and limited immigration. All things typically associated with the political tendencies you are criticizing.

So which is it?

If nationalism and social conservatism are corrosive to long-term success, then China shouldn’t be your example of strategic competence. If those traits aren’t inherently corrosive, then maybe "far-right politics" isn’t the explanatory variable you think it is.

You can’t simultaneously argue that right-wing xenophobia is sinking the West while praising a country that institutionalizes many of those same tendencies.

If your actual claim is about state capacity and science investment, then say that. But don’t smuggle in partisan framing as the cause of decline and then retreat to "pro-science funding" when the comparison falls apart.


> They can get rid of 1/3-2/3s of their labor and make the same amount of money, why wouldn't they.

Because companies want to make MORE money.

Your hypothetical company is now competing with another company that did the opposite, and they now get to market faster, fix bugs faster, add features faster, and respond to changes in the industry faster. Which results in them making more, while your employee-light company is just the status quo.

Also. With regard to oil, consumption increased as oil became cheaper. With AI we now have a chance to do projects that simply would have cost way too much to do 10 years ago.


> Which results in them making more

Not necessarily.

You are assuming that people can consume whatever is put in front of them. Markets get saturated fast. The "changes in the industry" mean nothing.


A) People are so used to infinite growth that it’s hard to imagine a market where that doesn’t exist. The industry can have enough developers and there’s a good chance we’re going to crash right the fuck into that pretty quickly. America’s industrial labor pool seemed like it provided an ever-expanding supply of jobs right up until it didn’t. Then, in the 80s, it started going backwards preeeetttty dramatically.

B) No amount of money will make people buy something that doesn’t add value to or enrich their lives. You still need ideas, for things in markets that have room for those ideas. This is where product design comes in. Despite what many developers think, there are many kinds of designers in this industry and most of them are not the software equivalent of interior decorators. Designing good products is hard, and image generators don’t make that easier.


It's really wild how much good UI stands out to me now that the internet has been flooded with generically produced slop. I created a bookmarks folder for beautiful sites that clearly weren't created by LLMs and required a ton of sweat to design the UI/UX.

I think we will transition to a world where handmade software/design will come at a huge premium (especially as the average person gets more distanced from the actual work required to do so, and the skills become rarer). Just like the wealthy pay for handmade shoes, as opposed to something off the shelf from Foot Locker, I think companies will revert back to hand-crafted UX. These identical center-column layouts with a 3x3 feature-card grid at the bottom of your landing page are going to get really old fast in a sea of identical design patterns.

To be fair, component libraries were already contributing to this degradation in design quality, but LLMs are making it much worse.


Yeah. For a few years, I’ve been predicting that human-made and designed digital goods will be desirable luxury items in the same exact way the Arts and Crafts movement, in the late 19th/early 20th century, made artisan furniture, buildings, etc. to push back against the megatons of chintzy shit produced during the Industrial Revolution.

Component libraries can be used to great effect if they are used thoughtfully in the design process, rather than in lieu of a design process.


Paying a premium for "luxury" makes sense for people looking for status signaling or a unique experience. Software is (most of the time) a utility. People would be willing to pay a premium when there is a tangible performance improvement. No one is going to pay more for a run-of-the-mill SaaS offering because the website was handcrafted.

> People would be willing to pay a premium when there is a tangible performance improvement.

Developers like to assume this because it’s something they value in their own software usage, and something they know how to address. That’s not something you can generalize to non-developers. Look, feel, and features are the main differences users see between FOSS and most commercial software — not performance. In fact, FOSS performance is obviously better in many/most cases. That’s why almost the only FOSS projects with a significant number of non-dev users are run by organizations that employ designers — Mozilla, Blender, Signal, Android, etc.

Unless you’re making a tool for developers or gamers, or the competition is intolerably bad, people rarely pay for increased performance.


> people rarely pay for increased performance.

I wasn't using "performance" in the sense of "how fast does it go?" but in the sense of "how well does it do what I need it to do?"

> Mozilla, Blender, Signal, Android, etc.

First, this is selection bias. I'm sure we can find plenty of cases of software that failed even when designers were around, and I can certainly point to software/services that have horrendous "UI" but were still incredibly useful/valuable: Craigslist and Bloomberg Terminal come to mind.

Second, you are confusing cause and effect. The examples you gave only employ designers now because they were valuable even without designers working on it.

Anyway, you did not address the core point of my argument: no one is going to pay more for a run-of-the-mill SaaS offering because the website was handcrafted.


> With AI we now have a chance to do projects that simply would have cost way too much to do 10 years ago.

Not sure about that, at least if we're talking about software. Software is limited by complexity, not the ability to write code. Not sure LLMs manage complexity in software any better than humans do.


You are right. What will happen is somebody will pay “x” for the clothing, but the same company will charge “2x” for transport.


I have been thinking a lot about the use of AI and how to use it. Part of my process has been watching others, namely the people who I thought were incompetent at their job before AI.

I have found the following, but I suspect as AI gets better this will change.

1) Those who were incompetent before still are, but AI hides it.

2) Those who were competent before AI do vastly more with it. They seem to apply it in a way that simply overshadows what the incompetent are doing.

3) The incompetent seem to be fascinated with things like skills, pre-prompts, setting policies and guidelines, and workshops. The competent seem to need none of this; they are not going to workshops, already have their own methods, and are simply more productive.


I think somebody is trying to push a narrative. A now-deleted post was pointing to this:

https://seattlemedium.com/lisa-gelobter-the-trailblazing-com...

And I can see how maybe the author of the post this entire thread is about could see this and just roll with it.

For the record, the link I posted seems to be entirely and completely wrong. If a post so factually wrong had been written about me, all while taking credit where none was owed, I would be so embarrassed.

But we live in a strange new world where we can just fabricate anything we want, backfill websites, and probably pollute AI with nonsense, just to push a political agenda and gain favor with the masses, who either are ignorant or don’t care to ever know the truth.


It's interesting to poke around and see how the game of telephone works.

The first mentions I saw on Twitter were from February 2018[0]. Subsequent Black History Months[1] would reiterate how she invented GIFs or sometimes animated GIFs. You even get crazy things like how she invented the animated GIF while working with the Obama administration[2]. (That post references an article that talks about her working with the Obama administration that doesn't mention GIFs[3]. The author just merged the two things she's credited with.)

In 2024, the story is that she invented the GIF at Netscape[4], which obviously makes little sense. It could be a reference to GIFs looping, but I see no evidence for that, especially since her LinkedIn[5] doesn't say she worked at Netscape. She worked at Macromedia, which involved work with Netscape, and I suspect the genesis is her work on Shockwave during that time (2005-2010), and that was taken to be a precursor to animated GIFs (obviously false, but an easy mistake for someone young and non-technical. Maybe it's a cultural precursor to modern animated GIF usage in people's minds?)

Overall, though, this is a pretty dumb thing for them to claim, even if the claim is widespread across the Internet these days. She was roughly 17 when animated GIFs were first developed.

As I said, it's interesting to see how the game of telephone plays out. I don't think anyone involved in this was intentionally spreading false information, and I don't really expect random Twitter users to fact-check carefully. I would like the NYT to put some effort into it, though. As it is, we now have the NYT touting an obviously false claim when she actually did a lot of really important and impressive stuff they could focus on instead. There's no need to spread fake accolades for her. Her actual contributions stand on their own.

[0] https://x.com/Reel365/status/960288180447694848

[1] https://x.com/search?q=%22Gelobter%22%20%22gif%22%20until%3A... / https://x.com/search?q=%22Gelobter%22%20%22gif%22%20until%3A... / https://x.com/search?q=%22Gelobter%22%20%22gif%22%20until%3A...

[2] https://www.linkedin.com/feed/update/urn:li:share:7034543821...

[3] https://theblackwallsttimes.com/2022/08/18/computer-scientis...

[4] https://seattlemedium.com/lisa-gelobter-the-trailblazing-com...

[5] https://www.linkedin.com/in/lisagelobter/details/experience/


Addendum: In the paragraph after merging Shockwave and GIFs, NYT references a Forbes interview[0] where she specifically says she didn't invent GIFs.

NYT:

> Ms. Gelobter was the director of program management at Macromedia where she helped develop Shockwave into a web plug-in that allowed for video games and animation on the web, turning still images into moving GIFs — animated images known as a graphics interchange format.

Interview linked in the very next paragraph:

> Gelobter: I want to clarify that I did not create GIFs although I get credited for it a lot. I think people conflated thinking about animation on the web as being animated GIFs but that was Shockwave. Again, what we did with Shockwave was transformative.

[0] https://www.forbes.com/sites/jumokedada/2021/02/18/meet-the-...


Good note, thank you for highlighting it.


The NYT seems to have an interest in making Mamdani and his administration more palatable to their readers:

http://lee-phillips.org/nytIHRA


If you go to the IHRA's website [1] and scroll down, you'll find the part of the definition that equates criticism of Israel with antisemitism:

> Denying the Jewish people their right to self-determination, e.g., by claiming that the existence of a State of Israel is a racist endeavor.

> Drawing comparisons of contemporary Israeli policy to that of the Nazis.

[1] https://holocaustremembrance.com/resources/working-definitio...


The parts you quote do not equate criticism of Israel with antisemitism.


Do you not consider calling Israel's existence racist or comparing its actions to the Nazis to be criticism?


Legitimate criticism is about specific actions or behaviors. You could even say that you are opposed to certain laws, like the right of return.

Saying that a nation should just not exist, regardless of how they behave or what laws they have betrays a deeper irrational hatred. Especially if there’s only one state that must not exist, while there are many countries with laws you would disagree with.


Saying that the existence of a state is a racist endeavor is not inherently hateful, and doesn't just apply to Israel.

For example, apartheid South Africa was a racist endeavor.


Are you against the existence of South Africa? Or were you against the existence of some laws in South Africa?

It also isn’t racist to be against the existence of states in general and believe in a world without borders. But to say that Ukraine has a right to exist, Ireland has a right to exist, Palestine has a right to exist, Greenland has a right to exist - only Israel does not have a right to exist is antisemitic.


I am against the existence of apartheid South Africa, just as I'm against the existence of apartheid Israel.

No state should discriminate against people based on their religion. (And absolutely no state has a right to exist. People have rights, not governments.)


I think it is funny that all these companies are spending a ton and racing to have an AI story. It’s almost like none of the executives understand AI.

If you are changing your product for AI, you don’t understand AI. AI doesn’t need you to do this, and doing it doesn’t make you an AI company.

AI companies like Anthropic, OpenAI, and maybe Google will simply integrate at a more human level and use the same tools humans used in the past, but with higher speed and reliability.

All this effort is wasted, as AI doesn’t need it, and your company is spending millions, maybe billions, to be an AI company that will likely be severely devalued as AI advances.


It’s not that they don’t work. It’s how businesses handle hardware.

I worked at a few data centers on and off in my career. I got lots of hardware for free or on the cheap simply because the hardware was considered “EOL” after about 3 years, often when support contracts with the vendor end.

There are a few things to consider.

Hardware that ages produces more errors, and those errors cost you, one way or another.

Rack space is limited. A perfectly fine machine that consumes 2x the power for half the output costs you money. It’s often cheaper to replace a perfectly fine working system simply because the new one performs better per watt in the same space (rough numbers in the sketch below).

Lastly. There are tax implications in buying new hardware that can often favor replacement.
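
For the rack-space/power point, here's a rough back-of-the-envelope sketch in Python. Every number (power price, wattages, output units) is invented, just to show the shape of the math:

    # Made-up numbers: the old box draws 2x the power for half the output.
    power_cost = 0.12            # assumed $/kWh
    hours = 24 * 365             # one year of 24/7 operation

    old = {"watts": 1000, "output_units": 1.0}
    new = {"watts": 500,  "output_units": 2.0}

    def dollars_per_unit_year(box):
        # Yearly electricity cost divided by how much work the box does.
        kwh = box["watts"] / 1000 * hours
        return kwh * power_cost / box["output_units"]

    print(dollars_per_unit_year(old))   # ~1051 $/output-unit-year
    print(dollars_per_unit_year(new))   # ~263 -- 4x cheaper per unit of work

    # And in the same rack slot the new box also does 2x the work, so the
    # comparison is even more lopsided once limited rack space is priced in.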


I’ll be so happy to buy an EOL H100!

But no, there are none to be found. It is a 4-year-old, two-generations-old machine at this point, and you can’t buy one used at a rate cheaper than new.


Well, demand is currently so high that this cycle likely doesn't exist yet for fast cards.

For servers, I've seen slightly used equipment sold in bulk to a bidder, who may have a single large client buy all of it.

Then, around the time the second cycle comes around, it's split up into lots and a bunch of it ends up at places like eBay.


Yeah, looking at the 60-day moving average on computeprices.com, H100s have actually gone UP in cost recently, at least to rent.

A lot of demand out there for sure.


Not sure why this "GPUs obsolete after 3 years" claim gets thrown around all the time. It sounds completely nonsensical.


Especially since AWS still has p4 instances, which are 6-year-old A100s. Clearly, even for hyperscalers, these have a useful life longer than 3 years.


I agree that there is a lot of hyperbole thrown around here, and it's possible to keep using some hardware for a long time, or to sell it and recover some cost. But my experience planning compute at large companies is that spending money on new hardware and upgrading can often save money long term.

Even assuming your compute demands stay fixed, it's possible that a future generation of accelerator will be sufficiently more power- and cooling-efficient for your workload that upgrading is a positive return on investment, more so when you take into account that you can start depreciating the new hardware again.

If your compute demands aren't fixed, you have to work around limited floor space, electricity, cooling capacity, network capacity, backup generators, etc., so moving to the next generation is required to meet demand without extremely expensive (and often slow) infrastructure projects.


Sure, but I don't think most people here are objecting to the obvious "3 years is enough for enterprise GPUs to become totally obsolete for cutting-edge workloads" point. They're just objecting to the rather bizarre notion that the hardware itself might physically break in that timeframe. Now, it would be one thing if that notion was supported by actual reliability studies drawn from that same environment - like we see for the Backblaze HDD lifecycle analyses. But instead we're just getting these weird rumors.


I agree that it's a strange notion that would require some evidence, and I do see it in some other threads. But looking at the parent comments above, it seems people are discussing economic usefulness, so that is what I'm responding to.


A toy example: NeoCloud Inc builds a new datacenter full of the new H800 GPUs. It rents out a rack of them for $10/minute while paying $6/minute for electricity, interest, loan repayment, rent and staff.

Two years later, the H900 is released for a similar price, but it performs twice as many TFLOPS/watt. Now any datacenter using the H900 can offer the same performance as NeoCloud Inc at $5/minute, taking all their customers.

[all costs reduced to $/minute to make a point]
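
If it helps, here's the toy math as a quick Python sketch (same invented numbers as above, nothing real):

    # NeoCloud's toy economics; all figures are invented.
    revenue = 10.0      # $/minute charged per rack of H800s
    cost = 6.0          # $/minute: power, interest, loan repayment, rent, staff
    minutes_per_year = 60 * 24 * 365

    margin = revenue - cost                # $4/minute while competitive
    print(margin * minutes_per_year * 2)   # ~$4.2M gross over the first two years

    # H900 ships: same performance at roughly half the operating cost lets
    # competitors profitably charge $5/minute, which is below NeoCloud's cost.
    market_price = 5.0
    print(market_price - cost)             # -$1/minute: now underwater

    # So the H800s have to repay their purchase price inside that two-year
    # window, or the "perfectly fine" hardware starts losing money.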


It really depends on how long `NeoCloud` takes to recoup their capital expenditure on the H800s.

Current estimates are about 1.5-2 years, which not-so-suspiciously coincides with your toy example.


It's because they run 24/7 in a challenging environment. They will start dying at some point, and if you aren't replacing them, you will have a big problem when they all die en masse at the same time.

These things are like cars; they don't last forever and they break down with usage. Yes, they can last 7 years in your home computer, where you run them 1% of the time. They won't last that long in a data center where they are running 90% of the time.


A makeshift cryptomining rig is absolutely a "challenging environment" and most GPUs by far that went through that are just fine. The idea that the hardware might just die after 3 years' usage is bonkers.


Crypto miners undervolt GPUs for efficiency, and in general crypto mining is extremely lightweight on GPUs compared to AI training or inference at scale.


With good enough cooling they can run indefinitely! The vast majority of failures happen either at the beginning, due to defects, or at the end, due to cooling. It’s as if the idea that hardware with no moving parts (except the HVAC) is somehow unreliable came out of thin air!


Economically obsolete, not physically obsolete. I suspect this is in line with standard depreciation.


There are plenty on eBay? But at the end of your comment you say “a rate cheaper than new”, so maybe you mean you’d love to buy a discounted one. But they do seem to be available used.


> so maybe you mean you’d love to buy a discounted one

Yes. I'd expect 4-year-old hardware that was used constantly in a datacenter to cost less than when it was new!

(And just in case you did not look carefully, most of the eBay listings are scams. The actual products pictured in them are A100 workstation GPUs.)


Do you know how support contract lengths are determined? Seems like a path to force hardware refreshes with boilerplate failure data carried over from who knows when.


> Rack space is limited.

Rack space and power (and cooling) in the datacenter drive what hardware stays in the datacenter.


> The exporter doesn't give a shit because it doesn't affect them at all.

Well, that’s not true. Otherwise you are going to have to explain why so many people outside the USA were upset with the tariffs, and why retaliatory tariffs were applied in the other direction.

I make no claim other than that your quoted assertion is wrong.

