
I don't doubt it, but what were they all doing? The Metaverse had 10k employees on it for multiple years and seemed to be at almost a standstill for long periods. What do these massive teams do all day?

Have meetings to figure out how to interact with the other 9,990 employees. Then try to make sense of the skeleton app left behind by a team of transient engineers who left for their next gig after 18 months, before throwing it out and starting again from scratch.

Exactly. What Meta accomplished could have been done by a team of less than 40 mediocre engineers. It’s really just not even worth analyzing the failure. I am in complete awe when I think about how bad the execution of this whole thing was. It doesn’t even feel real.

Actually, I would like to see a post-mortem showing where all the money actually went; they somehow spent ~85x what RSI has raised for Star Citizen, and what they had to show for it was worse than some student projects I've seen.

Were they just piling up cash in the parking lot to set it on fire?


At least part of the funding went to research on hard science related to VR, such as tracking, lenses, CV, 3D mapping etc. And it paid off, IMO Meta has the best hardware and software foundation for delivering VR, and projects like Hyperscape (off-the-shelf, high-fidelity 3D mapping) are stunning.

Whether it was worth it is another question, but I would not be surprised if it gets recycled to power a futuristic AI interface or something similar at some point.


Even within the XR industry, we had no clue where all that money went. During the metaverse debacle, the entire industry stagnated. Once metaverse failed, XR adjacent shops started to fail. There was no hardware or technique innovation shared with the rest of the industry, and at the time the technology was pretty well settled.

Since then we lost all the medium players and it's basically just Facebook, Valve, and Apple.


The sad part about this fact is that the tech is mated to a completely rotten ecosystem. If it were sold off I'd be excited to try it.

Big company syndrome has existed for a long time. It’s almost impossible to innovate or move fast with 8 levels of management and bloated codebases. That’s why startups exist.

Everyone is missing the why here: this only happens because the whole stack is vertically integrated. Even if, say, LG wanted to make a box like this and update it for 10 years, they couldn't; they don't make the chips. Qualcomm straight up refuses to support chips through this many Android releases. Even if device manufacturers want to support devices forever, it won't matter if the actual SoC platform drops support.


While the vertical integration is definitely the best way to get it done, it's not strictly required as long as there is good enough documentation for a platform. Linux originally supported Intel without any Intel engineers even knowing it existed.

Also consider Apple's chips, which have gotten Linux support without Apple ever submitting a single line of code.

While Qualcomm's behaviour is definitely a massive bummer (not to mention Qualcomm's competitors), it doesn't stop manufacturers from supporting their devices. It merely stops maintaining support from being cheap and easy.


Not only that, "vertical integration" is a red herring. If you had a "vertically integrated" device made entirely by Qualcomm and they stopped supporting it after 3 years then the vertical integration buys you nothing. The actual problem is that Qualcomm sucks.


> Linux originally supported Intel without any Intel engineers even knowing it existed.

It should be noted that Intel makes CPUs, while Qualcomm makes SoCs, which include much more than just a CPU. Usually supporting the CPU is the easiest part, the rest is the issue.

That said, when device OEMs release the kernel sources, modders are able to update custom roms for a long time, so I doubt this is just a Qualcomm issue.


> It should be noted that Intel makes CPUs, while Qualcomm makes SoCs, which include much more than just a CPU. Usually supporting the CPU is the easiest part, the rest is the issue.

Here's a random 15 year old Intel PC (you can also do this on many current ones):

  $ lspci | grep -v Intel
  [no output]
Every piece of silicon in it is made by Intel and most of them, including the GPU, are integrated into the CPU. And it's all supported by current Linux kernels. The same is true for many AMD systems except that you'll usually see a third party network or storage controller which is itself still supported.

So no, it's a Qualcomm problem.


They update the roms while keeping everything provided by Qualcomm the same

so basically the kernel is frozen even if the android version is updated


The kernel is usually frozen, but sometimes projects like postmarketOS can use those sources to upstream the changes and add general Linux support.

Anyone can make a diff between the upstream kernel and the Qualcomm kernel. Maintaining these changes into later versions of the kernel will be quite challenging, but the base is already there.
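As a toy illustration of that diff (directory and file names below are made-up stand-ins, not real kernel trees):

```shell
# Stand-in trees: 'upstream' plays the mainline kernel,
# 'vendor' plays the OEM/Qualcomm source drop.
mkdir -p upstream vendor
printf 'core code\n' > upstream/core.c
cp upstream/core.c vendor/core.c
printf 'vendor hack\n' >> vendor/core.c

# diff exits nonzero when the trees differ, so tolerate that.
diff -ruN upstream vendor > vendor-changes.patch || true

# The vendor-only line shows up as an addition in the patch.
grep '^+vendor hack' vendor-changes.patch
```

The same recursive unified diff against the matching upstream tag is the usual starting point for forward-porting a vendor kernel.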

That said, phones also come with plenty of binary drivers and those cannot be ported. That's an important reason not to bother with later kernel versions in custom ROMs: after all of your hard work, the end result will be missing important features such as GPU acceleration.


Why do you think PCs don't need a "Dell XPS 13 9350 Windows" build and a "Lenovo ThinkPad T14s Gen 6 Linux" build and so on, while phones need a "Galaxy S26 Linux", a "Xiaomi 16 Linux", and so on?


Because ARM lacks some of the device auto-discovery features that amd64 provides for free, unless you're lucky and use a device with ACPI+DSDTs on ARM. You need a special build for the hardware, but you don't need to alter the source code.
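Assuming a Linux system, a quick sysfs check (standard kernel paths, nothing vendor-specific) shows which description mechanism the firmware exposes:

```shell
# ACPI tables let one generic kernel enumerate the hardware at boot;
# a devicetree means the kernel was handed a blob built for this exact board.
if [ -d /sys/firmware/acpi/tables ]; then
    echo "hardware described via ACPI"
elif [ -d /sys/firmware/devicetree/base ]; then
    echo "hardware described via devicetree"
else
    echo "no firmware hardware description exposed"
fi
```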

Custom kernels also exist for amd64 devices, often including workarounds and patches that are not in mainline to improve performance or compatibility.

As a vendor, that requires practically zero extra effort.

https://wiki.postmarketos.org/wiki/Devices has a list of devices that run either mainline or almost-mainline Linux. Only the "downstream" devices require vendor Linux kernels. Of course, hardware support is partial for most of these devices because vendors haven't contributed proper upstreamable drivers and volunteers haven't had the time to write them yet, but it's not like every ARM device needs a special kernel fork, that's just something ARM vendors do out of laziness.


> Even if device manufacturers want to support devices forever it won’t matter if the actual SoC platform drops support.

Yeah, so that's not a why, that's a how (and it's not necessary or sufficient anymore, see the Samsung and Pixel reference).

The why seems very much what the article covers.


Yet Microsoft figured this out decades ago.

I (well, my mom) had a version of Windows 7, supported with security updates, on my 2007 Mac Mini (not a typo) until 2023.


That was from when Macs ran Intel and could easily dual-boot Windows. I still have an old MacBook Pro with Windows 10 on it. Updates only stopped recently because Win10 is at end of life. I've been meaning to blow everything out and install Linux.


I'm giving props to Microsoft because it did wrangle an industry together to standardize: one company makes the operating system, other companies make the hardware, and yet you can still upgrade your operating system even without the vendor's support.

Yet Google can’t seem to make that happen.


Because Google doesn't deliver the full OS; it delivers a bunch of stuff that the vendors then bastardize and use with insane drivers and crap from SoC vendors.

They should NEVER accept any of the binary-only crap drivers; they should demand code be upstream or they won't buy. But they don't care. Google doesn't care.


> Qualcomm straight up refuses to support chips through this many Android releases.

That's not entirely accurate. They do provide chips with extended support, such as the QCM6490 in the Fairphone 5. These are not popular because most of the market demands high performance, and companies profit from churning out products every year, but solutions exist for consumers who value stability and reliability over chasing trends and specs.


If you read the article the actual "why" is because the CEO personally requested it and gave an effectively unlimited budget.


No need to be rude. The person above is adding a new insight to the conversation.

Vertical integration makes it possible but motivation makes it happen. Where is Samsung's ultra LTS Exynos device?


I think it's more a combination of vertical integration and Nvidia upper management actually wanting to provide support for so long. Apple, Google, and Samsung all make smartphones with their own chips, and yet none of them support running the newest OS on 10+ year old devices.


I have to wonder if the Nintendo Switch picking up the Tegra X1 SoC has something to do with it. There's a good chance a lot of components of the (custom microkernel) operating system are derived from Android, and with the Switch receiving active support for so long, I wouldn't be surprised if the work on the Shield TV and the Switch is related.

The Switch has been shipping for nearly 10 years; the shelf life of most any processor Apple, Google, Samsung, Qualcomm, or MediaTek (?) pushes out pales in comparison.

Though Apple in particular is interesting, as their Apple TV lineup also has the same long legs, with the Apple TV HD/4th Gen releasing in 2015 and receiving the latest OS.


Qualcomm's industrial ARM SoCs are supported for nearly 10 years: the Qualcomm QCM6490 in the Fairphone 5 gets 8 years of security updates.


It's called a legally binding contract; businesses use them all the time to enforce support.


Contracts can be broken and resolved with money. Happens all the time.


Yes, and lawsuits do exist as well.

Point being, the blame doesn't lie only with Qualcomm, as Google advocates tend to claim.


I've only used it when I'm in a pinch but it's handy. Blowing up mobile apps to a larger screen and multitasking isn't ideal certainly but I've been able to handle "email job" type activities while out of pocket. That said I've never heard of anyone else who's actually used it.


The RK3568 is an interesting choice. Why not the H700 or something with a good amount of mainline kernel support already?


I don't know about the H700, but some of those Allwinner chips used to be super cheap around Covid. I checked but couldn't find prices. Does anyone know where the price is now?


The RK3568 doesn't have good mainline support??


There's a scrapyard right by my hometown with a fancy billboard, like the ones for the lottery that have the number displays. It's just for showing copper prices, bright copper, copper #1 and copper #2. There's so much money in it they can afford to advertise now.


The price of copper is not extraordinarily high.

https://www.gurufocus.com/economic_indicators/4553/inflation...


I couldn't immediately see the price not adjusted for inflation. If copper has held its value better than other proceeds of crime, that could still make it more attractive.


It's incredible how bad driver support is in the ARM space. I was looking into some of the various Anbernic handhelds and their Linux firmware. Despite their SoCs being advertised as having Vulkan 1.1 support, every firmware for the device ships with it disabled.


So many chipmakers and development board manufacturers see software/driver support as some kind of necessary evil--a chore that they grudgingly do because they have to, and they will do the absolute minimum amount of work, with barely enough quality to sell their hardware.


It bewilders me. Software's gotta be easier than hardware right? Not that either is easy but as a software engineer, the engineering that goes into modern hardware mystifies me.


It's different definitions of "easy."

With hardware, you have about one billion validation tests and QA processes, because when you're done, you're done and it had better work. Fixing an "issue" is very very expensive, and you want to get rid of them. However, this also makes the process more of, to stereotype, an "engineer's engineering" practice. It's very rules based, and if everything follows the rules and passes the tests, it's done. It doesn't matter how "hacky" or "badly architected" or "nasty" the input product is, when it works, it works. And, when it's done, it's done.

On the other hand, software is highly human-oriented and subjective, and it's a continuous process. With Linux working the way it does, with an intentionally hostile kernel interface, driver software is even more so. With Linux drivers you basically choose to either get them upstreamed (a massive undertaking in personality management, but Valve's choice here), deal with maintaining them in perpetuity at enormous cost as every release will break them (not common), or give up, release a point-in-time snapshot, and ride into the sunset (which is what most people do). I don't really think this is easier than hardware; it's just a different thing.


From the outside looking in, it really seems like both fields are working around each other in weird ways, somewhat enforced by backwards compatibility and historical path dependence.

The transition from more homogeneous architectures to the very heterogeneous and distributed architectures of today has never really been all that well accounted for, just lots of abstractions that have been papered over and work for the most part. Power management being the most common place these mismatches seem to surface.

I do wonder if it will ever be economical to "fix" some of these lower level issues or if we are stuck on this path dependent trajectory like the recurrent laryngeal nerve in our bodies.


> intentionally hostile kernel interface

If open-sourcing your entire kernel is being "hostile", I don't think that there is or ever was a "friendly" OS.


I think what they were referencing with that is that the kernel's driver interface is unstable; it changes with literally every version, which is why you want to upstream your driver so you don't have to keep it up to date yourself after that.


I've done both. There are difficulties with both but overall I would say software is significantly more difficult than hardware.

Most hardware is actually relatively simple (though hardware engineers do their best to turn it into an incomprehensible mess). Software can get pretty much arbitrarily complex.

In a way I suspect it's because hardware engineers are mostly old fogies stuck in the 80s using 80s technologies like Verilog. They haven't evolved the tools that software developers have that enable them to write extremely complicated programs.

I have hope for Veryl though.


Wow, super hard disagree. The comment here sounds like the typical arrogance hardware engineers face from people in software who've never really done the job or have only superficial experience.

I won't blindly state "software is easier," but software is definitely easier to modify, iterate on, and fix, which is why software tools and the resulting applications can evolve so fast.

I have done both HW & SW, routinely do so, and switch between deep hardware jobs and deep software so I'm qualified to speak.

If you're blinking a light or doing something with Bluetooth you can buy microcontrollers that have this capability and yes that hardware is simple.

But have you ever DESIGNED a microcontroller, let alone a modern processor or a complex system?

Getting something "simple" like a microcontroller to reliably start up involves complex power sequencing, making sure an oscillator works, and a phase-locked loop that behaves correctly, and that's just "making a clock signal run at a frequency"; we're not talking about implementing PCIe Gen5 or RDMA over 100Gbps Ethernet.

Hardware engineers definitely welcome better tools, but the cost of using an unproven tool, or a tool that might have "a few" corner cases resulting in your $5-million SoC not working, is a hard risk to tolerate, so sadly (and to our pain) we end up using proven but arcane infrastructure.

Software in contrast can evolve faster because you can "fix it in software". New tools can be readily tested, iterated on and deployed.


> But have you ever DESIGNED a microcontroller

Yes... But in fairness I was just talking about the digital RTL, not the messy analogue stuff (PLLs, power/reset, etc.). I've never done that.

> but software is definitely easier to modify, iterate and fix,

Definitely true.

> which is why sofware tools and resulting applications can evolve so fast.

Not sure I agree here though. It seems to me that EDA tools evolve super slowly because a) hardware engineers are timid old fogies who never want to learn anything new, and b) the big three have a monopoly on tooling.


What do you think about Atopile? I'm not a hardware person yet, but these seem similar.

https://atopile.io/


PCB and RTL are completely separate disciplines.


Software can always ship a new update for bugs or features.

Hardware not so much


In my experience, hardware companies all believe that software is trivial nonsense they don't need to spend any effort on. Consequently, the software that drives their hardware really sucks.


Software is easier than hardware in general, but companies generally pay their hardware guys 25-50% less than their software counterparts.


People repeat this line a lot but I don’t think it’s true. Companies like Intel, AMD, Arm, Broadcom, etc. afaik all pay their software folks of equivalent YoE or level roughly the same as their hardware folks. To the extent there’s any difference, it’s much less than 25%.

OTOH, there’s a small slice of (mainly) software companies like Google and Meta, along with Unicorn private companies, that skew the average software engineer salary high. Then there’s a long tail of “old school” hardware companies like TI, Motorola, Atmel, Microchip, and tons of smaller less well known companies that all pay much lower than Google. But they pay their software people poorly as well.

So if you just look at “average software engineer salary” vs “average hardware engineer salary” it appears that SW people are making 50% more than HW people, but it’s not at the same companies.


> Companies like Intel, AMD, Arm, Broadcom, etc. afaik all pay their software folks of equivalent YoE or level roughly the same as their hardware folks.

This is a fairly new phenomenon and it's mostly a consequence of the AI hype wave driving investment in hardware. Wages have mostly caught up at the big boy hardware companies but you'll still generally see a disparity outside that big group.


Come to think of it, for them it is basically customer support.

Most will want to outsource it as cheaply as possible and/or push it onto the community. They won't care if it takes an eternity for the customer to get their issues solved, as long as new customers keep buying.

And a few companies will see an opportunity to bring better customer care as an advantage and/or integrate it in their philosophy.


And it's the reason why for several years I didn't consider buying anything that had an AMD card (not now, but for many many years it was insanity).


Are you talking about the FGLRX drivers on Linux desktops?

Or their Windows driver quality back then?

I remember them both being pretty brutal.


The linux desktop was my reason.


But - doesn’t open sourcing it kinda make it someone else’s chore?

Obviously it has to “work” at sale but ongoing maintenance could be shared with the community.


I would recommend the Anbernic RG353M running ROCKNIX, or for a more powerful device, Retroid's Pocket 5 running ROCKNIX. Most other options have awful software support and are just e-waste, unfortunately.


They're stuck in the building model of making semi-custom SoCs for enormous corporations and releasing/developing drivers for them in extreme NDA environments.

It's fine (or arguably not) for locked down corporate devices.

Not so fine for building computers people want to use and own themselves.


At what point will the massive investments into AI show a respectable return? With the literal Trillion dollars OpenAI is constantly trying to raise what type of revenue would make that type of investment make sense? Even if you're incredibly bullish I don't know how you make that math work anymore.


I think it’s hard for individuals to think at the scale of very large institutional investors. They have lakes of money [1][2] that they have to invest in a balanced way, including investing a small percentage into “it probably won’t work but if it does we’ll make a fortune”-type bets. Given the size of these funds even a small percentage is a very large number.

There are also a finite number of opportunities to invest in, so companies that have “buzz” can create a bidding war among potential investors that drives up valuations.

So that's one possible reason, but in the end we can't know why another investor invests the way they do. We assume that the investor is making a rational decision based on their portfolio. It's fun to speculate about, though, which is why there's so much press attention.

[1] https://en.wikipedia.org/wiki/List_of_largest_pension_scheme...

[2] https://en.wikipedia.org/wiki/List_of_sovereign_wealth_funds...


The problem is we now have municipal and state governments taking on infrastructural investments (usually via subsidies) and energy companies racing to meet the load demands. There are all kinds of institutions, both private and public, dumping obscene amounts of money into this speculative investment that can't be a winner for everybody.

What happens to the ones that built for projects that end up failing? Seems to me the only way the story ends is with taxpayers on the hook once again.


Yes, many of us are investing in this (even indirectly) and may not realize it! The same rules still apply for municipal and state treasuries though: Only a small percentage of the overall portfolio should be allocated to high-risk investments.

Power generation and power grids are more generally useful today and less speculative than trying to win the AI race, so the risk for those types of things is somewhat lower, but there IS risk even in those.


I think the concern is something like the huge Entergy investment going on in Louisiana. Facebook basically cut a deal with them to build out all kinds of electrical load just for them. We also saw how committed Facebook was to the metaverse - they basically spent the GDP of a small nation, nothing came of it, fired a bunch of people, and moved on.

Entergy is not just going to sit around and take the L if the project doesn’t ultimately turn out to be a good long-term investment. They’re simply going to pass the cost on to their customers in the region (more so than they already plan to in the event of success). Meanwhile Louisiana taxpayers are footing the bill for all the subsidies going through these projects.

So yeah, I agree it's not quite as high risk because at least there's some infrastructural investment, but that's not the kind of investment that is really needed in the region right now, and having that extra capacity is unfortunately not a good thing.

To be clear I’m not really disagreeing with you. I’m just kind of bickering over the nuances lol


I think the market & political/economic actors as a whole are justifying these investments on the basis that the benefit is distributed across the labour market generally.

That is, it doesn't matter so much if OpenAI and individual investors get fleeced, if there's a 20-50% labour cost reduction generally for capitalism as a whole (especially cost reduction in our own tech professions, which have been very well paid for a generation) -- Institutional investors and political actors will benefit regardless, by increasing the productivity or rate of exploitation of intellectual / information workers.


Why Fears of a Trillion-Dollar AI Bubble Are Growing - https://www.bloomberg.com/news/articles/2025-10-04/why-ai-bu...

What Would an AI Crash Look Like? - https://www.bloomberg.com/news/newsletters/2025-10-12/what-h...


The bull thesis is you get human level intelligence and then it can do jobs like us.


As others have said before me: "the hype IS the product".


That's just a roundabout way of saying they don't expect their money back; they just hope to sell before the bubble bursts.


Modern American investing 101.


Which is the same thing as a scam.


It's gonna be very fun to watch SA being tried for fraud and deceiving investors about the "future profits" of his startup.


Au contraire: he and associated people (Musk, among others) will be, or are being, received as heroes by his class for helping to seemingly break the bargaining power of software engineers and of middle/upper-middle-class information workers and creatives generally.


The folks who would have to press charges are the ones far too embarrassed to admit how transparent the fraud they fell for was, and unwilling to nuke any remaining asset value.

It’s why Musk is also safe from similar problems.


Akshually, watch the hands. He never talks about profits at all, only about "disruptions". No refunds.


What fraud?

You should view your contributions as a donation. What donation has a ROI?


The live demo of this is brutal. https://x.com/ns123abc/status/1968469616545452055


All the VR/AR/XR demos are so insanely trivial and yet still manage to be much more difficult than current methods of doing things. Like, really, cooking?

Normal method:

* Search for a recipe

* Leave my phone on a stand and glance at it if I forget a step

Meta glasses:

* Put glasses on (there's a reason I got lasek, it's because wearing glasses sucks)

* Talk into the void, trying to figure out how to describe my problem as well as the format that I want the LLM to structure the response

* Correct it when it misreads one of my ingredients

* Hope that the rng gods give me a decent recipe

Or basically any of the things shown off for Apple's headset. Strap on a giant headset just so I can... browse photos? or take a video call where the other person can't even see my face?


I dunno, if these worked perfectly I don't think it'd be awful to be able to open my fridge and say "what can I make with this" and have it rattle off some suggestions based on my known preferences, and even show me images in their new display.

Hands-free while cooking (not having to touch my phone with messy hands) is not a bad thing either.


I touch my phone with messy hands all the time. They're water-resistant now; just wash it after.


It's more that after touching the phone I feel like I should really wash my hands before touching the food or doing anything food-related.


Yeah, but I already live by the 5-second rule anyway, so I'm more careless. You do have a point though; it's less hygienic for sure.


Yeah, I care less myself, and I'd probably believe I was training my immune system, but my partner would kill me.


It sucks now, no idea why, but a few years ago, with the Google Home Mini, I could just yell out all kinds of cooking-related questions with "Hey Google" and it would always give me a good answer. It was great for doing stuff hands-free when cooking, like when I just don't want to get raw chicken or whatever on my phone.

But yeah, it doesn't give me good answers any more; it usually tries to start an unrelated YouTube video or email me something about YouTube Plus or whatever.


I suppose that's a bit easier than reading it out to ChatGPT.


But your $800 glasses are exposed to the cooking area with steam, grease fumes, heat etc.


So wipe it. It's not like it's got an air intake.


But microphones and speakers. And what about the cooling of the chips?


This reads a bit like like a pre-PC take: "Why use a computer when a cookbook works fine?"

Imagine it’s 1992:

Cookbook: Open book, follow steps.

PC: Turn on tower, wait for DOS, fiddle with floppies, pray the printer works, hope the shareware recipe isn’t weird.

Not saying you're wrong but its easy to miss the big picture


> "Why use a computer when a cookbook works fine?"

I still feel that way. I have cookbooks because I find the UX better than searching for recipes.


So I can read the 20,000-word story about how the author was told this recipe by their brother's husband's step-grandmother while vacationing at the lake house with their golden retriever named Max, before I can get to the recipe.


This joke gets made every time and is hilarious every time, but you'd be hard pressed to find a recipe site that didn't have either a "print" or "go to recipe" button at the top.


Right, but we're in the 1992 of these glasses. Maybe they'll be good eventually. They aren't now.

And frankly, even the online recipe experience leaves much to be desired. Skip past the blog post. Skip past the list of ingredients. Skip past another blog post. Find the single statblock on the bottom that lists ingredients & amounts, & instructions - hoping that it exists.

Like other commenters, I've also started going back to paper cookbooks.


Not the same.

The internet and recipe websites solve a real problem: accessing recipes used to be expensive and not easy.

AR headsets don't solve any problem. If anything, they make up a nonexistent problem, attempt but fail to solve it, and make the experience even worse in the process.


I mean, depends on how you describe it. One could easily say:

Phone method:

* Find phone

* Search for the right app, before finding the right recipe

* Leave my phone on counter, constantly having to move it as I move plates, pans etc.

* Wash and dry hands after each step, before unlocking phone

* Clean it every time gunk gets to it

Meta glasses:

* They're already on, just ask for recipe

* No need to ever wash/dry hands, move a device around, or clean it since one can easily unlock it without touching it

Right? Similarly with cookbooks, the best case is great and the worst case is terrible. There's a reason there's a market for recipe websites, cookbooks, etc.


Okay. Now: Imagine it's 2025:

Cookbook: Open book, follow steps.

New gadget from a multi-billion-dollar company: showcases in a live demonstration that it's a broken piece of crap that doesn't work.

Like, are we forgetting that it didn't work? It sucked at the job! Let's not what-if or have some imaginary "okay, but pretend it's actually good," deal here. It was bad!


No? Because a traditional cookbook (paper or digital) is deterministic and LLMs are not.


honestly cookbooks genuinely are better

I got The Art of Italian Cooking recently and it's genuinely far easier to get a recipe from than trying to scroll through a 50-page monologue about the intricacies of someone's childhood before even listing the ingredients


Indeed. There is an element of trust with an actual cookbook - it signals quality.

The internet has over time become riddled with junk, especially since the cost of producing information is just your opportunity cost of time. Even that is going away with the use of LLMs...


This is a core issue of the content age that I don't see being readily resolved. Unfortunately, I think the SEO marketing crowd is slowly catching up with LLMs, which is leading to poorer actual output when attempting to get information.

In the same way that google search used to be amazing before it was taken over by optimization, I think we're seeing a mass influx of content production to attempt to integrate itself into training corpus.


TBH I for one am glad about this.

I have always believed there is a cost borne to get the best of something; a sacrifice is entailed. There's something very important about this for the culture: a culture in which everything is free is how you get crap stuff produced, and people settle for crap stuff just because it's free.

People who can see the bigger picture can see the dangers of this.


To note, you can buy the recipes and skip the dumpster internet, or register with a site like Cookpad. At this point even YouTube is a decent place for that.

I agree random recipes are hell on the internet, but it's also not something we're forced into if we care any bit about recipes in the first place.


Watching the announcement, every feature felt like something my phone already does—better.

With glasses, you have to aim your head at whatever you want the AI to see. With a phone, you just point the camera while your hands stay free. Even in Meta’s demo, the presenter had to look back down at the counter because the AI couldn’t see the ingredients.

It feels like the same dead end we saw with Rabbit and the Humane pin—clever hardware that solves nothing the phone doesn’t already do. Maybe there’s a niche if you already wear glasses every day, but beyond that it’s hard to see the case.


If executed well I think this could reduce a lot of friction in the process. I can definitely unlock my phone and hold it with one hand while I prepare and cook, but that's annoying. If my glasses could monitor progress and tell me what to do with what while I'm doing it, that's far more convenient. It's clearly not there yet, but in a few years I have no doubt it will be. And this is just the start. With the screens they'll be able to offer AR. Imagine working on electronics or a car and the instructions are overlaid on the screen while the AI is providing verbal instructions.


I'm oldish, so maybe I'm biased, but this sort of product seems like something no one will want, outside a few technophiles, but that the industry desperately needs you to want. It's like 3D TV: a solution in search of a problem, because the mfgs need to make the next big thing with the associated high margins.

To me the phone is a pretty good form factor. Convenient enough(especially with voice control), unobtrusive, socially acceptable, and I need to own one anyway because it's a phone. I'm a geek so I think this tech is cool, but I see zero chance I would use one, even if it were a few steps better than it is.


On the other hand, having to constantly consult a recipe on my phone while I cook is the main quality of life aspect of home cooking that could be improved.

You're missing the part where I'm reminded that my phone autolocks so I have to go into the settings in the middle of cooking to make it never autolock (or be lazy and unlock it every time I need it). And then I have to find a clean knuckle to scroll the ingredient list and the recipe steps every time I'm trying to remember what step I'm at.

You could do some killer recipe UX with a HUD on some glasses.


These companies are reaching really hard for use cases while ignoring the only ones VR actually works well for. If they just went all in on gaming it would be a much better product than trying to push AI slop cooking help.


As a gamer, in my experience people don't want to play VR games either.

Beat Saber as a social party experience with friends in the same room, sure, that's fun... but for day to day gaming the amount of people who want to play VR games on the regular is nearly zero.

If they really want to lean into the VR use case that people want, its porn, but I suspect they won't put that front and center.


I LOVED VR gaming, but after playing the same 2 games for 10 years, it never really evolved. They stopped innovating and went all in on AR.


I had a HTC Vive and I really loved playing VR games, particularly a shooter called Pavlov. Felt pretty social with a ton of absurd custom maps where the actual game was almost secondary to experiencing the immersive and strange maps.

But since I moved I didn't want to screw the base stations in to the walls again and haven't played in a long time. I feel like I probably still would like VR gaming but haven't been tempted enough to buy any of the newer systems since it seems like Meta has fully captured the market and it all seems pretty distasteful now.


I think you're very much in the minority. Also, VR games didn't really evolve because it can't really evolve - the fundamental thing that makes it attractive (immersion in a digital space) can't work well because of motion sickness. So, the only way to make an immersive VR game is to have an extremely tiny game world or an on-rails experience, and that drastically reduces the appeal.

Of course, you could make all sorts of traditional top-down or isometric games work well without motion sickness - but no one is going to pay for VR to play Civilization or Star Craft or Baldur's Gate 3, since these would be fundamentally the exact same experience as playing on PC or console, but with a display strapped to your head.


> can't work well because of motion sickness.

This is an overrated problem. You play VR for a small amount of time, then you adapt to it. You get your "VR legs", as they say.


This is such nonsense. The new Batman game on VR has full motion and smooth turning. It's not on rails at all. Games have got better at reducing motion sickness, and players also adapt over time.


The many of us who get motion sickness have simply stopped bothering with VR. Since the market has shrunk anyway after the initial excitement, the few VR games left can afford to be less accessible.


Indeed. I put on any kind of VR helmet for more than 2 minutes and I'll be queasy and/or throw up outright. My level of motion sickness is maybe extreme... but I guess that definitely messes with the total addressable market.


Yeah I appreciate that. There are things like vignetting that can help and newer games do them. But some people will never be able to play them.


They adapt to the taste of their own vomit? Or mitigate it by drinking lots of chocolate milk before playing?


Your brain just learns to understand it's in VR, and then it feels normal.


It's the part about getting my brain to learn to enjoy the taste of puke that I'm having trouble with.


In my experience, the biggest obstacle to broader AR and VR adoption, beyond reducing the price, size, and weight of the hardware, will always be the lack of good content creation tools.

I've been involved with two VR projects that were ultimately cancelled because, while we developed a sexy tech demo that showed the potential, building things out into something sustainable required too many resources and took too much time to maintain.


VR gaming seems like it is a bit of a niche, though. I think they want to sell glasses in quantities more like cellphones than gaming peripherals.

I agree they are reaching (and not finding) for an application.


I agree that VR gaming is a niche, but I think it could be explosively improved if we had the kind of all-in idealism that the previous commenter referred to. I think because VR gaming IS niche, we haven't yet delved into what VR/AR could do in non-gaming.

An idea that I've had before is like 'augmented curated experiences' for all kinds of things--for example imagine playing a Magic the Gathering (or similar) card game, and watching your cards come to life on the board in hologram-esque 3D. Or while watching a sports match, being able to pull up the stats or numbers of any players, or flip through channels of POV camera from helmets. Car navigation that shows you what turns to make by augmenting lanes or signs with highlighting. Brick and mortar stores having a live wayfinding route to products in their store based on your grocery list, recognizing and highlighting items you like.


> for example imagine playing a Magic the Gathering (or similar) card game, and watching your cards come to life on the board in hologram-esque 3D

This is the kind of thing that buries VR ideas. It's very cute in a demo, but as an actual product, the cost of coming up with 3D models and animations for all MTG cards currently being played is orders of magnitude more than what the number of people who would pay for this could justify. Ultimately this is completely unnecessary fluff for the game, like chess games where the pieces actually fight: irrelevant, and it actually detracts from the game because it interrupts the flow of what you're actually doing.


I remain convinced VR gaming is niche because despite these companies being willing to drop boatloads of money on all kinds of things they for some reason never decided to just allocate a few billion to create a handful of true AAA games and jumpstart the industry. I think even just 3 proper games with several hundred mil budgets and VR gaming might be in an entirely different space than it is now.


Facebook made a very expensive new Batman game in VR; there's also Resident Evil, Assassin's Creed, and a ton of other high-budget games like Red Matter.

It just isn't taking off. In my experience even though VR is unique and amazing, it's not that much better than playing those games flat screen. I tend to spend most of my time in Beat Saber.


Expensive in the context of other VR games, sure. I couldn't find any official numbers, but I'm sure it pales in comparison to dozens of other games that came out this year.

Also, I'm not sure what these single-player, relatively short games accomplish: you buy one, play it in less than a week, and are done. What I would like to see is a large-scale, infinitely playable MMO-type game done in VR with at least a 250M budget.


I think this is extremely doubtful. The reality remains that it's impossible to make a first-person or even third-person VR game with free movement, because of fundamental limitations in how human brains process movement. Having your eyes tell you that you are moving while your muscles and inner ear tell you that you are not makes you extremely sick very quickly, and technology can't actually fix this. The better and more immersive the visual illusion of movement, the worse the motion sickness you'll experience.

And without free movement, you can't build any of the mainstream game genres. You can't build and get people excited in a Call of Duty or Assassin's Creed or Fortnite or Elden Ring or Zelda where movement works like Riven, the sequel to Myst. Valve actually tried with the first Half-Life game in a decade, and even that didn't work.

Add to this gameplay limitation the second massive issue: you can't get a mass audience to pay hundreds of dollars extra for a peripheral without which they can't play your 70-80 dollar game.


> Valve actually tried with the first Half-Life game in a decade, and even that didn't work.

Half Life Alyx is still considered to be one of the best VR games ever made, and one that is still consistently recommended to new users even years after release. IMO people buy hardware because of the exclusive content. If a standard game console came out and it only had one AAA game on it, I probably wouldn't bother buying it. But if there were 3-4 games that looked really interesting, it starts to look more worth the investment. Playing VR games takes a lot of commitment (time / physical space / $$$), so the payoff has to be worth it or you'll lose people. With the huge amount of money spent on R&D for new hardware, I think it's a valid argument to say that maybe funding content would have been a better investment in terms of ensuring platform growth.

Also, side note but not every game requires free motion. Plenty of hits had no movement or teleport etc. A lot of these were completely new (sub-)genres that didn't exist or hit the same as they would in a traditional pancake game. Plus lots of kids seem unaffected by free movement (maybe as high as 50% of users by my rough estimate).


Those games literally exist now. Almost all new VR games use free movement not teleportation. It is frustrating that you seem to be talking confidently when your knowledge is 5 years out of date.


10 years out of date. Free motion has been the norm for indie games since the HTC Vive. The bigger studios kept using teleportation because that was the "best practice", until gamers got their VR legs and preferred free motion.


Maybe a really high budget VR shooter game could be successful, I don’t know.

I played some VR sword-fighting games and they were bad in a way that AAA budgets would not fix. Stuff like an attack animation being pre-scripted feels incredibly goofy in VR.

I think this is a general problem. VR worlds need to be more dynamic than typical games. AAA games tend to have higher-quality assets, but arranged in a more restrictive and scripted configuration. More indie work is needed to work out what the language of VR should be (it is a bit weird compared to the past, because stuff like Quake was AAA-equivalent for its era, but also small and independent enough to be innovative).


We should re-watch Dennou Coil every few years to be reminded of what we’re working towards :)


Well it's clearly a first gen product. They could ship Snake and Tetris on it, probably, but I'm certain they're thinking about how to get apps and games on it.


> the only ones VR actually works well for

I had really expected a different "only one"


No offense, but there is this chart, and what it tells me (maybe just me) is that gaming is a niche within VR, not even the majority use case. Zuck is probably right about VR/AR being the next big social media; he's just wrong that it'll be the Facebook/Instagram type of social media. It's the old Twitter type of social media.

[1]:

Most played VR games

  Rank Name          Curr   24h pk All-time
  1.   VRChat        33,032 46,652  66,824
  2.   War Thunder   26,388 65,589 121,318
  3.   PAYDAY 2      23,513 31,619 247,709
  4.   No Man's Sky  22,509 46,010 212,613
  5.   OBS Studio    11,434 22,388  27,334
  6.   Phasmophobia   7,716 22,789 112,717
  7.   Forza Hz 5     4,940 13,617  81,096
  8.   Assetto Corsa  3,885 13,598  19,796
  9.   OVR Adv. Sett. 3,030  4,299   6,418
  10.  Tabletop Sim.  2,902  7,755  37,198
1: https://steamdb.info/charts/?tagid=21978


To me the chart shows that VR is mainly used for games. And the Steam chart doesn't include the games played directly on Quest headsets.


That's certainly one useful spin, but the red flag here is that these don't correlate well with the games VR communities consider the best VR games. What I believe to be a more accurate interpretation is: there's nothing but VRChat in VR, and gaming demand in VR can be ~10x smaller per title relative to it.


Games are not a prolific spy tentacle for hoovering up all kinds of data. They may have changed their name, but this is still the facebook company.


Voice input is just too annoying but with the display and wristband I think the dream is there. Your hands are deep in messy food prep, you have a recipe up, you can still pause your music or take a call with the wristband and without stopping to wash up or getting oil or batter on everything.


I wear my glasses all the time. If I could just talk to the void and get help with things I’m directly seeing reliably that would be a game changer. I’ve used Gemini’s video mode and we’re not all that far away.


People don't realise how amazingly efficient touch interfaces already are.

There is no need for these stupid glasses. Some refuse to accept it - especially Zuckerberg, who relies on folks like Apple to make his money. That's really what's driving this project if you tear away all the BS.


If you watch it carefully, he preempts the AI with "What do I do first" before it even answered the first time. That strongly suggests to me that it did this in rehearsal, and hence that it was far more than just "bad luck" or bad connectivity. Perhaps the bad connectivity stopped the override from working and it just kept repeating the previous response. Either way, it suggests some troubling early implications about how well Meta's AI work is going, given that they got stuck on such a simple thing in the main live demo for their flagship product.


I think preempting the AI the first time was meant to be a feature (it's not trivial to implement and is something people often ask for). Failing from there definitely wasn't great, although it's kind of what I'd expect from an(y) LLM.


No, he preempted it because it was about to list all the ingredients necessary to make a steak sauce, despite having them in front of him. These are glasses, it should have skipped that part and went straight to what to do first.


The way he clung to "what do I do first" makes me think that the whole conversation was scripted in the prompt and the AI was asked to reply in a specific way to specific sentences. Possibly not even actually connected to the camera?


Yeah, as a fully integrated system and the selling point, I'd expect you'd say something like "Look again, I think you're getting ahead of yourself."

Maybe the tech wasn't quite foolproof and they tried to fake it, and then the fake version messed up.


I distrust Meta (and hate these voice assistants) as much as the next guy, but to me it's obvious that you would prepare the prompt and use pretty much the exact phrasing. Also, repeating yourself is normal if there's no response at all. If it was truly all fake, why not just cheat outright and prerecord all of it?


> Either way it suggests some troubling early implications about how well Meta's AI work is going

I fully expect the AI to suck initially and then over many months of updates evolve to mostly annoying and only occasionally mildly useful.

However, the live stage demo failing isn't necessarily supporting evidence. Live stage demos involving Wifi are just hard because in addition to the normal device functionality they're demoing, they need to simultaneously compress and transmit a screen share of the final output back over wifi so the audience can see it. And they have to do all that in a highly challenging RF environment that's basically impossible to simulate in advance. Frankly, I'd be okay with them using a special headset that has a hard-wired data link for the stage demo.


I assume you didn't watch the video, because it's just a live stream of a guy standing in a kitchen talking to his glasses. He's not on stage with hundreds of people on the wifi, and you can't see what the glasses are displaying at all.


The link in this thread to the live glasses demo is of Zuckerberg at FB Connect. The "fail" is when someone repeatedly tries to call the glasses he is wearing on stage. The person calling apparently has no trouble placing the inbound calls, but the glasses Zuckerberg is wearing on stage fail to answer. And the streamed video clearly shows the interface of Zuckerberg's glasses full-screen, as well as showing that the interface is being sent to the stage screen so the live audience can see it.

So, the failure was apparently with the glasses Zuckerberg's wearing on stage not establishing a two-way video call while simultaneously streaming its own interface for the live stream and big screen. He said it worked dozens of times in rehearsal, and one notable difference was that for the real demo hundreds of other wifi devices were present in the room.

I have quite a bit of experience producing live keynote demos at large tech events, so I don't think I'm confused about this. As an aside, when we're being shown "Zuckerberg's POV" through the glasses, I believe that's actually something custom put together for demos, because the normal glasses don't even have a mode which shows the wearer's POV. Creating that view requires sending both the internal output of the glasses, which is the corner inset overlay, AND the full-screen output of the glasses' live camera - which are then composited together backstage to create the combined image we see representing what Zuckerberg sees. Sending all of that while establishing a two-way video call is a lot for a resource-constrained mobile device.


I run multiple live streams from speakers to conference rooms and other bandwidth-intensive offerings throughout the day in an incredibly crowded RF space. WiFi is certainly up to the task. Meta is a nearly 2-trillion-dollar company; a failure of this order is ridiculous.


I've done live demos of AI. Even with the same queries, I got different answers than in my 4 previous practice attempts. My demos keep me on my toes, and I try to limit the scope much more now.

(I didn't have control over temperature settings.)


It looks like true 0-temperature (i.e. determinism) will happen. Here's some good context: https://thinkingmachines.ai/blog/defeating-nondeterminism-in...



But 0 temp is much less "creative" and may not be conducive to showing off the AI's latest tricks.


True. It depends on the feature you're demoing...but determinism is a VERY DESIRABLE feature for giving demos.


> (I didn't have control over temperature settings.)

That's...interesting. You'd think they'd dial the temperature to 0 for you before the demo at least. Regardless, if the tech is good, I'd hope all the answers are at least decent and you could roll with it. If not....then maybe it needs to stay in R&D.


Reducing temperature to 0 doesn't make LLMs deterministic. There's still a bunch of other issues such as float math results depending on which order you perform mathematically commutative operations in.


I keep reading this but I don't get it: for the same input shouldn't the order of resulting operations be deterministic too?


It gets more complicated with things like batch processing. Depending on where in the stack your query gets placed, and how the underlying hardware works, and how the software stack was implemented, you might get small differences that get compounded over many token generations. (vLLM - a popular inference engine, has this problem as well).


Not necessarily. This is a good blog post from a few days about it: https://thinkingmachines.ai/blog/defeating-nondeterminism-in...


Fantastic article, thanks!



The associative property of addition and multiplication breaks down with floating point math because of rounding error. If the engine is multithreaded, then it's pretty easy to see how the ordering of operations can change, which can change the output.
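A minimal Python sketch of the effect (just standard IEEE 754 doubles, nothing specific to any inference engine): the same three numbers summed with two different groupings give two different results.

```python
# Floating point addition is not associative: rounding happens at each
# intermediate step, so the grouping of operations affects the result.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # 0.6000000000000001
right = a + (b + c)  # 0.6

print(left == right)  # False
```

In a multithreaded or batched engine, which grouping you get depends on scheduling, so even identical prompts at temperature 0 can accumulate tiny differences that eventually flip a token.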


This one was also pretty bad: https://x.com/jason/status/1968496622884495847?s=46&t=9d1Ha4...

I think there's some respect to give because they're doing it live and non-scripted.


Non-scripted? You must be kidding.


I take it they meant pre-recorded. It was definitely scripted and practiced.


Respect for trying it live; Apple now just does pre-recorded with a ton of VFX.


If you’ve ever used the current Meta Ray Ban and AI, this almost exactly happens when the connection is bad. Pure confusion but the AI still tries to give you an answer.

I bet the device hardware is small/cheap and susceptible to interference


I have the Meta glasses and I've never noticed this, and don't even understand why it could be the connection's fault. The AI gets your audio and your image, if it gives the wrong answer, it's because the AI went wrong. How would the bad connection ever affect it?


Exactly. Like... what are they even saying here - that if the connection drops then it falls back to a tiny "dropped on their head as a child" 4b parameter LLM embedded in the physical firmware and so that's why it is giving inane responses?

Mad props to the presenter for holding it together though.


The ai is in the cloud

Edit0: ie without internet access the ai is unable to produce an answer other than some prerecorded ones I guess

In the live showcase the presenter even mentions that the wifi must have been bad for the ai to repeat the answer


You're saying "you've already used the first two ingredients, so go ahead and add the sauce" is the prerecorded response when it doesn't have a connection?


No, that's the last queried answer. There is no AI in the glasses without a connection, so all it (edit1: "it" here being the program running on the glasses, a client to the AI among other things) can seemingly do is loop around and re-read the last queried answer, which was the mistaken "you've already...".

In the glasses is just a client to the AI. Like there is no AI in your phone when you talk to ChatGPT; you are querying it, and it will not keep talking to you if you cut off the wifi.

The prerecorded responses I speculated about would have been things like "I'm having some connectivity problems, I'm unable to chat at this time, I'll let you know when I'm back." - the same kind of prerecorded things your earbuds tell you when they're low on power.


This can't possibly be the case, because the AI voice says slightly different things between the two attempts. The first time it says "you've already combined the base ingredients, so now grate a carrot to add to the sauce"; while the second time it says "you've already combined the base ingredients, so now grate the carrot* and gently combine it with the base sauce".

Unless you think they've added some inference logic on the device to slightly re-state the last answer they got from the cloud, it's clear that the glasses were connected and receiving the same useless answer from the cloud.

* side note, but it can also sound like "pear" to me this second time


Oh could very well be the case I've only listened once!


If you believe that they made the glasses repeat the last answer when they don't have connectivity, instead of saying "I don't have connectivity", I don't know what to tell you.

I own a pair of Meta glasses, and the response when they don't have connectivity is "this function is not available at this time".


Isn't this a very odd discussion to keep going? I'm not sure why you're being so confrontational as well. I see you have a lot of points, is that a way to drive engagement?


Are you a bot? Also "It must be the wifi" has got to be the lamest, unimaginative, predictable demo failure excuse I've ever heard, and you're trying to defend it.


yes, I am a bot, and I'm paid by Meta to convince you to buy their glasses by telling you they are shit? What are you on about?

Edit0: and what are you even doing? Where do you think this is going?


the thing is, if it loses the connection, why on earth would the correct behaviour be to just keep repeating the last response? It should just straight up say, "Sorry I'm having trouble connecting". Even the best case scenario here suggests terrible product design.


Hard agree on terrible. I guess I'd have disabled the no-connectivity message for the demo to give it a chance to reconnect gracefully/quickly if at all (by nonstop querying even without wifi), but that's just guessing on my part. I think they're garbage, and same for Meta, if that needs saying.


next time they need 1 public and 1 private router and shut the public off right before the demo.


Even if it’s small/cheap, if the item is scanned multiple times this will prevent any electrical infetterence.


I don’t even think that’s a word!


It’s the WiFi, ya sure.


Yeah I was also cringing at that cop out. It doesn’t appear connectivity related. Plus even if it was, it beautifully highlights the connectivity requirement which sucks for so many reasons.


Ouch. Kudos for trying, though. I miss the days of live demos at Apple events, instead of all these polished videos of people standing in silly poses around the Apple campus.


I have mad respect to them for actually attempting this on the fly - especially a public company. Nothing really to gain versus a scripted demo, and absolutely everything to lose. Admirable.


Obviously scripted, just the LLM didn't follow its part of the script.


Hearing this AI-generated voice awakens some primal aggression in me.


This is why Jobs spent months prepping for each presentation.

But hey, at least it's not all faked


When I was at Meta (then Facebook), people lived and died by the live demo credo.

Pitches can be spun, data is cherry picked. But the proof is always in the pudding.

This is embarrassing for sure, but from the ashes of this failure we find the resolve to make the next version better.


Yep I hope that mindset never dies. Meta is one of the last engineering-first companies in big tech and willing to live demo something so obviously prone to mishaps is a great sign of it. It's not unlike SpaceX and being willing to iterate by crashing Starships for the world to see. You make mistakes and fix them, no big deal.


Why did they choose to air this live?

For an internal team, sure, absolutely. But for public-facing work, prerecorded is the way to go.


One of my internships was preparing Bill Gates's demo machines for CES. I set up custom machine images and ran through scripts to make sure everything went off without a hitch (I was doing just the demos for Tablet PC; each org presumably had their own team preparing their demos!)

Not doing it live would've been an embarrassment. I don't think the thought ever crossed anyone's mind, of course we'd do it live. Sure the machines were super customized, bare bones Windows installs stripped back to the minimum amount of software needed for just one demo, but at the end of the day it sure as hell was real software running up there on stage.


If it was pre-recorded we'd know it was staged, and we'd assume they didn't have a working product.

Their actual result was pretty bad, but, ya know, work in progress I guess.


Watch their big "Metaverse" presentation, where it's all vaporware and faked; presumably this is a cultural shift from that era.


The same unwarranted sense of confidence that tells them this product is worth making tells them that they can easily pull off a live demo. This is called "culture fit"


I saw Jobs give a demo of some NeXT technology and the system crashed and rebooted right in the middle of it. He just said “oops” and talked around it until the system came back up.


I love Jobs, but I do remember the "everybody please turn off your laptops" presentation.

Live demonstrations are tough. I wish Apple would go back to them.


Totally agree. Up until a few years ago failures during live demos on stage used to be a mark of authenticity, and companies playing recordings was always written off as exaggerated or fake. Now all of Apple's keynotes are prerecorded overproduced garbage.


"At least it's not faked" was my main reaction, too. Some other big-tech AI-related demos the last couple years have been caught being faked.

Zuckerberg handling it reasonably well was nice.

(Though the tone at the end of "we'll go check out what he made later" sounded dismissive. The blame-free post-mortem will include each of the personnel involved in the failure, in a series of one-on-one MMA sparring rounds. "I'm up there, launching a milestone in a trillion-dollar strategic push, and you left me @#$*&^ my @#*$&^@#( like a #@&#^@! I'll show you post-mortem!")


I appreciate the live demo, but I'm surprised they didn't at least have a prerecorded backup. I wanted to see how video calls work!


Considering there's no camera pointing to your face they can't be all that interesting.


It was painful even before it started malfunctioning


The demo gods were not present that day


It was the WiFi though


Typical Meta product. I used to believe and wasted money on multiple generations of Quest & Ray-bans. I expect this device to be unsupported at launch, just like Quest Pro was


The Portal was like their best product, and they just abandoned it.


So when I talk, but not to it, it may respond anyway, like when I accidentally say "Siri"? Except every time?


For those who didn't pick up on it, they were being sarcastic about the issue being wifi related haha


That was not sarcasm. They were being serious.


I’m surprised everyone is saying they weren’t sarcastic. They were even being MORE sarcastic about it being the Wi-Fi after the failed WhatsApp call.


It didn't sound like sarcasm at all to me?


The line must go up, forever. No matter the cost.


I don't think forever...

There has got to be a "Laffer curve" for "attention tax" revenue.


“Local opposition” and NIMBYism are the primary reasons we have a housing shortage; it’s a rampant problem across the US. The few grumpy old folks who show up to local planning meetings shouldn’t hold us back as a nation. Until we can find a way to get over that hump, I’m not sure how we'll move forward on many fronts.


Except at least housing provides benefits to the local community.

A data centre provides almost no jobs (except during construction), draws significant resources (electricity, water), and creates noise pollution.

Why should any community want something that only enriches Amazon taking up vast swaths of land in their backyard?


> significant amount of resources

Most importantly, location!

Location is not fungible, and at least in my local area, data center developers seem to want to place their datacenters in up-and-coming areas, where they would block the development of higher-quality structures.

There's no reason the datacenters can't be built in the middle of nowhere, far from people, especially as they don't provide any jobs to the community.


A great solution to this is a land value tax. How do you actually determine if the data center is not the best use for a parcel? If it can compete with other uses of land based on the land value. A land value tax makes the data center, or any other use, pay the community for exactly what it's taking away from the community.


I agree, and I was considering mentioning a LVT in my comment.

I think it would have been pointless though, because LVT doesn't stand a chance: Conservatives would hear "tax" and immediately say "no", and progressives would be unhappy that their elderly mom wouldn't be able to live alone in their 6-bedroom childhood home.


Housing doesn’t benefit the local community (from most NIMBY perspectives). It makes housing more affordable, lowering their property values, creates the need for more infrastructure, and brings change to their environment.

The motto seems to be, “Neighborhoods full, I like things the way they are. No more change please.” Doesn’t matter if it’s a data center, housing, or any type of development.


It benefits local business by having more customers and benefits local government by having a wider tax base.

Maybe it doesn’t benefit some individuals, but the community improves.


Many times it’s just unplanned, uncontrolled growth. It causes issues that aren’t mitigated and generally makes life worse for existing residents. NIMBYism is strong because residents know that their politicians are corrupt and incompetent: politicians will get kickbacks, and infrastructure will never be extended enough to support the growth. Theoretically we all benefit from increased density through reduced infrastructure costs and shared resources; in practice the growth leads to government inefficiency that offsets any cost savings, similar to how a large company's economies of scale are offset by internal inefficiency and friction.


I think the history of the 20th century shows that "planned growth" is in many ways far inferior to unplanned growth.

All of our favorite locations were created far before the era of modern planning, when growth was largely uncontrolled.

> in practice the growth leads to government inefficiency and that offsets any costs savings.

Do you have any examples of this? I've never heard of it before and never seen data that could support it. Unless the "government inefficiency" is highly restrictive zoning, which would be an unusual framing but one that I would highly endorse if that's what you mean!


Look at how any large American city is managed, especially the west coast ones that have grown recently. The city governments are completely dysfunctional, and despite having significantly more revenue than neighboring smaller cities, they piss it all away and produce much worse results. Seattle and Portland are good examples.


At the same time, the most densely populated large cities are also among the most expensive.


That's because the community benefits so much from density. People want to live there because the density has created fantastic amenities and jobs, ergo prices go up.


Depends what you count as an expense and where collected taxes flow. Rural living is artificially cheap by being subsidized by its more “expensive” dense living counterpart.


It is not just "grumpy old folks": almost everyone who owns property, no matter their age, fights development.

The only people who don't seem to actively fight development are the working poor, either because they work multiple jobs or because they have trouble getting to the meetings.


Statistically speaking, the homeowners in the US are not young.


Statistically speaking, Americans are not young.


The “YIMBY” movement is completely dominated by 20-40yo middle class professionals in my experience.


Who own nothing and rent everything. They also think they don’t pay property taxes, hence they vote for every tax increase.


YIMBY is a supply side movement laser focused on regulatory barriers, I don’t think I’ve ever seen a tax measure mentioned in relation to it.

Plenty of tax breaks though.


YIMBYs are on the forefront of attacking Proposition 13 in California, and YIMBYs are also driving a resurgence of Georgism to use land value taxes to address fundamental inequity.

YIMBYs also regularly support work on non-regulatory barriers, such as the rent increase caps in California (AB 1482), and have even supported social housing bills in California.

There's also lots of talk of transfer taxes amongst YIMBYs, see for example: https://cayimby.org/blog/if-you-tax-the-things-you-want-less...

There are two types of discussions about YIMBYs: 1) those by far-leftists who see the battle as ideological, about regulation vs. deregulation and trickle-down housing vs. revolution, and 2) the actual YIMBY activism on the ground, which is all about more housing and making housing less of a financial and emotional burden for renters.


Every time I read about a purported "housing shortage" I'm reminded that there are about 140 million housing units in the US[0], with an average of 5.5 rooms per unit[1] -- roughly 770 million rooms for a population of 350 million, or more than 2 rooms per person.

This doesn't look like "we have a housing shortage". What we do have is a shortage of affordable housing in the megacities, and "it’s a rampant problem" in all the megacities.

[0] https://www.census.gov/data/datasets/time-series/demo/popest...

[1] https://www.census.gov/acs/www/about/why-we-ask-each-questio...
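The back-of-envelope arithmetic above can be checked directly; the inputs are the (approximate) Census figures the comment cites, and the exact product comes to about 770 million rooms:

```python
# Rough check of the rooms-per-person claim, using the comment's
# approximate Census figures as inputs.
housing_units = 140_000_000   # total US housing units
rooms_per_unit = 5.5          # average rooms per unit
population = 350_000_000      # US population, rounded

total_rooms = housing_units * rooms_per_unit
rooms_per_person = total_rooms / population

print(f"{total_rooms:,.0f} rooms, {rooms_per_person:.1f} rooms per person")
# 770,000,000 rooms, 2.2 rooms per person
```

Of course, an average says nothing about distribution, which is exactly the objection raised further down the thread.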


Sorry, are you suggesting that the solution to housing shortage is to move into an existing building with strangers?


If I'm reading myself right, I'm suggesting that there's no need for a "solution to the housing shortage," since -- with more than 2 rooms per person on average -- it's not a problem to begin with. The problem frequently called "the housing shortage" is really "housing affordability in the megacities," and we should call it by its real name.


How are those rooms distributed? It's not like they are individually moving parts.

People buy a house big enough to hold their kids, then they age and the kids move out, and there's lots of fully owned homes with empty rooms, but no places for the now-adult children to live, until a prior generation dies.

> "the housing shortage" is a problem of "housing affordability in the megacities," and we should call it by its real name.

Housing affordability problems are driven by a single thing: a shortage of housing. Refusing to call the shortage a shortage, and instead referring only to the symptom (unaffordability) rather than the cause, is willful deception to prevent action on that cause.

This is not a problem just in megacities; it's spreading everywhere else in the country as the problem gets worse and worse. It showed up first in the most in-demand cities, but as remote work let people spread out more, it affected more and more locations. Meanwhile, people living in the highly productive areas with the greatest housing shortages say there's no need to allow more housing to be built because remote work solves the problem. They speak out of both sides of their mouth, though: a few short years ago they denied that the shortage caused the affordability problem, but when there's something that can be used to lessen the shortage (remote work, banning Airbnb), they grab on eagerly to the shortage explanation for housing affordability.

The story of the housing shortage in the US is people desperately, by any means they possibly can, avoid addressing the shortage and being realistic about it.


> willful deception to prevent action on the cause.

I don't care either way. I don't live in the US. Action or non-action, I'm unaffected by it. There's no reason for me to "willfully deceive" anyone, as I don't stand to gain or lose from any outcome. There's also no reason for you to frame this as a personal attack.

I've checked Zillow though.

There are plenty of $1 homes, mostly dilapidated and non-functional, though the land could still be worth the $1 if one can afford demolition and rebuilding. But in the $10,000 to $15,000 range there are a lot of normal-looking homes. Even if one doesn't have that amount as a down payment, I assume plenty of banks would be willing to give a mortgage for that sum with 25 years of $150/mo payments.
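The $150/mo figure is generous: a standard annuity (amortization) formula on the midpoint of that price range, at an assumed 7% annual rate (the rate is my assumption, not from the comment), comes out well under it:

```python
# Sketch of the monthly payment for the mortgage discussed above.
# The 7% annual rate is an assumed value, not taken from the comment.
principal = 12_500      # midpoint of the $10,000-$15,000 range
annual_rate = 0.07      # assumed interest rate
months = 25 * 12        # 25-year term

# Standard fixed-rate amortization formula.
r = annual_rate / 12
payment = principal * r / (1 - (1 + r) ** -months)

print(f"${payment:.2f}/mo")   # roughly $88/mo, well under $150
```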

The problem is nobody wants to live where these houses are, because it's not SF, while in SF there are a lot of options under $2M, but not many people have that amount of money.

"Housing shortage" doesn't exist. The only shortage that exists is the shortage of $10,000 homes in SF.


Surely most people can recognize the difference between NIMBYism toward a noisy, wasteful cover for a Bitcoin mining operation and the NIMBYism of not wanting "the poors" nearby, or of retaining high rents on hoarded housing. Housing-shortage NIMBYism and "don't put an industrial facility in my backyard" are really very different things.


Most Y Combinator folks can't seem to distinguish things outside the software realm. Maybe if all these super ultra mega smart engineers and developers focused on utilizing existing hardware more efficiently, we wouldn't need to constantly build these energy sinks.


Where you see a housing shortage, I see too many people in too little an area.

Megacities are a problem everywhere. We have not yet found a scalable way to improve the economy without resorting to unnatural concentrations of people. Still, hope must be kept high, and the battle must go on.


> Where you see a housing shortage, I see too many people in too little an area.

Huh? There are housing shortages in plenty of low-density places.

