Hacker News | mr_tristan's comments

I wonder if at a certain point, someone learns about compounding, and just sticks with it, building generational wealth. And poof, the wealth just keeps growing as long as the descendants don’t make errors.

We seem to love creating stories about why so-and-so is rich, but, I suspect the most common answer is “time, patience, and no major bad luck”.


You make it sound easy, but there are relatively few billionaires in the US descended from historically wealthy families [0]. First off, this kind of compounding has only really been possible since the industrial revolution (~200 years). Before that, wealth came primarily from land ownership, which was zero-sum. Second, "no major bad luck" has been pretty hard to come by in most of the world across multi-generational time-scales. The US has had a historically exceptional period for the last 80 years or so, but there is no reason that has to continue. Third, "as long as descendants don't make errors" can also be a significant hurdle. Many Asian countries have a saying: wealth does not last past three generations (not meant to be taken literally, but people say it for a reason).

Also, to start compounding, you have to have enough to compound. Most people instead start off significantly in debt (student debt, mortgage/rent, etc.). Then they try to save enough to be able to raise their children and retire before they become too infirm to work, at which point they start compounding in reverse. All the while, they are competing with everyone else trying to do the same thing for how much they are willing to pay for scarce resources and how little they are willing to receive for their employment. Having your compounding achieve escape velocity is the exception, not the rule, and it has to be that way.

[0] https://en.wikipedia.org/wiki/The_Missing_Billionaires


I also suspect that’s the most common path to single-digit millionaire in the US. My parents were the first generation on either side to attend college and the steady economic progress across 5 generations is impossible to miss.

Teaching the next generation(s) how to invest (in themselves and in markets), how not to spend time or money to frivolous excess, and how to delay gratification are other critical parts of sustaining family wealth. I think those were as big a factor as any other in the 5 generations from my kids back to their great-great-grandparents.


I'm not sure if the next few generations will have the same opportunities as the last few, which enjoyed America's dominant place in the world and also took on incredible amounts of debt that the upcoming generations will have to pay back somehow.

The values you mention are timeless & should be taught to all.

Hopefully technology continues to be a thing that lifts all boats & that more people can get said boat.

I fear the current tax laws, political contributions & financial regulations favor those with more wealth so much that when you factor in compounding, their wealth will continue to grow at extreme levels compared to those with less wealth. Retail needs to start pulling their money out of stocks until large companies reduce executive pay to reasonable levels. Otherwise we just blindly keep supporting this current chaos. I believe we also need to start taxing margin loans, instead of going down a wealth tax road.


> The values you mention are timeless & should be taught to all.

Agreed, with the related note that I think that facts/knowledge can be taught by strangers, but values must be taught by family or another tight community structure (church, respected elders, or similar). Schools can prattle on all they want about delayed gratification and it won’t move the needle in behavior.


I'm curious why you seemingly discount school as a "tight community structure". In many communities it's one of the only options left.

I daresay the label of the community is irrelevant; what matters is some other aspect that effective ones share - and of course, the child in question (:


There are schools that can serve in that manner, but it’s a tiny minority of them. No (or virtually no) public schools with 125+ students per grade will have that tight community structure.

A private school with 20 per class and 60 per grade has a fair shot at it. Maybe a small public district could as well, but I’ve never seen it happen there.

Sports teams with strong non-athletic aspects to their program are another possible source of values transmission.

I agree that it’s not the sign on the building that matters, but the content and consistency of what happens inside it.


It’s true in theory, but a few big things happen in practice: 1) for the vast majority, outlays increase as fast as, if not faster than, wealth; 2) things like divorce, remarriage, and second families create massive dilution; and 3) the experience of most HNW clients is more like 3-5% real returns after tax.


Yes, I suspect the rich's strategy of owning nothing and controlling everything contributes as much to their wealth retention through the inevitable catastrophic events in life as snowball interest accumulation does.

Middle class / poor people generally know how to build wealth through interest but make the fatal mistake of keeping their wealth largely tied up in on-paper assets in their name. This means they're ripe for the taking by every judge / ex-wife / healthcare creditor / random guy with a lawsuit as soon as something in their life goes south. Of course, once you get to a certain segment of underclass people who are basically unbanked, you get back to horseshoe theory, and you'll never be getting the $50k gold chain they keep around their neck.


> the rich's strategy of owning nothing

I think you may be coming into this conversation with a different definition of "rich" than most people.


Even within compounding there's an interesting phenomenon.

Start with $100 and compound it at 10% per year for 30 years and you end up with about $1,700. Improve that return by just 1% to 11% and after the same 30 years you have about $2,300.

That small 1% edge produces roughly 30% more money in the end. Compounding is extremely powerful, and even marginally better money management leads to vastly better outcomes over time.
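The arithmetic above is easy to check; here is a minimal sketch (the function name is mine, just for illustration):

```python
# Compound $100 annually for 30 years at 10% vs. 11%.
def future_value(principal: float, rate: float, years: int) -> float:
    """Value of `principal` compounded once per year at `rate` for `years` years."""
    return principal * (1 + rate) ** years

base = future_value(100, 0.10, 30)  # ~$1,745
edge = future_value(100, 0.11, 30)  # ~$2,289

print(f"10%: ${base:,.0f}, 11%: ${edge:,.0f}, advantage: {edge / base - 1:.0%}")
```

The 1% edge works out to roughly 31% more money after 30 years.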


Over the last 100 years, the S&P 500 has grown at an average of 6% yearly adjusted for inflation (9.8% nominally).

That means an inflation-adjusted doubling every 12 years.

It also means that if you manage to live on, e.g., 1% of the wealth annually, your wealth will double (inflation-adjusted) every 15 years.

So yeah, as long as you don't have too many children (branching factor not more than 4), you 'just' need to get filthy rich, and none of your descendants ever need to work.
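The doubling-time claims above follow directly from logarithms (or, roughly, the rule of 72). A small sketch, taking the comment's 6% real return and 1% withdrawal rate as assumptions:

```python
import math

def doubling_years(real_return: float, withdrawal_rate: float = 0.0) -> float:
    """Years for wealth to double when it grows at real_return minus withdrawals."""
    net = real_return - withdrawal_rate
    return math.log(2) / math.log(1 + net)

print(doubling_years(0.06))        # ~11.9 years: the "doubling every 12 years"
print(doubling_years(0.06, 0.01))  # ~14.2 years: close to the ~15 quoted
```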


The last 100 years has been the era of the American Empire, which is monetized through the network effects of the dollar and dollar-denominated-assets. We don't have ships full of Potosí silver, but we do have 10% yearly S&P returns in an economy growing 3%. Extrapolating another year of that is probably reasonable, extrapolating another 100 years of that is probably not.


Yeah, it's easier to analyse the past than to predict the future. I agree; my statement was about the luxury of wealth over the last century.

I don't know if the S&P500 will grow at the same rate the next century, but I am willing to bet a beer that stocks in wide index funds will grow faster than both inflation and average salary for the next century.


I think the realistic number is more like 2-4% if it’s not supplemented by working (and sometimes even when it is). I also think when you add in things like paying for private school/college, divorces, taxes, the carrying cost of real estate, and luxury travel/clothes/meals/vacations, it’s a big number.

I’m not saying it’s not possible, but I suspect the vast majority of grandchildren of a wealthy couple have meaningfully less than the original couple did, as well as less than their parents did.


2-4% of the wealth you mean? That surely depends on the wealth... If you need 200k a year that's 2% of 10 million, but 1% of 20 million.

If you manage 1% then the real value doubles after 15 years. If you need 2% annually, the real value doubles after 18 years. The moral of the story is the same, with enough wealth you can live comfortably while your wealth grows.


The tough part is that someone with 20 million "in the bank" will have a hard time constraining themselves to 200k/yr in expenses. It seems like a lot, but next to 20 million, the temptation to spend a little (like 1m on a house, or 100k on a car) seems like nothing, and a potentially reasonable purchase. But it drastically changes the trajectory of that balance.

And you wouldn't feel as good spending 1m on a house or 100k on a car with a 200k salary and zero in the bank.


and the further you get from the actual work that created the nest egg the greater the temptation - because no one treats found money like earned money.


There are a lot of mid-sized companies identified in the book _Hidden Champions of the 21st Century_. I just started the book, but it's exactly the ethos you're talking about here: these companies focus on a niche, tend to sell to other businesses, and keep doing that one thing profitably, absolutely dominating their niche with razor focus.

I'm reading this book because, well, that's the kind of place I'd like to work. I think it makes sense to get a feel for how these places think, in order to really identify job opportunities.

Edit: here's a Wikipedia page on the topic https://en.wikipedia.org/wiki/Hidden_champions


Thanks for sharing. Two companies come to mind: Strix for kettle controllers and Shimano for bike gears. Maybe they don't fit the Hidden Champions category exactly, because they’re not very hidden from the public (many manufacturers mention their names on final products, assuming consumers might take that into account). So the criteria for “hidden champions” could be more flexible, imo.

Strix became less hidden for me personally after listening to The Life Scientific interview with John Taylor [1]. There is plenty of fascinating information, probably because Jim Al-Khalili is a great scientific interviewer. Recently, I recalled it in the context of AI, self-driving, and safety. Strix controllers have a second level of protection if the main automatic shut-off circuit fails. That’s probably why we never hear of fires or other incidents due to a failed Strix controller.

[1] https://www.bbc.co.uk/programmes/b0b42z87


Yeah, I think there are a lot more "good, focused" companies out there than what's covered in this hidden champions book. The book is just interesting to me. It highlights that a lot of Germany's economic export strength isn't due to the large corporations people know, but to a bunch of mid-sized companies people don't.

In some sense, what seems important is a business culture with a mission or meaning to exist beyond making shareholders money. I'd wager their employees will absolutely geek out about what the companies do throughout the organization. A lot of corporations these days, once you get above a couple of layers of management, are all fluff. I can't think of the last time I had a nuanced or interesting discussion about technology with a mid-level or above "engineering" manager at a tech company.


Ultimately, the first line really resonated with me:

"When I get to write or read on a screen that’s reflecting the sun back at me instead of needing to be shielded from it, I get a dose of this feeling that this is what all computing could feel like. I want so much more of this in my life."

I have the DC-1, and when I've used it in direct sunlight, it's a great feeling. However... it's rare that this matters. But... it's winter. And so I'm inside because it's f*king cold out. I'm holding onto hope that this will eventually bring me outside to read and take notes a bit more.

My iPad is still king for my "tablet computing". Especially note taking, drawing, design tasks (like CAD), casual gaming, and entertainment consumption. I don't see the DC-1 replacing my iPad use any time soon. The app ecosystem, screen, sound, etc., are just not good enough to replace my iPad. Frankly, I just don't see anything that can really compete with the iPad, which sucks, because I feel like Apple continues to underestimate what the iPad could be. (It should be more like a Mac and not like a phone. The hardware can do this; the software cannot.)

... but anyhow, the DC-1 makes me excited to be able to, say, go to the park and read and annotate a design doc. Etc. Like, this device could be a lifestyle changer... when it's nice out. Or it might become a device I read documents on while taking notes on the iPad. That's a second use case I'm just starting to figure out.

So I'm going to hold onto mine, and I'm optimistic and excited. But it's early.


This list does resonate, but I’d make some tweaks to express things slightly better. For example:

> Most programming should be done long before a single line of code is written

I would say “most engineering should be done before a single line of production code is written”.

Formalizing a “draft process” is something I’m really trying to sell to my team. We work in an old codebase - like, it’s now older than most of the new hires. Needless to say, there’s a whole world of complexity in just navigating the system. My take: don’t try to predict when the production code will be done, focus on the next draft, and iterate until we can define and measure what the right impact will be.

The problem is that there’s a ton of neanderthal software engineering management thinking that we’re just ticket machines. They think the process is “senior make ticket, anyone implement ticket, unga bunga”. What usually happens here is that we write a bunch of crappy code learning the system, then we’re supposed to just throw that in a PR. Then management is like “it’s done, right” and now there’s a ton of implicit pressure to ship crap. And the technical debt grows.

I haven’t quite codified a draft process, but I think it’s kind of in line with what Chris here is talking about: you shouldn’t worry about starting with writing production code until you’re very confident you know exactly what to do.

Ah well, it’s a fun list of opinions to read. Chris’ WIP book is an interesting read as well.


>They think the process is “senior make ticket, anyone implement ticket, unga bunga”.

The fact that you just summarized about an hour's worth of argumentation from my last annual planning meeting with that one sentence has just destroyed me. I kneel.

At this precise moment in time, if anybody seriously thought the above was the way the process works or should work, they should be advocating for firing all the juniors and replacing them with LLMs.


> At this precise moment in time, if anybody seriously thought the above was the way the process works or should work, they should be advocating for firing all the juniors and replacing them with LLMs.

Sadly, I think this is happening at some places. Like Salesforce. Sigh

https://www.ktvu.com/news/salesforce-cutting-1000-jobs-hirin...


Sort of a tangent, but is there a semi-automated way to select which engineer would be best to implement a ticket? Something like "we need to modify this API, let's run git blame and see who has the most familiarity", and some form of scheduler that prioritizes the most experienced engineers on the parts of code that only they know?


Not that I know of.

I do think you could do some analysis to associate code with implementers and create graphs, where you account for additional things like time. I could see LLMs being helpful in maybe doing part of that analysis. But I would use that to see where the biggest "bus factor" is, i.e., finding subsystems where there's really only one active contributor.

For planning or task assignment, it might just help to say "ask X for more detail" when there are no other docs or your LLM is spewing gibberish about a topic.
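The blame-tallying idea from the question above could be sketched like this. A rough cut, not an existing tool; `blame_counts` and `familiarity` are hypothetical names, and the parsing assumes `git blame --line-porcelain` output, where each blamed line carries an "author <name>" record:

```python
import subprocess
from collections import Counter

def blame_counts(porcelain: str) -> Counter:
    """Tally surviving lines per author from `git blame --line-porcelain` output."""
    counts = Counter()
    for line in porcelain.splitlines():
        # Only "author <name>" records; "author-mail", "author-time", etc. don't match.
        if line.startswith("author "):
            counts[line[len("author "):]] += 1
    return counts

def familiarity(path: str) -> Counter:
    """Shell out to git blame for `path` and rank contributors by surviving lines."""
    out = subprocess.run(
        ["git", "blame", "--line-porcelain", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return blame_counts(out)
```

You could then rank candidates with `familiarity("src/api.py").most_common()`. As the parent notes, weighting by recency (blame records include `author-time`) would matter in practice, since ten-year-old lines say little about current familiarity.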


Software problems almost never seem to impact brands, but this was so egregious it actually managed to. Even CrowdStrike increased revenue over the last year.

So… I’m not shocked at the timing. Sales tanked, which proved the CEO couldn’t be relied on to right the ship.

It’s just one example of how long serious problems can fester in an organization. There’s likely deep cultural problems throughout the company’s decision making apparatus. We’ll see, but I don’t see Sonos capable of being a trusted brand again.


Right now, I’d say the “AI IDEs” like Cursor or Zed are ready to replace less Java-centric environments. I’d put VSCode in this “not really Java-centric” bucket. I see VSCode as a “fancy text editor” for Java, i.e., better than an editor like Neovim, but not by much. So, an AI IDE is more likely going to gain traction with people who have been using VSCode or Neovim than with anyone using Eclipse or IntelliJ.

Recently, my company has tried to introduce a “cloud IDE” (the development environment runs in the cloud somewhere). Initially, it only supported VSCode. The only engineers that bothered using it were junior; once people had about 5+ years of experience, they just found it tedious. Once the company included IntelliJ for that cloud IDE, usage spiked massively. (To the point they are restricting usage due to cost.)

These “classic Java IDEs” just launch with features useful for understanding large systems, like, fast navigation and debugging capabilities. Things like “where is method used” or “what implements this interface method” is fast and accurate - i.e., not based on text search. Or the interactive debugger that lets you inspect stream state, track objects, etc.

JetBrains probably won’t be focusing on using AI simply for writing code, but for enhancing all of these other capabilities. This is where I’m not sold on Cursor or Zed replacing these truly language-specific IDEs… yet.

These new upstarts need to improve the ability to navigate and understand. Right now, they only seem to focus on writing, which I don’t think is what’s going to gain traction. I also don’t see any of them doing much other than just fancy autocomplete, which can be awful on a large legacy codebase. So… we’ll see.

This could be generational, I’ve definitely seen poorer DevEx win simply because they gained the attention of younger engineers and lasted long enough.


I don’t use an LLM largely because my current codebase has a massive amount of bespoke internal APIs. So LLMs are just useless and wrong for almost any task I throw at them.

But this has led me to wonder if there will be gradual pressure to build on top of LLMs, which, in turn, will really only be useful with the tried and true. Like, we’re going to be heading towards an era where innovation means “we can ask the LLM about it”. Given the high capital costs required to train, I wouldn’t be shocked to see LLMs ignoring new unique approaches and biasing to whatever the big corps want you to do. For “accuracy”.

I just sense we're about to hit an era of software causing massive problems and costs, because LLMs are rapidly accelerating the pace of accidental complexity, and nobody really knows how to make money off them yet.


Exactly my thoughts. It might even work reasonably well in the end, but it seems it would make the world of computing a much less fun place.


The cynical take is more that it's the crappy blade guards nobody uses that really should be improved, and that it's not necessary to mandate SawStop-style blade-braking technology.

I tend to agree with Jim Hamilton (Stumpy Nubs on YouTube), who was quoted in this article: https://www.youtube.com/watch?v=nxKkuDduYLk

Basically, mandating the more expensive blade brakes instead of standards around blade guards will eliminate cheap table saws from the market. And yes, this has happened before, with radial arm saws - they are now basically non-existent in the US.

So it definitely benefits SawStop to give away this patent, as their saws will look a hell of a lot "cheaper" than competition.


SawStop often breaks the saw itself, not just the blade. There's a lot of energy being put into the saw all at once, and I've seen examples where it fractured the mounts of the saw itself when it engaged.

That's of course great, if you're in the business of selling saws, not so great if you're in the business of buying saws.


I have been associated with four hackerspaces that have SawStop's.

I have seen an average of about one false firing a month--generally moisture, but sometimes a jig gets close enough to cause something. I have seen 4 "genuine" firings, of which 2 would have been extremely serious injuries. This is over about 8 years--call it 10 years.

So, 4 spaces * 10 years * 12 months * $100 replacement = $48,000 paid in false firings vs 4 life changing injuries over 10 years. That's a pretty good tradeoff.

Professional settings should be way better than a bunch of rank amateurs. Yeah, we all know they aren't because everybody is being shoved to finish as quickly as possible, but proper procedures would minimize the false firings.

Part of the problem with false firing is that SawStop are the only people collecting any data and that's a very small number of incidents relative to the total number of incidents from all table saws. SawStop wants the data bad enough that if you get a "real" firing, SawStop will send you a new brake back when you send them the old one just so they can look at the data.
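For what it's worth, the back-of-the-envelope above multiplies out as stated if the rate is read as one false firing per space per month:

```python
# Reproducing the comment's estimate: four spaces, ten years,
# one false firing per space per month, $100 per replaced cartridge.
spaces, years, fires_per_space_per_month, cartridge = 4, 10, 1, 100
false_fire_cost = spaces * years * 12 * fires_per_space_per_month * cartridge
print(false_fire_cost)  # 48000, weighed against 4 serious injuries avoided
```

Note the blade cost (another $60-$120 per firing, per a sibling comment) isn't included, so the real total is higher.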


>That's a pretty good tradeoff.

Assuming, of course, there is no possible way that you could otherwise reliably prevent those injuries that doesn't depend on a human's diligence. That is, of course, ridiculous, but that's the nature of this regulation. You're also not accounting for the cost of the blade, which isn't salvageable after activation, and those can get spendy.

Realistically, SawStop wants the data so it can lobby itself into being a permanent player in the market, which will, of course, prevent anyone from innovating a no-damage alternative to SawStop, which is certainly possible.


> Assuming, of course, there is no possible way that you could otherwise reliably prevent those injuries that doesn't depend on a human's diligence. That is, of course, ridiculous, but that's the nature of this regulation.

Well, the saw manufacturers could have done that before this regulation. However, they didn't. Only once staring down imminent regulation have they been willing to concede anything.

Bosch even has a license to the SawStop technology and had their own saws with blade stops. They pulled them all from being sold.

Sorry, not sorry. The saw manufacturers have had 20+ years to fix their shit and haven't. Time to hit them with a big hammer.

> Realistically, SawStop wants the data so it can lobby itself into being a permanent player in the market

Realistically, SawStop is so damn small that they're going to disappear. They're likely to get bought by one of the big boys. Otherwise, the big boys are just going to completely mop the floor with them--there is absolutely zero chance that SawStop becomes a force in the market.


Bosch pulled their saws from the US market because SawStop sued and forced them to. Then SawStop started lobbying to have their own design mandated on all saws. It was only later that SawStop said they'd allow Bosch to sell again (presumably in order to collect patent license fees).

As to this proposed mandate... If it's mandating any safety device, and Bosch and others can freely compete without everyone paying SawStop, I'm all for it. But if it's mandating the SawStop design, or would require all competitors to pay SawStop, forget it.


You have the order wrong: first SawStop lobbied (2011), then Bosch introduced its saw (2015) and pulled it (2017). Then SawStop reached an agreement with Bosch in 2018. And the reason Bosch hasn't reintroduced it is apparently interference from cell phone signals. https://toolguyd.com/bosch-reaxx-table-saw-why-you-cant-buy-...


Correction to my post: apparently SawStop already got bought. It is owned by TTS Tooltechnic Systems which also owns Festool.


Similar background and experience with SawStop. I'm a huge proponent of SawStops, but it's important to be as upfront as possible. It's $100 for the cartridge and then another $60-$120 for a replacement saw blade.

Sweat dripping on the workpiece (especially in NoVA summers with the AC on the fritz) was responsible for a fair share of cartridge firings without contacting flesh.


N=few, but thank you for sharing this actual anecdata for those of us interested.


This is a good amount of data, but is $100 really the right replacement cost if the saw itself is actually damaged, as OP says? Is it your experience that the saw is almost never damaged and the replacement cost is almost always the ~$150 blade, or do you know how frequently these false firings damage the saw as well?


Well, only SawStop sells these saws, and I haven't seen anybody need to replace the saw after a firing. They just replace the blade and brake and get back to work.

Replacement cost is always brake and blade.

The blade is always dead. These things work by firing what looks to be an aluminum block directly into the blade.


Thanks!


I ran woodshop at a makerspace with multiple SawStops. We went through lots of cartridges and blades but never experienced damage to the rest of the saw. I have no idea where OP is getting that information/FUD.


> So, 4 spaces * 10 years * 12 months * $100 replacement = $48,000 paid in false firings vs 4 life changing injuries over 10 years.

Certainly reattaching fingers would be cheaper than $48k. That's a steal of a deal in the US.


Divided by four, right, so $12k? I would think the medical, rehab, lost wages/productivity, and disability costs of an average table saw hand injury would easily exceed $12k.


It is not that simple. Replacing a saw is a loss to the business owner, while an employee losing a finger by his own fault costs nothing to the company.


Fortunately, or unfortunately, depending on how you look at it, this is not true. If you are injured at the workplace while performing your work duties and you are not actively intoxicated on drugs or alcohol, then you are entitled to medical care and worker's compensation for that injury. It is absolutely something that has a cost to the company.


Apparently you've never heard of this thing called worker's compensation.


This is both factually incorrect and not funny at all.

In addition, last I checked, modern medicine cannot reattach nerves, so you lose a great deal of the functionality of your finger or hand even if you save it.

See: https://youtu.be/Xc-lIs8VNIc?t=1095

I hope I am simply missing the joke if someone would be so kind as to clue me in.


Yeah, you're missing the joke. The joke is the US healthcare system and how expensive it is.

A sense of humor might cost in excess of a finger reattachment, though.


If it engaged incorrectly, absolutely. If it saved my thumb and I have to buy a new saw as a result, it's hard to imagine a price point where I'd call the outcome not so great.


If it saves your thumb, sure. If you're ripping a wet piece of wood, no thumb risk at all, then, yeah, not so great.

Realistically, I don't like the tech or the methodology at all. Battle bots had saws that would drop into the floor without damage, and pop back up even, also without damage, and that was decades ago. That's the right model, not "fuck up the saw".


>Battle bots had saws that would drop into the floor without damage, and pop back up even, also without damage, and that was decades ago. That's the right model, not "fuck up the saw".

Might be wrong, but my own amateur reasoning has me believe that a table saw has far more kinetic energy than a battery powered battle bot, and that the SawStop must likely move the saw in microseconds, vs a battle bot which may comparatively have all the time in the world.


No, I mean they had table saw rigs that would bring the saw up from / down into the floor with an actuator as a 'ring hazard', i.e., your robot could be subject to sawing at any moment if it happened to be there.

The question is, how fast does it need to be? Likely not that fast really, certainly not microseconds, and an actuator could easily yank the saw down without damaging it if it detected you were about to lose a finger.

There's also no reason you couldn't use the same actuator to do fancy things, like vary cut depth on the fly, or precisely set the cut depth in the first place. Can't do any of that with a soft aluminum pad that gets yeeted into the sawblade when it detects a problem.

Basically, SawStop exists to sell saws. Those saws happen to be safer, but that's a marketing point, it's not what ultimately makes them money. Look at the incentives, you'll find the truth.


>The question is, how fast does it need to be?

I don't know - the marketing material actually says 5 milliseconds. That's the crux of the problem: I don't believe you can move the saw fast enough to avoid serious damage to the human without damaging the saw. The problem, as I understand it, is stopping the saw. An actuator only makes sense if it moves fast enough, and given that SawStop works on contact detection, I'm not convinced you have that much time.

I'm considering the physical reality here - if the saw must be yanked down quickly, how much force must be applied to the saw to move it, and then can that equal and opposite force be applied to stop it without damaging the saw?

>Look at the incentives, you'll find the truth.

This is true of any safety device? The SawStop inventor created his company after trying to license it, and eventually won in the marketplace after nearly 30 years. Surely his competitors would have released an actuator-based solution if it was possible, rather than ceding market share of high-end saws?


Bosch did release an actuator-based solution. They got sued by SawStop for patent violations and lost and pulled it from the market. SawStop's main patent just covers the concept of a blade brake, not a specific implementation.


The actual contention isn't whether an actuator-based solution would work; it's whether an actuator-based solution could stop the saw without damaging it (and therefore not give credence to the claim that SawStop is intentionally designing a poor solution in order to sell more blades).

As far as I can tell, REAXX also damages the blade.


REAXX retracts the blade inside the table and lets it coast to a stop. It does not damage the blade.


>and therefore not give credence to the claim that SawStop is intentionally designing a poor solution in order to sell more blades

This is the most asinine argument I’ve heard yet, and I am not a fan of this regulation.

Blades are typically cheap, and the ones that aren’t are often repairable after an activation. Also, SawStop barely sells any blades - I don’t know a single woodworker or cabinet shop that runs SawStop's own blades.


I think the speed at which things can go wrong when using a table saw (or most power tools) is faster than some people, including some woodworkers, might expect. There's a good example video here (warning, shows a very minor injury):

https://www.reddit.com/r/Carpentry/comments/11s6zlr/cutting_...

While we're still not talking microseconds, I think it highlights that moving the blade out of the way needs to happen very quickly in some cases to avoid serious injury.


Sounds like you're perfectly positioned to start a SawStop competitor!

"Protect your equipment AND your fingers."

With the government potentially mandating these types of devices, you could be makin' the big bucks!

These incentives are clear, where's the truth?

(This is only somewhat facetious. I'm skeptical of your claims, but not enough to discount them out-of-hand. The industry honestly does seem ripe for disruption.)


> The question is, how fast does it need to be?

According to my calculations, on a 10 in, 30-tooth blade spinning at a typical 4,000 RPM, a tooth passes roughly every 500 µs.
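The tooth-passing interval is easy to sanity-check. This sketch assumes a 30-tooth blade at 4,000 RPM (a typical no-load speed for a 10 in table saw; the RPM is my assumption, not stated in the comment) and works out to about 500 µs per tooth:

```python
# Time between successive teeth passing a fixed point on a spinning blade.
# Assumptions: 30-tooth blade, 4,000 RPM (typical 10 in table saw no-load speed).
def tooth_period_us(teeth: int, rpm: float) -> float:
    """Microseconds between successive tooth passes."""
    revs_per_second = rpm / 60.0
    teeth_per_second = revs_per_second * teeth
    return 1_000_000.0 / teeth_per_second

print(round(tooth_period_us(30, 4000)))  # -> 500 (microseconds)
```

Note that a brake doesn't need to act within one tooth pass; what matters is the total time from first contact to the blade being stopped or retracted, which is the subject of the next comment.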


I think the other key variable is how fast your finger is being advanced toward the blade, and how much total depth of contact you're willing to accept and still claim victory. If you're aggressively ripping, the material you're holding might be moving toward the blade at 10 mph (~15 feet per second); if you're willing to tolerate a 1/16" depth of injury, you have about half a millisecond.

If the rate of advance is much slower (a normal stock-feeding pace), you have several milliseconds between first contact and a 1/16" depth of injury.
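These time budgets can be checked with a few lines. The 15 ft/s feed rate and 1/16 in tolerated depth come from the comment above; the 1 ft/s "slow feed" figure is an illustrative assumption on my part:

```python
# Time from first skin contact until a given injury depth is reached,
# for a hand advancing toward the blade at a constant feed rate.
def contact_time_ms(feed_fps: float, depth_in: float) -> float:
    """Milliseconds to advance depth_in inches at feed_fps feet per second."""
    feed_in_per_ms = feed_fps * 12.0 / 1000.0  # convert ft/s to in/ms
    return depth_in / feed_in_per_ms

print(contact_time_ms(15.0, 1 / 16))  # aggressive rip: ~0.35 ms
print(contact_time_ms(1.0, 1 / 16))   # assumed slow feed: ~5.2 ms
```

The two scenarios differ by more than an order of magnitude, which is why the feed rate, not the blade speed, dominates how fast a brake has to act.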


Bosch used to have a system called Reaxx that could pull the saw out of the way without damaging it.


Sawblades are consumables and cheap enough (some are ~10-12 bucks) that it's probably a worthwhile cost.


An entry-level dado blade can run about $100. The $10-12 saw blades can't make finish cuts that are worth a damn, because they chew through the work and tear out splinters rather than making precise nips at the front and back of each grain of the wood. An entry-level blade that doesn't do this to your work can run you more like $60.

I know this because I've had to buy a table saw blade to replace a $10-12 one on my wife's table saw that someone threw on there because they were doing framing work.


I'm sorry, but this is a bizarre take to me. I don't care what happens to a saw if it would have otherwise cut my finger off.


How often do you use a saw? At $3500 a saw I care. I saw a lot of wood and inadvertently hit at least one staple/nail/screw per year. Over the last 20 years of using my saw that would be tens of thousands of dollars if even a portion of them damaged the saw. It would essentially price me out of doing woodworking.


SawStop works by detecting electrical conductance, and there are many reports of it misfiring when attempting to cut wood that isn't fully dry (i.e., there is moisture inside the wood, increasing its electrical conductance).


I'm aware. I'm not buying that a new saw blade and a replaced brake is too much of a cost over the peace of mind that you're at a significantly reduced chance of losing a finger.


And they're pointing out it's not just those two replaceable components - it's the _entire saw_ that they're risking destroying off a false positive that some woodworkers will hit frequently.


That just means the tech is not ideal, not that I want table saws without it.


They are suggesting the brake fired and broke the saw in a situation where there was no risk to any finger. Maybe there was literally a hot dog in the wood.

> If you're ripping a wet piece of wood, no thumb risk at all


How many expensive false alarms are you willing to accept, per serious injury avoided?

I'm no expert in this, but I'd say 'definitely way more than one'.


And many people have experienced those ratios.


Most people would rather go bankrupt than lose a finger. Fingers are kind of important. If I can choose to keep my house or my finger, I’m definitely choosing the finger.

So just divide the average net worth of a saw operator by the cost of a saw to get how many saws a finger is worth.


Really? I would definitely rather lose a finger than go homeless. Homeless people have far, far worse life outcomes than people with missing fingers.


A specific person isn’t the average homeless person, who tends to be dealing with addiction, physical or mental illness, past incarceration, etc.

So talking about the average outcomes of a random homeless person doesn’t really apply here.


Exactly, homeless people living on the street should really be called familyless.

If I went bankrupt and lost everything I have a social safety net of family members who would put a roof over my head until I got on my feet again. Only people without that safety net end up on the streets. Or they have addictions that mean their family can’t take care of them anymore.


Many older woodworkers have lost fingers, often multiple fingers across multiple accidents.

So, the risk is really quite high here.


The guys I've seen lose fingers were all sleep-deprived and working flat out. The biggest risk to site safety is sleep deprivation and physical exhaustion.


SawStop makes these. The saws, tables, and fasteners are beefy enough to survive.


It works on conduction and capacitance. It's not immune from false positives.


Frankly, the biggest problem is that this makes it impractical to test the brake. How do I know the brake even works, if testing it is not practical?


I do it so seldom and am so careful not to put my fingers within 3 inches of the blade that this is a non-issue for me. This is another one of those "let's put 6 extra buttons that all need to be pressed to start the saw!" kinda situations that doesn't do anything to improve safety because the stop is the first thing you disconnect if it throws a false positive.

If we're concerned about job site injuries then let's address the real problem, which is that a lot of people using these things do so as fast as humanly possible with little regard for set up, site safety, or body positioning because the amount of money they will lose by doing that eats so much margin out of their piecework that it's not worth it. As usual we don't want to solve the hard problem of reducing throughput to improve safety, but we're perfectly happy to throw a part that is as expensive as the sawblade on the unit just to say we're doing something.


"If we're concerned about job site injuries then let's address the real problem, which is that a lot of people using these things do so as fast as humanly possible with little regard for set up, site safety, or body positioning"

Solving that sounds a lot harder to me than legislating that saws have safety features.


The point is that there's at least one cheaper solution that still saves your thumb, but it's being regulated out of the competition.


I tend to agree, assuming there are no false positives. Admittedly, I’m not sure how often that occurs, nor if we even can know that based on all the various work environments the cheap table saws are being used in today.


This is true. It can also fracture the motor mounts without anyone noticing, until you're performing a difficult, aggressive cut and the mount breaks with the motor still spinning, sending your board across the room or into your face.


Your board shooting into your face has always been a concern with saws. Hence why you don't stand in the line of fire when making cuts.


I ran the woodshop at a local makerspace. We went through a lot of SawStop cartridges, easily 10-15 a year. The saw was never damaged by a cartridge firing.


> That's of course great, if you're in the business of selling saws, not so great if you're in the business of buying saws.

OTOH (literally?) keeping your fingers but having to buy a new saw seems pretty reasonable.


I've seen all of the talking points, but a regulation probably is required simply to force liability.

The biggest "excuse" I have seen from the saw manufacturers is that if they put this kind of blade stop on their system that they are now liable for injuries that occur in spite of the blade stop or because of a non-firing blade stop. And that is probably true!

Even if this specific regulation doesn't pass, it's time that the saw manufacturers have to eat the liability from injuries from using these saws to incentivize making them safer.

As for cost, the blade stops are extremely low volume right now, I can easily see the price coming down if the volume is a couple of orders of magnitude larger.


We had one of these in my high school woodshop; they would demo it once a year on parents' night because of the expense. I'd rather see this regulated in a way that says places like schools or production woodshops need these for insurance purposes, but home woodshops wouldn't be required to have them.


Why are radial arm saws so dangerous? I have an old one and other than shooting wood into the shop wall when ripping, or holding the wood with your hand it seems pretty hard to hurt yourself. Circular saws seem way more dangerous, and the only injury I've ever had was from a portaband.


There used to be some pretty wild published advice on how to use a radial arm saw, including ripping full sheets of plywood by walking the sheet across the cutting plane with the saw pointed at your stomach. They also travel toward the operator in the event of a catch, because of the blade's rotation direction and the floating arbor. That makes positioning yourself out of the blade's potential path critical, and the one thing we know is that you can't trust people to be safe on a job site when they're in a hurry.


>There used to be some pretty wild published advice on how to use a radial arm saw including ripping full sheets of plywood by walking the sheet across the cutting plane with the saw pointed at your stomach.

So, similar to ripping plywood on a table saw, then? What makes one worse than the other here?

>They also travel towards the operator in the event of a catch because of the direction of the blade and the floating arbor.

So, like a modern sliding miter saw, then? What makes one worse than the other?


I have the original manuals describing how to rip using a radial arm saw. The blade sits at the level of your stomach, mere inches away, as you walk a sheet of plywood along it. There are so many ways for that situation to go wrong, and so few ways to make it safe. I have a beast of a radial arm saw, and I set it up to rip out of curiosity; it would be insane to ever do it that way. It will cut your guts wide open if you so much as slip.

And when that saw bites, it comes at you with more force than you can react to. If parts of you are in the way, it'll rip right through them as it punches you in the jaw.



You do realize you linked to a discontinued product that costs over $5k?

This is what I actually expect to happen to the table saw market - they all become expensive, and the sub-$1k market (which is huge) goes away. Yes, you can find an RAS but it's about 10x the price of what they used to be.

I found a RAS from Sears from 1995: $499, which is around $1000 with inflation. https://archive.org/details/SearsCraftsmanPowerAndHandTools1...

So I stand by my statement: they're effectively non-existent, demand is gone after the 2001 recall by Craftsman, and most of the major manufacturers have stopped producing them. I expect the same thing to happen to table saws.


They literally just updated the model number yesterday as they wait for new stock to arrive in July.

It even says so right under where it says discontinued. Specs are exactly the same.


The “great recession” didn’t hit tech the way the dotcom bust did, so millennials and younger really hadn’t lived through a lean era in tech like the one we’re in now. This is easily the worst jobs market since the dotcom bust, so it’s been a while.

Nothing exposes how shallow leadership decisions can be like seeing how arbitrary layoffs are.

This is the period that actually defines good leadership. A good leader, at least to me, won’t be led by investors into making mostly short-term decisions; they will define how their organization needs to evolve and challenge it to make that happen. But what I mostly see are leaders doing stupid things like RTO followed by layoffs (goodbye, loyalty!), then shrugging and whining about having to maintain margins while cutting money-losing projects that shouldn’t have been started in the first place. And then turning around and saying “the future is AI” without really having much of a plan.

The leadership BS really stinks during these times, and it’s something that the younger generations haven’t really experienced.


I find that architecture should benefit the social structure of the engineering team, and there are limits. I work on one of these “simple architectures” at large scale… and it’s absolute hell. But then, the contributor count to this massive monorepo + “simple architecture” hell numbers in the thousands.

Wave Financial is only 350 people according to Wikipedia, and I doubt that’s 350 engineers. Google and Meta are the only companies I know of that can even operate with a massive monorepo, and I wouldn’t call their architectures “simple”. Even they make massive internal tooling investments - Google wrote its own version control system.

So I tend to think “keep it simple until you push past Dunbar’s number, then reorganize around that”. Once stable social relationships break down, managing change at this scale becomes a weird combination of incredible rigidity and absolute chaos.

You might build some stopgap utility, and a month later 15 other teams are using it. Or some other team wants to change something for their product and just submits a bunch of changes to yours, with unforeseen breakage. Or some “cost reduction effort” halves memory and available threads, slowing down background processes.

Keeping up with all this means managing hundreds of different threads of communication. It’s just too much, and nobody can ever ask “what’s changed in the last week?” because the answer would be a novel.

This isn’t an argument for monoliths vs microservices, because I think that’s just the wrong perspective. It’s an argument to think about your social structure first, and I rarely see this discussed well. Most companies just spin up teams to make a thing and then don’t think about how these teams collaborate, and technical leadership never really questions how the architecture can supplement or block that collaboration until it’s a massive problem, at which point any change is incredibly expensive.


The way I tend to look at it is to solve the problem you have. Don't start with a complicated architecture because "well once we scale, we will need it". That never works and it just adds complexity and increases costs. When you have a large org and the current situation is "too simple", that's when you invest in updating the architecture to meet the current needs.

This also doesn't mean to not be forward thinking. You want the architecture to support growth that will more than likely happen, just keep the expectations in check.


> Don't start with a complicated architecture because "well once we scale, we will need it".

> You want the architecture to support growth that will more than likely happen

The problem is even very experienced people can disagree about what forms of complexity are worth it up-front and what forms are not.

One might imagine that Google had a first generation MVP of a platform that hit scaling limits and then a second generation scaled infinitely forever. What actually happens is that any platform that lives long enough needs a new architecture every ~5 years (give or take), so that might mean 3-5 architectures solving mostly the same problem over the years, with all of the multi-year migration windows in between each of them.

If you're very lucky, different teams maintain the different projects in parallel, but often your team has to maintain the different projects yourselves because you're the owners and experts of the problem space. Your leadership might even actively fend off encroachment from other teams "offering" to obsolete you, even if they have a point.

Even when you know exactly where your scaling problems are today, and you already have every relevant world expert on your team, you still can't be absolutely certain what architecture will keep scaling in another 5 years. That's not only due to kinds of growth you may not anticipate from current users, it's due to new requirements entirely which have their own cost model, and new users having their own workload whether on old or new requirements.

I've eagerly learned everything I can from projects like this and I am still mentally prepared to have to replace my beautifully scaling architectures in another few years. In fact I look forward to it because it's some of the most interesting and satisfying work I ever get to do -- it's just a huge pain if it's not a drop-in replacement so you have to maintain two systems for an extended duration.

