nomercy400's comments | Hacker News

I know nothing of baseball.

If pitching evolves faster than hitting, does that mean the response time of the hitter becomes shorter? Can't you move the pitcher further away to give the hitter more time to respond?


Not from the US and therefore I know nothing about it either. I thought a torpedo bat would be something like this:

https://static.odealo.com/uploads/auction_images//6441500406...

But in comparison, these new bats look exactly like the old ones...


MLB could move the mound back or lower it again, as was done in 1969 after the 'Year of the Pitcher', but it's not that simple.

The other crisis baseball faces is pitcher arm health. The mere act of throwing a ball 90-105 mph is damaging to the arm, and it only gets worse the harder you throw. Every pitcher is chasing velocity and spin rate, since the resulting success and money are undeniable. Pitchers frequently need major surgery and year-plus recovery times as a result.

If the mound is moved back or lowered, pitchers will respond by doubling down on chasing velocity just to stay level, leading to more injuries and UCL replacement surgeries.

The same incentives apply to other options to give batters an edge, like juicing the ball or shrinking the strike zone. Pitchers will respond with velocity and blow up their arms.


> Pitchers will respond with velocity and blow up their arms

They seem, from the outside, like they'll do this no matter what. Move the mound back, allow torpedo bats or don't: do you think pitchers will intentionally pass up the money and success?


I am kind of missing how the 'pure' query problem is solved.

Say I have 10000 rows, and my authorization gives access to 3 of those rows.

With security in the database, you return 3 rows. From what I can read, the protect pattern returns 10000 rows from the database, but discards 9997 of them afterwards. Doesn't this increase load and memory usage? Shouldn't there be a balance?
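Roughly the difference I mean, sketched with an invented db client and schema:

  // App-side filtering: all 10000 rows cross the wire, 9997 are discarded.
  const allDocs = await db.query("SELECT * FROM documents");
  const visible = allDocs.filter((doc) => doc.authorId === user.id);

  // DB-side filtering: only the 3 authorized rows ever leave the database.
  const visibleFromDb = await db.query(
    "SELECT * FROM documents WHERE author_id = $1",
    [user.id]
  );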


Besides those performance considerations, the article starts off by saying you need a data access layer, but seems to have no idea about controllers and middleware? Seeing stuff like...

  if (!(user.isAdmin || document.authorId === user.id)) {
    forbidden();
  }
makes me almost a bit angry.

Isn't every contemporary authorization system shifting towards ReBAC (based on Google's Zanzibar paper)? The ReBAC paradigm favors separating authorization logic from business logic. You could even reimplement ABAC/RBAC styles on top of it if you prefer, but your application layer shouldn't need to care.

  // this one is usually done in middleware layer
  const user = await resolveUser(req);
  if (!user) {
    return res.sendStatus(401);
  }

  // this one is usually done in the controller layer
  const canReadDocuments = await auth.canReadDocuments(user);
  if (!canReadDocuments) {
    return res.sendStatus(403);
  }

  // all from here is usually done in the service layer
  const canReadAllDocuments = await auth.canReadAllDocuments(user);
  if (!canReadAllDocuments) {
    return findManyDocuments(user);
  }

  return findManyDocuments();
How is protecting individual queries (after retrieval!) more scalable?


This is not what I intended to communicate with this article. The Kilpi.filter pattern is not the primary point of the article; it is only a minor utility provided by Kilpi for special cases. I do not suggest fetching all rows and returning only the authorized ones. The inner query function should still be performant and return only the requested data.

My point was to show how you can co-locate your queries and authorization logic, just as you would with any sensible data access layer. This approach keeps the inner function pure and, for example, easily cacheable with the upcoming Next.js "use cache" directive, and it also allows you to bypass the authorization logic when your application requires it.
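In rough pseudocode (an illustrative sketch, not the exact Kilpi API), the shape is:

  // Authorization is co-located with the query, but the inner
  // function stays pure: no auth, no request context, cacheable.
  const getDocument = protectedQuery({
    query: (id) => db.document.findUnique({ where: { id } }),
    authorize: (user, doc) => user.isAdmin || doc.authorId === user.id,
  });

  // Normal call: runs the pure query, then the authorization check.
  const doc = await getDocument.protect(user, documentId);

  // Explicit bypass for trusted internal code paths.
  const raw = await getDocument.unsafe(documentId);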

I hope this clarifies my intent.


Deregulating a basic human need and leaving it to the 'market' to solve sounds a lot like other privatization efforts of the past decades.

In my country healthcare, child support, energy, national railway, postal services, public housing, banking and more have all been privatized.

I worry about this. Not for now, but for 20 years in the future, where all energy is managed by companies and the government can no longer control the market due to being 'too big to fail' and because it gave all control away.


As the article says,

> Deregulated markets, which might be better described as “differently regulated” markets, separate these concerns. Independent power producers (generators) build and sell electricity wholesale to Retail Electric Providers (REPs), who sell to households, while utilities manage the transmission and infrastructure.

On the face of it this makes sense to me. Deregulation doesn't mean there are no regulations at all. It means we split up a monolithic entity into different parts that can each be subject to competitive forces and be regulated differently.


No, only one part is subject to competitive forces (generation). The other two, transmission and distribution, are agreed by any rational party to be a textbook case of a natural monopoly and are subject to strict regulation for that reason.


There is nothing wrong with markets solving basic human needs as long as the incentives are properly aligned.


Company and customer incentives are fundamentally not aligned.

Companies want to maximize profit and minimize cost, while customers want as much as possible for the lowest price.


And companies have to compete for customers on cost, which yields an equilibrium that works out well for customers on most goods and services. This is well established at this point.


It is not well established at this point. I get the impression you're reading a textbook on economics, and not how it works in the real world.

Take covid inflation, for example. Covid caused supply chain issues, which then caused a temporary bout of inflation; however, companies took this as an opportunity to increase their profit margins more than they needed to. They specifically bragged about this in quarterly earnings calls.

If companies truly compete on cost, then how does stuff like that happen? Shouldn't it have been corrected by now? But that's not what we see; we see companies continue to raise prices with little consequence.


> It is not well established at this point. I get the impression you're reading a textbook on economics, and not how it works in the real world.

You're writing this from the comfort of your home, where you're enjoying the fruits of the most significant quality-of-life improvements of the last 200 years. Hedonic adaptation has blinded you to all of the evidence that's staring you in the face, like the screen you're reading on right now.

> however, companies took this as an opportunity to increase their profit margins more than they needed to. They specifically bragged about this in quarterly earnings calls.

Temporary aberrations from extreme disruptions do not refute the general rule.


How does one properly align the incentives of people and corporations?


Private goods and club goods are provided by markets because incentives are already aligned.

Public goods and common goods need artificial markets created by regulatory frameworks.


Probably some kind of basic income scheme, so that no people will become invisible to the corporations.


Regulations, of course! Of which there are many for Texas' electricity markets.

IMHO, you can't have markets without regulations, and the markets that work best are those in high-trust societies, and that high trust usually comes from a solid regulatory structure (i.e. laws).

This is why saying "deregulation is bad" is just as incoherent as saying "regulations are bad." We need to move beyond that sort of fallacious dichotomy.


How do we keep "the incentives [...] properly aligned" when "the government can no longer control the market due to being 'too big to fail' and because it gave all control away"? And why can't I find anything about this in your generic pro-market answer?


Yes it does, because it leaves the fulfillment of those necessities to the whims of the market. If suddenly an endeavour is not profitable any more, or is simply less profitable than an alternative, what happens? You have to subsidise it, at great cost, leaving you at the mercy of those privatised concerns.

Neither public services nor private enterprise is a silver bullet to everything.


> If suddenly an endeavour is not profitable any more, or is simply less profitable than an alternative, what happens?

They pivot or they slowly die.


And in what world is the incentive of cutting costs and price gouging - of necessities no less - aligned with the incentives of the vast majority of mankind, who would just like energy, healthcare, housing, public infrastructure, etc. to be as affordable and high quality as possible?


The incentive of cutting costs directly leads to affordability.


Companies cut costs to increase their profits, not to pass the savings on to customers. If there is price pressure, they may pass on the savings. But at the latest once the market is sufficiently consolidated, they will prefer to keep those savings for themselves. And even if that were not the case, affordability still would not directly follow.


No. Quite obviously, cutting costs increases profit, which goes into reinvestment to generate more profit, and so on. It has no relation to prices unless the business happens to be losing customers to competition, which is a separate matter. Prices are lowered only as far as doing so increases the rate of profit. Increasing affordability past that point for the good of the consumer is strictly against the incentives of the business, since it prevents the growth of capital and thereby hampers the business in competition.

Also, what is quantitatively cost-cutting qualitatively looks like the enshittification of goods and services in practice, and unlike in undergrad economics textbooks, consumers rarely have recourse in the form of switching brands, since basically all "markets" for necessities are oligopolies (often thanks to government contracts for public works in an increasingly privatized world, if not simply the natural global minimum of any market).

On a basic level, the point of putting a commodity on the market is to sell it to the highest bidder. Why is this the preferred way to distribute necessities? It certainly "aligns" with one particular incentive - that of the seller - not that of most people. Doesn't everyone need access to healthcare, housing, energy, etc.? Are poor people to compete with people who can outspend them several times over for food and housing? As the cost of living continues to increase, does it make sense to hand over an increasing portion of our wages for the same - or worse - standard of living?

If you want freedom of choice in what you consume and have the means to do so then go ahead and turn to a market to buy a penthouse or gourmet food or whatever. But why is it such an offense to the current hegemonic ideology to ensure that there is basic universal access to essential resources?


Deregulation does work to address healthcare, child support, etc.

It just takes way longer to reach market equilibrium than free-market advocates claim. And by way longer, I mean years or even decades rather than months or weeks, which means a lot of suffering or externalities during the transitional period.

Healthcare was once deregulated in the U.S. And it was fine. And it would be again if we deregulated healthcare, but it would take more than a decade and nobody has the stomach for that. (And nobody has yet made a compelling case to demonstrate that a deregulated healthcare system would be a sufficient improvement over what we have now to justify the suffering that the changeover would cause.)


The one to really keep our eye out for is water. Investors are buying up water rights in drought-afflicted western states.


Doesn't changing how your CPU's microcode works mean you can bypass or leak all kinds of security measures and secrets?


Yes.


Wow, so providing a tool for bypassing the protection mechanism of a device (CPU) is accepted when it comes from Google?

Try this on any game console or DRM-protected device and you are DMCAed before you know it.


We broke the "encryption" (more like scrambling) of the AMD K8 and K10 CPU microcode updates. We released tooling to write and apply your own microcode updates. AMD did not take any action against us. Granted, this was a university project, so we were clearly within an academic context, but we were in no way affiliated with a too-big-to-sue company.

https://www.usenix.org/system/files/conference/usenixsecurit...

https://informatik.rub.de/veroeffentlichungenbkp/syssec/vero...

https://github.com/RUB-SysSec/Microcode


> Granted, this was a university project, so we were clearly within an academic context, but we were in no way affiliated with a too-big-to-sue company.

Even without AMD's supposed goodwill, or if AMD had seen things a different way, being a) affiliated with a university and b) outside the USA may have changed some of the equation.


How much do the Zen CPUs of the current microcode reverse engineering have in common with the K8/K10 you looked at?


I have not looked at the format of the microcode yet, so this is only based on the blog post and discussions. K8 and K10 were based on Risc86, just like Zen seems to be. There are some parallels, especially when it comes to sequence words and branch delay slots. There are also major differences, like moving from triads to quads. I assume there are quite a few similarities, but the current authors are better qualified to answer this at this point.


Any encryption/signature that can be broken in software on affordable hardware is just that: BROKEN.

What is your theory of harm? Who is harmed and how? Why should the law protect them by restricting the freedom of others?

AMD *sold* these CPUs to customers potentially running this tool on their hardware. Does that make you think AMD should be entitled to restrict what the public is allowed to know about its products, or do with them post-sale?

Also, if AMD is still in control, shouldn't they be liable too? Should users get to sue AMD if an AMD CPU gets compromised by malware, e.g. the next side-channel attack?

I might start to feel some sympathy for AMD and Intel if they voluntarily compensated all their customers for the effective post-sale performance downgrades inflicted on them by the mitigations required to make their CPUs fit for purpose.


DMCA 1201 says ANY decryption without permission from a copyright holder (with some exceptions that are in practice pretty minor) is a federal crime. Yet one more on the pile of "three felonies a day" to hold over the masses to keep them in line.


If this was the case, reading your comment (when delivered over TLS in a relevant jurisdiction) would be one such crime.


It was decrypted on my authority as the author by virtue of whatever license the Hacker News TOS requires in order to store and transmit my posts to the public.


Are you talking about legalities? AFAIK hardware jailbreaking/homebrew tools are fine even in jurisdictions blighted with the DMCA unless they're specifically for circumventing DRM.

If it's more about morals: publishing vulnerability research tooling is generally business as usual for white-hat vulnerability researchers, whether at bigcorps or not, and it has a long history. It seems surprising to see this kind of "not cool" comment on this site.


> AFAIK hardware jailbreaking/homebrew tools are fine even in jurisdictions blighted with the DMCA unless they're specifically for circumventing DRM.

Certain Japanese video-game companies would take issue with that interpretation of the facts. Of course, there is the arbitrary distinction between 'access' and 'copy' control mechanisms, something arguably made irrelevant by the further integration of general personal-computing concepts into certain video-game systems.


We live in an age where it's okay to pirate terabytes of data if you're Meta.

‘In the courts, you will be deemed either innocent or guilty, according to your wealth or poverty.’


What a sad and anti-hacker mentality comment.


It's not who releases it, it's who is the target that makes the difference. AMD chooses not to sue the researchers, whereas a game console maker would probably sue.


Ok, but that's on "you" being braindead enough to release something like that on GitHub. Release it anonymously and the DMCA paper becomes toilet paper.

Same with apps, aka everything is open source if you know RE ;-)


Only if it's a Nintendo console :)


Nintendo just shut down some GitHub repos over lousy DMCA claims:

https://github.com/github/dmca/blob/master/2025/02/2025-02-2...


Why do you need crypto for that? Why not just 'a share of the company'?


It's similar but slightly different:

1) It's not a security, so it's not a share of the company. In order to be compliant with US law, we can only offer a utility token.

2) Crypto is a 10x better underlying data structure and architecture for the financial system: https://ortutay.substack.com/p/the-computer-science-case-for...


How do you compile for multiple platforms? Do you have separate machines for each platform?

Do you use any CLI library in front? Is 12-20 MB acceptable for a CLI executable?

Working on a CLI tool PoC for work here, hence the interest.


Like the other poster, we have to compile it separately on machines with different architectures (we are on Bitbucket). We use Quarkus as the base and picocli as the CLI: https://quarkus.io/guides/picocli. Quarkus takes care of a lot and makes the native-image experience nicer. Size-wise, for internal use our users don't complain, since we are all devs.

I think you can shrink the size with one of the optimisation levels https://www.graalvm.org/latest/reference-manual/native-image...


Sorry, I know you weren't asking me, but for this same use case, yes, I've used a GHA build matrix with each OS/arch pair.

Cosmo/APE support would fix this, and GraalVM already ships with a Musl libc implementation, so it isn't very far off.

https://github.com/oracle/graal/issues/8350


EUV is all about the laser. To create small transistors, at 4nm for example, whether with lithography or some other laser tech, you need to be able to shine pure light of a small enough wavelength on them. EUV is the smallest wavelength we can create thus far.


I suppose that the real issue is that EUV is the smallest we can focus thus far. Otherwise we'd be using harder X-rays from things like synchrotron light sources, no?


You want to affect the surface, not penetrate deep into the material as X-rays would, so you can't actually go too low in wavelength either.


Isn't that a significant research field? New semiconducting crystals and materials that would be friendly to X-rays?


For me it is a data vs code thing.

If I run my application/code v1 right now, I generate data. I expect that if I move to application/code v2, I can leave my data in place and it will automatically apply changes to my data.

I do not get that with Postgres. If I am on Postgres 16 and I want to upgrade to Postgres 17, I want to leave my data folder untouched. When I then start Postgres 17, it should just work (tm).

It should also work with code that assumes Postgres 16, so I can upgrade my database separately from my application. I cannot wait 10 days for a large database to be migrated from 16 to 17 without being able to run it. However, I can wait 10 days before updating my code to support features in 17.

The current upgrade process does not give me that confidence in data safety and uptime, so I don't upgrade until I really have to.


Wero is based on the Dutch IDEAL system. Because the Dutch already have a good system, they are not joining right away, but will join at a later time.


The most baffling part of it is that at first glance it seems nothing like the Dutch IDEAL system.

The IDEAL system is a replacement for card-based online C2B payments. Rather than entering an incredibly insecure credit card number, the storefront redirects you to your bank, where you safely log in, confirm the payment, and get redirected back to the store. It's essentially OAuth for payments. The only annoying part is having to manually select your bank, but even that is now a non-issue with a single unified QR code you can scan in any mobile banking app.
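Schematically, the flow is something like this (the bank client and routes are invented for illustration):

  // 1. The storefront registers a payment with the customer's bank.
  app.post("/checkout", async (req, res) => {
    const payment = await bankClient.createPayment({
      amount: req.body.amount,
      returnUrl: "https://shop.example/payment/callback",
    });
    // 2. The customer is sent to their own bank to log in and confirm.
    res.redirect(payment.authorizeUrl);
  });

  // 3. Back from the bank: verify the status server-side rather than
  //    trusting the redirect alone.
  app.get("/payment/callback", async (req, res) => {
    const status = await bankClient.getPaymentStatus(req.query.paymentId);
    res.redirect(status === "paid" ? "/thanks" : "/payment/failed");
  });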

On the other hand, https://wero-wallet.eu/ tells me that Wero is a C2C system, so basically an alternative to PayPal, Venmo, Tikkie, or all the bank-initiated payment request implementations we've seen pop up over the last few years. You can send and receive money via your mobile phone number. So how is this supposed to be like IDEAL?

Reading the FAQs, it feels like nothing more than a strictly worse version of Tikkie, but potentially with an added attack vector for your bank account. In other words, I see very little reason to switch to it.


Search: