America has plenty of the wrong type of oil. It needs heavy oil, since that's what US oil refineries are built to handle, but it has a shortage of heavy oil and an oversupply of light oil. Venezuela has the heavy oil it needs.
Something like an H100 is definitely a feat of engineering, though.
Nothing prevents Cooler Master from releasing a line of GPUs equally performant and, while at it, even cheaper. But when we measure reality, after the wave function of fentanyl and good intentions collapses ... oh yeah, turns out only nVidia is making those chips, whoops ...
Looking around a bit, the price was ~$70k USD _in China_ around the time they were released in 2023; cheaper bulk sales were a thing, too.
Note that these are the China prices, with a high markup due to export controls etc.
The price of an H800 80GiB in the US today is more like ~$32k USD.
But to use H800 clusters well you also need the fastest possible interconnects, enough motherboards, enough fast storage, cooling, a building, uninterrupted power, etc. So the cost of building an "H800"-focused datacenter is much, much higher than GPU cost multiplied by GPU count.
You can’t buy the GPUs individually, and even if you can on a secondary market, you can’t use them without the baseboard, and you can’t use the baseboard without a compatible chassis, and a compatible chassis is full of CPUs, system memory, etc. On top of that, you need a fabric. Even if you cheap out and go with RoCE instead of IB, it’s still 400 Gb/s HCAs, optics, and switches.
Yeah, a node in a cluster costs as much as an American house. Maybe not on its own, but making it useful for large-scale training, even under the new math of DeepSeek, costs as much as a house.
They estimated $200k for a single NVIDIA GPU-based server, complete with RAM and networking. That's where my number came from. (RAM and especially very-high-speed networking are very expensive at these scales.)
"Add it all up, and the average selling price of an Nvidia GPU accelerated system, no matter where it came from, was just under $180,000, the average server SXM-style, NVLink-capable GPU sold for just over $19,000 (assuming the GPUs represented around 85 percent of the cost of the machine)"
That implies they assumed an 8-GPU system. (8 × $19,000 = $152,000 ≈ 85% × $180,000)
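The back-of-envelope arithmetic from that quote can be checked directly (all figures are the ones quoted above, not my own):

```python
avg_system_price = 180_000   # average Nvidia-accelerated system (from the quote)
avg_gpu_price = 19_000       # average SXM/NVLink-capable GPU (from the quote)
gpu_share = 0.85             # GPUs assumed to be ~85% of system cost

gpus_per_system = avg_system_price * gpu_share / avg_gpu_price
print(round(gpus_per_system))  # -> 8, consistent with an 8-GPU system
```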
While this was in Ireland, it's worth specifying that Royal Mail was responsible for this wholesome delivery as it was in Northern Ireland (currently in the UK), so it's not immediately obvious that this applies to the Republic as well.
Incidentally the Republic of Ireland didn't have a nationwide postcode system until 2015.
Prior to that I used to address letters to a relative there as Name, Village, County, and they were delivered! It turned out there were two people of the same name in the village, but the postie saw the UK-originating stamp on the letter and delivered it to the applicable one.
Around the same time as the Eircode was introduced, water meters were installed for every property in Ireland. They have never been used, because of public outcry over the idea of being charged for water use. Don't imitate Ireland.
Oh definitely, my townland is at least a couple of estates, at 100+ houses each.
And then there are the creatively named estates, where the whole street name is stop words. One near me has "[number] The Avenue", "The Court", "The Close", "The Drive".
Townlands are optional, or it can be the name of a large road near you. I think there are at least 3 addresses corresponding to my house that are used by official government mail, one of the things that EirCodes were supposed to prevent.
MITM for TLSv1.3 is possible; plenty of solutions are available for enterprises to do this. The MITM still happens for TLSv1.3 at the key exchange, allowing the subsequent certificate to also be MITM'd and replaced (it is now sent encrypted). The only real effect TLSv1.3 has on MITM is that company policies for decryption can't match on the certificate to determine whether decryption should occur, but they can still use the SNI, which is plaintext.
"In OpenSSL 1.0.x, a quirk in certificate verification means that even clients that trust ISRG Root X1 will fail"
All current FIPS-accredited devices use OpenSSL 1.0.X, so the Let's Encrypt cross-signing hack will essentially break multiple corporate networks until the next OpenSSL FIPS module is released at the end of this year. And it could take another 6 months to make it into live systems.
This is a great start. More or less all web sites are technically non-compliant with Australian government security standards (ISM) because TLS has diverged so widely from NIST and those standards dictate NIST approved cryptography.
Nobody cares, of course, but it causes pointless conversations and wasted time with auditors.
I'm just pointing it out because I once confidently stated that Curve25519 illustrates everything that is wrong with FIPS, which would, on principle, never accept it, and was thoroughly served with the existence of this document. :)
I just want to run a "best-practice-ish" TLS setup and have that be compliant :(
Re: FIPS, agree that it is fractally bad [Fully realise I am preaching to choir].
The funny/sad part is that there are financial incentives to be able to say "yes" to customers inquiring about "FIPS compliance" which perpetuates the sham. Service providers (e.g. Amazon, Azure) then necessarily apply "compliance lawyering" (selective interpretation and omission) to give themselves a tick in the box. They can get away with this because their customers are also only pretending to care.
All this serves to create a false impression that "FIPS compliance" might be a real property of nontrivial systems rather than a form of expensive signalling.
My experience with HW HSMs has been that the FIPS process is so expensive that companies are only willing to put out a new FIPS-certified version once a year. Also, the certification itself seems to be more concerned with high-level security requirements than with proof that any particular features of your HSM work correctly.
So the answer to any particular bug is typically wait until next year's version which includes all bug fixes that the normal releases have built up over the past year, or re-evaluate if you really need the certification.
That is a pretty skewed interpretation. FIPS mode does things for you like flag uses of the same private key for encryption and authentication, it prevents the use of weak keys, and prevents use of hobbyist or non-approved algorithms including some sketchy PRNGs. The executable signing also makes monkey-patching harder, so it's more difficult to hook into an implementation and compromise it without detecting this at the compilation stage. That can and does have real security benefits.
The downside of FIPS mode is that because the certification process is so costly and time-consuming, it will generally run behind and not get the latest algorithms until a few years have passed. That type of conservatism in cryptography can be good or bad, but overall I'd rather use a FIPS system than not, given the large number of dubious systems in use: the FIPS system will be more secure than the average non-FIPS system, but less secure than a non-FIPS system carefully reviewed by experts.
AFAIK what Let's Encrypt did is not a "hack" and is perfectly valid. It sounds like users who have a FIPS requirement need to fix it for their own use case, since it's a bug in what they use and it's already fixed for everyone else.
Many enterprises use a FIPS SSL proxy for all employees' web traffic, so all websites with these Let's Encrypt certificates will effectively be invalidated if the proxies are using OpenSSL FIPS modules; same for FIPS client-side applications.
It seems quite silly to me to enforce a massive MitM attack while at the same time sticking to the FIPS standards. Then again, a lot of governmental and financial security requirements are nonsensical to me, like mandatory password changes.
When I, as a website host, need to choose between accepting millions of Android devices or a few organizations with an esoteric security configuration, I'll go for the Android devices.
AFAIK Windows FIPS mode is unaffected by the OpenSSL bug, so not all FIPS modules will have trouble with the Let's Encrypt certificate. A Windows-based MitM-attack won't have this problem.
The best solution here would be for OpenSSL to have a FIPS release ready before September, or to release a patched version of 1.0.X, but that still won't help companies that cannot or will not update their software.
Some of it is misguided, some of it is legacy, other parts _do_ make sense to the people involved.
Mandatory password changes for example have not been recommended[0] by NCSC in the UK since ~2018. Continuing to do so is either legacy or misguided.
As for "MitM", it's usually due to regulatory requirements to protect and inspect at the boundaries to and from an organisation's network.
FIPS and OpenSSL is an interesting subject. Many organisations rely on it, yet relatively few contribute financially. When 1.1.X and subsequent versions came along and had no FIPS 140-2, orgs were forced to wait it out until someone else pays to get it accredited or pony up and help the process along. I haven't looked lately at how much has been contributed to the effort but I suspect it's still pretty low considering how much of the world relies on OpenSSL.
Mandatory 90 day password changes are still required by the IRS in the US at least.
High complexity / weird rules too - and not one password across systems, as they have endless DIFFERENT login systems.
So your tax software itself will require 90 day resets for all staff using that, every interface to IRS requiring it (which means every login for little used systems). It's bonkers. My worry - how do they even correlate / track login risk given all these different systems. Google (which has never required a password rotation) seems to be able to really figure out when risk is higher (new device from a new location) and lower (same device from 5 minutes ago). That makes turning on 2 factor with a hardware device MUCH easier - because it doesn't annoy you unnecessarily.
90 days is such a silly time frame. It won't defend against passwords like Spring2018! (11 characters, capital letter, special character, yet completely predictable), and people will only pick easier passwords when they're forced to pick new ones.
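To make the point concrete: a password like Spring2018! satisfies typical complexity rules, but the whole pattern can be enumerated in a handful of lines. A quick sketch (the season/suffix lists are my own illustrative guesses, not from any real cracking tool):

```python
from itertools import product

seasons = ["Spring", "Summer", "Autumn", "Winter"]
years = range(2015, 2026)
suffixes = ["!", "!!", "1!"]

# Every candidate passes "8+ chars, capital letter, digit, special char"
# complexity checks, yet the entire search space is tiny.
candidates = [f"{s}{y}{x}" for s, y, x in product(seasons, years, suffixes)]
print(len(candidates))  # 4 seasons * 11 years * 3 suffixes = 132 guesses
```

An attacker who knows the rotation schedule just shifts the year forward and tries again.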
Even Microsoft has stopped recommending regular password changes. I think password changes can certainly be necessary, for example when problems are found during an audit or when there are indications of abuse, but these old rules are making everyone's lives so much harder than they need to be. I hope the IRS will reconsider soon.
Google's method is quite advanced (different tiers of trust for different kinds of services). It makes total sense that you can search the web using an old session, but need to redo the whole 2FA flow if you want to change your password or recovery options. Unfortunately, working such a system out can be quite a challenge because it's hard to get the API segregated into the right trust levels without massively complicating the code flow.
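The tiered-trust idea above can be sketched in a few lines. This is a hypothetical toy model, loosely inspired by the behaviour described; the level names and action mapping are made up for illustration, not Google's actual design:

```python
from enum import IntEnum

class Trust(IntEnum):
    ANONYMOUS = 0
    OLD_SESSION = 1   # long-lived cookie: fine for low-risk actions
    FRESH_2FA = 2     # recently completed 2FA: required for sensitive ones

# Minimum trust level required per action (illustrative mapping).
REQUIRED = {
    "web_search": Trust.OLD_SESSION,
    "change_password": Trust.FRESH_2FA,
    "change_recovery_options": Trust.FRESH_2FA,
}

def authorized(action: str, session_level: Trust) -> bool:
    """An old session can search, but must redo 2FA for sensitive actions."""
    return session_level >= REQUIRED[action]
```

The hard part mentioned above is exactly the `REQUIRED` table: deciding, for every API endpoint, which tier it belongs to without tangling the code flow.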
There are two risk classes: reused passwords exposed by a breach, and targeted attacks (phishing, dictionary/brute-force attacks, etc.). The first is easiest to detect by finding password dumps and by observing login attempts. Targeted attacks are best prevented by 2FA. There never was any middle ground where password rotation improved security. Any high-security system worried about insider risk or espionage should have been using multi-factor authentication all along.
My own view: reused passwords come from leaks, so there should be a penalty of $100 per leaked password - come on, salt and hash them! This would at least put some pressure on that side. Class actions allowed. This would push more towards OAuth etc.
Then for targeted attacks, allow non-SMS two-factor with multiple keys and recovery codes. I use non-SMS two-factor on Google with recovery codes in a drawer. I have never changed my password and actually have it memorized (and I only use it for Google). Same password for 15+ years or so now. Feel totally secure. Google Authenticator on the phone is pretty good because people really keep track of their phone (more so than YubiKeys). I have a YubiKey on my keychain which works 90% of the time (a bit awkward in some cases).
Reused passwords are also really common and so cracking new dumps of salted and hashed passwords will yield a pretty high success rate.
At this point I just assume that any password that's been leaked (hashed or not) is in plaintext in some database. Obviously 20-character random passwords aren't going to get reversed but there's no guarantee that they were always hashed and weren't leaked from the login process itself, etc.
Ouch that sounds painful. If I'm not mistaken, all/most Americans have to interact with the IRS regularly? So this is an issue for many of you? By that I mean as a Brit who is salaried (PAYE) and doesn't own a business I have never had to interact directly with HMRC so even if it was as bad (it's not) it would be an infrequent experience.
This primarily affects professionals dealing with the IRS.
Individuals have been migrated a few times and a few different logins.
The IRS had a "get transcript" service. It had things like super-secure passwords and password rotations, but password reset and setup could be done with a social security number plus some really basic info from credit reports (i.e., where did you live, etc.) and didn't time out.
So think hundreds of thousands of fake accounts for the hackers, and pain for the real users.
That's pretty common in the US for govt systems - the password reset process is often ridiculously easy because some systems have so many reset requests you can't function with anything careful.
Imagine folks in govt: 10 systems, 90-day password rollover, and there was a move for a while to 12-character passwords with no reuse and upper / lower / special / numbers (but special characters are limited, so password generators often error out). It got so bad there was one reset process that was outsourced to a third party AND all you had to provide was the username, which was derived from the user's full name. They then gave you a new password over the phone. It was honestly easier to reset than to even fight the system. You have a new intern who's forgotten their password? IT just calls the reset help desk for a new one.
The security problems in all this are:
1) the reset process is so weak
2) everyone - and I mean everyone - writes these passwords down in a text file on their computer
3) because new account setup can be ridiculously long, there's a fair bit of password sharing, so these passwords tend to end up all over the place (training documents etc.), which then of course end up online somewhere.
> It seems quite silly to me to enforce a massive MitM attack while at the same time sticking to the FIPS standards.
Well they're two different things. One is an often government-mandated security standard. The other is a business requirement to be able to audit network traffic, which is also often a government-mandated requirement (due to regulations, due diligence, contractual requirements, etc).
People making tech stuff very often forget that the entire world does not work based on "technical best practices", it works on laws and contracts and customer/business requirements. In the real world there is often no perfect way to satisfy all requirements.
The reason the government wants FIPS is that it's been verified to be secure according to the national agencies. Enforcing that security and then putting all of your sensitive traffic in the hands of one key on one box directly contradicts the security requirements FIPS is intended to ensure.
I don't expect the government to have different departments work together around this stuff, but knowing the technical details, the end result is still impractical and stupid. The end result of stupid rules and requirements is that the real world application of technology is stupid, as we have probably all experienced one way or another during our lives.
Just because there's a real business need for something, doesn't stop that from being silly. Correcting the silliness is clearly not a technological challenge, we'll have to wait for politicians and managers to do that, but the end result is still a confusing and contradictory mess.
I don't have a lot of sympathy for the companies in this situation. If you want to MITM all your employees' traffic, then you accept the burden of dealing with stuff like this periodically.
Ha, good one. For the average company that breaks SSL, I expect something like this instead: "new corporate policy update: for security reasons, you're no longer allowed to visit HTTPS Web sites that use Let's Encrypt. If the Web site you want to visit still allows HTTP, that continues to be acceptable."
Maybe we just had a misunderstanding. What I was trying to say: Once this happens and everything breaks they will have an incentive to fix things quickly.
By no means do I expect vendors of "SSL inspection" devices to act any sooner than that.
They will just add an additional TLS proxy with a self-signed cert that ignores all validation. Security will be broken but users will be able to continue to do their work.
One reviewer's comment on a patch of theirs from 2 weeks ago:
"Plainly put, the patch demonstrates either complete lack of understanding or somebody not acting in good faith. If it's the latter[1], may I suggest the esteemed sociologists to fuck off and stop testing the reviewers with deliberately spewed excrements?"
vvv CID 1503716: Null pointer dereferences (REVERSE_INULL)
vvv Null-checking "rm" suggests that it may be null, but it has already been dereferenced on all paths leading to the check.
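For readers unfamiliar with Coverity's REVERSE_INULL: it flags code that dereferences a pointer and only afterwards checks it for null, making the check dead (or, worse, suggesting the earlier dereference is unsafe). A language-neutral sketch of the pattern, here in Python with `None` standing in for a null pointer (the names are made up for illustration):

```python
class Remap:
    def __init__(self, name: str):
        self.name = name

# Buggy ordering: rm is used before the None check, so the check can
# never catch a None argument -- the dereference raises first. This is
# the shape the REVERSE_INULL diagnostic flags.
def lookup_buggy(rm):
    name = rm.name        # dereference happens here
    if rm is None:        # dead check: already dereferenced above
        return None
    return name

# Fixed ordering: validate before use.
def lookup_fixed(rm):
    if rm is None:
        return None
    return rm.name
```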