I would still hope for it to translate most of the code with a couple of asm blocks. But maybe the density of them was too high and some heuristic decided against it?
Because unless your TTL is exceptionally long you will almost always have a sufficient supply of new users to balance. Basically you almost never need to move old users to a new target for balancing reasons. The natural churn of users over time is sufficient to deal with that.
Failover is different and more of a concern, especially if the client doesn't respect multiple returned IPs.
You are misunderstanding how HA works with DNS TTLs.
Now there are multiple kinds of HA, so we'll go over a bunch of them here.
Case 1: You have one host (host A) on the internet and it dies, and you have another server somewhere (host B) that's a mirror but with a different IP. When host A dies you update DNS so clients can still connect, but now they connect to host B. In that case the client will not connect to the new IP until their DNS resolver gets the new IP. This was "failover" back in the day. That is dependent on the DNS TTL (and the resolver, because many resolvers and caches ignore the TTL and use their own).
In this case a high TTL is bad, because the user won't be able to connect to your site for TTL seconds + some other amount of time. This is how everyone learned it worked, because this is the way it worked when the inter webs were new.
Case 2: instead of one DNS record with one host you have a DNS record with both hosts. The clients will theoretically choose one host or the other (round robin). In reality it's unclear if they actually do that. Anecdotal evidence shows that it worked until it didn't, usually during a demo to the CEO. But even if it did, that means that 50% of your requests will hit an X-second timeout as the clients try to connect to a dead host. That's bad, which is why nobody in their right mind did it. And some clients always picked the first host, because that's how DNS clients are sometimes.
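To make the timeout point concrete, here's a rough Python sketch of what a "well-behaved" client would have to do with multiple A records: try each returned address in turn with a short timeout. The hostname and port are placeholders; the point is that a dead host still costs each client a full timeout, and many real clients don't even do the fallback loop.

    import socket

    def connect_any(hostname, port, timeout=3.0):
        # getaddrinfo returns every address the resolver handed back
        for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
                hostname, port, type=socket.SOCK_STREAM):
            sock = socket.socket(family, socktype, proto)
            sock.settimeout(timeout)
            try:
                sock.connect(sockaddr)  # a dead host burns `timeout` seconds here
                return sock
            except OSError:
                sock.close()            # fall through to the next record
        raise ConnectionError("all addresses failed")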
Putting a load balancer in front of your hosts solves this. Do load balancers die? Yeah, they do. So you need two load balancers...which brings you back to case 1.
These are the basic scenarios that a low DNS TTL fixes. There are other, more complicated solutions, but they're really specialized and require more control of the network infrastructure...which most people don't have.
This isn't an "urban legend" as the author states. These are hard-won lessons from the early days of the internet. You can also not have high availability, which is totally fine.
I'm assuming OP means cloud-based load balancers (listening on public IPs). Some providers scale load balancers pretty often depending on traffic, which can result in a new set of IPs.
Being specific: AWS load balancers use a 60 second DNS TTL. I think the burden of proof is on TFA to explain why AWS is following an "urban legend" (to use TFA's words). I'm not convinced by what is written here. This seems like a reasonable use case by AWS.
Yes. Statistically the most likely time to change a record is shortly after previously changing it. So it is a good idea to use a low TTL when you change it, then after a stability period raise the TTL as you are less likely to change it in the future.
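As a toy sketch of that policy (the thresholds are made up, just to show the shape of it):

    # Illustrative only: low TTL right after a change, raise it as the
    # record proves stable. The specific numbers are arbitrary.
    def ttl_for_record(seconds_since_last_change):
        if seconds_since_last_change < 24 * 3600:
            return 60        # changed recently: keep the next change cheap
        elif seconds_since_last_change < 7 * 24 * 3600:
            return 3600      # settling period
        else:
            return 86400     # stable record: long TTL, fewer lookups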
I would really like to use XHTML. It would make my HTML emitter much simpler (as I don't need special rules for elements that are self-closing, have special closing or escaping rules and whatever else) and more secure.
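For a sense of what those special rules look like, here's a simplified sketch of the per-element cases an HTML serializer has to know about (the void-element list is from the HTML spec; raw-text handling is heavily simplified). In XHTML every element could be emitted the same way.

    from html import escape

    # Void elements: no closing tag, can never have children.
    VOID = {"area", "base", "br", "col", "embed", "hr", "img", "input",
            "link", "meta", "source", "track", "wbr"}

    # Raw text elements: contents must not be entity-escaped (simplified).
    RAW_TEXT = {"script", "style"}

    def emit(tag, attrs=None, children=()):
        attr_str = "".join(f' {k}="{escape(v, quote=True)}"'
                           for k, v in (attrs or {}).items())
        if tag in VOID:
            return f"<{tag}{attr_str}>"
        if tag in RAW_TEXT:
            body = "".join(children)   # must not contain "</script>" etc.
        else:
            body = "".join(escape(c) for c in children)
        return f"<{tag}{attr_str}>{body}</{tag}>"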
However no browsers have implemented streaming XHTML parsers. This means that the performance is notably worse for XHTML and if you rely on streaming responses (I currently do for a few pages like bulk imports) it won't work.
> no browsers have implemented streaming XHTML parsers
Dang, I hadn't considered this. That's something to add to the "simplest HTML omitting noisy tags like body and head vs going full XHTML" debate I have with myself.
One for XHTML: I like that the parser catches errors, it often prevents subtle issues.
But they didn't really stay single primary. They moved a lot of load off to alternate database systems. So they did effectively shard, but to different database systems rather than to more Postgres.
Quite possibly they would have been better off staying purely Postgres but with sharding. But impossible to know.
The short-lived requirement seems pretty reasonable for IP certs, as IP addresses are often rented and may bounce between users quickly. For example if you buy a VM on a cloud provider, as soon as you release that VM or IP it may be given to another customer, while you still hold a valid certificate for that IP.
6 days actually seems like a long time for this situation!
Yes, in the same way that Fortran is faster than C due to stricter aliasing rules.
But in practice C, Rust and Fortran are not really distinguishable on their own in larger projects. In larger projects things like data structures and libraries are going to dominate over slightly different compiler optimizations. This is usually Rust's `std` vs `libc` type stuff or whatever foundational libraries you pull in.
For most practical purposes, Rust, C, C++, Fortran and Zig have about the same performance. Then there is a notable jump to things like Go, C# and Java.
> In larger projects things like data structures and libraries are going to dominate over slightly different compiler optimizations.
At this level of abstraction you'll probably see on average an effect based on how easy it is to access/use better data structures and algorithms.
Both the ease of access to those (whether the language supports generics, how easy it is to use libraries/dependencies), and whether the population of algorithms and data structures available are up to date, or decades old, would have an impact.
This makes it better but not solved. Those tokens do unambiguously separate the prompt and untrusted data, but the LLM doesn't really process them differently. It has just been reinforced to prefer following instructions from the prompt text. This is quite unlike SQL parameters, where it is completely impossible that they ever affect the query structure.
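For contrast, here's what the SQL side of that analogy looks like (Python's built-in sqlite3, table and values made up): the untrusted value is bound after the statement is parsed, so it can never change the query structure.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")

    untrusted = "x'; DROP TABLE users; --"
    # The placeholder keeps this a single SELECT no matter what the value is.
    rows = conn.execute("SELECT * FROM users WHERE name = ?", (untrusted,)).fetchall()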
The problem with SIMs is that they aren't just credentials and config. They are full applications. Imagine if you needed to run a custom program to connect to every wifi network. It is bonkers. It is absurdly complex and insecure.
A "SIM" should just be a keypair. The subscriber uses it to access the network.
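A sketch of what that could look like, just to illustrate the proposal (Python with the `cryptography` package; this is not how real SIM/AKA authentication works, which uses a shared secret plus operator applets):

    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    subscriber_key = Ed25519PrivateKey.generate()    # lives on the device
    registered_pubkey = subscriber_key.public_key()  # operator stores this at signup

    challenge = os.urandom(32)                       # network sends a random challenge
    signature = subscriber_key.sign(challenge)       # device signs it

    try:
        registered_pubkey.verify(signature, challenge)  # network checks against its copy
        print("subscriber authenticated")
    except InvalidSignature:
        print("rejected")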
It’s more complicated because it has to include logic about which network to connect to and how to tunnel back to the original provider (or partner) while roaming.
So it’s more like: which network to connect to, keys, fallback network selection logic and tunnel logic to get authorisation on a non-home network
That's a good point. That is what I meant by "and config" in my first sentence.
IIUC if the keypair was a certificate with a few other fields, foreign networks could give you some basic communication with your provider and decide if you should be allowed to use their network and if/how to tunnel you back to the home network.
But the main point is that it should just be data that the user can port around to different devices as they see fit and that they can trust not to do malicious things.
It’s not just config though (unless you consider logic to be config). When you’re roaming, the sim applet has to generate a path back to its home network based on request/responses with the networks it can see and their partners (and their partners’ partners etc.)
It’s effectively multi-hop peer discovery and I don’t think you can encode the general case logic for it as just config.
Edit: as a (rather niche) example, FirstNet sims run a different applet to AT&T sims despite nominally running on the same network, because they have special logic to use more networks if they are in an emergency area.
So for people who don't plan to roam, what's the point of a SIM card (embedded or not)? Credentials and a few lines of config should be enough. Do the carriers benefit when users use a SIM card?
Do you have any more details on this? I always thought that once the PDP context is established (which is based on the phone providing an APN and optional credentials, not the SIM), the "tunneling" (if any - local breakout is a thing apparently) is handled by the network and is completely transparent and invisible to the phone.
> It's certainly better than calling everything a div.
It's not. For semantic purposes <my-element> is the same as <div class=my-element>. So on the surface they are equivalent.
But if you are in the habit of using custom elements then you will likely continue to use them even when a more useful element is available, i.e. <my-aside> rather than <aside class=my-aside>. So in practice it is probably worse, even if theoretically identical.
Basically divs with classes provide no semantic information but create a good pattern for using semantic elements when they fit. Using custom elements provides no semantic information and makes using semantic elements look different and unusual.
> But if you are in the habit of using custom elements then you will likely continue to use them even when a more useful element is available
This article is written for web developers. I’m not sure who you think you are addressing with this comment.
In any case - the argument is a weak one. To the extent people make the mistake you allege, they can make it with classed div and span tags as well, and I've seen this in practice.
That is a strawman. I never said everyone who uses classes perfectly uses semantic elements.
My point is that if you are using <div class=my-element> you don't have to change your .my-element CSS selector or JS selection code to improve your markup to <p class=my-element>. If you are using <my-element> it is much more work to change your selectors, and now you have two ways of doing things depending on whether you are using a native semantic element or a div (either a tag selector or a class selector). You have made your styling code depend on your element choice, which makes it harder to change.