Hacker News | eqvinox's comments

I heard there are some people working on a system that allows you to use names, but it seems to be very poorly designed and the cause of a lot of outages.

> how is that to do with Tesla manufacturing standard?

Unless further data/evidence is provided, it is reasonable to assume all car owners treat their cars equally shittily; that factor applies to every manufacturer equally and can be ignored in this equation.


Exactly. I don't understand the focus on VW here. That wasn't the point of my original post at all.

Tesla didn't even recognize the inspection failures in Denmark as real at first, so it's probably fair to assume that they're only now trying to sort out the problems on new cars, and that we'll see many more Teslas failing inspections in the coming years, even among cars sold up to this day.


I haven't worked at sendmail or even anything e-mail related, and I can do that… just enough e-mail fixing as side work. Let's call it sysadmin calluses.

What made me stumble recently was having to talk LMTP to fix a mailman setup. Cheeky fuckers changed EHLO into LHLO for LMTP. (To avoid any mixups between the protocols, which is fair.)
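Amusingly, Python's standard library encodes exactly this quirk: its LMTP class is just SMTP with the greeting verb swapped out. A quick sketch (nothing here beyond the stdlib):

```python
import smtplib

# RFC 2033 renames EHLO to LHLO for LMTP; the stdlib reflects this by
# overriding the greeting verb on the LMTP subclass of SMTP.
print(smtplib.SMTP.ehlo_msg)  # ehlo
print(smtplib.LMTP.ehlo_msg)  # lhlo
```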


Basically NAT64. (Teredo requires the IPv4 hosts to have awareness of it, this doesn't.)

Everything is a container these days, and yet somehow the collective we doesn't manage to have AI agents run in a container layer on top of our current work, so we can later commit or roll back?
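Not a container layer, but a cheap approximation of the commit/rollback idea using plain git worktrees. A hedged sketch; the paths and branch names are made up, and this doesn't sandbox anything, it only makes the agent's file changes disposable:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git -c user.name=demo -c user.email=demo@example.invalid \
    commit -q --allow-empty -m init

# hand the agent a disposable checkout on its own branch:
git worktree add "$repo-scratch" -b agent/attempt-1

# ... let the agent edit files in "$repo-scratch", then review the diff ...

# keep it:  git merge agent/attempt-1
# toss it:
git worktree remove --force "$repo-scratch"
git branch -q -D agent/attempt-1
```

Real isolation (network access, files outside the repo) still needs the container on top.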

I feel like if I ever used an agentic AI, that's how I'd need it to be done. Too many cases of AIs getting access to files they shouldn't. But then again, how do I allow it to look things up online without sending all my code to some scammer who prompt-injected a tutorial? I don't think I'll ever trust it with anything proprietary or otherwise less than publicly available.

What do you mean?! Where? I would claim otherwise: 99% of software is not in containers. Like practically all Windows or Debian software.

The squishy side.

Coincidentally, I think that's an overestimate of the number of devices that don't support IPv6. At this point, vendors have to go out of their way to disable IPv6, and they lose out on some government/enterprise tenders that require IPv6 even if those customers aren't running it (yet).


Right, IPv6 is baked into every OS network stack these days, so it’s up to developers to use it.

They clearly haven't talked to a telco or network device vendor; those would've sold them a VRF/EVPN/L3VPN-based solution… for a whole bunch of money :)

You can DIY that these days though, plain Linux software stack, with optional hardware offload on some specific things and devices. Basically, you have a traffic distinguisher (VXLAN tunnel, MPLS label, SRv6, heck even GRE tunnel), keep a whole bunch of VRFs (man ip-vrf) around, and have your end services (server side) bind into appropriate VRFs as needed.
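A rough sketch of what the DIY version looks like with plain iproute2. All names, IDs and addresses here are made up, and it needs root, so treat it as a config fragment rather than something to paste:

```shell
# one VRF per customer, bound to its own routing table:
ip link add vrf-cust42 type vrf table 1042
ip link set vrf-cust42 up

# VXLAN as the traffic distinguisher, enslaved to the VRF:
ip link add vxlan-cust42 type vxlan id 1042 dstport 4789 local 192.0.2.1
ip link set vxlan-cust42 master vrf-cust42 up

# server-side service binds into the customer's VRF (see man ip-vrf):
ip vrf exec vrf-cust42 /usr/sbin/sshd -D
```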

Also, yeah, with IPv6 you wouldn't have this problem. Regardless of whether it's GUAs or ULAs.

Also-also, you can do IPv6 on the server side until the NAT (which is in the same place as in the article), and have that NAT be a NAT64 with distinct IPv6 prefixes for each customer.


I like to think this is what we did. It's a simple Linux software stack - Linux, nftables, WireGuard, Go... But the goal was also to make it automatic and easy to use. It's not for my Mom. But you don't need a CCNP either. The trick is in the automation and not the stack itself.

The key distinction with a L3VPN setup is that the packets are unmodified from and including the IP layer upwards, they're just encapsulated/labelled/tagged (depending on your choice of distinguisher). That encapsulation/… is a stateless operation, but comes at the cost of MTU (which in your case should be a controllable factor since the inner flows don't really hit uncontrolled devices.) Depending on what you're trying to do, the statelessness can be anything from useless to service critical (the latter if you're under some risk of DoS due to excessive state creation). It can also alleviate NAT problems, e.g. SIP and RTP are "annoying" to NAT.

(ed.: To be fair, 1:1 NAT can be almost stateless too, that is if your server side ["Technician"] can be 1:1 mapped into the customer's network, i.e. the other direction. This only works if you have very few devices on "your" side, depending on how many IPs you can grab on the customer network.)

The IPv6/NAT64 approach meanwhile is very similar to what you did, it just gets rid of the need to allocate unique IP addresses to devices. The first 96 bits of the IPv6 address become a customer/site ID, the last 32 bit are the unmodified device IPv4 address.
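That mapping is mechanical enough to sketch in a few lines of Python; the prefixes below are made-up documentation addresses, not anything from the thread:

```python
import ipaddress

def embed_ipv4(customer_prefix: str, device_v4: str) -> ipaddress.IPv6Address:
    """Put the device's IPv4 address into the low 32 bits of a per-customer /96."""
    net = ipaddress.IPv6Network(customer_prefix)
    if net.prefixlen != 96:
        raise ValueError("customer prefix must be a /96")
    # indexing the network adds the IPv4 address as an integer offset
    return net[int(ipaddress.IPv4Address(device_v4))]

# two customers with the same internal 192.168.1.10 get distinct mapped addresses:
print(embed_ipv4("2001:db8:0:1::/96", "192.168.1.10"))  # 2001:db8:0:1::c0a8:10a
print(embed_ipv4("2001:db8:0:2::/96", "192.168.1.10"))  # 2001:db8:0:2::c0a8:10a
```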


10. is a /8 (24 payload bits), 172.16 is a /12 (20 bits) and 192.168 is a /16 (16 bits). In practice there's very little need to spend more than 18 bits of space per customer to map every private IPv4 address actually in use. Probably also fewer than 14 bits (16k) of customers to service.
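For reference, the payload-bit math, quickly checked with Python's ipaddress module:

```python
import ipaddress

# host bits and total addresses for each RFC 1918 block
for cidr in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"):
    net = ipaddress.IPv4Network(cidr)
    print(f"{cidr}: {32 - net.prefixlen} payload bits, {net.num_addresses} addresses")
# 10.0.0.0/8: 24 payload bits, 16777216 addresses
# 172.16.0.0/12: 20 payload bits, 1048576 addresses
# 192.168.0.0/16: 16 payload bits, 65536 addresses
```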

There are more special-use addresses I didn't know about offhand but found when looking up the 'no DHCP server' autoconf IP address range (link-local IPv4, 169.254.0.0/16).

https://en.wikipedia.org/wiki/IPv4#Special-use_addresses


That's all true on a statement level, but doesn't make an IPv4:IPv4 NAT solution better than either VRF/encap or IPv6 mapping.

The benefit with VRF/encap is that the IPv4 packets are unmodified.

The benefit with IPv6 mapping is that you don't need to manage IPv4:IPv4 tables and have a clear boundary of concerns & zoning.

In both cases you don't give a rat's ass which prefixes the customer uses; that math/estimation you're doing there is just not needed.


The problem with talking to a telco is that you have to talk with not just one, but any telco your customer may use. And if there are multiple routers between the cameras and that telco router at the customer location, it’s a shitshow trying to configure anything.

Much easier to drop some router on site that is telco neutral and connect back to your telco neutral dc/hq.


The Metro Ethernet Forum standardized a lot of the services telcos can offer, many years ago.

No good when the upstream is some wifi connection provided by the building management, rather than a telco themselves.

May as well pick a single solution that works across all Internet connections and weird setups, and be an expert in that, versus having to manage varying network approaches based on telco presence, local network equipment, operating country, etc.


That's all true, but you can also, you know, like, talk to people without buying your whole solution from them :)

(btw, have you actually read past the first 7 words? I'm much more interested in what people think about the latter parts.)


On the latter parts: VRF in my scenarios won’t scale.

Need to provide support access to 10k-50k locations, all with the same subnet (industry-standard equipment where the vendor mandates specific IP addressing, for better or worse). They are always feeding data into the core too.

Much easier to just VPN+NAT.


That is a valid point. Though I would probably first check what the scaling limits on VRFs actually are; there was some netdev work a while back to fix scaling with 100k to 1M devices (a VRF is a device, though also a bit more than that). It's only the server ("technician") that needs to have all of these (whether that helps depends on the setup); intermediate devices just need to forward without looking at the tags, and the VPN entry point only cares about its own subset of customers.

I'd probably use the IPv6 + NAT64 setup in your situation.


https://chaos.social/@equinox/111752488503367272

(disclaimer: shitpost. my shitpost.)


That, and ASPA, and https://manrs.org/

> It gives you a cheap way to preflight what will happen when you make a globally impacting config change.

Your "1-minute flap" can propagate and trigger load on every single DFZ BGP router on the planet. That's not cheap.

And 1 minute is too short to even propagate across carriers. There are all kinds of timers working to dampen exactly this churn; your update can still be propagating half an hour later. It can also leave behind state that changes behavior when you do it for real. And worst of all, BGP routes can get stuck. It's rare, but a real problem.


Ok. 5 minutes. The point is that there are clearly route changes happening globally already. It should not be that much extra work to add, like, 10% more route changes (again: you’d batch the new route advertisements in one cohort rather than updating each individual route back and forth).

And stuck routes are a problem but not one this would make worse since those routes would get stuck from normal changes anyway.

The propagation problem isn’t real, because clearly the route advertisements that handle most of the traffic actually propagate quickly. You shouldn’t care about the long tail; you want to minimize the risk of your new route. The old route being present isn’t a problem, and the new route disappearing back to the old also shouldn’t be a problem UNLESS the new route was buggy, in which case you wanted to roll back anyway.

TLDR: these don’t feel like risks unique to advertising a route and then undoing it, given that route publishing already has to be handled anyway AND Cloudflare is a major Tier 1 ISP that handles a good chunk of the entire internet’s traffic. This isn’t about a strategy for some random tier 2/3 ISP.


> This isn’t about a strategy for some random tier 2/3 ISP.

That's not a constraint you mentioned in your original post.

> Ok. 5 minutes. The point is clearly there’s route changes happening globally already. It should not be that much extra work to add like 10% more route changes […]

I see you haven't had to deal with the operational reality of devices handling things they weren't quite designed for, and/or have been overdue for replacement, and/or were just designed to the limit to begin with. Good for you. But your solution would affect the entire internet.

If you're serious, you could try posting your suggestion to the NANOG or RIPE mailing lists. At the very least you'll probably learn a whole new set of expletives and curses… but I'd recommend against it.

