
Everyone was locked out of a building I am staying at (40-something stories) for several hours. When I asked the concierge if I could have a look at the system, it turned out they had none. The whole thing communicated with AWS for some subscription SaaS that provided them with a front-end to register/block cards. And every tap anywhere in the building (elevators/doors/locks) communicated back with this system hosted on AWS. Absolute nightmare.




I wonder what happened to the building when us-east-1 went down.

As the parent said: “Everyone was locked out of a building I am staying at (40-something stories) for several hours.”

Now I am waiting for the time when they move us-east-1's physical security to run in us-east-1... thus locking themselves out whenever a physical intervention on the servers is needed to bring things back up.

Facebook already got bit by this when their BGP setup pooped its pants on Oct 4, 2021; employees reportedly couldn't badge into the offices and data centers to fix it.


I wonder what happened to the building when the internet went down. How do you get into the room to reboot the router?

There’s usually a back door with a physical key. The problem can be getting ahold of one of the people with that key though!

There is probably a break-glass procedure for such cases, like, break the literal window.

A lot of modern glass is hard to break. In many cases this is a safety feature (if you can't break the glass you can't get shoved out the window in a fight...)

Is that why there is a brick next to the procedure manual?

That’s the emergency escape brick.

This is in SEA (Southeast Asia). They probably operate out of ap-southeast-1 or -2. But yeah, if the internet goes down, the provider's service goes down, or AWS goes down, they are cooked.

> Absolute nightmare.

Yes, but still probably a million times easier for both the building management and the software vendor to have a SaaS for that, than having to buy hardware to put somewhere in the building (with redundant power, cooling, etc.), and have someone deploy, install, manage, update, etc. all of that.


Easier, maybe. But significantly worse. Parts of these systems have been built and engineered to be entirely reliable, with automatic failover when some component fails or alternative routing when some connection is lost.

>than having to buy hardware to put somewhere in the building (with redundant power, cooling, etc.), and have someone deploy, install, manage, update, etc. all of that.

You don't need any of that. You need one more box in the electrical closet and one password-protected WiFi network for all the crap in the building (the actual door locks and the like) to connect to.


And when that box fails, you're looking at how long with no access? Longer than any AWS outage.

The IT guy walks in and replaces/restarts the box instead of waiting for the gods of AWS to descend to earth and restart theirs. They have direct control vs. waiting for something magic to happen.

You also have real-time ETAs from an actual human local to the issue. Plenty of domains where your clients won't care if AWS is down for everyone.

The building has an onsite IT guy with enough spares to fix anything that could go wrong with the box?

Have you ever actually seen these systems in person? It's usually a microcontroller which already rules out a ton of stuff you're talking about. Serious places will buy 2-3 of them at the time of installation to have some spares. The ones here are "user-replaceable" as well (unplug these three cables, replace the box, plug them back in). It's not some mysterious bunch-of-wires-on-arduino-pins magic box that nobody dares to touch.

The one at my previous office even had centralized management through an RS232 connection to a PC. No internet and related downtime at all. And I don't recall us ever being locked out because of that.


If you buy hardware from HID Global / Assa Abloy the box never breaks.

It's absolutely possible to have both a SaaS-based control plane and continued functioning if the internet connection/control plane becomes unavailable for a period. There's presumably hardware on site anyway to forward requests to the servers doing access control; it wouldn't be difficult to have that hardware keep a local cache of the current configuration. Done that way, you might find you can't change who's authorised while the connection is unavailable, but you can still let people who were already authorised into their rooms.
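
Something like this sketch, in Python (the endpoint, paths, and JSON shape are all made up, not any vendor's actual API):

    import json
    import urllib.request

    CONTROL_PLANE = "https://access-saas.example/api"  # hypothetical SaaS endpoint
    CACHE_PATH = "/var/lib/doorctl/allowlist.json"     # local last-known-good copy

    def refresh_cache():
        # Pull the current allowlist whenever the control plane is reachable,
        # so offline decisions stay reasonably fresh.
        with urllib.request.urlopen(f"{CONTROL_PLANE}/allowlist", timeout=5) as r:
            data = r.read()
        with open(CACHE_PATH, "wb") as f:
            f.write(data)

    def is_authorized(card_id: str) -> bool:
        try:
            # Happy path: ask the control plane for a live decision.
            url = f"{CONTROL_PLANE}/check/{card_id}"
            with urllib.request.urlopen(url, timeout=2) as r:
                return json.load(r)["allowed"]
        except OSError:
            # Internet or control plane down: fall back to the cached copy.
            # You can't change who's authorised while offline, but people
            # who were already authorised still get in.
            with open(CACHE_PATH) as f:
                return card_id in json.load(f)["cards"]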

> with redundant power, cooling, etc

The doors the system controls don't have any of this. Hell, the whole building doesn't have any of this. And it definitely doesn't have redundant internet connections to the cloud-based control plane.

This is fear-mongering when a passively cooled PC running a container image on boot will suffice plenty. For updates, run a script on boot and at regular intervals that pulls down the latest image, with a 30s timeout in case it can't reach the server.
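
A hedged sketch of that update loop (image name and interval are made up):

    import subprocess
    import time

    IMAGE = "registry.example/door-system:latest"  # hypothetical image name

    def try_pull():
        try:
            # 30s cap: if the registry is unreachable, give up and keep
            # running whatever image is already on disk.
            subprocess.run(["docker", "pull", IMAGE], timeout=30, check=True)
        except (subprocess.TimeoutExpired, subprocess.CalledProcessError):
            pass  # offline or pull failed; try again next interval

    if __name__ == "__main__":
        while True:
            try_pull()
            time.sleep(6 * 3600)  # re-check a few times a day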


What updates? That would be on a local network and have no internet connection, if done right.

I am guessing the main attraction of such a system is that owners can manage the cards remotely and get data about usage (i.e. who accessed what, and when).

And? That doesn't mean, especially for a system with security impact (like door access), that it should never be updated.

You know what else would suffice plenty? Physical keys and mechanical locks. They worked (and still work) without electricity. The tech is mature and well-understood.

The reason for moving away from physical keys is that key management becomes a nightmare; you can't "revoke" a key without changing all the locks, which is an expensive operation and requires distributing new keys to everyone else. Electronic access control solves that.
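
A toy illustration of the difference (IDs made up): revoking electronic access is one line against a list, while revoking a physical key means rekeying every lock it opened and reissuing keys to everyone else.

    # Hypothetical card IDs; this set is the whole "database".
    authorized = {"card-1001", "card-1002", "card-1003"}

    def may_enter(card_id: str) -> bool:
        return card_id in authorized

    # A tenant loses a card: revoke it instantly, no locksmith involved.
    authorized.discard("card-1002")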

You might find Matt Blaze's paper on vulnerabilities in master-keyed physical locks interesting:

https://eprint.iacr.org/2002/160.pdf


Those devices can be trivially power cycled, and won’t have as many issues with dodgy power. Some PC somewhere with storage is a bigger problem.

> Some PC somewhere with storage is a bigger problem

Both an embedded microcontroller and a PC have storage. The reason you can power-cycle a microcontroller at will is that its storage is read-only, with only a specific portion dedicated to state being writable (and the device can be reset if that ever gets corrupted).

Use a buildroot/yocto image on the PC with read-only partitions and a separate state partition that the system can rebuild on boot if it gets corrupted and you'll have something that can be power-cycled with no issues. Network hardware is internally often Linux-based and manages to do fine for exactly this reason.
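
A minimal sketch of that boot-time check, assuming a read-only root plus a small writable state partition (paths and schema are made up):

    import json
    import os

    STATE_FILE = "/var/state/doorctl.json"  # hypothetical writable state partition
    DEFAULTS = {"schema": 1, "cards": []}

    def load_or_rebuild() -> dict:
        try:
            with open(STATE_FILE) as f:
                state = json.load(f)
            if state.get("schema") != 1:
                raise ValueError("unknown schema")
            return state
        except (OSError, ValueError):
            # json.JSONDecodeError is a ValueError, so corrupted state lands
            # here too: rebuild from defaults, like a microcontroller falling
            # back to factory config after a reset.
            os.makedirs(os.path.dirname(STATE_FILE), exist_ok=True)
            with open(STATE_FILE, "w") as f:
                json.dump(DEFAULTS, f)
            return dict(DEFAULTS)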


PCs are orders of magnitude more complex, with a lot more to break. Sounds like a whole lot of work for… what?

Assuming the internet connection and AWS work of course. Which they won’t always, then oops.


A large number of "embedded microcontrollers" are just small PCs running Yocto Linux, configured as the GP said. You can save money with a $0.05 microcontroller, but in most cases the development costs to make that entire system work are more than just buying an off-the-shelf Raspberry Pi.

If you're relying on AWS you either way have a "PC" to relay communication between AWS and the keycard readers & door latches.

There are IoT libraries that don’t require that.

It's also easier to keep all the water for fighting fires in remote trucks than to run high-pressure water pipes to every room's ceiling, with special valves that only open when exposed to high heat. Imagine the overhead costs!

Cooling for a card access system?

A card access system requires zero cooling; it's a DC power supply or AC transformer and a microcontroller that fits in a small unvented metal enclosure. It requires no management other than activating and deactivating badges.

There is no reason to have any of the lock and unlock functionality tied to the cloud, it’s just shitty engineering by a company who wants to extract rent from their customers.


The server running that system needs cooling, yes. You can't just shove it in a closet with zero thought and expect it to not overheat/shut down/catch fire, unless you live in the Arctic.

That is, in fact, exactly what we typically see in reality with local access control system head-ends.

At the doors, there might be keycards, biometrics and PINs (oh my!) happening.

But there's usually just not much going on, centrally. It doesn't take much to keep track of an index of IDs and the classes of things those IDs are allowed to access.
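
The central state really can be that small. A toy version (names made up):

    # Index from badge ID to the door classes that badge may open.
    acl = {
        "badge-17": {"lobby", "floor-12"},
        "badge-42": {"lobby", "floor-12", "server-room"},
    }

    def allowed(badge_id: str, door_class: str) -> bool:
        return door_class in acl.get(badge_id, set())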


I have a little fanless mini PC that runs various stuff around my house, including homeassistant. The case is basically a big heat sink.

It started crashing during backups.

The solution was to stick a fan on it. :( This is literally a box _designed to not need a fan_. And yet. It now has a fan and has been stable for months. And it's not even in a closet - it's wall-mounted with lots of available air around it.


I'm guessing it's the HDD that's failing. Had such mysterious failures with my NVR (the Cloud Key thingie) from UniFi. Turns out, HDDs don't like operating in 60+ degree Celsius heat all the time - but SSDs don't mind, so fortunately the fix was just to swap the drive for a solid state one.

I think it was the DRAM on mine, oddly. It already uses an nvme ssd. Could have been the CPU, of course - the error was manifesting as memory corruption but that could well have been happening during read or write.

You must be young. We used to have handhelds and computers with no cooling at all.

>You can't just shove it in a closet with zero thought and expect it to not overheat/shut down/catch fire

Actually in almost all products meant for real companies doing real work, this is an explicit design requirement.

Every cash register runs off of a computer that sits in a tiny metal oven with no cooling and is expected to run 24/7 without fail.

The difference between a tech gadget and a real world, real purpose appliance.


There are card access systems that don't require a computer, just a microcontroller. Perhaps if you need to integrate multiple sites or a backend system for access-control rules you can add computers, but card access systems are dead-ass simple for a reason: they need to be reliable. The good systems that have computers still allow access in the event of a network failure.

Any access control system that fails in the event that it loses internet connectivity is poorly designed.


You're saying that as if we never had Z80-based microcontrollers doing all this without problems. Complete with centralized control and all.

The system was not built with resiliency in mind, with no care or consideration for what a shit-show would unfold once the system or the link went down. I wonder if exit is regulated (you can still fully exit the building from any point using the green buttons, and I think those are supposed to work even if electricity is down).

> Yes, but still probably a million times easier for both the building management and the software vendor to have a SaaS for that, than having to buy hardware to put somewhere in the building (with redundant power, cooling, etc.)

An isolated building somewhere in the middle of the jungle, dependent for its operation on some American data center hundreds of miles away, is simply negligence. I am usually against regulations, but clearly for certain things we can't trust that all humans will be reasonable.


In the US, the answer is that exit would have to work in the event that AWS is down or power is out. Some exceptions exist for special cases.


