Hacker News | danielheath's comments

I did a similar thing with a regular backlit computer screen.

It automatically shuts off after 30 seconds of inactivity.

I added a $3 webcam and use OpenCV to detect motion. If three consecutive frames (sampled 0.5s apart) are each sufficiently different from the previous one, it attaches a virtual USB mouse, then moves it one pixel.

This wakes up the display whenever you walk past, then puts it back to sleep again when you stop moving.

The motion-detection pipeline uses less than 0.3% CPU on an Intel N100 (6W TDP).
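A sketch of that frame-differencing check, in plain numpy for clarity (in the real setup cv2.VideoCapture would supply the grayscale frames; the threshold values here are illustrative guesses, not the ones actually used):

```python
import numpy as np

# Assumed tuning values, not the author's:
PIXEL_THRESHOLD = 25       # per-pixel brightness change to count as "changed"
MIN_CHANGED_PIXELS = 500   # changed pixels needed to call a frame "different"

def frame_differs(prev, curr):
    """True if curr differs sufficiently from prev (uint8 grayscale frames)."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return int((diff > PIXEL_THRESHOLD).sum()) > MIN_CHANGED_PIXELS

def motion_detected(frames, consecutive=3):
    """True if `consecutive` successive frames (sampled ~0.5s apart)
    each differ from the one before, per the scheme described above."""
    streak = 0
    for prev, curr in zip(frames, frames[1:]):
        streak = streak + 1 if frame_differs(prev, curr) else 0
        if streak >= consecutive:
            return True
    return False
```

When `motion_detected` fires, the virtual USB mouse nudge wakes the display; when it stops firing, the normal inactivity timeout puts it back to sleep.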


You can probably just use a cheap motion sensor instead of the webcam if you want to. There are so many now.

Found a few AliExpress sellers offering LD2410Cs, but all cost 30% more than the webcam I used.

If you have some to suggest, I'd love to hear it... TIA!

Something like the LD2410 [0]. IIRC there are newer ones that report accurate position and even heart rate, but I've forgotten their names.

[0] https://dronebotworkshop.com/ld2410c-human-sensor/


Here’s one

https://thepihut.com/products/60ghz-mmwave-breathing-and-hea...

Same kind of tech but higher frequency.


> The MR60BHA2 is a 60GHz wave sensor that detects breathing and heartbeat patterns. Using its radar technology, it can monitor vital signs without direct contact, even through materials like clothing or bedding. You can use it for sleep monitoring, health assessments, and presence detection.

This is kind of crazy; I had no idea this was a thing. And here I have PIR sensors all over the place and hacks around those; this definitely sounds much better. Besides being more expensive and having weaker range, are there any drawbacks to using it for motion sensing?


What's your budget? https://en.tokyodevices.com/items/128

But seriously you can probably DIY something a lot cheaper.


Section 230 is an obvious place to say “if you decide something is relevant to the user (based on criteria they have not explicitly expressed to you), then you are a publisher of that material and are therefore not a protected carriage service.”

SERIALIZABLE is really quite hard to retrofit to existing apps; deadlocks, livelocks, and “it’s slow” show up all over the place when you switch it on.

Definitely recommend starting new codebases with it enabled everywhere.


Do you have examples of deadlocks/livelocks you've encountered using SERIALIZABLE? My understanding was that the transaction will fail on conflict (and should then be retried by the application - wrapping existing logic in a retry loop can usually be done without _too_ much effort)...
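The retry loop described above can be sketched like this (the `SerializationFailure` class here is a stand-in for whatever your driver raises on a serialization conflict, e.g. psycopg2's `errors.SerializationFailure` for SQLSTATE 40001; names are illustrative):

```python
class SerializationFailure(Exception):
    """Stand-in for the DB driver's serialization-conflict error
    (Postgres reports these as SQLSTATE 40001)."""

def with_retries(txn, max_retries=5):
    """Run `txn()` (a callable wrapping one whole transaction),
    re-running it from scratch whenever it fails on a conflict."""
    for _ in range(max_retries):
        try:
            return txn()
        except SerializationFailure:
            # The DB has rolled the transaction back; it's safe to re-run.
            continue
    raise SerializationFailure(f"gave up after {max_retries} attempts")
```

The important constraint is that `txn` must wrap the *entire* transaction, including any reads whose results feed into writes; retrying only the failing statement is not safe.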

Haven’t kept history from the bug tracker back that far, but we definitely hit some pretty awful issues in prod trying to solve race issues with “serialisable”. Big older codebases end up with surprising data access patterns.

I guess I'd say -- I think you're right that you shouldn't (ideally) be able to trigger true deadlocks/livelocks with just serializable transactions + an OLTP DBMS.

That doesn't mean it won't happen, of course. The people who write databases are just programmers, too. And you can certainly imagine a situation where you get two (or more) "ad-hoc" transactions that can't necessarily progress when serializable but can with read committed (ad-hoc in the sense of the paper here: https://cacm.acm.org/research-highlights/technical-perspecti...).


I’m not sure they were _introduced_ by switching to serialisable, but it meant some processes started taking long enough that the existing possibilities for deadlocks became frequent instead of extremely rare.

Acceleration at 1g lets you get to another galaxy in a single human lifetime (although Earth will have been swallowed by the sun by the time you arrive). Relativity is pretty counterintuitive.
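The standard constant-acceleration formulas make this concrete. For a trip that accelerates at 1g for the first half and decelerates for the second (units of years and lightyears, c = 1, so 1g ≈ 1.03 ly/yr²; Andromeda at ~2.5 million ly is used as an illustrative target):

```python
import math

G = 1.03   # 1g in lightyears/year^2 (c = 1 units)
D = 2.5e6  # distance to Andromeda in lightyears (illustrative)

def ship_time(d, g=G):
    """Proper (ship) time for an accelerate-half, decelerate-half trip."""
    return (2.0 / g) * math.acosh(g * d / 2.0 + 1.0)

def earth_time(d, g=G):
    """Coordinate (Earth-frame) time for the same trip."""
    return (2.0 / g) * math.sqrt((g * d / 2.0 + 1.0) ** 2 - 1.0)

print(f"ship time:  {ship_time(D):.1f} years")    # a few decades
print(f"earth time: {earth_time(D):.2e} years")   # millions of years
```

The hyperbolic cosine is what makes this so counterintuitive: ship time grows only logarithmically with distance, so even intergalactic distances fit inside a career, while Earth-frame time grows linearly (essentially distance over c).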


Perhaps I’m misremembering, but I feel sure that Siri was much better a decade ago than it is today. Basic voice commands that used to work are no longer recognised, or require you to unlock the phone in situations where hands-free operation is the whole point of using a voice command.


There were certain commands that worked just fine. But they, in Apple's way, required you to "discover" what worked and what didn't with no hints, and then there were illogical gaps like "this grouping should have three obvious options, but you can only do one via Siri".

And then some of its misinterpretations were hilariously bad.

Even now, I get at a technical level that CarPlay and Siri might be separate "apps" (although CarPlay really seems like it should be a service), and as such, might have separate permissions but then you have the comical scenario of:

Being in your car, CarPlay is running and actively navigating you somewhere, and you press your steering wheel voice control button. "Give me directions to the nearest Starbucks" and Siri dutifully replies, "Sorry, I don't know where you are."


Neither does baking a cake mean you'll get to eat any - but it's clearly a better cake-obtaining strategy than deciding not to bake a cake.


The thing is that taking an interest in baking a cake doesn’t actually feed anyone. If you’re not going to spend your time baking (i.e. actually get involved in politics, to drop the metaphor), then what’s the point?


> Is it perfect? No. Neither is SQL parameterization against all injection attacks. But good is better than nothing.

What injection attack gets through SQL parameterization?

If you must generate nonsense with an LLM, at least proofread it before posting.
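For context on why the question is fair: parameterization sends values out-of-band from the SQL text, so a payload is stored as data rather than parsed as code. A minimal sketch with Python's built-in sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("Alice",))

# A classic injection payload, passed as a bound parameter:
payload = "'; DROP TABLE users; --"
conn.execute("INSERT INTO users (name) VALUES (?)", (payload,))

names = [row[0] for row in conn.execute("SELECT name FROM users")]
# Both rows are present and the table was never dropped: the payload
# was stored as a literal string, not executed as SQL.
print(names)
```

The attacks that do get past parameterization involve the parts of a statement you *can't* parameterize (identifiers like table or column names built from user input, or dynamically assembled SQL fragments), not the bound values themselves.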


You doubt that it's a drug?

Or you doubt that internal messages describing IG as a drug are relevant to a lawsuit over social media addiction?


I think they might doubt that it will doom Meta


Correct, Meta won the war, just like loot boxes will keep existing even though they're proven to be designed to make you addicted.

People are just way too optimistic instead of realistic.


> People are just way too optimistic instead of realistic.

Apathetic is a more appropriate description. This is as optimistic as hoping to not contract lung cancer without ever bothering to quit smoking.


The hash technique for uniqueness isn’t supported for indexes because it doesn’t handle hash collisions. The author's proposed solution suffers the same problem: values which do not already exist in the table will sometimes be rejected because they have the same hash as something that was already saved.


This is completely untrue. While the index only stores the hashes, the table itself stores the full value, and Postgres requires both the hash and the full value to match before rejecting the new row. I.e. duplicate hashes are fine.


For a unique index, that requires a table check, which - IIRC - isn’t implemented during index updates.

The article suggests using a check constraint to get around that - are you saying that does actually check the underlying value when USING HASH is set? If so, I think the docs need updating.


That's super interesting and I am convinced by the dbfiddle, but it's not very intuitive or well documented: https://www.postgresql.org/docs/current/hash-index.html


This is very good to know, because it means this exclusion-constraint workaround is a better approach than using a SQL hash function and a btree if you want to enforce uniqueness on values too long for a btree index.


Your comment is 100% not true: https://dbfiddle.uk/Iu-u886S


No, but ECC RAM gives you _hardware_ parity checks, which is much faster and doesn’t require you to change your code.

