The main reason HA accounted for so many requests is probably that it was a polling integration, requesting data from the server every 30 seconds, while the official app either received push events when something changed or refreshed state when the app was opened.
Why not... just allow HA to receive callback events when things change? This feels like an easy fix that doesn't piss off your power-user customers and instead makes them encourage others to invest in your products. Power users will come back because, despite the little bit of extra engineering effort, they'll be glad you thought of them.
Why not simply allow HA to integrate on-site rather than having to go through some crappy service that likely won't last the lifetime of the doors in the first place?
That's also a good question. One reason I'd be okay with callbacks is if the software that decides what to do lives on a server somewhere else entirely: maybe you own multiple homes and don't want to run several on-premise servers when one would do. I'm also thinking of more than just whatever HA is doing and whatever a power user might do.
I bought MyQ's HomeKit bridge to allow local integration with Home Assistant. It was a bit of a pain to set up initially, and it's stupid that I have a separate device when the openers themselves support WiFi natively, but it's been rock-solid.
You know that "bit of a pain to set up initially" you mentioned? Yeah, I've had to do that repeatedly because its little pea-brain forgets every few months. It's been anything but rock-solid for me. I just gave up on it.
I initially bought the bridge because I thought a wireless relay spliced into the hardwired door switch would be too much trouble, so I figured I'd spend a little and save some time. Boy, was I wrong.
I had a version of your experience, but it resolved magically. No idea why. I originally set up the integration, and it worked. Then I completely rebuilt HA at one point and had to redo the bridge config, and it just refused. All sorts of errors, it just refused to even see the doors. Frustrated, I chucked the device in my closet and forgot about it for a while.
Then a few months later I decided to try again and be very careful and deliberate, and ... it worked. Just like it was supposed to. Sigh. No idea what incantation I did right, but now it has been working for several years without a hitch.
I did recently buy a ratgdo (well, ordered it at least, it hasn't arrived). That's my backup plan if the Home Bridge decides to go tits up.
I've been lucky, I guess. After I got it set up, it's just worked—even across various configuration changes I've made to Home Assistant and my network infrastructure.
I'm not saying owners should be completely barred from modifying their systems, but there are security implications to bypassing their centralized / cloud-based authentication.
It'd be possible for a knows-enough-to-be-dangerous customer to modify their system in such a way that they unwittingly allow unauthenticated local access. From my point of view, Chamberlain/MyQ should be totally indemnified in such scenarios, but I'm not sure how murky the legalities would be in terms of getting judges/juries to accept "caveat emptor".
EDIT: Maybe there's a way to ensure customers have signed an indemnification agreement before unlocking local API access? I guess there'd also need to be a way to ensure/promote a factory reset if/when ownership/rentalship changes.
It happens all the time, no tech required, any time someone is foreclosed on.
I agree it's wiser to avoid such situations but a lot of people end up delegating this kind of responsibility. If enough of them end up burning their own fingers, that could go badly for a provider. Even if frivolous lawsuits weren't a thing, a spate of ignorant but angry social media posts could be very damaging.
Again, I'm not saying I necessarily have a solution or that hardware owners should have hurdles placed in their way. I'm just pointing out that in some ways the provider may be damned in one way if they do and damned in another way if they don't.
I suppose the IoT sub-sector will end up in similar proportions to other, older tech: Some vendors, analogous to e.g., Red Hat or Linode, will specialize in catering to enthusiasts / power-users and have fairly noncommittal / at-your-own-risk / no-warranty license agreements. However, if the past is any indication, most people will end up doing a lot of business in walled-garden analogs of Apple or Facebook.
That makes sense to me but I'm not sure your average judge/juror would see it so simply--especially given that in most cases it'd be a lot easier to tell if/when a deadbolt has been modified.
Good suggestion, but where and how does HA receive callbacks? I would guess that almost all HA instances are behind residential LANs and most aren't accessible on the public internet. You could use dynamic DNS and forward ports, but that's flaky, you might run into CGNAT, etc. And anyway, it's best if your HA instance isn't publicly addressable; mine is only accessible over my personal WireGuard VPN and I intend to keep it that way.
I'm sure this is a solvable and solved problem, but I do believe it is non-trivial, and potentially a major headache for a company to implement just to support a tiny niche of users. I'd be delighted to find out I'm wrong though!
And, unfortunately, the business case isn't there, since this weakens lock-in effects. I don't endorse this reason—that's why I run my own HA instance and don't buy or use any products that require the cloud or otherwise can't be operated entirely locally (including flashing Valetudo to my robot vacuum!).
If you pay for the Home Assistant Cloud subscription (built into HA, ~5 USD/mo), they can provision custom callback URLs for you so you don’t have to expose your HA instance. I have this set up for certain integrations such as Samsung SmartThings.
It’s not a perfect solution since it costs money but it’s a nice alternative to exposing your HA instance or some other front end proxy to the internet.
Unfortunately it's not actually that different in effect -- Nabu Casa proxy the encrypted TCP connection, rather than terminating TLS and proxying HTTP, which is great for privacy but not so much for providing an extra layer of security on top of HA itself.
It is also much easier for those without easy access to extra static IP addresses. Given the target audience I think it's probably the right approach.
I don't think it's entirely devoid of security improvements: you need to know the webhook address in order to talk to an HA instance, which is a lot harder than just port scanning for an open (perhaps unpatched) HA instance on the public internet. I would still prefer it if things exposed a local API or spoke MQTT, though.
Open a TCP connection from the instance to the cloud service. I don't know about all consumer routers, but I just checked mine and the default TCP established timeout is 7440 seconds. Idle timeouts are supposed to be at least 2 hours.
If you served the entire US (130 million households) and had a 1 hour keepalive, that's only 36k packets per second, which is nothing.
You could also auto-train the idle timeout by using a pair of TCP connections. One uses a known good value while the other probes upwards until it finds its connections start getting closed (with some optional binary search fanciness), feeding new known good values back to the first.
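Here's a rough Python sketch of what that training loop could look like. The `connection_survived` callable is a placeholder for however you'd actually test an idle interval (e.g. letting a second connection sit quiet for that long and then sending a probe); everything here is illustrative, not a real protocol.

```python
def train_idle_timeout(connection_survived, floor=60, ceiling=4 * 60 * 60):
    """Return the longest idle interval (in seconds) that still survives."""
    known_good = floor  # assume the floor is a safe, conservative keepalive

    # Phase 1: double the probe interval until a connection gets dropped.
    probe = floor
    while probe < ceiling and connection_survived(probe):
        known_good = probe
        probe *= 2
    if probe >= ceiling and connection_survived(ceiling):
        return ceiling

    # Phase 2: binary-search between the last good value and the first failure.
    low, high = known_good, probe
    while high - low > 30:  # 30 s resolution is plenty for a keepalive timer
        mid = (low + high) // 2
        if connection_survived(mid):
            low = mid
        else:
            high = mid
    return low


# Example: a fake NAT that drops connections idle longer than ~7440 s
# (the router default mentioned above). Prints a value just under 7440.
if __name__ == "__main__":
    print(train_idle_timeout(lambda idle: idle < 7440))
```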
MQTT is the solution for this. Note that the garage door openers talk MQTT to the MyQ service (over TLS with preshared keys). It should be possible to subscribe to events from your garage door opener(s) and also to send commands to them.
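For illustration, here's roughly what that would look like with paho-mqtt (1.x-style callbacks; 2.x requires a callback_api_version argument). The broker address and topic names are hypothetical - MyQ's actual broker, TLS-PSK setup, and topic scheme aren't public - so treat this as the general shape rather than a working client.

```python
import paho.mqtt.client as mqtt

BROKER = "mqtt.example.com"           # assumption: a broker you can reach
STATE_TOPIC = "garage/door1/state"    # assumption: hypothetical topic names
COMMAND_TOPIC = "garage/door1/command"


def on_connect(client, userdata, flags, rc):
    print("connected, rc =", rc)
    client.subscribe(STATE_TOPIC)           # get push events instead of polling
    client.publish(COMMAND_TOPIC, "open")   # e.g. a command from an automation


def on_message(client, userdata, msg):
    print(f"{msg.topic}: {msg.payload.decode()}")


client = mqtt.Client()                 # paho-mqtt 1.x style constructor
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883)
client.loop_forever()                  # dispatches callbacks, handles reconnects
```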
But MQTT alone doesn't solve the challenge of having some Internet server push messages to a Home Assistant instance running inside a home network, behind a router / firewall / NAT, unless a port is opened on the router or long-polling is used.
I recently bought a Nuki smart lock, purely because it offered MQTT support with automatic Home Assistant discovery. Vote with your wallets and we can have nice things.
Because that would require them to build a callback system for the 0.2%. I don't have this, but I'm guessing the app only checks whether your garage is open when you open the app. That is, if you don't have the app open and someone opens the door, you don't get a notification.
If I recall correctly, Chamberlain had an optional accessory that added HomeKit support to garage door openers, and it was discontinued last year. Home Assistant is capable of acting as a HomeKit hub, allowing it to locally control HomeKit-compatible devices that would otherwise have required a cloud connection.
Haha this is the company that has an undocumented encrypted wire protocol between the wired button and the opener so you have to use their button instead of a normal doorbell switch.
I would argue that letting HA define a callback URL or some way to receive those events instead of relying on polling would do it. But also, are they caching the responses? I have a weird feeling that the vendor is not caching enough, especially for data that changes insanely infrequently.
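As a sketch of the kind of server-side caching I mean: answer repeated polls for rarely-changing device state from an in-memory cache instead of hitting the backend every time. The class below, the 30-second TTL, and the invalidation hook are all made up for illustration; nothing here reflects MyQ's actual architecture.

```python
import time


class StateCache:
    def __init__(self, loader, ttl=30.0):
        self._loader = loader          # expensive lookup (DB / device query)
        self._ttl = ttl                # matches the HA polling interval above
        self._cache = {}               # device_id -> (expires_at, state)

    def get(self, device_id):
        now = time.monotonic()
        hit = self._cache.get(device_id)
        if hit and hit[0] > now:
            return hit[1]              # cheap: serve the poll from memory
        state = self._loader(device_id)
        self._cache[device_id] = (now + self._ttl, state)
        return state

    def invalidate(self, device_id):
        # Call this when the opener reports a change, so pollers (or better,
        # webhook subscribers) see fresh data immediately.
        self._cache.pop(device_id, None)
```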
That’s definitely the high road solution. The low road solution would have been to start suing HA users under the CFAA. So I guess they took the middle road.
Possible answers would be for the company to create an official integration using a change-of-state trigger rather than a polling trigger, or possibly to throttle requests from a particular IP to a certain number per day to incentivise parsimonious usage.
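A toy version of that per-IP daily quota, just to show how little it would take (the limit and reset behaviour are made-up numbers; in practice this would live in the API gateway):

```python
import time
from collections import defaultdict

DAILY_LIMIT = 500          # assumed allowance: roughly one poll every 3 minutes


class DailyQuota:
    def __init__(self, limit=DAILY_LIMIT):
        self._limit = limit
        self._counts = defaultdict(int)
        self._day = self._today()

    def _today(self):
        return int(time.time() // 86400)   # whole days since the epoch (UTC)

    def allow(self, ip: str) -> bool:
        today = self._today()
        if today != self._day:             # new UTC day: reset all counters
            self._counts.clear()
            self._day = today
        self._counts[ip] += 1
        return self._counts[ip] <= self._limit
```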
Absolutely. It would also be possible for them to create a local API that home assistant can call over the local network. The real problem is that the company just doesn't care.
HA even claims that it’s used as a test bed for many IoT products, so it often has integrations before any other platform. Kind of makes sense, given how many cross-platform integrations there are in it.
MyQ has built-in integrations for Apple Smart Home and Alexa. I’m assuming in those situations the MyQ app passes state to those services so they don’t have to poll.
Even if there is no option for 100 year registration, 10 years is pretty common[0]. Google deactivates an account and deletes all data after 2 years of inactivity[1]. So even if someone stops using the domain, all data would be deleted after 2 years, emails sent to it would bounce for ~8 years, and only then could someone take ownership of the domain. Even if you're using another paid service, your payments would fail, and your account would also be deleted before your domain expires.
I was wondering the same, and saw that this was asked in previous "Who is hiring?" threads, but a clear answer on what "Remote" means was never provided, so until that happens, here's what I found:
I also looked it up on LinkedIn: out of 148 employees marked as working in engineering, only 14 are based outside the US - and all 14 are in Canada.
Some older comments in "Who is hiring?" threads say "continental US preferred" and "US timezone preferred", but those comments are a few years old - although the LinkedIn data seems to imply this is still the case.
Besides being sick of subscriptions for every small thing, I'm not sure I understand the premise here:
"Pay to download or for other services: Not worth it; users can find the software somewhere else and they don't need your other services."
So users won't pay a one-time fee, but instead they will pay a subscription to get that one software they need? They won't "find the software somewhere else" if it's behind a subscription, but will do so if it's behind a single payment?
The thing is that this solution scales better. If you had to pay all developers individually, it would not be worth it, but with my solution you have to pay only one.
Also, it doesn't have to be a subscription. The payment is 100% up to the developers you pay, so they could sell a one-time purchase and register it as a lifetime subscription in this system.
If I understand correctly, you are not getting one piece of software. You get access to everything in their library, like a Spotify subscription. You also choose which developer gets your $5 or whatever, so you retain the meritocratic infrastructure that a traditional marketplace provides.
Now that you mention it, the Spotify comparison is actually very interesting here. A bundled subscription for all the software you use could make sense (though it would probably be 10-100x the cost of a Spotify subscription).
However, OP's resource allocation model (each user determines which developer gets their payment) doesn't make sense to me. I think it would be better to prototype multiple resource allocation models in parallel and see which are most fair and sustainable over time.
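If anyone wants to play with that, here's a tiny prototype comparing two such models: the OP's user-directed model versus a usage-proportional split. The fee, users, and usage hours are all invented purely to compare the shape of the payouts.

```python
from collections import defaultdict

FEE = 5.00  # assumed monthly subscription per user

# user -> (developer they chose, {developer: hours of use this month})
users = {
    "alice": ("dev_a", {"dev_a": 10, "dev_b": 2}),
    "bob":   ("dev_a", {"dev_a": 1,  "dev_b": 9}),
    "carol": ("dev_b", {"dev_b": 5}),
}


def user_directed(users):
    # OP's model: each subscriber's whole fee goes to the developer they picked.
    payouts = defaultdict(float)
    for chosen, _usage in users.values():
        payouts[chosen] += FEE
    return dict(payouts)


def usage_proportional(users):
    # Alternative: split each fee across developers by share of usage.
    payouts = defaultdict(float)
    for _chosen, usage in users.values():
        total = sum(usage.values())
        for dev, hours in usage.items():
            payouts[dev] += FEE * hours / total
    return dict(payouts)


print(user_directed(users))        # {'dev_a': 10.0, 'dev_b': 5.0}
print(usage_proportional(users))   # dev_b ends up ahead under this model
```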
Next to nobody will pay 100x a Spotify subscription for anything, no matter how great it is. Despite what business owners like to believe, most normal people in the first world have something like $100 a month total, after food + rent + utilities, to spend on any and all entertainment and luxuries. At best you could maybe charge like 60 dollars a month, like cable, but that would have to be an unbelievable deal with no alternative (not possible; it's incredibly easy to make new software, so you'd constantly be undercut by startups and open source chipping away at your catalogue).
I could maaaaaybe see it working on iPhone as a premium apps service, where they have a lot more control.
SetApp is pretty much that (for Mac, I don't know if they also do Windows stuff). I've avoided it and instead bought a lot of software available in the bundle because I prefer to own the software when I can and when it makes sense.
OAuth might be one way to enable it. Instead of logging in via Google/Facebook/Apple, you could log in via "SaaSBundler"; this would register that you've used that product, which could help allocate the distribution of funds. It might need some more work if you wanted to distribute based on time.
> From 2020 to 2023 we grew our team substantially and added layers of management. We now need to simplify Netlify to be more nimble
While management layers certainly do add overhead to varying degrees, simply getting rid of them and trying to move faster and be more nimble, in my experience, leads to one of two things:
1. The existing management-layer work gets offloaded to multiple people who have no interest in those topics, usually without any extra pay either.
2. The management layers get removed without any replacement, which usually leads to long-term overhead across all departments that's often hard to quantify.
Combined with the wish to expand and evolve, especially within the enterprise sector, neither of the above options is a good choice. The only thing such a change is good for is cutting expenses in the short term to inflate company value on paper.
I had around 180 orders from Amazon in Germany in 2022 - multiple DSLR/mirrorless cameras, lenses, all my Gardena stuff, and a ton more. Not a single item arrived showing any signs of use, on either the package or the item itself, and I've never heard anyone else say that they got used items.
It's very weird that you've had so many used items delivered, but it's definitely not a common thing in Germany.
It's totally weird that you never got this. Perhaps you spend so much money on Amazon that you've moved into a VIP bracket (I'm sadly not that rich - 180 orders/year of electronics, cameras, and lenses, wow, I wish I had that kind of spare money!).
I've met many people in Germany who also got used stuff from Amazon and therefore left. I also know several people who buy 3 cameras/TVs/phones, use them for two weeks, then send 2 of them back and keep one.
And whenever I've bought lenses on Amazon, they weren't marketplace deals, so those returned cameras and lenses need to end up somewhere. Amazon does trash returns, but surely not $3k cameras.
So cringey. Should we also add badges for doing it without autocomplete or syntax highlighting? Hell, why even bother with text; if you didn't develop it on punch cards, then you're not a real developer.
I think you're being unreasonable. Your examples are closer to spellcheck and punctuation correction, and even those have subtle societal considerations, such as Americanising English. A much more appropriate comparison would be Copilot, which is what's being discussed nowadays.
Are there really still purists who don’t just reject technology for their own music, but who insist that any music that uses technology is somehow not real art? I haven’t met anybody like that in 30 years.
I don’t understand the point of your insulting response. Younger generations grew up with technology just being part of music. Music goes on just fine; nobody makes that stupid argument anymore.
Nobody said anything about people born in the 70's. They said "musicians who didn’t want to learn technology in the 70's", which is clearly different.
Those musicians were probably 30+ in the '70s, making them 83+ now. The remaining life expectancy of someone who reaches 30 is about another 48 years, so yes: most of them are dead now.
I already use it instead of google to look up stuff, as well as to learn additional things.
Is it some sort of magical AI that will always produce 100% accurate answers no matter what the question is? Absolutely not.
Is it better than giving me a list of links, some of which contain inaccurate, privacy-invading, outdated garbage written by humans? To me personally, yes - it's much better.
I do have to say that I'm not attempting to solve cryptic crosswords or similar, but rather I use it for things that interest me or that I don't understand. Or even to go through some code I've written, to find bugs, improve it, and so on. And at least for my use case it has been more reliable than a lot of people I know.
> All it shows me is large breasted women and “crystal polishers” (which I never knew was a thing)
It shows me an endless stream of cute kittens. It's amazing.
I'm also confused by the security risks listed. I just checked the iOS privacy report that shows which sensors and data apps have used, and TikTok is not even on the list. I'm also struggling to understand things listed in the article, such as it accessing texts on the device(?) - like... it can't do that, at least not on an iPhone. As for "voiceprints" and "faceprints", do they mean that video/audio content can be uploaded to it? Like a thousand other apps?
If anything, the argument should be that apps should be better sandboxed by default and permissions should be an explicit opt-in; the current approach just seems like fearmongering.
TikTok is well-known for using vulnerabilities to access more personally identifying data from users than it should [0], such as MAC addresses [1] and device IMEI numbers.