Hacker News | Wilya's comments

I'd like to point out that Qonto is a business bank. Not open to consumers. They also have a list of prohibited activities (https://legal.qonto.com/en#template-uoa8xux5p) which, funnily enough, includes "Hunting, trapping and related service activities", "mining nonrenewable natural resources" and "accessibility diagnosis" (??).

Consumer protection laws obviously don't apply to businesses, and banks close business accounts all the time for not following the terms of service. That sounds like a MUCH more probable cause than "I said mean things about Palantir".


The guy has a "Conspiracy theory and disinformation" section on his Wikipedia page, which mentions 9/11.

Come on. That's your "totally not pro-Russia" example?


So being pro Russia justifies de-banking?

In 2015, systemd was a giant, immature and complex galaxy of tools that came to replace a hacky-but-mostly-stable bunch of shell scripts. It was pushed fast. It came with good ideas and innovations. It also came with security issues, bugs, and lost productivity.

The fact that the main guy behind the project has a very... abrasive personality, and that the project got to widespread adoption through political moves more than through technical superiority, turned that dislike into hate.

But it's 2025 now, systemd has stabilized, and I don't really see the point of all this anymore.
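For anyone who never saw the transition: the core change was replacing imperative init scripts with declarative unit files. A hypothetical example (the service name and paths are made up for illustration) of what replaces a hundred-line SysV script handling PID files and start/stop/status cases:

```ini
# /etc/systemd/system/myapp.service (hypothetical)
[Unit]
Description=My example application
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp --config /etc/myapp.conf
Restart=on-failure
User=myapp

[Install]
WantedBy=multi-user.target
```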


That's the way open source works. The people who think there's a point go and fork, and those who don't stay put.

Linux distros have become extremely complicated IMO. Systemd is not the worst example of this: the packaging systems are hard, and things like SELinux are very annoying. The stability is there because companies have spent money to make it so. There are enterprise features all over the place, etc. This just isn't what all of us necessarily want. I think there's room for distros that can be understood at a technical level, composed of units with defined functions that can be easily replaced.


In full-cloud environments, in small/middle companies I've worked at:

Developers handle 1). Devops handle 2)/3)/5). Nobody does 4)


Thanks. That is an interesting insight into the current reality. I assume the developers take care of query optimization, index setup, and schema design, while DB backups are handled by devops.

I must say, again I thought (I read it somewhere?) DevOps should take care of the constant battle between Devs and Operations (I've seen enough of that in my times) by merging 1 and 2 together. But it seems to be just a name change, and if anything, it seems worse, as a (IMHO) critical and central component, like the DB, now has totally distributed responsibilities. I would like to know what happens when e.g. a DB crashes because a filesystem is full, "because one developer made another index, because one from devops had a complaint because X was too slow".

Either people are far more professional than in my time, or it must be a shitshow to watch while eating popcorn.
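As a hypothetical illustration of the "developer adds an index to fix a slow query" part (the table and column names are made up), here is how an index changes a query plan, sketched in SQLite:

```python
import sqlite3

# Hypothetical schema, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

# Without an index, filtering on customer_id forces a full table scan.
before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42"
).fetchone()[-1]

# The "developer fixes a slow query" step: add an index.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42"
).fetchone()[-1]

print(before)  # a SCAN over the whole table
print(after)   # a SEARCH using idx_orders_customer
```

Whether an index is worth its write and storage cost is exactly the kind of call that falls into the gap between dev and devops described above.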


> DevOps should take care of the constant battle between Devs and Operations

In practice there is no way to relay "query fubar, fix" back, because we are much agile, very scrum: feature is done when the ticket is closed, new tickets are handled by product owners. Reality is antithesis of that double Ouroboros.

In practice developers write code, devops deploy "teh clouds" (writing yamls is the deving part) and we throw moar servers at some cloud db when performance becomes sub-par.


Nobody does 4 until they’ve had multiple large incidents involving DBs, or the spend gets hilariously out of control.

Then they hire DBREs because they think DBA sounds antiquated, who then enter a hellscape of knowing exactly what the root issues are (poorly-designed schemata, unperformant queries, and applications without proper backoff and graceful degradation), and being utterly unable to convince management of this (“what if we switched to $SOME_DBAAS? That would fix it, right?”).


Can confirm: that's exactly what we do.


YouTube Premium is still ad-free. There is a YouTube Premium Lite which is kinda-ad-free-but-not-really, but the full ad-free one still exists.


youtube premium has sponsorblock integrated now?


basically, yeah. there's a white fast forward button that appears during frequently fast forwarded sections, which unsurprisingly happens to be sponsor sections.


It depends how it is done. I used to think the same way, and I would never hire someone without having seen them program live.

But having experienced leetcode-style interviews on the candidate side, it's clear to me that they are no longer about figuring out and coding a solution on the spot. Interviewers expected a solution FAST, and to match that you need to have studied and learned the answer beforehand.


> Interviewers expected a solution FAST, and to match that you need to have studied and learned the answer beforehand.

Yeah this is the real BS behind the tests. Good interviewers help you manage and try to find a solution. Not just "answer is binary tree"


I blame everyone else and myself equally.

But I'll stop whining about politics when I stop witnessing well-behaved but incompetent people turn projects into failures.


> But regardless how fast we achieve net-zero carbon dioxide, there is good reason to believe that the societal impacts of extreme heat are manageable, and across different scenarios. For instance, according to the World Health Organization, even with increasing heat waves, mortality does not have to increase.

There are two big problems with this take:

1. The author takes the "100% adaptation scenario" from the paper, and ignores the rest of the discussion. Yes, if we mitigate the effects of heat waves, there will be no effect. I could have guessed that myself.

2. That part of the paper is about the deaths directly attributable to heat waves among people aged 65+. That is a super narrow metric. Maybe the author should read the "Undernutrition" part of the paper he himself quoted, which paints a very different picture. And that's not even the full picture.


In France last summer, some plants did indeed have to be shut down because of the drought, but:

* a minority of plants were involved; it was only an issue because it happened on top of other problems (planned maintenance delayed due to covid, corrosion issues)

* the problem wasn't actually the drought, it was the heat. The plants could have kept operating, but they would have rejected water that was too hot, in breach of environmental regulations.

Besides, new plants can be built close to the sea instead of rivers to account for that.


> Besides, new plants can be built close to the sea instead of rivers to account for that.

I think that's a no-go due to the salt in the water, which will corrode pipes etc. Not an expert on that, though, obviously.

All the other points don't sound too good either: Corrosion issues, let's fuck up nature with hot water, not very good at all.


It's possible to build nuclear plants by the sea, and it's actually quite common. There are some in France, China, Korea, etc.


At scale, you would need to factor in the durability of the media. [0] suggests that some types of bluray disks can last 20 to 50 years. Hard drives typically struggle to last 10 years. So if you need to replace hard drives 5x more than bluray disks, maybe it changes the economics.

That's a random study I found on Google, of course, I'm sure Meta has more accurate data on that.

Besides, you need to build the same kind of redundancy in both cases, so that shouldn't influence the choice.

[0] https://www.canada.ca/en/conservation-institute/services/con...


> At scale, you would need to factor in the durability of the media. [0] suggests that some types of bluray disks can last 20 to 50 years. Hard drives typically struggle to last 10 years. So if you need to replace hard drives 5x more than bluray disks, maybe it changes the economics.

Also consider the economics of actually retrieving and indexing the data. If you have to spend 2 hours looking for the blu-ray/DVD with the data you need, then maybe it changes the economics back to HDDs, which can look up a file within 20 TB of data almost instantaneously using the NTFS journal.

