Hacker News | jonathanlydall's comments

You can’t change anything about a commit without breaking the chain of SHA hashes in the commits, which causes pulls to break.

GitHub hides the emails on their web UI, but nothing stops people from pulling the repository with a Git client and looking at the emails in the commit log after doing so.
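The chain-breaking property can be sketched with a toy model. This is not Git's real object format, just a simplified stand-in (field names and layout are made up for illustration) showing why editing one commit re-hashes every descendant:

```python
import hashlib

def commit_hash(tree, parent, message):
    # Simplified stand-in for Git's commit object: the parent hash is
    # part of the hashed content, so changing any ancestor changes
    # every descendant's hash.
    body = f"tree {tree}\nparent {parent}\n\n{message}".encode()
    return hashlib.sha1(body).hexdigest()

c1 = commit_hash("t1", "0" * 40, "first")
c2 = commit_hash("t2", c1, "second")

# Editing the first commit (even only its message) gives it a new
# hash, which in turn changes the second commit's hash:
c1_edited = commit_hash("t1", "0" * 40, "first, edited")
c2_after_edit = commit_hash("t2", c1_edited, "second")
assert c2_after_edit != c2
```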


Which is why you should be careful to never use your actual email in git commits.

When I made a patch to the Linux kernel I did have to use a real email, since you have to send it to their mailing list. I used a throwaway email, which I have since configured on my mail server to forward to /dev/null (yes, I'm one of the weirdos still self-hosting email in 2026). The amount of spam I got was insane, and not even developer-relevant spam.


This makes me wonder how the Linux kernel git system deals with GDPR data deletion requests. Are they even legally allowed to deny them?

Which is in large part due to hoarding by OpenAI.

Although their stated reason for hoarding is that they "really need it", I think it was a strategic move to make their competitors' lives more difficult with little regard for the collateral consequences to non-competitors, such as regular people or companies needing new computers.


Absolutely, the “ORM == bad” viewpoint strikes me as highly ignorant of all the benefits they provide, particularly in statically typed languages.

People like me don’t choose an ORM to save ourselves from having to learn SQL (which you’ll still need to know); we choose one because 99% of the time it’s a no-brainer that vastly increases productivity and reduces bugs.

In a language like C#, EF Core can easily cover 95% (likely more) of your SQL needs with performance as good as raw SQL. For the small percentage of use cases where its performance is lacking, you fall back to raw SQL.

But if saving you from writing 95%+ of your SQL queries were not compelling enough, that’s just one benefit of EF Core. Another major time-saving benefit is not having to manually map SQL results to objects.
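For readers outside .NET, the manual-mapping chore being saved looks roughly like this. A sketch in Python with the stdlib sqlite3 module, standing in for the materialization an ORM like EF Core does automatically (table and class names are made up):

```python
import sqlite3
from dataclasses import dataclass

@dataclass
class User:
    id: int
    name: str

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Ada')")

# The manual step an ORM removes: naming every column and its
# position, for every query, and keeping it in sync with the schema.
rows = conn.execute("SELECT id, name FROM users").fetchall()
users = [User(id=row[0], name=row[1]) for row in rows]
print(users[0])  # User(id=1, name='Ada')
```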

But an often super underrated and incredibly valuable benefit, especially on substantially sized code bases, is the type safety aspect. Queries written in LINQ are checked at compile time, catching mistakes in column or table names for free.

Want to refactor your schema because your business domain has shifted or you just understand it better than before? No problem. Use standard refactoring tools on your C# code base, have EF Core generate the migration, and you’re done in tens of minutes, including fixing all your “SQL queries” (which were in LINQ).

EF Core is almost always a no-brainer for any team that wants high quality and velocity.


Then there are the realities of an enterprise context with multiple teams/programs of varying pedigree.

TFA is fine for a bounded context. Don't add another abstraction layer Just Because.

But past the nicely bounded context, hiding some detail could be really, really helpful.


> You can’t push changes to a non-bare repo – if you try, Git will reject your push.

You can push to a folder with a non-bare Git repo; it’s just that you can’t push to the branch it currently has checked out.

Or in other words: if you get an error when trying to push to a folder with a checked-out repo, push to a different remote branch.

(I do this regularly between WSL and the Windows host)


Like all languages, C#, Java, etc. have cargo-cultist developers who use them badly, in the case of OOP languages doing things like overdoing the separation of concerns.

However, speaking as someone who actually uses C# day in and day out and understands the trade-offs of different levels of (or even no) separation of concerns, it's not done for us to "feel" good.

On very large projects running for a very long time with many developers coming and going with varying levels of competency, an appropriate amount of separation of concerns and decent "plumbing" can be the difference between a code base enduring decades or becoming essentially unmaintainable due to unchecked entropy and complexity.


Yep, agreed. I worked on two PHP codebases at a company many years ago - the "old" and the "new". The old was just frameworkless PHP with three different ways of doing controllers, partial templates with no closing tags so that they could be inserted anywhere without breaking the page, inline SQL in the templates etc. The new was hugely complicated with long inheritance chains and multiple classes with the same name but in different directories... but it was structured and you couldn't easily do anything wild.

My microwave seems to gain minutes per month. My assumption was that it's due to the incompetence of Eskom, essentially the sole producer of South African electricity; incompetence is very common among the government and parastatals here.

However, out of interest I just pulled yesterday's stats from my inverter on Sunsynk's website. It logs the grid frequency at 5-minute intervals, and the average over the whole day was 49.975Hz, which doesn't strike me as particularly bad, so I have to wonder if the microwave itself has an issue. It's a Samsung which is now 13 years old.


> the average over the whole day was 49.975Hz which doesn't strike me as particularly bad.

A day, having 86,400 seconds in it, is equivalent to 4,320,000 pulses at 50 Hz. At 49.975 Hz, it's only 4,317,840 pulses, which is 2,160 pulses too few. Which, assuming 50 Hz, translates into a discrepancy of 43.2 seconds in this one day.

So, no, it's a pretty big discrepancy actually. Over here, anything over 0.2 Hz is legally declared "degraded quality", and it has been debated for years that even this margin is far too wide, but the electricity providers/grid operators have successfully argued that they can't afford upgrades.

Moral of the story: don't get cute when designing electronics; just use an AC/DC power supply and put in a damn crystal oscillator like every other reasonable person.
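The arithmetic above can be checked in a few lines, assuming a clock that simply counts mains cycles and expects exactly 50 Hz:

```python
NOMINAL_HZ = 50.0
MEASURED_HZ = 49.975
SECONDS_PER_DAY = 86_400

expected_cycles = NOMINAL_HZ * SECONDS_PER_DAY   # 4,320,000 cycles
actual_cycles = MEASURED_HZ * SECONDS_PER_DAY    # 4,317,840 cycles

# The clock converts counted cycles back to seconds at 50 cycles/s,
# so the shortfall in cycles becomes lost time:
drift_seconds = (expected_cycles - actual_cycles) / NOMINAL_HZ
print(drift_seconds)  # ~43.2 seconds lost per day
```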


I'm guessing you're being downvoted largely due to the "don't be snarky" rules.

You're right (by my maths too, which I only did now) about it being a discrepancy of 43.2 seconds per day, which as you say is quite high.

However, it is my understanding that most grid operators are actually very good about maintaining a 50Hz average over a day, specifically for devices that keep time based on the mains frequency. I've heard they intentionally run the generators faster or slower at certain points in the day to get the average right over the day.

I used to have no issues with time drift on my microwave, only started in the last few years.


Speaking as a 44-year-old, this tracks.


This was also my understanding.

It's essentially like "cracking" a password when you have its hash and know the hashing algorithm. You don't have to know how to reverse the blur; you just need to know how to apply it the normal way. You can then essentially brute-force through all possible characters, one at a time, to see if the result looks the same after applying the blur.

Thinking about this, adding randomness to the blurring would likely help.

Or, far more simply, just mask the sensitive data with a single color, which is impossible to reverse (for rasterized images; this is not a good idea for PDFs, which tend to keep the text "hidden" underneath).
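The brute-force idea can be sketched with a toy model. The `glyph` and `box_blur` functions here are made-up stand-ins (not a real font renderer or Gaussian blur); the point is only that a known, deterministic blur lets the attacker test candidates:

```python
def glyph(ch):
    # Hypothetical stand-in for rendering one character to a row of
    # pixel values; any deterministic, distinct-per-character function
    # works for the demonstration.
    return [ord(ch) * (i + 1) % 251 for i in range(8)]

def box_blur(pixels):
    # Simple 1-D moving-average "blur" over a 3-pixel window.
    out = []
    for i in range(len(pixels)):
        window = pixels[max(0, i - 1):i + 2]
        out.append(sum(window) // len(window))
    return out

secret = "7"
leaked = box_blur(glyph(secret))  # the "redacted" image we got to see

# The attack: blur every candidate character the normal way and
# compare against the leaked blur.
recovered = next(c for c in "0123456789" if box_blur(glyph(c)) == leaked)
print(recovered)  # 7
```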


> mask the sensitive data with a single color which is impossible to reverse

You note the pitfall of text remaining behind the redaction in PDFs (and other layered formats), but there are also pitfalls here around alpha channels. There have been several incidents where folks drew not-quite-opaque redaction blocks over their images.


> just mask the sensitive data with a single color which is impossible to reverse (for rasterized images, this is not a good idea for PDFs

Also not a good idea for masking already-compressed images of text, like JPEG, because some of the information might bleed into uncovered areas.


Interesting - does a little extra coverage solve this or is it possible to use distant pixels to find the original?


Yep, some padding fixes this.

JPEG compression can only move information at most 16px away, because it works on 8x8 pixel blocks on a 2x down-sampled version of the chroma channels of the image (at least in the most common form of it).


I'm not super familiar with the JPEG format, but iirc h.264 uses 16x16 blocks, so if JPEG is the same then padding of 16px on all sides would presumably block all possible information leakage?

Except the size of the blocked section, ofc. E.g. if you know it's a person's name from a fixed list of people, well, "Huckleberry" and "Tom" are very different lengths.


Why would you expect people not living in the Netherlands to just know this?

In South Africa where I live, anywhere that accepts cards accepts Visa and Mastercard. Outside of the "informal business" sector (e.g. tiny "businesses" like a roadside stall), card acceptance is so ubiquitous I don't carry cash anymore and it's very rarely an issue.


> Why would you expect people not living in the Netherlands to just know this?

Because this is a thing in most of Europe. Most people don't own a credit card, and those who do use it mostly for online purchases from outside the EU. I don't travel to another continent without looking up how to pay for things when I get there.


> Because this is a thing in most of Europe.

I think you vastly exaggerate your observations.



It's not about owning a credit card, but about being able to use it in a physical shop. In one of the parent comments, the author of the comment I replied to said:

> Why would you expect to be able to use a creditcard in a physical shop in the Netherlands?

And then

> Because this is a thing in most of Europe.


This is definitely not a thing in most of Europe.

The Netherlands' anti-credit-card stance is quite unique, perhaps matched only by Germany and its obsession with cash.


I was just in Germany and lived on my credit card - not a single Euro used. I strongly disagree with this generalization based on extensive personal experience.


> Why would you expect people not living in the Netherlands to just know this?

It’s generally a good idea to learn something about your travel destination and not to assume everything there is the same as where you live. The world is big and diverse.


I’m very much a proponent of statically typed languages and primarily work in C#.

We tried “typed” strings like this on a project once for business identifiers.

Overall it worked in making sure that the wrong type of ID couldn’t accidentally be used in the wrong place, but the general consensus after moving on from the project was that the “juice was not worth the squeeze”.

I don’t know if other languages make it easier, but in C# it felt like the language was mostly working against you. For example, data comes in and out over an API in string form, meaning you have to do manual conversions all the time.

In C# I use named arguments most of the time, making it much harder to accidentally pass the wrong string into a method or constructor parameter.
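For comparison, here is a rough Python analogue of such typed ID wrappers, a sketch rather than the project's actual code (the `CustomerId`/`OrderId` names are hypothetical): a tiny frozen wrapper per ID kind, so the wrong kind cannot be passed by accident.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CustomerId:
    value: str

@dataclass(frozen=True)
class OrderId:
    value: str

def load_order(order_id: OrderId) -> str:
    # A static type checker flags wrong ID kinds at "compile" time;
    # this runtime check mirrors that guarantee for plain execution.
    if not isinstance(order_id, OrderId):
        raise TypeError("expected an OrderId")
    return f"order {order_id.value}"

print(load_order(OrderId("o-1")))   # works
# load_order(CustomerId("c-1"))     # raises TypeError (and a type
#                                   # checker rejects it statically)
```

The downside the comment describes shows up here too: data arriving as plain strings from an API must be wrapped and unwrapped at every boundary.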


In F# you can use a single-case discriminated union to get that behaviour fairly cheaply and ergonomically.

https://fsharpforfunandprofit.com/posts/designing-with-types...

