
Can you provide an example of a commonly used cryptography system that is known to be vulnerable to nation state cracking?

As for backdoors, they may exist if you rely on a third party, but it's pretty hard to backdoor the relatively simple algorithms used in cryptography.


It's not so much that there is a way to directly crack an encrypted file as that there are backdoors in the entire HW and SW chain you use to decrypt and access it.

Short of you copying an encrypted file from the server onto a local trusted Linux distro (with no Intel ME on the machine), airgapping yourself, entering the decryption passphrase from a piece of paper (written by hand, never printed), with no cameras in the room, accessing what you need, and then securely wiping the machine without un-airgapping, you will most likely be tripping through several CIA-backdoored things.

Basically, the extreme level of digital OPSEC maintained by OBL is probably the bare minimum if your adversary is the state machinery of the United States or China.


This is a nation state living in perpetual tension, formally still at war, and facing persistent attempts at sabotage by a notoriously paranoid and unscrupulous totalitarian/crime-family state next door.

SK should have no shortage of motive, nor much trouble (it's an extremely wealthy country with a very well-funded, sophisticated government apparatus), implementing its own version of hardcore data security for backups.


Yeah, but also consider that maybe not every agency of South Korea needs this level of protection?


> Can you provide an example of a commonly used cryptography system that is known to be vulnerable to nation state cracking?

DES. Almost all pre-2014 standards-based cryptosystems due to NIST SP 800-90A. Probably all other popular ones too (like, if the NSA doesn't have backdoors to all the popular hardware random number generators then I don't know what they're even doing all day), but we only ever find out about that kind of thing 30 years down the line.
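
To put numbers on the DES case (a rough sketch - the only hard number here is the 56-bit key size; the keys-per-second rate is an assumed figure for dedicated hardware, not a measured one):

    # Brute-force budget for a 56-bit keyspace at an assumed search rate.
    # 1e12 keys/sec is a guess at what purpose-built hardware could do;
    # EFF's Deep Crack was already finding DES keys in a couple of days back in 1998.
    keyspace = 2 ** 56        # DES effective key size
    rate = 1e12               # assumed keys tested per second
    hours_to_exhaust = keyspace / rate / 3600
    print(hours_to_exhaust)   # ~20 hours; expected hit at about half that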


Dual_EC_DRBG


while ad blocking has grown in prevalence over the years, for something like youtube I'd figured it was more than counteracted by the shift to mobile / TV (where ad blocking is more complicated)

whatever the merits, this (and google's neutering of extensions in chrome) signals a fundamental attitude shift from ~10 years ago; they're more interested in squeezing margins out of their dominant platforms than in growth


Firefox mobile has uBlock Origin


*not on iPhone


Trying to watch one walled garden from inside another walled ecosystem. No wonder it works the way they want and you can't simply do what you want.


Yeah, it's true. iOS 9 Safari actually had the ability to play YouTube in the background without paying for that, and in iOS 10 they went out of their way to prevent it. And Apple signaled willingness to go along with WEI back when that was on the table.


Use Orion. It supports FF and Chrome extensions on mobile and desktop


Orion is a buggy mess. Horrible experience overall.

I just use Vinegar [0] and watch YT on Safari. It also allows me to listen to the videos with the phone locked.

[0] https://apps.apple.com/us/app/vinegar-tube-cleaner/id1591303...


Safari + Vinegar is my favorite way to watch youtube on any platform. One minor bug I sometimes notice is that the PiP option stops working between videos until you actually hit refresh.

Agreed about Orion, I keep it around and update it and try it out every now and again but I don’t think the experience is there yet.


It's too bad, the stock Safari in iOS 9 did both those things. Nowadays the rare times I want to watch YT on Safari, I just refresh the page once or maybe twice, which somehow makes it not show an ad.


uBlock Origin still does not work on Orion mobile, sadly.


I wonder if the increasing number of computers in orbit will mean even more strange relativistic timekeeping stuff will become a concern for normal developers - will we have to add leap seconds to individual machines?

Back of the envelope says ~100 years in low earth orbit will cause a difference of 1 second
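
A quick version of that envelope, in case anyone wants to check it (assuming a circular 400 km orbit and keeping only the first-order velocity and gravitational-potential terms; the ground clock's own rotation and geoid details are ignored):

    # Fractional clock-rate difference for a circular LEO clock vs. ground.
    GM = 3.986e14           # Earth's gravitational parameter, m^3/s^2
    c2 = 8.988e16           # speed of light squared, m^2/s^2
    R_earth = 6.371e6       # mean Earth radius, m
    r = R_earth + 400e3     # assumed orbital radius (400 km altitude), m

    velocity_term = -GM / (2 * r * c2)             # special relativity: orbit runs slow
    gravity_term = (GM / c2) * (1/R_earth - 1/r)   # general relativity: orbit runs fast
    fractional = velocity_term + gravity_term      # ~ -2.9e-10 overall

    century = 100 * 365.25 * 24 * 3600
    print(fractional * century)                    # ~ -0.9 s: about a second per century

At LEO altitude the velocity term wins, so the orbiting clock runs slow; go high enough (GPS and above) and the potential term dominates and the clocks run fast instead.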


Most of those probably don't/won't have clocks that are accurate enough to measure 1 second every hundred years; typical quartz oscillators drift about one second every few weeks.


For GPS at least it is accounted for: about 38 microseconds per day. The satellites carry atomic clocks accurate to something like 0.4 milliseconds over 100 years; their frequencies are deliberately offset from what they would be on Earth, and they are constantly synchronised.
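
For the curious: plugging the GPS orbit into the same kind of first-order estimate as the LEO calculation above (r of roughly 26,600 km, so orbital speed around 3.9 km/s) gives about 46 microseconds/day fast from the gravitational term and about 7 microseconds/day slow from the velocity term, netting out around 38-39 microseconds/day fast, consistent with the quoted figure. (Rough numbers only; the real system also corrects for orbital eccentricity and other smaller effects.)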


Autocomplete is more for discoverability than saving on typed characters, letting you rely less on documentation and more on the actual interface you're interacting with


That's exactly the problem: You don't know the code. You're relying on autocomplete to tell you what the signature of the function you're looking for is.

I rely less on documentation because I've got it all memorized due to not relying on autocomplete.


Surely you didn't memorize the entire 3rd party ecosystem of a programming language?


I memorize everything I work with. Knowing my tools makes me a better developer.


I mean, maybe? But if you hover over the function you just completed, the same LSP will also show you the documentation.

This is how I personally use it for discovery, anyway. The other day I was writing some Rust code and needed to remove a prefix from a &str. I tried a few common names after ‘.’ to see what would autocomplete, before finding that Rust calls this idea “trim_start_matches”. I then wanted to know what happens if the prefix wasn’t present, so I just hovered my mouse over it to read. Now, if I were writing Rust a lot, I would end up memorizing this anyway. I’ve never written Python without a similar tool involved, yet I have a pretty close to encyclopedic knowledge of the standard library.

I feel similarly about go-to-definition. I often use it the first time I’m exploring code, or when I’m debugging through some call stack, but I always read where I actually end up and form a mental map of the codebase. I’m not sure I buy either contention in this thread: that these “crutches” make developers uniformly worse, or that removing them would make poor developers suddenly more disciplined.


I’m curious as to how it lets you rely less on documentation. If you don’t know what you’re looking for, then how will you know you chose the right thing?

The classic example of getting this wrong is probably C# developers using IEnumerable when they should’ve used IQueryable.


> The classic example of getting this wrong is probably C# developers using IEnumerable when they should’ve used IQueryable.

Or literally any function from the standard library in C++, which will likely have undefined behaviour if you look at it wrong and haven't read the docs.


Can you elaborate on this? I'm one of the C# developers who operate predominantly in the Unity3d slums. This isn't familiar to me.

The closest thing that comes to mind is mixing up IEnumerable and IEnumerator when trying to define a coroutine.


IQueryable inherits from IEnumerable and extends it with functionality that reduces how much has to be loaded into memory when querying a collection (typically when reading from a database). Using IEnumerable can increase memory usage significantly compared to IQueryable.

Not every C# developer knows the difference; in my region of the world it's an infamous mistake, and often the first thing you look for when tasked with improving bottlenecks in C# backends. On the flip side, using IQueryable isn't very efficient if what you wanted was to work on in-memory collections.

There is an equally infamous, somewhat related issue in Python with lists vs. generators. It's especially common when non-Python developers work with Python.
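
To make the Python version concrete (a toy sketch - nothing database-specific, just the eager-vs-lazy distinction that the IEnumerable trap boils down to on the memory side):

    import sys

    # Stand-in for "query a big table": ten million rows.
    eager = [i * i for i in range(10_000_000)]   # list: all rows materialised now
    lazy = (i * i for i in range(10_000_000))    # generator: rows produced on demand

    print(sys.getsizeof(eager))   # tens of MB for the list alone, before the ints it holds
    print(sys.getsizeof(lazy))    # roughly a couple hundred bytes, regardless of length

    # Downstream code can iterate either one the same way:
    print(sum(x for x in lazy if x % 2 == 0))

IQueryable has the extra twist that the filter can be translated into SQL and run on the database, so the unwanted rows never leave the server at all, but the failure mode of eagerly materialising everything is the same shape.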


Ah, this explains why I'm not familiar with misusing IQueryables. My domain is predominantly in memory.

Thank you for the explanation.


I have a theory that autocomplete actually increases the API surface area. One of the reasons Java has so many classes and such a huge sprawl is that Java got good tooling pretty early compared to other languages.


It's also about less context switching. If autocomplete gives me a full method name that I only kind of remember, it saves me a trip to the browser and back, which saves a lot of time and avoids a whole family of errors once those trips add up.


But imho it does a disservice to new developers, because they end up relying on the exploratory aspect alone. At least that's how I remember it from doing C# in Visual Studio many years ago: you have a general idea of what you want, you type the object and a dot, and you scroll the auto-complete list to find the method you think you need. And even if you can make it work after fiddling with it for ten, fifteen minutes, you can't be sure that's the best way to do it. And then you never go read the docs and learn the mental model and the patterns behind the library or framework. I believe it will only get worse with AI-generated completions.


This was an excellent writeup - I was a bit surprised at how much they considered "workflow" rather than agent, but I think it's good to start narrowing down the terminology.

I think these days the main value of the LLM "agent" frameworks is being able to trivially switch between model providers, though even that breaks down when you start to use more esoteric features that may not be implemented in cleanly overlapping ways

