Hacker News | Aachen's comments

I am constructing my own. Examples are drawn from books, group chats, podcasts, and other media that I consume in the target language (so I know those are either correct or, at worst, as wrong as native speakers are, which is good enough for me), shortened as necessary to make them appropriate for Anki

Then I gave my first 100 cards to a native speaker and somehow it's still full of subtle issues and not infrequently also actual mistakes >.<

The two dozen decks I've downloaded aren't always perfect either, but making your own doesn't guarantee it'll be better than another uploader's honest attempt at making a good deck. I do wish Ankiweb was more collaborative though, at least having a bug tracker where people can report mistakes and additions, if not full-on code forge functionality. If I'm not mistaken, all you can currently do is leave a review


Did I miss the part where the title was answered?

The article mentions some building blocks like microlearning, explains how researchers test people with, for example, fictional words and shapes to avoid that you draw on prior knowledge, states that "experts make a case for human instruction" (but not which case, or how that human instruction should be shaped or structured), and shares shards of how well the author did on the different tests. There are a lot of links, which is nice, so I can dive deeper into the things mentioned (I've read a bit about 'statistical learning' and plan to read the linked paper on microlearning, which is new to me), but I am not a step further in knowing what (combination of) method(s) is the "best way" to learn a new language. Did I overlook it or fail to put some pieces together?

Edit: that microlearning paper (10.22034/meb.2022.355659.1066) is a waste of time if you've read the submission whence it was linked and know about spaced repetition. The paper makes a case that society has become more fast-paced since Charles Babbage made the difference engine in the 1800s, and so microlearning can help us by breaking down lessons to fit into our day, lowering costs per lesson, etc., but it might also fragment the learning (and other obvious pros and cons). The most interesting part was a forgetting curve cited from another paper


the answer given in the article is "sustained exposure, interaction, feedback, and social use over many months or years"

Hmm, yeah, I guess it could be that they see this sound bite (quoting one scientist's opinion, not a study's results) as having answered the question. It's not reported to be the best way though, just what "Achieving fluency in the real world requires" (any at all?)

The article speaks of so much research but, if this is indeed the 'answer', uses none of it in answering the question. They could have just put that sound bite up top and saved themselves and us the further trouble...

Also, what's feedback even supposed to mean? Like Anki? Like having a speaker correct your grammar from the get-go as you speak, or do you speak a bunch first and learn by stumbling, with them correcting only major mistakes at first? There are a million ways to fill meaning into these words. Sustained exposure is obvious, but none of the other words are any guide


Thank you!

Non-car person here. Why does that matter? It's not like rain means you didn't have to go to the wash; it rains often enough here that there wouldn't be any car wash places left near me, but there are plenty

> Why does that matter? It's not like rain means you didn't have to go to the wash

The car gets dirty again when it rains and when it dries again. I guess dust, salt, pollution and more is what gets mixed in and deposited on the chassis as it rains, falls from roofs and splashes, but I can't say I've investigated deeply enough. Not the end of the world, just annoying that it keeps happening.


Many people avoid washing cars just before rain to avoid spots, etc. Phoenix is an extreme example: it rarely rains there, and when it does, it leaves everything filthy afterwards.

It refers to how many days the software has been available for, with zero implying it is not yet out, so you couldn't have installed a new version, and that's what makes it a risky bug

The term has long been watered down to mean any vulnerability (since it was always a zero-day at some point before the patch release, I guess is those people's logic? idk). Fear inflation and shoehorning seem to happen to any type of scary/scarier/scariest attack term. Might be easiest not to put too much thought into media headlines containing 0day, hacker, crypto, AI, etc. Recently saw non-remote "RCEs" and supply chain attacks that weren't about anyone's supply chain copied happily onto HN

Edit: fwiw, I'm not the downvoter


Its original meaning was days since software release, without any security connotation attached. It came from the warez scene, where groups competed to crack software and make it available to the scene earlier and earlier: a week after general release, three days, same-day. The ultimate was 0-day software, software which was not yet available to the general public.

In a security context, it has come to mean days since a mitigation was released. Prior to disclosure or mitigation, all vulnerabilities are "0-day", which may be for weeks, months, or years.

It's not really an inflation of the term, just a shifting of context. "Days since software was released" -> "Days since a mitigation for a given vulnerability was released".


Wikipedia: A zero-day (also known as a 0-day) is a vulnerability or security hole in a computer system unknown to its developers or anyone capable of mitigating it

This seems logical, since by the etymology of zero-day it should apply to the release (= disclosure) of a vuln.


> It refers to your many days software is available for, with zero implying it is not yet out so you couldn't have installed a new version and that's what makes it a risky bug

A zero-day vulnerability or zero-day exploit refers to the vulnerability, not the vulnerable software. Hence, by common sense, the availability refers to the vulnerability info or the exploit code.


I think the implication in this specific context is that malicious people were exploiting the vuln in the wild prior to the fix being released

I hadn't opened the article yet and was just browsing comments over my cereal when I saw this and thought "ugh, another one?" and went to check for myself

I didn't get an LLM vibe at all. Looking for it specifically, the bullet point about UI improvements is a candidate; the sentence following "mediawiki" could be an autocompletion; maybe the first sentence of the download section... but they're also all plausibly just a bit 'marketing team worded', so not necessarily LLM-sounding. And even if an LLM made suggestions to some small parts like these, who cares? There aren't any slop sections that waste your time, this is just like using a thesaurus — if these parts were LLM suggestions in the first place, which I don't actually expect because then there should be more of it

This type of post lends itself very poorly to auto-writing anyway. It'll put emphasis on the wrong aspects and not come out as intended; at least in my experience, it's more work coaxing it to good results. It can be helpful not to start from a blank page, but that's about it. I rarely find a sentence among the output that's fully usable as-is


I was referring to the text of the HN post. Go read it.

Aye-aye o7 ...

I saw that one already and didn't suspect slop there either


Careful: that infosec.exchange link hijacks your back button and will just loop forever between the toot and their /explore page. For desktop OSes, pro tip: right-click the back button to select two pages back in the history

Nice initiative! Was hoping the "play" button/link would go to a playable version though

I am not a chip designer but from my limited understanding, this "somewhere" is the problem. You can have secret memory somewhere that isn't noticed by analysts, but can it remain secret if it is as big as half the cpu? A quarter? How much storage can you fit in that die space? How many AES keys do you handle per day? Per hour of browsing HN with AES TLS ciphers? (Literally all supported ciphers by HN involve AES)

We use memory-hard algorithms for password storage because memory is more expensive than compute. More specifically, it's die area that is costly, but at least the authors of Argon2 seem to equate the two. (If that's not correct, I based a stackoverflow post or two on that paper so please let me know.) It sounds to me like it's easily visible to a microscope when there's another storage area as large as the L1 cache (which can hold a few thousand keys at most... how to decide which ones to keep)
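To make the memory-hardness point concrete, here's a hypothetical sketch (not from the thread; parameter names are those of Python's `hashlib.scrypt`) showing how a memory-hard KDF pins its cost to RAM, and therefore die area, rather than just compute:

```python
import hashlib
import os

# scrypt is a memory-hard KDF: it needs roughly 128 * n * r bytes of RAM
# to compute, so an attacker building custom cracking hardware must spend
# die area on memory, not just on fast arithmetic units.
password = b"correct horse battery staple"
salt = os.urandom(16)  # per-password random salt

key = hashlib.scrypt(
    password,
    salt=salt,
    n=2**14,   # CPU/memory cost factor (power of 2); ~16 MiB here with r=8
    r=8,       # block size
    p=1,       # parallelism
    dklen=32,  # derived key length in bytes
)
print(len(key))  # 32
```

Argon2 exposes the trade-off even more directly (its memory cost is an explicit parameter, in KiB), but scrypt has the advantage of living in the standard library.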

Of course, the cpu is theoretically omnipotent within your hardware. It can read the RAM and see "ah, you're running pgp.exe, let me store this key", but then you could say the same for any key that your cpu handles (also rsa or anything not using special cpu instructions)


Good points, but they might be mitigated by knowing that the first key after boot is for HDD encryption, and if storage is limited, then keep a counter for each key and always overwrite the least frequently observed key.

Could work. How do you know what the least frequently used key is if you can't store them all, though? You'd need some heuristics. Maybe it could store the first five keys it sees after every power-on, or some other useful heuristic.
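The "counter per key, evict the least frequently observed" idea could look something like this purely hypothetical sketch (all names invented; a real implant would do this in silicon, not Python):

```python
class KeySlots:
    """Hypothetical bounded key store with least-frequently-observed eviction."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.counts = {}  # key -> number of times observed

    def observe(self, key):
        if key in self.counts:
            self.counts[key] += 1
            return
        if len(self.counts) >= self.capacity:
            # Evict the key seen the fewest times so far.
            victim = min(self.counts, key=self.counts.get)
            del self.counts[victim]
        self.counts[key] = 1


slots = KeySlots(capacity=2)
for k in ["disk", "tls1", "disk", "tls2"]:
    slots.observe(k)
print(sorted(slots.counts))  # ['disk', 'tls2'] ('tls1' was evicted)
```

The catch is exactly the one raised above: the counters themselves need storage, so the scheme only shifts the die-area problem rather than eliminating it.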

Like, I do take your point but it does seem quite involved for the chance that it'll get them something useful, and they still need to gain physical access to the intact device, and trust that it never gets out or the chipmaker's reputation is instantly trash and potentially bankrupt. And we know from Snowden documents that, at least in ~2013 (when aes extensions weren't new, afaik), they couldn't decrypt certain ciphers which is sorta conspicuous if we have these suspicions. It's a legit concern or thing to consider, but perhaps not for the average use-case

edit: nvm, it was proposed in 2008, so that it didn't show up yet in ~2013 publications is not too surprising. Might still be a general point that 'they' haven't (or hadn't) infiltrated most cpus


I think Linux/LUKS software encryption was a very big challenge, and they solved it with multiple approaches:

- 2004: Linux LUKS disk encryption [0]

- 2008: ring −3 / intel management engine [1]

- 2009: TPM [3]

- 2010: AES instruction set [2]

[0] https://en.wikipedia.org/wiki/Linux_Unified_Key_Setup

[1] https://en.wikipedia.org/wiki/Intel_Management_Engine

[2] https://en.wikipedia.org/wiki/AES_instruction_set

[3] https://en.wikipedia.org/wiki/Trusted_Platform_Module


Try a social media that is not antisocial. HN, Mastodon, Tildes, various forums... lots of communities and tech content are available in browsers or via open source apps

I'm not aware of any that ban you when not using an allowlisted OS, so maybe it was something else that caused this shadowban, or (more likely) that's just my bubble


If you're tech-literate enough to find the bootloader unlock, I find that a strange statement. Could $colleague be any more specific?
