No, not that. The endings are different, the verbs are substantially different. AFAIK the invention of printing had a generally stabilizing effect on English.
It is not that I am incapable of understanding older English; it is that the English of 1600 is dramatically closer to modern English than that of 1400. I think someone from 1600 would be able to converse at a 2026 UK farmers' market with few problems too; someone from 1400 would be far more challenged.
Not to mention that there are pockets of English speakers in Great Britain whose everyday speech isn’t very far from 17th century English. The hypothetical time traveler might be asked, “So you’re from Yorkshire then, are you?”
The invention of printing had a stabilizing effect on all languages, at least in their written form. For some languages, English especially, the pronunciation later diverged from the written form, but the spelling was not changed to follow the pronunciation.
I have read many printed books from the range 1450 to 1900, in several European languages. In all of them the language is much easier to understand than that of earlier manuscripts.
Seems to be heavily focused on orthography. In the 1700s we get the long s that resembles an f. In 1600 they swap the v's and u's. In 1400, the thorn and the yogh (the letter that looks like a 3) appear. Then more unfamiliar symbols show up further back as well.
Orthography is probably the biggest stumbling block going back to the 1500s or 1400s, but that's really because the rest of the language, though changed in vocabulary and style, is still understandable. If you think the 1200 or 1100 entries are mostly orthographical changes, then you are missing the interesting bits.
I would prefer to see a version that was skillfully translated to modern orthography so that we could appreciate shifts in vocabulary and grammar.
To me, it is nearly like trying to look at a picture book of fashion but the imagery is degraded as you go back. I'd like to see the time-traveler's version with clean digital pictures of every era...
The broad list seems to just be a hater list. It's not trying to cover cases of deception (passing off AI material as if it's something else), as it includes sites which are very open about what kind of content is on there.
Would you say the same about a block list that blocks anything else? I don't care how obvious an ad is, I don't want to see it. Same with social widgets or cookie consent banners, or newsletter sign-ups.
But I wouldn't call the person that maintains the newsletter-popup block list a "newsletter hater".
>Would you say the same about a block list that blocks anything else? I don't care how obvious an ad is, I don't want to see it. Same with social widgets or cookie consent banners, or newsletter sign-ups.
He's not complaining that widgets for his favorite social network are getting blocked; he's complaining that anything vaguely related to social networks is getting banned. Some of the sites on that list are stuff like chatgpt.com, which might be AI related, but clearly doesn't fit the criteria of "AI generated content, for the purposes of cleaning image search engines".
The purpose of the broad list is removing AI-generated content from search results, so that the user doesn't have to wade through (as much) slop to find the human-created content they're looking for.
While I applaud the honesty of sites that are open about their content being AI generated, that type of content is never what I'm looking for when I search, so if they're in my search results it's just more distraction/clutter drowning out whatever I'm actually looking for. Blocking them improves my search experience slightly, even though there is of course still lots of other unwanted results remaining.
Granted, I definitely count as an AI hater (speaking of LLMs specifically). But even if I weren't, I don't think I'd be seeking it out specifically using a search engine; why would I do that when I could just go straight to chatgpt or whatever myself? Search is usually where people go to find real human answers (which is why appending "reddit" to one's searches became so common). So I see this as a utility thing, more than a "I am blocking all this just because I hate it" thing. Although it can be both, certainly.
Only malware uses the system call numbers directly. Using the system call numbers directly is foolish, since they can change and break your app. Just import and call a function that will perform the actual SYSENTER (or WOW64 context switch).
NTDLL should be stable since it's well documented, and many functions redirect to Ntoskrnl.exe, and things like kernel level drivers call those functions. Those functions won't change without the drivers breaking.
Then there's "Win32u.dll". These correspond to API calls from User32.dll, Gdi32.dll, etc. This DLL didn't even exist during the Windows 2000-XP era. This stuff is not well documented, and I don't know if this is stable.
The one thing that really benefits from using the NT Native API over Win32 is listing files in a directory. You get to use a 64KB buffer to receive directory listing results, while Win32 just returns them one at a time. A 64KB buffer means fewer system calls.
(Reading the MFT is still faster. Yes, you need admin for that)
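The buffer-walking pattern behind this can be sketched without Windows at all: NtQueryDirectoryFile fills one large caller-supplied buffer with variable-length records chained by a NextEntryOffset field, and the caller walks the chain. A minimal Python sketch of that idea (heavily simplified; the real FILE_DIRECTORY_INFORMATION records also carry timestamps, file sizes, and UTF-16 names):

```python
import struct

# Toy stand-in for NT's FILE_DIRECTORY_INFORMATION: just a NextEntryOffset
# link (0 marks the last record) and a byte-counted, non-NUL-terminated name.
HDR = struct.Struct("<II")  # NextEntryOffset, FileNameLength

def pack_listing(names):
    """Build one buffer holding many 8-byte-aligned records, the way a
    single NtQueryDirectoryFile call fills its output buffer."""
    chunks, off = [], 0
    for name in names:
        raw = name.encode()
        size = (HDR.size + len(raw) + 7) & ~7      # 8-byte align next record
        chunks.append((off, raw, size))
        off += size
    buf = bytearray(off)
    for i, (off, raw, size) in enumerate(chunks):
        nxt = size if i + 1 < len(chunks) else 0   # 0 terminates the chain
        HDR.pack_into(buf, off, nxt, len(raw))
        buf[off + HDR.size : off + HDR.size + len(raw)] = raw
    return bytes(buf)

def walk_listing(buf):
    """Consumer side: follow NextEntryOffset links through the buffer.
    One big buffer = many directory entries per system call."""
    names, off = [], 0
    while True:
        nxt, name_len = HDR.unpack_from(buf, off)
        start = off + HDR.size
        names.append(buf[start : start + name_len].decode())
        if nxt == 0:
            return names
        off += nxt

buf = pack_listing(["a.txt", "subdir", "notes.md"])
print(walk_listing(buf))  # ['a.txt', 'subdir', 'notes.md']
```

The record layout here is illustrative, not the real NT structure; the point is that one syscall hands back a whole batch of entries, and the consumer pays only pointer arithmetic per entry.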
Oh hey this is exactly why I made node-windows-readdir-fast - especially with the way node works, this makes reading filenames and length and times around 50x faster
Windows only of course, but the concept is sound. Was also fun benchmarking to find out that parsing a binary stream was faster than creating a ton of objects through the node api (or json deserialization)
People also use the word "telecine" to refer to a way to convert the film frame rate (24 fps) into the NTSC television field rate (~60 fields per second) by using an alternating 2-field/3-field sequence (2:3 pulldown).
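The cadence is easy to sketch (a toy illustration; real 2:3 pulldown also alternates top and bottom fields within each frame, and NTSC actually runs at 59.94 fields per second):

```python
def pulldown_2_3(frames):
    """Expand film frames into fields using the alternating 2-field/3-field
    (2:3 pulldown) cadence: every 4 film frames become 10 fields, i.e.
    24 frames/s -> 60 fields/s (~30 interlaced frames/s)."""
    cadence = [2, 3, 2, 3]  # fields emitted per film frame, repeating
    fields = []
    for i, frame in enumerate(frames):
        fields.extend([frame] * cadence[i % 4])
    return fields

fields = pulldown_2_3(["A", "B", "C", "D"])
print(fields)       # ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']
print(len(fields))  # 10 fields from 4 frames
```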
How can the average 7zip user know which one it is?
Search results can be gamed by SEO, there were also cases of malware developers buying ads so links to the malware download show up above legitimate ones. Wikipedia works only for projects prominent enough to have a Wikipedia page.
What are the other mechanisms for finding out the official website of a software?
There is normally a Wikipedia page for every popular program, which normally contains the official site URL. That's how I remember where to actually get PuTTY. Wikipedia can potentially be abused for lesser-known software, but, in general, it's a good indicator of legitimacy.
So Wikipedia is now (informally) part of the supply chain, which means there is another set of people who will try to hijack Wikipedia. As if we didn't have enough already. Just great.
You can corroborate multiple trusted sources, especially those with histories. You can check the edit history of the Wikipedia article. Also, if you search "7zip" on HN, the second result with loads of votes and comments is 7-zip.org. Another is searching the Archlinux package repos; you can check the git history of the package build files to see where it's gotten the source from.
And we're really going to do all that brouhaha for a single download of an alternative compressor? And then multiply that work, as a best practice, for every single interaction on the Internet? No, we're not.
The download pages for some programs are often on some subdomain with like 2 lines of text and 10 download links for binaries, even for official programs. It's so hard to know whether they are legit or not.
My point was more along the lines of "there's no need to complain about Wikipedia being hijackable, there are other options", and now you're complaining about having too many options...
You don't need to do everything or anything. They're options. Use your own judgment.
Not exactly news; Wikipedia's been used for misinformation quite extensively, from what I recall. You can't always be 100% sure with any online source of information, but at least you know there is an extensive community that'll notice sooner rather than later if something's fishy.
I feel I need to clarify my earlier comment. I was asking how a user can tell, in general, what the legitimate website of a piece of software is, not just how to know that 7zip.com is malicious.
Are the search removals and phishing warnings reactive or proactive? Because if it is the former then we don't really know how many users are already affected before security researchers got notified and took action.
Also, 7zip is not the only software to be affected by similar domain squatting "attacks." If you search for PuTTY, the unofficial putty.org website will be very high on the list (top place when I googled "download putty.") While it is not serving malware, yet, the fact that the more legitimate sounding domain is not controlled by the original author does leave the door open for future attacks.
How would you ensure that the "average user" actually gets to the page he expects to get to?
There are risks in everything you do. If the average user doesn't know where the application he wants to download _actually_ comes from then maybe the average user shouldn't use the internet at all?
> How would you ensure that the "average user" actually gets to the page he expects to get to?
I think you practically can't and that's the problem.
TLS doesn't help with figuring out which page is the real one, EV certs never really caught on, and most financial incentives make such mechanisms unviable. The same goes for additional sources of information like Wikipedia, since that just shifts the burden of combating misinformation onto the editors there, and not every project matters enough to have a page. You could use an OS with a package manager, but not all software is packaged like that, and packaging doesn't immediately make it immune to takeovers or bad actors.
An unreasonable take would be:
> A set of government run repositories and mirrors under a new TLD which is not allowed for anything other than hosting software packages, similar to how .gov ones already work - be it through package manager repositories or websites. Only source code can be submitted by developers, who also need their ID verified and need to sign every release; it then gets reviewed by the employees and is only published after automated checks as well. Anyone who tries funny business goes to jail. The unfortunate side effect is that you now live in a dystopia and go to jail anyways.
A more reasonable take would be that it's not something you can solve easily.
> If the average user doesn't know where the application he wants to download _actually_ comes from then maybe the average user shouldn't use the internet at all?
People die in car crashes. We can't eliminate those altogether, but at least we can take steps towards making things better, instead of telling them that maybe they should just not drive. Tough problems regardless.
> People die in car crashes. We can't eliminate those altogether, but at least we can take steps towards making things better, instead of telling them that maybe they should just not drive. Tough problems regardless.
I agree with the sentiment but there are limits to what we can and should do. To stay with your analogy: We don't let people drive around without taking a test. In that test they have to prove that they know the basics of how to drive a car. At least where I come from that means learning quite a bit of rules and regulations.
In other words: Don't let people off the hook. They need to do some form of learning by themselves. It's no different with what you do on the internet. If you're not willing to do some kind of work to familiarize yourself with how the bloody thing works, then it's not the job of everyone else to make sure you'll be okay. It's _your_ job to understand the basics.
I'm getting tired of just another thing we must take off peoples minds so that they can "just" use whatever they want to use. Don't try to blame (or god forbid sue) someone else because you didn't do your homework.
I feel like this line of thinking is dangerous: people hit the wall hard when they don't have sex ed, or financial education classes, or even basic classes on how to cook or do crafts (we had those in school; girls mostly cooked and the guys got to learn woodworking, but they also swapped sometimes, and later in university there were classes about work safety in general), or computer literacy classes.
I think a lot of people don’t even have basic mental models of how OSes or the Internet works, what a web browser is (“the Google”) and so on.
Saying that they should know that stuff won’t change the fact that they don’t unless you teach them as a part of their overall education.
The sheer amount of what you _might_ need later in life has proven to be simply too much for the time we usually spend on "overall education". I'm completely with you in that we should offer help along the way. But help can only bring you so far, and you have to accept that.
In the end that's fine. I have no idea how my car works, and if the guy from the repair shop says that I need to pay for a new clutch, then that's what I'm gonna do. I am aware that I don't have the knowledge to know whether or not I'm being scammed. But I _accept_ that, because the alternative (getting to know a lot more details about cars) simply doesn't appeal to me.
If someone wants to use the same approach for everything he does on the internet then that's perfectly fine. But then he needs to accept the consequences as well.
Open source software will have a code repo with active development happening on it. That repo will usually link to the official web page and download locations.
Does anyone know the history of auto-tiling in games? I know that Dragon Quest II (1987) had this feature for water tiles on the overworld, before it got backported to the North American version of Dragon Warrior 1.
It would be fun to know what the oldest autotiled game is. Dig Dug from 1982 had them, I think. Digger from 1980 might have used them.
There has to be some ancient ASCII game that uses them. I'm sure they go back further than 1980.
Edit: now that I look at Digger & Dig Dug, I'm not sure either of those used autotiles. But I do think you'll find some games that used them in the very early 80s.
used an autotile system (albeit a very, very different sort) that made 3D-looking tiles appear in the correct place. Is that a proper autotiler? I don't know, but the principle is pretty much the same.
Anyway, I'll bet there are still old games out there with auto-tiles.