
I am curious to know if that 8.6x speedup is consistent.

I don't see many "fair" benchmarks about this, but I guess it is probably difficult to properly benchmark module compilation, as the results can depend heavily on the case.

If modules can reach that sort of speedup consistently, it's obviously great news.


with win11, it's an opportunity to take the desktop market

it's obvious that tiktok is doing this intentionally, pretending it's a technical issue, so that people can blame the US government for forcing the sale of tiktok

it's just retaliation

and obviously, trump will play into this


Or the right wing ideologues who now allegedly control (components of) TikTok are as dumb and ideological as they appear.

Note: They also are having "technical difficulties" transmitting DMs with the string "epstein" in them.


If you care to fight them directly, upload them using "Ellistein" or "Epsteen". But really you should delete the app, and find an alternative. Vote with your attention/wallet.

Yeah, agreed. I'm not a TikTok user, but it's good advice in general.

I am curious to see how much the editor's executable size has been increasing with each version

Comparing the win64 build for each version. For 12 years of growing scope, that seems pretty good to me:

  1.0 – 9.4  MB (2014)
  2.0 – 12.3 MB (2016)
  3.0 – 20.2 MB (2018)
  4.0 – 51.6 MB (2022)
  4.6 – 79.4 MB (2026)

As something to compare to, I picked a random repository from what GitHub Explore showed, clicked on the first that looked like a desktop application (https://github.com/siyuan-note/siyuan/releases/tag/v3.5.4), and their Windows binary is currently 166MB for a "privacy-first, self-hosted, fully open source personal knowledge management software".

I'd claim 80MB for an entire game engine + editor for said engine is very good.


Worth noting: this is without the export templates, which are ~800MB extra (~200MB per platform, though it seems you can only download them all at once nowadays).

Engines like Unity and UE include those in the primary download already.


That is actually pretty amazing for a game engine. I'm not a game dev and I've only ever made some tiny games in Unity back in college but this makes me want to install Godot and try making games again.

The source of the problem is respect for the rule of law and due process

Data collection is not the source of the problem because people give their data willingly

Do you think data collection is a problem in China, or do you think the government and rule of law is the problem?

Companies collecting data is not the true problem. Even when data collection is illegal, a corrupt government that doesn't respect the rule of law doesn't need data collection.


yeah, this is exactly it. all the arguments kind of boil down to

"well how about if the government does illegal or evil stuff?"

it's very similar to arguments about the second amendment. But laws and rules shouldn't be structured around expecting a future moment where the government isn't serving the people. At that moment the rules already don't matter


You just described the Bill of Rights. Constitutions should be structured around that.

The Rights are not intended as preemptive. You don't have a right to free speech b/c otherwise maybe the government regulation of speech will get out of hand. You have it because it's espoused as a fundamental right. Same with separation of church and state. It's like "Well, maybe a future evil government will regulate the church poorly, so let's ban it completely". It's just seen as an area the government shouldn't delve into at all.

Collecting information about people doesn't really fit the same mold. It's not sensible to remove that function entirely. It's not a right. And it's not sensible to structure things with the expectation the future government will be evil


No. Let me introduce you to the fourth amendment.

The rights weren't invented out of thin air but to address real issues that happened earlier. Yes, every government has been evil. Power corrupts. That's why constitutions exist: to address that problem.


> And it's not sensible to structure things with the expectation the future government will be evil

Jewish Danes would like to have a word with you about that


Are we supposed to structure our society so we're safer in the case that the Chinese invade and use all our institutions against us? There is a risk-benefit tradeoff to make. Crippling society and institutions in preparation for a worst-case-scenario future hypothetical is not sensible. To get things done, you operate from the standpoint that the democratic government is responsive to the desires of the people. The adversarial perspective is self-sabotaging

I would have liked there to have been more limits before DOGE got their hands on the voting rolls.

I think the real problem is that the government is not structured in an accountable way and things like DOGE can happen. These things basically don't happen in other democracies. The Japanese don't all have assault rifles in their basement b/c they're waiting for the day the Diet is going to harvest their data to oppress them

It's harder to do social/human science because it's just easier to make mistakes that lead to bias. Such mistakes are harder to make in maths, physics, biology, medicine, astronomy, etc.

I often say that "hard sciences" have often progressed much more than social/human sciences.


Funny you say that, as medicine is one of the epicenters of the replication crisis[1].

[1] https://en.wikipedia.org/wiki/Replication_crisis#In_medicine


you get a replication crisis on the bleeding edge between replication being possible and impossible. There’s never going to be a replication crisis in linear algebra, there’s never going to be a replication crisis in theology, there definitely was a replication crisis in psych and a replication crisis in nutrition science is distinctly plausible and would be extremely good news for the field as it moves through the edge.

Leslie Lamport came up with a structured method to find errors in proofs. Testing it on a batch, he found most of them had errors. Peter Gutmann's paper on formal verification likewise showed many "proven" or "verified" works had errors that were spotted quickly upon informal review or testing. We've also seen important theories in math and physics change over time with new information.

With the above, I think we've empirically proven that we can't trust mathematicians more than any other humans. We should still rigorously verify their work with diverse, logical, and empirical methods. Also, build from the ground up on solid ideas that are highly vetted. (Which linear algebra actually does.)

The other approach people are taking is foundational, machine-checked proof assistants. These use a vetted logic, and the assistant produces a series of steps that can be checked by a tiny, highly-verified checker. They'll also often use a reliable formalism to check other formalisms. The people doing this have been building everything from proof checkers to compilers to assembly languages to code extraction inside those tools, so the results are highly trustworthy.
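
For a concrete feel of what those look like, here is a tiny example in Lean 4 (one assistant in this family; the theorem and the primed name are purely illustrative): the tactics produce the proof steps, and the small trusted kernel re-checks every one of them.

  -- Lean 4: proving 0 + n = n by induction on n. If any
  -- step were wrong, the kernel would reject the proof.
  theorem zero_add' (n : Nat) : 0 + n = n := by
    induction n with
    | zero => rfl
    | succ k ih => rw [Nat.add_succ, ih]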

But we still need people to look at the specs of all that to see if there are spec errors. There are fewer people who can vet the specs than can check the original English-and-code combos. So, are they more trustworthy? (Who knows, except when tested empirically on many programs or proofs, like CompCert was.)


A friend of mine was given an assignment in a masters-level CS class, which was to prove a lemma in some seminal paper (It was one of those "Proof follows a similar form to Lemma X" points).

This had been assigned many times previously. When my friend disproved the lemma, he asked the professor what he had done wrong. Turns out the lemma was in fact false, despite dozens of grad students having turned in "proofs" of the lemma already. The paper itself still stood, as a weaker form of the lemma was sufficient for its findings, but still very interesting.


I agree. Most of the time people think STEM is harder, but it is not. Yes, it is harder to understand some concepts, but in the social sciences we don't even know what the correct concepts are. There hasn't been as much progress in the social sciences over the last few centuries as there has been in STEM.

I'm not sure if you're correct. In fact there has been a revolution in some areas of social science in the last two decades due to the availability of online behavioural data.

Yeah, there is also the work of primatologists which challenges some of our beliefs of what we think is human sciences (like politics). See Frans De Waal.

Yet I believe there hasn't been as much progress as compared with STEM. But it is just a belief at the end of the day. There might be some study about this out there.


Unemployment or dislike for authority also forced me to go into this

It's more like an occupation


I don't understand, after all this time, why ipv4 still dominates

Make a social network that is centered around people who live in a 1 kilometer radius

Make them interact and do things, generally they will be less toxic because it will reduce their online disinhibition effect.

Make them have meals, meet, walk at the park, whatever.


I have considered a "physical social network". Standing on my usual street corner and holding a sign that directs strangers to join me and whoever else shows up, for a casual chat at the local coffee place at a specific time, with a few topics for conversation listed on the sign up front. If anyone has ideas for those topics, let me know, I'm likely to do it this Sunday.


you laugh, but bringing people back to reality might require using screens to do it


Actually I am open to the idea of a (minimalistic, non-profit) app helping solve this. What kind of app, I'm not sure, but I'm open to all ideas, including technology-based ones.

I only said that because you reminded me of an idea I had, for a social experiment that tries to bring some "social media" elements into an in-person setting, to see what happens. (I do wish I could afford a camera and someone to man it, I've been told several times that I'd go viral.)


When Nextdoor first came around, I recall walking down the street to help a lady move her couch down to the ground floor. She then gave me some cookies she'd baked. Fun! The notifications it sends me these days are less enjoyable, so I send them to spam because unsubscribing doesn't seem to work reliably for me.


> Make a social network that is centered around people who live in a 1 kilometer radius…

Don't know if they still do, but Nextdoor required address verification via a postcard early on. I was pretty shocked at what some people in my area would post under their real names and locations.

(And well outside the realm of political nonsense. Someone posted a pic of their toddler's first poop in the potty.)

I think the power of shame has reduced significantly in recent years.


I think shame is still powerful, but in the context of Nextdoor we just don't see our neighbors very often anymore. In many cases they might as well be random people on the other side of the country. I live in a small town and I'm quite friendly with my neighbors, but I still see and talk to them relatively rarely.


Civility and sense of decorum have greatly diminished in the past few decades especially online.


When you have a toddler it's very surprising what becomes normal. We're potty-training our son and I sometimes get texts from my spouse with a picture of a poop in a bad spot and then just the word "help."


I mean, we did that, too. But there's a bit of a gulf between a text to the spouse and posting it for 20k people you run into regularly to see.


I stopped reading when he started using the visitor pattern


The visitor pattern is very common in programming language implementations. I've seen it in the Rust compiler, in the Java compiler, in the Go compiler, and in the Roslyn C# compiler. It's also used extensively in JetBrains' IDEs.
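
For anyone unfamiliar, here is a minimal sketch of what the pattern looks like (hypothetical Expr/Literal/Binary names for illustration, not the actual classes from any of those codebases):

  interface Expr {
    <R> R accept(Visitor<R> v);  // double-dispatch entry point
  }

  record Literal(Object value) implements Expr {
    public <R> R accept(Visitor<R> v) { return v.visitLiteral(this); }
  }

  record Binary(Expr left, String op, Expr right) implements Expr {
    public <R> R accept(Visitor<R> v) { return v.visitBinary(this); }
  }

  interface Visitor<R> {
    R visitLiteral(Literal e);
    R visitBinary(Binary e);
  }

  // Each new operation is a new Visitor implementation; the node
  // classes never change. For example, an AST printer:
  class Printer implements Visitor<String> {
    public String visitLiteral(Literal e) { return String.valueOf(e.value()); }
    public String visitBinary(Binary e) {
      return "(" + e.left().accept(this) + " "
                 + e.op() + " " + e.right().accept(this) + ")";
    }
  }

The point is that adding an operation never touches the node classes, at the cost of the accept() boilerplate.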

What do you have against this pattern? Or what is a better alternative?


Visitor is a code-heavy pattern that can be replaced by an elegant, readable switch with an exhaustiveness check, so that all operations over the "Kind" enum are covered.
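
A minimal sketch of that alternative in modern Java (21+), with the same hypothetical node names; sealing the hierarchy is what makes the switch exhaustiveness-checked:

  sealed interface Expr permits Literal, Binary {}
  record Literal(Object value) implements Expr {}
  record Binary(Expr left, String op, Expr right) implements Expr {}

  static String print(Expr e) {
    // No default case: if a new Expr variant is added, this switch
    // stops compiling until the new case is handled.
    return switch (e) {
      case Literal l -> String.valueOf(l.value());
      case Binary b  -> "(" + print(b.left()) + " " + b.op()
                            + " " + print(b.right()) + ")";
    };
  }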


This wasn't available in Java at the time. You're free to rewrite it with pattern matching (which the book, quite literally, leaves as an exercise for the reader).


A switch or pattern-matching approach is useful, but not practical in some cases. For example, there are cases where you are interested in only a single kind of node in the tree; for those, the visitor pattern is very helpful, while pattern matching is cumbersome because you have to match and check almost every node kind. That's why, for example, the Rust compiler still uses the visitor pattern for certain things and pattern matching for others.
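
For instance (reusing the hypothetical Expr/Visitor sketch from upthread), a base walker whose default behavior is just to recurse lets you override only the one case you care about:

  // Default behavior: recurse into children, do nothing else.
  class Walker implements Visitor<Void> {
    public Void visitLiteral(Literal e) { return null; }
    public Void visitBinary(Binary e) {
      e.left().accept(this);
      e.right().accept(this);
      return null;
    }
  }

  // Interested only in Binary nodes? Override a single method;
  // every other node kind is handled by the base class.
  class BinaryCounter extends Walker {
    int count = 0;
    public Void visitBinary(Binary e) {
      count++;
      return super.visitBinary(e);
    }
  }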


Roslyn has the visitor pattern combined with the 'Kind' enumeration you mentioned. You can either choose to visit a SyntaxNode of a certain type, or override the generic version and decide what you want to do based on that enumeration.


C# doesn't have exhaustive switch over enums.

It would need a "closed enum" language feature.


Exhaustive enums (or type switches) are not a requirement, and are in fact harmful - imagine if they add a new kind of syntax node to the language; now your analyzer no longer compiles unless you add a default case - which is very easy to add in C# as well.


Unless you add a default... or handle the new case, as expected.

Of course you can use this feature wrong and abuse the default case, but in general it is very good, since it prevents you from missing places that need handling and screams at you at compile time instead of at runtime


Exhaustive switch with tail-calling makes for a very fast and readable interpreter.


The bytecode interpreter in the second half of the book doesn't use the visitor pattern.


No, but his first "Tree-walk Interpreter" does - he builds an AST then uses the visitor pattern to interpret it.

https://craftinginterpreters.com/representing-code.html#work...


To quote the very first paragraph of the bytecode interpreter section[1]:

> The style of interpretation it uses—walking the AST directly—is good enough for some real-world uses, but leaves a lot to be desired for a general-purpose scripting language.

Sometimes it's useful to teach progressively, using techniques that were used more often and aren't as much anymore, rather than firehosing a low-level bytecode at people.

[1] https://craftinginterpreters.com/a-bytecode-virtual-machine....


Sure, I'm not criticizing it.

He doesn't actually build on this, though, but rather goes back to a single-pass compiler (no AST, no visitor) for his bytecode compiler.


the parser does


The parsers in crafting interpreters do not use the visitor pattern. The visitor pattern is used when you already have a tree structure or similar. The parser is what gives you such tree structure, the AST. When you have this structure, you typically use the visitor pattern to process it for semantic analysis, code generation, etc.
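
A minimal sketch of that division of labor (again with the hypothetical Expr/Literal/Binary nodes from upthread): each grammar rule becomes a method that consumes tokens and returns a node, so the parser only builds the tree and never visits it:

  import java.util.List;

  class Parser {
    private final List<String> tokens;
    private int pos = 0;

    Parser(List<String> tokens) { this.tokens = tokens; }

    // expression := primary (("+" | "-") primary)*
    Expr expression() {
      Expr left = primary();
      while (pos < tokens.size()
          && (tokens.get(pos).equals("+") || tokens.get(pos).equals("-"))) {
        String op = tokens.get(pos++);
        left = new Binary(left, op, primary());  // build, don't interpret
      }
      return left;
    }

    // Simplified: every non-operator token is a literal.
    Expr primary() {
      return new Literal(tokens.get(pos++));
    }
  }

So new Parser(List.of("1", "+", "2")).expression() produces Binary(Literal("1"), "+", Literal("2")), and a visitor (or a switch) then consumes that tree.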


I’ve only glanced at the second part but I don’t remember that being the case.


What’s bad about the visitor pattern? /gen


https://grugbrain.dev/

grug very elated find big brain developer Bob Nystrom redeem the big brain tribe and write excellent book on recursive descent: Crafting Interpreters

book available online free, but grug highly recommend all interested grugs purchase book on general principle, provide much big brain advice and grug love book very much except visitor pattern (trap!)

Grug says bad.

In all seriousness, the rough argument is that it's a "big brain" way of thinking. It sounds great on paper, but is often not the easiest machinery to manage when there are simpler options (e.g. just adding a method).


https://news.ycombinator.com/item?id=44304648

Grug doesn't elaborate much, but here's the author's take in slightly more detail.


Why?

