Hacker News | new | past | comments | ask | show | jobs | submit | BigTTYGothGF's comments

> anyone is allowed to participate in capital formation and accumulation

In the same sense that nobody is allowed to sleep under a bridge.


I don't follow. Even now there is nothing preventing anyone here from making something for millions of dollars. While VC capital is closed to a select few, a person in a garage can still make it big.

Communist countries tend to gatekeep even more, to the point that it is entirely about who you know, with little concern for what you do.


> Even now there is nothing preventing anyone here from making something for millions of dollars.

Any one person might. But the system is set up such that it's almost impossible for everyone to do well.

> Communist countries tend to gatekeep even more, to the point that it is entirely about who you know, with little concern for what you do.

And in capitalist countries, it's how much money you have. Swings and roundabouts.


> If wealth is sufficiently concentrated - the value of anything becomes tied to the whims of the few who can transact at that level.

Sounds like capitalism to me.



> contact dermatitis

Lots of food is like this, for example mangoes.


The same China that added more new solar capacity in 2024 than the US currently has in total? And that currently gets 36% of its total energy use from renewable sources, compared to the US's 23%? And has ~32GW of nuclear plants under construction compared to the US's 2.5GW?

I hope we steal their playbook.


"If N = 300, even a 256-bit seed arbitrarily precludes all but an unknown, haphazardly selected, non-random, and infinitesimally small fraction of permissible assignments. This introduces enormous bias into the assignment process and makes total nonsense of the p-value computed by a randomization test."

The first sentence is obviously true, but I'm going to need to see some evidence for "enormous bias" and "total nonsense". Let's leave aside lousy/little/badly-seeded PRNGs. Are there any non-cryptographic examples in which a well-designed PRNG with 256 bits of well-seeded random state produces results different enough from a TRNG to be visible to a user?


The argument against PRNGs this paper makes isn't that the PRNG produces results that can be distinguished from TRNG, but that the 256-bit seed deterministically chooses a single shuffling. If you need 300 bits to truly shuffle the assignment but you only have 256 bits, then that's a lot of potential assignments that can never actually happen. With this argument it doesn't matter what the PRNG is, the fact that it's deterministic is all that matters. And this invalidates the p-value because the p-value assumes that all possible assignments are equiprobable, when in fact a lot of possible assignments have a probability of zero.

I imagine you could change the p-value test to randomly sample assignments generated via the exact same process that was used to generate the assignment used by the experiment, and as you run more and more iterations of this the calculated p-value should converge to the correct value. But then the question becomes whether the p-value calculated this way is the same as the p-value you'd get if you had used equiprobable assignment to begin with.
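A randomization test of the kind described here is straightforward to sketch. This is a minimal illustration with made-up weights and an arbitrary seed, not anyone's actual study design: pool the two groups, reshuffle many times, and count how often the reshuffled group difference is at least as large as the observed one.

```python
import random

random.seed(0)

# Hypothetical measurements for two groups of five subjects (made-up numbers)
control = [9.8, 10.1, 10.4, 9.9, 10.0]
treated = [10.6, 10.9, 10.3, 11.0, 10.7]
observed = sum(treated) / 5 - sum(control) / 5

pool = control + treated

def randomization_p(iterations=20000):
    """Monte Carlo p-value: re-assign the pooled data at random and count
    how often the group difference is at least as large as observed."""
    hits = 0
    for _ in range(iterations):
        random.shuffle(pool)
        diff = sum(pool[5:]) / 5 - sum(pool[:5]) / 5
        if diff >= observed:
            hits += 1
    return hits / iterations

p = randomization_p()  # small here, since the observed split is near-extreme
```

Note that this sketch itself draws assignments from a PRNG, which is exactly the sampling-from-the-same-process idea in the paragraph above.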

Ultimately, this all comes down to the fact that it's not hard to use true randomness for the whole thing, and true randomness produces statistically valid results. If you use true randomness for assignment then you can't screw up the p-value test, so there's no reason at all to even consider how to safely use a PRNG here; all that does is open the door to messing up.


If you have 300 bits of shuffling entropy, you have a lot of potential assignments that can never happen because you won't test them before the universe runs out. No matter how you pick them.

Of course a PRNG generates the same sequence every time with the same seed, but that's true of every RNG, even a TRNG where the "seed" is your current space and time coordinates. To get more results from the distribution you have to use more seeds. You can't just run an RNG once, get some value, and then declare the RNG is biased towards the value you got. That's not a useful definition of bias.


The number of possible assignments has to be effectively close to an integer multiple of the number of shuffles.

It doesn't matter how many universes it would take to generate all of them, there are some assignments that are less likely.


Everyone agrees that most of the possible shuffles become impossible when a CSPRNG with 256 bits of state is used. The question is just whether that matters practically. The original author seems to imply it does, but I believe they're mistaken.

Perhaps it would help to think of the randomization in two stages. In the first, we select 2^256 members from the set of all possible permutations. (This happens when we select our CSPRNG algorithm.) In the second, we select a single member from the new set of 2^256. (This happens when we select our seed and run the CSPRNG.) I believe that measurable structure in either selection would imply a practical attack on the cryptographic algorithm used in the CSPRNG, which isn't known to exist for any common such algorithm.


Yeah, you're discarding almost all permutations, but in an unbiased manner. It seems to imply not only an attack, but that your experimental results rely strongly and precisely on some extremely esoteric (otherwise it would've been found already) property of the randomization algorithm. If you can only detect the effect of television on turkeys when using a PRNG whose output is appropriately likely to have a high-dimensional vector space when formatted as a binary square matrix, then I think you should probably go back to the drawing board.

The cases that are not close to a multiple are handled by the rejection of a part of the generated random numbers.

Let's say that you have a uniform random number generator, which generates with equal probability anyone of N numbers. Then you want to choose with equal probability one of M choices.

If M divides N, then you can choose 1 of M by either multiplication with taking the integer part, or by division with taking the remainder.

When M does not divide N, for unbiased choices you must reject a part of the generated numbers, either rejecting them before the arithmetic operation (equivalent to diminishing N to a multiple of M), or rejecting them after the arithmetic operation (diminishing the maximum value of the integer part of product or of the division remainder, to match M).

This is enough for handling the case when M < N.
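The rejection step for M < N can be sketched as follows. Here `secrets.randbelow(256)` stands in for whatever raw generator produces one of N = 256 equally likely values; the choice of M = 6 is just illustrative.

```python
import secrets

def unbiased_choice(m):
    """Choose uniformly from range(m) using a raw uniform 8-bit draw,
    rejecting any draw at or above the largest multiple of m below 256,
    so that every residue class has the same number of preimages."""
    n = 256
    limit = n - (n % m)            # e.g. m = 6 -> limit = 252
    while True:
        x = secrets.randbelow(n)   # stand-in for the raw generator
        if x < limit:
            return x % m

# Sanity check: the six outcomes come up roughly equally often
counts = [0] * 6
for _ in range(6000):
    counts[unbiased_choice(6)] += 1
```

Without the `limit` test, the values 252..255 would map to residues 0..3 and make those four outcomes slightly more likely than the rest.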

When M is greater than N, you can use a power of N that is greater than M (i.e. you use a tuple of numbers for making the choice), and you do the same as before.

However in this case you must trust your RNG that its output sequence is not auto-correlated.

If possible, using from the start a bigger N is preferable, but even when that is impossible, in most cases the unreachable parts of the space of random number tuples will not make any statistical difference.

To be more certain of this, you may want to repeat the experiment with several of the generators with the largest N available, taking care that they really have different structures, so that it can be expected that whichever is the inaccessible tuple space, it is not the same.


This is correct, but for the author's example of randomizing turkeys I wouldn't bother. A modern CSPRNG is fast enough that it's usually easier just to generate lots of excess randomness (so that the remainder is nonzero but tiny compared to the quotient and thus negligible) than to reject for exactly zero remainder.

For example, the turkeys could be randomized by generating 256 bits of randomness per turkey, then sorting by that and taking the first half of the list. By a counting argument this must be biased (since the number of assignments isn't usually a power of two), but the bias is negligible.
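That sort-by-random-key scheme is a few lines; this sketch uses the example's 300 turkeys and Python's `secrets` for OS-provided randomness (the variable names are mine):

```python
import secrets

turkeys = list(range(300))

# Attach 256 bits of fresh randomness to each turkey, sort by it, and split
# the sorted list in half. A tie between two keys has probability ~2^-256,
# i.e. it never happens in practice.
shuffled = sorted(turkeys, key=lambda _: secrets.randbits(256))
treatment, control = set(shuffled[:150]), set(shuffled[150:])
```

Python's `sorted` evaluates the key function once per element, so each turkey gets exactly one 256-bit draw.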

The rejection methods may be faster, and thus beneficial in something like a Monte Carlo simulation that executes many times. Rejection methods are also often the simplest way to get distributions other than uniform. The additional complexity doesn't seem worthwhile to me otherwise though, more effort and risk of a coding mistake for no meaningful gain.


And why does it matter in the context of randomly assigning participants in an experiment into groups? It is not plausible that any theoretical "gaps" in the pseudorandomness are related to the effect you are trying to measure, and unlikely that there is a "pattern" created in how the participants get assigned. You just do one assignment. You do not need to pick a true random configuration, just one random enough.

As far as p-values are concerned, I assume the issue raised could very well be measured with simulations and permutations. I really doubt, though, that the distribution of p-values from pseudorandom assignments with gaps would fail to converge very fast to the "real" distribution you would get from all permutations, due to some version of a law of large numbers. A lot of resampling/permutation techniques work by sampling a negligible fraction of all possible permutations, and the distribution of the statistics extracted converges pretty fast. As long as the way the gaps are formed is independent of the effects measured, it sounds implausible that the p-values one gets are problematic because of them.


P-values assume something weaker than "all assignments are equiprobable." If the subset of possible assignments is nice in the right ways (which any good PRNG will provide) then the resulting value will be approximately the same.

Gallant always uses TRNGs. Goofus always uses a high-quality PRNG (CSPRNG if you like) that's seeded with a TRNG. Everything else they do is identical. What are circumstances under which Goofus's conclusions would be meaningfully different than Gallant's?

Suppose I'm doing something where I need N(0,1) random variates. I sample from U(0,1) being sure to use a TRNG, do my transformations, and everything's good, right? But my sample isn't U(0,1), I'm only able to get float64s (or float32s), and my transform isn't N(0,1) as there's going to be some value x above which P(z>x)=0. The theory behind what I'm trying to do assumes N(0,1) and so all my p-values are invalid.

Nobody cares about that because we know that our methods are robust to this kind of discretization. Similarly I think nobody (most people) should care (too much) about having "only" 256 bits of entropy in their PRNG because our methods appear to be robust to that.
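The float64 discretization point can be made concrete: the largest double strictly below 1.0 maps, under the inverse normal CDF, to a z of only about 8, so a float64 pipeline simply cannot emit more extreme variates. A quick check (exact value of `z_max` depends on the `inv_cdf` implementation, but it lands near 8):

```python
import math
from statistics import NormalDist

u_max = math.nextafter(1.0, 0.0)     # largest double strictly below 1.0
z_max = NormalDist().inv_cdf(u_max)  # roughly 8: P(z > z_max) is forced to 0
```

So there really is an x with P(z > x) = 0, as claimed, yet nobody treats that as invalidating their p-values.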


> then you can't screw up the p-value test

Bakker and Wicherts (2011) would like to disagree! Apparently 15% screw up the calculation of the p-value.


So here's how I would think about it intuitively:

We can create a balanced partitioning of the 300 turkeys with a 300 bit random number having an equal number of 1's and 0's.

Now suppose I randomly pick 300 bit number, still with equal 0's and 1's, but this time the first 20 bits are always 0's and the last 20 bits are always 1's. In this scenario, only the middle 260 bits (turkeys) are randomly assigned, and the remaining 40 are deterministic.

We can quibble over what constitutes an "enormous" bias, but the scenario above feels like an inadequate experiment design to me.

As it happens, log2(260 choose 130) ~= 256.

> Are there any non-cryptographic examples in which a well-designed PRNG with 256 bits of well-seeded random state produces results different enough from a TRNG to be visible to a user?

One example that comes to mind is shuffling a deck of playing cards. You need approximately 226 bits of entropy to ensure that every possible 52-card ordering can be represented. Suppose you wanted to simulate a game of blackjack with more than one deck, or some other card game with 58 or more cards. 256 bits is not enough there.
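Both counts in this subthread are easy to verify, since Python's `math.comb` and `math.factorial` are exact over big integers:

```python
import math

# Bits needed to index every outcome of each process
partition_bits = math.log2(math.comb(260, 130))  # ~255.7: the "260 turkeys" estimate
deck52_bits = math.log2(math.factorial(52))      # ~225.6: one 52-card deck fits in 256 bits
deck58_bits = math.log2(math.factorial(58))      # ~260.3: 58 cards already exceeds 256
```

So a 256-bit seed covers every ordering of up to 57 cards, and falls short at 58.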


It's an interesting observation and that's a nice example you provided but does it actually matter? Just because certain sequences can't occur doesn't necessarily mean the bias has any practical impact. It's bias in the theoretical sense but not, I would argue, in the practical sense that is actually relevant. At least it seems to me at first glance, but I would be interested to learn more if anyone thinks otherwise.

For example. Suppose I have 2^128 unique playing cards. I randomly select 2^64 of them and place them in a deck. Someone proceeds to draw 2^8 cards from that deck, replacing and reshuffling between each draw. Does it really matter that those draws weren't technically independent with respect to the larger set? In a sense they are independent so long as you view what happened as a single instance of a procedure that has multiple phases as opposed to multiple independent instances. And in practice with a state space so much larger than the sample set the theoretical aspect simply doesn't matter one way or the other.

We can take this even farther. Don't replace and reshuffle after each card is drawn. Since we are only drawing 2^8 of 2^64 total cards this lack of independence won't actually matter in practice. You would need to replicate the experiment a truly absurd number of times in order to notice the issue.


If it had a practical impact, then it would imply that such statistical tests could be used as a distinguisher to attack the RNG. They fail as distinguishers, even with absolutely enormous amounts of data, so the bias is too small to have any influence in any practical experiment. You'd expect to need to observe 2^128 states to detect bias in a 256-bit CSPRNG, which means you'll have to store 2^128 observed states. That's around 10^20 EiB of storage needed. Good luck affording that with drive prices these days!

At a certain point a bias in the prng just has to become significant? This will be a function of the experiment. So I don’t think it’s possible to talk about a general lack of “practical impact” without specifying a particular experiment. Thinking abstractly - where an “experiment” is a deterministic function that takes the output of a prng and returns a result - an experiment that can be represented by a constant function will be immune to bias, while one which returns the nth bit of the random number will be susceptible to bias.

> At a certain point a bias in the prng just has to become significant?

Sure, at a point. I'm not disputing that. I'm asking for a concrete bound. When the state space is >= 2^64 (you're extremely unlikely to inadvertently stumble into a modern PRNG with a seed smaller than that) how large does the sample set need to be and how many experiment replications are required to reach that point?

Essentially what I'm asking is, how many independent sets of N numbers must I draw from a biased deck, where the bias takes the form of a uniformly random subset of the whole, before the bias is detectable to some threshold? I think that when N is "human" sized and the deck is 2^64 or larger that the number of required replications will be unrealistically large.


> 256 bits is not enough there

Yeah, but the question is: who cares?

Suppose you and I are both simulating card shuffling. We have the exact same setup, and use a 256-bit well-behaved PRNG for randomness. We both re-seed every game from a TRNG. The difference is that you use all 256 bits for your seed, while I use just 128 and zero-pad the rest. The set of all shuffles that can be generated by your method is obviously much larger than the set that can be generated by mine.

But again: who cares? What observable effect could there possibly be for anybody to take action if they know they're in a 128-bit world vs a 256-bit one?

The analogy obviously doesn't generalize downwards; I'd be singing a different tune if it were, say, 32 bits instead of 128.


By the definition of a cryptographically secure PRNG, no. They, with overwhelming probability, produce results indistinguishable from truly random numbers no matter what procedure you use to tell them apart.

I think your intuition comes from the assumption that the experimental subjects are already coming to you in a random order. If that's the case, then you might as well assign the first half to control and the second half to treatment. To see the problem with poor randomization, you have to think about situations where there is (often unknown) bias or correlations in the order of the list that you're drawing from to randomize. Say you have an ordered list of 10 numbers, assigned 5 and 5 to control and (null) treatment groups. There are 252 assignments, which in theory should be equally likely. Assuming they all give different values of your statistic, you'll have 12 assignments with p <= .0476. If, say, you do the assignment from ~~a 256~~ an 8 bit random number such that 4 of the possible assignments are twice as likely as the others under your randomization procedure, the probability of getting one of those 12 assignments something between .0469 and .0625, depending whether the more-likely assignments happen to be among the 12 most extreme statistics, which is a difference of about 1/3 and could easily be the difference between "p>.05" and "p<.05". Again, if you start with your numbers in a random order, then this doesn't matter -- the biased assignment procedure will still give you a random assignment, because each initial number will be equally likely to be among the over-sampled or under-sampled ones.

Also worth noting that the situations where this matters are usually where your effect size is fairly small compared to the unexplained variation, so a few percent error in your p-value can make a difference.
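The 10-subject example above is small enough to enumerate exactly. This sketch uses arbitrary null data (no real effect) and deliberately places the four doubled assignments at the two extremes to exhibit the .0469-to-.0625 range:

```python
from itertools import combinations
import random

random.seed(42)
data = [random.gauss(0.0, 1.0) for _ in range(10)]  # null: no treatment effect

groups = list(combinations(range(10), 5))           # all 252 balanced splits

def stat(g):
    grp = sum(data[i] for i in g) / 5
    rest = (sum(data) - sum(data[i] for i in g)) / 5
    return abs(grp - rest)

ranked = sorted(groups, key=stat, reverse=True)
top12 = ranked[:12]   # rejection region: p = 12/252 ~= .0476 under uniform assignment

def p_reject(heavy):
    """Rejection probability when the 4 assignments in `heavy` are twice as
    likely as the rest (total weight 252 + 4 = 256)."""
    heavy = set(heavy)
    return sum(2 if g in heavy else 1 for g in top12) / 256

worst = p_reject(ranked[:4])   # doubled assignments all extreme: 16/256 = .0625
best = p_reject(ranked[-4:])   # doubled assignments all unremarkable: 12/256 ~= .0469
```

With a 256-element seed space and 252 possible assignments, a factor-of-two distortion like this is plausible; with a 2^256 seed space and 252 assignments it is not.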


> If, say, you do the assignment from a 256 bit random number such that 4 of the possible assignments are twice as likely as the others under your randomization procedure

Your numbers don't make sense. Your number of assignments is way fewer than 2^256, so the problem the author is (mistakenly) concerned about doesn't arise--no sane method would result in any measurable deviation from equiprobable, certainly not "twice as likely".

With a larger number of turkeys and thus assignments, the author is correct that some assignments must be impossible by a counting argument. They are incorrect that it matters--as long as the process of winnowing our set to 2^256 candidates isn't measurably biased (i.e., correlated with turkey weight ex television effects), it changes nothing. There is no difference between discarding a possible assignment because the CSPRNG algorithm choice excludes it (as we do for all but 2^256) and discarding it because the seed excludes it (as we do for all but one), as long as both processes are unbiased.


typo -- meant to say 8 bit random number i.e. having 256 possibilities, convenient just because the number of assignments was close to a power of 2. If instead you use a 248-sided die and have equal probabilities for all but 4 of the assignments, the result is similar but in the other direction. Of course there are many other more subtle ways that your distribution over assignments could go wrong, I was just picking one that was easy to analyze.

Ah, then I see where you got 4 assignments and 2x probability. Then I think that is the problem the author was worried about and that it would be a real concern with those numbers, but that the much smaller number of possibilities in your example causes incorrect intuition for the 2^256-possibility case.

I think the intuition that everything will be fine in the 256 bit vs 300 bit case depends on the intuition that the assignments that you're missing will be (~close to) randomly distributed, but it's far from clear to me that you can depend on that to be true in general without carefully analyzing your procedure and how it interacts with the PRNG.

If you can find a case where this matters, then you've found a practical way to distinguish a CSPRNG seeded with true randomness from a stream of all true randomness. The cryptographers would consider that a weakness in the CSPRNG algorithm, which for the usual choices would be headline news. I don't think it's possible to prove that no such structure exists, but the world's top (unclassified) cryptographers have tried and failed to find it.

And worth noting that the "even when properly seeded with 256 bits of entropy" example in the article was intended as an extreme case, i.e. that many researchers in fact use seeds that are much less random than that.

MCMC can be difficult for this reason. There are concepts like "k-dimensional equidistribution" etc. etc... where in some ways the requirements on a PRNG are far, far higher than for a cryptographically sound PRNG, but also in many ways less so, because you don't care about adversarial issues, and would prefer speed.

If you can't generate all possible assignments, you care about second and third order properties etc. of the sequence.


Does there exist a single MCMC example that performs poorer when fed by a CSPRNG (with any fixed seed, including all zeroes; no state reuse within the simulation) as opposed to any other RNG source?

If there did, it'd be a distinguisher attack on that CSPRNG. So for a non-broken CSPRNG, the answer is "no", by the definition of "non-broken CSPRNG".

> There are concepts like "k-dimensional equidistribution" etc. etc... where in some ways the requirements of a PRNG are far, far, higher than a cryptographically sound PRNG

Huh? If you can chew through however many gigabytes of the supposed CSPRNG’s output, do some statistics, and with a non-negligible probability tell if the bytes in fact came from the CSPRNG in question or an actual iid random source, then you’ve got a distinguisher and the CSPRNG is broken.


It all comes down to actual specific statistical tests, and how hard they are to break in specific applications.

No CSPRNG is absolutely perfect; no CSPRNG has ever absolutely passed every statistical test thrown at it.

MCMC stresses very different statistical properties than the typical CSPRNG tests do.

Every PRNG is absolutely broken if you want to be absolute about it. MCMC and crypto applications push on different aspects where statistical issues will cause application level failures.

See e.g. this paper https://www.cs.hmc.edu/tr/hmc-cs-2014-0905.pdf

(it's not the end all be all, but it's a good survey of why this stuff matters and why it's different)


> no CSPRNG has ever absolutely passed every statistical test thrown at it

As far as I know (admittedly not a high standard), there is no published statistical test that you could run on, for example, a single AES-256-CTR bitstream set up with a random key and IV, running on a single computer, that would be able to tell you with a meaningful likelihood ratio that you were looking at a pseudorandom rather than truly random input before the computer in question broke down. (I’m assuming related-key attacks are out of scope if we’re talking about an RNG for simulation purposes.)


Cryptographic operations when done correctly result in full chaos within the discrete domain (the so-called avalanche effect). Any bias of any kind gives rise to a distinguisher and the primitive is regarded as broken.

One way to imagine what symmetric cryptography does is a cellular automaton that is completely shuffled every iteration. In the case of Keccak/SHA3, that is almost exactly what happens too.
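The avalanche effect is easy to observe directly. Here SHA-256 is used as the primitive and the message is an arbitrary example; flipping a single input bit changes close to half of the 256 output bits:

```python
import hashlib

def bit_flips(a: bytes, b: bytes) -> int:
    """Count differing bits between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

msg = bytearray(b"turkey assignment, batch 0001")  # any input works
h0 = hashlib.sha256(bytes(msg)).digest()
msg[0] ^= 0x01                       # flip a single input bit
h1 = hashlib.sha256(bytes(msg)).digest()
flips = bit_flips(h0, h1)            # near 128 of 256 output bits change
```

A persistent, input-correlated deviation from ~128 flips would itself be a distinguisher, which is the point being made above.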


There has never been a perfect CSPRNG.

There have been a very large number of CSPRNGs for which there does not exist any known practical method to distinguish them from TRNGs.

For all of them there are theoretical methods that can distinguish a sequence generated by them from a random sequence, but all such methods require an impossible amount of work.

For instance, a known distinguisher for the Keccak function that is used inside SHA-3 requires an amount of work over 2^1500 (which was notable because it was an improvement over a naive method that would have required an amount of work of 2^1600).

This is such a ridiculously large number in comparison with the size and age of the known Universe that it is really certain nobody will ever run such a test and find a positive result.

There are a lot of other such CSPRNGs for which the best known distinguishers require work of over 2^100, or 2^200, or even 2^500, and for those it is also pretty certain that no practical tests will find statistical defects.

There are a lot of CSPRNGs that could not be distinguished from TRNGs even by using hypothetical quantum computers.

Even many of the pretty bad cryptographic PRNGs, which today are considered broken according to their original definitions, can be made impossible to distinguish from TRNGs by just increasing the number of iterations in their mixing functions. This is not done because later more efficient mixing functions have been designed, which achieve better mixing with less work.


> There has never been a perfect CSPRNG.

What is a perfect CSPRNG?


"before the computer in question broke down."

A good MCMC simulation might test that! E.g. say, training a large diffusion model. It takes way more computing power than the average time for a single computer to fail.

Also, the standards of those tests vs. does it bias the statistical model fitted with MCMC are different.


I am aware of tests vs. ChaCha20 here https://www.pcg-random.org/index.html, I am not aware of tests vs. AES-256-CTR.

However at some point, 100x faster performance w/o an exploitable attack vector is also relevant! (though sometimes people find ways).

CSPRNGs are mostly worried about very specific attack vectors, and sure, they're likely to be completely unpredictable. But other applications care more about other failure modes, like lack of k-dimensional equiprobability, and those hurt them far more.

The idea that CSPRNGs are the be-all and end-all of RNGs holds CS back.


I am familiar with that site and the PCG PRNGs are based on a sound principle, so they are good for many applications.

However I have never seen a place where the author says anything about finding a statistical defect in ChaCha. She only, and correctly, says that ChaCha is significantly slower than PRNGs like those of the PCG kind, and that it shares the property of any PRNG with a fixed state size: limited high-dimensional equidistribution. That is equally true of any concrete instantiation of the PRNGs recommended by the author. The only difference is that with a simply-defined PRNG you can build the same structure with a bigger state, as big as you want; but once you have chosen a size, you again have a limit. And when the PCG PRNGs recommended there are given state sizes larger than those of cryptographic PRNGs, they become slower than those cryptographic PRNGs, due to slow large-integer multiplications.

In the past, I have seen some claims of statistical tests distinguishing cryptographic PRNGs that were false, due to incorrect methodology. E.g. I have seen a ridiculous paper claiming that an AI method is able to recognize that an AES PRNG is non-random. However, reading the paper has shown that they did not find anything that could distinguish a number sequence produced by AES from a true random sequence. Instead, they could distinguish the AES sequence from numbers read from /dev/random on an unspecified computer, using an unspecified operating system. Therefore, if there were statistical biases, those were likely in whichever was their /dev/random implementation (as many such implementations are bad, and even a good implementation may appear to have statistical abnormalities, depending on the activity done on the computer), not in the AES sequence.


Are they claiming that ChaCha20 deviates measurably from equally distributed in k dimensions in tests, or just that it hasn't been proven to be equally distributed? I can't find any reference for the former, and I'd find that surprising. The latter is not surprising or meaningful, since the same structure that makes cryptanalysis difficult also makes that hard to prove or disprove.

For emphasis, an empirically measurable deviation from k-equidistribution would be a cryptographic weakness (since it means that knowing some members of the k-tuple helps you guess the others). So that would be a strong claim requiring specific support.


By O’Neill’s definition (§2.5.3 in the report) it’s definitely not equidistributed in higher dimensions (does not eventually go through every possible k-tuple for large but still reasonable k) simply because its state is too small for that. Yet this seems completely irrelevant, because you’d need utterly impossible amounts of compute to actually reject the hypothesis that a black-box bitstream generator (that is actually ChaCha20) has this property. (Or I assume you would, as such a test would be an immediate high-profile cryptography paper.)

Contrary to GP’s statement, I can’t find any claims of an actual test anywhere in the PCG materials, just “k-dimensional equidistribution: no” which I’m guessing means what I’ve just said. This is, at worst, correct but a bit terse and very slightly misleading on O’Neill’s part; how GP could derive any practical consequences from it, however, I haven’t been able to understand.


As you note, a 256-bit CSPRNG is trivially not equidistributed for a tuple of k n-bit integers when k*n > 256. For a block cipher I think it trivially is equidistributed in some cases, like AES-CTR when k*n is an integer submultiple of 256 (since the counter enumerates all the states and AES is a bijection). Maybe more cases could be proven if someone cared, but I don't think anyone does.

Computational feasibility is what matters. That's roughly what I meant by "measurable", though it's better to say it explicitly as you did. I'm also unaware of any computationally feasible way to distinguish a CSPRNG seeded once with true randomness from a stream of all true randomness, and I think that if one existed then the PRNG would no longer be considered CS.


You care when you're trying to generate random vectors which may be of a different size, and if you are biasing your sample.

Is it enough to truly matter? Maybe not, but does it also matter if 80 bit SHA1 only has 61 bits?


Nobody cares even then, because any bias due to theoretical deviation from k-equidistribution is negligible compared to the desired random variance, even if we average trials until the Sun burns out. By analogy, if we're generating an integer between 1 and 3 with an 8-bit PRNG without rejection, then we should worry about bias because 2^8 isn't a multiple of 3; but if we're using a 256-bit PRNG then we should not, even though 2^256 also isn't a multiple.
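The 8-bit case in that analogy can be worked out exactly: 2^8 = 256 = 3 * 85 + 1, so one residue class gets a single extra preimage.

```python
from collections import Counter

# Exact distribution of (x mod 3) for a uniform 8-bit x
counts = Counter(x % 3 for x in range(256))          # {0: 86, 1: 85, 2: 85}
worst_bias = max(abs(c / 256 - 1 / 3) for c in counts.values())
# ~0.0026 absolute: detectable after roughly a few hundred thousand draws.
# The analogous excess for a 256-bit draw is ~1 part in 2^256: undetectable.
```

The same arithmetic with 2^256 in place of 2^8 gives a per-outcome excess so small that no feasible number of trials could resolve it.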

If you think there's any practical difference between a stream of true randomness and a modern CSPRNG seeded once with 256 bits of true randomness, then you should be able to provide a numerical simulation that detects it. If you (and, again, the world's leading cryptographers) are unable to adversarially create such a situation, then why are you worried that it will happen by accident?

SHA-1 is practically broken, in the sense that a practically relevant chosen-prefix attack can be performed for <$100k. This has no analogy with anything we're discussing here, so I'm not sure why you mentioned it.

You wrote:

> There are concepts like "k-dimensional equidistribution" etc. etc... where in some ways the requirements of a PRNG are far, far, higher than a cryptographically sound PRNG

I believe this claim is unequivocally false. A non-CS PRNG may be better because it's faster or otherwise easier to implement, but it's not better because it's less predictable. You've provided no reference for this claim except that PCG comparison table that I believe you've misunderstood per mananaysiempre's comments. It would be nice if you could either post something to support your claim or correct it.


> maintaining national values and tradition comes first

Japan's had a couple of major upheavals in their "national values" over the past 210-ish years, you might have heard of them.


What is that supposed to mean?

Consider the "national values" pre- and post-Meiji restoration, and pre- and post-WWII.

What exactly do you mean with all these dogwhistles? Just say it, don't beat around the bush.

A country can have good values and morals and be a peaceful, high-trust society with low crime, despite past atrocities.

Which country on this planet with a history of imperialism has NOT committed similar atrocities in the past? Does that invalidate their achievements?

So what's your argument here, other than a cheap gotcha, as if you'd just found something revolutionary in the history books that invalidates the point?


I'm saying your claim that "maintaining national values and tradition comes first' is completely ahistorical to the point of being "not even wrong". Let's take the Meiji restoration for example: https://en.wikipedia.org/wiki/Meiji_Restoration#Destruction_... https://en.wikipedia.org/wiki/Blood_tax_riots

That's a blast from the past I wasn't expecting to see today.

The internet used to be filled with thousands of these.

It was magical, serendipitous, and wonderful.

People's creativity hasn't disappeared, but it lives in corporate-owned distribution platforms now.

It's nice that people don't have to spend so much effort building websites, but we definitely lost something in the experience. We did gain convenience for creators and consumers (but also gained ads, tracking, etc.)

There are plenty of highly talented people publishing on YouTube, TikTok, and beyond, but we lost something with the loss of personal websites being popular and the loss of formats like Flash, platforms like NewGrounds, etc.

The old web felt like stepping into someone's personal atelier. Bespoke, intimate, crafted, and intentionally curated.


They can go ahead and fork it all they want, I'm sticking with the original.

> And why would R be "entitled" to an algebraic closure?

It's the birthright of every field.

