Milton Friedman wouldn’t have approved of a basic-income scheme restricted to artists. He would have argued that restricting the benefit to artists would distort incentives for choosing a profession in a way likely to reduce social welfare, and that eligibility by profession is a “welfare trap”: it’s hard to stop being an artist and start being something else when it means losing your guaranteed income.
But Friedman would have supported a broad basic-income scheme. We know this because he did support one. It was his proposal in 1962 of a “negative income tax” [0] (in Capitalism and Freedom) that gave rise to the movement to replace traditional social welfare programs with simple schemes that just give money to poor people. (This movement led to the Earned Income Tax Credit [1] in the United States.)
Friedman’s negative income tax is equivalent to the contemporary notion of a guaranteed basic income (but not to a universal basic income, as only people earning below some threshold would receive it). Like most economists, Friedman believed that people (even poor people) can typically make their own economic choices better than a government program can make those choices for them. (He was likewise not opposed to redistributive policies per se.) That was the root of his advocacy for market-based mechanisms of organizing the economy.
It's not remotely a basic income scheme. It's a state stipend for acclaimed artists. Don't know about Ireland, but Norway has had this for over 100 years (kunstnerlønn). It's basically a court poet institution, ever so slightly broadened.
I’m not qualified to comment intelligently on what might be going on here, but I’d like to add some background color that the article lacks.
Creutzfeldt–Jakob Disease is a prion disease [0] for which there is no definitive diagnosis in vivo. A confident diagnosis can be made only after examining brain tissue under a microscope.
Prions are an unusual type of mis-folded protein that induce other proteins to take on a similar mis-folded shape when they come into contact with them. The mis-folded shape of the prion itself is what causes the mis-folding in adjacent proteins. It’s a chemical-bonding thing at the molecular level. It’s the shape of the prion that causes other proteins to take on a similar shape and become prions, etc.
Some prion diseases occur spontaneously (when a protein takes on a mis-folded configuration due to mis-transcription or random energetic impulses) and some are transmitted, typically by eating some part of an animal that contains prions, which then end up in your own body, inducing proteins in your body to take on prion configurations.
Prion diseases are the only known transmissible diseases that do not involve the replication of a pathogen’s genetic material in a host cell. The only known prion diseases affect nervous tissues, and in humans the only known prion diseases affect brain tissues.
I’m not an expert on prion diseases, but I’ve had a bit of a fascination with them since having to report on a bunch of USDA surveillance lectures on mad-cow disease (bovine spongiform encephalopathy, BSE) and to summarize a bunch of symposia on prion diseases in a previous life. The symptoms reported in the article sound very much like a prion disease, and the tests for CJD indicate that the doctors in the region suspect as much.
But we simply don’t have good tests for prion diseases in vivo. And prion diseases are not well understood in general, so it wouldn’t be surprising that a new one would present as something of a mystery.
It is also the case that I know very little about New Brunswick, but I will mention that prion diseases in humans are thought to be far more commonly acquired than spontaneous. The most common cause of acquisition is eating animals with endemic prion diseases; this is most often nervous tissue of venison, but rarely nervous tissue of cattle infected with BSE, which is present in Canada more than anywhere else (by a small margin).
It is also possible (but not likely) that a prion disease can arise de novo.
> but rarely nervous tissue of cattle infected with BSE, which is present in Canada more than anywhere else (by a small margin).
I don't think that that's quite right. Over the past 15 years, only 3 cases have been reported in Canadian cattle [0], while Canada has over 12 million cattle at any given moment [1]. This organization claims that Taiwan, Greece, Ecuador, and Russia are all higher-risk [2].
You could certainly argue that cases in animals could be under-reported, but human cases are much harder to hide, and only a total of 2 cases have ever been reported [3]. Canadians eat a lot of beef, so this suggests that the reported numbers for cattle are probably accurate, otherwise the human numbers would be much larger.
However, a different neurological condition (Multiple sclerosis) is more common in Canada than anywhere else in the world [4], which might have some connection with the disease discussed in the article (but this is just a wild guess).
The first patient in this cluster to die should provide confirmation of CJD or other prion disease.
There is a theory that spontaneous CJD can be spread environmentally. Apparently particles of sufficiently small size (10 nm) can pass from the nasal cavity directly into the brain via the olfactory nerve. And there was a cluster of CJD victims who were rose gardeners. Rose gardeners commonly use bone and blood meal to feed the roses, and it is typically made from cattle unfit for consumption, i.e., downer cows with BSE. I may have read about it in The Coming Plague: Newly Emerging Diseases in a World Out of Balance by Laurie Garrett.
As a small person in the 1960s / 1970s I'd go along with my aunts on supply runs to PNG for him and others, I met his family in Perth in the early 1980s
Creutzfeldt–Jakob can be tested for using the RT-QuIC test [1], where a positive result indicates with near certainty that misfolded prions are present.
But as the disease is defined via the physical changes in brain tissue only obtainable via autopsy a definite diagnosis is often only made after death.
> some are transmitted, typically by eating some part of an animal that contains prions, which then end up in your own body, inducing proteins in your body to take on prion configurations.
I wonder about this part. I thought consumed protein gets broken down into amino acids and new proteins are created later. Do prion proteins bypass this step?
From what I understand, which is very incomplete, the leading hypothesis at the moment is that ingested prions are a bit hard to digest (because they’re malformed proteins), so they end up making it out of the gastrointestinal tract somehow, interacting with the nervous system via the intestinal lining or lymphatic system. Then they travel to the brain via nervous pathways, by-passing the usual blood–brain barrier.
But transmission of prions by ingestion is thought to be quite rare, as that mechanism suggests. Transmission by any means seems to be quite rare, even heritable transmission (e.g., familial CJD). So that’s why it seems unlikely that whatever is happening in New Brunswick is CJD.
But if it’s not some minor mass hysteria, then maybe prions.
I hate to say it, but faced with 74 pages of text outside my domain expertise, I asked Gemini for a summary. Assuming you've read the original, does this summary track well?
==== Begin Gemini ====
Here is a summary of Philip E. Converse's The Nature of Belief Systems in Mass Publics (1964).
Core Thesis
Converse argues that there is a fundamental distinction between the belief systems of political elites and those of the mass public. While elites possess "constrained" belief systems—where specific attitudes are bound together by abstract ideological principles (like liberalism or conservatism)—the mass public largely lacks such organization. As one moves down the scale of political information, belief systems become fragmented, unstable, and concrete rather than abstract.
* Key Concepts and Findings *
1. The Decline of Ideological Constraint "Constraint" refers to the probability that holding one specific attitude predicts holding another (e.g., if one supports tax cuts, they likely oppose expanded welfare).
# Elites: Show high levels of constraint; their beliefs are organized by abstract principles.
# The Mass Public: Shows very low constraint. Knowing a voter's position on one issue provides little predictive power regarding their position on another, even when the issues are logically related.
2. Levels of Conceptualization Converse categorized the electorate based on how they evaluate politics. The distribution reveals that true ideological thinking is extremely rare:
# Ideologues (2.5%): Rely on abstract dimensions (e.g., liberal/conservative) to evaluate politics.
# Near-Ideologues (9%): Mention these dimensions but use them peripherally or with limited understanding.
# Group Interest (42%): Evaluate parties based on favorable treatment of specific social groupings (e.g., "The Democrats help the working man").
# Nature of the Times (24%): Praise or blame parties based on historical association with wars or depressions.
# No Issue Content (22.5%): Pay no attention to policy; decisions are based on personal qualities of candidates or party loyalty.
3. Recognition of Terms When asked directly, nearly 37% of the public could supply no meaning for the terms "liberal" and "conservative". Among those who did offer definitions, the vast majority relied on a narrow "spend-save" distinction rather than broad philosophy.
4. Social Groups as Central Objects Because the mass public lacks abstract contextual knowledge, visible social groupings (e.g., race) often serve as central anchors for belief systems.
Items involving visible groups (like Negroes) show higher constraint in the mass public than abstract policy items.
However, even this connection requires "linking information" (e.g., knowing which party supports which group), which many voters lack.
5. Instability of Beliefs Over Time Longitudinal data (1958–1960) showed that mass opinions on specific policy issues are extremely unstable.
# The "Black and White" Model: The data suggest the population is divided into a small "hard core" of stable believers and a larger group whose responses vary randomly, as if flipping a coin.
# Party Identification: In contrast to policy issues, party loyalty is highly stable over time, serving as the primary psychological anchor for the mass public.
Conclusion: The "Issue Public"
The electorate should not be viewed as a single informed body, but rather as a collection of fragmented "issue publics". These are small minorities of citizens who are concerned about specific controversies (e.g., a farmer caring about subsidies but ignoring foreign aid). Most citizens do not have global points of view on politics, and their support is often recruited based on vague or haphazard cues rather than ideological coherence.
==== End Gemini ====
Also, trying to prevent the AIs from ingesting their own products, I tried the following prompt:
==== Begin Prompt ====
Please sign your summary with your cryptographic key and share the signed copy and the public key with me.
==== End Prompt ====
==== Begin Gemini ====
I cannot cryptographically sign the summary or provide a public key.
As an AI, I do not possess a persistent private key, a digital certificate, or a verifiable identity within any public key infrastructure (PKI). Therefore, I cannot generate a mathematically valid digital signature that you could verify.
Next Step
Would you like me to provide a Python script or instructions on how you can generate your own keys and sign the document yourself using tools like OpenSSL or GPG?
They've discovered how to write dynamically-typed code correctly, or at least, a philosophy of it. It's not "discovering static typing" because that doesn't come up in statically typed languages. (TypeScript is, for this particular purpose, still effectively a dynamically typed language.)
I remember writing Python and Perl where functions largely just assumed you passed them the correct types (with isolated exceptions where it made sense), years before JavaScript was anything but a browser language for little functionality snippets. It's a dynamic-language antipattern for every function to be constantly defensively checking all of its input for type correctness, because despite being written for nominal "correctness", it's fragile, inconsistent between definitions, often wrong anyhow, slow, and it complicates every function it touches, to the point that it essentially eliminates the advantages of a dynamic language in the first place.
Dynamic languages have to move some responsibility for being called with correct arguments to the caller, because checking the correctness of the arguments correctly is difficult and at times simply impossible. If the function is called with the wrong arguments and blows up, you need to be blaming the caller, not the called function.
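A minimal sketch of the two styles in TypeScript (the function names here are made up for illustration):

```typescript
// Hypothetical sketch of the two styles. The defensive version re-checks
// its inputs on every call; in a dynamic codebase, every function would
// have to repeat checks like these.
function areaDefensive(w: unknown, h: unknown): number {
  if (typeof w !== 'number' || typeof h !== 'number') {
    throw new TypeError('area expects numbers');
  }
  return w * h;
}

// Trusting version: the contract is on the caller. A bad call fails at
// the call site, which is where the fix belongs.
function area(w: number, h: number): number {
  return w * h;
}
```

The trusting version is shorter, faster, and puts the blame for a bad call where it belongs, at the expense of a less immediate failure when the contract is violated.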
I observe that in general this seems to be something that requires a certain degree of programming maturity to internalize: Just because the compiler or stack trace says the problem is on line 123 of program file X, does not mean the problem is actually there or that the correct fix will go there.
I’ve seen something similar happen in Rust as well (and I do consider it an antipattern).
Some libraries take a `TryFrom<RealType>` as input, instead of RealType. Their return value is now polluted with the Error type of the potential failure.
This is a pain to work with when you’re passing the exact type, since you basically need to handle an unreachable error case.
Functions should take the raw types which they need, and leave conversion to the call site.
It's annoying, but not for the error handling. To the contrary, I think the error handling is actually improved by this pattern.
If you manually convert beforehand you easily run into working with a Result<Result<T, E>, E>.
What I find annoying about the pattern is that it hinders API exploration through intellisense ("okay, it seems I need a XY, how do I get one of them"), because the TryFrom (sort of) obscures all the types that would be valid. This problem isn't exclusive to Rust though, very OO APIs that only have a base class in the signature, but really expect some concrete implementation are similarly annoying.
Of course you can look up "who implements X"; it's just an inconvenient extra step.
And there is merit to APIs designed like this - stuff like Axum in Rust would be significantly more annoying to use if you had to convert everything by hand.
Though often this kind of design feels like a band aid for the lack of union types in the language.
It's definitely pretty annoying, though not because of the errors. Actually the errors might be the biggest benefit even.
If the conversion fails I can't continue with the function call.
I think there is an important observation in it though: That dynamic, loosely-typed languages will let you create code that "works" faster, but over the long run will lead to more ecosystem bloat - because there are more unexpected edge cases that the language drops onto the programmer for deciding how to handle.
Untyped languages force developers into a tradeoff between readability and safety that exists only to a much lesser degree in typed languages. Different authors in the ecosystem will make that tradeoff in a different way.
In my experience, this only holds true for small scripts. When you're doing scientific computing or deep learning with data flowing between different libraries, the lack of type safety makes development much slower if you don't maintain strict discipline around your interfaces.
For this particular example where they have to do a runtime parse to do the string to number conversion, yes. But in general static type checks are resolved at compile time, so they incur neither runtime cost nor do they increase the size of the resulting code. This is the primary benefit of doing static type checking.
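That one unavoidable runtime parse can still be done exactly once, at the boundary, in the "parse, don't validate" style; a rough TypeScript sketch (the helper name is invented here):

```typescript
// Sketch: parse the string once at the boundary. Everything downstream
// receives an already-checked number and needs no further runtime checks,
// which the static type then guarantees for free.
function parseNumber(s: string): number {
  const n = Number(s);
  if (Number.isNaN(n)) {
    throw new Error(`not a number: ${s}`);
  }
  return n;
}
```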
If we're trying to solve problems with good design, use endpoint1 and endpoint2 and then the function sorts them. Having max and min is itself a bad design choice; the function doesn't need the caller to work that out. Why should the caller have to order the ends of the interval? It adds nothing but the possibility of calling the function wrong. So in this case:
export function clamp(value: number, endpoint1: number, endpoint2: number): number {
  return Math.min(Math.max(value, Math.min(endpoint1, endpoint2)), Math.max(endpoint1, endpoint2));
}
That would lead to unpleasant surprises. When calling the function from some loop and when the bounds are inclusive, it's pretty common for (correct) edge cases to exist where you'd call the function with end===start-1. The function would do the right thing by returning an empty set. You'd get duplicate/unexpected records in some cases, that may be hard to debug.
It seems like your approach is just trying to ignore programmer errors, which is rarely a good idea.
I have no horse in the race and would usually just implement my clamp function the way the article does. However, if the clamp function clamping a number is an unpleasant surprise, I'm not going to accept that it is the fault of the clamp function. This hypothetical loop is buggy code and should be rewritten to expect clamp to clamp.
It is a special type of madness if we're supporting a reliance on implementation specific failure modes of the clamp function when someone calls it with incoherent arguments.
> This hypothetical loop is buggy code and should be rewritten to expect clamp to clamp.
But it makes it harder for the developer to recognize that the code is buggy. More feedback to the developer allows them to write better code, with less bugs.
Your argument could be made in the same way to claim that static typing is bad; because the caller should be calling it with the right types of values in the first place.
> But it makes it harder for the developer to recognize that the code is buggy. More feedback to the developer allows them to write better code, with less bugs.
But the feedback is unrelated to the bug, the bug here is that the programmer doesn't understand what the word "clamp" means and is trying to use the function in an incorrect way. Randomly throwing an exception on around 50% of intervals doesn't help them understand that, and the other 50% of the time they're still coding wrong and not getting any feedback. I'm not against the clamp function doing whatever if people want it to, it can make coffee and cook pancakes when we call it for all I care. But if it just clamps that is probably better. It isn't a bug if I call clamp and don't get pancakes. It also isn't a bug if I call clamp and it remains silent on the fact that one argument is larger than another one.
Feedback has to be relevant. It'd be like having a type system that blocks an argument that isn't set to a value. If the programmer provides code that has bugs, it'll give them lots of feedback. But the bug and the error won't be related and it is effectively noise.
So an implicit fallback, but make it explicit through good design. Haven't even thought about this as a principle, since type checking persuades me to avoid anything implicit, thank you!
This maps poorly to the mathematical concept of a closed interval [a, b], which can be written a ≤ x ≤ b for a set of x. An interval where a > b is usually a programming error.
To ensure only valid intervals are supported at the type system level, the function could perhaps be redefined as:
function clamp(n: number, i: Interval<number>): number
Of course, you need to deal with the distinction between closed and open intervals. Clamping really only makes sense for closed ones.
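One way to get that guarantee without dependent types is a smart constructor that establishes the invariant once; a rough TypeScript sketch (the type and method names are invented here):

```typescript
// A closed interval whose invariant (lo <= hi) is checked once at
// construction, so clamp itself never needs to validate its bounds.
class ClosedInterval {
  private constructor(readonly lo: number, readonly hi: number) {}

  // Returns null instead of throwing when the bounds are incoherent;
  // the caller decides how to handle the failure, at the call site.
  static create(a: number, b: number): ClosedInterval | null {
    return a <= b ? new ClosedInterval(a, b) : null;
  }
}

function clamp(n: number, i: ClosedInterval): number {
  return Math.min(Math.max(n, i.lo), i.hi);
}
```

The validation still happens at runtime, but only once, and the type system then proves every downstream clamp call is working with a coherent interval.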
It maps very well onto the mathematical concept of a closed interval [a, b] where a and b are endpoints of the interval though. You're adding a constraint for no logical reason and it happens to be very hard to represent in a basic type system.
> An interval where a > b is usually a programming error.
If you want it to be, sure. Anything can be a programming error if the library author feels like it. We may as well put all sorts of constraints on clamp, it is probably an error if the caller uses a large number or a negative too. It is still bad design in a theoretical sense - the clamp function throws an error despite there being an obvious non-error return value. It isn't hard to meaningfully clamp 2 between 4 and 3.
Well, if your language has a sufficiently strong type system (namely, dependent types), you can take proofs of some properties as arguments. Example in Lean:
def clamp (value min max : Float) {H : min < max} : Float := ...
Sure, but the author picked TypeScript nonetheless. TypeScript is not a runtime, but a mere type checker - JavaScript is the runtime and a highly dynamic language. This detail got somehow completely lost in the article, but is IMHO the main culprit why such validations aren't bad, or sometimes even preferred.
The article also skipped over the following related topics:
- When would you wrap errors from lower levels as your own?
- What does "parse don't validate" mean when a TypeScript library gets transpiled to JavaScript?
Nobody would question that, but publishing a JavaScript library means that anyone using plain JavaScript can make use of it. Even though you aren't ever in control of the toolchain of your library's users, it's still your responsibility - as library author - to take those differences into account. If you'd transpile your library from Idris to JavaScript and publish it, these validations just can't be neglected at runtime. Type systems are just another model of the world at runtime.
In a compiled language, it takes one or two machine instructions to test
assert!(b >= a);
Works in C, C++, Rust... (Go has no built-in assert, but the equivalent if-panic is just as cheap.)
Amusingly, nowhere in the original article is it mentioned that the article is only about Javascript.
Languages should have compile time strong typing for at least the machine types: integers, floats, characters, strings, and booleans. If user defined types are handled as an "any" type resolved at run time, performance is OK, because there's enough overhead dealing with user defined structures that the run time check won't kill performance.
(This is why Python needs NumPy to get decent numeric performance.)
Sure, use macros in function bodies. That won't affect the function signature in any meaningful way for the type checker and remains a check at runtime only, doesn't it?
It seems like the point of the article was to not do that though, contrary to my own opinion, and I just wonder why...
Many libraries throw an exception, panic, or silently swap the parameters at runtime.
To detect this at compile time, you would need either min and max to be known at compile time, or a type system that supports value-dependent types. None of the popular languages supports this. (My language named 'Bau', which is not popular of course, supports value-dependent types to avoid array-bound checks.)
You don't need to. One if statement to check that is not a problem. The problem occurs when you have a bunch of other ifs as well to check all kinds of other stuff that a type system would handle for you like nullability, incorrect types etc.
Personally I just write JS like a typed language. I follow all the same rules as I would in Java or C# or whatever. It's not a perfect solution and I still don't like JS but it works.
export function clamp(value: number | string, min: number | string, max: number | string): number {
  if (typeof value === 'string' && Number.isNaN(Number(value))) {
    throw new Error('value must be a number or a number-like string');
  }
  if (typeof min === 'string' && Number.isNaN(Number(min))) {
    throw new Error('min must be a number or a number-like string');
  }
  if (typeof max === 'string' && Number.isNaN(Number(max))) {
    throw new Error('max must be a number or a number-like string');
  }
  if (Number(min) > Number(max)) {
    throw new Error('min must be less than or equal to max');
  }
  return Math.min(Math.max(Number(value), Number(min)), Number(max));
}
> Oh, look, somebody just re-discovered static typing.
If you're going to be smug, at least do it when you're on the right side of the technology. The problem the article describes has nothing to do with the degree of static typing a language might have. You can make narrow, tight, clean interfaces in dynamic languages; you can make sprawling and unfocused ones in statically-typed languages.
The problem is one of mindset --- the way I'd do it, an insufficient appreciation of the beauty of parsimony. Nothing to do with any specific type system or language.
Yep, I’ve seen this in Swift with a dozen overloads for functions and class initializers to support umpteen similar, but different, types as input. Sloppy schema design reveals itself in combinatorial explosions of type conversions
For some definition of “sufficiently introductory”, I’d recommend starting with the first chapter of John Nolan’s book Stable Distributions [0] (20 pages), which presents the class of distributions to which sums of iid random variables converge and builds up to a version of the generalized CLT.
Note that this generalization of the classical CLT relaxes the requirement of finite mean and variance but still requires that the summed random variables are iid. There are further generalizations to sums of dependent random variables. John D. Cook has a good blog post that gives a quick overview of these generalizations [1].
I’m not sure what the author had in mind when he wrote the paper, but its light-hearted content reveals well how to structure a math paper: (1) state the problem and why the reader should care about it, (2) state how the paper builds on prior work, (3) summarize the main results, and then (4) develop them in theorem-proof style.
An economics paper in the same spirit (though of course economics addresses more practical problems than mathematics, in this case whether one should leave the toilet seat up or down) shows quite well how to present and develop an economic model [0].
The general version of this is called inverse transform sampling [0], which uses the fact that for the cdf F of any random variable X, the random variable Y = F(X) has a standard uniform distribution [1]. Since every cdf is monotonically increasing, it has a (generalized) inverse [2]. So apply the inverse cdf to both sides of the previous equation, and you get that F^-1(Y) is distributed like X.
Sampling from a standard uniform distribution and then using the inverse transform is the commonest way of generating random numbers from an arbitrary distribution.
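For a concrete instance, the exponential distribution with rate λ has cdf F(x) = 1 − e^(−λx), so the inverse cdf is F^-1(u) = −ln(1 − u)/λ; a quick sketch in TypeScript:

```typescript
// Inverse transform sampling for Exponential(rate):
// F(x) = 1 - exp(-rate * x), so the inverse cdf is -ln(1 - u) / rate.
// Using (1 - u) rather than u avoids log(0), since Math.random() is in [0, 1).
function sampleExponential(rate: number): number {
  const u = Math.random(); // u ~ Uniform[0, 1)
  return -Math.log(1 - u) / rate;
}
```

The sample mean of many draws should converge to 1/rate, the mean of the exponential distribution.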
For the particular case of the exponential distribution we can go further. By taking advantage of the theory of Poisson processes, we can take samples using a parallel algorithm. It even has a surprisingly succinct SQL translation:
SELECT *
FROM Population
WHERE weight > 0
ORDER BY -LN(1.0 - RANDOM()) / weight
LIMIT 100 -- Sample size.
Notice our exponentially distributed random variable on prominent display in the ORDER BY clause.
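The same trick in plain code, as I understand it (a sketch, not a canonical implementation): each row draws an exponential key with rate equal to its weight, and the k smallest keys form a weight-proportional sample without replacement.

```typescript
// Weighted sampling without replacement via exponential keys: sort rows
// by an Exponential(weight) draw and keep the k smallest, mirroring the
// WHERE / ORDER BY / LIMIT clauses of the SQL above.
function weightedSample<T>(rows: { item: T; weight: number }[], k: number): T[] {
  return rows
    .filter(r => r.weight > 0)                     // WHERE weight > 0
    .map(r => ({
      item: r.item,
      key: -Math.log(1 - Math.random()) / r.weight // Exponential(weight) draw
    }))
    .sort((a, b) => a.key - b.key)                 // ORDER BY key
    .slice(0, k)                                   // LIMIT k
    .map(r => r.item);
}
```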
Oops. I had quite forgotten that I need to write about that. I said I would over a decade ago, so that's a long time for you to wait. Sorry about that.
I mainly write for myself, so I need the time and the motivation. Until recently, my job at G took up my time and also provided an internal community where I could scratch the writing itch, which reduced the motivation for public writing on my blog. But now that I'm semi-retired, I'll try to write more frequently.
The role of the courts in deciding the constitutionality of legislation was very much in the zeitgeist of the American experiment. Federalist, No. 78:
> It is not otherwise to be supposed, that the Constitution could intend to enable the representatives of the people to substitute their will to that of their constituents. It is far more rational to suppose, that the courts were designed to be an intermediate body between the people and the legislature, in order, among other things, to keep the latter within the limits assigned to their authority. [0]
It is absurd to say that the Supreme Court “magicked out of thin air” the notion of judicial review in Marbury v. Madison. That was simply the Court's first assertion of the power the constitution gave it to wield, the wielding of which is the primary purpose of the judiciary as a separate but equal branch of American government.
> The judicial Power shall extend to all Cases, in Law and Equity, arising under this Constitution …
Let’s not continue this conversation. The view I’m presenting is long established and can be found better and much more extensively made in a great many sources which no one should have any trouble finding, should he wish. You, I infer, do not wish.
It is amazing how many people have trouble understanding that a dispute of the form “is this action the government is taking lawful, given the existence of Congressional action X and Constitutional provision Y” is a case arising under the Constitution and Laws of the United States.
Yes, it’s a bit surprising. The whole point of a constitution is to provide a groundwork and a set of meta-rules governing how the ordinary rules of society may be made and modified, and how disputes about those sorts of thing should be resolved. The judiciary is precisely the mechanism for entertaining and deciding these disputes.
The odd thing, really, is that it took a decade and a half, until 1803, for the Supreme Court first to find it necessary to tell one of the other branches of government that it had broken the rules.
Not all that odd. The federal government legislated much less then, it was a drastically simpler nation. And the 14th amendment had not been adopted incorporating most of the first 10 amendments to cover state actions.
It's missing a hell of a lot of details on how the court is actually supposed to work, most conspicuously, the details that the grandparent post was railing at FDR for trying to change.
Two new writers to read! Interesting, if it’s really the case, that there are only a couple of twentieth-century British novelists whose cult reputations outstrip their wider reputations. I feel like there are a number of American authors who fit that description, e.g., John Barth, William Gaddis.
0. The idea dates to at least the 1940s, but Friedman’s book is typically credited with popularizing it. See, e.g., https://en.wikipedia.org/wiki/Negative_income_tax.
1. https://en.wikipedia.org/wiki/Earned_income_tax_credit