I don't like "Wrappeds" (low-key social hack to manufacture normalization of surveillance capitalism?), but with HN being public, I succumbed to temptation. Very fun, 10/10 no notes, surprisingly good for a small sample set this year.
> You write comments like you're trying to win a Pulitzer in Political Economy while trapped inside a middle-manager's strategy meeting.
"I'm not a capitalist, I am a creativist... Capitalists make things to make money, I like to make money to make things." - Eddie Izzard
It's more about the viability of making any kind of living from one's creative work, not motivation to create. (Though for creative works with large upfront costs, eg films, ROI motivation is relevant for backers.)
I feel like it says a lot, when intelligent amorality seems genuinely preferable to blundering incompetence. Many such cases. One wonders how much "enshittification" is intrinsic to networked software and our late-stage-whatever political economy, versus how much is a farcical byproduct of office politics and org chart turf wars.
They're inherently different: creative work (especially in a digital, trivially replicated format) is non-rivalrous, and at least partially non-excludable. "You wouldn't download a car." [0]
Property rights are a social technology to balance incentives and peacefully negotiate scarce resources (including time and effort). It's helpful to think about them in reverse: that they encode legitimacy to use force (usually via the State) against anyone who violates the right. That doesn't make the force right or wrong, a priori; it simply describes what happens. Exactly when that force is legitimate is the question at hand.
"Intellectual Property" is a post-hoc neologism. What we actually have are three very specific institutions: copyrights, patents, and trademarks. The last is arguably more like regulation than property: persistent brand identity to prevent fraud and confusion. Copyrights and patents are extremely clear in the Constitution, that their purpose is collective, moreso than an individual right for its own sake: "To promote the Progress of Science and useful Arts". Hence why they expire: at some point, the incentive has already been provided, and the body politic benefits more by their being open-sourced.
Whatever "rights" framework one subscribes to, it is an extremely thorny question, whether they include the right to alienate those rights, to give them up on purpose. We allow people to alienate their labor, an hour at a time; but not to do so for a lifetime (voluntarily sell one's self into slavery). Many US states now refuse to defend "non-compete" clauses: that you cannot constrain your future self from working for a competitor for X years, even if you wanted to, even for very lucrative terms in the contract.
I'd argue that intellectual/creative works are more like non-compete clauses: you actually create more bargaining power if you limit the scope, and take away the capacity to give up future bargaining power.
If it is the case that consciousness can emerge from inert matter, I do wonder if the way it pays for itself evolutionarily, is by creating viral social signals.
A simpler animal could have a purely physiological, non-subjective experience of pain or fear: predator chasing === heart rate goes up and run run run, without "experiencing" fear.
For a social species, it may be the case that subjectivity carries a cooperative advantage: that if I can experience pain, fear, love, etc., it makes the signaling of my peers all the more salient, inspiring me to act and cooperate more effectively, than if those same signals were merely mechanistic, or "+/- X utility points" in my neural net. (Or perhaps rather than tribal peers, it emerges first from nurturing in K-selected species: that an infant that can experience hunger commands more nurturing, and a mother that can empathize via her own subjectivity offers more nurturing, in a reinforcing feedback loop.)
Some overlap with Trivers' "Folly of Fools": if we fool ourselves, we can more effectively fool others. Perhaps sufficiently advanced self-deception is indistinguishable from "consciousness"? :)
>If it is the case that consciousness can emerge from inert matter, I do wonder if the way it pays for itself evolutionarily, is by creating viral social signals.
The question of what selection pressure produces consciousness is very interesting.
Their behavior being equivalent, what's the difference between a human and a p-zombie? By definition, they get the same inputs, they produce the same outputs (in terms of behavior, survival, offspring). Evolution wouldn't care, right?
Or maybe consciousness is required for some types of (more efficient) computation? Maybe the p-zombie has to burn more calories to get the same result?
Maybe consciousness is one of those weird energy-saving exploits you only find after billions of years in a genetic algorithm.
The dilemma is that the one thing we can be sure of is our subjectivity. There is no looking through a microscope to observe matter empirically, without a subjective consciousness to do the looking.
So if we're eschewing the inelegance / "spooky magic" of dualism (and fair enough), we either have to start with subjectivity as primitive (idealism/pan-psychism), deriving matter as emergent (also spooky magic); or, try to concoct a monist model in which subjectivity can emerge from non-subjective building blocks. And while the latter very well might be the case, it's hard to imagine it could be falsifiable: if we constructed an AI or algo which exhibits verifiable evidence of subjectivity, how would we distinguish that from imitating such evidence? (`while (true) print "I am alive please don't shut me down"`).
If any conceivable imitation is necessarily also conscious, we arrive at IIT (Integrated Information Theory): that it is like something to be a thermostat. If that's the case, it's not exactly satisfying, and implies a level of spooky magic almost indistinguishable from idealism.
It sounds absurd to modern western ears, to think of Mind as a primitive to the Universe. But it's also just as magical and absurd that there exists anything at all, let alone a material reality so vast and ordered. We're left trying to reconcile two magics, both of whose existences would beggar belief, if not for the incontrovertible evidence of our subjectivity.
> It has received fulsome praise from such right-libertarian eminences as economists Milton Friedman and Donald Boudreaux
I had Boudreaux for an economics class in law school. His, shall we say, enthusiasm and dogmatic faith in markets was off-putting. I once got riled up and sparred with him a little in class when we were on the subject of monopolies and he was teaching the theory that monopoly profits are an impossibility. My point was that the theory only makes sense when transaction costs are zero; non-zero transaction costs mean monopolies can extract excess profits when there's an asymmetry in transaction (particularly financing) costs that favors the monopolist, as there often will be.
But later I realized that, at the end of the day, and like most of abstract economics, the theory was more right than wrong, just incomplete and oversimplified. And I learned much that day. If I had rejected the theory outright and walked away (notwithstanding remembering how to recite it for the exam), I would have ended up in a much worse place intellectually than accepting it, with qualifications, as an important conceptual model. The mature takeaway from his teaching the theory isn't that monopolies can't exist, just that it's more difficult than you'd think, and monopolists' ability to extract profits is bounded by the degree to which the situation deviates from the simplified model world. And doing so puts us in a place where we can constructively ask and answer quantitative questions, as opposed to debating in qualitative rhetorical terms.
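To make that "bounded by the deviation" point concrete, here's a back-of-the-envelope limit-pricing sketch (entirely my own illustration with hypothetical numbers and names, not anything Boudreaux taught): if a would-be entrant faces an extra per-unit transaction/financing cost that the incumbent doesn't, the incumbent can price up to that wedge above cost without inviting entry; the excess profit is real, but capped by the size of the asymmetry.

```python
# Toy limit-pricing model (hypothetical numbers, my own illustration):
# the incumbent can sustain a price up to the entrant's break-even point,
# so excess profit = (entrant's extra transaction/financing cost) * quantity.

def max_excess_profit(unit_cost: float, entrant_friction: float, quantity: float) -> float:
    """Highest excess profit the incumbent can take before entry becomes attractive."""
    limit_price = unit_cost + entrant_friction      # above this price, an entrant breaks even
    return (limit_price - unit_cost) * quantity     # collapses to entrant_friction * quantity

print(max_excess_profit(unit_cost=10.0, entrant_friction=0.0, quantity=1_000))  # 0.0    -- frictionless textbook world
print(max_excess_profit(unit_cost=10.0, entrant_friction=1.5, quantity=1_000))  # 1500.0 -- bounded by the asymmetry
```

The zero-friction case recovers the "no monopoly profits" result; everything else becomes a quantitative question about the size of the wedge.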
It's childish to nit-pick ideas and then pretend one has vanquished them by pointing out technicalities about how they're wrong. The point of "I, Pencil" is to teach how humans communicate, cooperate, and produce amazing things through systems and processes that keep them all at arm's length, without everyone deliberately working toward the same targeted outcome. Pedagogically, it does teach something profound and non-obvious. Is the essay simplified and reductive? Of course, but no more reductive (and IMO considerably less so) than, e.g., the anti-colonialist rhetoric trotted out in "Revisited", which feels more like an extended ad hominem than a substantive piece about political economics. (And it's worth pointing out that economic thinking can be useful in understanding and addressing colonial exploitation, such as understanding it in terms of cost externalization. And unlike rhetorical moralizing, that approach suggests far more actionable options for systematically ameliorating such exploitation--i.e. how to put our money where our mouth is, literally and figuratively.)
It’s a problem with representation generally. The political theorist Benjamin Studebaker uses an analogy of getting into a hot air balloon: there are ways you can be of service to those below, giving them an overhead view, maybe warning them of danger, etc. But the further up you go, the less you have skin in the game, and the less the little ant-people can truly be real to you.
Rather than trying to force a round peg into a square hole, I’d say this is a case for refactoring bicameralism: one house of professionalized legal specialists and technocrats, another house chosen by rotating lottery for short stints of public service by random citizens (sortition).
Unless I’m missing something, that’s quintiles of absolute wealth. I’d be curious how the data shakes out for the lowest quintile calculated relative to costs of living.
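Concretely, what I have in mind is something like this (completely made-up toy data, just to show the computation): deflate each household's wealth by a local cost-of-living index, re-rank into quintiles, and compare who lands in the bottom fifth under each definition.

```python
# Toy illustration (numbers entirely made up) of "lowest quintile relative to cost
# of living": deflate wealth by a local cost-of-living index, re-rank, and compare
# which households land in the bottom fifth under each definition.

households = [
    # (absolute_wealth, cost_of_living_index where 1.0 = national average)
    (45_000, 1.5),   # modest wealth in an expensive metro
    (40_000, 0.8),   # slightly less wealth in a cheap region
    (250_000, 1.2),
    (90_000, 1.0),
    (500_000, 0.9),
]

def quintile(value, values):
    """1 = bottom 20% ... 5 = top 20% (simple rank-based bucketing)."""
    rank = sorted(values).index(value)
    return rank * 5 // len(values) + 1

absolute = [w for w, _ in households]
adjusted = [w / col for w, col in households]

for (w, col), adj in zip(households, adjusted):
    print(f"wealth={w:>7}  abs quintile={quintile(w, absolute)}  "
          f"CoL-adjusted quintile={quintile(adj, adjusted)}")
```

With real data you'd want regional price parities or similar, but even this toy version shows the re-ranking I'm curious about: the first two households swap places once cost of living is factored in.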
The better analogy might be, "when the morality police call the restaurant, they divulge which table you sit at every day during lunch". And it's also not clear that it would be noticed: national security letters, gag orders, parallel construction, etc.
It's just another principal-agent problem, and I agree that a fully self-sovereign life, with no dependence on trust or agents, is an unrealizable ideal; and, that a decent solution (while not perfect) is reputation stake and aligned incentives, check and check in Apple's case. I too think Cook is sincere, and I trust them as far as I can throw their products, which is to say, a little. (The Apple Tax is so they don't have to rely on a sketchy big-data business model.)
That said, computing and InfoSec have some unique contours, in a way that trusting a mechanic or a lawyer does not. Those can have catastrophic failure modes as well (crashing from a shoddy repair, getting sued based on bad legal advice), but they aren't systemic to society, and have lower switching costs.
And I ultimately think it's a false choice. When it comes to meatspace security, it's possible to have trusted and accountable public institutions, and allow citizens to have some means for self-sovereignty (2A, locked doors). It would be foolish to rely only on one or the other, either as a society or an individual.
So I'm deeply grateful for the Stallman types, pushing forward the capacity for self-sovereignty. Even if it doesn't currently meet my needs from a risk/benefit tradeoff, I still benefit from the ecosystem, and its BATNA, and I look forward to the day I sever my dependence on Apple's ecosystem, whether or not they betray my trust.
> a fully self-sovereign life, with no dependence on trust or agents, is an unrealizable ideal
I agree with this part, but relying on Apple is quite far from self-sovereignty compared to many other practical alternatives: not relying on external clouds, GrapheneOS, Linux. By relying on Apple, you not only pay a tax to essentially bribe them not to attack you (perhaps a viable strategy, not too different from taxes to governments), but more importantly you give up the ability to resist without serious compromises (you can't have E2EE backups on your own cloud if they say so). This is akin to paying taxes to the government for better police coverage, only for them to ban locks, security cameras, and leaving the walled garden.
The problem with the current computing security paradigm is that it puts too much trust in entities that do not deserve it, because the entities are simply too powerful and do not suffer consequences when they break that trust.
Fair points, I can't say I disagree, and I'm aware of the trade-offs I'm making. (I was actually tempted to use the word "bribe" when describing the Apple Tax!)
There are a couple meaningful points of divergence in the ecosystem: Mac vs iOS (the former has some self-sovereignty, even if there are risks of backdoors/etc); and, cloud vs not (I mostly avoid cloud usage, iCloud or otherwise, and when I do use it, I treat all content as public).
I agree about the trust problem. Varoufakis might make some valid points re: "Technofeudalism", but then Bruce Schneier was making a similar analogy over a decade ago. I've heard cogent arguments that early feudalism evolved from rational self-interest, that serfs were willing to trade some degree of autonomy for safety, and it does feel that many "normie" users (especially with iOS) are making a similar rational trade, even if it sets up an asymmetric power dynamic, and a risk (inevitability?) of future betrayal.
I'm curious if you have any examples in mind for Apple, re: "do not suffer consequences when they break that trust". IMO, they've done okay at putting actions and costly signaling behind their privacy rhetoric, and I think they'd take some kind of market hit if they were to blatantly break that trust. But I'm curious if you think there are past instances in which that already happened, which maybe I've forgotten or am neglecting, or if it's a threat model of the future.
Their image scanning proposal? The recent UK E2EE backup thing?
For the first, although they eventually backtracked, merely proposing it should be ruinous to the idea that they are actually a privacy-oriented company.
Although the second situation was forced by a government, it is still a self-inflicted problem, in that iCloud is the only way you can back up your stuff. Not being able to have encrypted backups is a serious QoL issue.
> I mostly avoid cloud usage, iCloud or otherwise, and when I do use it, I treat all content as public
This is also my attitude toward "the cloud" in general.
Someone with everything to lose if they break it. Most large companies do not. Perhaps smaller companies whose main selling point is privacy? Proton? Signal? I don't use either but they seem relatively plausible.
> You write comments like you're trying to win a Pulitzer in Political Economy while trapped inside a middle-manager's strategy meeting.