I’ve been ordering Americanos for 20 years. Espresso drinks became a very common thing around the time when Starbucks took off in the 90s. But it does depend on where you go. Diners and gas stations and some kinds of cafes and restaurants (especially in small towns) often only had drip coffee until recently, but these days you can get an Americano in many gas stations too. Cafes with baristas making espresso drinks are the norm in big cities and have been for some time.
I wonder if Ray Kurzweil was inspired by this story, or if there was some other futurist who inspired them both. I had a sort of déjà vu reading this, having been at Kurzweil’s Siggraph keynote in 2000. He was predicting this very scenario - the singularity would bring nanobots that make humans immortal. His talk made an impression on my young mind. It wasn’t until later that I realized Kurzweil was just peddling the fountain of youth, and was somewhat unscrupulous about it…
Looks plausible for a minute, but when you start to think about it, you realize he has conflated longevity with average lifespan, and that can’t possibly be a mistake; he’s not that ignorant or careless. The plot is missing data points that were easily available when it was made, data points that would completely contradict the trend line he put in the graph. Turns out human longevity hasn’t really budged for ten thousand years, but average lifespan has changed a lot, due to lower infant mortality, sanitation, vaccines, less war, and more science.
I think a lot of the graphics in that article are equally sketchy when you look a little closer, and a lot of his predictions from 2000 are already orders of magnitude off, so I have no trust in anything Kurzweil writes or predicts. But given the state of the earth today, maybe it’s a good thing that significant longevity or immortality isn’t just around the corner? It’s a fun thought experiment and a nice story though.
> This is why all the moderation pushes toward deleting duplicates of questions, and having a single accepted answer.
My personal single biggest source of frustration with SO has been outdated answers that are locking out more modern and correct answers. There are so many things for which there is no permanently right answer over time. It feels like SO started solidifying and failed to do the moderation cleaning and maintenance needed to keep it current and thriving. The over-moderation you described helps people for a short time but then doesn’t help the person who googles much later. I’ve also constantly wished that bad answers would get hidden or cleaned out, and that accepted answers that weren’t very good would get more actively changed to better ones that showed up; it’s pretty common to see newer+better answers than the accepted one.
> outdated answers that are locking out more modern and correct answers. There are so many things for which there is no permanently right answer over time.... I’ve also constantly wished that bad answers would get hidden or cleaned out, and that accepted answers that weren’t very good would get more actively changed to better ones that showed up; it’s pretty common to see newer+better answers than the accepted one.
Okay, but who's going to arbitrate that? It's not like anyone was going to delete answers with hundreds of upvotes because someone thought they were wrong or outdated. And there are literally about a million questions per moderator, and moderators are not expected to be subject matter experts on anything in particular. Re-asking the question doesn't actually help, either, except sometimes when the question is bad. (It takes serious community effort to make projects like https://stackoverflow.com/questions/45621722 work.)
The Trending sort was added to try to ameliorate this, though.
Reading the rest of this thread, it sounds like moderation truly was SO’s downfall, and almost everyone involved seems to agree the site became extremely anti-social. Not sure I’ve ever seen the word ‘toxic’ this many times in one thread before.
Anyway, that is a good question you asked, one that they didn’t figure out. But if there are enough people to ask questions and search for answers, then aren’t there enough people to manage the answers? SO already had serious community effort; it just wasn’t properly focused by the UX options they offered. Obviously you need to crowd-source the decisions that can’t scale to mods, while figuring out the incentive system to reduce gaming. I’m not claiming this is easy, in fact I’m absolutely certain this is not easy to do, but SO brought too little too late to a serious problem that fundamentally limited and reduced the utility of the site over time.
Moderation should have been aimed squarely at making the site friendly, and community should be moderating the content entirely, for exactly the reasons you point out - mods aren’t the experts on the content.
One thing the site could have done is tie questions and answers to specific versions of languages, libraries, tools, or applications. Questions asked where the author wasn’t aware of a version dependency could be later assigned one when a new version changes the correctness of an answer that was right for previous versions. This would make room for new answers to the same question, make room for the same question to be asked again against a new version, and it would be amazing if while searching I could filter out answers that are specific to Python 2, and only see answers that are correct for Python 3, for example.
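As a rough sketch of how that version scoping could work (purely hypothetical field names and logic, nothing SO actually implements): each answer carries a version range for its tag, and search filters on the version the reader cares about.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Answer:
        body: str
        tag: str                             # e.g. "python"
        min_version: tuple                   # first version the answer is known to apply to
        max_version: Optional[tuple] = None  # None means still current

    def answers_for(answers, tag, version):
        """Keep only answers that apply to the requested tag and version."""
        return [a for a in answers
                if a.tag == tag
                and a.min_version <= version
                and (a.max_version is None or version <= a.max_version)]

    # Example: hide Python 2-only answers when browsing as a Python 3 user.
    py2_only = Answer("use urllib2", "python", (2, 0), (2, 7))
    py3_ok = Answer("use urllib.request", "python", (3, 0))
    print(answers_for([py2_only, py3_ok], "python", (3, 11)))  # only the urllib.request answer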
Some of the answers should be deleted (or just hidden but kept around to be used as defense when someone tries to re-add bad or outdated answers.) The policy of trying to keep all answers, regardless of quality, allowed too much unhelpful noise to accumulate.
> Moderation should have been aimed squarely at making the site friendly, and community should be moderating the content entirely, for exactly the reasons you point out - mods aren’t the experts on the content.
The community was the one moderating the content in its entirety (with a very small fraction of that moderation being done by the mods - the ones with a diamond after their name... after all, they're part of the community too). Community moderation of content was crowdsourced.
However, the failing was that not enough of the community was doing that moderation.
The tools that diamond (elected) moderators had to "make the site friendly" were removing comments and banning users.
The "some of the answers should have been deleted" ran counter to the mod (diamond mod this time https://meta.stackoverflow.com/q/268369 has some examples of this policy being described) policy that all content - every attempt at answering a question - is valid and should remain.
> every attempt at answering a question - is valid and should remain.
Yeah, this is describing a policy that seems like it’s causing some of the problem I’m talking about. SO’s current state today is evidence that not every attempt at answering a question should ‘remain’. But of course it depends on what exactly we mean by that too. Over time, valid attempts that don’t help should arguably be removed from the default view, especially when high quality answers are there, but they don’t have to be deleted and they can be shown to some users. One of the things it sounds like SO didn’t identify or figure out is how to separate the idea of an answer being valid from the idea that the answer should remain visible. It would serve the site well to work on making people who try to answer feel validated, while at the same time not necessarily showing every word of it to every user, right?
That would entail a significant redesign of the underlying display engine... and agreement at the corporate level that that's the correct direction.
Unfortunately, after Jeff left I don't think there was that much upper management level support for "quality before quantity". After the sale it feels like it was "quantity and engagement will follow" and then "engagement through any means". Deleting and hiding questions or answers that aren't high quality... really would mean making most of the site hidden, and that wouldn't help engagement at all.
Yes, I noticed this as well. Over the past few years, it's happened again and again that the "Top Answer" ends up being useless, and I've found myself constantly sorting the answers by "Recent" to find the ones that are actually useful and relevant.
> There are so many things for which there is no permanently right answer over time.
Yeah it's doubly stupid because the likelihood of becoming outdated is one of the reasons they don't allow "recommendation" questions. So they know that it's an issue but just ignore it for programming questions.
OP quoted “non-square pixels” from the article, which is talking about pixel aspect ratios, i.e., width vs height. The implicit alternative to square in this context is rectangular, we’re not talking about circular or other non-rectangular shapes. Whenever the display aspect ratio is different than the storage or format aspect ratio, that means the pixels have to be non-square. For example, if a DVD image is stored at 720x480 and displayed at 4:3, the pixel aspect ratio would have to be 8:9 to make it work out: (720x8)/(480x9)==4/3. I believe with NTSC, DVDs drop a few pixels off the sides and use 704x480 and a pixel aspect ratio of 10:11.
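A quick sanity check of that arithmetic, just using the numbers above (display aspect = storage aspect x pixel aspect):

    from fractions import Fraction

    def display_aspect(width_px, height_px, pixel_aspect):
        """Display aspect ratio = storage aspect ratio * pixel aspect ratio."""
        return Fraction(width_px, height_px) * pixel_aspect

    print(display_aspect(720, 480, Fraction(8, 9)))    # 4/3 for the full 720-wide DVD frame
    print(display_aspect(704, 480, Fraction(10, 11)))  # 4/3 for the 704-wide NTSC variant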
If TV settings offend you, you should be offended by anyone watching anything made for movie theater on a TV or iPad or - gasp - a phone, regardless of settings. And it should be offensive to watch with the lights on or windows open. ;)
To be fair, 24p is crap. You know and agree with that, right? Horizontal pans in 24p are just completely unwatchable garbage, verticals aren’t that much better, action sequences in 24p suck, and I somehow didn’t fully realize this until a few years ago.
A lot of motion-smoothing TVs are indeed changing framerate constantly; they’re adaptive, and the smoothing switches based on the content. I suspect this is one reason kids these days don’t get the soap-opera effect reaction to high framerate that old timers who grew up watching 24p movies and 60i TV do. They’re not used to 24p being the gold standard anymore, and they watch 60p all the time, so 60p doesn’t look weird to them like it does to us.
TVs with motion interpolation fix the horizontal pan problem, so they have at least one thing going for them. I’m serious. Sometimes the smoothing messes up the content or motion, it has real and awful downsides. I had to watch Game of Thrones with frame interp, and it troubled me and ruined some shots, but on the whole it was a net positive because there were so many horiz pans that literally hurt my eyeballs in 24p.
Consumers, by and large, don’t seem to care about brightness, color, or framerate that much, unless it’s really bad. And most content doesn’t (and shouldn’t) depend on brightness, color, or framerate that much. With some real and obvious exceptions, of course. But on the whole I hope that’s also something film school taught you, that you design films to be resilient to changes in presentation. When we used to design content for analog TV, where there was no calibration and devices in the wild were all over the map, you couldn’t count on anything about the presentation. Ever had to deal with safe regions? You lost like 15% of the frame’s area! Colors? Ha! You were lucky if your reds were even close to red.
BTW I hope you take this as lighthearted ribbing plus empathy, and not criticism or argument. I’ve worked in film too (CG film), and I fully understand your feelings. The first CG film I worked on, Shrek, delivered final frames in 8bit compressed JPEG. That would probably horrify a lot of digital filmmakers today, but nobody noticed.
I thought your comment was hilarious so thank you for it. 20 year old me would have had a field day with it, especially the 24p stuff. ;)
On your presentation point, I think 20 year old me would have generally agreed with you but also argued strongly that people should be educated on the most ideal environment they can muster, and then should muster it! This is obviously silly, but 20 year old me is still in there somewhere. :)
If you’re noticing stuttering on 24fps pans, then someone made a mistake when setting the shutter speed (they set it too fast); the motion blur should have smoothed it out. This is the cinematographer’s error more than anything.
60fps will always look like cheap soap opera to me for movies.
Pans looking juddery no matter what you do in 24 fps is a very well known issue. Motion blur’s ability to help (via the 180-degree shutter rule) is quite limited, and you can also reduce the judder somewhat by panning very slowly (the 1/7 frame rule), but there is no cure. The cinematographer cannot fix the fundamental physical problem of the 24 fps framerate being too slow.
24 fps wasn’t chosen because it was optimal or high quality; it was chosen because it was the cheapest film rate that meets the minimum needed to avoid degrading into a slideshow while still syncing with audio.
Here’s an example that uses the 180-shutter and 1/7-frame rules and still demonstrates bad judder. “We have tried the obvious motion blur which should have been able to handle it but even with feature turned on, it still happens. Motion blur applied to other animations, fine… but with horizontal scroll, it doesn’t seem to affect it.” https://creativecow.net/forums/thread/horizontal-panning-ani...
Even with the rules of thumb, “images will not immediately become unwatchable faster than seven seconds, nor will they become fully artifact-free when panning slower than this limit”. https://www.red.com/red-101/camera-panning-speed
The thing I personally started to notice and now can’t get over is that during a horizontal pan, even with a slow speed and the prescribed amount of motion blur, I can’t see any details or track small objects smoothly. In the animation clip attached to that creativecow link, try watching the faces or look at any of the text or small objects in the scene. You can see that they’re there, but you can’t see any detail during the pan. Apologies in advance if I ruin your ability to watch pans in 24fps. I used to be fine with them, but I truly can’t stand them anymore. The pans didn’t change, but I did become more aware and more critical.
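To put rough numbers on that seven-second guideline, here’s my own back-of-the-envelope, assuming a 1920-pixel-wide frame and a 180-degree shutter (nothing from the RED article beyond the 7-second figure):

    fps = 24
    frame_width_px = 1920
    pan_duration_s = 7  # "full frame width in no less than ~7 seconds" guideline

    px_per_frame = frame_width_px / (pan_duration_s * fps)  # image displacement per frame
    blur_px = px_per_frame / 2  # a 180-degree shutter blurs over half the frame interval

    print(f"displacement per frame: {px_per_frame:.1f} px")  # ~11.4 px
    print(f"motion blur length:     {blur_px:.1f} px")       # ~5.7 px
    # Even at the 'safe' pan speed, each frame jumps ~11 px with only ~6 px of blur,
    # which is why fine detail smears and the judder never fully disappears at 24 fps.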
> 60fps will always look like cheap soap opera to me for movies
Probably me too, but there seems to be some evidence and hypothesizing that this is a learned effect because we grew up with 24p movies. The kids don’t get the same effect because they didn’t grow up with it, and I’ve heard that it’s also less pronounced for people who grew up watching PAL rather than NTSC. TVs with smoothing on are curing the next generation of being stuck with 24 fps.
> does applying the same transfer function to each pixel (of a given colour anyway) count as “processing”?
This is interesting to think about, at least for us photo nerds. ;) I honestly think there are multiple right answers, but I have a specific one that I prefer. Applying the same transfer function to all pixels corresponds pretty tightly to film & paper exposure in analog photography. So one reasonable followup question is: did we count manually over- or under-exposing an analog photo as manipulation or “processing”? Like you can’t see an image without exposing it, so even though there are timing & brightness recommendations for any given film or paper, generally speaking it’s not considered manipulation to expose it until it’s visible. Sometimes, if we pushed or pulled to change the way something looked such that you could see things that weren’t visible to the naked eye, then we called it manipulation, but generally people aren’t accused of “photoshopping” something just by raising or lowering the brightness a little, right?
When I started reading the article, my first thought was, ‘there’s no such thing as an unprocessed photo that you can see’. Sensor readings can’t be looked at without making choices about how to expose them, without choosing a mapping or transfer function. That’s not to mention that they come with physical response curves that the author went out of his way to sort-of remove. The first few dark images in there are a sort of unnatural way to view images, but in fact they are just as processed as the final image, they’re simply processed differently. You can’t avoid “processing” a digital image if you want to see it, right? Measuring light with sensors involves response curves, transcoding to an image format involves response curves, and displaying on monitor or paper involves response curves, so any image has been processed a bunch by the time we see it, right? Does that count as “processing”? Technically, I think exposure processing is always built-in, but that kinda means exposing an image is natural and not some type of manipulation that changes the image. Ultimately it depends on what we mean by “processing”.
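To make the “same transfer function applied to every pixel” idea concrete, here’s a tiny sketch of my own (not from the article) of the kind of global mapping that happens before you can see anything: a uniform exposure scale followed by a display gamma curve.

    import numpy as np

    def expose(raw_linear, exposure_stops=0.0, gamma=2.2):
        """Apply one global transfer function to every pixel:
        a uniform exposure scale (in stops) followed by a display gamma curve."""
        scaled = raw_linear * (2.0 ** exposure_stops)  # brighten/darken every pixel equally
        return np.clip(scaled, 0.0, 1.0) ** (1.0 / gamma)

    # Fake 'sensor' values in linear light, mostly dark like the article's first images.
    raw = np.array([0.001, 0.01, 0.05, 0.2, 0.8])
    print(expose(raw))                     # gamma only
    print(expose(raw, exposure_stops=2))   # pushed two stops, like a longer exposure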
It's like food: Virtually all food is "processed food" because all food requires some kind of process before you can eat it. Perhaps that process is "picking the fruit from the tree", or "peeling". But it's all processed in one way or another.
But that qualifier is stupid because there’s no clear stopping point separating ultra-processed foods from all other foods. Is cheese an ultra-processed food? Is wine?
There actually is a stopping point, and the line between ultra-processed food and processed food is often drawn where you could expect someone in their home kitchen to be able to do the processing. So the question kind of becomes whether or not you would expect someone to be able to make cheese or wine at home. I think there you would find it natural to conclude that there's a difference between a Cheeto, which can only be created in a factory with a secret extrusion process, versus cottage cheese, which can be created inside of a cottage. And you would probably also note that there is a difference between American cheese, which requires a process that results in a Nile Red upload, and cheddar cheese, which could still be done at home over the course of months, like how people make soap at home. You can tell that wine can be made at home because people make it in jails.

I have found that a lot of people on Hacker News have a tendency to flatten distinctions into a binary, and then attack the binary as if distinctions don't matter. This is another such example.
There actually is no agreed-upon definition of "ultra-processed foods", and it's much murkier than you make it out to be. Not to mention that "can't be made at home" and "is bad for you" are entirely orthogonal qualities.
I see what you mean, but FWIW “fixed” doesn’t sufficiently constrain or describe it. For example, filling a rectangle with black or random pixels is a fixed algorithmic sequence, and the same might go for in-painting from the background. The redaction output simply should not be a function of the sensitive region’s pixels. The information should be replaced, not modified.
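A minimal sketch of what I mean by “replaced, not modified”, assuming a numpy image array (hypothetical helper, not from the article); the key property is that the output region never reads the sensitive pixels:

    import numpy as np

    def redact(image, top, left, height, width, fill=0):
        """Overwrite the sensitive region with a constant value.
        The result is not a function of the region's original pixels,
        unlike blurring or pixelation, which can sometimes be reversed."""
        out = image.copy()
        out[top:top + height, left:left + width] = fill
        return out

    img = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)
    print(redact(img, top=2, left=2, height=3, width=3))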
> so I say take as much as you can. Commons would be if it’s owned by nobody
This isn’t what “commons” means in the term ‘tragedy of the commons’, and the obvious end result of your suggestion to take as much as you can is to cause the loss of access.
Anything that is free to use is a commons, regardless of ownership, and when some people use too much, everyone loses access.
Finite digital resources like bandwidth and database sizes within companies are even listed as examples in the Wikipedia article on Tragedy of the Commons. https://en.wikipedia.org/wiki/Tragedy_of_the_commons
No, the word and its meaning both point to the fact that there’s no exclusive ownership of a commons. This is important, since ownership is associated with bearing the cost of usage (i.e., depreciation), which would lead an owner to avoid the tragedy of the commons. Ownership is regularly the solution to the tragedy (socialism didn’t work).
The behavior that you warn against is that of a free rider that makes use of a positive externality of GitHub’s offering.
That is one meaning of “commons”, but not all of them, and you might be mistaking which one the phrase ‘tragedy of the commons’ is using.
“Commons can also be defined as a social practice of governing a resource not by state or market but by a community of users that self-governs the resource through institutions that it creates.”
The actual mechanism by which ownership resolves tragedy of the commons scenarios is by making the resource non-free, by either charging, regulating, or limiting access. The effect still occurs when something is owned but free, and its name is still ‘tragedy of the commons’, even when the resource in question is owned by private interests.
Ownership, I guess. The 2 parent comments are claiming that “tragedy of the commons” doesn’t apply to privately owned things. I’m suggesting that it does.
Edit: oh, I do see what you mean, and yes I misunderstood the quote I pulled from WP - it’s talking about non-ownership. I could pick a better example, but I think that’s distracting from the fact that ‘tragedy of the commons’ is a term that today doesn’t depend on the definition of the word ‘commons’. It’s my mistake to have gotten into any debate about what “commons” means, I’m only saying today’s usage and meaning of the phrase doesn’t depend on that definition, it’s a broader economic concept.
What’s not what? Care to back up your argument with any links? I already pointed out that examples in the WP article for ‘Tragedy of the Commons’ use private property. https://en.wikipedia.org/wiki/Tragedy_of_the_commons#Digital... Are you contradicting the Wikipedia article? Why, and on what basis?
I'm contradicting your interpretation of the Wikipedia article. It does not support your initial statement that a) Github's (or any other company's) free tier constitutes a commons and/or b) the "overuse" of said free tiers by free riders could be the basis of a tragedy of the commons (ToC). The idea is absurd, since there is no commons and also no tragedy. To the contrary. Commons have an external or natural limit to how much they can provide in a given time without incurring cost in the form of depreciation. But there is no external or natural limit to the free tier. The free tier is the result of the incentives under which the Github management operates and it is fully at their discretion, so the limits are purely internal. Unlike in the case of a commons, more usage can actually increase the amount of resources provided by the company for the users of the free tier, because of a) network effects and b) economies of scale (more users bring more other users; more users cost less per user).
If Github realizes that the free tier is too generous, they can cut it anytime without it being in any way a "tragedy" for anybody involved - having to pay for stuff or services you want to consume is not the "T" in ToC! The T is that there are no incentives to pay (or use less) without increasing the incentives for everyone else to just increase their relative use! You not using the Github free tier doesn't increase the usage of Github for anybody else - if it has any effect at all, it might actually decrease the usage of Github because you might not publish something that might in turn attract other users to interact.
Wikipedia does use Wikipedia, a privately owned organization, as an example of a digital commons.
The ‘tragedy’ that the top comment referred to is losing unlimited access to some of GitHub’s features, as described in the article (shallow clones, CPU limits, API rate limits, etc.). The finiteness, or natural limit, does exist in the form of bandwidth, storage capacity, server CPU capacity, etc. The Wikipedia article goes through that, so I’m left with the impression you didn’t understand it.
> Wikipedia does use Wikipedia, a privately owned organization
The Wikimedia organization does not actually own Wikipedia. They do not control editorial policy nor own the copyright of any of the contents. They do not pay any of the editors.
It is really annoying that you're shifting the goalposts by bringing up Wikipedia (as an example, not the article), which is very much different from Github in many ways. Still, Wikipedia is not a common good in my book, but at least in the case of Wikipedia I can understand the reasoning and it's a much more interesting case.
But let's stick with Github. On which of the following statements can we agree?
Z1) A "Commons" is a system of interacting market participants, governed by shared interests and incentives (and sometimes shared ownership). Github, a multi billion subsidiary of the multi trillion dollar company Microsoft, and I, their customer, are not members of the same commons; we don't share many interests, we have vastly different incentives, and we certainly do not share any ownership. We have a legally binding contract that each side can cancel within the boundaries of said contract under the applicable law.
Z2) A tragedy in the sense of the Tragedy of the Commons is that something bad happens even though everyone can have the best intentions, because the system lacks a mechanism that would allow it to a) coordinate interests and incentives across time, and b) reward sustainable behavior instead of punishing it.
A) Github giving away stuff for free while covering the cost does not constitute a common good from...
1. a legal perspective
2. an ethical perspective
3. an economic perspective
B) If a free tier is successful, a profit maximizing company with a market penetration far from saturation will increase the resources provided in total, while there is no such mechanism or incentive for any participant in a market involving a common good, e.g. no one will provide additional pasture for free if an Allmende (a commons pasture) is already being destroyed by overgrazing.
C) If a free tier is unsuccessful because it costs more than it enables in new revenue, a company can simply shut it down – no tragedy involved. No server has been depreciated, no software destroyed, no user lost their share of a commonly owned good.
D) More users of a free tier reduce net loss / increase net earnings per free user for the provider, while more cattle grazing on a pasture decrease net earnings / increase net loss per cow.
E) If I use less of Github, you don't have any incentive to use more of it. This is the opposite of a commons, where one participant taking less of it puts out an incentive to everybody else to take their place and take more of it.
F) A service that you pay for with your data, your attention, your personal or company brand and reach (e.g. with public repositories), is not really free.
G) The tiny product samples that you can get for free in perfume shops do not constitute a common good, even though they are limited, "free" for the user, and presumably beneficial even for people not involved in the transaction. If you think they were a common good, what about Nestlé offering Cheerios with +25% more for free? Are those 20% a common good just because they are free? Where do you draw the line? Paying with data, attention, and brand + reach is fine, but paying for only 80% of the product is not fine?
H) The concepts of "moral hazard" and "free riders" apply to all your examples, both Github and Wikipedia. The concept of a Commons (capital C) is neither necessary nor helpful in describing the problems that you want to describe with respect to free services provided by either Github or Wikipedia.
Nope, no goal posts were moved, Wikipedia and GitHub are both private entities that offer privately funded free services to everyone, and due to the widespread free access, both have been considered to be examples of digital commons by others. I didn’t make up the Wikipedia example, it’s in Wikipedia being offered as one of the canonical examples of digital commons, and unfortunately for you it pokes a hole in your argument. If your ‘book’ disagrees with the WP article, you’re free to fix it (since WP is a digital commons), and you’re also free to use it to re-evaluate whether your book needs updating.
You seem to be stuck on definitions of ‘commons’, and unfortunately that’s not a compelling argument for reasons I’ve already stated. It’s also unfortunate that every single item you listed contains fundamental terminology flaws, made-up definitions, straw-man arguments, incorrect statements, or opinions.
“Tragedy of the Commons” is a phrase that became an economic term of art a long time ago. It’s now an abstract concept, and gets used to mean (as well as defined by) any situation in which a community of people overusing shared resources causes any loss of access to those shared resources for anyone else in the community. “The tragedy of the commons is an economic theory claiming that individuals tend to exploit shared resources so that demand outweighs supply, and it becomes unavailable for the whole.” (Investopedia) I’ve already cited multiple sources that define it that way, and so far you’ve shared no evidence to the contrary.
There are also tons of examples online where the phrase has been used to refer to small, local, or privatized resources, I found a dozen in like one minute, so I already know it’s incorrect to claim that people don’t use the phrase in the way I’m suggesting.
Even though the phrase does not depend on any strict definition of commons (or of tragedy), none of your argument addresses the fact that what’s common in, say, Germany is not freely available to Iranians, for example. Land is often used in ‘tragedy of the commons’ examples. Hardin’s original example was sheep grazing on “public” land, and yet there is really no such thing as common land anywhere on this planet, all of it is claimed by subgroups, e.g., countries, and is private in some sense. The idea of commons, and even some of the alternate dictionary definitions, make explicit note that the word is relative to a specific community of people. Nothing you’ve said addresses that fact, and it means that ‘Tragedy of the Commons’ has always referred to resources that are not common in a global context. GitHub and Wikipedia are more common than “public” land in America in that global sense, because they’re used by and available to more people than US land is.
What I can agree with is that it’s common for people to mean things like land, air, and water, when using or referring to the phrase, and I agree those things count as commons.
You're confusing public goods with common goods. That's your personal tragedy of the commons.
> “The tragedy of the commons is an economic theory claiming that individuals tend to exploit shared resources so that demand outweighs supply, and it becomes unavailable for the whole.” (Investopedia)
EXACTLY. This is NOT what is happening in the case of Github. As explained plenty of times, Github has the incentive to INCREASE their supply, making MORE available for the whole, if the whole demands MORE. Also, they are a centralized, coordinated entity that can change the rules for the whole flock, which addresses one of the famous coordination problems associated with common goods. They can also discriminate between their contractual partners and optimize for multi-period results to reduce moral hazards and free-riding. It must be stupidity to not see these fundamental differences on the systems level.
> I didn’t make up the Wikipedia example, it’s in Wikipedia being offered as one of the canonical examples of digital commons
Yeah, the example in the article is Wikipedia, not Github. That's your example. All my statements refer 100% to Github and probably only 90% to Wikipedia. That said, there are true digital commons, e.g. the copper cables connecting the houses in your street, or the insufficient number of bands in old wifi standards.
Since Dunning-Kruger has entered the chat, I'm going to leave. Have a good day; you will have a hard time having serious conversations if you do not accept that it helps everyone to favor precise language over watering down the meaning of concepts, like some social scientists and journalists seem to prefer for self-marketing purposes.
> You’re confusing public goods with common goods.
Am I? Where did I do that? The distinction between common and public is defined as whether or not the thing can succumb to tragedy of the commons. If public goods are “non-rivalrous”, then land is not a public good, it’s a common good, right? And “common” land is owned by nation states, or by smaller geographic communities, is it not? Therefore, ownership is always involved and the land is not available for use by people from other nation states, right?
Above, you said “there’s no exclusive ownership of a commons”. But the “commons” land that sheep graze on is generally owned exclusively by a country, nation, state, province, city, etc. I assume what you meant was that no one person or sub-group within the geographical community owns the commons.
> This is NOT what is happening in the case of GitHub.
That’s not true, the article we’re commenting on gave examples of at least three different specific things that GitHub has limited in response to overuse, and the comment that started this thread was reacting to that fact. If they have incentive to increase their supply, why didn’t they actually do it? Logic can’t override history.
> there are true digital commons, e.g. the copper cables connecting the houses in your street
That’s not true, that’s not a commons at all, and not what the phrase “digital commons” means. In the US, the cables are owned by the telecom providers that installed them, they are private property. Maybe there are public cables where you live, but in that case, it seems like maybe you are the one confusing public and common goods. The phrase ‘digital commons’ generally speaking refers to digital goods, not physical goods. (But there is some leakage into the physical world, which is why some digital commons are susceptible to the tragedy of the commons.) https://en.wikipedia.org/wiki/Digital_commons (Do note that GitHub is listed there as an example of a digital commons.)
> It must be stupidity to not see these fundamental differences on the systems level
FWIW, you’ve flatly broken HN guidelines here, and this reflects extremely poorly on you and your argument. From my point of view, I can only interpret this lack of civility to mean you’re frustrated about not being able to answer my questions or form a convincing argument.
GP shouldn't have said something insulting, but I do think it's you who are being obtuse here in not acknowledging that this is at least very different than the field everyone can graze on that gets overgrazed, that is the most simple and widely-accepted type of commons. It's probably not worth arguing semantics at all ("is this a commons?") because there isn't a "Tragedy of the Commons" central authority that could ever adjudicate that. Any definition of commons could be used; the only thing that matters is if the definitions are useful to define what's going on and to compare it to other situations.
In this case, GitHub can very cheaply add enforceable rules and force heavy users to consume only what they consider a tolerable amount of resources. The majority who don't need an outsized amount of resources will never be affected by this. That is why there is no 'tragedy' here.
It would be as if the grazing field were outfitted with sheep-facial-recognition and could automatically and at trivial cost, gently drone-airlift any sheep outside the field after they consume 3x what a normal sheep eats each day. In what most of us think of as a ToC situation, there is little that can be done besides closing the field or subdividing it into tiny, private plots which are policed.
The singular point of debate here from my side has been whether the phrase ‘tragedy of the commons’ applies to cases where the ‘commons’ are owned to the exclusion of some people, and nothing else. I don’t believe I have failed to acknowledge the differences between physical and digital commons, but let me correct that impression now: GitHub certainly is very different from a sheep-grazing field in almost every way. GitHub is even different from Wikipedia in many ways, just like GP said. I am arguing those differences, no matter how large, do not matter purely in terms of whether you can call these a ‘commons’, and I’ve supported that opinion by showing evidence that other people call both GitHub and Wikipedia a ‘digital commons’. If any definition of commons can be used, including privately owned land that is made available to the public, then I think you and I agree completely. The Wikipedia article about this phrase actually points out what I’ve been saying here, that common land does not exist.
There is a central authority on this topic: the paper by Hardin that coined the phrase. It’s worth a read. He defined ‘tragedy’ to be in the dramatic sense, e.g., a Greek or Shakespearean tragedy: “We may well call it ‘the tragedy of the commons,’ using the word ‘tragedy’ as the philosopher Whitehead used it: ‘The essence of dramatic tragedy is not unhappiness. It resides in the solemnity of the remorseless working of things.’”
Hardin did not define ‘commons’, but he used multiple examples of things that are owned to the exclusion of others, and he even pointed out that a bank robber thinks of a bank as a commons. He himself blurred the line of what a commons means, and his actual argument depends only on the idea that commons means something shared and nothing more. In fact, he was making a point about human behavior, and his argument is stronger when ‘commons’ refers to any shared resources that can be exhausted by overuse at all. Hardin would have had a good chuckle over this extremely silly debate.
The actual points Hardin was making behind his phrase ‘Tragedy of the Commons’ were that Adam Smith’s ‘Invisible Hand’ economics, and Libertarian thinking, are provably wrong, and that we should abolish the UN’s Universal Declaration of Human Rights, specifically the right to breed freely, because he believed these things would certainly lead to overpopulation of the earth and thus increased human suffering. The only actual ‘commons’ he truly cared about in this paper is the earth’s space and food supply. The question of ownership is wholly and utterly irrelevant to his phrase.
GitHub adding rules that curtail people does limit some people’s access, that’s the point. How many people it affects I don’t know, and I don’t think it’s especially relevant, but note that in this case one single GitHub user being limited might affect many, many people - Homebrew was one of the examples.
“Tragedy” never referred to the magnitude of the problem, as you and GP are assuming. Hardin’s “tragedy” refers to the human character flaw of thinking that shared things are preferable to limitations, because he argues that we end up with uncontrolled (worse) limitations anyway. His “tragedy” is the inevitability of loss, the irony of misguided belief in the very idea of a commons.
I'm not sure I agree that the Wikipedia article supports your position.
Certainly private property is involved in tragedy of the commons. In the classic shared cattle ranching example, the individual cattle are private property, only the field is held in common.
I generally think that tragedy of the commons requires the commons to, well, be held in common. If someone owns the thing that is the commons, it's not a commons but just a bad product. (With, of course, some nitpicking about how things can be de jure private property while being de facto common property.)
In the Microsoft example, Windows becoming shitty software is not a tragedy of the commons; it's just MS making a business decision, because Windows is not a commons. On the other hand, computing in general becoming shitty, because each individual app does attention-grabbing dark patterns that help the individual app's bottom line while hurting the ecosystem as a whole, would be a tragedy of the commons, as user attention is something all apps hold in common and none of them own.
One of the examples of digital commons in the article is Wikipedia itself, which is privately owned, so now you can be sure the Wikipedia article does back up my claim at least a little.
The Microsoft example in this subthread is GitHub, not Windows. Windows is not a digital commons, because it’s neither free nor finite. Github is (or was) both. That is the criteria that Wikipedia is using to apply the descriptor ‘commons’: something that is both freely available to the public, and comes in limited supply, e.g. bandwidth, storage, databases, compute, etc.
Wikipedia’s article seems to be careful not to discuss ownership or define the tragedy of the commons in terms of ownership, presumably because the phrase describes something that can still happen when privately owned things are made freely available. I skimmed Investopedia’s article on the Tragedy as well, and it similarly does not seem to explicitly discuss ownership, and it even brings up the complicated issue of the lack of international commons. That’s an interesting point: whatever we call commons locally may not be a commons globally. That suggests that even the original classic notion of tragedy of the commons often involves a type of private ownership, i.e. overfishing a “public” lake means a lake owned by a specific country, cattle overusing a “public” pasture means land owned by a specific country, and these resources might not be truly common when considered globally.
What use of GitHub are you talking about? The use of GitHub by @c-linkage at the top of the thread was, in fact, based on GitHub being free to use. And GitHub’s basic services are free to use. I really don’t know what you mean.
Your oft-repeated customer vs product platitude doesn’t seem to apply to GitHub, at least not to its founding and core product offering. You are the customer, and GitHub doesn’t advertise. It’s a freemium model; the free access is just a sort of loss leader to entice paid upgrades by you, the customer.
Why do you blame MS for predictably doing what MS does, and not the people who sold that trust & FOSS infra to MS for a profit? Your blame seems misplaced.
And out of curiosity, aside from costing more for some people, what’s worse exactly? I’m not a heavy GitHub user, but I haven’t really noticed anything in the core functionality that would justify calling it enshittified.
Probably the worst thing MS did was kill GitHub’s nascent CI project and replace it with Azure DevOps. Though to be fair the fundamental flaws with that approach didn’t really become apparent for a few years. And GitHub’s feature development pace was far too slow compared to its competitors at the time. Of course GitHub used to be a lot more reliable…
Now they’re cramming in half baked AI stuff everywhere but that’s hardly a MS specific sin.
MS GitHub has been worse about DMCA and sanctioned-country-related takedowns than I remember pre-acquisition GitHub being.
I don't blame them uniquely. I think it's a travesty the original GitHub sold out, but it's just as predictable. Giant corps will evilly make the line go up, and individual regular people have a finite amount of money, for which they'll give up anything and everything.
As for how the site has become worse, plenty of others have already done a better job than I could there. Other people haven't noticed or don't care and that's ok too I guess.