Yeah well, you're anonymous, so you might as well be someone with a huge ego who got their idea shut down by Carmack once and is still bitter about it. More often than not, when people think they "prove something wrong with data", it's more that they're "taking some data and interpreting it in a way that fits their standpoint".
Carmack has proven enough times in the past that he's able to deliver, that he can push technology, knows what's possible and what isn't, and can wrap up a product. "Wading into fields he has no experience in" sounds pretty unlikely for VR given his past work. And I wouldn't consider the Oculus Go a failure, more like ahead of its time and released too early. A prototype of the Quest. But I guess now it's easy to claim everything that's bad about the Go was Carmack's work and everything good about the Quest and Quest 2 was someone else's.
>who got their idea shut down by Carmack once and is still bitter about it
Nice fantasy you created there to support your argument. Have you heard of "don't shoot the messenger"?
I admire Carmack as much as all the other hackers around here do. I don't sympathize with many of Meta's practices. Still, it's entirely plausible that GP's experience holds some truth.
I've been around the equivalent of people like Carmack in academia, and all of them have their dash of arrogance and petulance; sometimes this leads them to make really bad decisions. Also, engineering skills and management skills are different things. And there's the Peter Principle as well, from which Carmack is not exempt either.
The person’s whole story was entirely based on their anonymous word. The follow-up comment reads to me like a narrative way of pointing that out.
We don’t really know, I guess, what happened internally. But:
* Carmack has tossed some grenades as he left, so if there’s a real story there I guess we’re likely to hear about it from some non-anonymous sources soon enough if he was a real pain.
* He’s gone now, so we’ll see to what extent he was holding them back shortly.
I bet we hear nothing and they never release anything, but I won’t claim to have an uncle who works at ~~Nintendo~~ Facebook.
The story from Carmack is also based on his word. Unless you've worked with him directly, everything you know about John Carmack is based on someone or other's words.
I'm not obsessed with Carmack, so I honestly don't know what has been said about him one way or the other. But really this is just a coat rack to hang a point about the epistemology of the argument.
>Yeah well, you're anonymous...Carmack has proven enough times in the past that he's able to deliver, that he can push technology
In a follow up agreeing with him
>The person’s whole story was entirely based around their anonymous word
And later still
>A word with a name behind it.
So what's being implicitly said here is "I judge what's true based on the authority of the source". The premise is John Carmack is an asshole, and the attempt to refute it is "I have it on good authority he isn't", and when you dig into that claim the authority is either Carmack himself or a tech news org article. Well, when you stop and think about it, tech news has no interest in learning or publicizing if he's an asshole.
Unless you worked with him, everything you know about Carmack is just something you read somewhere. But there is no root of the reputation tree. Reputation comes from nowhere. It's all just bits of text being trusted because they look like other bits of text you previously trusted. Nothing ever grounds the Carmack story in something else you can observe. We have no way to test whether we are in a PR-manicured version of the truth or not. Claims about him are both unfalsifiable and inconsequential, and reduce to insisting that one preferred source of narrative is more reputable than the others.
As far as I know, he only exists as a concept which is written about in websites I frequent. I'm a John Carmack Truther. There is no John Carmack. The CIA made him up as part of MK Ultra II. I read it on a very reputable online forum.
>As far as I know, he only exists as a concept which is written about in websites I frequent. I'm a John Carmack Truther. There is no John Carmack. The CIA made him up as part of MK Ultra II. I read it on a very reputable online forum.
That's utterly ridiculous. John Carmack is real, but he's actually an alien from the planet Ka'vi. I know this is true because I read it on an actually reputable online forum (unlike your "reputable" forum). I know my preferred forum is reputable because its other postings agree with my opinions.
> The story from Carmack is also based on his word. Unless you've worked with him directly, everything you know about John Carmack is based on some or other's words.
As an admirer I saw a few videos of Carmack and immediately pegged him as NPD, obviously so. GGGP's post supports my observations, re: bully, disparaging, can't admit when he's wrong, can't acknowledge the accomplishments of others, all of which predicts what we don't see: deep anxieties, extreme self-criticism, long-held grudges, envy, etc. And I respect Carmack for leaving Meta, but it's hard to ignore that he joined in the first place when Meta already had a CEN (Chief Executive Narcissist).
I think the point was that any large enough company will have a ton of politics related to decisions around technology, so one person's anonymous perspective shouldn't carry much/any weight.
Academic experience writing papers vs actual industry experience to the point of creating new technologies and new markets almost by himself are definitely not equivalent.
So, as much as I would be a bit judgemental about academics' opinions, I would definitely listen to Carmack.
Academia can mean a lot of things, some of which have obvious parallels to industry. He may deserve a pedestal in some respects, and I'd listen to him too, but it should be tempered.
Everyone can and will make really bad decisions, but in my experience, owning up to mistakes and taking responsibility, as opposed to playing them down and revising history, is inversely proportional to, well, how clever people consider themselves to be. The "well, that was intentional/expected/irrelevant, since it was really X instead of Y we did" line is a bit worn by now. Painting broadly, generalizing, etc., of course.
The principle "don't shoot the messenger" applies to real-life messengers.
It does not apply to anonymous 'messengers' on the Internet who spread stories that can't be verified, and who themselves are of unknown credibility/origins, and who could have undisclosed biases/prejudices against the person they are criticizing.
At the risk of upsetting this thread’s balance and reducing it to negativity: I prefer your parent comment’s interpretation of the Go. “Ahead of its time”? Technology is the last space where a newfangled product would lose momentum by being released too early.
I’m open to being proved ignorant here. Can you think of some examples where tech was obviously ahead of its time and not accepted?
Subscription music services like Rhapsody were providing what Apple Music does now 15 years ago, and they died out (similarly Microsoft’s Zune service). Maybe this is what you’re saying? All the same, I would chalk these examples up to poor marketing, management, and product specifics. Apple Music didn’t release its service at a better time. They just put a lot more effort into it, and it delivers the service better. (Their phone ecosystem plays a big part in this.) This example could be extended into saying that the Go just wasn’t good enough (thus: Carmack failed).
FWIW: I’m a Carmack fan, and I base a lot of how I use Emacs on his wisdom accumulated over the years. For example, his recent shift to VSCode has inspired me to think in that direction.
> Can you think of some examples where tech was obviously ahead of its time and not accepted?
Uh yeah, VR itself as a concept and models of VR have been live since the 80s but were especially hyped in the 90s but never went anywhere beyond amusement parks and arcades. And no one wanted to touch VR in the 00s despite huge leaps in processing power.
I would even argue the original Macintosh was ahead of its time, maybe because it was too expensive and too hard to upgrade. As a result, DOS and Windows and IBM clones took the PC market, despite coming later and initially being inferior.
I enjoy this as a friendly/elucidating discussion and don’t want to annoy or antagonize you (just don’t respond if I do).
I do appreciate your take on the original Macintosh.
VR has never been ahead of its time in that it’s never had a time. It still hasn’t made its way into any sort of popular acceptance. The gaming industry is the only space in which it has made significant strides. If VR circles back around to popular acceptance of something like Carmack’s vision (like the Mac has done with Jobs’s), your point will be valid.
As it stands, Carmack’s vision failed, and Meta continues to experiment and R&D with different directions. Carmack’s decision to leave more closely aligns with the ideas expressed in the comment that started this IMO.
I’m literally invested in Meta’s endeavors here. (FMET through Fidelity Investments.) The previous sentence is just communicating my bias that I think they have the right idea in the long run.
Carmack's vision culminated in Quest 2, which is the only hardware Meta has produced that any significant number of people care about.
Instead of Macintosh, I might point at Commodore. Affordable hardware with success in some niches like video production, but poor broader acceptance beyond gaming markets. Weirdly out of touch management with a yearning to be accepted by stuffy business types, but completely misjudging wants and needs. With Quest Pro I get vibes of the Commodore 128, a game machine trying and failing to be a Serious Business Device.
Tbh if oculus weren’t associated with Facebook in a meaningful way I’d be all over it. But it is so I avoid it. The technology works fine but is a commercial failure, that’s not wholly Carmack’s fault.
Yea it is a device that goes on your face, puts cameras in your room, and creates a pseudo-reality for you. Who in their right mind would trust Facebook with that?
While I completely agree with you, I think it’s important to point out that if it weren’t sold by Facebook, it would be 2.5x the cost, and then most people wouldn’t touch it, as it would be too big of an investment.
I think, in terms of the hype, VR was going to be the next big thing in gaming, and then maybe not just gaming after that, but other applications. So I was expecting it to become a required peripheral like a headset or a good mouse & keyboard.
But I don’t feel like I’ve missed anything by not having a VR headset. The product direction was very clear for Oculus, and it had lots of buy-in from devs… then it was bought by Facebook and became so much muddier. (“We’re going to use it in the metaverse, for boring work stuff; VR will be everything.”)
You need an exciting killer app for these things and they need to be commodity hardware. I’m guessing the best thing anyone could do for VR is give up all their patents.
It is probably helpful to define 'commercial failure'. In the sense that it sold a lot of units, it is a success; in the sense that it made any money for the company which produced it, it is a failure. So, it could be taken different ways depending on how the term is defined.
Between Oculus, Vive, and other various competitors, VR has been successful in many ways that it wasn't able to achieve 20 years ago. If you set the bar so high that it needs to be as successful as the personal computer or the mobile phone, sure. But I wouldn't call Oculus or modern VR a failure. It's a niche success.
>I would even argue the original Macintosh was ahead of its time
You can argue about the Mac, but certainly the Lisa was. Early laptops like the Data General/One as well (although in that case there were business issues too).
As for streaming music, to go mainstream it probably needed cheap enough and fast enough cellular service. Of course, ripped, purchased, and umm acquired local copies of music also had a place once cheap enough portable devices with sufficient storage were available.
The company I worked for had a Xerox system. It looked like an 860, but may have actually been more modern.
Now that was ahead of its time.
We also had Osborne and Kaypro computers, but the 860 was arguably the inspiration for the Mac. The operating system presented a mouse (actually, I think it was a touchpad)-driven, icon-based GUI. I remember seeing the “trash can,” on the bottom right (I think). I also seem to remember folder icons.
But that was from a brief, 5-minute (or less) peek, 40 years ago.
They didn’t let us mensch engineers near the thing.
>Old school hackers, military generals, special forces paratroopers, and space shuttle astronauts who are sensitive to social status use a GRiD Compass.
>Development began in 1979, and the main buyer was the U.S. government. NASA used it on the Space Shuttle during the early 1980s, as it was powerful, lightweight, and compact. The military Special Forces also purchased the machine, as it could be used by paratroopers in combat.
>Along with the Gavilan SC and Sharp PC-5000 released the following year, the GRiD Compass established much of the basic design of subsequent laptop computers, although the laptop concept itself owed much to the Dynabook project developed at Xerox PARC from the late 1960s. The Compass company subsequently earned significant returns on its patent rights as its innovations became commonplace.
I asked Glenn Edens, who co-founded GRiD, about a story I heard about the GRiD a long time ago, and here's the discussion:
>Not a solution for people who are sensitive to social status.
>Old school hackers, military generals, special forces paratroopers, and space shuttle astronauts who are sensitive to social status use a GRiD Compass. [...] I can't find a citation and don't know if it's true, but decades ago I heard a rumor that a Mossad agent's magnesium alloy GRiD stopped a bullet! Try that with a MacBook Air.
>Man in a Briefcase: The Social Construction of the Laptop Computer and the Emergence of a Type Form
>Abstract
>Dominant design discourse of the late 1970s and early 1980s presented the introduction of the laptop computer as the result of 'inevitable' progress in a variety of disparate technologies, pulled together to create an unprecedented, revolutionary technological product. While the laptop was a revolutionary product, such a narrative works to dismiss a series of products which predated the laptop but which had much the same aim, and to deny a social drive for such products, which had been in evidence for a number of years before the technology to achieve them was available. This article shows that the social drive for the development of portable computing came in part from the 'macho mystique' of concealed technology that was a substantial motif in popular culture at that time. Using corporate promotional material from the National Archive for the History of Computing at the University of Manchester, and interviews with some of the designers and engineers involved in the creation of early portable computers, this work explores the development of the first real laptop computer, the 'GRiD Compass', in the context of its contemporaries. The consequent trajectory of laptop computer design is then traced to show how it has become a product which has a mixture of associated meanings to a wide range of consumers. In this way, the work explores the role of consumption in the development of digital technology.
>[translated:] The Grid Compass was made of black lacquered magnesium alloy.
>Among its most remembered features, there is the fact that the paint went away after a while, due to the weight and dimensions that did not allow it to be too delicate with its transport. And so the dull black splintered, revealing the shiny metal beneath.
>Grid Compass - Bill Moggridge Design
>The Grid Compass was a status symbol, the flag of that tribe of people who wanted to show the world that they can never really disconnect from work.
>Owning it was cool.
>But even cooler was having chipped it, because it was the unmistakable sign that one not only possessed that thing, but actually used it.
The GRiD was so well built, and they were so popular with the military, that rumor was totally believable.
This has some stories about spooky GRiD users, like Admiral John Poindexter, who was a bit of a hacker:
>Pioneering the Laptop: Engineering the GRiD compass
>Introduced in 1982, the GRiD Compass 1100 was likely the first commercial computer created in a laptop format and one of the first truly portable machines. With its rugged magnesium clamshell case (the screen folds flat over the keyboard), switching power supply, electro-luminescent display, non-volatile bubble memory, and built-in modem, the hardware design incorporated many features that we take for granted today. Software innovations included a graphical operating system, an integrated productivity suite including word processor, spreadsheet, graphics and e-mail. GRiD Systems Corporation, founded in 1979 by John Ellenby and his co-founders Glenn Edens and David Paulsen, pioneered many portable devices including the laptop, pen-based and tablet PC form factors.
>Key members of the original GRiD engineering team -- Glenn Edens, Carol Hankins, Craig Mathias and Dave Paulsen -- share engineering stories from the Wild West of the laptop computer. Moderated by New York Times journalist John Markoff.
(At 32:37 they mention an external 5 1/4" floppy disk peripheral that was returned for service with a bullet hole, and the "Scrubbing Bubbles" software they wrote for the government to erase the bubble memory in case of emergency.)
Glenn Edens sent the following messages at 11:16 PM
Hello Don, I know that rumor, I can neither confirm nor deny :)
We got a lot of returned gear with bullet holes or shrapnel damage of odd kinds.
I doubt GRiD's use had anything to do with social status though - it was more about it was the first laptop, it was rugged (we over-engineered the heck out of it), it had an amazing software development environment (you could actually write SW for it on it beyond BASIC), usually folks rag on the price, however if you fully configured any other computer of the day the price was not all that different - plus no one paid retail in those days, thats what everyone forgets :)
I love all the references you found!
I'll also add that it is a myth that the military and Government were our biggest customers, they were about 25%, our biggest early customers were banks, audit firms, engineering firms, oil exploration, etc.
The first machine went to Steve Jobs (he paid for it, it was a bet he and I made), the second machine went to William F. Buckley (he paid for it as well). The one thing I regret is that we didn't release the Smalltalk system we did for it (getting a mouse was not easy in 1982, the only producer at that time was Tat Lam and all his production went to Xerox (Star prototypes as I remember)). A funny story: for Apple to get a mouse prototype for the Lisa I had to go "appropriate" one from Xerox PARC - with tacit permission, everyone forgets Xerox was an investor in Apple (Trip Hawkins kindly tells that story from time to time).
So how are you doing?
Larry Ellison was an early buyer as well to use for a sailing race computer - I was told it replaced a DEC minicomputer that was being used onboard, saving a lot of weight and power draw :)
I can add it wasn't Mossad that I know of, it was closer to home, although I think we may have discussed that long ago - it was a US Agency :).
Don wrote:
So I’m reading between the lines that it DID stop a bullet, but it was somebody in the US, not the Mossad. Is that why Reagan survived his assassination attempt??! ;)
I still believe the social status was more like the unintended effect, not the primary cause, of people owning a GRiD, because they certainly were bad-assed computers.
Maybe MythBusters could do an experiment to find out if a GRiD will stop a bullet. Hopefully not a working one though; those should be treated with care and respect and not shot at.
Wow it would have been amazing to run Smalltalk on that thing. As it was so inspired by the Dynabook, did Alan Kay ever get to play with one?
Glenn replied:
That’s the story. I never heard it had anything to do with Reagan though. Over the years we did get multiple units with all sorts of crazy damage, much of it was repairable, some was not.
Well we certainly did nothing to counter the image, although I think that really came later. In that time (we started shipping in 1982) even having a computer was a big deal, no matter if it were an Osborne or a GRiD. Although the Compaq et al. sewing-machine-sized computers shipped well into the late 80’s. We really didn’t have any serious competition until 88’ or 89’, so nearly five years after we started shipping. For the first 3 years we were always catching up to the backlog.
Indeed :). We definitely found ‘debris' inside the machines that were returned to see if they could be repaired, obviously it would have to do with what size bullet and angle of incidence.
The Dynabook was the inspiration for sure. Yes, Alan Kay played with several GRiD models, as did Dan Ingalls. The Smalltalk implementation on the GRiD was pretty good for the day; the 8086 being a real 16-bit machine made a difference. The Alto II was still a bit faster, but not by much. If a mouse were readily commercially available we would have shipped it. It was a little hard to use on the small screen, so you wound up moving windows often.
Were the GRiD laptops, which I remember reading about in Byte Magazine back in the day, waterproof? I believe decades of experience with portable computers suggest that might be a more important feature than being able to stop a bullet. Depending on what kind of company one is keeping.
I've been revisiting it lately, and Byte actually contains a vast collection of things that didn't make it largely because they were ahead of their time. Great stuff.
Expensive and hard to upgrade are both separate from being ahead of your time design-wise. (Apple had healthy margins on Macintosh from the start, and the 128k no-slots aspects were both argued against by people on the team. I guess there's a sense of "ahead of its time" that fits, where Jobs consistently aimed for more "upscale consumer" type products but wasn't yet able to make that work for a big market.)
I've always thought that TiVo was way ahead of its time. The company is still alive but it feels weird to talk about it in present tense when we've got Roku, Chromecast, Firestick, and Apple TV. Even the era of cable provider DVRs made me feel like TiVo was ahead of its time!
Tivo nailed the user experience which is why it took off. In the early years, the response time on the interface was nearly instant for everything. This made it delightful to use because it felt like an extension of your intentions. Today, even with all the content in the world available, there are far more delays and wait times because the content is streaming and not local. Even YouTube TV, which could have the same 10ms response time as Stadia, is slow in many places.
The idea of the actual device seems very tied to a particular time, not ahead of it. The point was to record broadcast TV (so, reliant on the time when broadcast TV was the main way of getting TV) and the ability to skip ads (nowadays any streaming service worth watching doesn’t have ads anyway).
footnote:
The TiVo UX was superb but, for my money, ReplayTV was superior, technically.
And, worth mentioning, its UX was not lacking in any perceivable way; OK, maybe less flair & eye candy than TiVo, but also really, really good in its discoverability & daily usability.
> Can you think of some examples where tech was obviously ahead of its time and not accepted?
Mobile devices with clunky resistive touchscreens come to mind. The iPhone was hardly the first "smartphone," but Jobs's key insight was to have people sitting by the river waiting for decent touchscreen technology to come floating by. When capacitive multitouch happened, it was a classic example of apparent "good luck" being equal to "preparation meets opportunity." Musk is obviously trying to camp the same spawning grounds with Neuralink.
Teletext might be another example, as the predecessor to the WWW. Putting a lot of money into advancing Teletext development would have resulted in WebTV at best, and more likely just an expensive waste of time.
Any of dozens of personal computer models in the 1980s, some quite advanced, that weren't made by Apple or IBM.
Navigation and infotainment in cars -- Buick's early CRT touchscreen and Honda's "electric gyrocator" for navigation come to mind. There was no point trying to do either of those things at the time.
Minidisc as an early embodiment of advanced DSP techniques for lossy audio compression. ATRAC could have been MP3 but wasn't, because Sony.
Analog laserdiscs as a home video format. It was the right basic idea, and boasted some exotic technology under the hood -- but disc-based A/V needed to wait for digital techniques before it really made sense.
Not hard to come up with examples that answer this question, for sure.
>Analog laserdiscs as a home video format. It was the right basic idea, and boasted some exotic technology under the hood -- but disc-based A/V needed to wait for digital techniques before it really made sense.
I'm not sure it really needed to; analog laserdiscs were a huge improvement over existing videotapes, at least for distribution of movies (not for recording, obviously). The main problem was the price: they were expensive as hell. Not sure if that was due to technical limitations, or the makers pricing it high because it was a "premium" format, and so they priced themselves into irrelevance and obscurity. I've seen this with many other technologies over the years: someone introduces something really cool, but it's so damn expensive no one buys it, so it goes nowhere, and eventually some cheaper alternative comes along and becomes the new standard.
Stadia is a great example. I'm still using it today, before the shutdown, and it's amazing how it actually got me into playing games again. It's fantastic for casual games with friends, since everyone can play no matter their hardware, and the multiplayer features are great for this.
It works and it is fantastic, but it's ahead of its time and most people don't know what it is. That, and Google's mismanagement of the service. If it had been an accepted thing, Google wouldn't have had to push it so hard; but since it wasn't, they did, and they failed.
I don't know if we can really pin the blame on that for Stadia. Maybe portions, but I also suspect that a big reason for Stadia's "failure" wasn't necessarily Google's/Stadia's fault. Lots of homes still have really bad internet connections. I tried Stadia, and I think the concept is great and most everything is there, except I can't get a decent enough internet connection from any ISP in my area to make it usable at home. But I know people in places with really good internet connections, and I had heard nothing but good things about it before I tried it.
As someone that had stadia and a good enough connection for it, the technology was honestly really impressive.
The problem for me was that it was yet another platform. I already have many games I'd like to keep playing, I don't want to buy another copy that can only be used on stadia. I don't want to buy anything on stadia and then only be able to stream it while I still have a gaming pc.
For sure, I definitely see that, but I think Stadia solves a different problem. If you are already heavily invested in something like Steam and have the hardware, Stadia doesn't really solve your problem. If you don't have the hardware, maybe the computer you can afford is a $200 netbook, or you can't travel with your gaming PC, but you can pay the monthly fee and occasionally the cost of a game, then Stadia could solve the problem, barring a good enough internet connection. Which, when I tried it, was my case. I didn't have a gaming PC, I had just bought an M1 MacBook Air, so I didn't really wanna dish out more money for hardware. Stadia could have allowed me to game on my M1, but once again, bad internet connection. Now I play locally and have games on Steam, since my job provided me with a home gaming PC and they don't care what I do with it so long as I can work from home with it (no company spyware).
But yea, if you have the hardware already, the value add wasn't really there. But for the broke college student, or broke adult who can't justify dropping hundreds/thousands today but can eat a few bucks a month. Or someone who travels often for personal or work, then the value add is there.
> Can you think of some examples where tech was obviously ahead of its time and not accepted?
Smartphones. Microsoft and Symbian were at least 7 years ahead of Apple. The manner in which they squandered the opportunity aside, most people simply didn't care about having email on their phone.
Most people still don't care about having email on their phone: that's not what they use their phone for most of the time. They use it for text chats, taking photos, playing games, navigating, etc. I'd say email ranks pretty low in importance.
Those other companies failed because they had clunky UIs and thought that most people really cared about typing emails on their phones; they didn't. Apple finally proved that people want something easy to use that does things they want to do (which isn't email).
> Can you think of some examples where tech was obviously ahead of its time and not accepted?
Well I think we might have different ideas of what "ahead of its time" exactly is. I would include - and I think I hinted at that with "released too early" - things that simply weren't refined enough technically, as well as things that relied on other technology that simply wasn't capable, widespread or accepted enough at their time.
So regarding Rhapsody, for example: it was released in 2001, a time when the majority of people were still on dial-up, IIRC, and even if you were one of the lucky ones with a DSL connection, you might've had a metered connection, so music streaming was just... ahead of its time.
The Go was certainly not "ahead of its time". It was a standalone version of the GearVR, which was released three years earlier. At the same time Oculus released the Go with 3DOF tracking, Google released the Lenovo Mirage Solo with 6DOF tracking.
That said, there was nothing fundamentally wrong with the Go. It was, and still is, the cheapest entry point into VR. The lack of features made it much more lightweight and comfortable than its successors, which also cost double what the Go did.
The only real problem with the Go is that Facebook didn't continue that line of product. There is plenty of room for a 3DOF/2D content focused headset, but Facebook never really cared about that area of VR.
Even if the statement were true, it’s not like they hired some anonymous guy, they knew what they were getting. Don’t hire a passionate guy like that if you don’t want him to concern himself with your company.
>Carmack has proven enough times in the past that he's able to deliver, that he can push technology, knows what's possible and what isn't, and can wrap up a product.
In the finance world, lesson number one is that past performance is not a predictor of future results.
I don't care how much of a virtuoso you are, if you clash with culture, you're fucked. I'm just coming out of a similar stint where the best I could do was hold off a predilection toward toxic culture norms long enough for processes to materialize. To support the business in spite of it.
So I know exactly the kind of forces he was probably working against. It's rather thankless, draining, and exhausting in a way sleep doesn't help with.
It's often bidirectional as well, so there's a trick to figuring out when it's time to bounce.
Read the John Romero stuff. Even Carmack explains in the Lex Fridman interview how badly he treated Romero. Carmack also presents enough in that interview to suggest this take is quite likely correct.
Especially considering that there are two (unrelated) Carmacks: John and Adrian, both cofounders of id software. As for John Romero, he did a lot of "stuff".
How is it confusing? The way Carmack treated Romero is famous, and the recent Lex Fridman interview of Carmack even has it admitted by Carmack. Carmack, in the same interview, explains how he is an asshole to people that work(ed) with him at Meta.
Listen to the interview, and see if Carmack sounds like a reasonable boss, able to get the best from his staff, or like someone who would cause serious friction. I'd pick the latter, from his own description of how he treats people.
He's a good programmer. He'd be absolutely terrible to work with or for.
Past experiences do not mean future success. This isn't even about Carmack: past 'heroes' end up failing in their decision-making in the future many times, and they were followed for no other reason than 'they have a track record'.
Past experience is not a perfect predictor, but it's still much better than almost anything else. I'm pretty sure you'd feel safer going into an operation if the surgeon said "I've done this 100 times now" instead of "This is my first time with this procedure".
Of course, it matters if the experience is directly relevant, and that's where hero worshiping often gets it wrong.
Let's say we have 2 people in two different walks of life. Jim and Alice.
Both of them are entrepreneurs and like doing startups. Both of their goals are to take a startup from idea to a $1 billion+ IPO in 2 years, exit, and then start the next startup. If they don't reach a $1 billion IPO, they just exit.
After 20 years, Jim and Alice have both attempted 10 startups.
Jim has reached the goal 2 out of 10 times. While Alice has reached the goal 8 out of 10 times.
Would it be a fallacy to bet on Alice if you had to invest in either Jim or Alice's startup?
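One way to formalize why the bet isn't a fallacy: treat each track record as evidence about an unknown underlying success rate. A minimal sketch using a Beta-Bernoulli model with a uniform prior (the model and function name are my own framing, not anything from the thread; the numbers are the hypothetical ones above):

```python
def expected_success_rate(successes, trials, prior_a=1, prior_b=1):
    """Posterior mean of a Beta-Bernoulli model with a Beta(prior_a, prior_b) prior.

    With a uniform Beta(1, 1) prior this is Laplace's rule of succession:
    (successes + 1) / (trials + 2).
    """
    return (prior_a + successes) / (prior_a + prior_b + trials)

jim = expected_success_rate(2, 10)    # 3/12 = 0.25
alice = expected_success_rate(8, 10)  # 9/12 = 0.75
```

Under these assumptions, betting on Alice is just updating on evidence, not a fallacy; the GP's point survives only to the extent the underlying rate itself drifts over time.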
That is an indication of why the particulars are important, but not defining what makes the argument fallacious. For instance 'ad hominem' is a fallacy because attacking a person making an argument doesn't make the argument incorrect. Relying on past behavior to indicate future success is also what you just did, but you were more specific about the inputs.
I think it's hard to prove without a very large statistical data set: we see that someone might IPO 8 out of 10 times, but over the next 50 years (if that were possible) it might be 0 out of the next 10.
IMO the reason is that things change. People change. Markets change. The world just doesn't stay static. But our tendency is indeed to trust people who did something and I would probably also trust a person more given certain specifics as above.
I say this as someone who cares about language, and who has no dog in this Carmack dispute.
I think you are overweighting a fairly simple grammatical error. The commenter expresses themself clearly and logically. It is possible that they don't speak English as a first language, or that they simply are not that careful about making grammatical mistakes. Not everyone is as pedantic about language as you or I may be.
Unreasonable people can write grammatically, and reasonable people can write ungrammatically. I think it is better to judge an argument by its reasonableness.
I think many people would consider your response impolite and unkind to the original poster. Surely you do not want to shame someone for their lack of mastery with the English language? Surely you would rather judge an argument on its merits?
May I suggest a last question: could you see yourself reading this comment to the original poster face-to-face? Does it not seem rude and condescending to imagine yourself doing that?
> It is possible that they don't speak English as a first language
This is more of a fun side note: it's more likely that they're a native speaker. People who learn English as a second language generally don't make the their/there mix-up.
Other keyboards are seriously annoying by either not having prediction, putting it behind late T9s, or having predictions that seem to be made by someone who almost actively tries to make me look stupid.
English is the only language I know and my international friends take great pleasure in correcting my grammatical errors. I've learned a great deal about my mother tongue from them!
I've observed that in myself too (English is my second language), and I even wrote comments like that in the past, but after 20 years I noticed that I started making those errors myself, which sucks.
I'm guessing that when you read people making this mistake over and over (I've even seen it done in news articles), your brain starts equating them :(
I'm thankful for those people correcting it, although I think it is a losing battle.
That is a fun side note :) Do you have a source for this? I'd love to read more about it. I assume it's because when you're actually taught this specifically, you remember it, as opposed to native speakers who "learn" the spelling via osmosis or something.
Native speakers learn the language at a time when they can't read or write, so they have to rely on their listening. Non-native speakers on the other hand usually first see the language written down, and then hear / pronounce it, and connect the writing with what they hear.
If I had a penny for every time a native speaker wrote "would of" instead of "would have" in forums, I'd be a billionaire. "Their" / "They're" / "There" is also common.
But the funny thing is, I noticed I would make similar errors after being immersed in a native environment for a few years' time. Somehow I just say to myself what I want to write, and the slip-up happens. So native speakers are more prone to this, but it's not only their privilege!
I observed it on myself, although after some time I started doing it too.
My belief was that it's because English is not spelled the same way it sounds, so people who learn it are forced to memorize pronunciation and writing separately.
Going to reply on this comment since it's a thorough response to mine.
Look; I'm seeing a lot of reasoning across comments from non-native language, keyboard input, autocorrect, and so forth.
None of this changes the fact that the usage is just flat out wrong. Have we become so soft in society that nothing can be pointed out because of speculative reasons?
If it's a 2nd language, learn the language. If it's the keyboard, get a better keyboard. If it's autocorrect, double check what you write. Stop making excuses for everything.
All these cries for why we should accept there/their/they're uncontested are no doubt a reflection of the frustration Carmack must have experienced, if HN is any indicator of the FAANG workforce. John is known to be very direct and unapologetic himself, and here y'all are losing your minds over a slight criticism. It's no wonder.
I would suggest that it's not necessary to speculate too much about why they made a grammatical error. There are many possible reasons. For instance, I suggested that they may simply be less pedantic/careful about grammar. They may simply care about the form of their expression less than you or I do.
I think judging a hypothesis by its form/expression is not a great way to get at the truth. If a heuristic has to be used, then probably tone, coherence, and even-handedness are better than grammatical correctness. Those are at least closer to the substance of the argument.
I suggest that evaluating arguments on the basis of form/expression will not help you get at the truth.
It is your choice whether to be aesthetically dissatisfied by grammatically incorrect English. Many would consider that pedantic, though I might have a modicum of sympathy for you. However, I think the error you've made is to promote aesthetic displeasure into distrust for the OP's reasonableness.
I do not know about others, but I do not think I am losing my mind about anything. I suspect that most direct and unapologetic people have faith in the substance of their arguments, and would be frustrated to be judged using low-signal heuristics like grammatical correctness.
I only add that the post above invokes some wildly spurious logic in counting the 5 instances of the same mistake as if they were 5 different mistakes. Such a basic error really makes me question his general reasoning ability.
What a fatuous argument. Have you considered that they may have used speech to text to dictate there (sic.) response from a mobile device, or that perhaps their (sic.) not a native speaker?
I find that the people who are overly concerned about semantics tend to be the people who have the least to offer in terms of substance. The idea that you can draw a correlation between one's technological aptitude and the inability to distinguish between various possessive adjectives is patently absurd.
Here's a pithy quote I created just for you: "it doesn't matter how many languages you can speak if you have nothing to say."
Or just be dyslexic, or have any of a myriad of other disorders, like ADHD, that may affect such minor grammatical rules yet not change the likelihood that they could be a senior Meta engineer?
> trying to be convincing on technical issues when you can't understand fundamentals of English is not persuasive at all.
Not to disprove your point, but there are plenty of very technically capable people who learn English as a second, third, or fourth language. In fact, I would say that in technical settings, such as here, this is statistically more common than a native English speaker with poor writing skills.
"There is/are" and "their" sound very different in other languages, so I would say that people who speak English as a second language generally make these types of mistakes less often than native speakers, who learned to speak English many years before writing it. (We make other types of mistakes more often, though.)
As a non-native English speaker, I'd like to add an n=1 "experience" to your n=1 "would say": the English is in my head first and then in writing. So at the time of writing, "there" and "their" already sound alike and can be easily mixed up.
This is true, but I (and you) learned to say and read/write these words at the same time, so we have an advantage over native English speakers in differentiating them.
At the same time the poster had a larger vocabulary than I have (which is true for native speakers generally, as I try to stay within simple English).
Likely. I can't find it now, but there was an HN story not long ago where someone used fairly rudimentary techniques to identify former/alt HN accounts based on stylometric similarity. It worked VERY well.
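A minimal sketch of how such stylometric matching can work, using character n-gram frequency vectors and cosine similarity (the function names are my own invention, not the tool from that story, and real systems add features like function-word frequencies and punctuation habits):

```python
from collections import Counter
from math import sqrt

def char_ngrams(text, n=3):
    """Count overlapping character n-grams: a crude stylometric fingerprint."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine_similarity(a, b):
    """Cosine similarity between two sparse count vectors (Counters)."""
    dot = sum(a[g] * b[g] for g in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def same_author_score(posts_a, posts_b):
    """Pool each account's posts into one n-gram profile and compare them."""
    return cosine_similarity(char_ngrams(" ".join(posts_a)),
                             char_ngrams(" ".join(posts_b)))
```

Even something this simple picks up habits like "would of" and comma placement, which is why deliberately scattered errors or a translation round-trip can plausibly throw it off.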
If I were going to post from a throwaway account for some reason, I would probably launder it through an intermediate language on Google Translate for one or two cycles. Otherwise, if I didn't bother with that, I'd certainly scatter some intentional errors here and there that I don't usually make.
Nowadays people would probably just use GPT prompts and rephrase to obscure identity. Good luck reversing the output to deduce the style of the author's original input.
I am quite confident that Satoshi Nakamoto was an Australian bloke(s) living in Japan when he/they/their wrote the Bitcoin whitepaper. The code itself does suggest it was one person, but I still think it was a few people with one at the helm.
If you're typing quickly, you can miss when autocorrect puts in the wrong replacement.
I stopped caring when people have an incorrect "your" vs "you're" or "there" vs "their", because it's really about whether the autocorrect AI is getting them right. English is an evolving language, and I don't think it will keep those distinctions in the future.
It’s not a fundamental of English, because it’s (that’s another example) impossible to hear. You’re (that’s another example) being pedantic about an artifact of our writing system, which is strictly not language. I’m being pedantic about this because i’m tired of this being pointed out.
Grammatical mistakes aren’t a means to disprove anything. Not everyone is detail oriented and I doubt it prevented anyone from understanding the meaning of what they were saying. Their post wasn’t even technical, more like a stream of thought
someone writing and wishing to hide their identity may very well be masking their writing style. The bad grammar here might be the social media equivalent of using letters cut from magazines.
Their use of 'there' instead of 'their' could've been done purposely as a means of not being unmasked. A tool was posted the other week that took HN usernames and found accounts that it deemed as being alt accounts/similar writing styles.
Could be an attempt to throw something like that off
> specially anyone who doesn't know the difference between 'their' and 'there'
This is a really ableist take and should not be seen here on HN. You have no idea what kind of input device the author is using, whether they have some handicap or disability, or even what their native language is. There are plenty of reasons a poster can make this mistake, and even more reasons to make it consistently. Please do better.