We may never know why ChatGPT usage mysteriously drops during summer vacation...
> By categorizing the 100 most popular searches containing the character string “chatgpt” he found that the top usage was for job applications, characterized by searches like “chatgpt resume” and “chatgpt cover letter.” But homework finished in second place, identified by searches like “chatgpt essay,” “chatgpt history,” and “chatgpt math.”
1. Knuth laments the lack of technical ("internal") history of computing, which traces the evolution of technology and ideas, and should be of great interest and benefit to practitioners.
2. Historians typically focus on their domains of expertise - social history, culture, economics, politics, personalities, etc. - and tend to write non-technical ("external") history of computing.
3. The people who have the relevant technical expertise - practitioners, researchers, and scholars within the computing field - are qualified (in terms of technical understanding, at least) to write this technical history, but have essentially no economic incentive to do so. Industry practitioners are not rewarded for writing the technical history of computing, and computing researchers and scholars see little or no reward either. And of course, once one is (or becomes) an expert in computing, there is no economic incentive to become (or remain) a historian.
4. Nonetheless, there is in fact a small (and hopefully growing) group of scholars who seem to be interested in investigating the technical history of computing (and, according to the author, "holistic" history that includes multiple aspects).
I tend to agree with Knuth - technical history is extremely valuable to both practitioners and researchers in computing, and there isn't enough of it.
While it is understandable that computing practitioners and researchers want to look forward to the next "new" thing rather than backward to "old" things, ignoring computing history means that we are often reinventing the wheel, repeating old mistakes, etc., all while lacking an understanding of how and why things are the way they are today. And perhaps missing out on a great deal of fun and intellectual engagement as well.
Fortunately, there is some activity in writing up and analyzing the technical history of computing, and I certainly appreciate the work of the CHM, journals like the Annals of the History of Computing, the work of retrocomputing hobbyists, and the work of the scholars mentioned in the article. But (as the article notes) there are few economic or career incentives - in history or in computing - to produce this important work.
The article validates Knuth with these statements:
> For different reasons, outlined below, neither group has shown much interest in supporting work of the kind favored by Knuth. That is why it has rarely been written.
> Most of this new work is aimed primarily at historians, philosophers, or science studies specialists rather than computer scientists
> Work of the particular kind preferred by Knuth will flourish only if his colleagues in computer science are willing to produce, reward, or commission it.
The second part of this last sentence isn't wrong, but sidesteps the first point. One might similarly criticize history departments for failing to reward or commission technological literacy.
I also agree with Knuth. For me it has been extremely valuable to know the history of various technologies, and especially to know why the optimal solutions have been replaced from time to time, and the causal connections between various discoveries.
I frequently see the opinion expressed that old scientific and technical publications are obsolete, but in my view this is very naive.
The optimal technology or algorithm for solving a certain problem changes when improvements are made in other domains. However, the range of kinds of solutions for a given problem is usually finite, so when the optimal solution changes over time, it often changes to a kind of solution that has already been used in the past.
Because of this, it is very frequent to see claims about the discovery of "new" things, where the so-called "new" things were well known and widely used some decades ago, or even much earlier.
The worst part is not the time wasted on rediscovering old things, but the fact that the rediscoveries are usually incomplete: the finer points - which variants are most efficient, and which limitations may make them inapplicable in certain contexts - are not rediscovered along with them. Knowing a detailed technical and scientific history helps avoid such cases.
It seems like Sony and Nintendo had more high-quality exclusives (though mostly timed exclusives for Sony), which are system sellers, while most Xbox games were also available on Windows.
But the Xbox hardware is good, franchises like Halo / Gears / Forza etc. have always been good, and Xbox Game Pass is great.
The 360 era was good and they were really trying. For the last 10 years I don't even know what Xbox stands for. Like Game Pass is a neat SaaS, and the consoles are meh, and PC gaming went its own way a long time ago and is in a healthy place.
The 360 also benefited from the PS3's issues (an unusual architecture that was hard to program effectively, and a high price). Game Pass is great, and the Xbox hardware is good, but the PS4 and PS5 had stronger timed exclusives, while the Switch had an appealing combination of Nintendo first-party exclusives (several of them rising from the ashes of the Wii U), lower cost, and handheld/hybrid operation.
As I and others noted below, it is included in Apple's clang version, which is what you get when you install the command line tools for Xcode. Try something like:
clang -g -Xclang -fbounds-safety program.c
Bounds check failures result in traps, and lldb reports a message identifying the failed check.