How much of what we do on current-era machines is IO-bound? Not a great deal. And when it is, it's usually a network and not a local hardware limitation.
How much would UX improve? Not a great deal, most usage is CRUD/web/vid.
Would new applications become available? Sure. But what are they?
Is the value proposition vs. total addressable market compelling?
If you can convincingly answer the above you can definitely go get funding and do it.
Oh I see what you're saying. It's analogous to the old Beowulf cluster jokes.
That's basically the computer I want: 64 thousand cores with a high-speed interconnect appearing as a single address space (ideally on a single chip), so I can finally get real work done on embarrassingly parallel workloads that currently require me to switch to a different level of abstraction with shaders or Python math frameworks or whatever. I just want to be able to do ordinary C-style shell programming, piping immutable data around, without having to worry whether the data's on the video card or anywhere else. Which currently isn't possible affordably at scale.
What use would the typical user have for such a thing? Not much. I'm probably looking too far ahead to solve problems which people don't know exist yet.
For example, I wanted this computer to evolve something like GitHub Copilot back around 2005, after reading Genetic Programming III by John Koza. It would basically fill in the code automagically for TDD, which would liberate programmers from the minutiae that even then had swallowed all of their time.
Now we have these new tools making that possible, albeit in a roundabout way that took many thousands of developers many years of concerted effort to achieve. My heart's not in that kind of approach. I wanted to keep things simple and generic and have computers do most of the work via better compilers, using existing languages with no special intrinsics or libraries. So we'd write ordinary for-loops in C for our own blitters, rather than take on the additional mental workload of learning and dealing with the idiosyncrasies of OpenCL or CUDA.
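As a minimal sketch (buffer names and dimensions invented here for illustration), this is the sort of loop I mean, which the compiler would ideally spread across however many cores exist on its own:

    /* Plain rectangular copy ("blit") from one pixel buffer to another.
       Names and sizes are illustrative. The wish: the compiler parallelizes
       this across all available cores, with no CUDA/OpenCL rewrite and no
       special intrinsics or libraries. */
    #include <stdint.h>
    #include <stddef.h>

    void blit(uint32_t *dst, size_t dst_stride,
              const uint32_t *src, size_t src_stride,
              size_t width, size_t height)
    {
        for (size_t y = 0; y < height; y++)
            for (size_t x = 0; x < width; x++)
                dst[y * dst_stride + x] = src[y * src_stride + x];
    }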
And that goes for everything. I don't want React, I want a DOM that isn't slow. I want all databases to emit events like Supabase and Firebase. I want all data structures to be immutable unless marked mutable, in order to avoid promises and monads. I want pattern matching, and no nulls in data structures. I want duck typing, not inheritance. I wish that the "final" and "sealed" keywords were illegal. I wish that we could hook into and/or override any getter and setter, to allow for more reactivity in our programs. I want doubly-linked references or some way to detect what's watching my state variables. I wish that all languages had aspect-oriented programming. Heck, I wish that more languages had macros. What we thought was "simpler" ended up just being "easier", which stranded us on the local maxima we're on today.
Just on and on and on. I feel like I'm on a different programming branch than the rest of the world is. Probably because I'm looking at solving problems outside of the profit motive. Which makes me feel crazy some days, or at the very least beat down and exhausted. It's been going on so long that I feel used up. Like even if someone gave me the tools I need, I don't know if I can summon the wherewithal to make use of them anymore. I mostly think about stuff like switching careers and running away. Maybe it's up to the next generation now.
Edit: that was a bit negative, I'm sorry. I've been trying to be more like Riva on Star Trek TNG and turn a disadvantage into an advantage:
I tend to perceive the negative due to my years as a software developer, so I've become risk-averse. With so much bad news on social media, I perceive the world as ending on a daily basis. So I've been trying to seek the positive and stay in an upward spiral, like how you suggested with getting funding. I just don't know where to begin with something like that, or who to talk to.
Absolutely frontend stuff will burn you out as an individual; it's too high churn and too low ROI. Most DBs have triggers now, generally implemented in ways which can generate a text stream to be easily parsed. While introspection suffers in a heterogeneous environment, modern Unix does give you free event generation by subsystem (e.g. inotify), lists of subscribers (lsof), configurable kernel-level instrumentation if you need more, etc., so it's not all bad. IMHO many of the local maxima are artifacts of too much low-level programming and not enough higher-level codebases: programmers eschewing the positive aspects of higher-level languages and sticking their heads in the sand about C-grade portability and nominal efficiency, at least in part. For funds, begin as suggested: define the problem, make a case either in market or in losses, and start pitching.
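A minimal sketch of that free event generation (Linux-only, with the watched directory hard-coded to "." for illustration): it prints one line per file event, ready to be piped into whatever parses text:

    /* Minimal inotify sketch: watch the current directory and print one
       line per file event, suitable for piping into other tools.
       Linux-only; error handling kept to a minimum for brevity. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/inotify.h>

    int main(void)
    {
        char buf[4096] __attribute__((aligned(__alignof__(struct inotify_event))));
        int fd = inotify_init1(0);
        if (fd < 0) { perror("inotify_init1"); return 1; }
        if (inotify_add_watch(fd, ".", IN_CREATE | IN_MODIFY | IN_DELETE) < 0) {
            perror("inotify_add_watch");
            return 1;
        }
        for (;;) {
            ssize_t len = read(fd, buf, sizeof buf);
            if (len <= 0) break;
            for (char *p = buf; p < buf + len; ) {
                const struct inotify_event *ev = (const struct inotify_event *) p;
                printf("%s %s\n",
                       (ev->mask & IN_CREATE) ? "create" :
                       (ev->mask & IN_DELETE) ? "delete" : "modify",
                       ev->len ? ev->name : ".");
                p += sizeof(struct inotify_event) + ev->len;
            }
        }
        close(fd);
        return 0;
    }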
Hey you're right, I agree about problems coming from focusing on the wrong level of abstraction, and thanks for your clarity.
I was working on a project and had to install a file watcher for Node.js because the process inside Docker wasn't seeing the file changes from the IDE on the host for some reason, or was using too much CPU, I can't quite remember. I think it was called Watchman:
They use inotify when possible, but fall back to other methods, including a full scan (recursive? bisect? I can't remember the term) of the directory and file timestamps.
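Roughly, I think that fallback amounts to something like this sketch (not Watchman's actual code): poll the directory once a second and compare modification timestamps:

    /* Rough sketch of a timestamp-scan fallback (not Watchman's actual
       implementation): poll a directory once per second and print a line
       whenever the newest mtime in it advances. Non-recursive, ignores
       deletions; just enough to show the idea. */
    #include <stdio.h>
    #include <time.h>
    #include <dirent.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static time_t newest_mtime(const char *dir)
    {
        time_t newest = 0;
        DIR *d = opendir(dir);
        if (!d) return 0;
        for (struct dirent *e; (e = readdir(d)) != NULL; ) {
            char path[4096];
            struct stat st;
            snprintf(path, sizeof path, "%s/%s", dir, e->d_name);
            if (stat(path, &st) == 0 && st.st_mtime > newest)
                newest = st.st_mtime;
        }
        closedir(d);
        return newest;
    }

    int main(void)
    {
        time_t last = newest_mtime(".");
        for (;;) {
            sleep(1);
            time_t now = newest_mtime(".");
            if (now > last) {
                printf("change detected\n");
                fflush(stdout);
                last = now;
            }
        }
    }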
I hadn't put it together that it could be used to implement reactive programming. But I have made toy workflows with folder watchers using AppleScript on macOS.
For funding, maybe I could try Reddit or something. I'm realizing that many of my problems come from spending too much time in my own head and not keeping up with trends.
In the '80s-'90s I thought of software as prototyping. If we keep a healthy separation of [shall we say] mission-critical applications and entertainment, today, as far as normal users are concerned, a good 99% is pretty well defined. We are therefore now developing stuff users don't want: rent-seeking schemes and prisons.
Not sure if it was my failure to predict or the industry's, but I thought the most obvious requirements in the obvious applications would find their way into the high-level language abstractions (in increasingly large chunks) and gradually migrate to lower-level language features, and so on, closer to the metal, until the email client is just an array of similar email processors that can be powered on with some basic query. Give it a mailto with params, a new-mail notification with some custom sounds entirely separate from other audio, some way to export the attachments, and the database I/O, preferably with a mechanical switch that completely rules out any other process accessing the mailbox unless the user specifically enables it. Yeah, it should probably beep loudly for as long as remote reading or writing is enabled, with a red LED blinking above the tumbler switch.
That way no one has to ponder how to ruin the protocol, add emojis, data-mine the user, insert ads, or otherwise turn email into a first-person-shooter-MMO next-level email experience. Regardless of what fantastic thing email could grow into, it would just not be possible. After all, we've already turned the fantastic document distribution network (www) into a fantastic application platform. Ideally such things should not be possible, but it was and we did, and the result is of course wonderful... except that documents are now multi-GB advertisement machines that make 500+ requests. (I can't even view the images in this topic because this laptop can't handle such websites.) Email has the potential to be an application platform much superior to the www, but I really hoped we could glue its components into place. (Toasters that can't run Doom.)
The next chunk of hardware can do newsgroups, one for IRC, one for torrents, one for word processing, one for tabular data, one with maps, one with a web browser, a real terminal (!), and eventually we can have a hardware implementation of HN.
Each such application can have its own signal from the keyboard and mouse and its own video output. Some other chip combines the pictures into windows, with some title validation so that one can't mimic the hardware implementations without the user knowing it.
I was completely wrong... or was I?
That wonderful PDF in the topic makes an analogy about replacing a single super-fast bartender with multiple bartenders, but this seems a poor fit.
The general-purpose stuff is like a college degree. It is some college! Its product is state of the art and it is improving all the time.
But if you want to develop, manufacture, and finance the next-level drink dispenser, you can't keep throwing [however excellent] college graduates at it and expect it to scale.
Our college is set up to produce the finest surgeons who are also the finest pilots, the best mathematicians, the greatest artists, and stand-up comedians.
That hardware outlook would have been rational in those days. The popularization of computing was a double-edged sword, and commerce corrupted efficiency. It's no good selling people reliable PCs with easily replaceable parts if you want to make money on hardware and software. Today, in a triumph of marketing and needless consumption, people buy laptops for no apparent reason plus phones more powerful than their old laptops, and PCs are becoming edge cases. Each OS release refuses to work on older hardware. Even Linux drops support. On PCs you can plug in extra processors, but only your modern-day prototypers/prototypists (software developers), oddballs like film post-processing houses, scientists, and gamers tend to bother. It's reaching the point where even screens can barely be purchased without corrupting advertisements, internet connections, and spyware.
Can't say I follow your whole line of reasoning, but I agree the faddishness of, say, frontend environments does favor something like a Lisp implementation with code generation to exploit the next app market with rejigs of yesteryear's algorithms. Classic games, utilities, etc. Just a new target layer and profit. Let the deployment environment evolve and invest in something abstract enough for inherent stability.
Some of the better facets of today's environment are popular enthusiasm for technology, easy and fast connectivity, and nominal availability of funding. I agree that systems thinking tends to be learned through implementation and maintenance, not taught.
Speaking of drinks, I need a drink: for tomorrow I am scaling a drink dispenser.