As someone with no inner monologue, I think I could just as easily "flow" on a non-verbal task like spatial reasoning as on a verbal task like reading, writing, or even engaging in a particularly technical or abstract conversation. Unlike you, my resting state is non-verbal, so I would not be able to correlate verbal content with flow like that.
To me, flow is a mental analogue to the physical experience of peak athletic output, e.g. when you are at or near your maximum cardiovascular throughput and everything is going according to training and plan. It's not a perfect dichotomy. After all, athletics also involve a lot of mental effort, and they have more metabolic side-effects. I've never heard of anybody hitting their lactate threshold from intense thinking...
My point is that this peak mental output can be applied to many different modes of thought, just as your cardiovascular capacity can be applied to many different sports. A lot of analogies I hear seem too narrow, as if they only accept one kind of thinking task as a flow state.
I also don't think it is easy to describe flow in terms of attention or focus. I think one can be in a flow state with a task that demands either breadth or depth of attention. But I do suspect there is some kind of fixed-sum aspect to it. Being at peak flow is a kind of prioritization and tradeoff, where irrelevant cognitive tasks get excluded to devote more resources to the main task.
A person flowing on a deep task may seem to have a blindness to things outside their narrow focus. But I think others can flow in a way that lets them juggle many things, while instead having a blindness to the depth of some issues. Sometimes I think many contemporary tech debates, including ones about the experience of AI tech, come down to different dispositions along this spectrum...
This was citation-worthy because it's new knowledge to the field. Even in a graphics paper, you can cite whatever basic techniques you're using if it's not clear that everyone will be familiar with them.
I agree in broad strokes. If I am incapacitated, that is when things like durable power-of-attorney, medical advance directives, and living trusts come into play.
The important thing is to ensure your computer is not a single point of failure. Beyond losing a password, you could have theft, flood, fire, etc. Or for online accounts, you are one vendor move away from losing things. None of these should be precious and impossible to replace. I've been on the other side of this, and I think the better flow is to terminate or transfer accounts, and to wipe and recycle personal devices.
A better use of your time is to set up a disaster-recovery plan you can write down and share with people you trust. Distribute copies of important data to make a resilient archive. This could include confidential records, but shouldn't really need to include authentication "secrets".
Don't expect others to "impersonate" you. Delegate them proper access via technical and/or legal methods, as appropriate. Get some basic legal advice and put your affairs in order. Write down instructions for your wishes and the "treasure map" to help your survivors or caregivers figure out how to use the properly delegated authority.
I think the "genie" that is out of the bottle is that there is no broad, deeply technical class who can resist the allure of the AI agent. A technical focus does not seem to provide immunity.
In spite of obvious contradictory signals about quality, we embrace the magical thinking that these tools operate in a realm of ontology and logic. We disregard the null hypothesis, in which they are more mad-libbing plagiarism machines which we've deployed against our own minds. Put more tritely: We have met the Genie, and the Genie is Us. The LLM is just another wish fulfilled with calamitous second-order effects.
Though enjoyable as fiction, I can't really picture a Butlerian Jihad where humanity attempts some religious purge of AI methods. It's easier for me to imagine the opposite, where the majority purges the heretics who would question their saints of reduced effort.
So, I don't see LLMs going away unless you believe we're in some kind of Peak Compute transition, which is pretty catastrophic thinking. I.e. some kind of techno/industrial/societal collapse where the state of the art stops moving forward and instead retreats. I suppose someone could believe in that outcome, if they lean hard into the idea that the continued use of LLMs will incapacitate us?
Even if LLM/AI concepts plateau, I tend to think we'll somehow continue with hardware scaling. That means they will become commoditized and able to run locally on consumer-level equipment. In the long run, it won't require a financial bubble or dedicated powerplants to run, nor be limited to priests in high towers. It will be pervasive like wireless ear buds or microwave ovens, rather than an embodiment of capital investment.
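To make the "runs locally" point concrete, here is roughly what inference with a small quantized model already looks like on consumer hardware, using llama-cpp-python; the GGUF file name is just a placeholder for whatever model you have on disk, so treat this as a sketch rather than a recommendation:

    # Sketch: local inference with a small quantized model via llama-cpp-python.
    # "some-small-model.Q4_K_M.gguf" is a placeholder for any quantized GGUF file.
    from llama_cpp import Llama

    llm = Llama(model_path="some-small-model.Q4_K_M.gguf", n_ctx=2048)
    out = llm("Explain what commodity hardware means:", max_tokens=64)
    print(out["choices"][0]["text"])

Nothing fancy, but it runs on a CPU-class laptop today, which is the trajectory I mean.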
The pragmatic way I see LLMs _not_ sticking around is where AI researchers figure out some better approach. Then, LLMs would simply be left behind as historical curiosities.
The first half of your post, I broadly agree with.
The last part... I'm not sure. The idea that we will be able to compute-scale our way out of practically anything is so taken for granted these days that many people seem to have lost sight of the fact that we have genuinely hit diminishing returns: first in general-purpose compute scaling (the end of Moore's Law, etc.), and more recently in the ability to scale LLMs. There is no longer a guarantee that we can improve the performance of training, at the very least for the larger models, by more than a few percent, no matter how much new tech we throw at it. At least not until we hit another major breakthrough (either hardware or software), and by their very nature those cannot be counted on.
Even if we can squeeze a few more percent (or even a few tens of percent) of optimization out of training and inference, to the best of my understanding that is still orders of magnitude short of allowing the full-size major models to run on consumer-level equipment.
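To put rough numbers on that (the parameter count below is an assumption, since frontier model sizes aren't published):

    # Back-of-envelope memory math; the figures are assumptions, not published specs.
    def model_memory_gb(params_billions, bytes_per_param):
        # billions of params * bytes per param = gigabytes of weights
        return params_billions * bytes_per_param

    frontier_params_b = 1000   # assume ~1T parameters for a "full-size major model"
    consumer_vram_gb = 24      # a high-end consumer GPU today

    for label, bytes_pp in [("fp16", 2), ("int8", 1), ("4-bit", 0.5)]:
        need_gb = model_memory_gb(frontier_params_b, bytes_pp)
        print(f"{label}: ~{need_gb:.0f} GB, ~{need_gb / consumer_vram_gb:.0f}x a {consumer_vram_gb} GB GPU")

Even at 4-bit quantization that is still a ~20x gap just for the weights, before counting the KV cache, so tens of percent of optimization don't close it.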
Compare models from one year ago (GPT-4o?) to models from this year (Opus 4.5?). There are literally hundreds of benchmarks and metrics you can find. What reality do you live in?
A lot of (older than me) enthusiasts I knew got an MSDN subscription even though they weren't really developing apps for Windows. This gave them a steady stream of OS releases on CD-ROMs, officially for testing their fictional apps. So, they were often upgrading one or more systems many times rather than buying new machines with a bundled new OS.
Personally, yeah, I had Windows 3.0/3.11 on a 386. I think I may have also put an early Windows NT (beta?) release on it, borrowing someone's MSDN discs. Not sure I got much value from that except seeing the "pipes" software OpenGL screensaver. Around '93-94, I started using Linux, and after that it was mostly upgrading hardware in my Linux Box(en) of Theseus.
I remember my college roommate blowing his budget upgrading to a 180 MHz Pentium Pro, and he put Windows 95 on it. I think that was the first time I heard the "Start me up!" sound from an actual computer instead of a TV ad.
After that, I only encountered later Windows versions if they were shipped on a laptop I got for work, before I wiped it to install Linux. Or eventually when I got an install image to put Windows in a VM. (First under VMware, later under Linux qemu+kvm.)
It's these discussions where I realize people use phones in such different ways.
I abandoned Nova last year when I read about this looming problem. I found that Fossify Launcher beta (from F-Droid) works well enough for me on my Pixel 8a.
I don't really need much out of a launcher. My main goal was to have one like my older Android and not be forced to have a search bar or assistant triggers on my home screen.
All I need from the home screen is to be able to place basic widgets like clock and calendar and shortcuts for the basic apps I use frequently. A plain app drawer is fine for the rest, because I don't really install that many apps and instead disable/remove many. My app drawer shows 35 apps and has several blank rows remaining on the first page with 5 icons per row.
Since well before the pandemic, I've had dual 28" 4K screens on my desk. When ordering them, I liked the fact that they had the same pixel pitch as my 14" 2K laptop screen. One monitor was like a borderless 2x2 grid of those laptop screens.
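For what it's worth, the pixel-pitch math works out exactly, assuming the 14" "2K" panel is 1920x1080:

    # Pixels-per-inch check: 28" 3840x2160 vs. 14" 1920x1080 (assumed resolutions).
    def ppi(width_px, height_px, diag_inches):
        return (width_px**2 + height_px**2) ** 0.5 / diag_inches

    print(round(ppi(3840, 2160, 28), 1))  # ~157.4 PPI for the 28" 4K monitor
    print(round(ppi(1920, 1080, 14), 1))  # ~157.4 PPI for the 14" laptop panel
    # Same pitch, and 3840x2160 is exactly a 2x2 grid of 1920x1080 panels.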
I repositioned things so that one sits in front of the keyboard as a primary screen and the other is further off to the side as a secondary dumping ground. I found myself neglecting the second display most of the time, so it was just a blank background. Eventually, I noticed I wasn't even using the entire primary screen. I favored one sector of it and pushed some windows off to the edges.
Ironically, with work from home, I've started roaming around the house with the laptop instead of staying at my desk. So I'm mostly back to working on a 14" screen with virtual desktops, like I was 20 years ago. I am glad that laptops are starting to have 16:10 again after the long drought of HDTV-derived screens.
The popular HTTP validation method has the same drawback whether using DNS or IP certificates? Namely, if you can compromise routes to hijack traffic, you can also hijack the validation requests. Right?
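To spell out the mechanism I mean: with ACME HTTP-01, the CA fetches http://<domain>/.well-known/acme-challenge/<token> and compares the body to the expected key authorization, so whoever can attract traffic for that name or IP can answer the challenge. A rough sketch of the responder side (the token and thumbprint below are placeholders; a real hijacker would use values from their own ACME account and order):

    # Sketch: whoever receives HTTP traffic for the validated name/IP can answer
    # an ACME HTTP-01 challenge. TOKEN and KEY_THUMBPRINT are placeholders here;
    # in a real hijack they'd come from the attacker's own ACME order/account.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    TOKEN = "example-token"
    KEY_THUMBPRINT = "example-account-key-thumbprint"

    class ChallengeHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == f"/.well-known/acme-challenge/{TOKEN}":
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.end_headers()
                self.wfile.write(f"{TOKEN}.{KEY_THUMBPRINT}".encode())
            else:
                self.send_error(404)

    HTTPServer(("0.0.0.0", 80), ChallengeHandler).serve_forever()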
1) How to secure routing information: some say RPKI, others argue that's not enough and are experimenting with something like SCION (https://docs.scion.org/en/latest/)
2) Principal-agent problem: jabber.ru's hijack relied on (presumably) Hetzner being compelled by German law enforcement, under powers provided by the German Telecommunications Act (TKG)
Well, not exactly, in that there are cultivar and farm differences. In that way it is a little bit like grape wine, where different processing can produce very different wines from the same grapes, but there are also differences in the grapes that can come through within a style.
In a way, yes; a Wuyi rock oolong will be different from a high mountain Taiwanese oolong. But most people thinking of green vs. black tea don't realize that it's the exact same plant. Camellia sinensis has just two main varieties, var. sinensis (the main one) and var. assamica.
This is quite incorrect. Of the top 10 planted wine varietals in the world [0], all ten are red grapes to red wine or white grapes to white wine:
Top grape varieties by planted hectares
1. Cabernet Sauvignon - red grape, red wine.
2. Merlot - red grape, red wine.
3. Tempranillo - red grape, red wine.
4. Airén - white grape, white wine.
5. Chardonnay - white grape, white wine.
6. Syrah - red grape, red wine.
7. Grenache Noir - red grape, red wine.
8. Sauvignon Blanc - white grape, white wine.
9. Pinot Noir - red grape, red wine.
10. Trebbiano Toscano / Ugni Blanc - white grape, white wine.
There are some wines produced from red grapes that are not left on the skins, so no red colour is imparted, but they are really not common, and most of the time the result is closer to a light rosé than to what would be considered a white wine. Perhaps the only style that is semi-frequently encountered is some French Blanc de Noirs wines, various champagne examples being the most common of these. (And of course standard champagne itself, but I am not sure that is really considered a white wine.) Still, rare. It is also not possible to produce a red wine from a white grape; there is no colour in the skin to impart.
1. A laugh track
2. An inset with a sign language interpreter
3. An imaginary friendship with a low budget meta commentator, like Mystery Science Theater 3000