I don't have a full-blown notebook, but I keep task notes in individual text files. A sample note might look like:
- Fixing broken test: (full ci link)
- seems to be repo foo, target //bar:baz, subtest TestSomethingNice. Error: (30 lines of stack trace here)
- git checkout 0ead3f820da34812089
- trying locally: bazel test //bar:baz
- command failed, error: (relevant error here)
- turns out I need to set a config, reference: (wiki link here)
- trying: bazel test --config=green //bar:baz
- problem reproduces 5 times in a row, seems like 100% fail rate
- source file location: source/bar/baz.cc
- theory: baz is broken from recent dependency bump. Reverting commit 987afd
- result: the error is different now (more error text)
etc.. etc...
This is actually super handy for a complex problem. No need to wonder "did I see the error before?" or "wait, when I was trying that thing, did I see that message as well?" or "how do I reproduce a bug again?". No keeping dozens of tabs open so you can copy a few words from each of them. When later talking to someone, you can refer to your notes.
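As a side note, the "reproduces 5 times in a row" step in notes like these is easy to automate. A minimal sketch - the bazel invocation from the notes is hypothetical, so it's stood in for here by `false`, a command that always fails:

```shell
# Re-run a flaky command several times and count how often it fails.
# Replace 'false' with the real command, e.g. bazel test --config=green //bar:baz
cmd="false"
fails=0
for i in 1 2 3 4 5; do
  $cmd || fails=$((fails + 1))
done
echo "failed $fails/5 runs"
```

(bazel also has a `--runs_per_test=5` flag that does the repetition for you.)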
I use a daily log system. I just run this bash script to open my log for the day.
This opens the same file all day, so I can add stuff, I know how to find old stuff, it's easy to grep, etc.
## create new log file for personal logging
vi ~/daily_logs/personal_logfile_$(date +%j_%m%d%y)
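For illustration, here is what the "easy to grep" part looks like in practice - a sketch using a throwaway directory (the filenames mimic the %j_%m%d%y format from the script above, and the contents are made up):

```shell
# Build a tiny fake log directory to search.
demo=$(mktemp -d)
echo "- trying: bazel test //bar:baz" > "$demo/personal_logfile_280_100725"
echo "- unrelated meeting notes"      > "$demo/personal_logfile_281_100825"

# Which days mention bazel? grep -l prints only the matching filenames.
grep -l "bazel" "$demo"/personal_logfile_*
```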
I feel modern systems are so complex that there will always be some record somewhere. Thumbnails are an extreme example, but the filenames themselves can leak via LRU lists, logs, history, etc.
Git (the version control tool) is experimenting with Rust (programming language). Some people are unhappy about this.
metux, author of the most recent commit on this repo, is famous for his Xorg fork. TL;DR: the Xorg maintainers focused on keeping things working and did not want new features. metux was an Xorg contributor who was working on many new features, but also introduced many breaking regressions, which caused his contributions to be reverted. Eventually metux made an Xorg fork called "Xlibre", where he can move as fast as he wants and break as many things as he wants to.
Here we have a repo which is a fork of Git with a single commit by metux, titled "apply WD-40 ..", which removes the Rust parts (the joke being that WD-40 is a sprayable oil that can be used to remove rust, i.e. iron oxide).
Note this commit is symbolic: at present, Rust is optional in Git, so there is no need to remove it - just set a build-time option. So by starting this fork, metux (or whatever group he belongs to) effectively promises to maintain Git without Rust, presumably rewriting Rust-based features in C. Or maybe metux is just trolling someone, and this repo will be forgotten next week. Time will tell.
> Note this commit is symbolic: at present, Rust is optional in Git, so there is no need to remove it - just set a build-time option
While your comment is true, it's worth pointing out that the intended future state is that Rust will stop being optional; as per
https://git-scm.com/docs/BreakingChanges ,
> Git will require Rust as a mandatory part of the build process.
Although, it also notes
> We will evaluate the impact on downstream distributions before making Rust mandatory in Git 3.0. If we see that the impact on downstream distributions would be significant, we may decide to defer this change to a subsequent minor release. This evaluation will also take into account our own experience with how painful it is to keep Rust an optional component.
so I suppose we'll see how exactly things play out.
Not sure what you mean that Xorg does not "actually work"? I am using it on all my computers, works fine.
(That said, I am pretty sure there is some exotic hardware out there where Xlibre works and Xorg does not, just as there is hardware where only Xorg works. But I don't know the numbers... I suspect once people get a working X server, they won't try another one.)
Allow me to echo the sibling comment: As I type this comment on Xorg, I assure you it does "actually work". I'm very open to the idea that it has room for improvements - I am specifically aware of features it lacks, and history appears to suggest plenty of room for bug/robustness improvements - but that's not the same as saying that it outright doesn't work.
Wow, that "Figure 2. Breadboarding" picture is messed up - it took me asking myself "the board looks reasonable, but why alligator clips and not proper probes... wait, where are the scope inputs? why is the text so broken, compression artifacts?" before I got the idea to google the scope model and realized the whole picture is AI slop.
I think it's pretty safe to assume the rest of the article is AI slop as well, and that if the author did not notice the (numerous and obvious) problems with the picture, they missed lots of other problems in the text too. If you want to simulate some circuits, learn from some other post.
This post would benefit greatly from using common terminology, or from some specific comparisons to existing technologies. The article implies Arcan will fix the problems with the existing web, so a few examples of problematic Web behavior and how Arcan would fix it would be great. Some cases I'd like to read about:
- User opens a very old website (arcan-site?) which uses very old versions of browser standards. (In the regular web, this is handled by browsers painstakingly adding compatibility and quirks modes.)
- User opens the very newest website, which uses all the latest standards, including a new video codec, a new font, and a new text rendering effect. (In the regular web, this would only happen once most clients have updated to the very latest version of the browser, and telemetry says enough clients have updated.)
If the answer to the questions above is "Arcan will use a different external client for each", then how will those clients be managed? Is there going to be an allow-list of video codecs and text renderers hardcoded into the browser? Or can any site specify any requirement, with the browser fetching it automatically on first visit? Or will there be a user prompt ("This site uses the LALA.Image codec, which is not installed. [download and install] [go back]")? Oh, and those codecs/renderers - are they all Lua, or native code?
- User comes to an artistic website whose author thinks purple on red is a great color scheme. In the current web, I'd use "reader mode", or a custom UserScript if I plan to visit the site often. What happens in Arcan - is this up to the "decode client"? What if the site author chose a "decode client" which has no such functionality?
- User comes to a hostile website, which contains content they want to read, but also 78 tracker scripts which send data to 9000 partners, and a cryptominer. In the regular web, this is where adblockers shine - the tracker scripts are external, so they are easily blocked, and adblockers are "plugins", so they can also go and mess with the page's code. What happens in the Arcan world? Arcan has no dynamic loader, so all those 78 tracker scripts and the cryptominer will be bundled into the main app during the "compile, package, sign" process, and presumably obfuscated as well. Does the Arcan browser have a place for adblock? Where in the layers does it live? If it lives in the "decode client", can the website author choose a client which has no adblock functionality?
> User comes to an artistic website whose author thinks purple on red is a great color scheme.
The outer app used as your desktop/browser-UI has two mechanisms at its disposal. The first is collaborative - it can convey the preferred colour scheme so the 'page' you are browsing to can pick accordingly. That's actually how the terminal emulator and non-terminal shell both get their colour schemes.
The other is enforced - just as tone mapping is needed at the composition level to deal with HDR, you can use the same facility to apply colour grading per 'page'. That's what I do to Chrome right now without it knowing: if the average luminosity exceeds a threshold, it picks a colour map that enforces a dark mode regardless of what the browser wants. Similarly, if a game doesn't respond to resize requests but keeps providing low 256x240-like output, I apply an NTSC signal degradation shader pass into a CRT simulation one. Having it as a composition effect means that I can have my own view of colours, but if I 'share desktop' to someone else, the original form is retained.
> - User opens the very newest website, which uses all the latest standards, including a new video codec, a new font, and a new text rendering effect. (In the regular web, this would only happen once most clients have updated to the very latest version of the browser, and telemetry says enough clients have updated.)
It's similar to Android API_LEVELS. API breaks (we've had, I think, two since 2013) have been handled with a hook script that exposes the old behaviour. New video codec deployment is a bit special, and there's a protocol-level degradation path for that with a few nuances. It has mainly been used for hardware-accelerated web cameras as a remote data source, where it starts with direct pass-through of whatever streams the camera provides. If the client fails to decode (stream-cancel:reason=codec failure), it signals that and the source end reverts to a safer default from a hierarchy (h264 into zstd).
> In the current web, I'd use "reader mode", or a custom UserScript if I plan to visit the site often. What happens in Arcan - is this up to the "decode client"? What if the site author chose a "decode client" which has no such functionality?
Decode can be used for that, but it is overkill. The hook-script facility (arcan -Hmyscript1.lua -Hmyscript2.lua myapp) can transparently interpose every function the app has access to - rerouting patterns like load_image_asynch("something.jpg") into a null_surface(), for instance. These and other options, like the set of encode/decode binaries and the allow-list of permitted local software integrations, can be preset per authentication key.
There is a PoC 'more than adblock' that uses an LLMed version of 'decode' in my bin of experiments. The first part is a hook script that routes all image loading through a version of decode with a local classifier model asked to 'describe the contents of this image and tell me if it looks like an advertisement'. If not, the default path applies. If yes, it substitutes an image of puppies playing tug of war with a rubber chicken.
Right, I've realized that I asked too many questions in this thread, so I tried to ask a more focused question here. You did not answer it, so let me ask the most interesting (to me) question:
>> If the answer to the questions above is "Arcan will use a different external client for each", then how will those clients be managed? Is there going to be an allow-list of video codecs and text renderers hardcoded into the browser? Or can any site specify any requirement, with the browser fetching it automatically on first visit? ...
Since you mention "API breaks", it sounds like the list of clients is hardcoded per browser version. So what is the innovation compared to the existing web, _from the standpoint of the website author_? I see the following items:
- Discrete "API Levels" instead of forever backwards compatibility. More bundled libraries.
- Lua instead of Javascript, no WASM or any other low-level/compiled code
- No "HTML equivalent" - if a page wants to do text layout, then it has to pull layout library
- Bundle all libraries in initial package instead of loading them dynamically
- No third-party storage, and maybe first-party is optional too (not sure what's the plan for this... user prompt? explicit action?)
Did I get this right? Did I miss something?
(From what I see, SHMIF, pledge, Decode/Encode/Network/Terminal, and all this stuff are basically internal browser implementation details - cool to know if you are a browser author, but they don't matter for your typical page writer, right?)
The entire article is from the browser-internals angle. The page-writer one comes when we have tools to compile pages into apps and layer in user features. The rough path for that is simpler ones first (the Sile typesetter and Pandoc), then more exotic ones. An example of the latter would be a "print to" modification to a browser that can preserve more content, like animations and audio, and not be restricted to fixed-size pages.
It is forever backwards compatible. API levels are for the rare few cases where we need to deprecate or break something on the engine side but can apply a runtime fix-up script to replicate the behaviour at the scripting level.
Decode/Encode/Network/Terminal are user-relevant insofar as they are interchangeable binaries. If you want more or less media format support, export or streaming capabilities, etc. (paranoia, legal environment restrictions), those can be swapped out per site without any changes to the engine. What is being considered is to be explicit about expected formats in the manifest so that incompatibilities can be detected.
SHMIF is user relevant insofar as "wasm or low-level compiled code" running as local software can be embedded and controlled by the app, allow-listed by the user.
> No third-party storage, and maybe first-party is optional too (not sure what's the plan for this... user prompt? explicit action?)
This is about "neuromorphic computing", a model invented in the 1980s which colocates memory and compute, just like real brains, plus the "memristor", a weird electronic component proposed in the 1970s. People have been doing research on these ever since, and yet there is no progress.
This is yet another call to action, this time under an "AI consumes too much energy" sauce. I've seen these for more than two decades, and nothing ever came of them.
A special mention for this paragraph:
> The programmability challenge is perhaps the most significant. The von Neumann architecture comes with 80 years of software development, debugging tools, programming languages, libraries, and frameworks. Every computer science student learns to program von Neumann machines. Neuromorphic chips and in-memory computing architectures lack this mature ecosystem.
This is total B.S., especially as applied to AI - there is no need for an "ecosystem" of millions of software libraries; there is a handful of algorithms that you need to run, and that's it, the thing can earn money. And of course plenty of people work with FPGAs or custom logic, which has nothing to do with von Neumann machines - and they get things done. If you have a new technology and you cannot build even a few sample apps on it... don't blame the establishment; it just means your technology does not work.
I think they have a point about merging the CPU & memory. It seems to have worked out well for Apple. Their proposal sounds like another step in the same direction.
This brings back memories - back when I was a student programming in Turbo Pascal 6, I got the same kind of invalid bool (due to an array range overflow) which was both true and false at the same time.