Anyone using OpenClaw to manage a bunch of coding agents so that you only set the high-level vision and leave all the prompting, testing, debugging, and forking to agents? If yes, how did you glue it all together? Are you using local models? What is the SOTA for what I can run locally with a 512GB M3 Ultra, 2x DGX Spark, and 2x RTX Pro 6000 Max-Q in one machine, plus 1x RTX Pro 6000 WS in another machine?
This. It's awful to wait 15 minutes for an M3 Ultra to start generating tokens when your coding agent has 100k+ tokens in its context. This can be partially offset by adding a DGX Spark to accelerate that phase. An M5 Ultra should be like a DGX Spark for prefill and an M3 Ultra for token generation, but who knows when it will pop up and for how much? And it will still be at around 3080 GPU levels, just with 512GB of RAM.
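Rough arithmetic behind that wait, as a sketch. The throughput figure is not a measured benchmark; it is simply what the "15 minutes for 100k+ tokens" scenario in the comment implies:

```python
# Back-of-envelope: time-to-first-token is dominated by prefill (compute-bound),
# while steady-state decode is memory-bandwidth-bound. The throughput number
# below is an illustrative placeholder derived from the scenario above.

def time_to_first_token(context_tokens: int, prefill_tok_per_s: float) -> float:
    """Seconds spent prefilling the prompt before the first output token."""
    return context_tokens / prefill_tok_per_s

# ~100k-token context taking ~15 min implies roughly
# 100_000 / (15 * 60) ≈ 111 tok/s of prefill throughput.
print(round(time_to_first_token(100_000, 111) / 60, 1))  # prints 15.0 (minutes)
```

Offloading prefill to a faster compute box (e.g. the DGX Spark) only shrinks the first term; decode speed still depends on the memory bandwidth of whatever generates the tokens.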
I use Discord to talk to university students (top 10 in CS) and it only works with a university email. I am wondering if I am going to be treated as <13 from now on as well, or if they waive it in our case.
There was this sentence in the article: "...he realized that nonuniformly elliptic PDEs that seem well behaved can have irregular solutions even when they satisfy the condition Schauder had identified"
The "marathon of sprints" paradigm is now everywhere and AI is turning it up to 120%. I am not sure how many devs can keep sprinting all the time without any rest. AI maybe can help, but it tends to go off the rails quickly when not supervised, and reading code one did not author is more exhausting than just fixing one's own code.
I never use Apple News, but it often pops up among the apps that are using significant energy. I am wondering what it really does in the background.
Now this is going to be interesting to watch: will the finance bros financing this AI wave to get rid of SW engineers keep financing it once it starts getting rid of their own?