Google isn't even good at engineering great software.
They have some good people working on some good projects. If you look at the relation between software-quality of their average product and number of developers they have... yeah I don't know. Maybe hiring tons of new-grads that are good at leetcode and then forcing them to use golang... is not what actually makes high quality software.
I could believe that they are good at doing research though.
Most of the core products at Google are still written in pre-C++11.
I wish these services would be rewritten in Go!
That’s where a lot of the development time goes: trying to make incredibly small changes that cause cascading bugs and regressions in a massive 2000s C++ codebase that doesn’t even use smart pointers half the time.
Also, I think the outside world has a very skewed view of Go and how development happens at Google. It’s still a rather bottom-up, or at least distributed, company. It’s hard to make hundreds of teams actually do something. Most teams just ignored those top-down “write new code in Go” directives and continued using C++, Python, and Java.
I wouldn't say most. Google is known for constantly iterating on its code internally, to the point of not getting anything done other than code churn. While there is use of raw pointers, I'd argue it's still idiomatic in C++ to use raw pointers for well-scoped, non-owning references. Using shared pointers everywhere can be overkill. That doesn't mean the codebase is pre-C++11 in style.
Rewriting a codebase in another language that has no good interop is rarely a good idea. The need to replicate multiple versions of each internal library can become incredibly taxing. Migrations need to be low-risk at Google scale, and if you can't do them piecewise it's often not worth attempting either. Also worth noting that Java is just as prevalent, if not more so, in core products.
> Static types, algebraic data types, making illegal states unrepresentable: the functional programming tradition has developed extraordinary tools for reasoning about programs
Looks like the term "functional programming" has been watered down so much that now it is as useful as OOP: not at all.
Look, what matters is pure functional programming. It's about referential transparency, which means managing effects and reasoning about code in a way similar to how you can with math. Static typing is very nice but orthogonal; ADTs and making illegal states unrepresentable are good things, but also orthogonal.
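To make the referential-transparency point concrete, here's a small TypeScript sketch (the function names are mine, purely for illustration):

```typescript
// Referentially transparent: same arguments, same result, always.
// Any call site can be replaced by its value without changing the program.
const area = (r: number): number => Math.PI * r * r;

// Not referentially transparent: the result depends on hidden state
// (the clock), so two calls with identical arguments can differ.
const stamp = (): number => Date.now();

// Purity is what licenses math-style equational reasoning:
const a = area(2) + area(2);
const b = 2 * area(2);
console.log(a === b); // true
```

You can substitute `area(2)` by its value anywhere; you cannot do the same with `stamp()`. That substitution property, not types or ADTs, is the core of "pure".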
What would you say if someone has a project written in, let's say, PureScript, and they use it to generate (and overwrite) the Java code for the backend, which they also keep under version control? If they claim that this is a Java project, you would probably disagree, right? Seems to me that LLMs are the same thing, that is, if you also store the prompt and everything else needed to reproduce the same code generation process. Since LLMs can be made deterministic, I don't see why that wouldn't be possible.
PureScript is a programming language. English is not. A better analogy would be what would you say about someone who uses a No Code solution that behind the scenes writes Java. I would say that's a much better analogy. NoCode -> Java is similar to LLM -> Java.
I'm not debating whether LLMs are amazing tools or whether they change programming. Clearly both are true. I'm debating whether people are using accurate analogies.
> PureScript is a programming language. English is not.
Why can’t English be a programming language? You would absolutely be able to describe a program in English well enough that it would unambiguously be able to instruct a person on the exact program to write. If it can do that, why couldn’t it be used to tell a computer exactly what program to write?
> Why can’t English be a programming language? You would absolutely be able to describe a program in English well enough that it would unambiguously be able to instruct a person on the exact program to write
Various attempts have been made. We got COBOL, BASIC, SQL, … A programming language needs to be formal, and English is not.
I don’t think you can do that. Or at least if you could, it would be an unintelligible version of English that would not seem much different from a programming language.
I agree with your conclusion but I don't think it'd necessarily be unintelligible. I think you can describe a program unambiguously using everyday natural language, it'd just be tediously inefficient to interpret.
To make it sensible you'd end up standardising the way you say things: words, word order, etc., and probably adding punctuation and formatting conventions to make it easier to read.
By then you're basically just at a verbose programming language, and the last step to an actual programming language is just dropping a few filler words here and there to make it more concise while preserving the meaning.
However, I think there is a misunderstanding between being "deterministic" and being "unambiguous". Even C is an ambiguous programming language, but it is "deterministic" in that it behaves in the same ambiguous/undefined way under the same conditions.
The same can be achieved with LLMs too. They are "more" ambiguous of course and if someone doesn't want that, then they have to resort to exactly what you just described. But that was not the point that I was making.
I'm not sure there's any conflict with what you're saying, which I guess is that language can describe instructions which are deterministic while still being ambiguous in certain ways.
My point is just a narrower version of that: where language is completely unambiguous, it is also deterministic when interpreted in some deterministic way. In that sense plain, intelligible English can be a sort of (very verbose) programming language if you just ensure it is unambiguous, which is certainly possible.
It may be that this can still be the case if it's partly ambiguous but that doesn't conflict with the narrower case.
I think we're agreed on LLMs in that they introduce non-determinism in the interpretation of even completely unambiguous instructions. So it's all thrown out as the input is only relevant in some probabilistic sense.
Here's a very simple algorithm: you tell the other person (in English) literally what key they have to press next. So you can easily have them write all the Java code you want in a deterministic and reproducible way.
And yes, maybe that doesn't seem much different from a programming language which... is the point no? But it's still natural English.
English CAN be ambiguous, but it doesn't have to be.
Think about it. Human beings are able to work out ambiguity when it arises between people with enough time and dedication, and how do they do it? They use English (or another equivalent human language). With enough back and forth, clarifying questions, or enough specificity in the words you choose, you can resolve any ambiguity.
Or, think about it this way. In order for the ambiguity to be a problem, there would have to exist an ambiguity that could not be removed with more English words. Can you think of any example of ambiguous language, where you are unable to describe and eliminate the ambiguity only using English words?
Human beings are able to work out the ambiguity because a lot of meaning is carried in shared context, which in turn arises out of cultural grounding. That achieves disambiguation, but only in a limited sense. If humans could perfectly disambiguate, you wouldn't have people having disputes among otherwise loving spouses and friends, arising out of merely misunderstanding what the other person said.
Programming languages are written to eliminate that ambiguity because you don't want your bank server to make a payment because it misinterpreted ambiguous language in the same way that you might misinterpret your spouse's remarks.
Can that ambiguity be resolved with more English words? Maybe. But that would require humans to be perfect communicators, which is not that easy, because again, if it were possible, humans would have learnt to communicate perfectly with the people closest to them first.
A deterministic prompt + seed used to generate an output is interesting as a way to deterministically record how code came about, but it's also not a thing people are actually doing. Right now, everyone is slinging around LLM outputs without trying to be reproducible; no seed, nothing. What you've described and what the article describes are very different.
Yes, you are right. I was mostly speaking in theoretical terms - currently people don't work like that. And you would also have to use the same trained LLM of course, so using a third party provider probably doesn't give that guarantee.
Nope, that is precisely what pure functional programming is about: to turn actions like "draw something to the screen" into regular values that you can store into a variable, pass around, return from a function and so on.
It's not a utopia. It will eventually happen, and it will replace how react.js currently works. effect.website will probably be the foundation.
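A minimal sketch of the "actions as values" idea, with effects modelled as plain thunks. To be clear, this is not the effect.website API, just the underlying concept (the names and the in-memory `log` stand-in are mine):

```typescript
// An effect is just a description of an action; nothing runs
// until an interpreter forces it.
type Effect<A> = () => A;

const log: string[] = []; // stand-in for "the screen"

// Creating the effect performs no I/O; it's an ordinary value you can
// store in a variable, pass around, or return from a function.
const draw = (s: string): Effect<void> => () => { log.push(s); };

// Effects compose into bigger effects, still without running anything.
const andThen = <A, B>(fa: Effect<A>, f: (a: A) => Effect<B>): Effect<B> =>
  () => f(fa())();

const program: Effect<void> = andThen(draw("hello"), () => draw("world"));

console.log(log.length); // 0 -- nothing has happened yet
program();               // the interpreter at the edge performs the I/O
console.log(log);        // [ "hello", "world" ]
```

Libraries like effect.website build a much richer version of this (typed errors, dependencies, fibers), but the core move is the same: the whole program is a value, and only the outermost runner does anything.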
I'm well aware of what "pure functional programming" is about; I spend most of my time in Clojure during a normal work day, and have done my fair share of Haskell too :)
And yes, even the most pure functional language eventually needs to do something non-pure, even if the entire flow up until that point is pure, that last step (IO) just cannot be pure, no matter how badly you want it to.
With that said, you'd have to pry my pure functions out of my cold dead hands, but I'm not living under the illusion that every function can be pure in a typical application, unless you have no outputs at all.
A 400 W, half-decently oriented panel (e.g. south-facing balcony at a 60-degree angle) is sufficient to run your AC to cool your 50 m² apartment during summer for free.
In the summer the optimal angle where I live (35.5º S) is 12º. Panels lying flat on the lawn are losing only 2% from that. Just now, near the equinox, panels lying flat on the lawn lose around 18% at noon.
I'll find a way to prop them up at 50º for winter when the time comes for that (April or May), though that's for sunny conditions. In our typical overcast in winter flat on the ground is probably still fine to catch the most diffuse light. I'll experiment when the time comes.
Right. I mentioned the suboptimal 60 degrees because many people hang the panel from the balcony (90 degrees) and then tilt it a bit, but usually not too much. From what I've seen, 60 degrees seems like a good average number.
Yeah, in northern Germany vertical and 60º are pretty much equally good/bad in winter (75º would be ideal), but 60º will be getting 70% more power in summer than vertical, and is near ideal in spring/autumn.
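The tilt losses quoted in this subthread line up with a simple cosine-incidence model for direct sun at solar noon. A rough sketch (real yield also depends on diffuse light, temperature, and time of day, so treat the numbers as ballpark):

```typescript
// Direct-beam output falls off roughly with the cosine of the angle
// between the actual panel tilt and the optimal tilt.
const relativeOutput = (tiltDeg: number, optimalDeg: number): number =>
  Math.cos(((tiltDeg - optimalDeg) * Math.PI) / 180);

// Summer at ~35.5º latitude: optimal tilt ~12º, panel lying flat (0º):
console.log((1 - relativeOutput(0, 12)).toFixed(3));   // "0.022" -> ~2% loss
// Equinox: optimal tilt roughly equals latitude, panel lying flat:
console.log((1 - relativeOutput(0, 35.5)).toFixed(3)); // "0.186" -> ~18-19% loss
```

Both figures match the ~2% and ~18% losses reported above, which is why small tilt errors cost so little: the cosine is flat near its peak.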
From what I remember, it turned out that the electronics were sufficient, though the chance of issues (e.g. in case of a software bug) was/is increased.
Well, it turned out that the micro-inverter in question did not contain the physical relay necessary to conform with the claimed norms on electrical safety (related to disconnecting from the grid in case of grid failure).
They only had a software implementation and were forced to send all customers in Germany a free relay dongle to ensure safety.
Now imagine you run two AIs (like ChatGPT) on your machine or on a server. You maybe even want them to cooperate on something. How do you do that? Right, you cannot, there is no standard, no interoperability, nothing.
> In general the security model of desktop operating systems is woefully inadequate for the modern era. Given the sheer volume of software known to do things not in the user’s best interest it’s borderline insanity that we hand it the keys to the kingdom without so much as a second thought with such frequency.
This. It must be the problem of having grown up with it that makes people not realize it.
Software will need to operate like people in the real world. You can give your friend power of attorney, but usually you don't, you find a better way to get things done.
Exactly. It’s not so different from how popular desktop operating systems used to be single-user, which turned out to be a security nightmare, and so shifted to a multi-user design. It’s time for the next evolution.