Knowledge gets taught in specific institutions exactly because street knowledge is quite often incorrect, as in this case, spreading urban myths built on shaky foundations.
It's a variable simply because it doesn't refer to a specific object, but to any object assigned to it, either as a function argument or as the result of a computation.
It's in fact us programmers who are the odd ones out, compared to how the word "variable" has been used by mathematicians and logicians for a long time.
Making a distinction between pure and effectful functions doesn't require any kind of effect system though.
Having a language where "func" defines a pure function and "proc" defines a procedure that can perform arbitrary side effects (as in any imperative language, really) would still be really useful, I think.
> Having a language where "func" defines a pure function and "proc" defines a procedure that can perform arbitrary side effects (as in any imperative language, really) would still be really useful, I think.
Rust tried that in the early days; the problem is that no one can agree on exactly which side effects make a function impure. You pay almost all the costs of a full effect system (and even have to add an extra language keyword) but get only some of the benefits.
The definition I’ve used for my own projects is that anything that touches anything outside the function, or that in any way outlives the function, is impure. It works pretty well for me. That is: no I/O; mutating a function-local variable is okay, but no touching other memory state (and that variable cannot outlive the return); the same function on the same input always produces the same output; and no calling of impure code from within pure code. Notice this makes closures and currying impure unless done explicitly during function instantiation, making those things at least nominally part of the input syntactically. YMMV.
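To make that concrete, a minimal Haskell sketch under exactly that definition (local mutation is fine as long as nothing escapes, no I/O):

```haskell
import Control.Monad.ST (runST)
import Data.STRef (modifySTRef', newSTRef, readSTRef)

-- Pure under the above definition: mutates a local reference, but nothing
-- escapes the function and the same input always gives the same output.
sumTo :: Int -> Int
sumTo n = runST $ do
  acc <- newSTRef 0
  mapM_ (\i -> modifySTRef' acc (+ i)) [1 .. n]
  readSTRef acc

-- Impure under the same definition: it touches the world outside (I/O).
sumToLogged :: Int -> IO Int
sumToLogged n = do
  putStrLn ("summing to " ++ show n)
  pure (sumTo n)
```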
Who cares? That's just semantics. If we define science as the systematic search for truths, then mathematics and logic are the paradigmatic sciences. If we define it as only the empirical search for truth, then perhaps that excludes mathematics, but it's an entirely uninteresting point, since it says nothing.
To be fair, presumably debug printing could be "escaped" from the effect type checking if the designer of an effect system wanted that. For instance, debug printing in Haskell completely sidesteps the need for the IO monad and just prints in whatever context.
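Concretely, that escape hatch is Debug.Trace; trace has a pure-looking type and prints as a side effect when the expression is evaluated:

```haskell
import Debug.Trace (trace)

-- trace :: String -> a -> a, so it drops straight into "pure" code;
-- the print happens as a side effect when the expression is evaluated.
fib :: Int -> Int
fib n
  | n < 2     = n
  | otherwise = trace ("fib " ++ show n) (fib (n - 1) + fib (n - 2))
```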
yeah, most times it's solved by side-stepping the strict type system and making an exception for debug prints. but this is not a real practical solution, this is a stupid workaround born from overinsistence on "beautiful" design choices.
It seems to me like a pragmatic compromise and very much a real solution. What would you consider a real solution that doesn’t overinsist on beautiful design choices?
putting the strong static type system into an optional compiler pass. yes, I know this may be null in some cases, let me run my program for now, I know what I am doing. yes, there are unhandled effects or a wrong signature, just let me run my test. yes, that type is too generic, I will fix it later, let me run my god damn program.
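(for what it's worth, GHC's -fdefer-type-errors is roughly this: type errors become warnings, and they only blow up at runtime if the offending code is actually evaluated. a small sketch:)

```haskell
{-# OPTIONS_GHC -fdefer-type-errors #-}
module Main where

-- Ill-typed, but with the flag above it only fails if it is evaluated.
broken :: Int
broken = "not an int"

main :: IO ()
main = do
  putStrLn "the well-typed path still runs"
  print broken  -- reaching this raises a runtime error carrying the type error
```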
This puts a lot of extra conditions on the runtime; you basically have to implement a "dynamically typed" runtime like JavaScript's. In doing so you lose a lot of performance. Google has invested something like a century of man-hours into V8, and on typical benchmarks its performance is about half of Java's, which in turn is typically about half of C's / Rust's. That's a pretty big compromise for some.
This is basically what we already have in Haskell. Debug functions that sidestep the typing system can be annotated with a warning, and you can make that warning an error when compiling for production.
And in a more general sense, you can ask the compiler to forbid escape hatches altogether.
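A rough sketch of that setup in GHC, assuming a project-local wrapper named debugTrace (the name is made up; the WARNING pragma and -Werror escalation are the actual mechanisms):

```haskell
module DebugLog (debugTrace) where

import qualified Debug.Trace

-- Any use of debugTrace emits a compiler warning; release builds can be
-- compiled with -Werror=deprecations (the group WARNING pragmas report
-- under) so leftover debug prints fail the build.
{-# WARNING debugTrace "debug print left in code; remove before release" #-}
debugTrace :: String -> a -> a
debugTrace = Debug.Trace.trace
```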
I found "A Philosophy of Software Design" to be a well-intentioned but somewhat frustrating book to read.
It seemingly develops a theory of software architecture that is getting at some reasonable stuff, but does so without any reference _at all_ to the already rich theories for describing and modeling things.
I find software design highly related to scientific theory development and modeling, and related to mathematical theories like model theory, which give precise accounts of what it means to describe something.
Just take the notion of "complexity": reducing that to _just_ cognitive load seems to be a very poor analysis, when simple/complex ought to deal with the "size" of a structure, not how easy it is to understand.
The result of this poor theoretical grounding is that what the author of A Philosophy of Software Design presents feels very ad hoc to me, and the summary presented in this article feels similarly ad hoc.
> Just take the notion of "complexity": reducing that to _just_ cognitive load seems to be a very poor analysis, when simple/complex ought to deal with the "size" of a structure, not how easy it is to understand.
Preface: I'm likely nitpicking here; the use of "_just_" is enough for me to mostly agree with your take.
Isn't the idea that the bulk of complexity IS in the understanding of how a system works, both how it should work and how it does work? We could take the Quake Fast Inverse Square Root code, which is simple in "size" but quite complex in how it actually achieves its outcome. I'd argue it requires comments, tests, and/or clarifications to make sense of what it's actually doing.
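For illustration, here's that routine roughly transcribed into Haskell (a sketch with the classic constant and a single Newton step, not the original C): a handful of trivial operations whose combined effect is anything but obvious.

```haskell
import Data.Bits (shiftR)
import GHC.Float (castFloatToWord32, castWord32ToFloat)

-- A cast, a shift, a subtraction, one Newton-Raphson step: tiny in "size",
-- but understanding why it approximates 1/sqrt(x) takes real explanation.
fastInvSqrt :: Float -> Float
fastInvSqrt x = y * (1.5 - 0.5 * x * y * y)
  where
    bits = castFloatToWord32 x
    y    = castWord32ToFloat (0x5f3759df - (bits `shiftR` 1))
```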
How do we measure that complexity? No idea :) But I like to believe that's why the book takes a philosophical approach to the discussion.
I agree the arguments in the book largely "make sense" to me, but I found it a little hand-wavy about actually proving its points without concrete examples. I don't recall there being any metrics or measurements of improvement either, making it a philosophical discussion to me rather than a scientific exercise.
I mean, we can definitively talk about simplicity/complexity in a fairly easy way when it comes to mathematical structures or data structures in my opinion.
For instance, a binary tree that contains just a root node is clearly simpler than a binary tree with three nodes, if we take "simple" to mean "with fewer parts" and "complex" to mean "with more parts". Similarly, a "molecule" is more complex than an "atom".
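As a trivial sketch of "complexity as number of parts":

```haskell
-- One crude but perfectly well-defined "size" of a structure: count its parts.
data Tree a = Leaf | Node (Tree a) a (Tree a)

size :: Tree a -> Int
size Leaf         = 0
size (Node l _ r) = 1 + size l + size r

-- size (Node Leaf 'a' Leaf)                                  == 1
-- size (Node (Node Leaf 'a' Leaf) 'b' (Node Leaf 'c' Leaf))  == 3
```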
This is a useful definition, I think, because when we write computer programs they are always written in some programming language, with a syntax that yields some kind of abstract syntax tree, so ultimately we'll always have _some_ kind of graph-like nature to the computer program, both syntactically and semantically, and graphs surely permit the same kind of complexity metrics.
I'm not saying measuring the number of nodes is _the_ way of getting at complexity, I'm just pointing out that there's no real difficulty in defining it.
Complexity means more stuff, and we simply take it as a premise that we can only fit so much stuff in our head at the same time.
I think my issue with this generalization is assuming the code itself is where complexity is measured and applied.
For example, the Quake Fast Inverse Square Root[1] takes into account nuances in how floating point numbers can be manipulated. The individual operations/actions the code takes (type casts, bit shifts, etc.) are simple enough, but understanding how it all comes together is where the complexity lies, vs just looking at the graph of operations that makes up the code.
Tools like RuboCop for Ruby take an approach like you mention, measuring cyclomatic and branch complexity to produce a mathematical measure of your code's complexity. How useful that is, is another conversation, I think. I usually find enforcing rules based on such complexity measurements to be subjective.
Going back to the article, the visualization of code with vs. without abstractions shows how aggregating the mathematical representation of the code can help tackle complexity. Abstractions let you take a group of nodes and consider them as a single node, allowing you to build super-graphs covering the underlying structure of each part of the program.
> both syntactically and semantically
I do want to cover semantic program complexity at some point as a deeper discussion. I find that side quite interesting, including how to measure it.
While the tools you talk about sound interesting, to me this was more about an in-principle possible measurement rather than something we'd actually carry out.
I think the point is that "more stuff" in the program code and in the spec means more stuff to keep track of, and so we want to minimize complexity to maintain tractability.
> related to mathematical theories like model theory, which give precise accounts of what it means to describe something
Perhaps too precise? APoSD is about the practical challenges of large groups of people creating and maintaining extensive written descriptions of logic. Mathematical formalisms may be able to capture some aspects of that, but I'm not sure they do so in a way that would lend real insight.
"How can I express a piece of complicated logic in an intuitive and easy-to-understand way" is fundamentally closer to writer's craft than mathematics. I don't think a book like this would ever be mathematically grounded, any more than a book about technical writing would be. Model theory would struggle to explain how to write a clear, legible paragraph.
I think model theory is a really good source of theory to ground the notion of modules.
The relation between an interface and an implementation to me is very much the same as between a formal theory and a model of that theory.
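One way to make that analogy concrete in code (a sketch; the Stack name and its informal laws are made up for illustration): a type class plays the role of the theory/interface, and an instance plays the role of a model/implementation of it.

```haskell
-- The class is the "theory": a signature plus (informally) the laws
-- any model of it must satisfy.
class Stack s where
  empty :: s a
  push  :: a -> s a -> s a
  pop   :: s a -> Maybe (a, s a)

-- The instance is a "model" of that theory: one concrete structure
-- that satisfies the interface.
newtype ListStack a = ListStack [a]

instance Stack ListStack where
  empty                    = ListStack []
  push x (ListStack xs)    = ListStack (x : xs)
  pop (ListStack [])       = Nothing
  pop (ListStack (x : xs)) = Just (x, ListStack xs)
```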
I agree that in practice you'd want to use heuristics for this, but I think the benefits would be similar to learning a little bit of formal verification, like TLA+: it's easier to shoot from the hip if you've studied how to translate some simpler requirements into something precise.
For a book like this you'd probably not need more than first order logic and set theory to get a sense of how to express certain things precisely, but I think making _reference to_ existing mathematics as what grounds our heuristics would've been beneficial.
I haven't read it myself but I probably will because I have a lot of hope for this topic (there must be a better way to do this!)
I worry that it doesn't much matter if it's perfect or mediocre, though, because there's a huge contingent of project managers who mock _any_ efforts to improve code and refuse to even acknowledge that there's any point to doing so - and they're still the ones running the asylum.
Project managers shouldn't be running engineering. They are there to keep the trains running on time, not to design the track, trains and stations.
The generally accepted roles are: Product decides what we need to build, Design decides how it should work from the user's perspective, and Engineering decides how to build it at a reasonable upfront and maintenance cost. This involves a fair amount of influence, because Engineering is better equipped to describe the cost tradeoffs than any other function. Of course this comes with the responsibility of understanding the big picture and where the business wants to go.

IMHO you should not be speaking to project management about code quality; you should maintain ground-level quality as you go. Bigger refactoring/cleanup needs to be presented to Product leadership (not project managers) in terms of shoring up essential product complexity so it's easier for customers to use, generates less support load, and gives a simpler foundation for the next wave of features. Never talk about code with non-technical stakeholders.
I disagree fundamentally with the modern division of labor. I’ve been around long enough to understand that it doesn’t actually have to work like this.
I don’t think you can be an expert in generic “Product” just like I don’t think you can be a generic management expert.
And I don’t think you can decide what to build or how it should work from a user perspective without taking into account how it’s built. In many ways I think how it’s built tends to inform what it should do more than the other way around.
However, Product alone is never the cause of bad software in my experience. It’s always product plus an engineer who refuses to push back on the initial proposal.
In most cases when product and design come to you with a feature, and all the solutions you can come up with are going to add tech debt or take forever, you should step back and talk to Product about the problem they are actually trying to solve.
If you go back to product with “I can build this very similar feature that will get you 90% of the way there, but will take 1/2 as long and not create maintenance problems down the line”, they will almost always be happy with that.
The real problems are caused when an engineer says immediately “yep I can build that in 2 weeks” and starts trying to force their solution through by telling everyone that product insisted on this specific feature in this specific timeline that unfortunately can only be done in the way they’ve designed. And then they tell product that they have a solution but are being blocked.
Agree on the one engineer overpromising. But you can talk about a product without knowing how it's built in finer detail. What can and can't be done, though, is the realm of engineering. Then the filtered list can be reduced by product to the ones that are important. So it's actually a spiral: (product) here's what I would like -> (eng) here's how it can be done -> (product) let's go with this one -> (eng) here is the plan -> etc.
This is exactly what I assume when I see people blame their tech debt on others.
We do the work, we are responsible for whatever it is. Sure, maybe sometimes you begrudgingly just have to do something you're told, but in my experience there's almost always room for discussion and suggestions. I think most devs just don't care. They do what they're told and blame others if it's ass.
The author is describing less a theory and more a framework or system of heuristics based on extensive practical experience. There's no need for rigor if it's practical and useful. I think your desire for grounding in something "scientific" or "mathematical" is maybe missing the forest for the trees a bit. Saying this as someone with loads of practical software development experience and loads of math experience. I just don't find that rigor does much to help describe or guide the art of software. I do think Ousterhout's book is invaluable.
My issue stems from feeling that a lot of the terminology introduced by the author ends up being used in different ways in different paragraphs.
It didn't feel like a thought through whole, and I felt somewhat punished for trying to read along attentively.
I also found there to be a frequent conflation of, e.g., the notion of a module and a classic OOP class; to me it seemed like the author treated them interchangeably.
To me there's enough theoretical computer science that can be used to help ground the terminology, even if it's just introduced cursorily and with a reference for further reading. But at least then there'd be more consistency.
I'm not sure I think the book is invaluable, but I think it's a good contribution to the subject.
I'm very mathematically inclined, so I would probably want a "proper" treatment of this subject to include formal logic, set theory, type theory, and model theory, but those are also subjects I'm still familiarizing myself with.
My basic pitch is that, to a large degree, writing sensible computer programs is about modeling some real-life activity that the computer program will be part of, and describing things accurately has been done in fields other than programming for many hundreds if not thousands of years, so there's a deep well to draw from.
Despite my appetite for a dry and mathematical treatment of writing computer programs, I still think the book is good for what it is. I think I would go easier on the book if it were not for the title, because philosophy is precisely one of those subjects that tend to favor being very precise about things, something I distinctly think the book lacks. What the book is, however, is an excellent _sketch_ on what we'd want out of program design. I definitely agree about the author's notion of "deep modules" being desirable.
When a bug like this can cause real world harm, we can't just bumper car program our way out of things. As engineers we should be able to provide real guarantees.
I do agree; for safety-critical systems, you're right.
But I don’t think we are engineers. Software dev isn’t like engineering. You can’t change the structure of a bridge after it has been built by deploying code to prod in a minute. Software dev is just software dev, it’s not engineering or science. It has some parallels with craftsmanship, but it’s unique.
Types give you static proof where tests only give partial inductive evidence. I cannot _fathom_ why people would prefer tests over types where types do the job, for any reason other than sheer ignorance.
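A small example of what I mean (Haskell, using base's Data.List.NonEmpty): the type below rules out the empty-list case statically, something no finite test suite for a partial head :: [a] -> a can guarantee.

```haskell
import Data.List.NonEmpty (NonEmpty ((:|)))

-- Total by construction: the type guarantees at least one element,
-- so there is no empty-list failure case left to test for.
safeHead :: NonEmpty a -> a
safeHead (x :| _) = x
```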
Saying that the language has GC just because it has opt-in reference counting is needlessly pedantic