I don't see how it's odd at all. The kinds of changes that would stabilize housing and make growth more sustainable would threaten the interests of many wealthy people, including politicians themselves.
Yet apparently you'll instead sidestep the discussion entirely. Frankly, the more you've tried to answer the question, the less you actually answer it...
I don't see how "rapidly transitioning from a high-trust to a low-trust society" or "she's got 2 bikes stolen ... this would be inconceivable to me during my time living in the same town" reflect failures in Canadian government at all, really.
Has societal trust actually increased anywhere in the developed world? Sure, our governments have had their share of failures, but it would actually take an extraordinary vision and effort to increase societal trust as technology and population advance.
Is it possible your sister had a shockingly unlucky semester? Or that your world model was simply naive and wrong 10 years ago? Hard to say since the anecdote isn't really evidence of anything.
Every store in my town now locks up anything small that costs more than $20 in cages. Talking to some people working there, it was pretty common for people to walk in, take a bunch of shit, and walk out. Drivers are completely out of control. I've witnessed at least 3 people run red lights in the last 2 years, while I can remember only one such incident in the 10 years before that. Signalling is no longer something drivers do - like at all. For the last 2 years teenagers have terrorized the local park on Canada Day shooting fireworks at random passers-by; last year someone even set off fireworks under an occupied baby carriage. Car thefts in Toronto got so bad that people were building retractable bollards in their driveways[1].
I could go on, but there's a clear apparent trajectory to these experiences.
I'm kind of confused by the question. Do you think an unverified commenter on a public website saying "all the stores in my town [not named] do X [but I didn't count]" is a type of hard evidence that I'm arbitrarily rejecting?
No; I think that there's no feasible way that anyone could have hard evidence one way or another for the underlying question, and that you should therefore take anecdotes more seriously.
>Has societal trust actually increased anywhere in the developed world? Sure, our governments have had their share of failures, but it would actually take an extraordinary vision and effort to increase societal trust as technology and population advance.
Japan. Again, depending on where in the country, but things like muggings and drunk driving have drastically decreased in the last 35 years.
If you know you know, and clearly plenty of people who read my original comment do.
Judging from your other comments, you're either wilfully ignorant or actively dishonest, can't tell which, and frankly don't care either way. All I know is it'd be a complete waste of time to try to convince you.
> you're either wilfully ignorant or actively dishonest
I think "willful ignorance" is a good description of accepting impossible-to-verify anecdotes of internet comments as evidence of societal change, personally. But I'm realizing we don't have the same goals in the conversation so I understand why it feels pointless to continue.
Of course that's the reasonable approach, but in reality the US gov't finds mind control useful, and wants to be able to exert influence over it, rather than allow a foreign gov't exert influence over it.
That line made me stop and think. At first I thought it was an exaggeration, but then I realized it was exactly true - however, I don't think the author understands it fully.
If someone offered to sell me that pill, I wouldn't say, "Ah, but you were kind of brusque, I don't think I'll buy it." I would say (well, think, actually), "I don't think I trust you that it's this easy and safe, so I won't buy it."
The key is trust. The insight the author missed is that we more easily trust people who make us feel good, among other things (attraction, social standing, etc.).
> Shelly in Wichita is not going to buy what you’re selling, no matter how good the deal is, if she can clearly hear in your voice how much you hate your job and, by extension, her.
I think this again misses the point. Shelly doesn't necessarily think you hate her, but she has no reason to trust you. If your product was good, and people were better off for buying it, you probably wouldn't hate selling it so much.
For me it was not confusing; I don't think a newly discovered taxonomic Class would make mainstream news. I was expecting something higher-order and the discovery delivered!
Honestly, yes, I'm curious to hear that perspective. The negative responses to "TypeScript makes JS programming fun and easy" are always pretty ill-formed, and I really want to know if there's a genuine argument against it in any complex application. (My suspicion is that no, there is not, but I'm trying to be generous and curious.)
Competent in types, yes. Just like you’d want a team competent in functional programming before starting a project in Haskell.
It would be unfair to consider your team incompetent just because they are experts with another set of tools. It’s also unreasonable to expect these things to be quickly learned (TypeScript types are not friendly). But I think it’s reasonable to explain the benefits of this approach and to help them ramp up and learn the skill.
But, anyway, I understand the frustration. I’m usually the one trying to get my team to understand the value of modeling problems in type systems.
If complex situations arise, they can slap `any` on it; at least it would be explicit, and a marker to revisit in the future.
Is there really that much legwork otherwise? Adding ": string" to a function parameter assumes they know what a string is (which should already be the case), adding an object type assumes knowing what an object is, etc.
There is a big difference between typing your application (e.g. changing (arg) => {} to (arg: string): void => {}) and modeling your application in the type system.
Simply adding types is usually not too difficult and it is still quite beneficial. It does eliminate certain kinds of bugs.
Modeling your application in a type system means making invalid states unrepresentable and being as precise as possible. This is a lot more work, but again it eliminates more kinds of bugs.
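For instance, a minimal sketch of what "making invalid states unrepresentable" can look like, using a hypothetical request-state type (the names are illustrative, not from any particular codebase):

```typescript
// A request can't simultaneously be "loading" and carry an error message,
// because no value of this type represents that combination.
type RequestState<T> =
  | { status: "idle" }
  | { status: "loading" }
  | { status: "success"; data: T }
  | { status: "error"; message: string };

function render(state: RequestState<string[]>): string {
  switch (state.status) {
    case "idle":
      return "Nothing yet";
    case "loading":
      return "Loading...";
    case "success":
      return state.data.join(", ");
    case "error":
      return `Failed: ${state.message}`;
  }
}
```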
An example of this being complex: earlier this week I wrote a generic React component that allows users to define which columns of a table are sortable. I wanted to prevent any invalid configurations from being passed in. This is what it looks like: https://tinyurl.com/bdh6xbp6
It's a bit complex but the compiler can guarantee that you're using the component correctly. This is more important and useful when it comes to business logic.
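For the curious, here's a rough sketch (not the actual linked code) of the kind of constraint being described, using a hypothetical `TableProps` type:

```typescript
// Only real keys of the row type are accepted as sortable columns;
// a typo or unknown column name is a compile-time error.
type TableProps<Row> = {
  rows: Row[];
  sortableColumns?: readonly (keyof Row & string)[];
};

type User = { id: number; name: string };

const ok: TableProps<User> = {
  rows: [{ id: 1, name: "Ada" }],
  sortableColumns: ["name"],
};

// const bad: TableProps<User> = { rows: [], sortableColumns: ["nmae"] }; // rejected
```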
I'm not trained as a programmer/software engineer, but this was ChatGPT's response:
1. Added Boilerplate and Ceremony:
Simple tasks may require extra type declarations and structures, adding “ceremony” that feels unnecessary for quick one-off solutions.
2. Rigid Type Constraints:
Combining different data types or working with unclear data shapes can force complex type solutions, even for simple logic, due to strict compilation rules.
3. Complex Type Definitions for Simple Data:
Handling semi-structured data (like JSON) requires elaborate type definitions and parsing, where dynamically typed languages let you manipulate data directly.
4. Refactoring Overhead:
Small changes in data types can cause widespread refactoring, turning minor edits into larger efforts compared to flexible, dynamically typed environments.
5. Complexity of Advanced Type Systems:
Powerful type features can overwhelm trivial tasks, making a few lines of code in a dynamic language balloon into complex type arguments and compiler hints.
All of those come down to "Let the compiler guess about my data, and it may produce correct results in some cases."
The risk is that unexpected data (an empty field instead of a zero; a real number creeping into an untested corner case where only an integer will actually work; etc.) causes issues after deployment.
Those 'complex' requirements mean that if you want a reliably correct program, you'll have to put in that much work. But go ahead: that 'trivial task' may become something less trivial when it fails during the Christmas sales season or whatever.
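As a concrete (hypothetical) TypeScript sketch of catching that kind of bad input at the boundary instead of letting it flow through:

```typescript
// Validate untrusted input before it reaches business logic: an empty field or a
// fractional value fails loudly here rather than silently becoming 0 or 2.5 downstream.
function parseQuantity(raw: string): number {
  const n = Number(raw);
  if (raw.trim() === "" || !Number.isInteger(n) || n < 0) {
    throw new Error(`invalid quantity: ${JSON.stringify(raw)}`);
  }
  return n;
}

parseQuantity("3");   // 3
parseQuantity("");    // throws, instead of Number("") quietly evaluating to 0
parseQuantity("2.5"); // throws where only an integer will actually work
```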
As someone in a similar position, may I take a tangent? I'm curious what you transitioned into out of programming. The stress of feeling "always behind" is taking its toll on me, and I wonder about another career change often.
I joined the software industry at a small consultancy that needed me to do a lot of different things, including both programming and design. So I got experience doing both of those. When I left the consultancy world in 2016, I had to decide whether to sell myself to employers as either a programmer or a designer—normal companies want you to pick a single lane—so I just focused on my design experience, and started doing that as a day job. I went from a fancy title to a much less fancy title for my first job as a designer, but more or less worked back up from there. I think for most programmers, their fork in the road would be to stay as an individual contributor or become a manager, but I don't want to be a manager, and was lucky to have a different path to fall back on.
> This dichotomy gels really well with the way my brain works. I’m able to channel short bursts of creative energy into precisely mapping the domain or getting type scaffolding set up. And then I’m able to sustain long coding sessions to actually implement the feature because the scaffolding means I rarely have to think too hard.
It's why I keep begging my team, every time there's a new codebase (or even a new feature), to stop throwing `any` onto everything more complicated than a primitive. It is exhausting. It forces me to waste energy on the shitty, tedious parts. It forces me to debug working code just to find out how it works before I can start my work.
They tend to take the quickest solution to everything -- which means everyone else has to do the same work over and over again until someone (me, invariably) sits down and makes a permanent record of it.
In doing this they ensure that I can't trust any of their code, which is counterproductive for what should be obvious reasons. Every time I work on established, untyped (or poorly typed) code, it's like I'm writing new code with hidden, legacy dependencies.
The longer I’m in this industry, the more I find that there are two types of programmers: those who default to writing every program procedurally and those who default to doing so declaratively.
The former like to brag about how quickly they can go from zero to a working solution. The latter brag about how their solutions have fewer bugs, need less maintenance, and are easier to refactor.
I am squarely in the latter camp. I like strong and capable type systems that constrain the space so much that—like you say—the implementation is usually rote. I like DSLs that allow you to describe the solution and have the details implemented for you.
I personally think it’s crazy how much of the industry tends toward the former. Yes, there are some domains where the time from zero to a working product is critical. And there are domains where the most important thing is being able to respond to wildly changing requirements. But so much more of our time and energy is spent maintaining code than writing it in the first place that upfront work like defining and relating types rapidly pays dividends.
I have multiple products in production at $JOB that have survived nearly a decade without requiring active maintenance other than updating dependencies for vulnerabilities. They have had a new version deployed maybe 3-5 times in their service lives and will likely stay around for another five years to come. Being able to build something once and not having to constantly fix it is a superpower.
> Yes, there are some domains where the time from zero to a working product is critical. And there are domains where the most important thing is being able to respond to wildly changing requirements
I agree with your observations, but I'd suggest it's not so much about domain (though I see where you're coming from and don't disagree), but about volatility and the business lifecycle in your particular codebase.
Early on in a startup you definitely need to optimize for speed of finding product-market fit. But if you are successful then you are saddled with maintenance, and when that happens you want a more constrained code base that is easier to reason about. The code base has to survive across that transition, so what do you do?
Personally, I think overly restrictive approaches will kill you before you have traction. The scrappy shoot-from-the-hip startup on Rails will beat the Haskell code craftsmen 99 out of 100 times. What happens next though? If you go from 10 to 100 to 1000 engineers with the same approach, legibility and development velocity will fall off a cliff really quickly. At some point (pretty quickly) stability and maintainability become critical factors that impact speed of delivery. This is where maturity comes in—it's not about some ideal engineering approach, it's about recognition that software exists to serve a real world goal, and how you optimize that depends not only on the state of your code base but also the state of your customers and the business conditions that you are operating in. A lot of us became software engineers because we appreciate the concreteness of technical concerns and wanted to avoid the messiness of human considerations and social dynamics, but ultimately those are where the value is delivered, and we can't justify our paychecks without recognizing that.
Sure it’s important for startups to find market traction. But startups aren’t the majority of software, and even startups frequently have to build supporting services that have pretty well-known requirements by the time they’re being built.
We way overindex on the first month or even week of development and pay the cost of it for years and years thereafter.
I'm not convinced that this argument holds at all. Writing good code doesn't take much more time than writing crap code, it might not take any more time at all when you account for debugging and such. It might be flat out faster.
If you always maintain a high standard you get better and faster at doing it right and it stops making sense to think of doing it differently as a worthwhile tradeoff.
Is it worth spending a bit more time up-front, hoping to prevent refactoring later, or is it better to build a buggy version then improve it?
I like thinking with pen-and-paper diagrams; I don't enjoy the mechanics of code editing. So I lean toward upfront planning.
I think you're right but it's hard to know for sure. Has anyone studied software methodologies for time taken to build $X? That seems like a beast of an experimental design, but I'd love to see.
I personally don't actually see it as a project management issue so much as a developer issue. Maybe I'm lucky, but in the projects I've worked on, a project manager generally doesn't get involved in how I do my job. Maybe a tech lead or something lays down some ground rules like test requirements etc., but at the end of the day it's a team effort; we review each other's code and help each other maintain high quality.
I think you'd be hard pressed to find a team that lacks this kind of cooperation and maintains consistently high quality, regardless of what some nontechnical project manager says or does.
It's also an individual effort to build the knowledge and skill required to produce quality code, especially when nobody else takes responsibility for the architectural structure of a codebase, as is often the case in my experience.
I think that in order to keep a codebase clean you have to have a person who takes ownership of the code as a whole, has plans for how it should evolve etc. API surfaces as well as lower level implementation details. You either have a head chef or you have too many cooks, there's not a lot of middle ground in my opinion.
I hear you, and agree there’s not much overhead in basic quality, but it’s a bit of a strawman rebuttal to my point. The fact is that the best code is code that is fit for purpose and requirements. But what happens when requirements change? If you can anticipate those changes then you can make implementation decisions that make those changes easier, but if you guess wrong then you may actually make things worse by over-engineering.
To make things more complicated, programmers need practice to become fluent and efficient with any particular best practice. So you need investment in those practices in order for the cost to be acceptable. But some of those things are context dependent. You wouldn’t want to run consumer app development the way you run NASA rover development because in the former case the customer feedback loop is far more important than being completely bug free.
I always try to design for current requirements. When requirements change I refactor if necessary. I don't try to predict future requirements but if I know them in advance I'll design for them where necessary.
I try to design the code in a modular way. Instead of trying to predict future requirements I just try to keep everything decoupled and clean so I can easily make arbitrary changes in the future. Sometimes a new requirement might force me to make large changes to existing code, but most often it just means adding some new stuff or replacing something existing that I've already made easy to replace.
For example I almost always make an adapter or similar for third-party dependencies. I will have one class where I interact with the api/client library/whatever, I will avoid taking dependencies on that library anywhere else in my code so if I ever need to change it I'll just update/replace that one class and the rest of my code remains the same.
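A minimal sketch of that pattern in TypeScript, with a hypothetical mail SDK standing in for the third-party dependency (the library name and its `deliver` method are made up for illustration):

```typescript
import { ThirdPartyMailClient } from "some-mail-sdk"; // hypothetical third-party library

// The rest of the codebase depends on this small interface, never on the SDK itself.
export interface MailSender {
  send(to: string, subject: string, body: string): Promise<void>;
}

// The only file that imports the SDK. Swapping vendors means rewriting just this class.
export class SdkMailSender implements MailSender {
  constructor(private client: ThirdPartyMailClient) {}

  async send(to: string, subject: string, body: string): Promise<void> {
    await this.client.deliver({ recipient: to, subject, html: body }); // hypothetical SDK call
  }
}
```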
I've had issues in codebases where someone else doesn't do that: they'll use some third-party library in multiple different components, practically make the data classes of that library part of their own domain, and have workarounds for the library's shortcomings all over the place. So when we need to replace it, or an update contains breaking changes, or something like that, it's a big deal.
There are a lot of things like this you can do that don't really take much extra time but make your code a lot simpler to work with in general and a lot easier to change later. It has lots of benefits even if the library never gets breaking changes or needs to be replaced.
Same thing for databases, I'll have a repository that exposes actions like create, update, delete etc and if we ever need to use a different db or whatever it's easy. Just make a new repository implementation, hook it up and you're done. No SQL statements anywhere else, no dependency on ORMs anywhere else, I have one place for that stuff.
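A minimal sketch of that shape, with a hypothetical `User` model (any real implementation would live in exactly one class):

```typescript
// All persistence details hide behind this interface, so swapping the database
// means writing one new implementation and wiring it up.
export interface UserRepository {
  create(user: { name: string; email: string }): Promise<string>; // returns the new id
  findById(id: string): Promise<{ id: string; name: string; email: string } | null>;
  delete(id: string): Promise<void>;
}

// e.g. SqlUserRepository holds all the SQL; an InMemoryUserRepository can back tests.
```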
When I organize a project this way I find that nearly every future change I need to make is fairly trivial. It's mostly just adding new things, and I have a place for everything already, so I don't even need to spend energy thinking about where it belongs or whatever - I already made that decision.
Well said. This summarizes my experience quite succinctly. Many an engineer fails to understand the importance of distinguishing between these different tempos and between immediate vs. long-term goals.
A strong type system is your knowledge about the world, or more precisely, your modeled knowledge about what this world is or contains - the focus is more on data structures and data types, and that's about as declarative as it gets with programming languages(?). I'd also call it holistic.
A procedural approach focuses more on how this world should be transformed, through conditional branching and algorithms. The focus feels less on the circumstances of this world and more on temporary conditions of micro-states (if that makes any sense). I'd call it reductionistic.
I love strong types. I love for loops. I love stacks.
GP! Try Rust. Imperative programming isn’t orthogonal to types. You can go hard in Rust. (I loved experimenting with it but I like GC)
GP! Try data driven design. Imperative programming isn’t orthogonal to declarative.
Real talk, show me any declarative game engine that’s worth looking at. The best ones are all imperative and data driven design is popular. Clearly imperative code has something going for it.
and the advantages aren’t strictly speed of development, but imperative can be clearer. It just depends.
I adore Rust. My point isn’t that you can’t have both, but that the two types of programmers have different default approaches to problem solving. One prefers to model the boundaries of domain as best they can (define what it should look like before implementing how it works), one prefers to do things procedurally (implement how it works and let “what it looks like” emerge as a natural result).
Neither is strictly wrong or right, better or worse. They have different strengths in different problem areas, though I do think we’ve swung far too hard toward the procedural approach in the last decade.
It's the difference between "how?" and "what?". A procedural approach describes the steps you take to do something, but not what problem you want to solve or why doing this solves it. A declarative approach, on the other hand, describes the goal and intended solution first, and then tries to derive a proper procedure to achieve the goal.
The two approaches have their own pros and cons, but they aren't mutually exclusive. Sometimes the goal and solution aren't that clear, so you work procedurally until you find a POC (proof of concept) that may actually solve the problem, and then refine it in a declarative way.
I think GPs point is that they haven't gone from zero to a working solution, they've gone from zero to N% towards a working solution and then slowed down everyone else. Maybe for the most trivial programs they can actually reach a solution.
You can't write a program without knowing that x is a string or a number, your only choice is whether you document that or not.
Yes you can, you handle every case equally. You don’t even need the reflection mechanisms to be visible to the user with a good type system. A good type system participates in codegen.
for a really simple example: languages which allow narrowing a numeric to a float, but also let you interpolate either into a string without knowing which you have.
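In TypeScript terms, a tiny sketch of handling either case the same way (a deliberately trivial example):

```typescript
// The function neither knows nor cares whether it got a string or a number;
// template interpolation handles both identically.
function label(x: string | number): string {
  return `value: ${x}`;
}

label(42);      // "value: 42"
label("forty"); // "value: forty"
```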
A statically typed Console.log in JS/TS would be an unnecessary annoyance.
I think TypeScript is part of the problem here. It's a thin layer atop a dynamically typed language with giant escape hatches and holes. I think it's great if you're stuck in JS, it's so much better than JS, but I can't think why anyone would choose it compared to a "real" statically typed language.
It is actually a rather hard question. There is a web page somewhere where the author asks it, lists possible answers, and gets amazed by some of the definitions, such as "declarative is parallelizable". Cannot find it now, unfortunately.
I would say that imperative is the one that does computation in steps, so that one can at each step decide what to do next. Declarative normally lacks this step-like quality. There are non-languages that consist solely of steps (e.g. macros in some tools that let you record a sequence of steps), but while this is indeed imperative, it is not programming.
One side cares more about how the solution is implemented. They put a lot of focus on the stuff inside functions: this happens, then that happens, then the next thing happens.
The other side cares more about the outside of functions. The function declarations themselves. The types they invoke and how they relate to one another. The way data flows between parts of the program, and the constraints at each of those phases.
Obviously a program must contain both. Some languages only let you do so much in the type system and everything else needs to be done procedurally. Some languages let you encode so much into the structure of the program that by the time you go to write the implementations they’re trivial.
You don't even need a loop. Steps, conditions, and a 'goto'. Loops are actually a design mistake: they try to bound 'goto' by making it structured. They are declarative, by the way. As a special case or even as a common case they are fine, but not when they try to completely banish 'goto'. They are strictly secondary.
Similarly declarative programming is strictly secondary to imperative. It is a limited form of imperative that codifies some good patterns and turns them into a structure. But it also makes it hard or impossible not to use these patterns.
I am also squarely declarative, but currently use a language for work that forces me to be procedural pretty much always and it kinda sucks. My code always feels bad to me and the cognitive load is always super high
Is it the language that forces procedural code? In my experience it’s usually the stdlib, but the language itself is capable of declarative constructs outside of existing APIs. If that’s the case, an approach like “functional core, imperative shell” is often a good one. You can treat the stdlib like it’s any other external API, and wall it off as such.
There is no stdlib. It's a very specific, proprietary, purpose-built language that's been around since like the 90s. It has a super limited set of standard functions that operate on an underlying proprietary data structure, and everything else is just a thin veneer over a very limited set of C functions.
> I personally think it’s crazy how much of the industry tends toward the former.
It's because most people who use technology literally don't care how it works. They have real, physical problems in the real world that need to be solved and they only care if the piece of technology they use gives them the right answer. That's it. Literally everything programmers care about means nothing to the average person. They just want the answer. They might care about performance if they have to click the same button enough times, and maybe care about bugs if it's something that is constantly in their face. But just working is enough...
I'm thinking more along the lines of how scripting languages are often used in, say, scientific domains (Python, R, etc...). Or how JavaScript and Ruby are more popular than, say, Rust and Haskell for startups.
"Poorly typed" means different things to different people, in the context of this article and thread it would probably mean weakly typed or dynamically typed? Which has nothing to do at all with the correctness of a formula or what output a program will produce.
Declarative programming is essentially programming through a parameter. The declaration is that parameter that will be passed to some instruction. In small doses declarative programming occurs with every function call. In declarative programming the parameter is essentially the whole program and the instruction is implicit; we know more or less how it works, but generally assume it just exists or even forget about it and take it as the way things work.
Of course declarative programming is simpler and less error prone. But it is also essentially inflexible. The implicit instruction is finite and will inevitably run into a situation when the baked execution logic does not quite fit. It will be either inefficient or require a verbose and repetitive parameter, or just flat out incapable of doing what is desired. In this case declarative programming fails; it is impossible to fix unless we rewrite the underlying instruction.
E.g. 'printf' is a small example of declarative programming. It works rather well, especially when the compiler is smart about type checks, but once you want to vary the text conditionally it fails. (The things that replace 'printf' are template engines, which basically reimplement the same logic and control statements you already have in any language, with the engine working as an interpreter of that logic. The logic is rather crude and limited, and the finer details of formatting are left to callbacks that are mostly procedural.) For example, how do I format a list so that I get "A" for 1, "A and A" for 2, and "A, A, and A" for more? Or how do I format a number so that the thousands separator appears only if the number is greater than 9999? Or what do I do if I have UTF-8 output, but some strings I need to handle are UTF-16? The existing declarative way did not foresee these cases, and adding them to the current model would complicate it substantially. But if I have a simple writer that writes basically numbers and strings, I can very quickly write procedures for these specific cases.
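As a quick illustration of that first case, a small procedural helper (a throwaway sketch) does in a few lines what a fixed declarative format string can't express:

```typescript
// "A" for one item, "A and A" for two, "A, A, and A" for three or more.
function formatList(items: string[]): string {
  if (items.length === 0) return "";
  if (items.length === 1) return items[0];
  if (items.length === 2) return `${items[0]} and ${items[1]}`;
  return `${items.slice(0, -1).join(", ")}, and ${items[items.length - 1]}`;
}

formatList(["A"]);           // "A"
formatList(["A", "A"]);      // "A and A"
formatList(["A", "A", "A"]); // "A, A, and A"
```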
Instructions are primary by their nature. A piece of data on its own cannot do anything. It always has an implicit instruction that will handle it. So instructions are the things we have to master.
Yep, adopting strict after the fact is a different conversation, but one that has been talked about a bunch and there is even tooling to support progressive adoption.
Types that are too complex... hmmmm - I'm sure this exists in domains other than the bullshit CRUD apps I write. So yeah, I guess I don't know what I don't know here. I've written some pretty crazy types though, not sure what TypeScript is unable to represent.
Progressive code QA in general is IMO an underexplored space. Thankfully linters have now largely given way to opinionated formatters (xfmt, black, clang-format) but in the olden days I wished there was a way to check in a parallel exemptions file that could be periodically revised downward but would otherwise function as a line in the sand to at least prevent new violations from passing the check.
I'd be interested in similar capabilities for higher-level tools like static analyzers and so on. The point is not to carry violations long term, but to be able to burn down the violations over time in parallel to new development work.
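One low-tech version of that "line in the sand" is to check in a baseline count and fail CI only when the count grows. A rough sketch (the file name and how the current count is produced are assumptions, not any particular tool's convention):

```typescript
// ratchet.ts - compare the current violation count against a checked-in baseline.
import { readFileSync } from "node:fs";

const baseline = Number(readFileSync("lint-baseline.txt", "utf8").trim()); // e.g. "142"
const current = Number(process.argv[2]); // current count, supplied by the CI step

if (current > baseline) {
  console.error(`New violations introduced: ${current} > baseline ${baseline}`);
  process.exit(1);
}
// Revising lint-baseline.txt downward over time burns down the existing violations.
```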
This is how we introduced and work with clang-tidy. We enabled everything we eventually want to have, then individually disabled currently failing checks. Every once in a while, someone fixes an item and removes it from the exclusion list. The list is currently at about half the length we started out with.
You could try to craft your own type to match google's schema or hunt down 3rd party types, but just doing `(window as any)["__grecaptcha_cfg"]` gets the job done much faster and it's fairly isolated from the rest of the code so it doesn't matter too much.
You don't have to provide complete types. If you know what you need to access, and what type to expect (you darn well should!), you only have to tell TypeScript about those specific properties and values.
Generally, the conveniences of allowing any are swamped by the mess it accumulates in a typical team.
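A hedged sketch of what that can look like for the `__grecaptcha_cfg` case above; the inner shape here is a placeholder for illustration, not Google's actual schema:

```typescript
// Augment Window with only the property and fields you actually read.
declare global {
  interface Window {
    __grecaptcha_cfg?: {
      clients?: Record<string, unknown>; // hypothetical field, typed only as far as needed
    };
  }
}

// Safe, typed access; no `any` needed.
export const cfgClients = window.__grecaptcha_cfg?.clients ?? {};
```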
Agreed; with 3rd party APIs I type down to the level of the properties I actually need. And when I use a property but don't care what the type is, I use `unknown`. That will throw an error if the type matters, in which case, you can probably figure out what's needed. Although I agree with the article that sometimes fudging the rules is acceptable, it's extremely rare that a 3rd party API is so difficult it's worthwhile. And enforcing `any` as an error means you have to be very intentional in disabling the linter rule for a particular line if that really is the best option.
When you quarantine third party code with an adapter (which you should probably be doing anyway), you can make your adapter well-typed. This is not hard to do, and it pays dividends.
TypeScript has "unknown" for this, forcing you to cast it, possibly to any, every time you use it. A much better type for your variables of unknown type!
Yeah, those are few and far between, generally there will be a DefinitelyTyped for anything popular, and you start choosing libs that are written in TypeScript over ones that aren't.
But for your own handwritten application code, there is no excuse to use `any`.
For large code bases, the team has to pay the piper one way or the other. Pay up front with static typing or pay later with nearly infinite test cases to prove that it all works. To be sure, just because you're using a statically typed language does not mean that the code is bug free. It just means that it should all be correct with respect to types.
Of course, my response as a human to those rules and that prompt would be, "Hey - don't harm anyone."
I do not know if it breaks rule 2 or not; as a human I don't have to figure that out before responding. But all my subconscious processing deprioritizes such a judgment and prioritizes rule 1.
> The rules of ethics laid out are mutually incompatible.
Prioritization is part of the answer, for a human. You cannot ever have 2 equally-weighted priorities (in any endeavor). Any 2 priorities in the same domain might at any time come into conflict, so you need to know which is more important. (Or figure it out in real-time.)