> I can't speak for Go's genesis within Google, but outside of Google, this underanalysed political stance dividing programmers into "trustworthy" and "not" underlies many arguments about the language.
This summarises an opinion of Go that has been forming for me recently: that it's a replacement for Java.
Java was the "safe" choice. Many engineers knew it, bad engineers couldn't be _that_ bad in it and would manage to produce something. They could be handed an architecture spec and implement it word for word without doing much translation to the Java language. But I don't think good engineers could "shine" with it.
It normalised all engineers around some median of productivity, and made projects predictable.
I believe Go is doing the same thing in some ways. It's not supposed to allow good engineers to find great ways of expressing complex architectures (however necessary that may be), it's designed to let engineers of all skill levels hit all problems with a Go shaped hammer and get something that looks inoffensive, boring and predictable.
This is very unsurprising given that it came out of a large engineering organisation. Google have many engineers of a wide range of skill levels, and normalising engineering is more important than doing it better.
I realise Google probably has a higher than average engineering ability, maybe far higher than average, but they still have a wide range, and I also include in this engineers who might be fantastic frontend engineers, who have to get stuck into some system software, for example.
Go code is very opinionated but still flexible; it seems to be C but with superpowers and great tooling out of the box. A whole bunch of complex software is written and in production using go; I don’t think it limits the productive/clever engineers from doing what they would do with Java.
>> A whole bunch of complex software is written and in production using go
And also in Java, javascript, COBOL, FORTRAN, R, and many other languages that are associated in software developer circles with all sorts of developer-unfriendly nastiness. The truth is that most programming languages we use today are a bit shit, because they have to make hard choices to overcome limitations of hardware and, well, human brains. In my view, the real reason programmers are highly valued is because we can endure the hardship of using those primitive tools and make them shine despite their inherent limitations, not because we are so brilliant we can't keep ourselves from programming the equivalent of the Capella Sistina in the process of gluing together a bunch of library interfaces.
Otherwise- well, you can write complex software in Brainfuck, if you really put your mind to it. Hell, a whole bunch of very complex software is written in assembly languages. So what? That's a testament to the ability of humans to express themselves in very restricted formalisms, not to the quality of those formalisms.
It is unusual to see R in a list of languages characterized as developer-unfriendly, at least in this century. Best-of-breed metaprogramming and code introspection tools, an excellent debugger and profiler, a wide ecosystem of accessible books written by core language developers, an excellent (although I agree difficult to submit to) central package repository, high quality data serialization options, built-in report-making tools, a huge and high quality standard library (even more so if you consider the tidyverse a standard library), easy to use and efficient ways to dispatch to other languages, etc.
There are a bunch of weird language quirks, most stemming from its legacy as a reimplementation of S in the early days, but I would be surprised if it was viewed as developer-unfriendly.
I use R all the time, and I have come to love it, but I feel it's Stockholm syndrome. There are many complaints about it by others more experienced at the language than me (and some who are less so) that you can find on the internet.
My personal beef is with the inconsistencies in the syntax, programming conventions and, particularly, data structures, all of which were designed in a haphazard manner over many years, by many different parties working without any coherent plan towards integration and unification of language features. Like you say, it's basically a more "modern" layer bolted on top of S, or rather many, many such layers, some distinctly less modern than others. The result is that it's very difficult to figure out the language as a whole. For every specific thing you want to do there's very often a special way to do it, and you just have to know it; you can't intuit it from first principles like in other languages.
A couple of months ago, a medical student who wanted to get into data science asked me which language to use, R or Python. I recommended Python, although I myself prefer to work in R. I felt that it would be way easier for him to get started and get a feel for the job in Python, whereas in R he'd have to constantly battle the language and spend far too many hours browsing SO for tips on how to do very specific things, rather than gaining an understanding of the language as a whole, because it's almost impossible to understand R as a coherent whole; because it's not a coherent whole.
So I'm not, like anti-R or anything, quite the contrary. But I do believe it's not developer friendly, not by a long shot.
Following up after the thread has moved on -- but I think if you spend some time in the "tidyverse" and read the Advanced R / R4DS books, you'll find that the tidyverse extends the language's core competencies in a way that fixes a lot of the inconsistencies, and it is possible for people to basically never have to interact with base R if they don't want to. We train our incoming PhDs in R focused on the tidyverse and they generally get the hang of it pretty quickly.
The problem you're highlighting was pretty bad 5-8 years ago where core R developers were pushing three different object models. These days pretty much everyone has agreed to use S3.
The bigger challenge in data science is that statisticians are generally shitty programmers and so many of the packages are very much v0.1 releases designed to accompany a journal article and never iterated on. The reverse tends to be true in Python -- the notorious thing with scikit-learn defaulting to penalized logistic regression last I checked is insane.
If none of this is news to you, then agree to sort-of disagree for sure, but if you haven't checked out R4DS or spent much time with modern packages you'll see that you can pretty safely leave behind a lot of the stuff you used to hate.
I re-wrote the analysis for a paper I wrote I think 4 years ago, and I was surprised at how little code didn't change in that time.
Well, my complaint is that you need something like the tidyverse and a ton of books and blog posts -and Stack Overflow threads- to figure out how to do basic things with R. When I first started learning R, my first port of call would be the CRAN package documentation, which was often indecipherable (e.g. it often refers to concepts in S).
What I mean is, it takes a lot of support before you can feel comfortable and be fluent in R, rather more so than in the languages used in the software dev industry, which are much more considerate of programmers' strengths and, of course, weaknesses.
I'll agree to sort-of disagree. Thanks for your insights and merry christmas :)
I'm a developer of 20+ years and I found R to be unfathomable. By that I mean so arbitrary and inconsistent that I quickly gave up on thinking it was worth my time.
My experience with Go is very limited, but I tend to see interesting abstractions built with other languages. Ruby, Haskell, Swift, JS, even some Python – I see engineers building DSLs and architectures that strongly fit particular use-cases. In Go, everything just tends to be functions and structs, and the lack of expressiveness make it difficult to do anything with nicer ergonomics.
I would agree that it is C with superpowers in some respects though.
> A whole bunch of complex software is written and in production using go
This is true, but my point isn't that you can't write the software, just that it might have been better code in another language (easier to understand, reason about, more reliable, more maintainable?). The biggest benefit along these lines that Go seems to have is that lots of other software is written in it, which comes back to a similar argument as Java.
Different problem domains have different cost/benefit analyses. I write in several languages daily depending on the problem I’m trying to solve. Go may not be right for most problems. But it works really well in the right situations.
If you need something performant, efficient, easy to deploy, and easy and fast to build, Go checks those boxes. Deploying complex software written in Ruby, Python, Node, or Java can be a major pain, with thousands of files and dependency hell, or build-in-place which has its own set of risks. Versus deploying a single binary file, that runs faster, in less memory, more stably, and takes orders of magnitude less time to build? The other languages will need to be a lot better than they currently are to win that fight.
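To make the single-binary point concrete, here's a toy sketch of my own (not anything from this thread): a complete HTTP service where `go build` produces one self-contained file you can copy to a box and run.

package main

import (
    "fmt"
    "log"
    "net/http"
)

func main() {
    // `go build` turns this into a single binary with the runtime and all
    // dependencies compiled in; there is nothing else to install on the host.
    http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintln(w, "ok")
    })
    log.Fatal(http.ListenAndServe(":8080", nil))
}

Cross-compiling for the target machine is just environment variables away (e.g. GOOS=linux GOARCH=amd64 go build), which is a big part of why deployment feels so painless.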
> Ruby, Haskell, Swift, JS, even some Python – I see engineers building DSLs and architectures that strongly fit particular use-cases
> easier to understand, reason about, more reliable, more maintainable
I find DSLs obfuscating, buggy, and unmaintainable. I know we're going to agree to disagree here, as I'm a fan of the Java, Go, C, or even Python side of things, I find most DSLs are a nightmare. Maybe we work on vastly different software.
Most DSLs, imho, are usually better off removed. It's part of a class of code smells I like to call: Sound concepts applied too liberally and out of context.
It's the same sin as the "Architect Astronauts" of Java Enterprise FizzBuzz Design Pattern overload. Python and monkey patching abuse. JavaScript and framework explosion.
C++ had operater overload abuse.
Every language seems to go through it's awkward teen years and finds a good balance eventually, so I don't mean to rag on DSLs.
The problem is: DSLs usually work well when the problems are well defined, common, and relatively static. SQL comes to mind. BLAS/numpy like programs.
If the DSL models business or application ideas, those will grow and shift. If I find myself fixing bugs in the compilers, I need a different language. That's what DSLs end up as: half-designed specs I need to fix.
I'm sorry, did I misunderstand your point? I'm not saying these things to troll, this is my honest opinion.
I'll admit, Haskell is probably a poor choice for other reasons, but in many ways I think Rust and Swift are better languages than Go for the things Go is commonly used for.
You're okay, it's 5 am and I've been up all night. I agree with the thoughts that Rob Pike can be opinionated, and that can be good and bad. There was a website, cat-v, I used to go to, with its "harmful stuff" software list. The site really likes Plan 9, Go, and Rob Pike.
I feel it gives some insight, but I think it's bad to be too opinionated, as that cat-v site can be.
I would argue that Swift, a compiled, strongly typed language, is more a C++ replacement than it is comparable to Python or Ruby. The only "higher level" aspect would be largely hands-off memory management, but go is in the same boat there.
It was an apple thing. It's now an open-source, community driven language. It's very usable on Linux for server-side applications, and with Swift for Tensorflow it's going to become increasingly relevant in data science and machine learning.
edit: Swift is no more an "apple thing" than go is a "google thing"
I.e. Swift is an apple thing, go is a google thing... and c# is a microsoft thing, and java is an oracle thing. I am happy to have never been obliged to use any of them, and I expect my odds of needing to use any of them in the future are strictly decreasing.
Rust might not really be a mozilla thing anymore. But is it entirely a coincidence that Firefox has become a lot crashier since they started shoehorning rust code into it?
Firefox has largely been on this weird trend downwards since 4.x for me, at least. I don’t think it is necessarily the introduction of rust, but perhaps that’s exacerbated the issue.
Huh, I was under this mistaken impression too. I know many, many, companies and projects using go as a foundation, but I don't think I've heard of any using swift. Any I should read up on?
I've seen some amazing abstractions created in Rust and Haskell. Some of the Haskell ORMs are really nice abstractions over SQL for example, and there are many examples of Rust macros doing amazing things (I believe async/await was implemented as a macro?).
Obviously you can write an ORM in Go, and you can write async code or code that will do whatever Rust macros can do, but the solutions will be less elegant, less re-usable, less useful.
There's an argument to be made that this is all "clever" code, and that that is bad for maintainability. I'd agree they can be misused, but when used correctly they can level up how code is written in a very substantial way. Understanding the difference and when they are appropriate is what I really mean good engineering here.
I'm a huge proponent of boring code, but for me it's an orthogonal concept to having a lot of tools for abstractions in a language.
What you actually want is to be able to write code which is as close to the problem domain as possible. Overly "clever" code is one way to get that wrong, but so is being too low-level or needing a lot of boilerplate. Having the right level of abstraction lets you write self-documenting code whose function and intent is made clear just by reading it.
This must've been quite an old article, because lo and behold, nearly every single complaint on that page is being addressed with gusto by the Go community! Proposals on generics and error handling are well underway, and his complaints regarding GOPATH and go get have been solved with Go modules.
EDIT: As I expected, the linked page is written on Feb 2018[1], things indeed have changed a lot since then.
Sorry, but the language has existed for how long? I remember hearing the same complaints about generics and error handling years ago - and they got answers back then with about the same level of arrogant dismissal as the quote about syntax highlighting in the article.
It's good if the community is at least starting to listen now, but I don't think that is sufficient evidence that the language is good.
There is no discussion if the language is good, it's already quite convincingly proven it's good. The question was if generics would make the language even better. Some people think that's a no brainer, others are not so sure, and the latter group included the core maintainers.
People use the language because it is good, I think at this point one can state that as objective fact. Once people use it, they'll complain when something is not as nice as it could be, like the error handling, and like the generics.
I wonder if in 2000 people felt the way about Java 1.3 people feel about Go now. Java had a similar rise to prevalence, and it also lacked some features we now consider crucial, such as generics. I personally would never want to go back to Java 1.3. I feel I'd much rather code in Go instead, even if it lacks generics. But I still think it's worth considering if Go would turn out to become Java, if it now embraces generics.
The language is crap. Just because people are using it does not mean it is good. Take a look at javascript. Same deal. You did not state any "fact"s why Go is good. Maybe it is good compared to Cobol or Fortran, but not compared to pretty much everything else.
Oh come on. People are forced to use Javascript, I don't think it would even exist today if it wasn't the only crossbrowser language.
Almost no one is forced to use Go, and yet people flock to it. I don't feel I need to state the things that make Go good because it's already been done a thousand times, and everyone in this thread already knows why it is good.
Have you ever built something in Go? I'm almost as productive in Go as I am in Ruby, and that was within a week of learning it. It's crazy good for a language that lacks so much expression.
People are not forced to use Javascript, since there are a dozen great LangName-to-JS toolchains.
People are not forced to use PHP or other badly designed languages as well.
I wouldn't call Go a bad language, due to the subjectivity of the term; I would rather call it an ad-hoc language, which is basically a DSL sufficient for some domain, but not very suitable for general engineering.
Go is a compiled PHP; its rather weak type system and oversimplified nature are suitable and sufficient for webdev, for economic reasons. Building something complex in Go is possible but painful; Go doesn't help here but rather impedes, making you write your own dynamic type system and other facilities to compensate for the language's deficiencies.
To be fair, I feel like go's simplicity makes it easy to get small things up and running, but the pain comes when your project starts getting large. That's when additional abstractions are missed.
It's interesting, I feel like it's the opposite and Go excels at large projects. Compared to other languages, Go doesn't let you use as many abstractions, so writing code takes longer, but it is much more straightforward to read.
I work at a >1000 person company that codes mostly in Scala, and some in Go, and would prefer to read unknown Go code any day. With the Scala I run into things like implicit parameters and 7 layers of inheritance, but the Go code is straightforward.
Go is somewhat of a lowest common denominator language. You could outperform it with a small team of strong developers, but once your project gets to a certain size, it's unlikely all of the developers will be strong.
I think it's philosophically interesting. The Go approach to needing a more powerful language is to assume that you actually want to write a custom code generator for your project's needs. And it's not pretty to do so (the "go generate" thing makes quite a wart in its attempt to ease this process; rough example below), but it works, and you can move on with your project. Most newer languages get tied in knots trying to generalize that same task and have a kind of configurability meltdown where everyone does it differently, so while it can be shared and reused in theory, none of it is actually compatible. I think it's akin to criticism of OOP in that in a lot of cases, "you wanted a banana but you got a banana plantation, three tractors, and twenty employees".
And Go is just a little bit closer to the sweet spot than C was, since a greater proportion of Go code seems to successfully avoid extensive preprocessing.
On the other hand, it might be a bit too restrictive for prototyping to have these power limits. I definitely have an easier time feeling my way through an unknown data modelling problem if I can start slinging things together dynamically.
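(Rough example of the go generate wart mentioned above, assuming the stringer tool from golang.org/x/tools/cmd/stringer is installed; the directive is just a comment that `go generate ./...` scans for and runs:)

package painkiller

//go:generate stringer -type=Pill

// Pill is a toy enum; running `go generate ./...` invokes stringer, which
// writes a pill_string.go file giving Pill a String() method so values
// print by name instead of by number.
type Pill int

const (
    Placebo Pill = iota
    Aspirin
    Ibuprofen
)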
That is an interesting point, and even if that is true, it still means adding generics would be a detriment.
I do wonder if that's an inherent property of languages with a more expressive type system. Many would blame it on OOP, but I think Haskell also suffers from it. Haskell that is written plainly, without heavy use of typeclasses, is much more readable than Haskell that leans on them, at least to me.
It might be a unique property to Go, which other language is strongly typed yet with such a basic typesystem, lacking inheritance or even generics?
I will echo this too. I think Go has helped usher sensible concepts like BDD and microservices to the forefront, along with the do-one-thing-and-do-it-well approach. Instead of monolithic applications you have a lot of digestible standalone services comprising the overall application, and Go excels at this.
> I don't think Go should be used for larger projects. Apparently though the community disagrees because they're really adding generics.
Go is actually specifically built for large projects: the project was started because C++ has severe problems when used at the scale of Google.
The basic design of Go was intended to engineer out the issues that Google developers saw when using C++ at scale for long periods, such as circular references and problems with combining exceptions with concurrency.
What makes Go distinctive is that the team concluded that complexity inherently does not scale, and designed for simplicity. Which enables Go to scale down in a way, even though it was originally intended as a "systems language".
>> Almost no one is forced to use Go, and yet people flock to it.
Well, at some point people "flocked" to javascript, and they weren't forced to. So it's a good language, then? Or sub javascript for java, or your favourite bashable language.
I will assume you're referring to Node. The browser explains why many coders know Javascript well; the fact that many coders know Javascript well explains the popularity of Node --without our having to resort to the hypothesis that Javascript is a good language.
But there wasn't anything stopping early browsers from using a different scripting language. They stuck with javascript. Was that because it was a good language?
And what about Java, for example? And why don't people "flock" to LISP, say, or Haskell? Are those "bad" languages, now? And what about software, like windows- is windows widely used because it's a great operating system?
Adoption is a rotten bad measure of quality. Not just in programming languages- in everything.
agreed. after i learned common lisp and smalltalk, i realized that there has hardly been any innovation in programming since the first languages half a century ago.
Plot twist: people never "flocked" to Javascript. People use Node for example because the company they are working for thought that it would be nice, efficient and less costly to have one language for the whole platform. They mostly learned their lesson since then and moved on to real languages which can be used on the backend.
> There is no discussion if the language is good, it's already quite convincingly proven it's good.
I'm sorry, you can't prove a language is good or bad. You can prove it's popular, widely discussed, used by several individuals and organizations, is actively developed, but "good" and "bad" are inherently subjective terms and there's no way to prove it unless there happens to be a universally accepted definition of what constitutes a good or bad programming language.
> a universally accepted definition of what constitutes a good or bad programming language.
Implicitly many people, rightfully in my opinion, conflate the utility of a programming language with that language being good. If many people are able to solve real problems with a language, that language is good, the end.
Good does not equal perfect, there is no perfect in the real world of engineering.
On the other hand, many people conflate the popularity of a programming language with it being backed by a big corporation.
There actually are a few measurable metrics of programming languages, but even when we discuss such a simple factor as execution speed that you'd expect to be universally accepted as positive, there will be people arguing that developer time is more expensive and savings made here are more important than the gains on execution speed. There is simply no way two programmers are going to agree in classifying a number of languages as good or bad.
So in the world you're describing, pretty much all languages are "good," and none are "perfect." Not very interesting so far. What are the adjectives that fall in the middle, the ones that some languages deserve and others don't?
One might also wonder how long it took Java to respond to user complaints, and I would suspect Go core team has been much faster addressing these issues and responding to developers.
Are you really arguing that the language is not “good”? Sure, there are things not to like about it, but the widespread adoption of Go as a systems programming language is evidence alone that it is a “good” language.
Good is such an ambiguous term that I’m not sure it’s even worth using. For one person, only esoteric languages like Haskell are “good”, for another person, Python or Javascript are “good” enough.
Judging by most responses from the Go team and/or the community around the language, it would seem that none of the issues listed in the article are actually valid. I.e. this is the _correct_ way to handle errors, sum types bring nothing, etc.
Of course, things have changed since, but what does that say about the previously strongly held opinions of the team?
> Judging by most responses from the Go team and/or the community around the language, it would seem that none of the issues listed in the article are actually valid
Generics have been considered for years by the core maintainers. See Ian Lance Taylor's proposals. So you're either lying or not informed enough to be so assertive.
> This must've been quite an old article, because lo and behold, nearly every single complaint on that page is being addressed with gusto by the Go community!
An interesting tidbit I learnt from a former colleague:
He wrote a small but non-trivial project in Go, and ended up re-writing in Python. Over time, he and his team measured both codebases and found that they had roughly the same number of bugs per line of code, but the Python codebase was 1/3rd of the lines of code, therefore had far fewer bugs.
I wouldn't attribute this to inexperience either, this is one of the best engineers I've worked with.
So during the rewrite, he had the Go codebase to look back on, and the Python version still had as many bugs per line? That sounds like an argument for Go, if anything.
This is true; I don't know the specifics, but I trust that my former colleague wouldn't make such a claim based on flawed data. My guess is these bugs were "code" bugs rather than "product" bugs, if that distinction makes sense.
I have the opposite experience. I've rewritten several services from Python to Go; while Go ends up being more lines of code, the type checker actually found far more bugs. When I'm ready to deploy the new service, I'll route a copy of the production traffic to the new service and observe differences in behavior, and the Go version (despite being very young) typically has ~10 percent of the bugs (_especially_ in error paths). Further, the performance usually improves by one or two orders of magnitude, depending on whether or not the old Python code was async (async Python is even buggier than sync Python in our experience) and how CPU-intensive the program was. Lastly, I iterate much more quickly in Go despite having more experience in Python, entirely because the type checker catches silly errors that I would otherwise have to find and fix in a test loop.
3x more bugs that you know about. I wouldn't be surprised if Go's static type system just found more bugs, and the Python program had just as many bugs but you didn't happen to hit them at runtime.
I got the impression these were bugs that made it into a tracker of some sort, and I don’t think most engineers log “bugs” they encounter during local development before some notion of a release (be it to staging, the world, the master branch, whatever).
The type system has to support the language features; there are tons of races, deadlocks, etc. due to Go language features that have no type system support to express them.
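A toy sketch of what I mean (my own example): the compiler accepts this happily, and only the race detector or a runtime crash tells you it's wrong.

package main

import "sync"

func main() {
    counts := map[string]int{}
    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            // concurrent map write: nothing in the types of map or goroutine
            // warns about this; you find out via `go run -race` or a
            // "concurrent map writes" crash in production
            counts["hits"]++
        }()
    }
    wg.Wait()
}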
It shouldn't be unexpected, it's very natural. I guess it's a bit hard to grasp why, in which case there are bug studies showing same effects. Essentially higher level of reasoning (think reasoning about fewer things) and producing less code for the same functionality creates fewer possibilities for mistakes.
From what I've seen (though I can't find any sources to back it up, apply the appropriate amount of salt), having a number of bugs or defects proportional to the lines of code constitutes one of the robust findings in that pretty muddy area of the world (Does OO increase productivity? Static typing superior? .. at least we know, I think, that bugs run in proportion to lines of code, regardless of language)
I can get past most of Go's weird idiosyncrasies... Except for the odious error handling. Reading Go code makes my eye twitch, and the biggest reason is its error handling.
And sadly I'm now stuck with it, and not even the changes for Go 2 are encouraging because, even though its new error handling is saner, it is still needlessly different for the sake of it.
I quite like Go's error handling. It adds visual structure and makes it apparent which things can fail and how they're handled.
It's a welcome relief from the likes of Python where you only see (and consequently think about) the happy path. This is definitely a big driver of our 500s in our production Python services. While there are many who are genuinely aggrieved by the number of keystrokes, I think a lot of people are annoyed at error handling in Go simply because you have to think about error handling.
>simply because you have to think about error handling.
until you write
x, _ := some_code ()
Besides, Go errors are used in pair with the return value, so you have to check 4 possible combinations of value/nil and error/nil instead of 2 due to the absence of sum types.
Oh, and no generics, so your errors are either strings or you have to check the interface at runtime, and you can't statically check which exact errors your code could return.
Oh, and no monadic chains/exception mechanism, so feel free to write
x, err := ...
if err != nil {
    return err
}
y, err = ...
if err != nil {
    return err
}
instead of
do x <- fun1 ()
   y <- fun2 ()
   ...
and then just check the proper subset of possible errors, carefully inferred by the type system.
There are a bunch of different possible approaches to error handling: exceptions, monads, Erlang-style failure as a possible case. Go crowd had just invented the worst and least error-prone one.
It's also surprisingly easy to mess up error returning. I've accidentally done things like:
a, err := run()
if err != nil {
    b, e2 := cleanup()
    if e2 != nil {
        return err // Whoops
    }
    return e2 // Double whoops
}
Since you generally test the happy path more diligently, subtleties like these tend to show up later than desirable (compile time would be the most desirable, of course).
The := semantics is partly to blame here, as well as Go's lack of laziness around unused values. Go is curiously strict about unused variables, which never hurt anyone, as opposed to unused values, which absolutely can hurt. For example:
a, err := foo() // This err is never used
b, err := bar()
This is much worse! And the compiler doesn't complain. Fortunately there's a linter called ineffassign that tracks ineffective assignments, and it's part of gometalinter and golangci-lint. But not "go vet". And there's no linter that can detect the first example that I gave.
Shadowing is a pet peeve of mine. Go is one of the strictest, most opinionated languages in mainstream use, but it's amazingly lax and unopinionated in certain areas. "go vet" can check for shadowing, but doesn't do so by default, and of course it doesn't test the subtype of shadowing that := causes. Shadowing is usually a bad idea, but Go encourages it through its := assignment semantics.
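The classic trap looks something like this (contrived example of mine); it's the kind of thing the shadow check is meant to catch, but again, not by default:

package main

import (
    "fmt"
    "strconv"
)

func parse(s string) (int, error) {
    n := 0
    if s != "" {
        // := declares a brand-new n and err scoped to this block,
        // silently shadowing the n declared above
        n, err := strconv.Atoi(s)
        if err != nil {
            return 0, err
        }
        fmt.Println("parsed:", n)
    }
    // the outer n was never assigned, so this happily returns 0 for "42"
    return n, nil
}

func main() {
    fmt.Println(parse("42")) // prints "parsed: 42", then "0 <nil>"
}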
Go’s lax attitude about shadowing jumped right out at me, too. Coming to Go recently from TypeScript, where the combination of the compiler and tslint will aggressively bark at me, was jarring.
Could you point me towards any crash courses on using the Go linters you mentioned to increase the safety of my code? I’ll look them up, too, but if you have any resources handy or think there’s work for me beyond RTFM, I’d appreciate it.
First, I recommend golangci-lint [1], just because it's much more performant (and designed to be from the start) than gometalinter. It uses much of the same code.
Some of the more useful linters, like deadcode, can be a little too slow to run in an editor like VSCode. We run the full linter list as part of our CI tests, and then in development we run it with "--fast", which runs only the fastest ones. I've disabled megacheck, which didn't work with Go modules the last time I tried it.
Among the prominent ones I use, all enabled by fast mode, are ineffassign (as mentioned), deadcode (detects unused code), govet (runs "go vet"), errcheck (checking for calls where you ignore the error result), structcheck (finds unused struct fields), varcheck (finds unused variables and constants), typecheck (this does the same thing as "go build", basically), and unconvert (detects unnecessary type conversions). The slower ones that are worth turning on are staticcheck (performs lots of static checks) and unused (checks for unused variables etc., but is slower than varcheck and structcheck, apparently).
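As a small taste of what errcheck complains about (toy snippet of mine, not from the linter docs):

package main

import (
    "encoding/json"
    "log"
)

type Config struct {
    Port int `json:"port"`
}

func main() {
    data := []byte(`{"port": 8080}`)
    var cfg Config

    // errcheck flags this call: the error return is silently dropped
    json.Unmarshal(data, &cfg)

    // ...and is satisfied by the checked version
    if err := json.Unmarshal(data, &cfg); err != nil {
        log.Fatalf("invalid config: %v", err)
    }
    log.Printf("listening on :%d", cfg.Port)
}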
Thank you! This is great. For all of Go’s warts, it lived up to every single hope and expectation I had for it in this last project. I’m looking forward to improving it with these suggestions.
Yeah, you can definitely opt out of it and there are even cases where the compiler doesn't guarantee that you're handling the errors, but assuming you're making a good faith effort to adhere to conventions, then you're thinking about your errors.
> Besides, Go errors are used in pair with the return value, so you have to check 4 possible combinations of value/nil and error/nil instead of 2 due to the absence of sum types.
No, you don't. If the error is not nil, there's an error. Sum types would be nice, but they wouldn't reduce the number of checks, they'd just formalize the idiom.
> Oh, and no monadic chains/exception mechanism, so feel free to write
I'm fine with this. Monads are hard for people to understand, and I'm not playing code golf...
> Go crowd had just invented the worst and least error-prone one.
I don't think it's the least error-prone, but it's not bad.
And if the value is also not nil? Or if both value and error are nil?
>Monads are hard for people to understand
How are they hard if they are nothing but a mere formalization of a computation step?
>I don't think it's the least error-prone
Sure, typeless errors which you have to check at runtime, and which are so tedious that people usually either ignore them or write something like that [1], are the least error-prone error handling mechanism possible (beyond having no errors at all)
1. If there is an error, it doesn’t matter what the value is.
2. Monads are famously difficult to wrap one’s head around.
3. Either you’re a big fan of Go’s error handling, or you’re confusing most and least. I’m guessing it’s the latter, in which case your claims don’t match my experience.
>If there is an error, it doesn’t matter what the value is.
Who said that? In Go the type system lets me return both an error and a value, and there is no static guarantee that value is invalid if error is not nil.
>Monads are famously difficult to wrap one’s head around.
What's exactly difficult about them? If you do understand this code
x := proc1 (arg);
y := proc2 (x);
return y;
what's confusing about this
do x <- proc1 (arg)
   y <- proc2 (x)
   return y
? A monad is nothing but a pattern (pretty simple in comparison to the majority of GoF patterns: it has only one function, for binding two computations of a given semantics together) for describing an operational semantics that is simply baked into many imperative languages, but which a programmer could instead describe according to his or her needs (i.e. you don't need to bake non-deterministic, async or parallel computations into a language; you can describe their semantics with a monad and use it freely).
Its closest counterpart is the iterator pattern, which is a generalization of the iteration semantics.
It really sounds crazy that it's hard for a programmer to wrap his or her head around operational semantics.
> there is no static guarantee that value is invalid if error is not nil
There doesn’t need to be. An error was returned computing the return value, so it doesn’t make sense to use the value. And if there is something useful to be returned, then you’re dealing with a product type anyway.
RE monads, I don’t know why they’re hard to understand, but they are notorious. Probably because they’re more abstract than is helpful in the majority of programs, and they’re often just used to play code golf, removing bits of boilerplate that were never really harmful to begin with (like error handling).
I actually like monads and excessive abstraction in general, but that’s because I appreciate a sort of mathematical elegance beyond what is helpful for shipping a product. When I program to feel clever, I use languages like Haskell. When I want to get things done, I use languages like Go. It doesn’t mean Go is better than Haskell or that monads are bad in some absolute sense; only that in general they cost more than they contribute when the objective is shipping a product. Similarly, Go is totally lacking when I want to build something very abstract.
>And if there is something useful to be returned, then you’re dealing with a product type anyway.
And how do I know if I'm dealing with the product or sum type if the former is absent and emulated by the latter?
>I don’t know why they’re hard to understand, but they are notorious.
Citation needed.
>more abstract than is helpful in the majority of programs
They are neither more abstract nor less useful than the iterator pattern. I.e. you don't need either if your programs are trivial or your language has baked in constructs for the semantics they ought to provide, though both are unavoidable when you need to implement iteration or operational semantics in general for the case not foreseen by the language creators.
It's a shame that both are neglected in Go, and Go programmers have to write these `if (err != nil)` snippets everywhere, making dozens of errors due to the lack of DRY, or iterate over a tree using loops.
>only that in general they cost more than they contribute when the objective is shipping a product.
Citation needed. I'd need evidence of clear, generic, literate and domain-driven code being more expensive than a typical Go spaghetti of loops and if (err != nil). I could believe that it's cheaper because Go developers are cheaper, but the technical debt couldn't be smaller given the verbosity, code repetition and lack of expressiveness for separating the domain logic from low-level programming concerns like iteration and error checking.
It is just based on experience and common sense (although I’m sure you can come up with some awkward justification for why Haskell has only delivered a fraction of the successful products Go has done despite having been around so much longer and despite being The One True Programming Language). I’m not bothered about whether you believe me or not to be honest. Frankly, I’ve lost interest in the conversation. You seem very defensive and threatened by the success of a programming language, and that’s just the saddest and most uninteresting thing I can think of.
What are you talking about? Go is a niche language [1] just like Haskell, and the amount of successful projects in it could be counted on one hand (aka docker and kubernetes).
Popular languages are Python, Java, Javascript, and they all use either monads (java, C#) or exceptions (Java, C#, Python, Js).
Also Java, C# as well as the new languages like scala, rust or swift are all influenced by haskell or ML in some way (option/result types, variants), while `if err != nil` is indeed a very obscure and opinionated way of signaling error, which exists in Go and in some very legacy langs only. Java wants to be more like ML than like Go [2] [3].
> It's a welcome relief from the likes of Python where you only see (and consequently think about) the happy path. This is definitely a big driver of our 500s in our production Python services. While there are many who are genuinely aggrieved by the number of keystrokes, I think a lot of people are annoyed at error handling in Go simply because you have to think about error handling.
Okay, so let's say you're using Python. You get some 500s in production, but if you've written things in a clean fashion, they provide clear error messages and you can fix them quickly. A 500 is error handling--not ideal, but it's also probably better than the "Pretend this error didn't happen" type error handling that programmers write when they are forced to handle the 80% of possible errors that will never occur.
And if you need a high-reliability system where 500 errors aren't acceptable, then Go is the wrong language, period. Its static type system is a joke and it doesn't provide any real formal verification beyond that.
> it's also probably better than the "Pretend this error didn't happen" type error handling that programmers write when they are forced to handle the 80% of possible errors that will never occur.
It’s interesting that Go programs don’t have either of these problems. Not necessarily because of technical controls, but because of a strong culture of “handle your errors” that adapts programmers to actually think about how their program should behave when things don’t go exactly as planned.
> And if you need a high-reliability system where 500 errors aren't acceptable, then Go is the wrong language, period. Its static type system is a joke and it doesn't provide any real formal verification beyond that.
This is a false dichotomy. You don’t have to choose between 500ing on a client error or never-ever throwing 500s. You could use 500s as they were intended—to signal server issues. And while it’s true that Go’s type system doesn’t cover ~5% of cases (and that’s generous), surely it’s better than Python’s which covers zero? (Yes, I know about Mypy and no it’s not yet suitable for a production system).
These are the author's opinions, and I respect and understand where he's coming from. People should be able to share their opinions with the world. That's great. But arguing about syntax highlighting seems silly to me. Arguing about leadership based on that syntax highlighting response is fine, though.
Below are mine. Let's look at Go from 2 angles:
- Go the language
- Go the leadership
Go, the language, is great to me. In the article, the OP talks about `go get`, GOPATH, and error handling. To me they are fine. I love `go get` because it's so easy to install binaries that way and get updates too. It falls short when there are conflicts, because you have a single global tree that things depend on. This is obviously not gonna work in the long run, and in fact they always said they use it a different way at Google and expected the community to standardise and find its way through it, which eventually led to where we are now with Go modules.
But let's take a step back. If everything worked great, would you like `go get`? It just feels like `homebrew`, and a game changer to me. What if we could abstract everything out and make `go get` work as well as it does without polluting the global tree? Technologies evolve. Look at how JavaScript evolved and became the great language and platform it is right now. Give Go time.
Same with error handling. Exceptions are just as annoying: you either handle them, swallow them, or let them bubble up. I prefer the Go style. Reading Go code forces me to think about the error right at that point.
I would love Go to have something like Result or Maybe :( instead of returning multiple values, which is just a convention right now.
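Just to sketch what I mean, with some form of generics a Result could look roughly like this (purely hypothetical on my part, not from any actual proposal):

package result

// A hypothetical Result type; nothing like this exists in the standard library.
type Result[T any] struct {
    value T
    err   error
}

func Ok[T any](v T) Result[T]        { return Result[T]{value: v} }
func Err[T any](err error) Result[T] { return Result[T]{err: err} }

// Unwrap hands back the conventional (value, error) pair.
func (r Result[T]) Unwrap() (T, error) { return r.value, r.err }

// Map runs the next step only if the previous one succeeded, which is the
// kind of chaining people argue about elsewhere in this thread.
func Map[T, U any](r Result[T], f func(T) U) Result[U] {
    if r.err != nil {
        return Err[U](r.err)
    }
    return Ok(f(r.value))
}

Without language support, of course, every call site still ends up doing the err != nil dance by convention.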
Go brings a lot of tooling and interesting ideas: gofmt, gofix, etc...
However, Go the leadership isn't up to Go the language. I think they fell into the trap of having made a language which many big projects use (InfluxDB, k8s, Docker...), and they get defensive when people talk about Go or criticize it. Look at how friendly José of Elixir or Matz of Ruby are. I would love the Go leadership to reach that. I read another thread where someone said `Golang` and one of the Go core devs was just dismissive because they had written `Golang` instead of `Go`.
I think eventually the leadership will change. Go is open source after all. And we will see a day where Go is great in both its language and its leadership.
As has already been pointed out, most of these points ARE being addressed by the Go community, but it kinda reminded me why I look towards Go 2.0 with some dread. Most of the things he is asking for will make the language harder to read. Generics and exceptions (and not least this weird chimera they had come up with in the Go 2 proposal last time I checked) will undoubtedly create popular libraries which are almost impossible to understand for the uninitiated.
The main strength of Go is that almost anyone with basic programming skills can pick it up and be productive. The rigidity of the style and the relative simplicity mean that I can get a very fair idea of how almost any 3rd party library on Github works in an hour, and probably tweak the bits I may need to tweak. (The obvious exception to this is any library using a lot of reflection, which can fulfill anyone's deepest desires to make code unreadable.)
C++ (which is my other main work language) has all these advanced features (often added in haphazard fashion late in the language specs), and I love writing C++ code, but I dread looking at other people's code, because it can be almost impossible to figure out what is going on.
I am tempted, when I see all these complaints about Go and comparing them to languages with different feature sets, to tell people that in that case they should use those.
Of course, you might have the language dictated by an employer, but in that case there was probably a rationale behind choosing Go, and trying to change the language into being a clone of every other language will probably just result in someone else coming up with a new simple, statically typed compiled language, because I honestly believe the world needs that.
> ... often added in haphazard fashion late in the language specs
Agreeing with this point. I feel there is a vicious cycle in "pragmatic" language design that goes something like this:
1) Let's design language n. It will be like language n-1 in spirit but much more lightweight and without all the historical baggage!
2) In particular, let's leave out generics. They have been a never-ending source of bugs and confusion in n-1 development, with all the gotchas and awkward special cases you need to remember - and we can't really think of any compelling real-world use-cases for them anyway...
(... some years later ...)
3) Sigh... the community keeps pestering us about generics... so alright, we'll think about some way to add generics to the language so you guys finally stop complaining.
4) Behold: We made n', a language extension to n with generics. However, because all of our existing ecosystem is written in the old way, without generics, we had to prioritize backwards compatibility over all else during the design. That's why you'll have to live with counter-intuitive behavior A, awkward special case B and arbitrary-looking restriction C when using them. We're sorry, guys!
5) Let's design language n+1. It will be like language n in spirit but much more lightweight and without all the historical baggage! Especially generics!
How about for a change designing a language where generics are part of the initial design instead of just getting bolted on later?
> How about for a change designing a language where generics are part of the initial design instead of just getting bolted on later?
Muttering about Rust at this point.
Exactly. Generics being there from the get-go makes a difference to the language design. The most visible example being how values are returned from functions, when the result is tuple return for success case or error case. Bolting that on later is unlikely to work well.
Well, C# and TypeScript both seem to have gotten a whole lot more right (I'll cut TypeScript some extra slack here since it has to take extra care not to step on JavaScript's toes now or in the future.)
Generics came to C# in version 2.0. Although it was done well, there is no doubt that some things would have been different if there were generics in Version 1.
> The main strength of Go is that almost anyone with basic programming skills can pick it up and be productive.
This is passed around like gospel, but as a counterpoint my intern started this week at 30 hours a week. I taught him elixir from scratch, and today he finished writing a web server that serves GET and POST requests (which he didn't know) and stores states correctly. Also with low touch; he did much of the documentation reading himself and I was able to finish up about 1500 lines of code for a demo tomorrow.
I feel like pretty much all modern languages except maybe rust are reasonably easy to pick up.
> I taught him elixir from scratch, and today he finished writing a web server that serves GET and POST requests
This is an anecdotal sample size of 1. If something is "gospel" there is usually a real truth behind it that one should try to understand.
> I feel like pretty much all modern languages except maybe rust are reasonably easy to pick up
Simply not true for most of the industry. Perhaps in some corners of the silicon valley monoculture machine, but most of software engineering is comprised of a diverse range of people with diverse backgrounds and skill sets for which mastering a new programming language is time consuming.
I am a total beginner in Go but I feel like the enforced uniform formatting is actually a good thing because it makes everyone's code consistent and more readable.
It can be extremely difficult to read other people's code if they use a totally different style even if both of you are good at the same language.
And I feel there is little point in telling people not to use a language simply because you don't like it. Everyone is different; if you don't like it, just use something you like (maybe change jobs if you have to use it at your current job).
There is and will never be a language that EVERYONE likes.
Oh did I forget javascript? LOL...
C is equally bad, but it can be used anywhere -- it can be compiled into a dynamic lib and used in any language/framework, which is exactly what we did.
True, though that doesn't seem like a good reason to rewrite it in C. A PDF parser in C sounds like a security nightmare, and now you have to deal with C's archaic build systems.
You don't bind yourself to the inner politics and philosophy of something like Google except when you are Google. Anyone's good taste should have prevented them from using a language with such severe and ludicrous restrictions, but that filter obviously didn't work.
> syntax highlighting, or as I prefer to call it, spitzensparken blinkelichtzen
Google Translate reckons this means "peak parking", is that a correct translation? If so would somebody mind explaining what parking has to do with syntax highlighting? And if not, what is the correct translation?
ALLES TURISTEN UND NONTEKNISCHEN LOOKENPEEPERS!
DAS KOMPUTERMASCHINE IST NICHT FÜR DER GEFINGERPOKEN UND
MITTENGRABEN! ODERWISE IST EASY TO SCHNAPPEN DER
SPRINGENWERK, BLOWENFUSEN UND POPPENCORKEN MIT
SPITZENSPARKEN.
IST NICHT FÜR GEWERKEN BEI DUMMKOPFEN. DER RUBBERNECKEN
SIGHTSEEREN KEEPEN DAS COTTONPICKEN HÄNDER IN DAS POCKETS MUSS.
ZO RELAXEN UND WATSCHEN DER BLINKENLICHTEN.
"This silliness dates back to least as far as 1955 at IBM and had already gone international by the early 1960s, when it was reported at the University of London's ATLAS computing site."
Heh. Google translate actually recognises some of those, er, terms:
ATTENTION!
ALL TOURISTS AND NONTEKNISH LOOKENPEEPERS!
THE COMPUTER MACHINE IS NOT FOR THE GEFINGERPOKEN AND
CENTER DIG! ORWISE IS EASY TO SNAP THE
SPRINGENWERK, BLOWENFUSEN AND POPPENCORKEN WITH
TIP sparken.
IS NOT FOR TRADE IN STUPID HEADS. THE RUBBER TIPS
SIGHTSEARS KEEP THE COTTON PICK HANDS INTO THE POCKETS.
ZO RELAXING AND WATCHING THE FLASHING LIGHT.
I'm kind of amazed that it can go even that far. But mostly, I'm amazed at the fact that I can just pick up and read a completely made-up language, or a severely garbled real language, depending on your point of view.
We've got some way to go before machine translation can match human language abilities, eh.
It's actually semi-correct German with "germanified" American words. "DAS KOMPUTERMASCHINE IST NICHT FÜR DER ..." is grammatically correct. But gefingerpoken isn't a German word, obviously, though it's close to gefingert (fingered). Blinken means flash, and lichten means lights, so blinkenlicht at least means something, though blitzende Lichter would be more idiomatic.
This is mock German - Google Translate is probably taking "spitzens parken" as peak parking. But it is probably a reference to blinkenlights https://en.wikipedia.org/wiki/Blinkenlights
German native speaker here with lots of assumptions.
For the first word, "spitze" can be "peak", but also the pattern/cloth as seen when you google doily. In this context, that's probably what he means. That means the "s" belongs together with the "parken", making it the English "spark". "sparken" could be "making sparks"
For the second word "Blinklicht" ist "blinking light", "Blinklichtchen" would be the diminutive, it's probably written "Blinkelichtzen" to make it look ever more German.
I love how childish it is to think that his personal experience of learning arithmetic is worth anything regarding syntax highlighting. In some way it even undermines him.
Why did he use coloured rods as a child? Because it helps reduce cognitive load! To define easier things as childish and harder things as adult, is elitist and hostile to other people.
My expectations surrounding Go were born from my intense admiration for Python, which has grown so much as a language from its humble genesis at Google and proudly stood the test of time. Needless to say, I was beyond disappointed. Barring its concurrency module, which is some half-baked mixture of STM and async, I was astonished at the absurdity of the language. It reminds me of JavaScript, which as I recall was originally written as a sort of joke. So yeah, Go's like a bad joke.
Python did not have a genesis at Google. It existed before Google did.
JavaScript was written by Brendan Eich as the browser-side scripting language for the Netscape browser, originally based on Lisp I believe. It was not meant to be a joke.
I've learnt that programming languages, just like food and fashion, are a matter of taste to a high degree. You can point at awful smelling, too sugary meal and call it crap. Someone else spends years perfecting it.
One's taste however can change: food with high sugar leads to heart disease and diabetes, so I better stop eating that favorite dish of mine. You keep doing that for a while and taste of sugar (or salt, or fat) becomes disgusting.
Statements like "Go is crap, it has no generics" originate from one's taste in programming languages so there is no arguing with that. However, talks about what generics (or exceptions etc.) does to the health of the programs you write that span time and other contributors, are no longer subjective matter, but an argument worth having. This is, where I think, Go's strength comes in: its careful development process that puts programs' and programmers' health at the top of the priority list.
Health is not the same as comfort. Taste of a dish you've had for years is comforting. You're not going to feel comfortable when the ingredients you've come to expect are taken away. But if you can get past that, you may end up living longer and healthier. Similarly, your programs may become more maintainable and long lasting.
No one who is familiar with the philosophy and motivation behind Go can claim that what I've said above is false. You can say that the Go team is slow to make progress, but that's what a methodical and evidence-based approach is like: slow. You need time to experiment, learn and evolve. Your other favorite languages act like the general public who, on hearing in the news that "coffee is good for you", start drinking 10 cups a day. Go's approach is more along the scientific way.
>I've learnt that programming languages, just like food and fashion, are a matter of taste to a high degree. You can point at an awful-smelling, overly sugary meal and call it crap. Someone else spends years perfecting it.
Programming languages ... food ... fashion ... taste ...
Cholesterol is just the scapegoat for the horrifying effects of fructose over-consumption. Cholesterol is used by the body to try and patch up all of the damage fructose causes. Cholesterol is used almost literally everywhere in the body, from hormones to cell linings, etc.
I've been working with Go a bit and have also reviewed and seen tons of production code at various places. The one thing that stands out is how often the code is messy and unorganized. It seems more of a "write only" language to me. Java or Python are much more readable.
One thing I definitely dislike is the value/reference semantics, where arbitrarily nested object structures are passed by value by default, but maps and slices are somehow magical and are passed by reference. I don't even care which one it is, but either everything non-primitive should be by reference by default, or everything by value.
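To make that concrete, here is a minimal sketch (the type and function names are made up; strictly speaking a map or slice value is a small header containing a pointer, but the observable effect is the aliasing shown below):

package main

import "fmt"

type Config struct{ Retries int }

func mutate(c Config, m map[string]int, s []int) {
    c.Retries = 99    // mutates a local copy; the caller never sees this
    m["retries"] = 99 // mutates the caller's map
    s[0] = 99         // mutates the caller's backing array
}

func main() {
    c := Config{Retries: 3}
    m := map[string]int{"retries": 3}
    s := []int{3}
    mutate(c, m, s)
    fmt.Println(c.Retries, m["retries"], s[0]) // prints: 3 99 99
}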
This article lists a few issues, some valid, others not, but all exaggerated. There is no mention whatsoever of the many advantages Go brings, much less a comparison of the pros and cons. There are a lot of interesting critiques of Go; sadly, this isn't one of them.
Well yeah, I'd agree that he wouldn't. Most of the specialness comes at the application level: he gets nothing out of the specific blend of garbage collection, allocation options, typing, and green threading that sort of defines Go.
What do you mean by 'fast enough'? As compiled languages go, I think it's generally accepted that Go compiles relatively quickly, especially for large projects with lots of dependencies (as it has saner dependency rules than C/C++).
Go was influenced by Modula-2, but even more so by Oberon, which one of Go's designers, Robert Griesemer, worked on with Wirth at ETH Zürich. Oberon is in many ways a minimalist version of Modula-2.
What kind of application would you use this in? Just exiting on every error seems like you're punting the whole problem of error handling, and wrapping every fallible function in another function doesn't seem like the best thing for readability either.
Also, this doesn't really solve the problem cited in the article: it requires that all your functions return exactly two values with the error in the second place. What if you need to return two non-error values?
Command line tools: if any part of the command fails, you usually don't want to continue, unless it is a long-running batch process. In that case, I usually also write a logOnError() function to keep track of problematic work items.
Servers: during server initialization, there are many circumstances where it doesn't make sense to continue, such as template compilation failure or failing to open the socket.
Other: I use dieOnError anywhere it would otherwise make sense to use panic() - situations where continuing the program would mean running in an unknown state, like out-of-memory conditions.
I've built systems with complex error handling before, but my teams have always run into the problem of trying to reason about complex, unpredictable program state. If the requirements can allow for the program to fail fast (as you say, "punting the whole problem"), then the code itself can be kept much simpler and easier to reason about. When errors do crop up, it is easier to look at logs to debug a simple program (or infrastructure issue) than to debug a complex program where you aren't sure how earlier errors affected state.
In languages with exceptions, this is sort of the default - if you don't handle an exception, the program will crash. Often there will be a main-level exception handler that logs otherwise uncaught errors. But a problem in Go is that programs keep running by default when an error is returned. If you don't check the error status of every single function call that returns errors, then the program can get into an unknown state. So in production code, especially code that just combines a few libraries, every function call ends up taking 3 or 4 lines (one line for the call plus three lines of error checking).
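A hypothetical sketch of what that looks like in practice (fetchRecord/parseRecord are made-up stand-ins for arbitrary library calls):

package main

import (
    "errors"
    "fmt"
)

type Record struct{ Name string }

// stand-ins for library calls that can fail
func fetchRecord(id string) ([]byte, error)  { return []byte("{}"), nil }
func parseRecord(b []byte) (*Record, error)  { return nil, errors.New("bad record") }

// each fallible call costs one line for the call plus three more for the check
func loadRecord(id string) (*Record, error) {
    data, err := fetchRecord(id)
    if err != nil {
        return nil, fmt.Errorf("fetching record %s: %v", id, err)
    }
    rec, err := parseRecord(data)
    if err != nil {
        return nil, fmt.Errorf("parsing record %s: %v", id, err)
    }
    return rec, nil
}

func main() {
    if _, err := loadRecord("42"); err != nil {
        fmt.Println(err) // parsing record 42: bad record
    }
}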
Regarding the number of return values, my dieOnError/logOnError only handles functions that return error and nothing else. If the function returns multiple values, I first call it, then call dieOnError with the err:
foo, bar, err := baz()
dieOnError(err)
//assume foo and bar are valid
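(The error-only helper itself is presumably nothing more than this, give or take the logging details, using the standard log package:)

func dieOnError(err error) {
    if err != nil {
        log.Fatal(err) // log the error and exit with a non-zero status
    }
}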
I suspect I could write a variadic error checking function that assumes the last parameter is error:
func dieOnError(vals ...interface{}) []interface{} {
    // assume the last value is the error (nil when nothing went wrong)
    if err, ok := vals[len(vals)-1].(error); ok && err != nil {
        log.Fatal(err)
    }
    return vals[:len(vals)-1]
}
func foo() (bar int, baz int, err error) {
    // ...
    return
}
// type assertions are still needed on the way out, which is the ugly part
vals := dieOnError(foo())
bar, baz := vals[0].(int), vals[1].(int)
tl;dr - they said Go was going to be simple. It is.
Preface: It's perfectly OK to dislike languages, especially when you know of a better one that you're not using. I personally cannot help but think of all the better languages I could be using when I have to write some verbose bad abstraction in a less expressive/powerful language. Also, as I re-read this, the notion has struck me that I might be a Golang apologist. Also, as others have noticed, the Go 2 drafts mention just about every point in this article.
That said, I think OP misses the point of Go -- it's meant to be simple, productive, and production-ready. I kind of shrugged at many of the points OP makes about its mistakes.
> I've never met a language lead so openly hostile to the idea of developer ergonomics.
I'm not sure I believe this -- C++/Java have similarly bad ergonomics, just in a different way: they give/saddle you with abstractions that are easy to use wrongly and let you build a shit castle with amazing speed, while leaving you open to a hornet's nest of mistakes and pitfalls. Go sacrifices these features for a reason, and they stated it up front.
That said, I do think they made a huge mistake not having proper union/sum types (AKA Algebraic Data Types/ADTs), but I can forgive them because building an excellent type system might have gotten in the way of their stated simplicity goals.
> In Go's case, the language embodies an extremely rigid caste hierarchy of "skilled programmers" and "unskilled programmers," enforced by the language itself.
I think this is a reflection of the Go, Google, and programming communities in general. Whether it's the "10x engineer" or the working-at-big-companies-means-you're-a-good-engineer mentality, this mindset is pervasive. It's the difference between a Senior Staff Software Engineer II and a Senior Engineer who just moved in from some other company -- it's trust.
> Again, the Go team's "not our problem" response is disappointing and frustrating.
Sucks that they didn't solve those problems up front, but I personally do not blame them for focusing on shipping and fixing as they go. Honestly, with the amount of work they've put in, they have built a runtime and language that have comparable performance and are simpler than Java + the JVM. The JVM has had millions of man-hours dumped into it over decades. This is insane.
Also, there's the fact that whether it's a complete farce or not, they have left open an avenue for changing the language. People could fork Go and change it if they really felt that strongly about this stuff. The thing is, people for the most part don't -- if you want a better type system go use (and convince your team to use) a better language.
> The standard Go approach to operations which may fail involves returning multiple values (not a tuple; Go has no tuples) where the last value is of type error, which is an interface whose nil value means “no error occurred.”
Yes, error handling in Golang is less than ideal, but I prefer forcing this as a default over the C++/Java approach -- this default makes it very hard to write code that doesn't deal with errors at the place you're most able to correct/work around them. Again, I'm also giving Golang a pass for not having errors-as-values in the way a language with a good type system would... because Go sacrificed its type system for simplicity.
I don't think Go is a good programming language, but I do think it delivers on what it promises. It does what it set out to do -- be simple, productive, and production-ready (this is mostly because they hammered out all the bugs with tons of buy-in from developers inside and outside Google, and it's still not perfect of course).
I've said it before and I'll say it again -- I think Go will supplant Java as the language most companies use on the backend in <10 years. Fundamentally, because of the developer fungibility benefits of Go -- it's going to be easier than ever to swap out Golang programmers -- you won't even have to be a JVM master (which is normally one of the border points between Java amateurs and pros) to write good-enough code.
Business thrives on good-enough code, not good code -- Golang is excellent for that, and gets proven more and more right every day with all the companies that are writing performant, statically compilable (thus easier to ship) code and just getting shit done.
> I'm not sure I believe this -- C++/Java have similarly bad ergonomics, just in a different way: they give/saddle you with abstractions that are easy to use wrongly and let you build a shit castle with amazing speed, while leaving you open to a hornet's nest of mistakes and pitfalls. Go sacrifices these features for a reason, and they stated it up front.
So what this immediately makes me think is: what about Swift? Swift has many similar goals as far as avoiding the common pitfalls of C++ and Java, and moving more of the work of problem detection from runtime to compile time, but it manages to do so while embracing the advancements in programming language design over the past 20 years. The Go way seems to throw an awful lot of baby out with the bathwater.
I would love to hear from someone who actually uses Swift day to day, but everything I read about Swift disappoints me (in the "why was that even a problem in such a modern language??" sense), as did some brief conversations with an iOS developer from a startup I was at for a while. I don't have any specific blog posts I can point to right now, but a more defensible point might be: why would I pick Swift over Rust, if I wasn't forced to (by doing recent iOS development)?
Features seem to be very similar, but Rust has/is:
- A more expressive compiler + type system
- Better memory management via the borrowing/ownership paradigm
- More F/OSS friendly, community steered
- Usable from embedded (no runtime) to web services tiers (maybe I just don't know a lot about embedded Swift outside of iOS)
- Zero-cost abstractions
Swift is indeed a huge step forward from Objective-C or some other languages like C++/Java, but I don't personally try to use it outside iOS because there are better options. If I'm going to bring along a runtime, why not use Haskell or Golang?
These days if I get to choose my language I spend my time trying to decide between Rust or Haskell -- then again most of my projects are small, and make no money.
> everything I read about Swift disappoints me (in the "why was that even a problem in such a modern language??" sense)
Can you be specific? I would love to understand which type of problem you are speaking about. As someone who also got into Swift because it is the only game in town for iOS, it's become my go-to language for everything from tooling/orchestration to back-end infrastructure simply because I find it very nice to work with.
Admittedly I'm not an expert in Rust, but the best way to explain why I prefer working in Swift is that I find it to be a permissive language, where Rust is an opinionated language.
For example, I find Rust's memory management paradigm to be super interesting, and I can see the advantages it has in terms of safety (especially with regard to concurrency, where Swift has some things to figure out), but it is something which puts some limitations on the design patterns which are available to you.
Swift, by contrast, has a deep toolbox which you can use to implement OO patterns, functional patterns, Protocol-Oriented design, etc.
So I think Swift and Rust have slightly different goals, and while the worst piece of Rust code you can write that will still compile will be a bit safer/more performant, etc., than the equivalent Swift, to me Swift strikes the balance where it's still giving me a huge advantage over most languages in terms of compile-time checking, while letting me be incredibly productive and expressive.
> Can you be specific? I would love to understand which type of problem you are speaking about. As someone who also got into Swift because it is the only game in town for iOS, it's become my go-to language for everything from tooling/orchestration to back-end infrastructure simply because I find it very nice to work with.
Sorry I tried to look through my browser history for a specific example but couldn't find one... All I found was a page on AlamoFire (https://stackoverflow.com/questions/29131253/swift-alamofire...) that I think I was looking up when trying to help debug some iOS mobile code. I think I was looking over the shoulder of an iOS dev on my team and just didn't like the code I was seeing...
I don't think I can defend that point very well but it's just a feeling I've held on to over time.
> Admittedly I'm not an expert in Rust, but the best way to explain why I prefer working in Swift is that I find it to be a permissive language, where Rust is an opinionated language.
Well, I'm not either -- I've been writing it for fun but it's not my main workhorse at this point (never mind finding a corporate codebase in it). This might make it a bit clearer where we differ -- I'm really into opinionated/non-permissive languages right now. The more feedback I can get from a compiler/type system the better, as far as I'm concerned, as long as the language lives up to its promises to make me better/safer/whatever (for Rust & Haskell they mostly deliver).
> Swift, by contrast, has a deep toolbox which you can use to implement OO patterns, functional patterns, Protocol-Oriented design, etc.
I personally don't actually want OO design in the backend languages I use if I can avoid it -- I've found that a good type system along with typeclasses (Haskell)/traits (Rust)/protocols (Swift)/duck-typed interfaces (Golang) and basic structs is all I need. If the language allows constraints on the generics (i.e. requiring one typeclass/trait/protocol to satisfy another), that's all the composability I need without all the OO boilerplate and cruft.
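For the Golang entry in that list, implicit ("duck-typed") interface satisfaction looks roughly like this -- a sketch with made-up names:

package main

import "fmt"

// Notifier is satisfied implicitly: there is no "implements" declaration.
type Notifier interface {
    Notify(msg string) error
}

type EmailClient struct{ Addr string }

// EmailClient satisfies Notifier simply by having this method.
func (e EmailClient) Notify(msg string) error {
    fmt.Printf("mail to %s: %s\n", e.Addr, msg)
    return nil
}

func broadcast(n Notifier, msg string) error {
    return n.Notify(msg)
}

func main() {
    _ = broadcast(EmailClient{Addr: "ops@example.com"}, "deploy finished")
}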
Swift might be the more flexible language since it doesn't have the ownership/borrowing system, but I also don't feel like I spend my time reaching for too many design patterns per se -- maybe I write too many of the same kind of application (backend web services). I generally use the component model (like in https://github.com/stuartsierra/component), sprinkle in some DDD and I'm good to go for most things.
> So I think Swift and Rust have slightly different goals, and while the worst piece of Rust code you can write that will still compile will be a bit safer/more performant, etc., than the equivalent Swift, to me Swift strikes the balance where it's still giving me a huge advantage over most languages in terms of compile-time checking, while letting me be incredibly productive and expressive.
Personally the language (if you can call it that) which fits this mold for me is Typescript. Of course Javascript is the wild wild west where the most reckless code seems to be written, but Typescript gives me just enough of a type system and some assurance that everything isn't going to break at runtime. For the projects where I'd use Node (basically so I can ensure a smooth handoff to another developer) I can't use Swift, and for the projects where I might use Swift I'd rather use Rust or Haskell.
I think the biggest difference is that I don't write Objective-C/Swift iOS apps; if I did, maybe I'd have a chance to give Swift a more thorough look over. That said, I basically never intend to write fully native apps because I'd rather pick "close-enough"/"good-enough" options like Nativescript.
Ah yeah, so AlamoFire is a monstrosity of a library which was originally implemented in Objective C back when iOS's native networking tools were pretty bare bones. I thought it was bloated and overcomplicated then, and it's worse now.
> JSON parsing in swift
This is a solved problem since Swift 4. Now you can declare conformance to Encodable/Decodable protocols on any one of your types, and in many cases the compiler can generate a default implementation of the protocol methods, so that's all it takes to make a type parsable from/serializable to JSON.
What's better, these "Codable" objects work with "Encoader/Decoder" protocols, so you can provide your own JSON parser implementation, or for instance, you could implement a TOML parser which conforms to Decoder, and all your existing Codables would already work with it.
Swift definitely has some warts remaining, but it's gotten a lot better over time, and the upcoming features and priorities seem promising as well.
> Ah yeah, so AlamoFire is a monstrosity of a library which was originally implemented in Objective C back when iOS's native networking tools were pretty bare bones. I thought it was bloated and overcomplicated then, and it's worse now.
OK, glad I wasn't imagining it
> This is a solved problem since Swift 4. Now you can declare conformance to Encodable/Decodable protocols on any one of your types, and in many cases the compiler can generate a default implementation of the protocol methods, so that's all it takes to make a type parsable from/serializable to JSON.
This is exactly what I was expecting it to be like, which is why I think I was disappointed -- the ToJSON typeclass (Haskell) and the Deserialize trait (Rust) are the analogues I was expecting to find but didn't.
Honestly I think Swift is a good language, my opinion on it really isn't too relevant except for why I normally don't reach for it outside of iOS development.
Also, does Swift compile to a static binary fairly easily? That's another feature I really like from the recent crop of languages (Haskell excluded, it can be a bit tricky) -- Rust and Golang compile statically super easily, and you can get even more portability if you build in Alpine (since it uses musl libc).
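For what it's worth, on the Go side a fully static binary for pure-Go code is usually just a matter of turning cgo off, something like the following ("app" is just a placeholder output name):

CGO_ENABLED=0 go build -o app .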
You don't have to like Go -- I'm not exactly "liking it" per se -- but what's the alternative if you want a statically typed, compiled language? Java? Well, the compile time is too big for my taste (make a change, wait 30 seconds) and it also needs to munch RAM. .NET Core I could not get to run.
You can use Java in a lightweight manner. That's what I'm doing lately. Use the built-in com.sun.net.httpserver or add the Undertow dependency. Enough for HTTP, uses a few MB of memory (I'm running one app on a 256 MB OpenBSD server), starts in a fraction of a second. Use JDBC without all that ORM nonsense and you'll have somewhat verbose but pretty obvious code. It's a pretty productive environment for me.
>Use the built-in com.sun.net.httpserver or add the Undertow dependency.
Haven't used Java for a while, but IIRC, (some of) those *.sun.* classes (as opposed to the java.* ones) used to be mentioned as liable to be deprecated, so not to be relied on (long term at least). Is that still the case now?
I don't think that this module is deprecated. https://docs.oracle.com/en/java/javase/11/docs/api/jdk.https... here's the documentation for the latest Java without any mention of deprecation. It's a bit of a strange module because of the package it's located in, but it was always documented, so I'm considering it safe to use for the time being. Probably not the most performant (uses good old blocking I/O), but good enough for many tasks.
Would suggest that you're not very familiar with the field that you're commenting on.
Ignorance of a subject is perfectly fine. Everyone's ignorant of something. But flaunting your ignorance doesn't really strengthen your case appreciably.
You shouldn't need to wait 30 seconds on any change to a Java app. It supports incremental compilation. My own projects usually take 2-3 seconds to recompile and launch, and they're large.
D is a mess. Rust, maybe not, but seeing how Firefox is losing browser share, I doubt there will be any company behind it in 5-10 years; same for Kotlin, with VS Code beating JetBrains.
Yeah, it's much safer to use a language backed by Google, a company known for ruthlessly killing non-core projects after a few years, no matter how much traction they have.
You have a point, but to be honest there are specifications of Go available to the public and also FLOSS compiler implementations.
To draw a parallel with Java, even though Oracle is ruling the project with an iron fist, the FLOSS implementations ensure that developers won't find themselves SOL if Oracle decides to pull the trigger and kill their Java business.
Go is a core project; apparently they use it a lot (YouTube is in it), but if that happens, it can be picked up by other companies that use it: Docker, Kubernetes, CockroachDB, CoreOS, etc.
What does Rust have, Firefox? What does D have... nothing?
Same case as Rust then. Currently backed by Mozilla, with lots of big companies using it too. IIRC the most famous case is Dropbox. Rust has unique features that make it very attractive. I'd say Go has unique features too (e.g. uniformity of code style). Having unique features means the language will probably live on even if the current maintainers step down.
D doesn't bring anything new to the table, that's why it's not as relevant.
Mozilla is not the only company paying people to work on Rust, and we've heard rumors of more companies being interested in doing so. Losing Mozilla would be a big blow, but Rust is larger than that at this point.
I've done lots of work in C, Python and Common Lisp. I've done a fair bit in go and for me it's just not fun to use. I feel like I'm playing in a little sandbox with a few prescribed tools.
In my opinion this is a misconception that I have seen in other people approaching the language as well. The thing is: nobody should use "advanced" features. Using `reflect` should be reserved for exceptional cases where it's not possible to solve the problem differently. This has been pointed out on the Go website and the mailing list a bazillion times.
That said, the language doesn't have many features. So if you want to show off your skill, it's probably by structuring the code concisely, which can be challenging at times. Using reflect is actually the easy way out, and most functions in that package can panic easily...
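As a tiny made-up illustration of the "can panic easily" point:

package main

import (
    "fmt"
    "reflect"
)

func main() {
    // recover only so the failure mode can be printed instead of crashing
    defer func() {
        if r := recover(); r != nil {
            fmt.Println("recovered:", r)
        }
    }()

    v := reflect.ValueOf("not an int")
    fmt.Println(v.Kind()) // string
    fmt.Println(v.Int())  // panics: reflect.Value.Int only works on integer kinds
}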
Scala, IMO, suffers from exactly the opposite problems to Go. Its flexibility lends itself to abstractions for the sake of abstractions, and in practice it sometimes seems more like a platform for showing off "cleverness" than for solving actual problems.
Even if you believe that distinction, Scala is different because library writers can use the same language as application developers. That's not true of the divide between implementing and using Go.
This article is a bit outdated, makes some invalid claims, and leans on ad hominem and other fallacies, so I'd recommend sticking to the textbook complaints about Go for maximum clicks.