Formalized expression is detail, and while that detail may not be complex for a programmer, it can be completely meaningless to a user. I agree with you in part, but I think it's worth recognizing that if we all had this attitude, we wouldn't be building business applications / consumer apps for non-developers.
Whether you think it's complex or not, we're hiding the details of procedural tasks from users in the products we build, because that information IS complex to someone who is not a programmer.
> “As it stands now, there seems to be a tradeoff between ease-of-use and control, and until someone figures out how to remove that tradeoff, there will always be a need for engineers who can fully manipulate software to meet the full range of use cases businesses (and individuals) need.”
We're constantly making this type of tradeoff as developers. It's not just a decision we make at the beginning of a project, where we look at the requirements and say, hey, no-code might be the way to go here.
Have you ever written a library function that hides the information / details of how something works? That's a tradeoff between ease and control for the client.
Now expose that function through a GUI with some params and you have "no-code".
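To make the analogy concrete, here's a minimal sketch in Python; the function, its parameters, and the report scenario are all hypothetical, not from the article:

```python
# A library function that hides how "send a report" actually works.
# The caller gives up control over formatting and delivery in
# exchange for a much simpler interface.
def send_report(recipient: str, title: str, rows: list[dict]) -> str:
    # Formatting details hidden from the caller.
    body = "\n".join(f"{r['name']}: {r['value']}" for r in rows)
    # Delivery details hidden too (stubbed out for this sketch).
    return f"To: {recipient}\nSubject: {title}\n\n{body}"

# Expose recipient/title/rows as three form fields in a GUI and the
# user never sees code at all -- that's the "no-code" version.
print(send_report("ops@example.com", "Daily totals",
                  [{"name": "signups", "value": 42}]))
```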
I really like how she connects how necessary the software is to customers with release risk management. It's a very user-centric view, which I like. It also helps me understand release processes I've experienced much better from a business angle. It makes sense that with few vendors, who will each experience a high degree of pain from bad releases, you tend to get a more conservative, move-slow-and-don't-break-shit mentality.
What if you have plenty of ideas but run out of money because you can't ship fast enough? :D Sorry, couldn't help it.
On a more serious note: I'm curious what you mean by "run out of ideas". Do you mean the product team literally not having any ideas? That seems hard to believe. Or do you mean the ideas just don't gain traction? I've seen that happen a lot more.
What does unit testing have to do with whether or not you instrument the test with fake responses? For those points of contact that you're mocking out at the perimeter, the data will sometimes need to reach a particular function through a collaborator... which you may want to mock?
Sometimes the dependency is not a third party; it may be code that requires a ton of setup (as mentioned in the article) that's not worth the cost. It can make sense to just mock at that point so you can actually test the rest of the business logic in the function. I don't think I'd say "well, that's no longer a unit test!" You can argue that it's a more brittle test, sure.
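As a minimal sketch of what I mean (the pricing-table collaborator and all names here are made up, not from the article): mocking the expensive-to-set-up dependency lets the discount rules themselves be tested directly:

```python
from unittest.mock import Mock

# Business logic under test: the discount rules are what we care
# about, not the collaborator that supplies base prices.
def quote(pricing_table, sku: str, qty: int) -> float:
    base = pricing_table.price_for(sku)   # expensive-to-set-up dependency
    total = base * qty
    if qty >= 100:
        total *= 0.9                      # bulk discount rule
    return round(total, 2)

# The mock stands in for the heavyweight collaborator so the test
# exercises only the discount rules.
table = Mock()
table.price_for.return_value = 2.50
assert quote(table, "SKU-1", 100) == 225.0   # 250 * 0.9
assert quote(table, "SKU-1", 10) == 25.0     # no discount
table.price_for.assert_called_with("SKU-1")
```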
Update: also, I'll be honest that comments like this really rub me the wrong way. This type of dogmatism around what is or isn't unit testing (a pretty ill-defined phrase in industry) is something that needs to stop. I think it hurts new practitioners in the field, who are misled into black-and-white thinking.
I'm sorry, I obviously did not intend to offend anyone. Needless to say, this is just my opinion condensed into a sentence (and therefore lacking a lot of context, which I should have provided).
I was not aiming to define what a unit test is, more to pin down when something stops being a unit test (which I thought would be easier to agree on than a definition of what one is, but I guess I underestimated the task).
My point was that if you have to mock, for example, a DB call inside your business logic, then you are writing an integration test at that point, whether or not you mock the DB out. If you design your code so that those dependencies live only at the edge of the system, you get, in my opinion, a much cleaner design and much more testable code.
Too much mocking (or, more precisely, mocking in the wrong places) is a code smell in my opinion.
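A minimal sketch of that "dependencies at the edge" shape, with hypothetical names: the pure business rule takes plain data and needs no mocks, while the DB access lives in a thin outer function that you'd cover with an integration test instead:

```python
# Pure business logic: takes plain data, touches no I/O, so it can
# be unit tested directly with no mocks at all.
def overdue_invoices(invoices: list[dict], today: str) -> list[str]:
    return [inv["id"] for inv in invoices
            if not inv["paid"] and inv["due"] < today]

# Thin edge: the only place that knows about the database. This is
# the part you'd cover with an integration test instead.
def report_overdue(db, today: str) -> list[str]:
    rows = db.fetch_invoices()            # I/O lives at the edge only
    return overdue_invoices(rows, today)

# Mock-free unit test of the core rule:
invoices = [{"id": "A", "paid": False, "due": "2021-01-01"},
            {"id": "B", "paid": True,  "due": "2021-01-01"}]
assert overdue_invoices(invoices, "2021-02-01") == ["A"]
```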
By claiming when something stops being a unit test, you have to define what a unit test is. Now, I do think there is a good, reasonable definition of a unit test, and it's looser than yours: a single function that reaches out for an API call via another function and does N other steps of compute, with the API call mocked, is still a unit test.
Can it be a smell? Maybe; that depends. Does it break your idea of single responsibility? Maybe. Is it an integration test? Not really, because the test is primarily designed to verify the behavior of the function's rules, not its interaction with a third-party system.
So while you can argue that mocking is a smell in that context, that doesn't change the fact that it's still a unit test!
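Here's a concrete (entirely hypothetical) version of that scenario: the API call is mocked out, but the test verifies the function's own compute steps, which is why I'd still call it a unit test:

```python
from unittest.mock import patch

def fetch_rate(currency: str) -> float:
    raise RuntimeError("real API call -- never reached in the test")

# Function under test: one API call plus several compute steps.
def price_in(currency: str, amounts_usd: list[float]) -> float:
    rate = fetch_rate(currency)          # external call via a collaborator
    subtotal = sum(amounts_usd) * rate   # step 1: convert
    with_fee = subtotal * 1.02           # step 2: add 2% fee
    return round(with_fee, 2)            # step 3: round

# The API call is mocked, but the assertion checks the function's
# own rules (conversion, fee, rounding), not the third-party system.
with patch(f"{__name__}.fetch_rate", return_value=0.5):
    assert price_in("EUR", [10.0, 30.0]) == 20.4   # 40 * 0.5 * 1.02
```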
All that said, one can still make the case that the fundamental unit of work for a given context is not really a function, in which case tests of functions are actually integration tests! So I'll also grant that these definitions can be very context-sensitive.
This convo has made me realize that our terms for testing don't map well onto actual testing practice, which I find easier to conceptualize in terms of "coarseness" relative to the fundamental units of work in a system, as opposed to this binary notion of unit versus integration.
More and more I'm seeing "unit testing" become the generic term for what we used to just call plain old testing. A lot of companies have never had anything remotely close to unit testing but call all their automated tests unit tests, at least in part out of ignorance of the difference. My current company has gone a step further and calls its extremely manual and poorly defined QA test plans unit testing.
As for my 2 cents, I find the single-assert principle helpful, as it narrows down the unit of behavior you're actually testing. I don't care if a test covers a single function or half the code base, as long as it's clear what it is specifically testing for and what has gone wrong if it fails.
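A tiny sketch of the single-assert idea (names made up): each test pins down one behavior, so a failure points at exactly one broken rule:

```python
# One behavior per test: a failing test names exactly one broken rule.
def clamp(x: int, lo: int, hi: int) -> int:
    return max(lo, min(x, hi))

def test_clamp_returns_value_inside_range():
    assert clamp(5, 0, 10) == 5

def test_clamp_caps_value_above_range():
    assert clamp(15, 0, 10) == 10

def test_clamp_raises_value_below_range():
    assert clamp(-3, 0, 10) == 0

# Run inline for the sketch (you'd use pytest in practice).
test_clamp_returns_value_inside_range()
test_clamp_caps_value_above_range()
test_clamp_raises_value_below_range()
```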
Hi detaro. Good question. If all my module does is make an API call, and I mock the API, what am I testing? I would rather leave that out of the unit tests, because the added value is, in my opinion, close to none if you are relying on mocked data.
Now, if the module does more than just call the API, then I would argue that it's breaking the single-responsibility principle, and I would prefer to split it into a module that does only the call and another that does the rest.
OK, then go one level in: if a component uses the "only makes an API call" module, how do I unit test it? I can't let it use the module to make the API call (because the API might not be available in testing / that's an integration test / ...), and I can't mock it, because using a mock would make it a not-unit test? I guess this gets at the line between mocks and stubs, but I never found that distinction all that convincing.
Yes, you just don't unit test it: if you mock out the only dependency it has, you are left with nothing, so you are not unit testing anything anyway. You know what I mean?
That code can (and should) still be tested, through an integration test.
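A minimal sketch of that split, with hypothetical names: once its one dependency is faked, the wrapper has no behavior of its own left to test, so it gets an integration test against a real (or locally spun-up) endpoint instead:

```python
import json
import urllib.request

# The "only makes an API call" module: nothing to unit test here,
# because faking urlopen would leave no behavior of its own.
def fetch_user(base_url: str, user_id: int) -> dict:
    with urllib.request.urlopen(f"{base_url}/users/{user_id}") as resp:
        return json.loads(resp.read())

# Integration test: run against a real or locally spun-up endpoint
# (the URL here is a stand-in, not a real service).
def integration_test_fetch_user():
    user = fetch_user("http://localhost:8080", 1)
    assert "id" in user
```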
> One may argue that a motivated and smart student can overcome such obstacles and get her foot in the door by attending such schools. But then the challenge is flipped: not many such students need to attend such schools.
I guess the question is: are there enough such students who can overcome these obstacles and who are also willing to fork over X in tuition? They may not need it per se, but I think you may be discounting the value of the pre-existing networking leverage these schools have for individuals with no network in data-science-related work.
What I've seen a lot over the years is that an engineer gets asked about a timeline, thinks for about 10 seconds, blurts something out, and then, if it's off (too short), busts their ass / cuts corners trying to hit their own estimate. Since estimates are usually optimistic, I tend to see a lot of overtime and/or corners cut.
There's certainly an element of personal pride in that dynamic, I think. You're asked to give an estimate. You give it. You now feel like you've staked your professional rep on it.
The author shares his method for coming up with estimates, and it looks like an offline process that involves more than a gut check. I think for sufficiently large features, we (eng managers, eng peers) should encourage engineers to _not_ give on-the-spot estimates, given how difficult we know estimating is.
It seems to come from the same place: wanting a simple rule to pattern-match against some problem space so we don't have to think too hard about it every time.
Also... the author cites Sam Altman as evidence that his thinking is on the right track, but can you really argue against a statement like “Almost everyone I’ve ever met would be well-served by spending more time thinking about what to focus on”? That's about as close to a one-size-fits-all statement as “almost everyone who succeeds thinks”.
I sometimes wonder if this comes from a place of fear. Maybe we're scared of really listening to customers / users / colleagues / stakeholders, and so we invent these rules to hold ourselves accountable for what's probably pretty obvious from the outside.
I'm a consultant and have been on thousands of calls with customers, partners, and colleagues... after all this time it is still truly astounding how many people wait to talk instead of listening, and cannot or will not exhibit professional empathy and curiosity.
By nature we choose the path of least resistance, because it's easier when possible. It costs less.
We are also constantly trying to navigate the world, and talking is a way to test our worldview against other people. This is why we talk.
So: talking (articulating the worldview) and hopefully getting an affirming response to it is less expensive to our system than throwing away that paradigm and learning a new one.
I agree with the first part of what you said, but I don't think plain listening is a "new paradigm", though I'll grant that it's probably a skill that can be improved.
To circle back to the original topic of understanding the problem: a big part of that is being observant and attentive to the needs of users. In that context, and using your framing in terms of cost and resistance, I agree that simply talking and stating opinions is far easier than shutting up, talking to users, and gathering real information about what they actually want.
It really sucks, and I feel it's gotten worse with Zoom. Sitting in retro meetings over the past year, I've definitely noticed an uptick in people just talking and not actually responding to what someone just said. Very frustrating.