Manual testing as the first step… not very productive imo.
Outside-in testing is great, but I typically do automated outside-in testing and only manual at the end. The testing loop needs to be repeatable and fast; manual is too slow.
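To make the "repeatable and fast" point concrete, here's a minimal sketch of an automated outside-in test. All names are hypothetical: `handle_request` stands in for whatever the system's outermost boundary actually is (an HTTP handler, a CLI, etc.), and the test drives it only through that boundary.

```python
import unittest

def handle_request(path: str) -> tuple[int, str]:
    """Hypothetical stand-in for the application's outer boundary."""
    if path == "/health":
        return 200, "OK"
    return 404, "not found"

class OutsideInTest(unittest.TestCase):
    # Exercise the system only via its public entry point,
    # never through internal modules.
    def test_health_endpoint_reports_ok(self):
        status, body = handle_request("/health")
        self.assertEqual(status, 200)
        self.assertEqual(body, "OK")

    def test_unknown_path_is_404(self):
        status, _ = handle_request("/nowhere")
        self.assertEqual(status, 404)

if __name__ == "__main__":
    unittest.main()
```

Because the whole loop is one `python -m unittest` invocation, it can run on every change, which is exactly what a manual pass can't do.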
Yeah, that's fair - the manual testing doesn't have to go first, but it does have to get done.
I've lost count of the number of times I've skipped it because the automated tests passed, only to find some dumb but obvious bug I'd missed - instantly exposed the moment I actually exercised the feature myself.
Maybe a bit pedantic, but does manual testing really need to be done, or is the intent here closer to a usability review? I can't think of a time obvious unintended behaviour showed up that wasn't caught by the contract encoded in the tests (there is no reason to write code that doesn't have a contractual purpose). But finding out, after trying it, that what you've created has an awful UX is something I have encountered - and that is much harder to encode in tests[1].
[1] As far as I can tell. If there are good solutions for this too, I'd love to learn.
Automated testing (there aren't different kinds; trying to draw a distinction misunderstands what it is) doesn't catch bugs - it defines a contract. Code is then written to conform to that contract. Bugs can't be introduced "to be caught", as they would violate the contract.
Of course that is not a panacea. What can happen in the real world is not truly understanding what the software needs to do. That can result in the contract not being aligned with what the software actually needs. It is quite reasonable to call the outcome of that "bugs", but tests cannot catch that either. In that case, the tests are where the problem lies!
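A hedged sketch of what "tests define a contract" means in practice - the function and scenario here are invented for illustration. The test pins down the contract "a discount never pushes a price below zero"; code written against it can't introduce that bug, but if the contract itself is wrong, the test is exactly where the fault lies.

```python
def apply_discount(price: float, discount: float) -> float:
    """Return the price after discount, floored at zero per the contract."""
    return max(price - discount, 0.0)

def test_discount_never_goes_negative():
    # The contract, stated as executable assertions.
    assert apply_discount(10.0, 3.0) == 7.0
    assert apply_discount(5.0, 8.0) == 0.0  # floor at zero, never negative

test_discount_never_goes_negative()
```

If the business actually wanted oversized discounts rejected rather than floored, the implementation would still "pass" - the misunderstanding lives in the contract, not the code.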
Most aspects of software are pretty clear cut, though. You can reasonably define a full contract without needing to see the result. UX is the one area where I've struggled to determine what the software needs before seeing it: there's seemingly no objective measure of whether a UX will spark joy that could be encoded in a contract ahead of time. As before, though, I'm quite interested to learn how others are solving that problem, as leaving it at "I'll know it when I see it" is a rather horrible approach.
There's a lot of pedantry here trying to argue that there exists some feature which doesn't need to be "manually" tested, and I think the definition of "manual" can be pushed around a lot. Is running a program that prints "OK" a manual test or not? Is running the program and seeing that it now outputs "grue" rather than "bleen" manual? Does verifying the arithmetic against an Excel spreadsheet count?
There are programs that almost can't be tested manually, and programs that almost have to be. I remember when working on PIN pad integration we looked into getting a robot to push the buttons on the pad - for security reasons there's no way of injecting input automatically.
What really matters is getting as close to a realistic end user scenario as possible.