Generally because a bigger N makes the study more complicated logistically (cost, finding participants, and so on). And if the results are promising, you can always get more funding to move on to bigger trials. Nobody is going to pay for you to recruit 1000 participants if you can't show an effect with a smaller number first.
> I think it's pretty obvious that a small N is not statistically significant
Well, this depends entirely on the effect size. You can't just dismiss results because N is small; a large effect can be statistically significant even in a small sample, and ignoring that ignores how the statistics actually work.
>Why not use N >= 100 or not at all.
Because N >= 100 is a totally arbitrary cutoff. The N you need depends on the effect size you're trying to detect, not on some fixed number. With an N of only 100 you'd still miss lots of real but small effects, and with a hard minimum of 100 you'd throw away lots of real but large effects that are perfectly detectable at N < 100.
Basically this approach would severely limit the kinds of research we could do, for a questionable gain in statistical reliability.
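To put rough numbers on the effect-size point, here's a minimal power-calculation sketch (my own illustration, assuming a two-sample t-test, 80% power, and alpha = 0.05, using statsmodels):

```python
# Rough illustration: how the required N per group changes with effect size,
# assuming a two-sample t-test, 80% power, alpha = 0.05 (common conventions).
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()

for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
    # solve_power returns the sample size per group needed to detect
    # an effect of size d (Cohen's d) at the given power and alpha.
    n_per_group = power_analysis.solve_power(effect_size=d, power=0.8, alpha=0.05)
    print(f"{label} effect (d={d}): ~{n_per_group:.0f} participants per group")
```

Roughly speaking, that comes out to a few hundred per group for a small effect but only a few dozen for a large one, which is exactly why a blanket N >= 100 rule is both too small for some questions and wastefully large for others.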