> You cannot ignore the group that didn't answer the questionnaire, as they will most likely exhibit some of the behavior that you are researching (i.e. about life, etc.), and might have a huge impact on your results.
This is completely paradoxical.
You are saying that using the data from the 90 would be jumping to conclusions, because you would probably be ignoring data that doesn't match those 90.
But making that claim IS itself jumping to conclusions, because it rests on an assumption: that the 30 have something in common that explains why they didn't fill in the form.
The article you link to establishes the reality of participation bias, but it does not exactly endorse option A. It does say "In e-mail surveys those who didn't answer can also systematically be phoned and a small number of survey questions can be asked. If their answers don't differ significantly from those who answered the survey, there might be no non-response bias. This technique is sometimes called non-response follow-up." This is not, however, the same as option A, which (as far as it goes) commingles responses from those who respond to the second prompt with those from the first, potentially concentrating a non-response bias in those who don't respond to either prompting. Furthermore, neither option A nor the above quote offers a remedy if evidence of non-response bias is found.
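The quoted "non-response follow-up" can be sketched in a few lines. This is a minimal, hypothetical Python sketch, not the article's procedure: the made-up scenario assumes 10 of the 30 non-respondents are reached by phone, the numbers are simulated, and Welch's t statistic stands in for whatever significance test the analyst actually prefers.

```python
import math
import random
import statistics

random.seed(7)

# Hypothetical data: answers from the 90 who returned the form, and
# answers obtained by phoning 10 of the 30 non-respondents.  The phone
# group is deliberately simulated with a higher mean, to mimic a case
# where non-response correlates with the surveyed trait.
survey_answers = [random.gauss(50, 10) for _ in range(90)]
phone_answers = [random.gauss(65, 10) for _ in range(10)]

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances)."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(
        va / len(a) + vb / len(b)
    )

t = welch_t(survey_answers, phone_answers)
# Rough rule of thumb: |t| well above ~2 suggests the follow-up answers
# differ from the survey answers, i.e. non-response bias is plausible.
print(f"Welch t = {t:.2f}")
```

Note the limitation raised in the comment still applies: this only *detects* a difference between the 90 and the phoned subsample; it offers no remedy, and anyone who refuses both the form and the phone call remains unobserved.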
It's really not. The 30 might have something in common, which calls into question any findings that exclude them. That doesn't rest on any unreasonable assumption.
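The danger being described can be shown with a toy simulation. Everything here is assumed for illustration: the population values, and the (worst-case) rule that the 30 people with the highest trait values are exactly the ones who never respond.

```python
import random
import statistics

random.seed(1)

# Hypothetical setup mirroring the thread: 120 people are invited,
# 90 answer, 30 do not.
population = [random.gauss(50, 10) for _ in range(120)]

# Assumption for illustration: the surveyed trait itself drives
# non-response, so the 30 highest values never fill in the form.
ranked = sorted(population)
respondents, non_respondents = ranked[:90], ranked[90:]

true_mean = statistics.mean(population)
observed_mean = statistics.mean(respondents)

print(f"true mean:       {true_mean:.1f}")
print(f"respondent mean: {observed_mean:.1f}")
# The respondent-only estimate is biased low, which is why the 30
# cannot simply be ignored without checking why they dropped out.
```

Of course, whether non-response actually correlates with the trait is exactly the open question in this thread; the simulation only shows what happens *if* it does.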