Abstract
In nematology research, hypothesis testing is a fundamental method and is typically supported by statistical significance (e.g., P < 0.05). However, our review of recent publications in nematology reveals frequent issues, including unjustified sample sizes and unclear reporting of statistical methods, which undermine the validity and reproducibility of results. To address these issues, we recommend that researchers conduct a priori power analyses to estimate adequate sample sizes and report key descriptive statistics (e.g., effect sizes). These practices not only strengthen the reliability of research but also help investigators answer a central question: How many samples are needed to detect a "truly" statistically significant difference in an experiment?
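As an illustration of the a priori power analysis recommended above, the following minimal sketch estimates the per-group sample size for a two-sample t-test. The effect size, significance level, and target power are assumed example values, not figures from this study.

```python
# Illustrative a priori power analysis (assumed inputs, for demonstration only):
# how many replicates per group are needed for a two-sample t-test?
from statsmodels.stats.power import TTestIndPower

effect_size = 0.5   # assumed Cohen's d (a "medium" standardized difference)
alpha = 0.05        # significance threshold
power = 0.80        # desired probability of detecting a true effect

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=effect_size,
    alpha=alpha,
    power=power,
    alternative="two-sided",
)
print(f"Required sample size per group: {n_per_group:.1f}")
# With these assumed inputs, roughly 64 replicates per group are required;
# smaller expected effects or stricter alpha levels increase this number.
```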