
Examples of studies using p < 0.001, p < 0.0001 or even lower p …
21 Mar 2011 · However, in essence they are using 0.05, but divided by the number of tests. It is obvious that this procedure (Bonferroni correction) can quickly lead to incredibly small significance thresholds. That's why people in the past (in neuroscience) stopped at p < 0.001. Nowadays other methods of multiple-comparison correction are used (see Markov random field ...
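As a rough sketch of the arithmetic described above (my own illustrative numbers, not from the thread): the Bonferroni correction for m tests compares each p-value against 0.05/m, or equivalently multiplies each raw p-value by m, so with the tens of thousands of tests typical of voxel-wise neuroimaging the per-test threshold becomes tiny very quickly.

# Hypothetical example: 50,000 voxel-wise tests at a family-wise alpha of 0.05
m     <- 50000
alpha <- 0.05
alpha / m                                 # per-test threshold: 1e-06

# Equivalent view: inflate the raw p-values and compare them to alpha
p_raw <- c(0.00001, 0.0004, 0.03)
p.adjust(p_raw, method = "bonferroni")    # 0.5, then 1 and 1 (adjusted values are capped at 1)
p.adjust(p_raw, method = "bonferroni") < alpha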
Convention for symbols indicating statistical significance?
Note that some authors use "+" specifically for results between 0.05 and 0.10; these are commonly termed "marginally significant". Keep in mind that such arbitrary thresholds do not really improve the clarity of the results.
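For reference, base R prints a related set of symbols under "Signif. codes" in summary.lm(); the short sketch below uses that convention (which marks the 0.05–0.10 band with "." rather than "+") to show how such codes are generated with symnum().

p <- c(0.0004, 0.004, 0.04, 0.07, 0.4)
symnum(p, corr = FALSE, na = FALSE,
       cutpoints = c(0, 0.001, 0.01, 0.05, 0.1, 1),
       symbols   = c("***", "**", "*", ".", " "))   # ***  **  *  .  (blank)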
Why is it possible to get significant F statistic (p<.001) but non ...
Nice work! I just wonder about one specific example I ran into (on real data): logistic regression with p = 10 predictors and n = 500 observations (balanced response classes), largest VIF < 1.5, where one ends up with a significant full-model likelihood ratio test (p = 0.0003) and no significant predictors (the two smallest p-values are ~0.11, the rest are > 0.25).
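The pattern described in this comment can be reproduced with a small simulation (a hedged sketch with made-up parameters, not the commenter's data): when many predictors each carry a little independent signal, the overall likelihood ratio test tends to be clearly significant while most or all of the individual Wald tests are not.

set.seed(1)
n <- 200; p <- 10
X   <- matrix(rnorm(n * p), n, p)
eta <- X %*% rep(0.18, p)                      # every predictor has a small true effect
y   <- rbinom(n, 1, plogis(eta))
full <- glm(y ~ X, family = binomial)
null <- glm(y ~ 1, family = binomial)
anova(null, full, test = "LRT")                # overall likelihood-ratio test
summary(full)$coefficients[, "Pr(>|z|)"]       # individual Wald p-values, typically mostly > 0.05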
Coefficient of 0.001 with p < 0.005 [duplicate] - Cross Validated
15 Sep 2020 · This should be a simple inquiry. Doing a regression analysis, I found that the coefficient of a predictor has a tiny positive effect of 0.001 that is significant at the 0.005 level. I cannot help wondering how such a result is possible. Do you think it might depend on the fact that my variables are not (yet) scaled?
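One common, mundane explanation (a sketch with invented data, not the asker's regression) is simply the unit of measurement: rescaling a predictor changes the size of its coefficient but leaves the t statistic and p-value untouched, so a "tiny" slope can still be estimated very precisely.

set.seed(42)
income_usd <- runif(300, 20000, 120000)
happiness  <- 0.00005 * income_usd + rnorm(300)             # true slope is small in dollar units
summary(lm(happiness ~ income_usd))$coefficients            # slope around 5e-05, tiny p-value
summary(lm(happiness ~ I(income_usd / 1000)))$coefficients  # slope x1000, identical t and p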
bayesian - Why is a $p(\sigma^2) \sim \text{IG}(0.001, 0.001)$ …
One of the most commonly used weak prior on variance is the inverse-gamma with parameters $\alpha =0.001, \beta=0.001$ (Gelman 2006). However, this distribution has a 90%CI of approximately $[3\times10^{19},\infty]$.
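The quoted interval is easy to check numerically (my own computation, not from the question): if $\sigma^2 \sim \text{IG}(a, b)$ then $1/\sigma^2 \sim \text{Gamma}(\text{shape}=a, \text{rate}=b)$, so the inverse-gamma quantiles are reciprocals of the opposite gamma quantiles.

a <- 0.001; b <- 0.001
q_lo <- 1 / qgamma(0.95, shape = a, rate = b)   # 5th percentile of the IG, on the order of 1e19
q_hi <- 1 / qgamma(0.05, shape = a, rate = b)   # 95th percentile; the gamma quantile underflows
                                                # to 0 in double precision, so this prints Inf
c(q_lo, q_hi)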
p value - How can you have p < 0.0001 with a sample size of 89 …
Patients with recurrent primary GBM treated with POH survived significantly longer (log-rank test, P < 0.0001) than the untreated group. Is it even possible to have p < 0.0001 for such a small sample and control group? I'm not a statistician, but I thought you'd need tens of thousands of samples at the very least to get that sort of confidence.
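A quick simulation (invented numbers, not the study's data) shows that sample size alone is not the issue: what drives the p-value is the size of the effect relative to its standard error, and a strong survival difference in roughly 89 patients is easily enough for the log-rank test to drop below 0.0001.

set.seed(7)
library(survival)
grp    <- rep(c("treated", "control"), c(45, 44))
time   <- rexp(89, rate = ifelse(grp == "control", 0.30, 0.10))  # ~3-fold hazard difference
status <- rep(1, 89)                                             # no censoring, for simplicity
survdiff(Surv(time, status) ~ grp)   # log-rank test; with an effect this large, p is typically far below 0.0001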
using shapiro wilk test to explain p-values - Cross Validated
29 May 2020 · A p-value of 1.439e-05 equals 1.439 × 10^(-5), which is less than 0.05. The Shapiro-Wilk test checks the normality of the data.
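A minimal reproducible reading of that output (my own toy data, not the poster's): shapiro.test() returns the W statistic and a p-value, and a p-value below the chosen alpha leads to rejecting the null hypothesis that the sample was drawn from a normal distribution.

set.seed(1)
x <- rexp(100)        # deliberately non-normal sample
shapiro.test(x)       # p-value far below 0.05, so normality is rejected
y <- rnorm(100)
shapiro.test(y)       # p-value usually above 0.05, so normality is not rejected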
r - Restricted cubic spline looks like a linear curve, but p for ...
2 Mar 2023 · This is the P value for Nonlinear.

> anova(rcs_model)  # Nonlinear test
                Wald Statistics          Response: Surv(care.py, care)

 Factor      Chi-Square  d.f.  P
 QQ              474.35     2  <.0001
  Nonlinear       25.65     1  <.0001
 TOTAL           474.35     2  <.0001

This is the restricted cubic spline.
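For context, a hedged sketch of how a table like this is produced with the rms package (variable names here are made up; rcs with 3 knots yields the 2 total and 1 nonlinear degrees of freedom seen above):

library(rms)
set.seed(3)
d  <- data.frame(QQ = rnorm(300), time = rexp(300), status = rbinom(300, 1, 0.7))
dd <- datadist(d); options(datadist = "dd")
fit <- cph(Surv(time, status) ~ rcs(QQ, 3), data = d)   # Cox model with a restricted cubic spline
anova(fit)   # the "Nonlinear" row tests whether the spline improves on a straight line in QQ

A nonlinear chi-square (25.65) that is small relative to the total (474.35) indicates a statistically significant but modest departure from linearity, which is why the fitted spline can look almost straight even though the nonlinearity test is significant.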
r - Mediation CI contains 0 but p<.001? - Cross Validated
11 Aug 2023 · As far as I know, this method of performing a mediation analysis and obtaining a p-value from it are both correct. However, I recently obtained a result where the indirect effect's CI was (1.05, -0.02), yet the p-value was still 0 (indicating that fewer than 0.1% of the indirect effects should be less than 0). What could possibly produce this result?
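As a generic point of comparison (a sketch with simulated data, not the poster's model): when the indirect effect a*b is bootstrapped, the percentile CI and a two-sided bootstrap p-value are read off the same bootstrap distribution, so they cannot genuinely contradict each other; a CI printed as (1.05, -0.02) alongside p < .001 usually points to swapped or mislabeled bounds, or to a p-value computed from a different approximation than the CI.

set.seed(11)
n <- 150
x <- rnorm(n)
m <- 0.5 * x + rnorm(n)                 # mediator model (a path)
y <- 0.4 * m + 0.2 * x + rnorm(n)       # outcome model (b path plus direct effect)
boot_ab <- replicate(5000, {
  i <- sample(n, replace = TRUE)
  a <- coef(lm(m[i] ~ x[i]))[2]
  b <- coef(lm(y[i] ~ m[i] + x[i]))[2]
  a * b
})
quantile(boot_ab, c(0.025, 0.975))                 # percentile CI for the indirect effect
2 * min(mean(boot_ab <= 0), mean(boot_ab >= 0))    # two-sided bootstrap p-value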
Interpreting F, p, & partial eta squared from an ANOVA
The color priming effect was reliable in the nonsynesthetic group, F(1,22) = 13.10, p < .005, ηp² = .37, as well as in the synesthetic group, F(1,22) = 24.39, p < .001, ηp² = .53. I have no idea what partial eta squared means, but I have come to the understanding that if F is bigger than 1 it is significant.
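For what it's worth, partial eta squared can be recovered directly from the reported F and its degrees of freedom via the standard identity $\eta_p^2 = \frac{F \cdot df_1}{F \cdot df_1 + df_2}$; the short check below (my own code, not from the thread) reproduces the two values quoted above.

partial_eta_sq <- function(Fval, df1, df2) (Fval * df1) / (Fval * df1 + df2)
partial_eta_sq(13.10, 1, 22)   # ~0.37, nonsynesthetic group
partial_eta_sq(24.39, 1, 22)   # ~0.53, synesthetic group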