Non-parametric test for samples with unequal variance for 3 or more samples


I have data which is independent, but non-normal, with unequal variance. There are more than two groups, all with the same sample size. Which non-parametric test can I use?










Tags: nonparametric

asked 8 hours ago by Ivan; edited 6 hours ago by Ben Bolker
Comment – Glen_b (4 hours ago): You should not be using characteristics you identify in the sample you're testing to choose which test you use, since it impacts the properties of the test (e.g. p-values won't be uniform under the null). What are the characteristics of your variables? Why would normality be the only parametric distributional model considered? Are you interested in shift alternatives or more general ones? Is the situation one in which the population distributions would plausibly be the same when the null were true? (e.g. one in which 'no treatment effect' would imply no effect on the distribution at all)
2 Answers
Answer by BruceET (score 3; answered 5 hours ago, edited 4 hours ago)
Comment:



Simulated gamma data. Here are simulated gamma data to illustrate some of the points in Ben Bolker's answer (+1). Although none of the traditional tests is completely satisfactory, the Kruskal-Wallis test shows differences among groups for my fake data (in which one difference between groups is rather large).



set.seed(123)
x1 = round(rgamma(20, 5, .2), 3)
x2 = round(rgamma(20, 5, .25), 3)
x3 = round(rgamma(20, 5, .35), 3)
sd(x1); sd(x2); sd(x3)
[1] 10.30572
[1] 6.218724
[1] 6.483086


Sample standard deviations differ, and box plots show different
dispersions along with different locations.



x = c(x1, x2, x3)
g = as.factor(rep(1:3, each=20))
boxplot(x ~ g, notch=T, col="skyblue2")


[Figure: notched boxplots of the three groups]



Notches in the sides of the boxes are nonparametric confidence intervals calibrated so that, when comparing two groups, notches that overlap suggest no significant difference. Here the notches of the first and last groups do not overlap, so those two groups may differ significantly.
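For reference, the notch half-width used by R's boxplot() is 1.58 * IQR / sqrt(n) (see ?boxplot.stats); a minimal sketch, reusing the x1 vector simulated above, that reproduces the notch limits for the first group:

# notch limits for group 1, as in boxplot.stats(): median +/- 1.58*IQR/sqrt(n)
med1 = median(x1); iqr1 = IQR(x1); n1 = length(x1)
c(med1 - 1.58*iqr1/sqrt(n1), med1 + 1.58*iqr1/sqrt(n1))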



Kruskal-Wallis test, with ad hoc Wilcoxon comparisons. A Kruskal-Wallis test detects differences among the
three groups.



kruskal.test(x ~ g)

Kruskal-Wallis rank sum test

data: x by g
Kruskal-Wallis chi-squared = 13.269,
df = 2, p-value = 0.001314


Ad hoc two-sample Wilcoxon tests show a significant difference between groups 1 and 3; the P-values for the other two comparisons are not small enough to satisfy the Bonferroni criterion against false discovery.



wilcox.test(x1,x2)$p.val
[1] 0.01674239
wilcox.test(x2,x3)$p.val
[1] 0.06343245
wilcox.test(x1,x3)$p.val
[1] 0.000667562
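A compact way to reproduce these comparisons in one call is base R's pairwise.wilcox.test(), which applies the multiplicity adjustment for you; a minimal sketch, reusing the x and g objects defined above:

# all pairwise Wilcoxon (Mann-Whitney) tests, with Bonferroni-adjusted p-values
pairwise.wilcox.test(x, g, p.adjust.method = "bonferroni")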


Welch one-factor ANOVA with ad hoc Welch t comparisons. Here we have moderately large samples and only moderate skewness, so a Welch one-way ANOVA, which does not assume equal variances, may be useful. The overall test gives a highly significant result; you could use Welch two-sample t tests for ad hoc comparisons.



oneway.test(x ~ g)

One-way analysis of means
(not assuming equal variances)

data: x and g
F = 6.905, num df = 2.000, denom df = 36.733,
p-value = 0.002847
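The ad hoc Welch comparisons mentioned above can be run with t.test(), whose default is the Welch (unequal-variance) form; a minimal sketch, reusing x, g, x1 and x3 from above, with pairwise.t.test() as an unpooled-variance alternative for all pairs:

# Welch two-sample t test for groups 1 and 3 (var.equal = FALSE is the default)
t.test(x1, x3)
# all pairwise t tests without a pooled SD, Bonferroni-adjusted
pairwise.t.test(x, g, pool.sd = FALSE, p.adjust.method = "bonferroni")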


Permutation test focused on differences among group means. Finally, a permutation test that uses the standard deviation of the three (permuted) group means as its 'metric' shows a significant difference. The test is nonparametric, and its P-value of about 0.0008 is similar to that of the Kruskal-Wallis test.



d.obs = sd(c(mean(x1), mean(x2), mean(x3)))
set.seed(915)
m = 10^5; d.prm = numeric(m)
for(i in 1:m) {
  x.prm = sample(x)    # scramble obs among groups
  a1 = mean(x.prm[1:20]); a2 = mean(x.prm[21:40]); a3 = mean(x.prm[41:60])
  d.prm[i] = sd(c(a1, a2, a3))
}
mean(d.prm >= d.obs)
[1] 0.00075
length(unique(d.prm))
[1] 77417


Of the 100,000 iterations, there were over 77,000 distinct values of d.prm. Their histogram is shown below, along with the observed value d.obs = 5.235.



[Figure: histogram of the permutation distribution of d.prm, with d.obs = 5.235 marked]







Comment – Dave (3 hours ago): Standard deviation (or variance) of the means is super clever! Do you have any references that explore this more? My first thought when I saw this question was to permute the group labels, but I was at a loss for what the test statistic would be with more than two groups (can’t take the difference between two means).










Comment – BruceET (2 hours ago): That may be a left-over idea of Bill Kruskal's from back in the 1960's. Good references for permutation tests seem scarce. I co-authored a fairly elementary paper on permutation tests, published in ASA's Journal of Statistics Education several years back; L. Eudey was first author. // Also, there are some interesting pages on this site. Dinner time now, but I'll google around later and report any successes.
Answer by Ben Bolker (score 2; answered 7 hours ago)
The Kruskal-Wallis test is a rank-based analogue of 1-way ANOVA, so it would be a reasonable approach to nonparametric testing of differences in location for >=2 groups.



HOWEVER: the "unequal variance" thing really messes you up. This answer discusses why unequal variances are problematic for Mann-Whitney tests (the 2-sample version of K-W/non-parametric version of the t-test), and the same problem applies to K-W, as discussed on Wikipedia (linked above):




If the researcher can make the assumptions of an identically shaped and scaled distribution for all groups, except for any difference in medians, then the null hypothesis is that the medians of all groups are equal, and the alternative hypothesis is that at least one population median of one group is different from the population median of at least one other group.




Loosely speaking, from my answer to the Mann-Whitney question:




If you are satisfied with showing that the distribution of the groups differs in some way from that of men, then you don't need the extra assumption [of equality of distribution except for location].







Comment – Ivan (6 hours ago): Thanks for your answer. Now let's say that I have the same situation: non-normal data with unequal variances and more than 2 independent groups. If I pick two groups (there are 12 in total), one of them a control sample, and I want to test whether the mean is different from the control group, would it be suitable to do a two-sample test such as Mann-Whitney? By the way, my data is continuous.











Comment – Ben Bolker (6 hours ago): See my linked answer. Mann-Whitney would be OK for two groups, but it's very hard to test differences in location (mean, median, etc.) non-parametrically if you're not willing to assume that the distributions in each group are identical up to a shift (change in location).












