
Why is the power of a hypothesis test a concern when we can bootstrap any representative sample to make n approach infinity?






























Why do we care about the power of a hypothesis test now that computers are fast enough to bootstrap or run a permutation test (both of which are also non-parametric) on almost anything?

Is power analysis irrelevant if I can run a bootstrap/permutation hypothesis test?

Can we make the "sample size" approach infinity with bootstrapping, so that power goes up as a result?










Tags: bootstrap, power-analysis, power






asked 8 hours ago by Germania























1 Answer































The amount of information relating to the hypotheses that you have is simply the information in the original data.

Resampling that information, whether bootstrapping, permutation testing or any other resampling, cannot add information that wasn't already there.

The point of bootstrapping is to estimate the sampling distribution of some quantity, in essence by using the sample cdf as an approximation of the population cdf from which it was drawn.

As normally understood, each bootstrap sample is the same size as the original sample (since taking a larger sample wouldn't tell you about the sampling variability at the sample size you have). What varies is the number of such bootstrap resamples.

Increasing the number of bootstrap samples gives a more "accurate" sense of that approximation, but it doesn't add any information that wasn't already there.
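To make this concrete, here is a minimal Python sketch (not from the original answer; the exponential sample, seed, and n = 30 are invented for illustration) of bootstrapping the sampling distribution of a sample mean. Each resample has the same size as the original sample; only the number of resamples B changes, and the standard-error estimate it yields barely moves:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=30)    # the one sample we actually have (n = 30)

def bootstrap_means(x, B):
    """Draw B resamples of size len(x) with replacement; return their means."""
    n = len(x)
    idx = rng.integers(0, n, size=(B, n))  # B x n matrix of resample indices
    return x[idx].mean(axis=1)

for B in (1000, 10000):
    means = bootstrap_means(x, B)
    # The spread of the bootstrap means (an estimate of the standard error of
    # the mean) is essentially the same for both B; a larger B only gives a
    # smoother picture of the same distribution.
    print(B, means.std(ddof=1))
```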



With a bootstrap test you can reduce the simulation error in a p-value calculation, but you can't shift the underlying p-value that you're approximating (which is just a function of the sample); your estimate of it is just less noisy.

For example, let's say I do a bootstrapped one-sample t-test (with a one-sided alternative) and look at what happens when we increase the number of bootstrap samples:

[Figure: histograms of the bootstrap distribution of the t-statistic, with 1000 and 10000 bootstrap resamples]

The blue line very close to 2 marks the t-statistic for our sample, which we can see is unusually high. The estimated p-value is similar in both cases, but the estimated standard error of that p-value is about 30% as large for the second one.
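A sketch of that comparison in Python (the data and seed here are invented, not those behind the figure). The Monte Carlo standard error of the estimated p-value is roughly sqrt(p(1 - p)/B), so going from B = 1000 to B = 10000 shrinks it by a factor of sqrt(10) ≈ 3.16, i.e. to about 30% of its size, consistent with the observation above:

```python
import numpy as np

rng = np.random.default_rng(1)
mu0 = 0.0
x = rng.normal(loc=0.5, size=25)              # invented data; n = 25

def t_stat(s, mu):
    return (s.mean() - mu) / (s.std(ddof=1) / np.sqrt(len(s)))

t_obs = t_stat(x, mu0)
x0 = x - x.mean() + mu0                       # shift the sample so H0 (mean = mu0) holds

n = len(x)
for B in (1000, 10000):
    res = x0[rng.integers(0, n, size=(B, n))] # B bootstrap resamples under H0
    t_boot = (res.mean(axis=1) - mu0) / (res.std(axis=1, ddof=1) / np.sqrt(n))
    p_hat = np.mean(t_boot >= t_obs)          # one-sided p-value estimate
    mc_se = np.sqrt(p_hat * (1 - p_hat) / B)  # Monte Carlo SE of that estimate
    print(B, p_hat, mc_se)
```

The underlying p-value being approximated is the same in both runs; only the noise around it shrinks.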



A qualitatively similar picture - noisier vs less noisy versions of identical underlying distribution shapes - would result from sampling the permutation distribution of some statistic as well.

We see that the information hasn't changed; the basic shape of the bootstrap distribution of the statistic is the same, it's just that we get a slightly less noisy idea of it (and hence a slightly less noisy estimate of the p-value).

--

To do a power analysis with a bootstrap or permutation test is a little tricky, since you have to specify things that you didn't need to assume in the test, such as the specific distribution shape of the population. You can evaluate power under some specific distributional assumption. Presumably you don't have a particularly good idea what that distribution is, or you'd have been able to use that information to help construct the test (e.g. by starting with something that would have good power for a distribution reflecting what you understand about it, then perhaps robustifying it somewhat). You can of course investigate a variety of possible candidate distributions and a variety of sequences of alternatives, depending on the circumstances.
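As a sketch of what such a simulation-based power analysis might look like (everything here - the candidate distribution, effect size, n, and alpha - is an assumption chosen for illustration, not part of the original answer): simulate many datasets from an assumed alternative, run the bootstrap test on each, and record the rejection rate.

```python
import numpy as np

rng = np.random.default_rng(2)

def boot_p_value(x, mu0, B=999):
    """One-sided bootstrap p-value for H0: mean = mu0 (resampling under H0)."""
    n = len(x)
    t_obs = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(n))
    x0 = x - x.mean() + mu0
    res = x0[rng.integers(0, n, size=(B, n))]
    t_boot = (res.mean(axis=1) - mu0) / (res.std(axis=1, ddof=1) / np.sqrt(n))
    return (1 + np.sum(t_boot >= t_obs)) / (B + 1)

# Assumed alternative: an exponential-shaped population shifted to have mean 0.5.
n_sim, n, alpha, rejections = 500, 30, 0.05, 0
for _ in range(n_sim):
    x = rng.exponential(1.0, size=n) - 1.0 + 0.5  # mean 0.5 under this alternative
    rejections += boot_p_value(x, mu0=0.0) <= alpha

print("estimated power:", rejections / n_sim)  # rejection rate over the simulations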























answered 7 hours ago by Glen_b, edited 1 hour ago by Dimitris Rizopoulos





























