


Family-wise Type I error OLS regression


Why is it advised to control the Type I error rate (e.g. with Tukey's HSD) when conducting several pairwise comparisons, but not when assessing the significance of several coefficient estimates in, say, OLS regression?










regression multiple-comparisons type-i-and-ii-errors






asked 8 hours ago
Chernoff
153 · 8 bronze badges




















1 Answer






If your goal is confirmation through hypothesis tests, you should correct for the family-wise error rate (FWER) or the false discovery rate (FDR), regardless of the type of model used. If you have a source for the claim to the contrary, please include it in your question.
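To see why the number of tests matters: for $m$ independent tests each conducted at level $\alpha$, the family-wise error rate is $1 - (1 - \alpha)^m$, which for $m = 10$ tests at $\alpha = 0.05$ is already about $0.40$.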



          However, confirmation isn't the only reason someone would use linear regression. You may want to simply predict the outcome variable, or you might just be interested in the magnitude of the effects that the explanatory variables have on the outcome. Personally, I am rarely interested in the $p$-values of my linear models.



          Even if you are interested in the $p$-values of a linear regression, which $p$-values you should correct for multiple testing depends on what you are doing, for example:



          • Whether the intercept differs significantly from $0$ is rarely interesting. Including this $p$-value in the correction can inflate the type II error rate by increasing the number of tests, or even increase the type I error rate by including a nonsense significant result (in case of FDR correction);

          • If your research question revolves around the effect of a single explanatory variable, but you want to include potential confounders, there is no need to even look at those other variables' $p$-values;

• Similarly, if your research question concerns the presence of a (significant) interaction effect, the significance of the marginal effects may be irrelevant.

          For this reason, there is no standard multiple testing correction applied to most of the default summaries of linear models, but you can of course apply your own after deciding which $p$-values matter.
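As a minimal sketch of what "applying your own" correction might look like, assuming Python with statsmodels (the data and variable names here are made up purely for illustration):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

# Illustrative data only; in practice df would be your own data frame.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(100, 4)), columns=["y", "x1", "x2", "x3"])

fit = smf.ols("y ~ x1 + x2 + x3", data=df).fit()

# Keep only the p-values that answer the research question:
# here we drop the intercept (confounders could be dropped the same way).
pvals = fit.pvalues.drop("Intercept")

# Holm controls the FWER; method="fdr_bh" would control the FDR instead.
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
print(pd.DataFrame({"p_raw": pvals, "p_holm": p_adj, "reject": reject}))
```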



Contrast this with Tukey's honest significant difference: you are comparing every group with every group, which for $k$ groups means $k(k-1)/2$ tests, the maximum number of pairwise hypothesis tests you can perform. Without some standard correction this carries a high risk of poor inference. Moreover, the procedure exists exclusively to perform comparisons, whereas linear regression in general can be used for all kinds of purposes.
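For comparison, a minimal sketch of such an all-pairs test, again assuming statsmodels and simulated data:

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Simulated example: three groups of 30 observations each.
rng = np.random.default_rng(1)
values = np.concatenate([rng.normal(loc=m, size=30) for m in (0.0, 0.2, 1.0)])
groups = np.repeat(["a", "b", "c"], 30)

# All k(k-1)/2 = 3 pairwise comparisons, with Tukey's FWER control built in.
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```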






edited 4 hours ago
answered 8 hours ago
Frans Rodenburg
4,468 · 1 gold badge · 5 silver badges · 30 bronze badges


























