Why do we need an estimator to be consistent?


I already understand the mathematical definition of a consistent estimator. Correct me if I'm wrong:

$W_n$ is a consistent estimator for $\theta$ if $\forall \epsilon > 0$

$$\lim_{n \to \infty} P(|W_n - \theta| > \epsilon) = 0, \quad \forall \theta \in \Theta,$$

where $\Theta$ is the parameter space. But I want to understand the need for an estimator to be consistent. Why is an estimator that is not consistent bad? Could you give me some examples?

I accept simulations in R or Python.
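To fix ideas, here is a minimal sketch of what the definition says (an illustration only, assuming $W_n$ is the sample mean of $n$ i.i.d. Normal($\theta$, 1) observations, which is a consistent estimator of $\theta$): the exceedance probability $P(|W_n - \theta| > \epsilon)$ should shrink toward 0 as $n$ grows.

set.seed(123)
theta = 5; eps = 0.1
for (n in c(10, 100, 1000, 10000)) {
  # 2000 Monte Carlo replications of W_n = mean of n observations
  w = replicate(2000, mean(rnorm(n, mean = theta)))
  cat("n =", n, " estimated P(|W_n - theta| > eps):",
      mean(abs(w - theta) > eps), "\n")  # decreases toward 0
}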










estimation consistency

asked 8 hours ago
Fam
  • An estimator that is not consistent is not always a bad one. Take, for instance, an inconsistent but unbiased estimator. See Wikipedia's article on consistent estimators (en.wikipedia.org/wiki/Consistent_estimator), particularly the section on bias versus consistency. – compbiostats, 7 hours ago











  • Consistency is, roughly speaking, optimal asymptotic behaviour of an estimator: we choose an estimator that approaches the true value of $\theta$ in the long run. Since this is just convergence in probability, this thread might be helpful: stats.stackexchange.com/questions/134701/…. – StubbornAtom, 7 hours ago

















3 Answers
Consider $n = 10,000$ observations from the standard Cauchy distribution,
which is the same as Student's t distribution with 1 degree of freedom.
The tails of this distribution are sufficiently heavy that it has no
mean; the distribution is centered at its median $\eta = 0.$

A sequence of sample means $A_j = \frac{1}{j} \sum_{i=1}^{j} X_i$ is not consistent
for the center of the Cauchy distribution. Roughly speaking, the difficulty
is that very extreme observations $X_i$ (positive or negative) occur with
sufficient regularity that there is no chance for $A_j$ to converge to $\eta = 0.$ (The $A_j$ are not just slow to converge, they don't ever converge. The distribution of $A_j$ is again standard Cauchy [proof].)
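The bracketed [proof] is one line with characteristic functions (a standard fact, sketched here for completeness): the standard Cauchy has characteristic function $\varphi_X(t) = e^{-|t|},$ and the $X_i$ are independent, so

$$\varphi_{A_j}(t) = \prod_{i=1}^{j} \varphi_X(t/j) = \left(e^{-|t|/j}\right)^{j} = e^{-|t|},$$

which is again the standard Cauchy characteristic function; hence every $A_j$ has exactly the standard Cauchy distribution.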



By contrast, at any one step in a continuing sampling process, about half
of the observations $X_i$ will lie on either side of $\eta,$ so that the sequence $H_j$ of sample medians does converge to $\eta.$



This lack of convergence of $A_j$ and convergence of $H_j$ is illustrated
by the following simulation.



set.seed(2019)                      # for reproducibility
n = 10000; x = rt(n, 1); j = 1:n    # t with 1 df = standard Cauchy
a = cumsum(x)/j                     # running sample means A_j
h = numeric(n)                      # running sample medians H_j
for (i in 1:n) {
  h[i] = median(x[1:i])
}
par(mfrow=c(1,2))
plot(j, a, type="l", ylim=c(-5,5), lwd=2,
     main="Trace of Sample Mean")
abline(h=0, col="green2")
k = j[abs(x) > 1000]                # indices of extreme observations
abline(v=k, col="red", lty="dotted")
plot(j, h, type="l", ylim=c(-5,5), lwd=2,
     main="Trace of Sample Median")
abline(h=0, col="green2")
par(mfrow=c(1,1))


[Figure: left panel "Trace of Sample Mean", right panel "Trace of Sample Median". The mean trace jumps at the red dotted lines marking extreme observations; the median trace settles near 0.]



Here is a list of steps at which $|X_i| > 1000.$ You can see the effect
of some of these extreme observations on the running averages in the plot at left (at the vertical red dotted lines).



k = j[abs(x) > 1000]
rbind(k, round(x[k]))
      [,1] [,2] [,3]  [,4] [,5]  [,6]   [,7]  [,8]
k      291  898 1293  1602 2547  5472   6079  9158
     -5440 2502 5421 -2231 1635 -2644 -10194 -3137


Consistency is important in estimation: in sampling from a Cauchy population, the sample mean of $n = 10,000$ observations is no better for estimating the center $\eta$ than just one observation. By contrast, the consistent sample median converges to $\eta,$ so larger samples produce better estimates.






answered 6 hours ago (edited 4 hours ago)
BruceET
If the estimator is not consistent, it won't converge to the true value in probability. In other words, there is always some probability that your estimator differs from the true value, no matter how many data points you have. This is actually bad, because even if you collect an immense amount of data, your estimate will always have a positive probability of being some $\epsilon > 0$ away from the true value. Practically, you can think of this situation as using an estimator of a quantity such that even surveying the whole population, rather than a small sample of it, won't help you.
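As a concrete illustration (a minimal sketch, assuming a Normal($\theta$, 1) population and the deliberately wasteful toy estimator $W_n = X_1$, which keeps only the first observation): $W_n$ is unbiased but not consistent, and the exceedance probability stays flat no matter how large $n$ gets.

set.seed(1)
theta = 5; eps = 0.5
for (n in c(10, 100, 1000)) {
  # W_n = X_1: the first observation of each simulated sample of size n
  w = replicate(2000, rnorm(n, mean = theta)[1])
  cat("n =", n, " estimated P(|W_n - theta| > eps):",
      mean(abs(w - theta) > eps), "\n")  # stays near 0.62 for every n
}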






answered 7 hours ago
gunes
A really simple example of why it's important to think about consistency, one which I don't think gets enough attention, is that of an over-simplified model.

As a theoretical example, suppose you fit a linear regression model to some data in which the true effects are actually non-linear. Then your predictions cannot be consistent for the true mean for all combinations of covariates, while a more flexible model may be able to achieve this.
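A minimal sketch of this phenomenon (a toy example of my own, with a hypothetical quadratic true mean): the linear fit's prediction at $x = 0$ settles down as $n$ grows, but to $E[x^2] = 1/3$ rather than the true mean $0$, so no amount of data fixes it.

set.seed(42)
for (n in c(100, 10000, 1000000)) {
  x = runif(n, -1, 1)
  y = x^2 + rnorm(n, sd = 0.1)     # true regression function is x^2
  fit = lm(y ~ x)                  # misspecified linear model
  p0 = predict(fit, newdata = data.frame(x = 0))
  cat("n =", n, " prediction at x = 0:", round(p0, 3),
      "(true mean there is 0)\n")  # converges to E[x^2] = 1/3, not 0
}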






answered 3 hours ago
Cliff AB