
Why don't we make use of the t-distribution for constructing a confidence interval for a proportion?




To calculate a confidence interval (CI) for the mean when the population standard deviation (sd) is unknown, we estimate the sd from the sample and use the t-distribution.
Notably, with $\sigma$ known, $CI = \bar X \pm Z_{95\%}\,\sigma_{\bar X}$ where $\sigma_{\bar X} = \frac{\sigma}{\sqrt n}$. But because we do not know the population standard deviation, we use the approximation $CI = \bar X \pm t_{95\%}\,(se)$ where $se = \frac{s}{\sqrt n}$.



In contrast, for a population proportion, we approximate the CI as $CI = \hat p \pm Z_{95\%}\,(se)$ where $se = \sqrt{\frac{\hat p(1-\hat p)}{n}}$, provided $n\hat p \ge 15$ and $n(1-\hat p) \ge 15$.



My question is: why are we content with the standard normal distribution when the parameter is a population proportion?
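For concreteness, here is a minimal Python sketch of the two candidate intervals being compared (the helper function and the example counts are illustrative, not from the question):

```python
import numpy as np
from scipy import stats

def proportion_ci(x, n, conf=0.95, use_t=False):
    """Wald-style CI for a proportion: z critical value by default,
    optionally the t critical value with n-1 degrees of freedom."""
    p_hat = x / n
    se = np.sqrt(p_hat * (1 - p_hat) / n)
    if use_t:
        crit = stats.t.ppf(1 - (1 - conf) / 2, df=n - 1)
    else:
        crit = stats.norm.ppf(1 - (1 - conf) / 2)
    return p_hat - crit * se, p_hat + crit * se

print(proportion_ci(21, 35))              # usual z-interval
print(proportion_ci(21, 35, use_t=True))  # slightly wider t-interval
```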

















  • My intuition says this is because to get the standard error of the mean you have a second unknown, $\sigma$, which is estimated from the sample to complete the computation. The standard error for the proportion involves no additional unknowns. – Gavin Simpson (8 hours ago)










  • @GavinSimpson Sounds convincing. In fact, the reason we introduced the t-distribution is to compensate for the error introduced by the standard deviation approximation. – Abhijit (7 hours ago)






  • I find this less than convincing, in part because the $t$ distribution arises from the independence of the sample variance and sample mean in samples from a Normal distribution, whereas for samples from a Binomial distribution the two quantities are not independent. – whuber (7 hours ago)










  • @Abhijit Some textbooks do use a t-distribution as an approximation for this statistic (under certain conditions) - they seem to use n-1 as the d.f. While I have yet to see a good formal argument for it, the approximation does often seem to work fairly well; for the cases I have checked, it is typically slightly better than the normal approximation (but for that there is a solid asymptotic argument the t-approximation lacks). [Edit: my own checks were more-or-less similar to those whuber shows; the difference between the z and the t being far smaller than their discrepancy from the statistic] – Glen_b (4 hours ago)







  • It may be that there's a possible argument (perhaps based on early terms of a series expansion, for example) that could establish that the t should nearly always be expected to be better, or perhaps that it should be better under some specific conditions, but I haven't seen any argument of this kind. Personally I generally stick to the z, but I don't worry if someone uses a t. – Glen_b (4 hours ago)


















5 Answers

Both the standard Normal and Student t distributions are rather poor approximations to the distribution of



$$Z = \frac{\hat p - p}{\sqrt{\hat p(1-\hat p)/n}}$$



for small $n,$ so poor that the error dwarfs the differences between these two distributions.



Here is a comparison of all three distributions (omitting the cases where $\hat p$ or $1-\hat p$ are zero, where the ratio is undefined) for $n=10, p=1/2$:



Figure 1



The "empirical" distribution is that of $Z,$ which must be discrete because the estimates $hat p$ are limited to the finite set $0, 1/n, 2/n, ldots, n/n.$



The $t$ distribution appears to do a better job of approximation.



For $n=30$ and $p=1/2,$ you can see the difference between the standard Normal and Student t distributions is completely negligible:



Figure 2



Because the Student t distribution is more complicated than the standard Normal (it's really an entire family of distributions indexed by the "degrees of freedom," formerly requiring entire chapters of tables rather than a single page), the standard Normal is used for almost all approximations.






– whuber
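A minimal Python sketch of the kind of comparison described above, tabulating the exact distribution of $Z$ for $n=10, p=1/2$ against the normal and $t$ CDFs (the tabular output is an illustrative choice, not how the answer's figures were produced):

```python
import numpy as np
from scipy import stats

n, p = 10, 0.5
k = np.arange(1, n)                      # drop k = 0 and k = n, where Z is undefined
p_hat = k / n
z_vals = (p_hat - p) / np.sqrt(p_hat * (1 - p_hat) / n)
probs = stats.binom.pmf(k, n, p)
probs = probs / probs.sum()              # condition on 0 < p_hat < 1

# Exact CDF of Z at each support point vs. the two approximations
order = np.argsort(z_vals)
ecdf = np.cumsum(probs[order])
for z, F in zip(z_vals[order], ecdf):
    print(f"z = {z:+.2f}  exact = {F:.3f}  normal = {stats.norm.cdf(z):.3f}  t(n-1) = {stats.t.cdf(z, n - 1):.3f}")
```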





















The justification for using the t distribution in the confidence interval for a mean relies on the assumption that the underlying data follow a normal distribution, which leads to a chi-squared distribution when estimating the standard deviation, and thus $\frac{\bar x-\mu}{s/\sqrt n} \sim t_{n-1}$. This is an exact result under the assumption that the data are exactly normal, and it leads to confidence intervals with exactly 95% coverage when using $t$, and less than 95% coverage when using $z$.



In the case of Wald intervals for proportions, you only get asymptotic normality for $\frac{\hat p - p}{\sqrt{\hat p(1-\hat p)/n}}$ when n is large enough, which depends on p. The actual coverage probability of the procedure, since the underlying counts of successes are discrete, is sometimes below and sometimes above the nominal coverage probability of 95%, depending on the unknown $p$. So there is no theoretical justification for using $t$, and there is no guarantee, from a practical perspective, that using $t$ just to make the intervals wider would actually help achieve the nominal coverage of 95%.



The coverage probability can be calculated exactly, though it's fairly straightforward to simulate it. The following example shows the simulated coverage probability when n=35. It demonstrates that the coverage probability for the z-interval is generally slightly smaller than .95, while the coverage probability for the t-interval may generally be slightly closer to .95 on average, depending on your prior beliefs about the plausible values of p.



[Plots: simulated coverage probability of the z- and t-intervals for n = 35, across values of p]






– jsk
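A small Python simulation along the lines described above (the number of replications, the seed, and the grid of $p$ values are illustrative choices, not taken from the answer):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, conf, reps = 35, 0.95, 20000
z_crit = stats.norm.ppf(1 - (1 - conf) / 2)
t_crit = stats.t.ppf(1 - (1 - conf) / 2, df=n - 1)

for p in (0.1, 0.3, 0.5):
    x = rng.binomial(n, p, size=reps)
    p_hat = x / n
    se = np.sqrt(p_hat * (1 - p_hat) / n)
    cover_z = np.mean((p_hat - z_crit * se <= p) & (p <= p_hat + z_crit * se))
    cover_t = np.mean((p_hat - t_crit * se <= p) & (p <= p_hat + t_crit * se))
    print(f"p = {p}:  z-interval coverage = {cover_z:.3f},  t-interval coverage = {cover_t:.3f}")
```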





















Note your use of the $\sigma$ notation, which means the (known) population standard deviation.

The T-distribution, named after Student-T (pseudonym for William Gosset), was a famous solution to the problem: what happens when you don't know $\sigma$?

He noted that, when you cheat by estimating $\sigma$ from the sample as a plug-in estimator, your CIs are on average too narrow. Thus his T-distribution was born.

Conversely, if you use the T distribution when you actually do know $\sigma$, your confidence intervals will on average be too wide.






– AdamO
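A quick numeric check of that widening effect, comparing the 97.5th-percentile critical values (the sample sizes shown are illustrative, not from the answer):

```python
from scipy import stats

z_crit = stats.norm.ppf(0.975)
for n in (5, 10, 30, 100):
    t_crit = stats.t.ppf(0.975, df=n - 1)
    # The t critical value exceeds the z value, so t-based intervals are wider.
    print(f"n = {n:3d}:  t* = {t_crit:.3f}  vs  z* = {z_crit:.3f}")
```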








  • The pseudonym Gosset published under was "Student", not "Student-T". He also didn't actually come up with the standard t-distribution itself, nor was the statistic he dealt with actually the t-statistic (he did equivalent things, essentially dealing with a scaled t, but almost all the formalism we have now comes from Fisher's work). Fisher wrote the statistic the way we write it. Fisher called it the t. Fisher formally derived the distribution of the statistic (showing that Gosset's combination of algebra, intuition and accompanying simulation-argument about his version of the statistic was correct). – Glen_b (4 hours ago)







  • See Gosset's 1908 paper here: archive.org/details/biometrika619081909pear/page/n13 - there's also a nice readable pdf of the paper redone in LaTeX here. Note that this is out of copyright since it comes more than a few years before Steamboat Willie. – Glen_b (4 hours ago)




















Confidence interval for a normal mean. Suppose we have a random sample $X_1, X_2, \dots, X_n$ from a normal population. Let's look at the confidence interval for the normal mean $\mu$ in terms of hypothesis testing. If $\sigma$ is known, then a two-sided test of $H_0: \mu = \mu_0$ against $H_a: \mu \ne \mu_0$ is based on the statistic $Z = \frac{\bar X - \mu_0}{\sigma/\sqrt{n}}.$ When $H_0$ is true, $Z \sim \mathsf{Norm}(0,1),$ so we reject $H_0$ at the 5% level if $|Z| \ge 1.96.$

Then 'inverting the test', we say that a 95% CI for $\mu$ consists of the values $\mu_0$ that do not lead to rejection--the 'believable' values of $\mu.$ The CI is of the form $\bar X \pm 1.96\,\sigma/\sqrt{n},$ where $\pm 1.96$ cut probability 0.025 from the upper and lower tails, respectively, of the standard normal distribution.

If the population standard deviation $\sigma$ is unknown and estimated by the sample standard deviation $S,$ then we use the statistic $T = \frac{\bar X - \mu_0}{S/\sqrt{n}}.$ Before the early 1900's people supposed that $T$ is approximately standard normal for $n$ large enough and used $S$ as a substitute for the unknown $\sigma.$ There was debate about how large counts as large enough.

Eventually, it was known that $T \sim \mathsf{T}(\nu = n-1),$ Student's t distribution with $n-1$ degrees of freedom. Accordingly, when $\sigma$ is not known, we use $\bar X \pm t^* S/\sqrt{n},$ where $\pm t^*$ cut probability 0.025 from the upper and lower tails, respectively, of $\mathsf{T}(n-1).$

[Note: For $n > 30,$ people have noticed that for 95% CIs $t^* \approx 2 \approx 1.96.$ Thus the century-old idea that you can "get by" just substituting $S$ for $\sigma$ when $\sigma$ is unknown and $n > 30$ has persisted even in some recently-published books.]

Confidence interval for a binomial proportion. In the binomial case, suppose we have observed $X$ successes in a binomial experiment with $n$ independent trials. Then we use $\hat p = X/n$ as an estimate of the binomial success probability $p.$ In order to test $H_0: p = p_0$ vs $H_a: p \ne p_0,$ we use the statistic $Z = \frac{\hat p - p_0}{\sqrt{p_0(1-p_0)/n}}.$ Under $H_0,$ we know that $Z \stackrel{\text{aprx}}{\sim} \mathsf{Norm}(0,1).$ So we reject $H_0$ if $|Z| \ge 1.96.$

If we seek to invert this test to get a 95% CI for $p,$ we run into some difficulties. The 'easy' way to invert the test is to start by writing $\hat p \pm 1.96\sqrt{\frac{p(1-p)}{n}}.$ But this is useless because the value of $p$ under the square root is unknown. The traditional Wald CI assumes that, for sufficiently large $n,$ it is OK to substitute $\hat p$ for the unknown $p.$ Thus the Wald CI is of the form $\hat p \pm 1.96\sqrt{\frac{\hat p(1-\hat p)}{n}}.$ [Unfortunately, the Wald interval works well only if the number of trials $n$ is at least several hundred.]

More carefully, one can solve a somewhat messy quadratic inequality to 'invert the test'. The result is the Wilson interval. (See Wikipedia.) For a 95% confidence interval, a somewhat simplified version of this result comes from defining $\check n = n+4$ and $\check p = (X+2)/\check n$ and then computing the interval as $\check p \pm 1.96\sqrt{\frac{\check p(1-\check p)}{\check n}}.$ This style of binomial confidence interval is widely known as the Agresti-Coull interval; it has been widely advocated in elementary textbooks for about the last 20 years.

In summary, one way to look at your question is that CIs for the normal $\mu$ and the binomial $p$ can be viewed as inversions of tests.

(a) The t distribution provides an exact solution to the problem of needing to use $S$ for $\sigma$ when $\sigma$ is unknown.

(b) Using $\hat p$ for $p$ requires some care because the mean and variance of $\hat p$ both depend on $p.$ The Agresti-Coull CI provides one serviceable way to get CIs for binomial $p$ that are reasonably accurate even for moderately small $n.$






– BruceET
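A minimal Python sketch of the Wald and (simplified) Agresti-Coull intervals just described, at the 95% level (the function names and example counts are illustrative):

```python
import numpy as np
from scipy import stats

def wald_ci(x, n, conf=0.95):
    z = stats.norm.ppf(1 - (1 - conf) / 2)
    p_hat = x / n
    se = np.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

def agresti_coull_ci(x, n, conf=0.95):
    # "Add two successes and two failures" version described above
    z = stats.norm.ppf(1 - (1 - conf) / 2)
    n_check = n + 4
    p_check = (x + 2) / n_check
    se = np.sqrt(p_check * (1 - p_check) / n_check)
    return p_check - z * se, p_check + z * se

print(wald_ci(5, 20))
print(agresti_coull_ci(5, 20))
```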





















Both AdamO and jsk give great answers.

I will try to restate their points in plain English:

When the underlying distribution is normal, you know there are two parameters: the mean and the variance. The t-distribution offers a way to do inference on the mean without knowing the exact value of the variance. Instead of the actual variance, only the sample mean and sample variance are needed. Because it is an exact distribution, you know exactly what you are getting. In other words, the coverage probability is correct. The use of t simply reflects the desire to get around the unknown population variance.

When we do inference on a proportion, however, the underlying distribution is binomial. To get an exact interval, you need to look at Clopper-Pearson confidence intervals. The formula you provide is the formula for the Wald confidence interval. It uses the normal distribution to approximate the binomial distribution, because the normal distribution is the limiting distribution of the binomial distribution. In this case, because you are only approximating, the extra level of precision from using the t statistic is unnecessary; it all comes down to empirical performance. As suggested in BruceET's answer, the Agresti-Coull interval is the simple and standard formula nowadays for such an approximation.

My professor Dr Longnecker of Texas A&M did a simple simulation to illustrate how the different approximations work compared to the binomial-based CI.

[Figure: Comparison of Various 95% C.I.'s for Proportion]

Further information can be found in the article Interval Estimation for a Binomial Proportion in Statistical Science, Vol. 16, pp. 101-133, by L. Brown, T. Cai and A. DasGupta. Basically, the A-C CI is recommended for n >= 40.






– Qilin Wang (new contributor)
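For reference, a minimal Python sketch of the Clopper-Pearson (exact) interval mentioned above, via the usual beta-quantile form (the function name and example counts are illustrative):

```python
from scipy import stats

def clopper_pearson_ci(x, n, conf=0.95):
    """Exact (Clopper-Pearson) CI for a binomial proportion via beta quantiles."""
    alpha = 1 - conf
    lower = 0.0 if x == 0 else stats.beta.ppf(alpha / 2, x, n - x + 1)
    upper = 1.0 if x == n else stats.beta.ppf(1 - alpha / 2, x + 1, n - x)
    return lower, upper

print(clopper_pearson_ci(21, 35))
```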













          Your Answer








          StackExchange.ready(function()
          var channelOptions =
          tags: "".split(" "),
          id: "65"
          ;
          initTagRenderer("".split(" "), "".split(" "), channelOptions);

          StackExchange.using("externalEditor", function()
          // Have to fire editor after snippets, if snippets enabled
          if (StackExchange.settings.snippets.snippetsEnabled)
          StackExchange.using("snippets", function()
          createEditor();
          );

          else
          createEditor();

          );

          function createEditor()
          StackExchange.prepareEditor(
          heartbeatType: 'answer',
          autoActivateHeartbeat: false,
          convertImagesToLinks: false,
          noModals: true,
          showLowRepImageUploadWarning: true,
          reputationToPostImages: null,
          bindNavPrevention: true,
          postfix: "",
          imageUploader:
          brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
          contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
          allowUrls: true
          ,
          onDemand: true,
          discardSelector: ".discard-answer"
          ,immediatelyShowMarkdownHelp:true
          );



          );






          Abhijit is a new contributor. Be nice, and check out our Code of Conduct.









          draft saved

          draft discarded


















          StackExchange.ready(
          function ()
          StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fstats.stackexchange.com%2fquestions%2f411699%2fwhy-we-don-t-make-use-of-the-t-distribution-for-constructing-a-confidence-interv%23new-answer', 'question_page');

          );

          Post as a guest















          Required, but never shown

























          5 Answers
          5






          active

          oldest

          votes








          5 Answers
          5






          active

          oldest

          votes









          active

          oldest

          votes






          active

          oldest

          votes









          4












          $begingroup$

          Both the standard Normal and Student t distributions are rather poor approximations to the distribution of



          $$Z = frachat p - psqrthat p(1-hat p)/n$$



          for small $n,$ so poor that the error dwarfs the differences between these two distributions.



          Here is a comparison of all three distributions (omitting the cases where $hat p$ or $1-hat p$ are zero, where the ratio is undefined) for $n=10, p=1/2:$



          Figure 1



          The "empirical" distribution is that of $Z,$ which must be discrete because the estimates $hat p$ are limited to the finite set $0, 1/n, 2/n, ldots, n/n.$



          The $t$ distribution appears to do a better job of approximation.



          For $n=30$ and $p=1/2,$ you can see the difference between the standard Normal and Student t distributions is completely negligible:



          Figure 2



          Because the Student t distribution is more complicated than the standard Normal (it's really an entire family of distributions indexed by the "degrees of freedom," formerly requiring entire chapters of tables rather than a single page), the standard Normal is used for almost all approximations.






          share|cite|improve this answer









          $endgroup$

















            4












            $begingroup$

            Both the standard Normal and Student t distributions are rather poor approximations to the distribution of



            $$Z = frachat p - psqrthat p(1-hat p)/n$$



            for small $n,$ so poor that the error dwarfs the differences between these two distributions.



            Here is a comparison of all three distributions (omitting the cases where $hat p$ or $1-hat p$ are zero, where the ratio is undefined) for $n=10, p=1/2:$



            Figure 1



            The "empirical" distribution is that of $Z,$ which must be discrete because the estimates $hat p$ are limited to the finite set $0, 1/n, 2/n, ldots, n/n.$



            The $t$ distribution appears to do a better job of approximation.



            For $n=30$ and $p=1/2,$ you can see the difference between the standard Normal and Student t distributions is completely negligible:



            Figure 2



            Because the Student t distribution is more complicated than the standard Normal (it's really an entire family of distributions indexed by the "degrees of freedom," formerly requiring entire chapters of tables rather than a single page), the standard Normal is used for almost all approximations.






            share|cite|improve this answer









            $endgroup$















              4












              4








              4





              $begingroup$

              Both the standard Normal and Student t distributions are rather poor approximations to the distribution of



              $$Z = frachat p - psqrthat p(1-hat p)/n$$



              for small $n,$ so poor that the error dwarfs the differences between these two distributions.



              Here is a comparison of all three distributions (omitting the cases where $hat p$ or $1-hat p$ are zero, where the ratio is undefined) for $n=10, p=1/2:$



              Figure 1



              The "empirical" distribution is that of $Z,$ which must be discrete because the estimates $hat p$ are limited to the finite set $0, 1/n, 2/n, ldots, n/n.$



              The $t$ distribution appears to do a better job of approximation.



              For $n=30$ and $p=1/2,$ you can see the difference between the standard Normal and Student t distributions is completely negligible:



              Figure 2



              Because the Student t distribution is more complicated than the standard Normal (it's really an entire family of distributions indexed by the "degrees of freedom," formerly requiring entire chapters of tables rather than a single page), the standard Normal is used for almost all approximations.






              share|cite|improve this answer









              $endgroup$



              Both the standard Normal and Student t distributions are rather poor approximations to the distribution of



              $$Z = frachat p - psqrthat p(1-hat p)/n$$



              for small $n,$ so poor that the error dwarfs the differences between these two distributions.



              Here is a comparison of all three distributions (omitting the cases where $hat p$ or $1-hat p$ are zero, where the ratio is undefined) for $n=10, p=1/2:$



              Figure 1



              The "empirical" distribution is that of $Z,$ which must be discrete because the estimates $hat p$ are limited to the finite set $0, 1/n, 2/n, ldots, n/n.$



              The $t$ distribution appears to do a better job of approximation.



              For $n=30$ and $p=1/2,$ you can see the difference between the standard Normal and Student t distributions is completely negligible:



              Figure 2



              Because the Student t distribution is more complicated than the standard Normal (it's really an entire family of distributions indexed by the "degrees of freedom," formerly requiring entire chapters of tables rather than a single page), the standard Normal is used for almost all approximations.







              share|cite|improve this answer












              share|cite|improve this answer



              share|cite|improve this answer










              answered 6 hours ago









              whuberwhuber

              210k34461840




              210k34461840























                  2












                  $begingroup$

                  The justification for using the t distribution in the confidence interval for a mean relies on the assumption that the underlying data follows a normal distribution, which leads to a chi-squared distribution when estimating the standard deviation, and thus $fracbarx-mus/ sqrtn sim t_n-1$. This is an exact result under the assumption that the data are exactly normal that leads to confidence intervals with exactly 95% coverage when using $t$, and less than 95% coverage if using $z$.



                  In the case of Wald intervals for proportions, you only get asymptotic normality for $frachatp- psqrt hatp(1-hatp )/n$ when n is large enough, which depends on p. The actual coverage probability of the procedure, since the underlying counts of successes are discrete, is sometimes below and sometimes above the nominal coverage probability of 95% depending on the unknown $p$. So, there is no theoretical justification for using $t$, and there is no guarantee that from a practical perspective that using $t$ just to make the intervals wider would actually help achieve nominal coverage of 95%.



                  The coverage probability can be calculated exactly, though it's fairly straightforward to simulate it. The following example shows the simulated coverage probability when n=35. It demonstrates that the coverage probability for using the z-interval is generally slightly smaller than .95, while the coverage probability for the t-interval may generally be slighter closer to .95 on average depending on your prior beliefs on the plausible values of p.



                  enter image description here



                  enter image description here






                  share|cite|improve this answer











                  $endgroup$

















                    2












                    $begingroup$

                    The justification for using the t distribution in the confidence interval for a mean relies on the assumption that the underlying data follows a normal distribution, which leads to a chi-squared distribution when estimating the standard deviation, and thus $fracbarx-mus/ sqrtn sim t_n-1$. This is an exact result under the assumption that the data are exactly normal that leads to confidence intervals with exactly 95% coverage when using $t$, and less than 95% coverage if using $z$.



                    In the case of Wald intervals for proportions, you only get asymptotic normality for $frachatp- psqrt hatp(1-hatp )/n$ when n is large enough, which depends on p. The actual coverage probability of the procedure, since the underlying counts of successes are discrete, is sometimes below and sometimes above the nominal coverage probability of 95% depending on the unknown $p$. So, there is no theoretical justification for using $t$, and there is no guarantee that from a practical perspective that using $t$ just to make the intervals wider would actually help achieve nominal coverage of 95%.



                    The coverage probability can be calculated exactly, though it's fairly straightforward to simulate it. The following example shows the simulated coverage probability when n=35. It demonstrates that the coverage probability for using the z-interval is generally slightly smaller than .95, while the coverage probability for the t-interval may generally be slighter closer to .95 on average depending on your prior beliefs on the plausible values of p.



                    enter image description here



                    enter image description here






                    share|cite|improve this answer











                    $endgroup$















                      2












                      2








                      2





                      $begingroup$

                      The justification for using the t distribution in the confidence interval for a mean relies on the assumption that the underlying data follows a normal distribution, which leads to a chi-squared distribution when estimating the standard deviation, and thus $fracbarx-mus/ sqrtn sim t_n-1$. This is an exact result under the assumption that the data are exactly normal that leads to confidence intervals with exactly 95% coverage when using $t$, and less than 95% coverage if using $z$.



                      In the case of Wald intervals for proportions, you only get asymptotic normality for $frachatp- psqrt hatp(1-hatp )/n$ when n is large enough, which depends on p. The actual coverage probability of the procedure, since the underlying counts of successes are discrete, is sometimes below and sometimes above the nominal coverage probability of 95% depending on the unknown $p$. So, there is no theoretical justification for using $t$, and there is no guarantee that from a practical perspective that using $t$ just to make the intervals wider would actually help achieve nominal coverage of 95%.



                      The coverage probability can be calculated exactly, though it's fairly straightforward to simulate it. The following example shows the simulated coverage probability when n=35. It demonstrates that the coverage probability for using the z-interval is generally slightly smaller than .95, while the coverage probability for the t-interval may generally be slighter closer to .95 on average depending on your prior beliefs on the plausible values of p.



                      enter image description here



                      enter image description here






                      share|cite|improve this answer











                      $endgroup$



                      The justification for using the t distribution in the confidence interval for a mean relies on the assumption that the underlying data follows a normal distribution, which leads to a chi-squared distribution when estimating the standard deviation, and thus $fracbarx-mus/ sqrtn sim t_n-1$. This is an exact result under the assumption that the data are exactly normal that leads to confidence intervals with exactly 95% coverage when using $t$, and less than 95% coverage if using $z$.



                      In the case of Wald intervals for proportions, you only get asymptotic normality for $frachatp- psqrt hatp(1-hatp )/n$ when n is large enough, which depends on p. The actual coverage probability of the procedure, since the underlying counts of successes are discrete, is sometimes below and sometimes above the nominal coverage probability of 95% depending on the unknown $p$. So, there is no theoretical justification for using $t$, and there is no guarantee that from a practical perspective that using $t$ just to make the intervals wider would actually help achieve nominal coverage of 95%.



                      The coverage probability can be calculated exactly, though it's fairly straightforward to simulate it. The following example shows the simulated coverage probability when n=35. It demonstrates that the coverage probability for using the z-interval is generally slightly smaller than .95, while the coverage probability for the t-interval may generally be slighter closer to .95 on average depending on your prior beliefs on the plausible values of p.



                      enter image description here



                      enter image description here







                      share|cite|improve this answer














                      share|cite|improve this answer



                      share|cite|improve this answer








                      edited 3 hours ago

























                      answered 6 hours ago









                      jskjsk

                      2,1581718




                      2,1581718





















                          1












                          $begingroup$

                          Note your use of the $sigma$ notation which means the (known) population standard deviation.



                          The T-distribution, named after Student-T (pseudonym for William Gosset), was a famous solution to the problem: what happens when you don't know $sigma$?



                          He noted that, when you cheat by estimating $sigma$ from the sample as a plug-in estimator, your CIs are on average too narrow. Thus his T-distribution was born.



                          Conversely, if you use the T distribution when you actually do know $sigma$, your confidence intervals will on average be too wide.






                          share|cite|improve this answer









                          $endgroup$








                          • 1




                            $begingroup$
                            The pseudonym Gosset published under was "Student" not "Student-T". He also didn't actually come up with the standard t-distribution itself, nor was the statistic he dealt with actually the t-statistic (he did equivalent things, essentially dealing with a scaled t, but almost all the formalism we have now comes from Fisher's work). Fisher wrote the statistic the way we write it. Fisher called it the t. Fisher formally derived the distribution of the statistic (showing Gosset's combination of algebra, intuition and accompanying simulation-argument about his version of the statistic was correct)
                            $endgroup$
                            – Glen_b
                            4 hours ago







                          • 1




                            $begingroup$
                            See Gosset's 1908 paper here: archive.org/details/biometrika619081909pear/page/n13 - there's also a nice readable pdf of the paper redone in LaTeX here. Note that this is out of copyright since it comes more than a few years before Steamboat Willie.
                            $endgroup$
                            – Glen_b
                            4 hours ago
















                          1












                          $begingroup$

                          Note your use of the $sigma$ notation which means the (known) population standard deviation.



                          The T-distribution, named after Student-T (pseudonym for William Gosset), was a famous solution to the problem: what happens when you don't know $sigma$?



                          He noted that, when you cheat by estimating $sigma$ from the sample as a plug-in estimator, your CIs are on average too narrow. Thus his T-distribution was born.



                          Conversely, if you use the T distribution when you actually do know $sigma$, your confidence intervals will on average be too wide.






                          share|cite|improve this answer









                          $endgroup$








                          • 1




                            $begingroup$
                            The pseudonym Gosset published under was "Student" not "Student-T". He also didn't actually come up with the standard t-distribution itself, nor was the statistic he dealt with actually the t-statistic (he did equivalent things, essentially dealing with a scaled t, but almost all the formalism we have now comes from Fisher's work). Fisher wrote the statistic the way we write it. Fisher called it the t. Fisher formally derived the distribution of the statistic (showing Gosset's combination of algebra, intuition and accompanying simulation-argument about his version of the statistic was correct)
                            $endgroup$
                            – Glen_b
                            4 hours ago







                          • 1




                            $begingroup$
                            See Gosset's 1908 paper here: archive.org/details/biometrika619081909pear/page/n13 - there's also a nice readable pdf of the paper redone in LaTeX here. Note that this is out of copyright since it comes more than a few years before Steamboat Willie.
                            $endgroup$
                            – Glen_b
                            4 hours ago














                          1












                          1








                          1





                          $begingroup$

                          Note your use of the $sigma$ notation which means the (known) population standard deviation.



                          The T-distribution, named after Student-T (pseudonym for William Gosset), was a famous solution to the problem: what happens when you don't know $sigma$?



                          He noted that, when you cheat by estimating $sigma$ from the sample as a plug-in estimator, your CIs are on average too narrow. Thus his T-distribution was born.



                          Conversely, if you use the T distribution when you actually do know $sigma$, your confidence intervals will on average be too wide.






                          share|cite|improve this answer









                          $endgroup$



                          Note your use of the $sigma$ notation which means the (known) population standard deviation.



                          The T-distribution, named after Student-T (pseudonym for William Gosset), was a famous solution to the problem: what happens when you don't know $sigma$?



                          He noted that, when you cheat by estimating $sigma$ from the sample as a plug-in estimator, your CIs are on average too narrow. Thus his T-distribution was born.



                          Conversely, if you use the T distribution when you actually do know $sigma$, your confidence intervals will on average be too wide.







                          share|cite|improve this answer












                          share|cite|improve this answer



                          share|cite|improve this answer










                          answered 7 hours ago









                          AdamOAdamO

                          36.4k267147




                          36.4k267147







                          • 1




                            $begingroup$
                            The pseudonym Gosset published under was "Student" not "Student-T". He also didn't actually come up with the standard t-distribution itself, nor was the statistic he dealt with actually the t-statistic (he did equivalent things, essentially dealing with a scaled t, but almost all the formalism we have now comes from Fisher's work). Fisher wrote the statistic the way we write it. Fisher called it the t. Fisher formally derived the distribution of the statistic (showing Gosset's combination of algebra, intuition and accompanying simulation-argument about his version of the statistic was correct)
                            $endgroup$
                            – Glen_b
                            4 hours ago







                          • 1




                            $begingroup$
                            See Gosset's 1908 paper here: archive.org/details/biometrika619081909pear/page/n13 - there's also a nice readable pdf of the paper redone in LaTeX here. Note that this is out of copyright since it comes more than a few years before Steamboat Willie.
                            $endgroup$
                            – Glen_b
                            4 hours ago

























                          $begingroup$

Confidence interval for a normal mean. Suppose we have a random sample $X_1, X_2, \dots, X_n$ from a normal population. Let's look at the confidence interval for the normal mean $\mu$ in terms of hypothesis testing. If $\sigma$ is known, then a two-sided test of $H_0:\mu = \mu_0$ against $H_a:\mu \ne \mu_0$ is based on the statistic $Z = \frac{\bar X - \mu_0}{\sigma/\sqrt{n}}.$ When $H_0$ is true, $Z \sim \mathsf{Norm}(0,1),$ so we reject $H_0$ at the 5% level if $|Z| \ge 1.96.$



Then, 'inverting the test', we say that a 95% CI for $\mu$ consists of the values $\mu_0$ that do not lead to rejection--the 'believable' values of $\mu.$ The CI is of the form $\bar X \pm 1.96\,\sigma/\sqrt{n},$ where $\pm 1.96$ cut off probability 0.025 from the upper and lower tails, respectively, of the standard normal distribution.



If the population standard deviation $\sigma$ is unknown and estimated by the sample standard deviation $S,$ then we use the statistic $T=\frac{\bar X - \mu_0}{S/\sqrt{n}}.$ Before the early 1900s, people supposed that $T$ is approximately standard normal for $n$ large enough and simply used $S$ as a substitute for the unknown $\sigma.$ There was debate about how large counts as large enough.



Eventually, it was known that $T \sim \mathsf{T}(\nu = n-1),$ Student's t distribution with $n-1$ degrees of freedom. Accordingly, when $\sigma$ is not known, we use $\bar X \pm t^* S/\sqrt{n},$ where $\pm t^*$ cut off probability 0.025 from the upper and lower tails, respectively, of $\mathsf{T}(n-1).$



[Note: For $n > 30,$ people have noticed that for 95% CIs $t^* \approx 2 \approx 1.96.$ Thus the century-old idea that you can "get by" just substituting $S$ for $\sigma$ when $\sigma$ is unknown and $n > 30$ has persisted even in some recently-published books.]
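
As a rough illustration of the t-based interval above (this sketch is mine, not part of the original answer; it assumes numpy and scipy are available, and the helper name t_ci is hypothetical):

    import numpy as np
    from scipy import stats

    def t_ci(x, conf=0.95):
        """Two-sided t confidence interval for the mean of sample x."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        xbar = x.mean()
        s = x.std(ddof=1)                                   # sample standard deviation S
        t_star = stats.t.ppf(1 - (1 - conf) / 2, df=n - 1)  # cuts 0.025 per tail when conf=0.95
        half = t_star * s / np.sqrt(n)
        return xbar - half, xbar + half

    rng = np.random.default_rng(1)
    sample = rng.normal(loc=10, scale=3, size=20)  # hypothetical data
    print(t_ci(sample))                            # 95% CI for the mean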



Confidence interval for a binomial proportion. In the binomial case, suppose we have observed $X$ successes in a binomial experiment with $n$ independent trials. Then we use $\hat p = X/n$ as an estimate of the binomial success probability $p.$
In order to test $H_0:p = p_0$ vs $H_a:p \ne p_0,$ we use the statistic $Z = \frac{\hat p - p_0}{\sqrt{p_0(1-p_0)/n}}.$ Under $H_0,$ we know that $Z \stackrel{\text{aprx}}{\sim} \mathsf{Norm}(0,1).$ So we reject $H_0$ if $|Z| \ge 1.96.$



If we seek to invert this test to get a 95% CI for $p,$ we run into some difficulties. The 'easy' way to invert the test is to start by writing $\hat p \pm 1.96\sqrt{\frac{p(1-p)}{n}}.$ But this is useless because the value of $p$ under the square root is unknown. The traditional Wald CI assumes that, for sufficiently large $n,$ it is OK to substitute $\hat p$ for the unknown $p.$ Thus the Wald CI is of the form $\hat p \pm 1.96\sqrt{\frac{\hat p(1-\hat p)}{n}}.$ [Unfortunately, the Wald interval works well only if the number of trials $n$ is at least several hundred.]
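
A minimal sketch of the Wald interval just described (the helper name wald_ci is mine, for illustration only):

    import math

    def wald_ci(x, n, z=1.96):
        """95% Wald interval: substitute p-hat for the unknown p."""
        p_hat = x / n
        half = z * math.sqrt(p_hat * (1 - p_hat) / n)
        return p_hat - half, p_hat + half

    print(wald_ci(7, 10))      # small n: wide and poorly calibrated
    print(wald_ci(700, 1000))  # large n: behaves reasonably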



More carefully, one can solve a somewhat messy quadratic inequality to 'invert the test'. The result is the Wilson interval. (See Wikipedia.) For a 95% confidence interval, a somewhat simplified version of this result comes from defining $\check n = n+4$ and $\check p = (X+2)/\check n$ and then computing the interval as $\check p \pm 1.96\sqrt{\frac{\check p(1-\check p)}{\check n}}.$
This style of binomial confidence interval is widely known as the Agresti-Coull interval; it has been widely advocated in elementary textbooks for about the last 20 years.
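
And a matching sketch of the simplified Agresti-Coull interval, under the same caveat that the code and the helper name are mine rather than part of the original answer:

    import math

    def agresti_coull_ci(x, n, z=1.96):
        """Simplified Agresti-Coull: add 2 successes and 2 failures."""
        n_check = n + 4
        p_check = (x + 2) / n_check
        half = z * math.sqrt(p_check * (1 - p_check) / n_check)
        return p_check - half, p_check + half

    print(agresti_coull_ci(7, 10))  # compare with wald_ci(7, 10) above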



In summary, one way to look at your question is that CIs for the normal $\mu$ and the binomial $p$ can be viewed as inversions of tests.



(a) The t distribution provides an exact solution to the problem of needing to use $S$ for $\sigma$ when $\sigma$ is unknown.



(b) Using $\hat p$ for $p$ requires some care because the mean and variance of $\hat p$ both depend on $p.$ The Agresti-Coull CI provides one serviceable way to get CIs for binomial $p$ that are reasonably accurate even for moderately small $n.$






                          $endgroup$

















                              answered 6 hours ago









BruceET

































                                  $begingroup$

Both AdamO and jsk give great answers.

I will try to restate their points in plain English:



When the underlying distribution is normal, you know there are two parameters: the mean and the variance. The t distribution offers a way to do inference on the mean without knowing the exact value of the variance; only the sample mean and sample variance are needed. Because the t distribution is exact, you know exactly what you are getting; in other words, the coverage probability is correct. Using t simply reflects the need to get around the unknown population variance.



When we do inference on a proportion, however, the underlying distribution is binomial. To get an exact interval, you need to look at the Clopper-Pearson confidence interval. The formula you provide is the Wald confidence interval. It uses the normal distribution to approximate the binomial distribution, because the normal distribution is the limiting distribution of the binomial. In this case, because you are only approximating, the extra level of precision from using a t statistic becomes unnecessary; it all comes down to empirical performance. As suggested in BruceET's answer, the Agresti-Coull interval is the simple, standard formula nowadays for such an approximation.
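
For concreteness, here is a minimal sketch of the Clopper-Pearson interval mentioned above, using the usual beta-quantile form; it assumes scipy is available and is my illustration (the helper name is mine), not part of the original answer:

    from scipy import stats

    def clopper_pearson_ci(x, n, conf=0.95):
        """Exact (Clopper-Pearson) interval via beta quantiles."""
        alpha = 1 - conf
        lower = 0.0 if x == 0 else stats.beta.ppf(alpha / 2, x, n - x + 1)
        upper = 1.0 if x == n else stats.beta.ppf(1 - alpha / 2, x + 1, n - x)
        return lower, upper

    print(clopper_pearson_ci(7, 10))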



My professor, Dr Longnecker of Texas A&M, ran a simple simulation to illustrate how the different approximations perform compared to the binomial-based CI.



                                  Comparison of Various 95% C.I.’s for Proportion



Further information can be found in the article "Interval Estimation for a Binomial Proportion" in Statistical Science, Vol. 16, pp. 101-133, by L. Brown, T. Cai and A. DasGupta. Basically, the Agresti-Coull CI is recommended for $n \ge 40.$
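
A minimal sketch of this kind of coverage simulation (not Dr Longnecker's actual code; the parameters and helper names are mine), comparing the Wald and Agresti-Coull intervals:

    import numpy as np

    def wald(x, n, z=1.96):
        p = x / n
        h = z * np.sqrt(p * (1 - p) / n)
        return p - h, p + h

    def agresti_coull(x, n, z=1.96):
        nc = n + 4
        pc = (x + 2) / nc
        h = z * np.sqrt(pc * (1 - pc) / nc)
        return pc - h, pc + h

    def coverage(ci_fn, p, n, reps=20_000, seed=0):
        """Fraction of simulated intervals that contain the true p."""
        rng = np.random.default_rng(seed)
        xs = rng.binomial(n, p, size=reps)
        lo, hi = np.array([ci_fn(x, n) for x in xs]).T
        return np.mean((lo <= p) & (p <= hi))

    for p in (0.1, 0.5):
        print(p, coverage(wald, p, 40), coverage(agresti_coull, p, 40))
    # Wald coverage typically falls short of 0.95 for small n or extreme p,
    # while Agresti-Coull stays much closer to the nominal level.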



[Figure: comparison of various 95% C.I.'s for a proportion, from the simulation linked above]






                                  $endgroup$

















                                      answered 3 hours ago









Qilin Wang






















                                          Abhijit is a new contributor. Be nice, and check out our Code of Conduct.









                                          draft saved

                                          draft discarded


















                                          Abhijit is a new contributor. Be nice, and check out our Code of Conduct.












                                          Abhijit is a new contributor. Be nice, and check out our Code of Conduct.











                                          Abhijit is a new contributor. Be nice, and check out our Code of Conduct.














                                          Thanks for contributing an answer to Cross Validated!


                                          • Please be sure to answer the question. Provide details and share your research!

                                          But avoid


                                          • Asking for help, clarification, or responding to other answers.

                                          • Making statements based on opinion; back them up with references or personal experience.

                                          Use MathJax to format equations. MathJax reference.


                                          To learn more, see our tips on writing great answers.




                                          draft saved


                                          draft discarded














                                          StackExchange.ready(
                                          function ()
                                          StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fstats.stackexchange.com%2fquestions%2f411699%2fwhy-we-don-t-make-use-of-the-t-distribution-for-constructing-a-confidence-interv%23new-answer', 'question_page');

                                          );

                                          Post as a guest















                                          Required, but never shown





















































                                          Required, but never shown














                                          Required, but never shown












                                          Required, but never shown







                                          Required, but never shown

































                                          Required, but never shown














                                          Required, but never shown












                                          Required, but never shown







                                          Required, but never shown







                                          Popular posts from this blog

                                          19. јануар Садржај Догађаји Рођења Смрти Празници и дани сећања Види још Референце Мени за навигацијуу

                                          Israel Cuprins Etimologie | Istorie | Geografie | Politică | Demografie | Educație | Economie | Cultură | Note explicative | Note bibliografice | Bibliografie | Legături externe | Meniu de navigaresite web oficialfacebooktweeterGoogle+Instagramcanal YouTubeInstagramtextmodificaremodificarewww.technion.ac.ilnew.huji.ac.ilwww.weizmann.ac.ilwww1.biu.ac.ilenglish.tau.ac.ilwww.haifa.ac.ilin.bgu.ac.ilwww.openu.ac.ilwww.ariel.ac.ilCIA FactbookHarta Israelului"Negotiating Jerusalem," Palestine–Israel JournalThe Schizoid Nature of Modern Hebrew: A Slavic Language in Search of a Semitic Past„Arabic in Israel: an official language and a cultural bridge”„Latest Population Statistics for Israel”„Israel Population”„Tables”„Report for Selected Countries and Subjects”Human Development Report 2016: Human Development for Everyone„Distribution of family income - Gini index”The World FactbookJerusalem Law„Israel”„Israel”„Zionist Leaders: David Ben-Gurion 1886–1973”„The status of Jerusalem”„Analysis: Kadima's big plans”„Israel's Hard-Learned Lessons”„The Legacy of Undefined Borders, Tel Aviv Notes No. 40, 5 iunie 2002”„Israel Journal: A Land Without Borders”„Population”„Israel closes decade with population of 7.5 million”Time Series-DataBank„Selected Statistics on Jerusalem Day 2007 (Hebrew)”Golan belongs to Syria, Druze protestGlobal Survey 2006: Middle East Progress Amid Global Gains in FreedomWHO: Life expectancy in Israel among highest in the worldInternational Monetary Fund, World Economic Outlook Database, April 2011: Nominal GDP list of countries. Data for the year 2010.„Israel's accession to the OECD”Popular Opinion„On the Move”Hosea 12:5„Walking the Bible Timeline”„Palestine: History”„Return to Zion”An invention called 'the Jewish people' – Haaretz – Israel NewsoriginalJewish and Non-Jewish Population of Palestine-Israel (1517–2004)ImmigrationJewishvirtuallibrary.orgChapter One: The Heralders of Zionism„The birth of modern Israel: A scrap of paper that changed history”„League of Nations: The Mandate for Palestine, 24 iulie 1922”The Population of Palestine Prior to 1948originalBackground Paper No. 
47 (ST/DPI/SER.A/47)History: Foreign DominationTwo Hundred and Seventh Plenary Meeting„Israel (Labor Zionism)”Population, by Religion and Population GroupThe Suez CrisisAdolf EichmannJustice Ministry Reply to Amnesty International Report„The Interregnum”Israel Ministry of Foreign Affairs – The Palestinian National Covenant- July 1968Research on terrorism: trends, achievements & failuresThe Routledge Atlas of the Arab–Israeli conflict: The Complete History of the Struggle and the Efforts to Resolve It"George Habash, Palestinian Terrorism Tactician, Dies at 82."„1973: Arab states attack Israeli forces”Agranat Commission„Has Israel Annexed East Jerusalem?”original„After 4 Years, Intifada Still Smolders”From the End of the Cold War to 2001originalThe Oslo Accords, 1993Israel-PLO Recognition – Exchange of Letters between PM Rabin and Chairman Arafat – Sept 9- 1993Foundation for Middle East PeaceSources of Population Growth: Total Israeli Population and Settler Population, 1991–2003original„Israel marks Rabin assassination”The Wye River Memorandumoriginal„West Bank barrier route disputed, Israeli missile kills 2”"Permanent Ceasefire to Be Based on Creation Of Buffer Zone Free of Armed Personnel Other than UN, Lebanese Forces"„Hezbollah kills 8 soldiers, kidnaps two in offensive on northern border”„Olmert confirms peace talks with Syria”„Battleground Gaza: Israeli ground forces invade the strip”„IDF begins Gaza troop withdrawal, hours after ending 3-week offensive”„THE LAND: Geography and Climate”„Area of districts, sub-districts, natural regions and lakes”„Israel - Geography”„Makhteshim Country”Israel and the Palestinian Territories„Makhtesh Ramon”„The Living Dead Sea”„Temperatures reach record high in Pakistan”„Climate Extremes In Israel”Israel in figures„Deuteronom”„JNF: 240 million trees planted since 1901”„Vegetation of Israel and Neighboring Countries”Environmental Law in Israel„Executive branch”„Israel's election process explained”„The Electoral System in Israel”„Constitution for Israel”„All 120 incoming Knesset members”„Statul ISRAEL”„The Judiciary: The Court System”„Israel's high court unique in region”„Israel and the International Criminal Court: A Legal Battlefield”„Localities and population, by population group, district, sub-district and natural region”„Israel: Districts, Major Cities, Urban Localities & Metropolitan Areas”„Israel-Egypt Relations: Background & Overview of Peace Treaty”„Solana to Haaretz: New Rules of War Needed for Age of Terror”„Israel's Announcement Regarding Settlements”„United Nations Security Council Resolution 497”„Security Council resolution 478 (1980) on the status of Jerusalem”„Arabs will ask U.N. 
to seek razing of Israeli wall”„Olmert: Willing to trade land for peace”„Mapping Peace between Syria and Israel”„Egypt: Israel must accept the land-for-peace formula”„Israel: Age structure from 2005 to 2015”„Global, regional, and national disability-adjusted life years (DALYs) for 306 diseases and injuries and healthy life expectancy (HALE) for 188 countries, 1990–2013: quantifying the epidemiological transition”10.1016/S0140-6736(15)61340-X„World Health Statistics 2014”„Life expectancy for Israeli men world's 4th highest”„Family Structure and Well-Being Across Israel's Diverse Population”„Fertility among Jewish and Muslim Women in Israel, by Level of Religiosity, 1979-2009”„Israel leaders in birth rate, but poverty major challenge”„Ethnic Groups”„Israel's population: Over 8.5 million”„Israel - Ethnic groups”„Jews, by country of origin and age”„Minority Communities in Israel: Background & Overview”„Israel”„Language in Israel”„Selected Data from the 2011 Social Survey on Mastery of the Hebrew Language and Usage of Languages”„Religions”„5 facts about Israeli Druze, a unique religious and ethnic group”„Israël”Israel Country Study Guide„Haredi city in Negev – blessing or curse?”„New town Harish harbors hopes of being more than another Pleasantville”„List of localities, in alphabetical order”„Muncitorii români, doriți în Israel”„Prietenia româno-israeliană la nevoie se cunoaște”„The Higher Education System in Israel”„Middle East”„Academic Ranking of World Universities 2016”„Israel”„Israel”„Jewish Nobel Prize Winners”„All Nobel Prizes in Literature”„All Nobel Peace Prizes”„All Prizes in Economic Sciences”„All Nobel Prizes in Chemistry”„List of Fields Medallists”„Sakharov Prize”„Țara care și-a sfidat "destinul" și se bate umăr la umăr cu Silicon Valley”„Apple's R&D center in Israel grew to about 800 employees”„Tim Cook: Apple's Herzliya R&D center second-largest in world”„Lecții de economie de la Israel”„Land use”Israel Investment and Business GuideA Country Study: IsraelCentral Bureau of StatisticsFlorin Diaconu, „Kadima: Flexibilitate și pragmatism, dar nici un compromis în chestiuni vitale", în Revista Institutului Diplomatic Român, anul I, numărul I, semestrul I, 2006, pp. 71-72Florin Diaconu, „Likud: Dreapta israeliană constant opusă retrocedării teritoriilor cureite prin luptă în 1967", în Revista Institutului Diplomatic Român, anul I, numărul I, semestrul I, 2006, pp. 73-74MassadaIsraelul a crescut in 50 de ani cât alte state intr-un mileniuIsrael Government PortalIsraelIsraelIsraelmmmmmXX451232cb118646298(data)4027808-634110000 0004 0372 0767n7900328503691455-bb46-37e3-91d2-cb064a35ffcc1003570400564274ge1294033523775214929302638955X146498911146498911

                                          Кастелфранко ди Сопра Становништво Референце Спољашње везе Мени за навигацију43°37′18″ СГШ; 11°33′32″ ИГД / 43.62156° СГШ; 11.55885° ИГД / 43.62156; 11.5588543°37′18″ СГШ; 11°33′32″ ИГД / 43.62156° СГШ; 11.55885° ИГД / 43.62156; 11.558853179688„The GeoNames geographical database”„Istituto Nazionale di Statistica”проширитиууWorldCat156923403n850174324558639-1cb14643287r(подаци)