
Unbiased estimator of exponential of measure of a set?


Suppose we have a (measurable and suitably well-behaved) set $S\subseteq B\subset\mathbb{R}^n$, where $B$ is compact. Moreover, suppose we can draw samples from the uniform distribution over $B$ with respect to the Lebesgue measure $\lambda(\cdot)$, and that we know the measure $\lambda(B)$. For example, perhaps $B$ is a box $[-c,c]^n$ containing $S$.

For fixed $\alpha\in\mathbb{R}$, is there a simple unbiased way to estimate $e^{-\alpha\lambda(S)}$ by uniformly sampling points in $B$ and checking whether they lie inside or outside of $S$?

As an example of something that doesn't quite work, suppose we sample $k$ points $p_1,\ldots,p_k\sim\mathrm{Uniform}(B)$. Then we can use the Monte Carlo estimate
$$\lambda(S)\approx\hat\lambda:=\frac{\#\{p_i\in S\}}{k}\,\lambda(B).$$
But, while $\hat\lambda$ is an unbiased estimator of $\lambda(S)$, I don't think $e^{-\alpha\hat\lambda}$ is an unbiased estimator of $e^{-\alpha\lambda(S)}$. Is there some way to modify this algorithm?
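The bias of the plug-in estimator can be checked exactly, since $k\hat\lambda/\lambda(B)$ is binomial and the expectation of $e^{-\alpha\hat\lambda}$ is a finite sum. A minimal sketch, using a hypothetical 1-D example (not from the question): $B=[-1,1]$, so $\lambda(B)=2$, and $S=[0,1/2]$, so $\lambda(S)=1/2$ and $p=1/4$.

```python
import math

# Hypothetical 1-D example (assumed for illustration): B = [-1, 1], so
# lambda(B) = 2, and S = [0, 0.5], so lambda(S) = 0.5 and p = 0.25.
lam_B, lam_S, alpha, k = 2.0, 0.5, 1.0, 10
p = lam_S / lam_B

# lambda_hat = (X / k) * lambda(B) with X ~ Binomial(k, p), so the
# expectation of exp(-alpha * lambda_hat) is a finite binomial sum:
plug_in = sum(
    math.comb(k, x) * p**x * (1 - p) ** (k - x)
    * math.exp(-alpha * (x / k) * lam_B)
    for x in range(k + 1)
)

target = math.exp(-alpha * lam_S)
# Jensen's inequality: exp is convex, so the plug-in estimator
# over-estimates the target on average.
print(plug_in, target)
```

Since $\exp$ is convex, Jensen's inequality makes the plug-in expectation strictly larger than $e^{-\alpha\lambda(S)}$ whenever $\hat\lambda$ is non-degenerate, which the printout confirms.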










Tags: probability, monte-carlo, unbiased-estimator, measure-theory






asked 9 hours ago by Justin Solomon























2 Answers

Suppose that you have the following resources available to you:

1. You have access to an estimator $\hat\lambda$.
2. $\hat\lambda$ is unbiased for $\lambda(S)$.
3. $\hat\lambda$ is almost surely bounded above by $C$.
4. You know the constant $C$, and
5. You can form independent realisations of $\hat\lambda$ as many times as you'd like.

Now, note that for any $u > 0$, the following holds (by the Taylor expansion of $\exp x$):

\begin{align}
e^{-\alpha\lambda(S)} &= e^{-\alpha C}\cdot e^{\alpha\left(C-\lambda(S)\right)} \\
&= e^{-\alpha C}\cdot\sum_{k\geqslant 0}\frac{\left(\alpha\left[C-\lambda(S)\right]\right)^k}{k!} \\
&= e^{-\alpha C}\cdot e^{u}\cdot\sum_{k\geqslant 0}\frac{e^{-u}\cdot\left(\alpha\left[C-\lambda(S)\right]\right)^k}{k!} \\
&= e^{u-\alpha C}\cdot\sum_{k\geqslant 0}\frac{u^k e^{-u}}{k!}\left(\frac{\alpha\left[C-\lambda(S)\right]}{u}\right)^k
\end{align}

Now, do the following:

1. Sample $K\sim\text{Poisson}(u)$.
2. Form $\hat\lambda_1,\ldots,\hat\lambda_K$ as iid unbiased estimators of $\lambda(S)$.
3. Return the estimator
$$\hat\Lambda = e^{u-\alpha C}\cdot\left(\frac{\alpha}{u}\right)^{K}\cdot\prod_{i=1}^{K}\left(C-\hat\lambda_i\right).$$

$\hat\Lambda$ is then a non-negative, unbiased estimator of $e^{-\alpha\lambda(S)}$. This is because

\begin{align}
\mathbf{E}\left[\hat\Lambda\mid K\right] &= e^{u-\alpha C}\cdot\left(\frac{\alpha}{u}\right)^{K}\mathbf{E}\left[\left.\prod_{i=1}^{K}\left(C-\hat\lambda_i\right)\,\right|\,K\right] \\
&= e^{u-\alpha C}\cdot\left(\frac{\alpha}{u}\right)^{K}\prod_{i=1}^{K}\mathbf{E}\left[C-\hat\lambda_i\right] \\
&= e^{u-\alpha C}\cdot\left(\frac{\alpha}{u}\right)^{K}\prod_{i=1}^{K}\left[C-\lambda(S)\right] \\
&= e^{u-\alpha C}\cdot\left(\frac{\alpha}{u}\right)^{K}\left[C-\lambda(S)\right]^{K}
\end{align}

and thus

\begin{align}
\mathbf{E}\left[\hat\Lambda\right] &= \mathbf{E}_K\left[\mathbf{E}\left[\hat\Lambda\mid K\right]\right] \\
&= \mathbf{E}_K\left[e^{u-\alpha C}\cdot\left(\frac{\alpha}{u}\right)^{K}\left[C-\lambda(S)\right]^{K}\right] \\
&= e^{u-\alpha C}\cdot\sum_{k\geqslant 0}\mathbf{P}(K=k)\left(\frac{\alpha}{u}\right)^{k}\left[C-\lambda(S)\right]^{k} \\
&= e^{u-\alpha C}\cdot\sum_{k\geqslant 0}\frac{u^k e^{-u}}{k!}\left(\frac{\alpha\left[C-\lambda(S)\right]}{u}\right)^{k} \\
&= e^{-\alpha\lambda(S)}
\end{align}

by the earlier calculation.
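The three steps above can be sketched in code. This is an editorial illustration, not the answer's own implementation; it assumes the hypothetical 1-D setting $B=[-1,1]$, $S=[0,1/2]$ (so the true value is $e^{-\alpha/2}$), uses the question's Monte Carlo estimator as $\hat\lambda$ with $C=\lambda(B)=2$, and makes the arbitrary choices $u=3$, $\alpha=1$:

```python
import math
import random

random.seed(0)

# Assumed setting: B = [-1, 1], S = [0, 0.5], so lambda(S) = 0.5 and the
# question's MC estimator is bounded above by C = lambda(B) = 2.
lam_B, alpha, u, k = 2.0, 1.0, 3.0, 20
C = lam_B

def lambda_hat():
    """Unbiased MC estimate of lambda(S) from k uniform points in B."""
    hits = sum(0.0 <= random.uniform(-1.0, 1.0) <= 0.5 for _ in range(k))
    return (hits / k) * lam_B

def poisson_sample(mean):
    """Draw K ~ Poisson(mean) by CDF inversion (fine for small means)."""
    q = cdf = math.exp(-mean)
    K, r = 0, random.random()
    while r > cdf:
        K += 1
        q *= mean / K
        cdf += q
    return K

def Lambda_hat():
    """One realisation of the answer's estimator."""
    K = poisson_sample(u)
    prod = 1.0
    for _ in range(K):
        prod *= C - lambda_hat()
    return math.exp(u - alpha * C) * (alpha / u) ** K * prod

# Averaging many independent copies should approach exp(-alpha * 0.5).
n = 50_000
est = sum(Lambda_hat() for _ in range(n)) / n
print(est, math.exp(-alpha * 0.5))
```

Note the role of assumption 5: each realisation of $\hat\Lambda$ consumes a random number $K$ of fresh, independent copies of $\hat\lambda$, which is what sidesteps the fixed-sample-size impossibility argument.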






answered 8 hours ago (edited 7 hours ago) by πr8
• Interesting! Doesn't the estimator for $\hat\lambda$ described in the question work here, since it's bounded above by $\lambda(B)<\infty$? Also, how come this doesn't contradict @whuber's answer below? Is there an easy argument why this is unbiased? Sorry for the many questions; my probability theory is weak :-) – Justin Solomon, 8 hours ago

• The estimator you describe works, since you know $\lambda(B)$. I think this doesn't contradict the other answer because of assumption 5; given finite access to unbiased estimators, I don't think this construction would work. The unbiasedness comes by comparing the expectation of $\hat\Lambda$ to the power series above; I'll make that clearer in the answer. – πr8, 8 hours ago

• Are you sure you can interchange the product and expectation in the second line of the proof of unbiasedness? – jbowman, 7 hours ago

• Seems like it's ok because they're computed iid, right? – Justin Solomon, 7 hours ago


















The answer is in the negative.

A sufficient statistic for a uniform sample is the count $X$ of points observed to lie in $S$. This count has a Binomial$(n,\lambda(S)/\lambda(B))$ distribution. Write $p=\lambda(S)/\lambda(B)$ and $\alpha^\prime=\alpha\lambda(B)$.

For a sample size of $n$, let $t_n$ be any (unrandomized) estimator of $\exp(-\alpha\lambda(S)) = \exp(-(\alpha\lambda(B))p) = \exp(-\alpha^\prime p)$. The expectation is

$$E[t_n(X)] = \sum_{x=0}^{n}\binom{n}{x}p^x(1-p)^{n-x}\,t_n(x),$$

which equals a polynomial of degree at most $n$ in $p$. But if $\alpha^\prime p\ne 0$, the exponential $\exp(-\alpha^\prime p)$ cannot be expressed as a polynomial in $p$. (One proof: take $n+1$ derivatives. The result for the expectation will be zero, but the derivative of the exponential, which itself is an exponential in $p$, cannot be zero.)

The demonstration for randomized estimators is nearly the same: upon taking expectations, we again obtain a polynomial in $p$.

Consequently, no unbiased estimator exists.
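The polynomial obstruction can be illustrated numerically. A sketch under assumed values (none taken from the answer): $n=5$, $\alpha^\prime=2$, with the plug-in rule $t_n(x)=\exp(-\alpha^\prime x/n)$ as one concrete choice of estimator. Its expectation collapses to a closed-form polynomial of degree $n$ in $p$, which cannot coincide with the exponential.

```python
import math

# Assumed illustration values: n = 5, alpha' = 2, and the plug-in
# estimator t_n(x) = exp(-alpha' * x / n).
n, alpha_p = 5, 2.0

def expectation(p):
    """E[t_n(X)] for X ~ Binomial(n, p), written as the answer's finite sum."""
    return sum(
        math.comb(n, x) * p**x * (1 - p) ** (n - x)
        * math.exp(-alpha_p * x / n)
        for x in range(n + 1)
    )

def polynomial(p):
    """Closed form (1 - p + p*e^{-alpha'/n})^n: a degree-n polynomial in p."""
    return (1 - p + p * math.exp(-alpha_p / n)) ** n

for p in (0.1, 0.5, 0.9):
    # The expectation matches the polynomial exactly but misses exp(-alpha' p).
    print(p, expectation(p), polynomial(p), math.exp(-alpha_p * p))
```

The same collapse to a polynomial happens for every choice of $t_n$, which is the heart of the impossibility argument; the plug-in rule above is just one visible instance of the gap.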






• Ah, that's a downer! Thanks for the nice proof. But the Taylor series for $\exp(t)$ converges fairly quickly; perhaps there's an "approximately unbiased" estimator out there? Not sure what that means (I'm not much of a statistician :-) ) – Justin Solomon, 8 hours ago

• How quickly, exactly? The answer depends on the value of $\alpha^\prime p$, and therein lies your problem, because you don't know what that value is. You know only that it lies between $0$ and $\alpha$. You could use that to establish a bound on the bias if you like. – whuber, 8 hours ago

• In my application I expect $S$ to occupy a large portion of $B$. I'd like to use this value in a pseudo-marginal Metropolis-Hastings acceptance ratio; not sure if that method can handle even controllable levels of bias... – Justin Solomon, 8 hours ago

• BTW I'd really appreciate your thoughts on the other answer to this question! – Justin Solomon, 8 hours ago













          Your Answer








          StackExchange.ready(function()
          var channelOptions =
          tags: "".split(" "),
          id: "65"
          ;
          initTagRenderer("".split(" "), "".split(" "), channelOptions);

          StackExchange.using("externalEditor", function()
          // Have to fire editor after snippets, if snippets enabled
          if (StackExchange.settings.snippets.snippetsEnabled)
          StackExchange.using("snippets", function()
          createEditor();
          );

          else
          createEditor();

          );

          function createEditor()
          StackExchange.prepareEditor(
          heartbeatType: 'answer',
          autoActivateHeartbeat: false,
          convertImagesToLinks: false,
          noModals: true,
          showLowRepImageUploadWarning: true,
          reputationToPostImages: null,
          bindNavPrevention: true,
          postfix: "",
          imageUploader:
          brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
          contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
          allowUrls: true
          ,
          onDemand: true,
          discardSelector: ".discard-answer"
          ,immediatelyShowMarkdownHelp:true
          );



          );













          draft saved

          draft discarded


















          StackExchange.ready(
          function ()
          StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fstats.stackexchange.com%2fquestions%2f422696%2funbiased-estimator-of-exponential-of-measure-of-a-set%23new-answer', 'question_page');

          );

          Post as a guest















          Required, but never shown

























          2 Answers
          2






          active

          oldest

          votes








          2 Answers
          2






          active

          oldest

          votes









          active

          oldest

          votes






          active

          oldest

          votes









          2












          $begingroup$

          Suppose that you have the following resources available to you:



          1. You have access to an estimator $hatlambda$.


          2. $hatlambda$ is unbiased for $lambda ( S )$.


          3. $hatlambda$ is almost surely bounded above by $C$.

          4. You know the constant $C$, and

          5. You can form independent realisations of $hatlambda$ as many times as you'd like.

          Now, note that for any $u > 0$, the following holds (by the Taylor expansion of $exp x$):



          beginalign
          e^-alpha lambda ( S ) &= e^-alpha C cdot e^alpha left( C - lambda ( S ) right) \
          &= e^- alpha C cdot sum_k geqslant 0 frac left( alpha left[ C - lambda ( S ) right] right)^k k! \
          &= e^- alpha C cdot e^u cdot sum_k geqslant 0 frac e^-u cdot left( alpha left[ C - lambda ( S ) right] right)^k k! \
          &= e^u -alpha C cdot sum_k geqslant 0 frac u^k e^-u k! left(frac alpha left[ C - lambda ( S ) right]u right)^k
          endalign



          Now, do the following:



          1. Sample $K sim textPoisson ( u )$.

          2. Form $hatlambda_1, cdots, hatlambda_K$ as iid unbiased estimators of $lambda(S)$.

          3. Return the estimator

          $$hatLambda = e^u -alpha C cdot left(frac alpha u right)^K cdot prod_i = 1^K left C - hatlambda_i right.$$



          $hatLambda$ is then a non-negative, unbiased estimator of $lambda(S)$. This is because



          beginalign
          mathbfE left[ hatLambda | K right] &= e^u -alpha C cdot left(frac alpha u right)^K mathbfE left[ prod_i = 1^K left C - hatlambda_i right | K right] \
          &= e^u -alpha C cdot left(frac alpha u right)^K prod_i = 1^K mathbfE left[ C - hatlambda_i right] \
          &= e^u -alpha C cdot left(frac alpha u right)^K prod_i = 1^K left[ C - lambda ( S ) right] \
          &= e^u -alpha C cdot left(frac alpha u right)^K left[ C - lambda ( S ) right]^K
          endalign



          and thus



          beginalign
          mathbfE left[ hatLambda right] &= mathbfE_K left[ mathbfE left[ hatLambda | K right] right] \
          &= mathbfE_K left[ e^u -alpha C cdot left(frac alpha u right)^K left[ C - lambda ( S ) right]^K right] \
          &= e^u -alpha C cdot sum_k geqslant 0 mathbfP ( K = k ) left(frac alpha u right)^K left[ C - lambda ( S ) right]^K \
          &= e^u -alpha C cdot sum_k geqslant 0 frac u^k e^-u k! left(frac alpha left[ C - lambda ( S ) right]u right)^k \
          &= e^-alpha lambda ( S )
          endalign



          by the earlier calculation.






          share|cite|improve this answer











          $endgroup$














          • $begingroup$
            Interesting! Doesn't the estimator for $hatlambda$ described in the question work here, since it's bounded above by $lambda(B)<infty$? Also how come this doesn't contradict @whuber 's answer below? Is there an easy argument why this is unbiased? Sorry for many questions, my probability theory is weak :-)
            $endgroup$
            – Justin Solomon
            8 hours ago










          • $begingroup$
            The estimator you describe works, since you know $lambda (B) $. I think this doesn't contradict the other answer because of assumption $5$; given finite access to unbiased estimators, I don't think this construction would work. The unbiasedness comes by comparing the expectation of $hatLambda$ to the power series above; I'll make that clearer in the answer.
            $endgroup$
            – πr8
            8 hours ago










          • $begingroup$
            Are you sure you can interchange the product and expectation in the second line of the proof of unbiasedness?
            $endgroup$
            – jbowman
            7 hours ago






          • 1




            $begingroup$
            Seems like it's ok because they're computed iid, right?
            $endgroup$
            – Justin Solomon
            7 hours ago















          2












          $begingroup$

          Suppose that you have the following resources available to you:



          1. You have access to an estimator $hatlambda$.


          2. $hatlambda$ is unbiased for $lambda ( S )$.


          3. $hatlambda$ is almost surely bounded above by $C$.

          4. You know the constant $C$, and

          5. You can form independent realisations of $hatlambda$ as many times as you'd like.

          Now, note that for any $u > 0$, the following holds (by the Taylor expansion of $exp x$):



          beginalign
          e^-alpha lambda ( S ) &= e^-alpha C cdot e^alpha left( C - lambda ( S ) right) \
          &= e^- alpha C cdot sum_k geqslant 0 frac left( alpha left[ C - lambda ( S ) right] right)^k k! \
          &= e^- alpha C cdot e^u cdot sum_k geqslant 0 frac e^-u cdot left( alpha left[ C - lambda ( S ) right] right)^k k! \
          &= e^u -alpha C cdot sum_k geqslant 0 frac u^k e^-u k! left(frac alpha left[ C - lambda ( S ) right]u right)^k
          endalign



          Now, do the following:



          1. Sample $K sim textPoisson ( u )$.

          2. Form $hatlambda_1, cdots, hatlambda_K$ as iid unbiased estimators of $lambda(S)$.

          3. Return the estimator

          $$hatLambda = e^u -alpha C cdot left(frac alpha u right)^K cdot prod_i = 1^K left C - hatlambda_i right.$$



          $hatLambda$ is then a non-negative, unbiased estimator of $lambda(S)$. This is because



          beginalign
          mathbfE left[ hatLambda | K right] &= e^u -alpha C cdot left(frac alpha u right)^K mathbfE left[ prod_i = 1^K left C - hatlambda_i right | K right] \
          &= e^u -alpha C cdot left(frac alpha u right)^K prod_i = 1^K mathbfE left[ C - hatlambda_i right] \
          &= e^u -alpha C cdot left(frac alpha u right)^K prod_i = 1^K left[ C - lambda ( S ) right] \
          &= e^u -alpha C cdot left(frac alpha u right)^K left[ C - lambda ( S ) right]^K
          endalign



          and thus



          beginalign
          mathbfE left[ hatLambda right] &= mathbfE_K left[ mathbfE left[ hatLambda | K right] right] \
          &= mathbfE_K left[ e^u -alpha C cdot left(frac alpha u right)^K left[ C - lambda ( S ) right]^K right] \
          &= e^u -alpha C cdot sum_k geqslant 0 mathbfP ( K = k ) left(frac alpha u right)^K left[ C - lambda ( S ) right]^K \
          &= e^u -alpha C cdot sum_k geqslant 0 frac u^k e^-u k! left(frac alpha left[ C - lambda ( S ) right]u right)^k \
          &= e^-alpha lambda ( S )
          endalign



          by the earlier calculation.






          share|cite|improve this answer











          $endgroup$














          • $begingroup$
            Interesting! Doesn't the estimator for $hatlambda$ described in the question work here, since it's bounded above by $lambda(B)<infty$? Also how come this doesn't contradict @whuber 's answer below? Is there an easy argument why this is unbiased? Sorry for many questions, my probability theory is weak :-)
            $endgroup$
            – Justin Solomon
            8 hours ago










          • $begingroup$
            The estimator you describe works, since you know $lambda (B) $. I think this doesn't contradict the other answer because of assumption $5$; given finite access to unbiased estimators, I don't think this construction would work. The unbiasedness comes by comparing the expectation of $hatLambda$ to the power series above; I'll make that clearer in the answer.
            $endgroup$
            – πr8
            8 hours ago










          • $begingroup$
            Are you sure you can interchange the product and expectation in the second line of the proof of unbiasedness?
            $endgroup$
            – jbowman
            7 hours ago






          • 1




            $begingroup$
            Seems like it's ok because they're computed iid, right?
            $endgroup$
            – Justin Solomon
            7 hours ago













          2












          2








          2





          $begingroup$

          Suppose that you have the following resources available to you:



          1. You have access to an estimator $hatlambda$.


          2. $hatlambda$ is unbiased for $lambda ( S )$.


          3. $hatlambda$ is almost surely bounded above by $C$.

          4. You know the constant $C$, and

          5. You can form independent realisations of $hatlambda$ as many times as you'd like.

          Now, note that for any $u > 0$, the following holds (by the Taylor expansion of $exp x$):



          beginalign
          e^-alpha lambda ( S ) &= e^-alpha C cdot e^alpha left( C - lambda ( S ) right) \
          &= e^- alpha C cdot sum_k geqslant 0 frac left( alpha left[ C - lambda ( S ) right] right)^k k! \
          &= e^- alpha C cdot e^u cdot sum_k geqslant 0 frac e^-u cdot left( alpha left[ C - lambda ( S ) right] right)^k k! \
          &= e^u -alpha C cdot sum_k geqslant 0 frac u^k e^-u k! left(frac alpha left[ C - lambda ( S ) right]u right)^k
          endalign



          Now, do the following:



          1. Sample $K sim textPoisson ( u )$.

          2. Form $hatlambda_1, cdots, hatlambda_K$ as iid unbiased estimators of $lambda(S)$.

          3. Return the estimator

          $$hatLambda = e^u -alpha C cdot left(frac alpha u right)^K cdot prod_i = 1^K left C - hatlambda_i right.$$



          $hatLambda$ is then a non-negative, unbiased estimator of $lambda(S)$. This is because



          beginalign
          mathbfE left[ hatLambda | K right] &= e^u -alpha C cdot left(frac alpha u right)^K mathbfE left[ prod_i = 1^K left C - hatlambda_i right | K right] \
          &= e^u -alpha C cdot left(frac alpha u right)^K prod_i = 1^K mathbfE left[ C - hatlambda_i right] \
          &= e^u -alpha C cdot left(frac alpha u right)^K prod_i = 1^K left[ C - lambda ( S ) right] \
          &= e^u -alpha C cdot left(frac alpha u right)^K left[ C - lambda ( S ) right]^K
          endalign



          and thus



          beginalign
          mathbfE left[ hatLambda right] &= mathbfE_K left[ mathbfE left[ hatLambda | K right] right] \
          &= mathbfE_K left[ e^u -alpha C cdot left(frac alpha u right)^K left[ C - lambda ( S ) right]^K right] \
          &= e^u -alpha C cdot sum_k geqslant 0 mathbfP ( K = k ) left(frac alpha u right)^K left[ C - lambda ( S ) right]^K \
          &= e^u -alpha C cdot sum_k geqslant 0 frac u^k e^-u k! left(frac alpha left[ C - lambda ( S ) right]u right)^k \
          &= e^-alpha lambda ( S )
          endalign



          by the earlier calculation.






          share|cite|improve this answer











          $endgroup$



          Suppose that you have the following resources available to you:



          1. You have access to an estimator $hatlambda$.


          2. $hatlambda$ is unbiased for $lambda ( S )$.


          3. $hatlambda$ is almost surely bounded above by $C$.

          4. You know the constant $C$, and

          5. You can form independent realisations of $hatlambda$ as many times as you'd like.

          Now, note that for any $u > 0$, the following holds (by the Taylor expansion of $exp x$):



          beginalign
          e^-alpha lambda ( S ) &= e^-alpha C cdot e^alpha left( C - lambda ( S ) right) \
          &= e^- alpha C cdot sum_k geqslant 0 frac left( alpha left[ C - lambda ( S ) right] right)^k k! \
          &= e^- alpha C cdot e^u cdot sum_k geqslant 0 frac e^-u cdot left( alpha left[ C - lambda ( S ) right] right)^k k! \
          &= e^u -alpha C cdot sum_k geqslant 0 frac u^k e^-u k! left(frac alpha left[ C - lambda ( S ) right]u right)^k
          endalign



          Now, do the following:



          1. Sample $K sim textPoisson ( u )$.

          2. Form $hatlambda_1, cdots, hatlambda_K$ as iid unbiased estimators of $lambda(S)$.

          3. Return the estimator

          $$hatLambda = e^u -alpha C cdot left(frac alpha u right)^K cdot prod_i = 1^K left C - hatlambda_i right.$$



          $hatLambda$ is then a non-negative, unbiased estimator of $lambda(S)$. This is because



          beginalign
          mathbfE left[ hatLambda | K right] &= e^u -alpha C cdot left(frac alpha u right)^K mathbfE left[ prod_i = 1^K left C - hatlambda_i right | K right] \
          &= e^u -alpha C cdot left(frac alpha u right)^K prod_i = 1^K mathbfE left[ C - hatlambda_i right] \
          &= e^u -alpha C cdot left(frac alpha u right)^K prod_i = 1^K left[ C - lambda ( S ) right] \
          &= e^u -alpha C cdot left(frac alpha u right)^K left[ C - lambda ( S ) right]^K
          endalign



          and thus



          beginalign
          mathbfE left[ hatLambda right] &= mathbfE_K left[ mathbfE left[ hatLambda | K right] right] \
          &= mathbfE_K left[ e^u -alpha C cdot left(frac alpha u right)^K left[ C - lambda ( S ) right]^K right] \
          &= e^u -alpha C cdot sum_k geqslant 0 mathbfP ( K = k ) left(frac alpha u right)^K left[ C - lambda ( S ) right]^K \
          &= e^u -alpha C cdot sum_k geqslant 0 frac u^k e^-u k! left(frac alpha left[ C - lambda ( S ) right]u right)^k \
          &= e^-alpha lambda ( S )
          endalign



          by the earlier calculation.







          share|cite|improve this answer














          share|cite|improve this answer



          share|cite|improve this answer








          edited 7 hours ago

























          answered 8 hours ago









          πr8πr8

          1937 bronze badges




          1937 bronze badges














          • $begingroup$
            Interesting! Doesn't the estimator for $hatlambda$ described in the question work here, since it's bounded above by $lambda(B)<infty$? Also how come this doesn't contradict @whuber 's answer below? Is there an easy argument why this is unbiased? Sorry for many questions, my probability theory is weak :-)
            $endgroup$
            – Justin Solomon
            8 hours ago










          • $begingroup$
            The estimator you describe works, since you know $lambda (B) $. I think this doesn't contradict the other answer because of assumption $5$; given finite access to unbiased estimators, I don't think this construction would work. The unbiasedness comes by comparing the expectation of $hatLambda$ to the power series above; I'll make that clearer in the answer.
            $endgroup$
            – πr8
            8 hours ago










          • $begingroup$
            Are you sure you can interchange the product and expectation in the second line of the proof of unbiasedness?
            $endgroup$
            – jbowman
            7 hours ago






          • 1




            $begingroup$
            Seems like it's ok because they're computed iid, right?
            $endgroup$
            – Justin Solomon
            7 hours ago
















          4












          The answer is in the negative.

          A sufficient statistic for a uniform sample is the count $X$ of points observed to lie in $S$. This count has a Binomial$(n, \lambda(S)/\lambda(B))$ distribution. Write $p = \lambda(S)/\lambda(B)$ and $\alpha' = \alpha\lambda(B)$.

          For a sample size of $n$, let $t_n$ be any (unrandomized) estimator of $\exp(-\alpha\lambda(S)) = \exp(-(\alpha\lambda(B))\,p) = \exp(-\alpha' p)$. The expectation is

          $$E[t_n(X)] = \sum_{x=0}^{n} \binom{n}{x} p^x (1-p)^{n-x}\, t_n(x),$$

          which is a polynomial of degree at most $n$ in $p$. But if $\alpha' p \ne 0$, the exponential $\exp(-\alpha' p)$ cannot be expressed as a polynomial in $p$. (One proof: take $n+1$ derivatives with respect to $p$. The result for the expectation will be zero, but the derivative of the exponential, which is itself an exponential in $p$, cannot be zero.)

          The demonstration for randomized estimators is nearly the same: upon taking expectations, we again obtain a polynomial in $p$.

          Consequently, no unbiased estimator exists.
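[Editor's note: the bias is easy to exhibit numerically. The sketch below uses the natural plug-in estimator $t_n(X) = \exp(-\alpha' X/n)$ with hypothetical values of $n$, $\alpha'$, and $p$; its exact expectation under the binomial model is a degree-$n$ polynomial in $p$, so it cannot match $\exp(-\alpha' p)$.]

```python
import math

# Hypothetical illustration: exact expectation of the plug-in estimator
# t_n(X) = exp(-alpha' * X / n) under X ~ Binomial(n, p), versus the
# target exp(-alpha' * p). The expectation is a polynomial in p, so some
# bias is unavoidable.
n, alpha_prime, p = 20, 1.0, 0.3

target = math.exp(-alpha_prime * p)

# Exact expectation via the binomial pmf (finite sum, no simulation).
expectation = sum(
    math.comb(n, x) * p**x * (1 - p)**(n - x) * math.exp(-alpha_prime * x / n)
    for x in range(n + 1)
)

bias = expectation - target
print(f"target={target:.6f}  E[t_n(X)]={expectation:.6f}  bias={bias:+.2e}")
```

By Jensen's inequality (convexity of $\exp$) the bias of this particular estimator is positive, though it shrinks as $n$ grows.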
















          • 1
            Ah, that's a downer! Thanks for the nice proof. But the Taylor series for $\exp(t)$ converges fairly quickly --- perhaps there's an "approximately unbiased" estimator out there? Not sure what that means (I'm not much of a statistician :-) )
            – Justin Solomon
            8 hours ago











          • How quickly, exactly? The answer depends on the value of $\alpha' p$ --- and therein lies your problem, because you don't know what that value is. You know only that it lies between $0$ and $\alpha$. You could use that to establish a bound on the bias if you like.
            – whuber
            8 hours ago










          • In my application I expect $S$ to occupy a large portion of $B$. I'd like to use this value in a pseudo-marginal Metropolis-Hastings acceptance ratio; I'm not sure that method can handle even controllable levels of bias...
            – Justin Solomon
            8 hours ago










          • BTW, I'd really appreciate your thoughts on the other answer to this question!
            – Justin Solomon
            8 hours ago
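[Editor's note: whuber's suggestion of bounding the bias can be carried out directly. A sketch, again using the hypothetical plug-in estimator $\exp(-\alpha' X/n)$: since $p$ is unknown but confined to $[0,1]$, scan a grid of $p$ values and take the worst case.]

```python
import math

n, alpha_prime = 20, 1.0

def bias(p: float) -> float:
    # Closed form: E[exp(-s X)] = (1 - p + p e^{-s})^n for X ~ Binomial(n, p),
    # with s = alpha' / n; subtract the target exp(-alpha' p).
    s = alpha_prime / n
    return (1 - p + p * math.exp(-s))**n - math.exp(-alpha_prime * p)

# Worst-case bias over the feasible range of p.
worst = max(bias(i / 1000) for i in range(1001))
print(f"worst-case bias over p in [0,1]: {worst:.4f}")
```

The bias vanishes at $p=0$ and $p=1$ and peaks in between, so the bound is finite and can be made as small as desired by increasing $n$.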















          answered 9 hours ago
          – whuber
          215k reputation; 34 gold badges, 471 silver badges, 862 bronze badges










