Unbiased estimator of exponential of measure of a set?
Suppose we have a (measurable and suitably well-behaved) set $S \subseteq B \subset \mathbb{R}^n$, where $B$ is compact. Moreover, suppose we can draw samples from the uniform distribution over $B$ with respect to the Lebesgue measure $\lambda(\cdot)$, and that we know the measure $\lambda(B)$. For example, perhaps $B$ is a box $[-c,c]^n$ containing $S$.

For fixed $\alpha \in \mathbb{R}$, is there a simple unbiased way to estimate $e^{-\alpha\lambda(S)}$ by uniformly sampling points in $B$ and checking whether they are inside or outside of $S$?

As an example of something that doesn't quite work, suppose we sample $k$ points $p_1, \ldots, p_k \sim \textrm{Uniform}(B)$. Then we can use the Monte Carlo estimate $$\lambda(S) \approx \hat\lambda := \frac{\#\{p_i \in S\}}{k}\,\lambda(B).$$

But, while $\hat\lambda$ is an unbiased estimator of $\lambda(S)$, I don't think it's the case that $e^{-\alpha\hat\lambda}$ is an unbiased estimator of $e^{-\alpha\lambda(S)}$. Is there some way to modify this algorithm?

Tags: probability, monte-carlo, unbiased-estimator, measure-theory
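A quick simulation illustrates the bias described above. This is a minimal sketch in a hypothetical one-dimensional setting (not from the question itself): $B = [0, 1]$, $S = [0, 1/2]$, so $\lambda(S) = 1/2$. Since $e^{-\alpha x}$ is convex, Jensen's inequality predicts $\mathbf{E}[e^{-\alpha\hat\lambda}] > e^{-\alpha\lambda(S)}$, and the sample mean of the plug-in estimator indeed sits above the true value.

```python
import math
import random

def naive_estimate(alpha, k, rng, c=1.0, s=0.5):
    """Plug-in estimate exp(-alpha * lambda_hat), with B = [0, c] and S = [0, s].

    hits/k * c is an unbiased Monte Carlo estimate of lambda(S) = s,
    but exponentiating it introduces (upward, by Jensen) bias.
    """
    hits = sum(1 for _ in range(k) if rng.random() * c <= s)
    lam_hat = hits / k * c
    return math.exp(-alpha * lam_hat)

rng = random.Random(0)
alpha, k, reps = 1.0, 10, 50_000
mean = sum(naive_estimate(alpha, k, rng) for _ in range(reps)) / reps
print(mean, math.exp(-alpha * 0.5))  # the sample mean sits visibly above exp(-0.5)
```

Increasing $k$ shrinks the bias but, as the second answer below shows, no finite $k$ eliminates it.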
asked 9 hours ago
Justin Solomon (264; 1 silver badge, 8 bronze badges)
2 Answers
Suppose that you have the following resources available to you:

1. You have access to an estimator $\hat\lambda$.
2. $\hat\lambda$ is unbiased for $\lambda(S)$.
3. $\hat\lambda$ is almost surely bounded above by $C$.
4. You know the constant $C$, and
5. You can form independent realisations of $\hat\lambda$ as many times as you'd like.

Now, note that for any $u > 0$, the following holds (by the Taylor expansion of $\exp x$):
\begin{align}
e^{-\alpha\lambda(S)} &= e^{-\alpha C} \cdot e^{\alpha\left(C - \lambda(S)\right)} \\
&= e^{-\alpha C} \cdot \sum_{k \geqslant 0} \frac{\left(\alpha\left[C - \lambda(S)\right]\right)^k}{k!} \\
&= e^{-\alpha C} \cdot e^{u} \cdot \sum_{k \geqslant 0} \frac{e^{-u} \cdot \left(\alpha\left[C - \lambda(S)\right]\right)^k}{k!} \\
&= e^{u - \alpha C} \cdot \sum_{k \geqslant 0} \frac{u^k e^{-u}}{k!} \left(\frac{\alpha\left[C - \lambda(S)\right]}{u}\right)^k
\end{align}

Now, do the following:

1. Sample $K \sim \text{Poisson}(u)$.
2. Form $\hat\lambda_1, \ldots, \hat\lambda_K$ as iid unbiased estimators of $\lambda(S)$.
3. Return the estimator
$$\hat\Lambda = e^{u - \alpha C} \cdot \left(\frac{\alpha}{u}\right)^K \cdot \prod_{i=1}^{K} \left[C - \hat\lambda_i\right].$$

$\hat\Lambda$ is then a non-negative (for $\alpha \geqslant 0$), unbiased estimator of $e^{-\alpha\lambda(S)}$. This is because
\begin{align}
\mathbf{E}\left[\hat\Lambda \mid K\right] &= e^{u - \alpha C} \cdot \left(\frac{\alpha}{u}\right)^K \mathbf{E}\left[\prod_{i=1}^{K} \left[C - \hat\lambda_i\right] \,\middle|\, K\right] \\
&= e^{u - \alpha C} \cdot \left(\frac{\alpha}{u}\right)^K \prod_{i=1}^{K} \mathbf{E}\left[C - \hat\lambda_i\right] \\
&= e^{u - \alpha C} \cdot \left(\frac{\alpha}{u}\right)^K \prod_{i=1}^{K} \left[C - \lambda(S)\right] \\
&= e^{u - \alpha C} \cdot \left(\frac{\alpha}{u}\right)^K \left[C - \lambda(S)\right]^K
\end{align}
(the interchange of expectation and product in the second line is justified because the $\hat\lambda_i$ are iid and independent of $K$), and thus
\begin{align}
\mathbf{E}\left[\hat\Lambda\right] &= \mathbf{E}_K\left[\mathbf{E}\left[\hat\Lambda \mid K\right]\right] \\
&= \mathbf{E}_K\left[e^{u - \alpha C} \cdot \left(\frac{\alpha}{u}\right)^K \left[C - \lambda(S)\right]^K\right] \\
&= e^{u - \alpha C} \cdot \sum_{k \geqslant 0} \mathbf{P}(K = k) \left(\frac{\alpha}{u}\right)^k \left[C - \lambda(S)\right]^k \\
&= e^{u - \alpha C} \cdot \sum_{k \geqslant 0} \frac{u^k e^{-u}}{k!} \left(\frac{\alpha\left[C - \lambda(S)\right]}{u}\right)^k \\
&= e^{-\alpha\lambda(S)}
\end{align}
by the earlier calculation.

answered 8 hours ago (edited 7 hours ago) by πr8 (193; 7 bronze badges)
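The three steps above can be sketched in code. This is a minimal illustration under assumed details not in the answer: the Poisson draw uses Knuth's multiplication method, and $\hat\lambda$ is taken to be the question's one-point indicator estimator in a hypothetical setting $B = [0,1]$, $S = [0,1/2]$, so $C = \lambda(B) = 1$; the function and variable names are my own.

```python
import math
import random

def sample_poisson(u, rng):
    """Knuth's multiplication method; adequate for moderate u."""
    threshold, k, p = math.exp(-u), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def poisson_estimator(alpha, C, u, lam_hat, rng):
    """One realisation of the estimator Lambda-hat from the answer.

    lam_hat(rng) must return an unbiased estimate of lambda(S)
    that is almost surely bounded above by C.
    """
    K = sample_poisson(u, rng)
    prod = 1.0
    for _ in range(K):
        prod *= C - lam_hat(rng)
    return math.exp(u - alpha * C) * (alpha / u) ** K * prod

# Hypothetical setting: B = [0, 1], S = [0, 1/2]. A single uniform point
# gives an unbiased estimate of lambda(S) bounded by C = lambda(B) = 1.
rng = random.Random(0)
lam_hat = lambda r: 1.0 if r.random() <= 0.5 else 0.0
reps = 200_000
mean = sum(poisson_estimator(1.0, 1.0, 2.0, lam_hat, rng) for _ in range(reps)) / reps
print(mean, math.exp(-0.5))  # the sample mean tracks the true value closely
```

Here $u$ is a tuning parameter: the expected number of $\hat\lambda$ draws per realisation is $u$, and both very small and very large $u$ inflate the variance of $\hat\Lambda$.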
Interesting! Doesn't the estimator for $\hat\lambda$ described in the question work here, since it's bounded above by $\lambda(B) < \infty$? Also, how come this doesn't contradict @whuber's answer below? Is there an easy argument for why this is unbiased? Sorry for the many questions, my probability theory is weak :-)
– Justin Solomon, 8 hours ago

The estimator you describe works, since you know $\lambda(B)$. I think this doesn't contradict the other answer because of assumption 5: given only finite access to unbiased estimators, I don't think this construction would work. The unbiasedness comes from comparing the expectation of $\hat\Lambda$ to the power series above; I'll make that clearer in the answer.
– πr8, 8 hours ago

Are you sure you can interchange the product and expectation in the second line of the proof of unbiasedness?
– jbowman, 7 hours ago

Seems like it's ok because they're computed iid, right?
– Justin Solomon, 7 hours ago
The answer is in the negative.

A sufficient statistic for a uniform sample is the count $X$ of points observed to lie in $S$. This count has a Binomial$(n, \lambda(S)/\lambda(B))$ distribution. Write $p = \lambda(S)/\lambda(B)$ and $\alpha^\prime = \alpha\lambda(B)$.

For a sample size of $n$, let $t_n$ be any (unrandomized) estimator of $\exp(-\alpha\lambda(S)) = \exp(-(\alpha\lambda(B))p) = \exp(-\alpha^\prime p)$. The expectation is
$$E[t_n(X)] = \sum_{x=0}^{n} \binom{n}{x} p^x (1-p)^{n-x}\, t_n(x),$$
which equals a polynomial of degree at most $n$ in $p$. But if $\alpha^\prime p \ne 0$, the exponential $\exp(-\alpha^\prime p)$ cannot be expressed as a polynomial in $p$. (One proof: take $n+1$ derivatives with respect to $p$. The result for the expectation will be zero, but the derivative of the exponential, which is itself an exponential in $p$, cannot be zero.)

The demonstration for randomized estimators is nearly the same: upon taking expectations, we again obtain a polynomial in $p$.

Consequently, no unbiased estimator exists.
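The polynomial obstruction is easy to see numerically. For the specific plug-in estimator $t_n(x) = \exp(-\alpha^\prime x/n)$, the expectation above has the closed form $\big((1-p) + p\,e^{-\alpha^\prime/n}\big)^n$: a degree-$n$ polynomial in $p$ that approaches, but never equals, $\exp(-\alpha^\prime p)$. A minimal sketch (the example values $p = 0.5$, $\alpha^\prime = 1$ are my own, not from the answer):

```python
import math

def plugin_expectation(n, p, alpha_prime):
    """Exact E[exp(-alpha' * X / n)] for X ~ Binomial(n, p).

    Expanding the binomial expectation gives ((1-p) + p*exp(-alpha'/n))**n,
    a polynomial of degree n in p, as the argument above requires.
    """
    return ((1 - p) + p * math.exp(-alpha_prime / n)) ** n

p, a = 0.5, 1.0
target = math.exp(-a * p)
for n in (10, 100, 1000):
    # Bias shrinks roughly like O(1/n) but is positive for every finite n.
    print(n, plugin_expectation(n, p, a) - target)
```

This matches the comment thread below: the bias can be bounded and driven down by taking $n$ large, but it never reaches zero.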
Ah, that's a downer! Thanks for the nice proof. But the Taylor series for $\exp(t)$ converges fairly quickly; perhaps there's an "approximately unbiased" estimator out there? Not sure what that means (I'm not much of a statistician :-) )
– Justin Solomon, 8 hours ago
How quickly, exactly? The answer depends on the value of $\alpha^\prime p$, and therein lies your problem, because you don't know what that value is. You know only that it lies between $0$ and $\alpha^\prime$. You could use that to establish a bound on the bias if you like.
– whuber♦, 8 hours ago
In my application I expect $S$ to occupy a large portion of $B$. I'd like to use this value in a pseudo-marginal Metropolis-Hastings acceptance ratio; not sure if that method can handle even controllable levels of bias...
– Justin Solomon, 8 hours ago

BTW I'd really appreciate your thoughts on the other answer to this question!
– Justin Solomon, 8 hours ago
Your Answer
StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "65"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);
else
createEditor();
);
function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: false,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
imageUploader:
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
,
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);
);
Sign up or log in
StackExchange.ready(function ()
StackExchange.helpers.onClickDraftSave('#login-link');
);
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fstats.stackexchange.com%2fquestions%2f422696%2funbiased-estimator-of-exponential-of-measure-of-a-set%23new-answer', 'question_page');
);
Post as a guest
Required, but never shown
2 Answers
2
active
oldest
votes
2 Answers
2
active
oldest
votes
active
oldest
votes
active
oldest
votes
$begingroup$
Suppose that you have the following resources available to you:
- You have access to an estimator $hatlambda$.
$hatlambda$ is unbiased for $lambda ( S )$.
$hatlambda$ is almost surely bounded above by $C$.- You know the constant $C$, and
- You can form independent realisations of $hatlambda$ as many times as you'd like.
Now, note that for any $u > 0$, the following holds (by the Taylor expansion of $exp x$):
beginalign
e^-alpha lambda ( S ) &= e^-alpha C cdot e^alpha left( C - lambda ( S ) right) \
&= e^- alpha C cdot sum_k geqslant 0 frac left( alpha left[ C - lambda ( S ) right] right)^k k! \
&= e^- alpha C cdot e^u cdot sum_k geqslant 0 frac e^-u cdot left( alpha left[ C - lambda ( S ) right] right)^k k! \
&= e^u -alpha C cdot sum_k geqslant 0 frac u^k e^-u k! left(frac alpha left[ C - lambda ( S ) right]u right)^k
endalign
Now, do the following:
- Sample $K sim textPoisson ( u )$.
- Form $hatlambda_1, cdots, hatlambda_K$ as iid unbiased estimators of $lambda(S)$.
- Return the estimator
$$hatLambda = e^u -alpha C cdot left(frac alpha u right)^K cdot prod_i = 1^K left C - hatlambda_i right.$$
$hatLambda$ is then a non-negative, unbiased estimator of $lambda(S)$. This is because
beginalign
mathbfE left[ hatLambda | K right] &= e^u -alpha C cdot left(frac alpha u right)^K mathbfE left[ prod_i = 1^K left C - hatlambda_i right | K right] \
&= e^u -alpha C cdot left(frac alpha u right)^K prod_i = 1^K mathbfE left[ C - hatlambda_i right] \
&= e^u -alpha C cdot left(frac alpha u right)^K prod_i = 1^K left[ C - lambda ( S ) right] \
&= e^u -alpha C cdot left(frac alpha u right)^K left[ C - lambda ( S ) right]^K
endalign
and thus
beginalign
mathbfE left[ hatLambda right] &= mathbfE_K left[ mathbfE left[ hatLambda | K right] right] \
&= mathbfE_K left[ e^u -alpha C cdot left(frac alpha u right)^K left[ C - lambda ( S ) right]^K right] \
&= e^u -alpha C cdot sum_k geqslant 0 mathbfP ( K = k ) left(frac alpha u right)^K left[ C - lambda ( S ) right]^K \
&= e^u -alpha C cdot sum_k geqslant 0 frac u^k e^-u k! left(frac alpha left[ C - lambda ( S ) right]u right)^k \
&= e^-alpha lambda ( S )
endalign
by the earlier calculation.
$endgroup$
$begingroup$
Interesting! Doesn't the estimator for $hatlambda$ described in the question work here, since it's bounded above by $lambda(B)<infty$? Also how come this doesn't contradict @whuber 's answer below? Is there an easy argument why this is unbiased? Sorry for many questions, my probability theory is weak :-)
$endgroup$
– Justin Solomon
8 hours ago
$begingroup$
The estimator you describe works, since you know $lambda (B) $. I think this doesn't contradict the other answer because of assumption $5$; given finite access to unbiased estimators, I don't think this construction would work. The unbiasedness comes by comparing the expectation of $hatLambda$ to the power series above; I'll make that clearer in the answer.
$endgroup$
– πr8
8 hours ago
$begingroup$
Are you sure you can interchange the product and expectation in the second line of the proof of unbiasedness?
$endgroup$
– jbowman
7 hours ago
1
$begingroup$
Seems like it's ok because they're computed iid, right?
$endgroup$
– Justin Solomon
7 hours ago
add a comment |
$begingroup$
Suppose that you have the following resources available to you:
- You have access to an estimator $hatlambda$.
$hatlambda$ is unbiased for $lambda ( S )$.
$hatlambda$ is almost surely bounded above by $C$.- You know the constant $C$, and
- You can form independent realisations of $hatlambda$ as many times as you'd like.
Now, note that for any $u > 0$, the following holds (by the Taylor expansion of $exp x$):
beginalign
e^-alpha lambda ( S ) &= e^-alpha C cdot e^alpha left( C - lambda ( S ) right) \
&= e^- alpha C cdot sum_k geqslant 0 frac left( alpha left[ C - lambda ( S ) right] right)^k k! \
&= e^- alpha C cdot e^u cdot sum_k geqslant 0 frac e^-u cdot left( alpha left[ C - lambda ( S ) right] right)^k k! \
&= e^u -alpha C cdot sum_k geqslant 0 frac u^k e^-u k! left(frac alpha left[ C - lambda ( S ) right]u right)^k
endalign
Now, do the following:
- Sample $K sim textPoisson ( u )$.
- Form $hatlambda_1, cdots, hatlambda_K$ as iid unbiased estimators of $lambda(S)$.
- Return the estimator
$$hatLambda = e^u -alpha C cdot left(frac alpha u right)^K cdot prod_i = 1^K left C - hatlambda_i right.$$
$hatLambda$ is then a non-negative, unbiased estimator of $lambda(S)$. This is because
beginalign
mathbfE left[ hatLambda | K right] &= e^u -alpha C cdot left(frac alpha u right)^K mathbfE left[ prod_i = 1^K left C - hatlambda_i right | K right] \
&= e^u -alpha C cdot left(frac alpha u right)^K prod_i = 1^K mathbfE left[ C - hatlambda_i right] \
&= e^u -alpha C cdot left(frac alpha u right)^K prod_i = 1^K left[ C - lambda ( S ) right] \
&= e^u -alpha C cdot left(frac alpha u right)^K left[ C - lambda ( S ) right]^K
endalign
and thus
beginalign
mathbfE left[ hatLambda right] &= mathbfE_K left[ mathbfE left[ hatLambda | K right] right] \
&= mathbfE_K left[ e^u -alpha C cdot left(frac alpha u right)^K left[ C - lambda ( S ) right]^K right] \
&= e^u -alpha C cdot sum_k geqslant 0 mathbfP ( K = k ) left(frac alpha u right)^K left[ C - lambda ( S ) right]^K \
&= e^u -alpha C cdot sum_k geqslant 0 frac u^k e^-u k! left(frac alpha left[ C - lambda ( S ) right]u right)^k \
&= e^-alpha lambda ( S )
endalign
by the earlier calculation.
$endgroup$
$begingroup$
Interesting! Doesn't the estimator for $hatlambda$ described in the question work here, since it's bounded above by $lambda(B)<infty$? Also how come this doesn't contradict @whuber 's answer below? Is there an easy argument why this is unbiased? Sorry for many questions, my probability theory is weak :-)
$endgroup$
– Justin Solomon
8 hours ago
$begingroup$
The estimator you describe works, since you know $lambda (B) $. I think this doesn't contradict the other answer because of assumption $5$; given finite access to unbiased estimators, I don't think this construction would work. The unbiasedness comes by comparing the expectation of $hatLambda$ to the power series above; I'll make that clearer in the answer.
$endgroup$
– πr8
8 hours ago
$begingroup$
Are you sure you can interchange the product and expectation in the second line of the proof of unbiasedness?
$endgroup$
– jbowman
7 hours ago
1
$begingroup$
Seems like it's ok because they're computed iid, right?
$endgroup$
– Justin Solomon
7 hours ago
add a comment |
$begingroup$
Suppose that you have the following resources available to you:
- You have access to an estimator $hatlambda$.
$hatlambda$ is unbiased for $lambda ( S )$.
$hatlambda$ is almost surely bounded above by $C$.- You know the constant $C$, and
- You can form independent realisations of $hatlambda$ as many times as you'd like.
Now, note that for any $u > 0$, the following holds (by the Taylor expansion of $exp x$):
beginalign
e^-alpha lambda ( S ) &= e^-alpha C cdot e^alpha left( C - lambda ( S ) right) \
&= e^- alpha C cdot sum_k geqslant 0 frac left( alpha left[ C - lambda ( S ) right] right)^k k! \
&= e^- alpha C cdot e^u cdot sum_k geqslant 0 frac e^-u cdot left( alpha left[ C - lambda ( S ) right] right)^k k! \
&= e^u -alpha C cdot sum_k geqslant 0 frac u^k e^-u k! left(frac alpha left[ C - lambda ( S ) right]u right)^k
endalign
Now, do the following:
- Sample $K sim textPoisson ( u )$.
- Form $hatlambda_1, cdots, hatlambda_K$ as iid unbiased estimators of $lambda(S)$.
- Return the estimator
$$hatLambda = e^u -alpha C cdot left(frac alpha u right)^K cdot prod_i = 1^K left C - hatlambda_i right.$$
$hatLambda$ is then a non-negative, unbiased estimator of $lambda(S)$. This is because
beginalign
mathbfE left[ hatLambda | K right] &= e^u -alpha C cdot left(frac alpha u right)^K mathbfE left[ prod_i = 1^K left C - hatlambda_i right | K right] \
&= e^u -alpha C cdot left(frac alpha u right)^K prod_i = 1^K mathbfE left[ C - hatlambda_i right] \
&= e^u -alpha C cdot left(frac alpha u right)^K prod_i = 1^K left[ C - lambda ( S ) right] \
&= e^u -alpha C cdot left(frac alpha u right)^K left[ C - lambda ( S ) right]^K
endalign
and thus
beginalign
mathbfE left[ hatLambda right] &= mathbfE_K left[ mathbfE left[ hatLambda | K right] right] \
&= mathbfE_K left[ e^u -alpha C cdot left(frac alpha u right)^K left[ C - lambda ( S ) right]^K right] \
&= e^u -alpha C cdot sum_k geqslant 0 mathbfP ( K = k ) left(frac alpha u right)^K left[ C - lambda ( S ) right]^K \
&= e^u -alpha C cdot sum_k geqslant 0 frac u^k e^-u k! left(frac alpha left[ C - lambda ( S ) right]u right)^k \
&= e^-alpha lambda ( S )
endalign
by the earlier calculation.
$endgroup$
Suppose that you have the following resources available to you:
- You have access to an estimator $hatlambda$.
$hatlambda$ is unbiased for $lambda ( S )$.
$hatlambda$ is almost surely bounded above by $C$.- You know the constant $C$, and
- You can form independent realisations of $hatlambda$ as many times as you'd like.
Now, note that for any $u > 0$, the following holds (by the Taylor expansion of $exp x$):
beginalign
e^-alpha lambda ( S ) &= e^-alpha C cdot e^alpha left( C - lambda ( S ) right) \
&= e^- alpha C cdot sum_k geqslant 0 frac left( alpha left[ C - lambda ( S ) right] right)^k k! \
&= e^- alpha C cdot e^u cdot sum_k geqslant 0 frac e^-u cdot left( alpha left[ C - lambda ( S ) right] right)^k k! \
&= e^u -alpha C cdot sum_k geqslant 0 frac u^k e^-u k! left(frac alpha left[ C - lambda ( S ) right]u right)^k
endalign
Now, do the following:
- Sample $K sim textPoisson ( u )$.
- Form $hatlambda_1, cdots, hatlambda_K$ as iid unbiased estimators of $lambda(S)$.
- Return the estimator
$$hatLambda = e^u -alpha C cdot left(frac alpha u right)^K cdot prod_i = 1^K left C - hatlambda_i right.$$
$hatLambda$ is then a non-negative, unbiased estimator of $lambda(S)$. This is because
beginalign
mathbfE left[ hatLambda | K right] &= e^u -alpha C cdot left(frac alpha u right)^K mathbfE left[ prod_i = 1^K left C - hatlambda_i right | K right] \
&= e^u -alpha C cdot left(frac alpha u right)^K prod_i = 1^K mathbfE left[ C - hatlambda_i right] \
&= e^u -alpha C cdot left(frac alpha u right)^K prod_i = 1^K left[ C - lambda ( S ) right] \
&= e^u -alpha C cdot left(frac alpha u right)^K left[ C - lambda ( S ) right]^K
endalign
and thus
beginalign
mathbfE left[ hatLambda right] &= mathbfE_K left[ mathbfE left[ hatLambda | K right] right] \
&= mathbfE_K left[ e^u -alpha C cdot left(frac alpha u right)^K left[ C - lambda ( S ) right]^K right] \
&= e^u -alpha C cdot sum_k geqslant 0 mathbfP ( K = k ) left(frac alpha u right)^K left[ C - lambda ( S ) right]^K \
&= e^u -alpha C cdot sum_k geqslant 0 frac u^k e^-u k! left(frac alpha left[ C - lambda ( S ) right]u right)^k \
&= e^-alpha lambda ( S )
endalign
by the earlier calculation.
edited 7 hours ago
answered 8 hours ago
πr8πr8
1937 bronze badges
1937 bronze badges
$begingroup$
Interesting! Doesn't the estimator for $hatlambda$ described in the question work here, since it's bounded above by $lambda(B)<infty$? Also how come this doesn't contradict @whuber 's answer below? Is there an easy argument why this is unbiased? Sorry for many questions, my probability theory is weak :-)
$endgroup$
– Justin Solomon
8 hours ago
$begingroup$
The estimator you describe works, since you know $lambda (B) $. I think this doesn't contradict the other answer because of assumption $5$; given finite access to unbiased estimators, I don't think this construction would work. The unbiasedness comes by comparing the expectation of $hatLambda$ to the power series above; I'll make that clearer in the answer.
$endgroup$
– πr8
8 hours ago
$begingroup$
Are you sure you can interchange the product and expectation in the second line of the proof of unbiasedness?
$endgroup$
– jbowman
7 hours ago
1
$begingroup$
Seems like it's ok because they're computed iid, right?
$endgroup$
– Justin Solomon
7 hours ago
add a comment |
$begingroup$
Interesting! Doesn't the estimator for $hatlambda$ described in the question work here, since it's bounded above by $lambda(B)<infty$? Also how come this doesn't contradict @whuber 's answer below? Is there an easy argument why this is unbiased? Sorry for many questions, my probability theory is weak :-)
$endgroup$
– Justin Solomon
8 hours ago
$begingroup$
The estimator you describe works, since you know $lambda (B) $. I think this doesn't contradict the other answer because of assumption $5$; given finite access to unbiased estimators, I don't think this construction would work. The unbiasedness comes by comparing the expectation of $hatLambda$ to the power series above; I'll make that clearer in the answer.
$endgroup$
– πr8
8 hours ago
$begingroup$
Are you sure you can interchange the product and expectation in the second line of the proof of unbiasedness?
$endgroup$
– jbowman
7 hours ago
1
$begingroup$
Seems like it's ok because they're computed iid, right?
$endgroup$
– Justin Solomon
7 hours ago
$begingroup$
Interesting! Doesn't the estimator for $hatlambda$ described in the question work here, since it's bounded above by $lambda(B)<infty$? Also how come this doesn't contradict @whuber 's answer below? Is there an easy argument why this is unbiased? Sorry for many questions, my probability theory is weak :-)
$endgroup$
– Justin Solomon
8 hours ago
$begingroup$
Interesting! Doesn't the estimator for $hatlambda$ described in the question work here, since it's bounded above by $lambda(B)<infty$? Also how come this doesn't contradict @whuber 's answer below? Is there an easy argument why this is unbiased? Sorry for many questions, my probability theory is weak :-)
$endgroup$
– Justin Solomon
8 hours ago
$begingroup$
The estimator you describe works, since you know $lambda (B) $. I think this doesn't contradict the other answer because of assumption $5$; given finite access to unbiased estimators, I don't think this construction would work. The unbiasedness comes by comparing the expectation of $hatLambda$ to the power series above; I'll make that clearer in the answer.
$endgroup$
– πr8
8 hours ago
$begingroup$
The estimator you describe works, since you know $lambda (B) $. I think this doesn't contradict the other answer because of assumption $5$; given finite access to unbiased estimators, I don't think this construction would work. The unbiasedness comes by comparing the expectation of $hatLambda$ to the power series above; I'll make that clearer in the answer.
$endgroup$
– πr8
8 hours ago
$begingroup$
Are you sure you can interchange the product and expectation in the second line of the proof of unbiasedness?
$endgroup$
– jbowman
7 hours ago
$begingroup$
Are you sure you can interchange the product and expectation in the second line of the proof of unbiasedness?
$endgroup$
– jbowman
7 hours ago
1
1
$begingroup$
Seems like it's ok because they're computed iid, right?
$endgroup$
– Justin Solomon
7 hours ago
$begingroup$
Seems like it's ok because they're computed iid, right?
$endgroup$
– Justin Solomon
7 hours ago
add a comment |
$begingroup$
The answer is in the negative.
A sufficient statistic for a uniform sample is the count $X$ of points observed to lie in $S.$ This count has a Binomial$(n,lambda(S)/lambda(B))$ distribution. Write $p=lambda(S)/lambda(B)$ and $alpha^prime = alphalambda(B).$
For a sample size of $n,$ let $t_n$ be any (unrandomized) estimator of $exp(-alpha lambda(S)) = exp(-(alphalambda(B)) p) = exp(-alpha^prime p).$ The expectation is
$$E[t_n(X)] = sum_x=0^n binomnxp^x (1-p)^n-x, t_n(x),$$
which equals a polynomial of degree at most $n$ in $p.$ But if $alpha^prime p ne 0,$ the exponential $exp(-alpha^prime p)$ cannot be expressed as a polynomial in $p.$ (One proof: take $n+1$ derivatives. The result for the expectation will be zero but the derivative of the exponential, which itself is an exponential in $p,$ cannot be zero.)
The demonstration for randomized estimators is nearly the same: upon taking expectations, we again obtain a polynomial in $p.$
Consequently, no unbiased estimator exists.
$endgroup$
1
$begingroup$
The answer is in the negative.

A sufficient statistic for a uniform sample is the count $X$ of points observed to lie in $S.$ This count has a Binomial$(n, \lambda(S)/\lambda(B))$ distribution. Write $p = \lambda(S)/\lambda(B)$ and $\alpha' = \alpha\lambda(B).$

For a sample size of $n,$ let $t_n$ be any (unrandomized) estimator of $\exp(-\alpha\lambda(S)) = \exp(-(\alpha\lambda(B))\,p) = \exp(-\alpha' p).$ The expectation is

$$E[t_n(X)] = \sum_{x=0}^n \binom{n}{x} p^x (1-p)^{n-x}\, t_n(x),$$

which is a polynomial of degree at most $n$ in $p.$ But if $\alpha' p \ne 0,$ the exponential $\exp(-\alpha' p)$ cannot be expressed as a polynomial in $p.$ (One proof: differentiate $n+1$ times with respect to $p.$ The polynomial's derivative is identically zero, while the derivative of the exponential, which is itself a nonzero multiple of an exponential in $p,$ is not.)

The demonstration for randomized estimators is nearly the same: upon taking expectations, we again obtain a polynomial in $p.$

Consequently, no unbiased estimator exists.
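To make the bias of the obvious plug-in estimator concrete, here is a small Monte Carlo sketch. The parameter values ($\alpha' = 2,$ $p = 0.3,$ $n = 50$) are hypothetical, chosen purely for illustration, and the plug-in estimator $\exp(-\alpha' X/n)$ is the natural candidate, not one endorsed in the answer:

```python
import math
import random

# Hypothetical illustration values (not from the thread).
alpha_prime = 2.0   # alpha' = alpha * lambda(B)
p = 0.3             # true value of lambda(S)/lambda(B), unknown in practice
n = 50              # sample size
target = math.exp(-alpha_prime * p)

# Naive plug-in estimator: exp(-alpha' * X / n) with X ~ Binomial(n, p).
random.seed(0)
trials = 100_000
total = 0.0
for _ in range(trials):
    x = sum(random.random() < p for _ in range(n))
    total += math.exp(-alpha_prime * x / n)
mean_estimate = total / trials

# Exact expectation via the Binomial probability generating function
# E[t^X] = (p*t + 1 - p)^n, evaluated at t = exp(-alpha'/n).
exact_mean = (p * math.exp(-alpha_prime / n) + 1 - p) ** n

# By Jensen's inequality (exp is convex), the plug-in estimator is biased upward.
print(f"target={target:.4f}  exact mean={exact_mean:.4f}  MC mean={mean_estimate:.4f}")
```

The closed form for the expectation is the same polynomial-in-$p$ phenomenon the proof exploits: $(p\,e^{-\alpha'/n} + 1 - p)^n$ is a degree-$n$ polynomial in $p,$ so it cannot equal $e^{-\alpha' p}$ exactly.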
answered 9 hours ago
whuber♦
215k 34 gold badges 471 silver badges 862 bronze badges
Ah, that's a downer! Thanks for the nice proof. But the Taylor series for $\exp(t)$ converges fairly quickly --- perhaps there's an "approximately unbiased" estimator out there? Not sure what that means (I'm not much of a statistician :-) )
– Justin Solomon
8 hours ago
How quickly, exactly? The answer depends on the value of $\alpha' p$--and therein lies your problem, because you don't know what that value is. You know only that it lies between $0$ and $\alpha.$ You could use that to establish a bound on the bias if you like.
– whuber♦
8 hours ago
In my application I expect $S$ to occupy a large portion of $B$. I'd like to use this value in a pseudo-marginal Metropolis-Hastings acceptance ratio; I'm not sure whether that method can handle even controllable levels of bias...
– Justin Solomon
8 hours ago
BTW I'd really appreciate your thoughts on the other answer to this question!
– Justin Solomon
8 hours ago
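One way to quantify how small an "approximately unbiased" estimator's bias can be: the estimator $(1 - \alpha'/n)^X$ has expectation $(1 - \alpha' p/n)^n,$ which converges to $e^{-\alpha' p}$ as $n$ grows. This estimator is my own illustration, not one proposed in the thread, and the parameter values are hypothetical:

```python
import math

# Hypothetical illustration values (not from the thread).
alpha_prime = 2.0
p = 0.3  # unknown in practice; used here only to evaluate the bias
target = math.exp(-alpha_prime * p)

# Candidate estimator (1 - alpha'/n)^X for X ~ Binomial(n, p), sensible for n > alpha'.
# Its expectation is (1 - alpha'*p/n)^n, again by the probability generating
# function E[t^X] = (p*t + 1 - p)^n, and (1 - alpha'*p/n)^n -> exp(-alpha'*p).
for n in (10, 100, 1000):
    expectation = (1 - alpha_prime * p / n) ** n
    print(n, expectation - target)  # bias shrinks roughly like O(1/n)
```

Since the bias bound depends on the unknown $p$ only through $\alpha' p \in [0, \alpha'],$ the worst case over that interval gives the kind of explicit bound suggested in the comment above.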