If I have a 16.67% fail rate (N=24) & I do another 24 tests, what is the likelihood that I get 0 fails by chance?


Doing some code testing: in a pre-fix environment I ran 24 test scenarios, and 20 of the 24 worked as expected; only 4 (16.67%) failed.

In the code where the fix exists, if I do another 24 tests, what are the chances that I get a 100% pass rate even if the code didn't fix the problem?

I feel like I probably need to do more tests, but I can't remember my stats course and was hoping someone here could help.







probability statistical-significance prediction






asked 8 hours ago by Ponch22; edited 6 hours ago by whuber




  • Is it a homework? – Tim, 8 hours ago

  • No, actual work work—I'm doing QA testing for code & trying to figure out just how hard I have to test this to be confident the code fixes the problem – Ponch22, 7 hours ago

  • If these are 24 different scenarios, then you don't have nearly enough data to even start an analysis. If they're the same, we can sort of guess the values. Can you clarify? – Mooing Duck, 5 mins ago
1 Answer
One straightforward way to analyze this situation is to assume for testing purposes that the fix made no difference. Under this assumption, you may view the assignment of the (potential) 48 observations into pre-fix and post-fix groups as being random. If (hypothetically) all the post-fix outcomes work as expected, it means you have observed 44 expected outcomes and 4 "failures" out of 48 (presumably) independent observations.



This permutation test frames your question as "what is the chance that a random partition of 48 test scenarios, of which four are failures, places all four failures in the first group?" Because all such random partitions are equally probable, this reduces the answer to a simple counting calculation:



  • There are $24\cdot 23\cdot 22\cdot 21$ equally-likely ways to place the four failures within the sequence of $24$ pre-fix scenarios.

  • There are $48\cdot 47\cdot 46\cdot 45$ equally-likely ways to place the four failures within the sequence of all $48$ scenarios.


Ergo, under these hypotheses the chances that all four failures occurred before the fix are



$$\frac{24\cdot 23\cdot 22\cdot 21}{48\cdot 47\cdot 46\cdot 45} = \frac{77}{1410} \approx 5.5\%.$$
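As a numerical check, the same probability can be computed in a few lines of Python. The sketch below is an added illustration rather than part of the original answer; it assumes Python 3.8+ and SciPy, and uses the hypergeometric distribution mentioned in the comments further down.

    # Check the 5.5% figure two ways (assumes Python 3.8+ and SciPy).
    from math import prod
    from scipy.stats import hypergeom

    # Direct counting, exactly as in the formula above:
    p_direct = prod(range(21, 25)) / prod(range(45, 49))  # (24*23*22*21)/(48*47*46*45)

    # Same quantity as a hypergeometric probability: a population of 48 scenarios
    # contains 4 failures; draw the 24 pre-fix slots and ask that all 4 failures
    # fall inside that draw.
    p_hyper = hypergeom.pmf(4, 48, 4, 24)

    print(p_direct, p_hyper)  # both ~= 0.0546, i.e. about 5.5%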



The interpretation could be written in the following way:




Suppose four of 24 initial scenarios resulted in failure and none of 24 post-fix scenarios resulted in failure. This had a $5.5\%$ chance of occurring when the fix changed nothing. Because $5.5\%$ is small, it provides some evidence that the fix worked; but because it's not really that small--events with those chances happen all the time--it's probably not enough evidence to convince anyone who is sceptical that the fix worked. Unless collecting more data is very expensive, it might not be enough evidence to support consequential decisions related to the fix.




Even if the fix has already occurred, you have the flexibility to collect more post-fix data. You can experiment with analogous computations to see how many more post-fix scenarios you need to run (all without failure) in order to establish a convincing level of evidence that something changed for the better after the fix. For instance, if you need this chance to be less than $1\%,$ you would want to run $49$ post-fix scenarios rather than $24.$
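One way to run that experiment is to script the same counting argument and search for the smallest number of failure-free post-fix runs that pushes the chance below a chosen threshold. The helper below is an illustrative sketch (the function names are not from the answer):

    # How many failure-free post-fix runs are needed before the chance that all
    # observed failures landed pre-fix purely by luck drops below a threshold?
    from math import comb

    def chance_all_failures_prefix(pre_n, failures, post_n):
        # P(all failures fall in the pre_n pre-fix slots out of pre_n + post_n runs)
        return comb(pre_n, failures) / comb(pre_n + post_n, failures)

    def post_runs_needed(pre_n, failures, threshold):
        post_n = 1
        while chance_all_failures_prefix(pre_n, failures, post_n) >= threshold:
            post_n += 1
        return post_n

    print(post_runs_needed(24, 4, 0.01))  # 49, matching the figure above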



Of course, as soon as one post-fix scenario produces a failure you would likely stop your testing and try to improve the fix. Sophisticated versions of this approach (where you stop once you have enough information, without trying to guess how much you need beforehand) are called sequential testing procedures.






answered 6 hours ago by whuber; edited 6 hours ago
  • I like this... And to make sure I understand: running 36 post-fix scenarios, I'd have ~97.8% confidence all four failures weren't just in the pre-fix scenarios by chance! – Ponch22, 5 hours ago

  • Yes, that's the number I get. – whuber, 5 hours ago

  • I might have to do more pre-fix tests, so let me see if I TRULY understand the math... Let's say I do another 8 pre-fix tests and see 3 new errors. Now with N = 32 I have 7 errors, or a 21.9% fail rate. If I then do 24 additional post-fix tests and see 0 errors in those 24, I'd have ~98.5% confidence the fix worked, because there would only be a 1.45% chance the 7 errors were localized within the first 32 by chance: (32⋅31⋅30⋅29⋅28⋅27⋅26) / (56⋅55⋅54⋅53⋅52⋅51⋅50) ≈ 1.45%. (It's sad I was a math major in college, but I always had trouble with stats!) – Ponch22, 3 hours ago

  • That's right, you got the pattern. – whuber, 3 hours ago

  • @Ponch While I think whuber's answer to the posted question is pitched perfectly, your extended questions in followup suggest that it may be worth noting that the relevant distribution here is called the hypergeometric distribution, with which many programs can do the calculations for you, perhaps making it a little easier to figure out whatever you're trying to work out. It may save you some effort; e.g., Excel has a function for it. – Glen_b, 48 mins ago
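The figures worked out by hand in these comments can also be reproduced with the hypergeometric distribution Glen_b mentions. The snippet below is an added illustration using SciPy, where hypergeom.pmf takes the number of failures seen in the pre-fix group, the total number of runs, the total number of failures, and the number of pre-fix runs, in that order:

    # Reproducing the numbers from the comments with scipy.stats.hypergeom.
    from scipy.stats import hypergeom

    # 4 failures among 24 pre-fix + 36 post-fix runs, all of them pre-fix:
    print(hypergeom.pmf(4, 60, 4, 24))  # ~= 0.0218, i.e. ~97.8% confidence

    # 7 failures among 32 pre-fix + 24 post-fix runs, all of them pre-fix:
    print(hypergeom.pmf(7, 56, 7, 32))  # ~= 0.0145, i.e. ~98.5% confidence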













