
Can I arbitrarily eliminate 20% of my training data if doing so significantly improves model accuracy?


My dataset contains 2,000 records with 125 meaningful fields, 5 of which follow a highly skewed lognormal distribution.

I've found that if I eliminate all records below some threshold on this lognormal behavior (by combining those fields into a single score, then filtering at the Nth percentile), my model's accuracy improves from ~78% to ~86% with a highly tuned random forest classifier. The filter is applied only after splitting my data into train and test sets (which is done after SMOTE).

What makes this particularly odd is that the filter improves results across multiple sampling methods.

Is this filtering acceptable? Why might it result in better predictions?
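For concreteness, the filtering step described above might look like the following stdlib-only sketch (field and score names are hypothetical; the split happens first here, and the percentile filter touches the training split only):

```python
import random

random.seed(0)

# Hypothetical stand-in for the 5 skewed fields: one combined score per record.
records = [{"score": random.lognormvariate(0, 1), "label": random.randint(0, 1)}
           for _ in range(2000)]

def filter_train_by_percentile(train, pct=20):
    """Drop training records whose combined score falls below the pct-th
    percentile of the training scores. Applied to the training split only,
    never to the test split."""
    scores = sorted(r["score"] for r in train)
    threshold = scores[int(len(scores) * pct / 100)]
    return [r for r in train if r["score"] >= threshold]

# Split first (80/20), then filter the training portion only.
random.shuffle(records)
split = int(0.8 * len(records))
train, test = records[:split], records[split:]
train_filtered = filter_train_by_percentile(train, pct=20)
```

The key property of this version is that the test split never sees the threshold, so the evaluation distribution is untouched by the filter.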










  • Do you also threshold the test data? – Elliot, 7 hours ago
  • I'm not sure what you mean @Elliot – Yaakov Bressler, 6 hours ago
  • Do you also filter out the test data you split off earlier? – Elliot, 6 hours ago
  • No @Elliot, the data is split, then the filter is applied to the train set only. A second iteration would start from the main data, then resplit, then refilter. – Yaakov Bressler, 3 hours ago
  • Okay, I’ll make an answer. – Elliot, 2 hours ago


















Tags: classification, random-forest






asked 8 hours ago









Yaakov Bressler












1 Answer































One flaw in your procedure is the use of SMOTE before splitting into train/test sets. This should be avoided: the test set may then contain synthetic examples whose generation depended on the training data, and which lie very close to that data in your feature space (SMOTE interpolates between nearest neighbours under Euclidean distance).

Moreover, if most of the minority class lies in the non-skewed region of your specific variables, those points will also be oversampled, so this reduction of the variable space will produce an overly optimistic performance estimate that does not reflect the real distribution of the data.
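The leakage can be seen with a minimal stdlib-only sketch. This is a toy 1-D stand-in for SMOTE's nearest-neighbour interpolation (names and numbers are illustrative, not the real imbalanced-learn API):

```python
import random

random.seed(1)

def smote_like(minority, n_new):
    """Toy 1-D SMOTE: each synthetic point interpolates between a random
    minority sample and its nearest minority neighbour."""
    synthetic = []
    for _ in range(n_new):
        a = random.choice(minority)
        b = min((m for m in minority if m != a), key=lambda m: abs(m - a))
        synthetic.append(a + random.random() * (b - a))
    return synthetic

minority = [random.gauss(0.0, 1.0) for _ in range(20)]

# Oversampling *before* the split: synthetic points can land in the test
# split while the real points they interpolate between stay in train, so
# the test set is no longer independent of the training set.
augmented = minority + smote_like(minority, n_new=20)
random.shuffle(augmented)
train, test = augmented[:30], augmented[30:]
```

Every synthetic point sits inside the range spanned by the real minority samples, so any synthetic point that ends up in `test` is, by construction, close to points that remain in `train`. Splitting first and oversampling only the training split avoids this.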






  • Thanks! This is really neat. I had a loud "ahhhha" moment just now. – Yaakov Bressler, 19 mins ago













answered 2 hours ago









Elliot













