Why use regularization instead of decreasing the model's capacity?
Regularization is used to decrease the capacity of a machine learning model so that it does not overfit. But why don't we simply use a model with less capacity in the first place (e.g. decrease the number of layers)? That would also save computation time and memory.
My guess is that different regularization methods make different assumptions about the dataset. If so, what assumptions are made by the common regularizers (L1, L2, dropout, or any other)?
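For concreteness, the kind of choice I mean looks roughly like this (a minimal Keras sketch; the layer widths and penalty strength are arbitrary, just for illustration):

```python
# Minimal sketch of the two alternatives; layer widths and the L2 strength
# are arbitrary choices for illustration only.
from tensorflow import keras
from tensorflow.keras import layers, regularizers

# Option A: simply use a model with less capacity.
small_model = keras.Sequential([
    layers.Dense(16, activation="relu", input_shape=(20,)),
    layers.Dense(1),
])

# Option B: keep the larger model, but regularize it.
regularized_model = keras.Sequential([
    layers.Dense(128, activation="relu", input_shape=(20,),
                 kernel_regularizer=regularizers.l2(1e-4)),
    layers.Dropout(0.5),
    layers.Dense(128, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),
    layers.Dense(1),
])
```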
Thanks in advance!
machine-learning neural-network regularization
asked 8 hours ago by Deep_Ozean
2 Answers
Regularization does decrease the capacity of the model in some sense, but, as you already guessed, different capacity reductions lead to models of different quality and are not interchangeable.
L1 can be interpreted as the assumption that the influence of one factor (represented by a neuron) on another should not be assumed without significant support from the data, i.e. the gain achieved by a larger influence has to outweigh the L1 loss associated with the increased absolute value of the parameter that "connects" them.
L2 does the same, but makes this dependent on the connection strength: very light connections basically need no support (and are therefore not driven further towards exactly zero), while very large connections become almost impossible.
Dropout can be interpreted as training a large number of smaller networks and using the approximated average network for inference: "So training a neural network with dropout can be seen as training a collection of 2^n thinned networks with extensive weight sharing, where each thinned network gets trained very rarely, if at all." (Dropout: A Simple Way to Prevent Neural Networks from Overfitting)
All these methods make certain combinations of network parameters highly improbable, or even impossible, to reach for a given dataset, even though they could otherwise have been the result of training. In this sense the capacity of the model is reduced, but, as one can imagine, some capacity reductions are more useful than others.
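For concreteness, here is a minimal sketch of how these three penalties are typically attached to a Keras model (the layer widths and penalty strengths are arbitrary, chosen only for illustration):

```python
# Minimal sketch: L1, L2 and dropout regularization on a small dense network.
# Layer widths and penalty strengths are arbitrary, for illustration only.
from tensorflow import keras
from tensorflow.keras import layers, regularizers

model = keras.Sequential([
    # L1 drives unsupported weights all the way to zero (sparse connections).
    layers.Dense(64, activation="relu", input_shape=(20,),
                 kernel_regularizer=regularizers.l1(1e-4)),
    # L2 penalizes large weights strongly but rarely pushes them to exactly zero.
    layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),
    # Dropout randomly silences units, approximating an average over thinned networks.
    layers.Dropout(0.5),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
```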
answered 5 hours ago by leonard (edited 5 hours ago)
Regularization is not primarily about avoiding overfitting: it shrinks weights that are not "useful" for making good predictions. It is also used in many other models, where it acts more like feature or model selection (regression, logit, boosting).
The benefit of regularization is that you can work with a high-capacity model without worrying too much about the features (and their representation in the neural network): regularization more or less automatically down-weights parameters that are not important. This makes it a really useful tool, e.g. when you have a lot of information but do not know which of it is actually needed to make good predictions.
Dropout is a different thing, since it randomly drops units (and the weights attached to them) during training. Shrinking means that weights which do not contribute much to good predictions receive less weight in the model: L1 can shrink coefficients exactly to zero, while L2 shrinks them towards zero but never makes them exactly zero.
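A quick way to see the difference between L1 and L2 shrinkage is to fit a Lasso (L1) and a Ridge (L2) regression on the same data and count the coefficients that end up exactly zero (a minimal sketch; the synthetic data and the penalty strength alpha are arbitrary):

```python
# Minimal sketch: L1 (Lasso) produces exact zeros, L2 (Ridge) only shrinks.
# The synthetic data and the penalty strength alpha are arbitrary choices.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

X, y = make_regression(n_samples=200, n_features=50, n_informative=5,
                       noise=10.0, random_state=0)

lasso = Lasso(alpha=1.0).fit(X, y)
ridge = Ridge(alpha=1.0).fit(X, y)

print("Lasso coefficients at exactly zero:", int(np.sum(lasso.coef_ == 0.0)))  # many
print("Ridge coefficients at exactly zero:", int(np.sum(ridge.coef_ == 0.0)))  # typically none
```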
To learn more about regularization, have a look at An Introduction to Statistical Learning; it contains a really instructive chapter on the topic.
answered 8 hours ago by Peter