To tune or not to tune: recommending when to adjust SVM hyper-parameters via Meta-learning

Main author: Mantovani, Rafael G.
Other authors: Rossi, Andre L. D. [UNESP]; Vanschoren, Joaquin; Bischl, Bernd; Carvalho, Andre C. P. L. F.
Publication date: 2015
Document type: Conference paper
Language: English
Source: Repositório Institucional da UNESP
Full text: http://hdl.handle.net/11449/161236
Abstract: Many classification algorithms, such as Neural Networks and Support Vector Machines, have a range of hyper-parameters that may strongly affect the predictive performance of the models they induce. Hence, it is recommended to set the values of these hyper-parameters using optimization techniques. While these techniques usually converge to a good set of values, they typically have a high computational cost, because many candidate sets of values are evaluated during the optimization process. It is often not clear whether this will result in parameter settings that are significantly better than the default settings. When training time is limited, it may help to know when these parameters should definitely be tuned. In this study, we use meta-learning to predict when optimization techniques are expected to lead to models whose predictive performance is better than that obtained with default parameter settings. Hence, we can choose to employ optimization techniques only when they are expected to improve performance, thus reducing the overall computational cost. We evaluate these meta-learning techniques on more than one hundred data sets. The experimental results show that it is possible to accurately predict when optimization techniques should be used instead of the default values suggested by some machine learning libraries.
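The setup described in the abstract can be framed as a binary meta-classification task: each training example is a data set described by meta-features, and the target records whether tuning the SVM significantly beat the default hyper-parameters on that data set. A minimal sketch of that framing, using synthetic meta-data and illustrative meta-features (the paper's actual meta-feature set and meta-learner are not specified here):

```python
# Sketch of the meta-learning idea: predict, from data set meta-features,
# whether SVM hyper-parameter tuning will outperform library defaults.
# All names and values below are illustrative, not the paper's exact setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Assumed meta-features per data set (e.g. size, dimensionality,
# class entropy, mean feature correlation) -- synthetic stand-ins here.
X_meta = rng.random((120, 4))

# Binary meta-target: 1 means tuning significantly beat the defaults
# on that data set (synthetic rule for illustration only).
y_meta = (X_meta[:, 0] + X_meta[:, 2] > 1.0).astype(int)

# A meta-learner trained on these pairs can then recommend, for a new
# data set, whether the cost of tuning is likely to pay off.
meta_learner = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(meta_learner, X_meta, y_meta, cv=10)
print(f"mean CV accuracy of the meta-learner: {scores.mean():.2f}")
```

In practice the meta-target would come from paired statistical tests of tuned versus default SVMs on each of the hundred-plus data sets, and the meta-features from characterization measures of those data sets.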
Record ID: UNSP_e2fc121fa8bdddaf192cc76e5a1c2704
OAI identifier: oai:repositorio.unesp.br:11449/161236
Repository: Repositório Institucional da UNESP, Universidade Estadual Paulista (UNESP) (OpenDOAR: 2946)
Authors: Mantovani, Rafael G.; Rossi, Andre L. D. [UNESP]; Vanschoren, Joaquin; Bischl, Bernd; Carvalho, Andre C. P. L. F.
Affiliations: Universidade de São Paulo (USP), São Carlos, SP, Brazil; Universidade Estadual Paulista (UNESP), Itapeva, SP, Brazil; Eindhoven University of Technology, NL-5600 MB Eindhoven, Netherlands; Ludwig-Maximilians-Universität München, Munich, Germany
Publisher: IEEE
Published in: 2015 International Joint Conference on Neural Networks (IJCNN). New York: IEEE, 8 p., 2015. ISSN 2161-4393
Web of Science ID: WOS:000370730602079
Format: application/pdf
Rights: open access
Deposited: 2018-11-26