Assessment of deep learning for gender classification on traditional datasets
M. Leo; P. L. Mazzeo; P. Spagnolo
2016
Abstract
Deep learning has become a popular and effective way to address a wide range of problems. In computer vision in particular, it has been exploited to achieve satisfactory recognition performance under unconstrained conditions. However, this race toward ever better performance in extreme conditions has overshadowed an important step, namely the assessment of the impact of this new methodology on the traditional problems that researchers had worked on for years. This is particularly true for biometrics applications, where deep learning has been evaluated directly on the newest, larger, and more challenging datasets. This leads to a purely data-driven evaluation that makes it difficult to analyze the relationships between network configurations, the learning process, and the observed outcomes. This paper tries to partially fill this gap by applying a DNN for gender recognition on the MORPH dataset and evaluating how a lower cardinality of training examples can bias recognition performance.
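
The following is a minimal sketch (not the authors' code, whose architecture and training details are not given in this record) of the kind of experiment the abstract describes: the same network is retrained on progressively smaller random subsets of the training data and evaluated on a fixed test set, so that the effect of training-set cardinality on accuracy can be observed in isolation. It assumes PyTorch, face crops resized to 64x64, a small generic CNN, and that train_set/test_set are PyTorch datasets wrapping the (licensed, not freely downloadable) MORPH images.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Subset

def make_cnn():
    # Small generic CNN; a stand-in for whatever DNN the paper used.
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * 16 * 16, 2),  # two classes: male / female
    )

def evaluate(model, loader, device):
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            pred = model(x.to(device)).argmax(dim=1)
            correct += (pred == y.to(device)).sum().item()
            total += y.numel()
    return correct / total

def run(train_set, test_set, fractions=(1.0, 0.5, 0.25, 0.1), epochs=5):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    test_loader = DataLoader(test_set, batch_size=128)
    for frac in fractions:
        # Reduce training-set cardinality by random subsampling,
        # keeping the test set fixed across all runs.
        n = int(len(train_set) * frac)
        idx = torch.randperm(len(train_set))[:n]
        loader = DataLoader(Subset(train_set, idx.tolist()),
                            batch_size=64, shuffle=True)
        model = make_cnn().to(device)
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()
        model.train()
        for _ in range(epochs):
            for x, y in loader:
                opt.zero_grad()
                loss = loss_fn(model(x.to(device)), y.to(device))
                loss.backward()
                opt.step()
        acc = evaluate(model, test_loader, device)
        print(f"train fraction {frac:.2f} ({n} samples): accuracy {acc:.3f}")

Retraining from scratch at each fraction, rather than fine-tuning one model, keeps the comparison clean: any drop in test accuracy can then be attributed to the reduced example cardinality rather than to the training schedule.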