Paper submission NCCV 2019

For the Deep Learning course at TU Delft, taught by Prof. J. van Gemert, we built a project that showed promise, and after some hard work during the summer holiday it was accepted at NCCV 2019. The paper is titled Fine-grained Classification of Rowing Teams, because only a few pixels of the image (hence fine-grained) determine which rowing association a team belongs to. In the paper, we show that this was actually quite an easy task for the neural networks. An example is shown below.

Above: an example image from our rowing boat data set, showing crews from Proteus-Eretes and Njord. Under specific lighting conditions, the different shades of blue can appear similar, which makes it difficult for a deep learning network to classify the teams correctly.


Fine-grained classification tasks, such as identifying different breeds of dog, are quite challenging: the visual differences between categories are small and can easily be overwhelmed by external factors such as object pose and lighting. This work focuses on the specific case of classifying rowing teams from various associations.
Currently, photos taken at rowing competitions are manually classified by a small set of members, a painstaking process. To alleviate this, deep learning models can be used to classify the images much faster. Recent studies show that localising manually defined parts, and modelling based on these parts, improves on vanilla convolutional models, so this work also investigates the detection of clothing attributes.
The networks were trained and tested on a partially labelled data set mainly consisting of rowers from multiple associations.
The paper demonstrates classification of up to ten rowing associations using deep learning networks: the smaller VGG network achieved 90.1% accuracy, whereas ResNet was limited to 87.20%. Adding attention to the ResNet reduced performance further, to 78.10%.

Paper submission link:

Paper source code link: