Description: Facial expression recognition tools are typically trained and evaluated on benchmark datasets in which many expressions are produced 'at the request' of the expressor and photographed frontally (en face). This does not match real-world conditions, where expressions are weaker and the face is not always turned towards the camera. The project continues previous research published by our team and extended during the previous WSHOP. During the project, one should: (a) explore the previous results, (b) integrate them, (c) perform consistent experiments in a unified environment on a unified dataset (it may be necessary to find or add elements to the dataset!), and (d) summarise the results by indicating the strengths and weaknesses (supported situations) of each API.
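The unified experiments of steps (c) and (d) could be organised around a small evaluation harness like the sketch below. The predictor callable, the toy dataset, and the label names are hypothetical placeholders, not part of any actual FER API under comparison; each real tool would be wrapped in its own `predict` function.

```python
from collections import defaultdict

def evaluate(predict, dataset):
    """Per-expression accuracy for one FER tool.

    predict: callable(image) -> predicted expression label
             (hypothetical wrapper around a real FER API)
    dataset: iterable of (image, true_label) pairs
    Returns a dict mapping each expression present in the
    dataset to the fraction of its samples classified correctly,
    which exposes per-expression strengths and weaknesses.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for image, label in dataset:
        total[label] += 1
        if predict(image) == label:
            correct[label] += 1
    return {expr: correct[expr] / total[expr] for expr in total}

# Toy illustration with a dummy predictor that always says "happiness".
toy = [("img1", "happiness"), ("img2", "happiness"), ("img3", "anger")]
scores = evaluate(lambda img: "happiness", toy)
print(scores)  # {'happiness': 1.0, 'anger': 0.0}
```

Reporting accuracy per expression (rather than a single aggregate number) is what allows step (d): for each API, the expressions and conditions it supports well stand out directly in the result table.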