These are some projects that I designed and developed over the last few years. If you are interested in the code or have any questions about them, do not hesitate to send me an informal e-mail. It will be a pleasure to discuss any of the projects.
This section is part of the Classifier Calibration Tutorial: How to assess and improve classifier confidence and uncertainty. The tutorial was organised by Peter Flach, Miquel Perello Nieto, Hao Song, Telmo Silva Filho and Meelis Kull, and held at the 2020 European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD 2020).
Part 4 is a Hands-On Course covering existing Python packages and implementations of calibration techniques, accompanied by a series of Jupyter Notebooks that participants can follow or run themselves. The material will be made available in this GitHub repository beforehand and announced during the break, for download or to be run online with Binder. The content focuses on a full pipeline for training and evaluating classifiers and calibrators for neural and non-neural models, the process of producing statistical comparisons of calibration methods across several datasets, and visualisation tools that give better insight into the strengths and weaknesses of uncalibrated classifiers and their calibrated counterparts (e.g. reliability diagrams in a multi-class scenario).
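To give a flavour of the kind of pipeline the notebooks walk through, here is a minimal, illustrative sketch (not the tutorial material itself) of calibrating a classifier and drawing a reliability diagram with scikit-learn; the synthetic dataset, the Gaussian naive Bayes model and the sigmoid calibration method are arbitrary choices for the example.

```python
# Minimal sketch: train a classifier, calibrate it, and compare reliability
# diagrams. The dataset and models are placeholders, not the tutorial's own.
import matplotlib.pyplot as plt
from sklearn.calibration import CalibratedClassifierCV, calibration_curve
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

uncalibrated = GaussianNB().fit(X_train, y_train)
calibrated = CalibratedClassifierCV(GaussianNB(), method='sigmoid', cv=5)
calibrated.fit(X_train, y_train)

for name, model in [('uncalibrated', uncalibrated), ('calibrated', calibrated)]:
    prob = model.predict_proba(X_test)[:, 1]
    frac_pos, mean_pred = calibration_curve(y_test, prob, n_bins=10)
    plt.plot(mean_pred, frac_pos, marker='o', label=name)

plt.plot([0, 1], [0, 1], 'k--', label='perfectly calibrated')
plt.xlabel('Mean predicted probability')
plt.ylabel('Fraction of positives')
plt.legend()
plt.show()
```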
This is a fun project that some colleagues in the SPHERE project collaborated on. We made it purely for fun and did not invest much time in it, but it was our first collaboration of this kind and may motivate us to record future ones.
Welcome introduction to the PyData Bristol Meetup
The SPHERE project is devoted to advancing eHealth in a smart-home context, and supports full-scale sensing and data analysis to enable a generic healthcare service. We describe, from a data-science perspective, our experience of taking the system out of the laboratory into more than thirty homes in Bristol, UK. We describe the infrastructure and processes that had to be developed along the way, describe how we train and deploy Machine Learning systems in this context, and give a realistic appraisal of the state of the deployed systems. (Find more information at the following link.)
Video depicting all the layers of a deep neural network. The Convolutional Neural Network was pretrained on ImageNet. The first input image is a picture of myself, and at every step the image is zoomed in with a ratio of 0.05. At every step the current input image (frame) is forwarded up to the current hidden layer. The error is set to the layer's own representation in order to maximise all its activations, and a backward pass is then computed to modify the input image. After 100 iterations of zooming in one layer, the next layer is used. See more on my Aalto personal web-page.
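The original code is not shown here, but the following is a rough sketch of the same kind of zoom-and-amplify procedure in PyTorch, assuming an ImageNet-pretrained VGG16; the helper name, step size and layer indices are illustrative assumptions, not the settings used for the video.

```python
# Rough sketch (not the original implementation) of the zoom-and-amplify loop
# described above, using PyTorch and an ImageNet-pretrained VGG16. The helper
# name, step size and layer indices are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

cnn = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()

def amplify_layer(img, layer_idx, steps=100, lr=0.05, zoom=0.05):
    """Zoom into the image and push it towards maximising the activations
    of one hidden layer, one gradient-ascent step per zoom step."""
    for _ in range(steps):
        # zoom in by cropping the borders and resizing back to the original size
        _, _, h, w = img.shape
        dh, dw = int(h * zoom / 2), int(w * zoom / 2)
        img = F.interpolate(img[:, :, dh:h - dh, dw:w - dw], size=(h, w),
                            mode='bilinear', align_corners=False)
        img = img.detach().requires_grad_(True)

        # forward the current frame up to the chosen hidden layer
        x = img
        for i, layer in enumerate(cnn):
            x = layer(x)
            if i == layer_idx:
                break

        # the "error" is the layer's own representation, so the backward pass
        # changes the input in the direction that maximises its activations
        loss = x.norm()
        loss.backward()
        with torch.no_grad():
            img = img + lr * img.grad / (img.grad.abs().mean() + 1e-8)
    return img.detach()

frame = torch.rand(1, 3, 224, 224)     # stands in for the initial photograph
for layer_idx in (5, 10, 17, 24):      # after 100 steps, move to a deeper layer
    frame = amplify_layer(frame, layer_idx)
```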
I did not implement this project; I am only demonstrating how it works. It is a Qt interface for trying OpenCV implementations of several computer vision techniques to find objects in an image. More information and the source code can be found on Google Code.