
Keynote Lectures

Multimodal Deep Learning in Medical Imaging
Carlo Sansone, University of Naples Federico II, Italy

Intelligent Reuse of Explanation Experiences: The Role of Case-Based Reasoning in Promoting Best Practice in Explainable AI
Nirmalie Wiratunga, School of Computing, Engineering and Technology, Robert Gordon University, Aberdeen, United Kingdom

Recent Advances in Learning from Data Streams
João Gama, University of Porto, Portugal

 

Multimodal Deep Learning in Medical Imaging

Carlo Sansone
University of Naples Federico II
Italy
 

Brief Bio
Carlo Sansone is currently Full Professor of Computer Engineering at the Dipartimento di Ingegneria Elettrica e Tecnologie dell’Informazione of the University of Naples Federico II. His research interests cover the areas of image analysis, pattern recognition, and machine and deep learning. From an applicative point of view, his main contributions have been in the fields of biomedical image analysis, biometrics, intrusion detection in computer networks, and image forensics. He has coordinated several projects in the areas of artificial intelligence, biomedical image interpretation, and network intrusion detection. Prof. Sansone is a member of the IEEE and of the International Association for Pattern Recognition (IAPR). In 2012 he was elected Vice-President of the GIRPR (the Italian association affiliated to the IAPR), serving two terms (four years).


Abstract
In this talk, we will consider how Deep Learning (DL) approaches can profitably exploit the presence of multiple data sources in the medical domain. First, the need to use information from multimodal data sources is addressed. Starting from an analysis of different multimodal data fusion techniques, an innovative approach will be proposed that allows the different modalities to influence each other. However, in medical applications it is often very difficult to obtain high-quality and balanced labelled datasets due to privacy and data-sharing policy issues. Therefore, several applications have leveraged DL approaches for data augmentation, proposing models that can create new, realistic synthetic samples. Consequently, a new data source can be identified, namely a synthetic one. In this context, a data augmentation method based on deep learning and specifically designed for the medical domain will be presented. It exploits the biological characteristics of images by implementing a physiologically aware synthetic image generation process.
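One common way to let two modalities influence each other during fusion is cross-attention over their intermediate features. The sketch below is purely illustrative (random toy features, not the model from the talk): each modality's tokens attend over the other modality's tokens before the two are pooled into a joint embedding.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(queries, context):
    """Each query token attends over the other modality's tokens, so one
    modality's representation is shaped by the other."""
    d = queries.shape[-1]
    weights = softmax(queries @ context.T / np.sqrt(d))
    return weights @ context  # one context summary per query token

# Toy features: 4 "image" tokens and 6 "report" tokens, 8-dim each.
img = rng.normal(size=(4, 8))
txt = rng.normal(size=(6, 8))

# Symmetric cross-influence: residual-add the attended other modality.
img_fused = img + cross_attend(img, txt)
txt_fused = txt + cross_attend(txt, img)

# Pool everything into a single joint embedding for a downstream head.
joint = np.concatenate([img_fused, txt_fused]).mean(axis=0)
print(joint.shape)  # (8,)
```

In a real model the attention would use learned projections; the point here is only the fusion topology, in which each modality's representation is conditioned on the other before any decision is made.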



 

 

Intelligent Reuse of Explanation Experiences: The Role of Case-Based Reasoning in Promoting Best Practice in Explainable AI

Nirmalie Wiratunga
School of Computing, Engineering and Technology, Robert Gordon University, Aberdeen
United Kingdom
https://rgu-repository.worktribe.com/person/142640/nirmalie-wiratunga
 

Brief Bio
Nirmalie Wiratunga is a Professor in Intelligent Systems at RGU's School of Computing, Engineering and Technology and the Associate Dean for Research in the school, with over two decades of experience in computer science and AI research. She held post-doctoral researcher positions on EPSRC-funded projects and was appointed to a Readership in 2009 and a Professorship in 2016. Nirmalie leads the Artificial Intelligence & Reasoning Research Group (AIR) in the school. She has been involved in numerous funded AIR projects, including the development of platforms for reusable explainable AI experiences and initiatives in digital health.


Abstract
The EU now requires that machine learning models provide an explanation of their decisions. Different stakeholders may have different backgrounds, competencies, and goals, which may call for different types of explanations. There are many ways to interpret and explain machine learning (ML) models, but it is difficult to know which method, or combination of methods, to use for different AI models and different deployment situations.

The iSee project aims to tackle this question. In this talk we will discuss why Case-Based Reasoning (CBR) is well placed to promote best practice in Explainable AI (XAI). We will also explore how CBR can be used to reason about end-users' XAI experiences and to enable the sharing and reuse of such experiences through the iSee platform (https://isee4xai.com/). The talk will present the key components that facilitate reasoning in iSee: an ontology to model experiences, cases to capture experiences, a retrieval engine to identify best practice, and an interactive interface to engage with end-users.
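The core CBR idea, retrieving the most similar past experience and reusing its solution, can be sketched in a few lines. Everything below (case attributes, weights, explainer names) is a hypothetical toy, not the iSee ontology or retrieval engine:

```python
# A toy case base of XAI "explanation experiences": each case pairs a
# problem description (who needs an explanation, for what model and data)
# with the explanation strategy that worked for them.
case_base = [
    {"user": "clinician", "model": "cnn", "data": "image",
     "explainer": "Grad-CAM"},
    {"user": "auditor", "model": "ensemble", "data": "tabular",
     "explainer": "SHAP"},
    {"user": "end-user", "model": "cnn", "data": "image",
     "explainer": "counterfactual"},
]

def similarity(query, case, weights):
    """Weighted attribute overlap between the query and a stored case."""
    matched = sum(w for attr, w in weights.items()
                  if query.get(attr) == case.get(attr))
    return matched / sum(weights.values())

def retrieve(query, weights=None):
    """The 'retrieve' step of the CBR cycle: the nearest past experience
    supplies the candidate best-practice explainer to reuse."""
    weights = weights or {"user": 2, "model": 1, "data": 1}
    return max(case_base, key=lambda c: similarity(query, c, weights))

best = retrieve({"user": "clinician", "model": "cnn", "data": "image"})
print(best["explainer"])  # Grad-CAM
```

A platform like iSee layers much more on top (an ontology to structure the cases, revision and retention of new experiences), but retrieval over structured experience descriptions is the step that makes past explanation practice reusable.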



 

 

Recent Advances in Learning from Data Streams

João Gama
University of Porto
Portugal
 

Brief Bio
João Gama is a Full Professor at the Faculty of Economics, University of Porto. He is a researcher and vice-director of LIAAD, a group belonging to INESC TEC. He received his PhD from the University of Porto in 2000. He is a Senior Member of the IEEE. He has worked on several national and European projects on incremental and adaptive learning systems, ubiquitous knowledge discovery, and learning from massive and structured data. He served as co-program chair of ECML 2005, DS 2009, ADMA 2009, IDA 2011, and ECML/PKDD 2015, and as track chair of the Data Streams track at ACM SAC from 2007 to 2016. He organized a series of workshops on Knowledge Discovery from Data Streams with ECML/PKDD and on Knowledge Discovery from Sensor Data with ACM SIGKDD. He is the author of several books on data mining (in Portuguese) and of a monograph on Knowledge Discovery from Data Streams. He has authored more than 250 peer-reviewed papers in areas related to machine learning, data mining, and data streams, and is a member of the editorial boards of the international journals ML, DMKD, TKDE, IDA, NGC, and KAIS. He has (co-)supervised more than 12 PhD students and 50 MSc students.


Abstract
Learning from data streams is a hot topic in machine learning and data mining. This talk presents two problems and discusses streaming techniques to solve them. The first is the application of data stream techniques to predictive maintenance: we propose a two-layer neuro-symbolic approach to explain black-box models, with explanations oriented towards equipment failures. For the second, we present a streaming algorithm for online hyper-parameter tuning. The Self hyper-parameter Tuning (SPT) algorithm is an optimisation algorithm for online hyper-parameter tuning from non-stationary data streams. SPT works as a wrapper over any streaming algorithm and can be used for classification, regression, and recommendation.


