
Keynote Lectures

Relations, Events, and Explanation
Giancarlo Guizzardi, Free University of Bozen-Bolzano, Italy and University of Twente, Netherlands

Self-Reflective Hybrid Intelligence: Combining Human with Artificial Intelligence and Logic
Catholijn Jonker, Interactive Intelligence group, Delft University of Technology, Netherlands

Probabilistic Graphical Models: On Reasoning, Learning, and Revision
Rudolf Kruse, Computer Science, Otto von Guericke University Magdeburg, Germany

 

Relations, Events, and Explanation

Giancarlo Guizzardi
Free University of Bozen-Bolzano, Italy and University of Twente
Netherlands
 

Brief Bio
Giancarlo Guizzardi is a Full Professor of Computer Science at the University of Twente, The Netherlands. He is also a Full Professor of Computer Science and head of the Conceptual and Cognitive Modeling Research Group (CORE) at the Free University of Bozen-Bolzano, Italy. He has worked for roughly 25 years on Ontology-Driven Conceptual Modeling and has been one of the main contributors to the development of the Unified Foundational Ontology (UFO) and the OntoUML language. He is an associate editor of Applied Ontology and Data & Knowledge Engineering, and a member of the Advisory Board of the International Association for Ontology and its Applications (IAOA).


Abstract
In Computer Science and, in particular, in Artificial Intelligence, ontologies have for the past 30 years been regarded as artifacts capable of supporting knowledge exchange and reuse, and semantic interoperability. However, if they are to play this role, ontologies must be more than just logical specifications, and ontology engineering must be primarily about Ontological Analysis. Ontological Analysis, in turn, is about employing formal (i.e., domain-independent) theories in the philosophical sense, as well as the analytical tools put forth by Ontology (with a capital ‘O’), to unpack the semantic content of representations. One particular type of Ontological Analysis is Truthmaker Analysis, which is strongly connected to the notions of Metaphysical Grounding and Explanation. In this talk, I will first present an ontological theory of Relationships and Events and show that they are the natural truthmakers of Relations. I will then show how this theory can be used to systematically unpack (i.e., explain) the semantic content of domain ontologies, and how this reveals important and subtle aspects of domain modeling that can have a decisive impact on semantic interoperability.
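As a rough, hypothetical illustration of the truthmaker idea (the domain, names, and classes below are assumptions chosen for exposition, not taken from the talk), a binary relation such as "works-for" can be explained by reifying an underlying relationship whose existence is what makes the relational statement true:

from dataclasses import dataclass
from typing import List

# Illustrative sketch: the relation "works-for" is not stored as a bare tuple;
# it is grounded in a reified Employment relationship (a relator) connecting
# its participants. The relationship is the truthmaker of the relation.

@dataclass(frozen=True)
class Person:
    name: str

@dataclass(frozen=True)
class Organization:
    name: str

@dataclass(frozen=True)
class Employment:
    """A reified relationship connecting an employee to an employer."""
    employee: Person
    employer: Organization

class DomainModel:
    def __init__(self) -> None:
        # The model stores truthmakers, not relational facts.
        self.employments: List[Employment] = []

    def hire(self, person: Person, org: Organization) -> Employment:
        relationship = Employment(person, org)
        self.employments.append(relationship)
        return relationship

    def works_for(self, person: Person, org: Organization) -> bool:
        # The relation holds exactly when a grounding Employment exists.
        return any(e.employee == person and e.employer == org
                   for e in self.employments)

model = DomainModel()
alice, acme = Person("Alice"), Organization("ACME")
model.hire(alice, acme)
print(model.works_for(alice, acme))   # True: the Employment is the truthmaker

Here the statement "Alice works for ACME" is true precisely because a corresponding Employment relationship exists in the model, which is the kind of grounding that Truthmaker Analysis makes explicit.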



 

 

Self-Reflective Hybrid Intelligence: Combining Human with Artificial Intelligence and Logic

Catholijn Jonker
Interactive Intelligence group, Delft University of Technology
Netherlands
www.mmi.tudelft.nl/~catholijn
 

Brief Bio
Prof. dr. Catholijn Jonker is head of the Interactive Intelligence group of the Faculty of Electrical Engineering, Mathematics and Computer Science, TU Delft. Jonker is also full professor of Explainable Artificial Intelligence at the Leiden Institute of Advanced Computer Science of Leiden University. She is a Fellow of EurAI, a member of the Academia Europaea, president of the ICT Platform of the Netherlands, a member of the Royal Holland Society of Sciences and Humanities, and a member of the CLAIRE National Advisory Board for the Netherlands. In the past she was chair of the Dutch Network of Female Full Professors and of De Jonge Akademie of the Royal Netherlands Academy of Arts and Sciences. Her prestigious grants include the NWO VICI personal grant on negotiation support systems (1.5 M€, 2007), and the NWO Gravitation consortium grants on “Hybrid Intelligence” (19 M€ subsidy, 41.9 M€, 2019), of which she is vice-coordinator, and “Ethics of Socially Disruptive Technologies” (18 M€ subsidy, 23.6 M€, 2019), of which she is co-applicant.


Abstract
By creating forms of human-AI intelligence, we aim to support the individual and social wellbeing of people. We illustrate this with examples of human-AI decision making and human-robot teamwork. With the aim of creating responsible AI, we start from the position that we want to enhance or augment human intelligence, not replace humans. This implies that, from the start, we have to design for the co-activities that human and AI will undertake and pay special attention to the interdependencies between human activities and those of the AI. We then go to the next level by discussing how such hybrid intelligent systems can perform self-reflection on their own performance and, based on that, adapt their activities and their measurable criteria for behaving responsibly.
Thus we discuss problems and potential solutions for keeping AI, and in particular machine learning algorithms, within bounds.
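As a purely illustrative sketch (the names, criteria, and thresholds below are assumptions, not the systems discussed in the talk), a self-reflective hybrid team could periodically measure its own behaviour against explicit responsibility criteria and shift work back to the human partner when a criterion is violated:

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Criteria:
    min_human_agreement: float = 0.8   # share of AI proposals the human accepts
    max_ai_autonomy: float = 0.5       # share of decisions taken by the AI alone

class HybridTeam:
    def __init__(self) -> None:
        self.criteria = Criteria()
        self.ai_autonomy = 0.5
        self.log: List[Tuple[bool, bool]] = []   # (decided_by_ai, human_agreed)

    def record_decision(self, decided_by_ai: bool, human_agreed: bool) -> None:
        self.log.append((decided_by_ai, human_agreed))

    def reflect(self) -> None:
        """Self-reflection step: measure performance, then adapt activities."""
        if not self.log:
            return
        agreement = sum(a for _, a in self.log) / len(self.log)
        autonomy = sum(d for d, _ in self.log) / len(self.log)
        if (agreement < self.criteria.min_human_agreement
                or autonomy > self.criteria.max_ai_autonomy):
            # Criterion violated: hand more decisions back to the human partner.
            self.ai_autonomy = max(0.0, self.ai_autonomy - 0.1)
        self.log.clear()

team = HybridTeam()
team.record_decision(decided_by_ai=True, human_agreed=False)
team.record_decision(decided_by_ai=True, human_agreed=True)
team.reflect()
print(team.ai_autonomy)   # reduced: agreement fell below the criterion

The point of the sketch is only the loop itself: the system monitors measurable criteria for responsible behaviour and adapts the division of work between human and AI accordingly.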



 

 

Probabilistic Graphical Models: On Reasoning, Learning, and Revision

Rudolf Kruse
Computer Science, Otto von Guericke University Magdeburg
Germany
https://www.ci.ovgu.de/Team/Rudolf+Kruse.html
 

Brief Bio
Rudolf Kruse is Professor at the Faculty of Computer Science of the Otto von Guericke University Magdeburg, Germany. He obtained his Ph.D. and his Habilitation in Mathematics from the Technical University of Braunschweig in 1980 and 1984, respectively. Following a stay at the Fraunhofer Gesellschaft, he joined the Technical University of Braunschweig as a professor of computer science in 1986. In 1996 he founded the Computational Intelligence Group of the Faculty of Computer Science at the Otto von Guericke University Magdeburg. His current main research interests include data science and intelligent systems. He has coauthored 15 monographs and 25 books as well as more than 350 peer-refereed scientific publications in various areas, with more than 18,500 citations and an h-index of 58 in Google Scholar. Rudolf Kruse is a Fellow of the International Fuzzy Systems Association (IFSA), a Fellow of the European Association for Artificial Intelligence (EURAI/ECCAI), and a Fellow of the Institute of Electrical and Electronics Engineers (IEEE). His group has been successful in various industrial applications in cooperation with companies such as Volkswagen, Daimler, BMW, SAP, and British Telecom.


Abstract
Probabilistic Graphical Models are of high relevance for complex industrial applications. The Markov network approach is one of their most prominent representatives and an important tool for structuring uncertain knowledge about high-dimensional domains. The decomposition of the underlying high-dimensional spaces turns out to be useful for making reasoning in such domains feasible. Compared to conditioning a decomposable model on given evidence, learning the structure of the model from data as well as fusing several decomposable models is much more complicated. The important belief change operation of revision has been almost entirely disregarded in the past, and in this context the problem of inconsistencies is of utmost relevance for real-world applications. In this talk it is shown that there are efficient methods for solving such problems. A successful complex industrial application in the automotive industry is presented, in which probabilistic graphical models are used for modelling the various belief change operations.
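As a minimal sketch of the underlying idea (a toy example with assumed factor values, not the industrial models from the talk), a Markov network decomposes a joint distribution into factors over small subsets of variables, and reasoning amounts to conditioning this decomposable model on evidence:

import numpy as np

# Toy Markov network over three binary variables A - B - C:
# P(A, B, C) is proportional to phi_AB(A, B) * phi_BC(B, C).
phi_AB = np.array([[4.0, 1.0],
                   [1.0, 2.0]])        # factor over (A, B)
phi_BC = np.array([[3.0, 1.0],
                   [1.0, 3.0]])        # factor over (B, C)

# Build the (here still tiny) joint distribution from the decomposition.
joint = phi_AB[:, :, None] * phi_BC[None, :, :]   # shape (A, B, C)
joint /= joint.sum()

# Reasoning = conditioning the decomposable model on evidence, e.g. C = 1,
# and marginalizing out the remaining variables.
evidence_c = 1
conditioned = joint[:, :, evidence_c]
conditioned /= conditioned.sum()
print("P(A | C=1) =", conditioned.sum(axis=1))    # marginal over A

In realistic applications the joint is never built explicitly; the factorization is what keeps conditioning tractable, whereas structure learning, fusion of several decomposable models, and revision in the presence of inconsistencies require the additional methods addressed in the talk.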


