The AI System DLV: Ontologies, Reasoning, and More
Nicola Leone, University of Calabria, Italy
Knowledge Graph for Public Safety: Construction, Reasoning and Case Studies
Xindong Wu, Mininglamp Software Systems, China and University of Louisiana at Lafayette, United States
Building a Self-service IoT Analytics Toolbox: Basics, Models and Lessons Learned
Rudi Studer, Karlsruhe Institute of Technology, Germany
From Image Understanding to Text Description and Return - Deep Learning Paradigms for Annotation and Retrieval
Rita Cucchiara, University of Modena and Reggio Emilia, Italy
Conceptual Modelling and Web Applications: How to Make it a Right Partnership?
Oscar Pastor, Universidad Politécnica de Valencia, Spain
The AI System DLV: Ontologies, Reasoning, and More
Nicola Leone
University of Calabria
Italy
Brief Bio
Nicola Leone is professor of Computer Science at the University of Calabria, where he heads the Department of Mathematics and Computer Science, chairs the PhD programme in Mathematics and Computer Science, and leads the AI Lab. Until 2000, he was professor of Database Systems at Vienna University of Technology.
He is internationally renowned for his research on Knowledge Representation, Answer Set Programming (ASP), and Database Theory, and for the development of DLV, a state-of-the-art ASP system which is popular world-wide.
He has published more than 250 papers in prestigious conferences and journals, and has about 10,000 citations, with an h-index of 52.
He is area editor of the TPLP journal (Cambridge University Press) for "Knowledge Representation and Nonmonotonic Reasoning", and has been Keynote Speaker and Program Chair of several international conferences, including JELIA and LPNMR.
He is a fellow of ECCAI (now EurAI), recipient of a Test-of-Time award from the ACM, and winner of many Best Paper Awards in top-level AI conferences.
Abstract
The talk presents DLV, an advanced AI system from the area of Answer Set Programming (ASP), showing its high potential for reasoning over ontologies.
Ontological reasoning services represent fundamental features in the development of the Semantic Web. Among them, scientists are focusing their attention on the so-called ontology-based query answering (OBQA) task, where a (conjunctive) query has to be evaluated over a logical theory (a.k.a. Knowledge Base, or simply KB) consisting of an extensional database (a.k.a. ABox) paired with an ontology (a.k.a. TBox). From a theoretical viewpoint, much has been done. Indeed, Description Logics and Datalog+/- have been recognised as the two main families of formal ontology specification languages for specifying KBs, while OWL has been identified as the official W3C standard language to physically represent and share them; moreover, sophisticated algorithms and techniques have been proposed. Conversely, from a practical point of view, only a few systems for solving complex ontological reasoning services such as OBQA have been developed, and no official standard has been identified yet.
The talk will illustrate the applicability of the well-known ASP system DLV for powerful ontology-based reasoning tasks including OBQA.
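As a rough illustration of the OBQA setting described above, the sketch below encodes a toy KB (ABox facts plus TBox axioms written as Datalog rules) and a conjunctive query in plain ASP. It is purely illustrative: all facts, rules, and the query are invented, and it uses the clingo Python library as a generic ASP solver stand-in rather than DLV itself.

# Minimal OBQA-style sketch in plain ASP (invented example; not DLV's own API).
import clingo

PROGRAM = """
% ABox: extensional facts (invented for illustration)
phdStudent(anna).
professor(bruno).
supervises(bruno, anna).

% TBox: ontology axioms written as Datalog rules
student(X) :- phdStudent(X).
person(X)  :- student(X).

% Conjunctive query: which persons are supervised by a professor?
answer(X) :- person(X), supervises(Y, X), professor(Y).
#show answer/1.
"""

ctl = clingo.Control()
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda model: print(model.symbols(shown=True)))
# Expected output: [answer(anna)]

Here the query answer anna is obtained only by reasoning through the TBox rules (phdStudent entails student entails person), which is exactly what distinguishes OBQA from plain database querying.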
Knowledge Graph for Public Safety: Construction, Reasoning and Case Studies
Xindong Wu
Mininglamp Software Systems, China and University of Louisiana at Lafayette
United States
Brief Bio
Xindong Wu, PhD, is Chief Scientist and Vice President of Mininglamp Software Systems (China), and the Alfred and Helen Lamson Endowed Professor in the School of Computing and Informatics at the University of Louisiana at Lafayette. He is also a Yangtze River Scholar in the School of Computer Science and Information Engineering at the Hefei University of Technology (China), and a Fellow of the IEEE and the AAAS. He holds a PhD in Artificial Intelligence from the University of Edinburgh and Bachelor's and Master's degrees in Computer Science from the Hefei University of Technology, China. Dr. Wu's research interests include data mining, Big Data analytics, knowledge engineering, and Web systems. He has published over 450 refereed papers in these areas in various journals and conferences, as well as 45 books and conference proceedings. His research has been supported by the U.S. National Science Foundation (NSF), the U.S. Department of Defense (DOD), the National Natural Science Foundation of China (NSFC), the Ministry of Science and Technology of China, and the Ministry of Education of China, as well as industrial companies including Microsoft Research, U.S. West Advanced Technologies, and Empact Solutions.
Abstract
Investigating a crime incident to produce a rank-ordered list of suspects, and linking a cluster of suspects to possibly unsolved or even never-reported crime cases, both require domain expertise and information fusion across heterogeneous and autonomous data sources with complex and evolving relationships. In this talk, we will present our efforts on constructing a large knowledge graph for public safety, with billions of nodes and billions of connections, and the crime-investigation services this knowledge graph facilitates. The knowledge graph supports big knowledge processing and industrial AI applications. We will discuss entity fusion and mapping, relationship computing, and event reasoning, with case studies of the knowledge graph.
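The abstract does not detail the fusion algorithms; purely as an illustration of the entity-fusion step mentioned above, the toy sketch below merges person records from two invented data sources using a naive blocking key (surname plus birth year). A production system of the scale described in the talk would use far richer matching and conflict resolution.

# Illustrative only: naive entity fusion over two toy data sources.
from collections import defaultdict

source_a = [{"name": "J. Smith", "birth_year": 1980, "phone": "555-0101"}]
source_b = [{"name": "John Smith", "birth_year": 1980, "address": "12 Elm St"}]

def fusion_key(record):
    """Blocking key: last name token, lower-cased, plus birth year."""
    last = record["name"].split()[-1].lower()
    return (last, record["birth_year"])

fused = defaultdict(dict)
for record in source_a + source_b:
    fused[fusion_key(record)].update(record)  # later attributes complete earlier ones

for key, entity in fused.items():
    print(key, entity)
# ('smith', 1980) {'name': 'John Smith', 'birth_year': 1980,
#                  'phone': '555-0101', 'address': '12 Elm St'}

The fused record then becomes a single node in the knowledge graph, to which relationships from both sources can be attached.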
Building a Self-service IoT Analytics Toolbox: Basics, Models and Lessons Learned
Rudi Studer
Karlsruhe Institute of Technology
Germany
Brief Bio
Rudi Studer was Full Professor in Applied Informatics at the Karlsruhe Institute of Technology (KIT), Institute AIFB, until 2017. In addition, he has been director at the Karlsruhe Service Research Institute (KSRI) as well as at the FZI Research Center for Information Technology. His research interests include knowledge management, Semantic Web technologies and applications, data and text mining, Big Data, and Service Science.
He obtained a Diploma in Computer Science at the University of Stuttgart in 1975. In 1982 he was awarded a Doctor's degree in Mathematics and Informatics at the University of Stuttgart, and in 1985 he obtained his Habilitation in Informatics at the University of Stuttgart. From 1985 to 1989 he was project leader and manager at the Scientific Center of IBM Germany.
Rudi Studer is former president of the Semantic Web Science Association (SWSA) and former Editor-in-chief of the Journal Web Semantics: Science, Services and Agents on the World Wide Web. He is an STI International Fellow.
Abstract
In many application domains such as manufacturing, the integration and continuous processing of real-time sensor data from the Internet of Things (IoT) provides users with the opportunity to continuously monitor and detect upcoming situations. One example is the optimization of maintenance processes based on the current condition of machines (condition-based maintenance). While continuous processing of events in scalable architectures is already well supported by the existing Big Data tool landscape, building such applications still requires an enormous effort which, besides programming skills, demands a rather deep technical background in distributed, scalable infrastructures. Therefore, there is a need for more intuitive solutions supporting the development of real-time applications.
In this talk, we present methods and tools enabling flexible modeling of real-time and batch processing pipelines by domain experts. We will present lightweight, semantics-based models to describe sensors and data processors. Furthermore, we look deeper into graphical modeling of processing pipelines, i.e., stream processing programs, which can be defined using graphical tool support but are automatically deployed in distributed stream processors. We motivate our concepts by showing real-world examples gathered from a number of industry projects, and explain them based on the tool StreamPipes (https://www.streampipes.org), which we have been developing within various research projects over the past years.
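To give a feel for the kind of pipeline a domain expert would assemble, the sketch below wires a simulated sensor source, a data processor, and a sink in plain Python. It is illustrative only: the readings, threshold, and stage names are invented, and StreamPipes itself models such pipelines graphically and deploys them on distributed stream processors rather than as local generator functions.

# Illustrative only: a tiny condition-monitoring pipeline (source -> processor -> sink).
import random
import time

def sensor_source(n=10):
    """Simulated IoT source: emits (timestamp, temperature) events."""
    for _ in range(n):
        yield time.time(), random.uniform(60.0, 110.0)

def threshold_filter(events, limit=90.0):
    """Data processor: keep only events above a critical temperature."""
    for ts, temp in events:
        if temp > limit:
            yield ts, temp

def alert_sink(events):
    """Data sink: notify the maintenance team (here: just print)."""
    for ts, temp in events:
        print(f"ALERT at {ts:.0f}: temperature {temp:.1f} exceeds limit")

# Pipeline composition
alert_sink(threshold_filter(sensor_source()))

The point of the graphical, semantics-based approach in the talk is that each stage is described by a model of its inputs and outputs, so a domain expert can connect compatible stages without writing such code by hand.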
From Image Understanding to Text Description and Return - Deep Learning Paradigms for Annotation and Retrieval
Rita Cucchiara
University of Modena and Reggio Emilia
Italy
Brief Bio
Rita Cucchiara is Full Professor of Computer Engineering at the University of Modena and Reggio Emilia, Department of Engineering "Enzo Ferrari". In Modena, she coordinates the AImagelab research lab, which gathers more than 35 researchers in AI, Artificial Vision, Machine Learning, Pattern Recognition and Multimedia. She is in charge of Red-Vision, the joint laboratory with Ferrari, and, since 2020, of the NVIDIA AI Technology Center (NVAITC@UNIMORE), and is the director of the Modena@ELLIS Unit, one of the units of the European ELLIS network. She coordinates several international, European and national projects on topics related to AI applied to human-AI interaction, video surveillance, automotive, Industry 4.0 and cultural heritage. Since 2018, Rita Cucchiara has headed the CINI National Laboratory of Artificial Intelligence and Intelligent Systems (AIIS). She was President of the Italian Association for Computer Vision, Pattern Recognition and Machine Learning from 2016 to 2018. Since 2017, she has been a member of the Board of Directors of the Italian Institute of Technology. She is currently responsible for the working group on Artificial Intelligence of the National Research Plan PNR 2021-2027 of the Ministry of University and Research. She has more than 450 scientific publications in international journals and conferences. She is Associate Editor of IEEE T-PAMI and will be General Chair of ECCV 2022.
Abstract
Computer vision and Natural Language Processing are converging under the same reasoning paradigm based on Deep Learning. Artificial neural networks can now be designed with specific architectures and training functions to discover knowledge and recognize patterns in both the pictorial and textual domains. In addition, they can bring the two modalities together in an embedding space where features extracted from images and text can be associated. This opens new frontiers in media annotation, information retrieval from documents and image archives, automatic image captioning, and question-answering tasks. The talk will present this framework and will introduce methodologies and neural architectures for describing images with text, creating embedding spaces for both pictorial and linguistic data. Recurrent architectures for image and video captioning, generative architectures for image-to-text translation and vice versa, and new semi-supervised approaches for extending results to different domains, such as natural picture archives, fashion archives and art archives, will be discussed. Results and demos from AImageLab UNIMORE within the projects "AI for Digital Humanities" and "Cultmedia" will be presented.
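As a rough sketch of the joint image-text embedding space mentioned above, the toy PyTorch code below projects image features and caption tokens into a shared, unit-normalized space and compares them by cosine similarity. The encoders are placeholders (a linear projection and a bag-of-words embedding) invented for illustration; the architectures discussed in the talk use far richer visual and language encoders.

# Illustrative only: skeleton of a joint image-text embedding space.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMBED_DIM = 256

class ToyImageEncoder(nn.Module):
    def __init__(self, feat_dim=2048):
        super().__init__()
        self.proj = nn.Linear(feat_dim, EMBED_DIM)   # project image features

    def forward(self, x):
        return F.normalize(self.proj(x), dim=-1)     # unit-norm embedding

class ToyTextEncoder(nn.Module):
    def __init__(self, vocab=1000):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab, EMBED_DIM)  # bag-of-words text encoder

    def forward(self, tokens):
        return F.normalize(self.emb(tokens), dim=-1)

image_enc, text_enc = ToyImageEncoder(), ToyTextEncoder()
img = image_enc(torch.randn(4, 2048))                # 4 images
txt = text_enc(torch.randint(0, 1000, (4, 12)))      # 4 captions of 12 tokens
similarity = img @ txt.t()                           # cosine similarities
print(similarity.shape)                              # torch.Size([4, 4])

Once such a space is trained (typically with a contrastive or ranking loss), retrieval in either direction reduces to nearest-neighbour search over the shared embeddings.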
Conceptual Modelling and Web Applications: How to Make it a Right Partnership?
Oscar Pastor
Universidad Politécnica de Valencia
Spain
http://www.pros.upv.es
Brief Bio
Oscar Pastor is Full Professor and Director of the "Centro de Investigación en Métodos de Producción de Software (PROS)" at the Universidad Politécnica de Valencia (Spain). He received his Ph.D. in 1992. He was a researcher at HP Labs, Bristol, UK. He has published more than two hundred research papers in conference proceedings, journals and books, received numerous research grants from public institutions and private industry, and been keynote speaker at several conferences and workshops. He is Chair of the ER Steering Committee and a member of the Steering Committees of conferences such as CAiSE, ICWE, CIbSE and RCIS. His research activities focus on conceptual modeling, web engineering, requirements engineering, information systems, and model-based software production. He created the object-oriented, formal specification language OASIS and the corresponding software production method OO-METHOD. He led the research and development underlying CARE Technologies, formed in 1996. CARE Technologies has created an advanced MDA-based Conceptual Model Compiler called OlivaNova, a tool that produces a final software product starting from a conceptual schema that represents system requirements. He is currently leading a multidisciplinary project linking Information Systems and Bioinformatics notions, oriented to designing and implementing tools for Conceptual Modeling-based interpretation of the Human Genome information.
Abstract
With decades of contributions and applications, conceptual modeling is very well recognized in information systems engineering. However, the importance and relevance of conceptual modeling in the Web Engineering domain is less well understood. Since Web application development is a complex, challenging field in continuous evolution, with high demands in the context of the digital era we are witnessing, conceptual modeling should play a fundamental role in the design of correct Web application development processes. From a web programming perspective, conceptual modeling, even though it is implementation-independent, should be able to describe a system in sufficient detail that the model can be automatically compiled into an executable system. This keynote will address this issue, focusing on the problems that conceptual modeling approaches face in providing the efficient software development solutions that are required, and emphasizing the particularities of modeling Web applications.
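To make the idea of compiling a conceptual model into executable code concrete, the toy sketch below turns a minimal schema description into a Python class. Both the schema and the generated artifact are invented examples, not output of OlivaNova or OO-METHOD; real conceptual-model compilers cover behaviour, constraints and user interfaces, not just data structure.

# Illustrative only: generating code from a tiny conceptual-schema description.
schema = {
    "class": "Customer",
    "attributes": [("name", "str"), ("email", "str"), ("vip", "bool")],
}

def generate_class(spec):
    """Emit a Python dataclass from a minimal conceptual-schema description."""
    lines = ["from dataclasses import dataclass", "", "@dataclass",
             f"class {spec['class']}:"]
    lines += [f"    {name}: {typ}" for name, typ in spec["attributes"]]
    return "\n".join(lines)

print(generate_class(schema))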