
Keynote Lectures

IC3K is a joint conference composed of three concurrent conferences: KDIR, KEOD and KMIS. These three conferences are always co-located and held in parallel. Keynote lectures are plenary sessions and can be attended by all IC3K participants.

Big Data Mining Services and Distributed Knowledge Discovery Applications on Clouds
Domenico Talia, University of Calabria and Fuzhou University, Italy

Big Data Integration - State of the Art & Challenges
Sonia Bergamaschi, Dipartimento di Ingegneria dell'Informazione, DIEF - University of Modena and Reggio Emilia, Italy

Semantics of Innovation
Michele M. Missikoff, Institute of Sciences and Technologies of Cognition, ISTC-CNR, Italy

No Knowledge Without Processes - Process Mining as a Tool to Find Out What People and Organizations Really Do
Wil Van Der Aalst, Technology Management - Information Systems, Technische Universiteit Eindhoven, Netherlands

A Research Journey into Enterprise Governance of IT
Wim Van Grembergen, University of Antwerp, Belgium

Bridging the Emotional Gap - From Objective Representations to Subjective Interpretations
Marie-Jeanne Lesot, Independent Researcher, France

Towards an Ontology of Software
Nicola Guarino, Independent Researcher, Italy

 

Big Data Mining Services and Distributed Knowledge Discovery Applications on Clouds

Domenico Talia
University of Calabria and Fuzhou University
Italy
 

Brief Bio
Domenico Talia is a full professor of computer engineering at the University of Calabria and an adjunct professor at Fuzhou University. He is a partner of the startup DtoK Lab. His research interests include Big Data analysis, parallel and distributed data mining algorithms, Cloud computing, distributed knowledge discovery, mobile computing, distributed computing, peer-to-peer systems, and parallel programming. Talia has published ten books and about 400 papers in archival journals such as CACM, Computer, IEEE TKDE, IEEE TSE, IEEE TSMC-B, IEEE Micro, ACM Computing Surveys, FGCS, Parallel Computing and IEEE Internet Computing, and in international conference proceedings. He is a member of the editorial boards of IEEE Transactions on Parallel and Distributed Computing, the Future Generation Computer Systems journal, the International Journal on Web and Grid Services, the Scalable Computing: Practice and Experience journal, the Journal of Cloud Computing, and the Web Intelligence and Agent Systems International journal. Talia has been a project evaluator for several international institutions such as the European Commission, AERES in France, the Austrian Science Fund, the Croucher Foundation, and the Russian Federation Government. He has served as a program chair, organizer, or program committee member of several international scientific conferences and has given many invited talks and seminars at international conferences and schools. Talia is a member of the ACM and a senior member of the IEEE.


Abstract
Digital data repositories are more and more massive and distributed; therefore, we need smart data analysis techniques and scalable architectures to extract useful information from them in reduced time. Cloud computing infrastructures offer effective support for addressing both the computational and data storage needs of big data mining and parallel knowledge discovery applications. In fact, complex data mining tasks involve data- and compute-intensive algorithms that require large and efficient storage facilities together with high-performance processors to get results in acceptable times. In this talk we introduce the topic and the main research issues, and then we present a Data Mining Cloud Framework designed for developing and executing distributed data analytics applications as workflows of services. In this environment, data sets, analysis tools, data mining algorithms and knowledge models are implemented as single services that can be combined, through a visual programming interface, into distributed workflows to be executed on Clouds. The first implementation of the Data Mining Cloud Framework on Azure is presented and the main features of the graphical programming interface are described.
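
To make the workflow-of-services idea concrete, here is a minimal, illustrative Python sketch of composing data sets, mining algorithms and knowledge models as tasks with explicit dependencies. The class and method names are hypothetical and do not belong to the Data Mining Cloud Framework's actual API, which is not detailed in this abstract; in the framework itself, composition is done graphically and each task runs as a Cloud service rather than a local Python callable.

# Illustrative sketch only: all names are hypothetical, not the framework's API.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Any


@dataclass
class Task:
    """A single service node in a data-analysis workflow."""
    name: str
    run: Callable[[Dict[str, Any]], Any]      # the service invocation
    depends_on: List[str] = field(default_factory=list)


class Workflow:
    """Executes tasks in dependency order, mimicking a workflow of services."""
    def __init__(self) -> None:
        self.tasks: Dict[str, Task] = {}

    def add(self, task: Task) -> None:
        self.tasks[task.name] = task

    def execute(self) -> Dict[str, Any]:
        results: Dict[str, Any] = {}
        pending = dict(self.tasks)
        while pending:
            # Tasks whose inputs are all available can run.
            ready = [t for t in pending.values()
                     if all(d in results for d in t.depends_on)]
            if not ready:
                raise RuntimeError("cyclic or unsatisfied dependencies")
            for task in ready:
                inputs = {d: results[d] for d in task.depends_on}
                results[task.name] = task.run(inputs)
                del pending[task.name]
        return results


# Example composition: dataset -> mining algorithm -> knowledge model report.
wf = Workflow()
wf.add(Task("load_dataset", lambda _: list(range(10))))
wf.add(Task("cluster", lambda inp: {"clusters": 2, "n": len(inp["load_dataset"])},
            depends_on=["load_dataset"]))
wf.add(Task("report", lambda inp: f"found {inp['cluster']['clusters']} clusters",
            depends_on=["cluster"]))
print(wf.execute()["report"])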



 

 

Big Data Integration - State of the Art & Challenges

Sonia Bergamaschi
Dipartimento di Ingegneria dell'Informazione, DIEF - University of Modena and Reggio Emilia
Italy
http://www.dbgroup.unimo.it/Bergamaschi.html
 

Brief Bio
Sonia Bergamaschi received her Laurea degree in Mathematics from the Università di Modena in 1977. She is currently a full professor of Computer Engineering at the Engineering Department “Enzo Ferrari” of the Università di Modena e Reggio Emilia, where she leads the DBGROUP. Her research activity has been mainly devoted to knowledge representation and management in the context of very large databases, covering both theoretical and implementation aspects. Since 1985 she has been very active in coupling artificial intelligence (Description Logics) and database techniques to develop Intelligent Database Systems. On this topic very relevant theoretical results have been obtained, and a system based on them, ODB-Tools, performing consistency checking and semantic query optimization in object-oriented databases, has been developed. Since 1999, her research efforts have been devoted to the Intelligent Information Integration research area. A data integration system, called MOMIS, which provides integrated access to structured and semi-structured data sources and allows users to pose a single query and receive a single unified answer, has been developed. In 2009 she founded the academic (UNIMORE) start-up “DataRiver”, whose aim was delivering an open-source version of the MOMIS system (first release in April 2010). Since 2010 her research activities have been extended to keyword search on databases, the Semantic Web, and the automatic annotation of data sources. Recently, her research efforts have been devoted to Big Data and Big Data analytics. Sonia Bergamaschi was coordinator of and participant in many ICT European projects: SEWASIE (2002-2005), WINK (2002-2003), STASIS (2006-2009), FACIT-SME (2010-2012) and Keystone (2013-2017). She was coordinator of the MURST FIRB project “NeP4B (Networked Peers for Business)” (2006-2009). She has published more than one hundred international journal and conference papers, and her research has been funded by the Italian MURST, CNR and ASI institutions and by European Community projects. She has served on the committees of international and national database and AI conferences. She is a member of the IEEE Computer Society and of the ACM.


Abstract
Big data is a popular term describing the exponential growth, availability and use of information, both structured and unstructured. Much has been written on the big data trend and its potential for innovation and growth of enterprises. The advice of IDC (one of the premier advisory firms specialized in information technology) for organizations and IT leaders is to focus on the ever-increasing volume, variety and velocity of information that forms big data.
In most cases, such huge volumes of data come from multiple sources and across heterogeneous systems; thus, the data have to be linked, matched, cleansed and transformed. Moreover, it is necessary to determine how disparate data relate to common definitions and how to systematically integrate structured and unstructured data assets to produce useful, high-quality and up-to-date information.
The research area of Data Integration, active since the 90s, has provided good techniques for facing the above issues in a unifying framework, Relational Databases (RDB), with reference to a less complex scenario (smaller volume, variety and velocity). Moreover, simpler forms of integration among different databases can be efficiently resolved by the Data Federation technologies available for DBMSs today.
Adopting RDB as a general framework for big data integration and solving the issues above, namely volume, variety, variability and velocity, by using more powerful RDBMS technologies enhanced with data integration techniques is one possible choice. On the other hand, new emerging technologies have come into play: NoSQL systems and technologies, data warehouse appliance platforms provided by the major software players, data governance platforms, etc.
In this talk, Prof. Sonia Bergamaschi will provide an overview of this exciting field, which will become more and more important.
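
For readers unfamiliar with the linking and cleansing steps mentioned above, the following is a minimal Python sketch of matching records across two heterogeneous sources. It illustrates only the general idea and is not part of MOMIS or of any system discussed in the talk; the record fields and the similarity threshold are invented for the example.

# Illustrative sketch only: simple record cleansing and matching across sources.
from difflib import SequenceMatcher


def cleanse(record: dict) -> dict:
    """Normalize values so records from heterogeneous sources become comparable."""
    return {k: " ".join(str(v).lower().split()) for k, v in record.items()}


def similarity(a: str, b: str) -> float:
    """Rough string similarity in [0, 1]; real systems use more robust matchers."""
    return SequenceMatcher(None, a, b).ratio()


def match(source_a: list, source_b: list, threshold: float = 0.85) -> list:
    """Pairwise matching on the hypothetical 'name' attribute."""
    links = []
    for ra in map(cleanse, source_a):
        for rb in map(cleanse, source_b):
            if similarity(ra["name"], rb["name"]) >= threshold:
                links.append((ra, rb))
    return links


# Two toy sources describing the same real-world entity slightly differently.
crm = [{"name": "ACME  Corp.", "city": "Modena"}]
erp = [{"name": "acme corp", "city": "Modena"}]
print(match(crm, erp))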



 

 

Semantics of Innovation

Michele M. Missikoff
Institute of Sciences and Technologies of Cognition, ISTC-CNR
Italy
 

Brief Bio
Elaheh Pourabbas is a researcher at Istituto di Analisi dei Sistemi ed Informatica (IASI) “Antonio Ruberti” of the National Research Council of Italy. She received her MS degree in Electrical Engineering from the University of Rome “La Sapienza” in 1992, and her PhD from the University of Bologna in 1997. In 2005 she was awarded a Fulbright Fellowship in support of research carried out at the University of California-Lawrence Berkeley National Laboratory, Berkeley, USA. She served as referee of numerous international journals and conferences, and she took part in various research projects of the European Framework Programs and bilateral projects with international institutions. Her research interests include Semantic Web, Ontologies and Similarity Reasoning, Data Warehousing-OLAP, Spatial Databases, Database Management Systems.


Abstract
Innovation is today one of the most used terms when talking about strategies to recover from the current economic downturn. However, in the large majority of cases, the term is used as a generic 'placeholder', a sort of container whose actual content is left to intuition. If you ask questions, trying to get a deeper understanding, you realise that the notion of 'innovation' is very rich and articulated and, at the same time, that ideas about it are in general rather vague.

In the European project BIVEE, which has been active for three years, we studied business and enterprise innovation, proposing a solution based on the use of semantic technologies, with a focus on Virtual Enterprises (essentially, networks of cooperating SMEs). Innovation in its essence is seen as a complex, ill-defined process of knowledge enrichment: starting from a given problem or idea, the process requires a large amount of (existing) knowledge to produce the new knowledge necessary to solve the given problem and/or transform the idea into a concrete product (tangible or intangible).

This talk starts with an overview of the broad, encompassing notion of innovation, with its facets and articulations, and then illustrates a knowledge-centric approach to innovation support and management. Such an approach has the desirable property of being largely independent of any specific industrial sector, and it can be easily adapted to different kinds of production, from manufacturing to the service industry.



 

 

No Knowledge Without Processes - Process Mining as a Tool to Find Out What People and Organizations Really Do

Wil Van Der Aalst
Technology Management - Information Systems, Technische Universiteit Eindhoven
Netherlands
 

Brief Bio
Prof.dr.ir. Wil van der Aalst is a full professor of Information Systems at the Technische Universiteit Eindhoven (TU/e). At TU/e he is the scientific director of the Data Science Center Eindhoven (DSC/e). Since 2003 he has held a part-time position at Queensland University of Technology (QUT). His personal research interests include workflow management, process mining, Petri nets, business process management, process modeling, and process analysis. Wil van der Aalst has published more than 180 journal papers, 18 books (as author or editor), 400 refereed conference/workshop publications, and 60 book chapters. Many of his papers are highly cited (he is one of the most cited computer scientists in the world, with an H-index of 122 according to Google Scholar) and his ideas have influenced researchers, software developers, and standardization committees working on process support. He has been a co-chair of many conferences including the Business Process Management conference, the International Conference on Cooperative Information Systems, the International Conference on the Application and Theory of Petri Nets, and the IEEE International Conference on Services Computing. He is also editor/member of the editorial board of several journals, including Computing, Distributed and Parallel Databases, Software and Systems Modeling, the International Journal of Business Process Integration and Management, the International Journal on Enterprise Modelling and Information Systems Architectures, Computers in Industry, Business & Information Systems Engineering, IEEE Transactions on Services Computing, Lecture Notes in Business Information Processing, and Transactions on Petri Nets and Other Models of Concurrency. In 2012, he received the degree of doctor honoris causa from Hasselt University in Belgium. He served as scientific director of the International Laboratory of Process-Aware Information Systems of the National Research University, Higher School of Economics in Moscow. In 2013, he was appointed as Distinguished University Professor of TU/e and was awarded an honorary guest professorship at Tsinghua University. In 2015, he was appointed as honorary professor at the National Research University, Higher School of Economics in Moscow. He is also a member of the Royal Netherlands Academy of Arts and Sciences (Koninklijke Nederlandse Akademie van Wetenschappen), the Royal Holland Society of Sciences and Humanities (Koninklijke Hollandsche Maatschappij der Wetenschappen) and the Academy of Europe (Academia Europaea).


Abstract
Recently, process mining emerged as a new scientific discipline on the interface between process models and event data. Whereas conventional Business Process Management (BPM) approaches are mostly model-driven with little consideration for event data, the increasing availability of high-quality data enables management decisions based on “evidence” rather than PowerPoint slides or Visio diagrams. Process mining can be used to (better) configure BPM systems and check compliance. Moreover, the high-quality event logs of BPM systems allow for advanced forms of process mining such as prediction, recommendation, and trend analysis. The challenge is to turn torrents of event data ("Big Data") into valuable insights related to performance and compliance. The results can be used to identify and understand bottlenecks, inefficiencies, deviations, and risks. Process mining helps organizations to "mine their own business": they are enabled to discover, monitor and improve real processes by extracting knowledge from event logs. In his talk, Prof. Wil van der Aalst will provide an overview of this exciting field that will become more and more important for BPM.
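
For readers who want to try the discover-and-check cycle described above, here is a minimal sketch using the open-source pm4py process-mining library. The event-log file name is hypothetical and the exact calls assume a recent pm4py release; this is an illustration of the general workflow, not material from the talk.

# Minimal process-mining sketch with pm4py (hypothetical event-log file name).
import pm4py

# 1. Load an event log recording what people and systems really did.
log = pm4py.read_xes("purchase_orders.xes")

# 2. Discover a process model (a Petri net) directly from the observed behaviour.
net, initial_marking, final_marking = pm4py.discover_petri_net_inductive(log)

# 3. Check conformance: replay the log on the model to expose deviations.
fitness = pm4py.fitness_token_based_replay(log, net, initial_marking, final_marking)
print("log fitness:", fitness)

# 4. Inspect the model to locate bottlenecks and inefficiencies.
pm4py.view_petri_net(net, initial_marking, final_marking)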



 

 

A Research Journey into Enterprise Governance of IT

Wim Van Grembergen
University of Antwerp
Belgium
 

Brief Bio
Wim Van Grembergen is a full professor at the Economics and Management Faculty of the University of Antwerp (UA) and an executive professor at the Antwerp Management School (AMS). He teaches information systems at master and executive level, and his research covers IT governance, IT strategy, IT performance management and the IT balanced scorecard. Within his IT Alignment and Governance (ITAG) Research Institute (www.uams.be/itag) he conducts research for ISACA/ITGI on IT governance and supports the continuous development of COBIT and VAL IT. Currently he is involved in the development of COBIT 5. Dr. Van Grembergen is a frequent speaker at academic and professional meetings and conferences and has served in a consulting capacity to a number of firms. He has several publications in leading academic journals and has published books on IT governance and the IT balanced scorecard. His most recent book, “Enterprise Governance of IT: Achieving Strategic Alignment and Value”, was published in 2009 (Springer, New York).


Abstract
Enterprise governance of IT is a relatively new concept in the literature and is gaining more and more interest in both the academic and the practitioner worlds. Enterprise governance of IT addresses the definition and implementation of structures, processes and relational mechanisms that enable both business and IT people to execute their responsibilities in support of business/IT alignment and the creation of value from IT-enabled business investments. In his research talk, Wim Van Grembergen will discuss the important theories and practices around enterprise governance of IT and give an overview of his ten years of research on this topic. He will also introduce the recently published COBIT 5 framework. This new practitioner framework now clearly distinguishes between IT governance and IT management and offers interesting opportunities for future research.



 

 

Bridging the Emotional Gap - From Objective Representations to Subjective Interpretations

Marie-Jeanne Lesot
Independent Researcher
France
 

Brief Bio
Marie-Jeanne Lesot obtained her PhD in 2005 from the University Pierre et Marie Curie in Paris. Since 2006 she has been an associate professor in the Computer Science Laboratory of Paris 6 (LIP6) and a member of the Learning and Fuzzy Intelligent Systems (LFI) group. Her research focuses on fuzzy machine learning with the objective of data interpretation and semantics integration and, in particular, of modelling and managing subjective information; her interests include similarity measures, fuzzy clustering, linguistic summaries, affective computing and information scoring.


Abstract
In the framework of affective computing, emotion mining constitutes a classification task that aims at recognising the emotional content of various types of data including, but not limited to, texts, images or physiological signals. It adds to the traditional semantic gap, between low-level numerical data descriptions and their high-level conceptual interpretations, the difficulty of going from an objective to a subjective representation.

After discussing the difficulty of building a computational model of the labels to be considered in this specific classification task, due to the essential ambiguity and imprecision of emotions, the talk will illustrate the shift from numerical data representations to the emotions the data convey. This shift relies on the integration of intermediate subjectivity levels, exploiting either external knowledge, to include emotional information in the objective representation, or a subjective but non-emotional level.
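
As a toy illustration of what graded, ambiguous emotion labels can look like computationally, here is a short Python sketch that maps a single objective feature (a hypothetical normalized arousal score) to overlapping fuzzy memberships. The labels, membership shapes and numeric values are invented for the example and do not represent the speaker's models.

# Illustrative only (not the speaker's method): ambiguous emotion labels modelled
# as overlapping fuzzy sets over one objective feature, a hypothetical
# normalized arousal score.
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)


# Hypothetical label definitions; the overlaps encode the imprecision of emotions.
EMOTION_FUZZY_SETS = {
    "calm": (-0.5, 0.0, 0.5),
    "excited": (0.25, 0.5, 0.75),
    "agitated": (0.5, 1.0, 1.5),
}


def emotion_memberships(arousal: float) -> dict:
    """Map an objective score to graded, possibly overlapping subjective labels."""
    return {label: round(triangular(arousal, *params), 2)
            for label, params in EMOTION_FUZZY_SETS.items()}


print(emotion_memberships(0.55))  # {'calm': 0.0, 'excited': 0.8, 'agitated': 0.1}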



 

 

Towards an Ontology of Software

Nicola Guarino
Independent Researcher
Italy
 

Brief Bio
Nicola Guarino is a research director at the Institute of Cognitive Sciences and Technologies of the Italian National Research Council (ISTC-CNR), where he leads the Laboratory for Applied Ontology (LOA) in Trento. Since 1991 he has been playing a leading role in the ontology field, developing a strongly interdisciplinary approach that combines Computer Science, Philosophy, and Linguistics. His impact is attested by a long list of widely cited papers and many keynote talks and tutorials at major conferences involving different communities. Among the best-known results of his lab are the OntoClean methodology and the DOLCE foundational ontology. His current research interests include conceptual modeling, service science, socio-technical systems, and e-government. He is founder and editor-in-chief (with Mark Musen) of the Applied Ontology journal, founder and past president of the International Association for Ontology and its Applications, and an editorial board member of the International Journal of Semantic Web and Information Systems and the Journal of Data Semantics.


Abstract
For many, software is just code, something intangible best defined in contrast with hardware, but this is not particularly illuminating. Microsoft Word turned 30 last year. During its lifetime it has been the subject of numerous changes, as its requirements, code and documentation have continuously evolved. Still, a community of users recognizes it as “the same software product”, a persistent object undergoing several changes through a social process involving owners, developers, salespeople and users, and it is still producing recognizable effects that meet the same core requirements. It is this process that makes software something different from just a piece of code, and justifies its intrinsic nature as a social artifact. Building on Jackson's and Zave's seminal work on the foundations of requirements engineering, I will present in this talk a first attempt towards an ontology of software and related notions that accounts for these intuitions, and adopt it in software configuration management to provide a better understanding and control of software changes. This is ongoing work with Giancarlo Guizzardi, John Mylopoulos, and Xiaowei Wang.
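
As a rough illustration of the distinction the abstract draws between a software product and any particular piece of code, the following Python sketch keeps the product's identity and core requirements fixed while its code versions evolve. The classes and attributes are invented for the example and do not reproduce the ontology presented in the talk.

# Illustrative only: a toy model of a software product versus its code versions.
from dataclasses import dataclass, field
from typing import List


@dataclass(frozen=True)
class CodeVersion:
    """A concrete artifact: one snapshot of the code base."""
    version: str
    checksum: str


@dataclass
class SoftwareProduct:
    """A persistent social artifact: it keeps its identity while code and
    documentation evolve around a stable core of requirements."""
    name: str
    core_requirements: List[str]
    history: List[CodeVersion] = field(default_factory=list)

    def release(self, version: str, checksum: str) -> None:
        # The code changes; the product does not: same name, same core requirements.
        self.history.append(CodeVersion(version, checksum))


word = SoftwareProduct("Word", ["edit documents", "format text", "print"])
word.release("1.0", "a1b2")
word.release("15.0", "c3d4")
# Many code versions later, users still refer to the same product.
print(word.name, "has", len(word.history), "code versions in this toy history")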


