Keynote Speakers

IC3K is a joint conference composed of three concurrent conferences: KDIR, KEOD and KMIS.
These three conferences are always co-located and held in parallel.
Keynote lectures are plenary sessions and can be attended by all IC3K participants.

KEYNOTE SPEAKERS LIST

Edwin Diday, CEREMADE, Université Paris Dauphine, France
          Title: Extracting Knowledge from Complex Data by Symbolic Data Analysis

Rudi Studer, Karlsruhe Institute of Technology (KIT), Germany
          Title: Process-oriented Semantic Web Search - Using Lightweight Semantic Models for Supporting the Search Process

Joydeep Ghosh, University of Texas at Austin, U.S.A.
          Title: Actionable Mining of Large, Multi-relational Data using Localized Predictive Models

Oscar Pastor, Universidad Politécnica de Valencia, Spain
          Title: Conceptual Modeling for the Human Genome Challenge

Alain Bernard, Ecole Centrale de Nantes, France
          Title: Characterisation, Formalisation and Reuse of Knowledge: Models, Methods and Application Cases

 

Keynote Lecture 1
Extracting Knowledge from Complex Data by Symbolic Data Analysis
Edwin Diday
CEREMADE, Université Paris Dauphine
France


Brief Bio
Edwin Diday holds a Doctor of Science in Applied Mathematics from the University Paris 6 and is Professor of Exceptional Class in Computer Science and Mathematics at the University Paris IX Dauphine. He was a project leader and then a consultant at INRIA (Institut National de Recherche en Informatique et en Automatique) until 1995, and Scientific Manager of the SODAS and ASSO European EUROSTAT projects (17 teams from 9 European countries) on Symbolic Data Analysis. He is or was involved in three other European consortia. He serves on the editorial board of the book series "Studies in Classification, Data Analysis and Knowledge Organization" and of several international journals.
He is co-author of 5 books and co-editor of 18 books, and has written more than 115 refereed papers (including 44 journal papers) and many reports (INRIA (Rocquencourt) or CEREMADE (Dauphine University)). More than 50 PhD students have completed their doctoral dissertations under his direction. He is past president of the Francophone Society of Classification and a member of the International Statistical Institute. His most recent contributions in English are three books on Symbolic Data Analysis, published jointly with Prof. H. Bock, L. Billard and M. Noirhomme by Springer-Verlag (2000) and Wiley (2006, 2008). His most recent paper is on spatial clustering (DAM, 2008). He is a laureate of the Montyon Prize awarded by the French Academy of Sciences.


Abstract
"Complex data" have been defined as follows: "in contrast to the typical tabular data, complex data can consist of heterogeneous data types, can come from different sources, or live in high dimensional spaces. All these specificities call for new data mining strategies". In practice, "complex data" sometimes refers to complex objects such as images, video, audio or text documents; at other times it refers to distributed data or structured data, or more specifically to spatio-temporal data or heterogeneous mixtures of data, as when a medical patient is described by images, text documents and socio-demographic information. In practice, complex data are more or less based on several kinds of observations described by standard numerical and/or categorical data contained in several related data tables. In this talk our aim is to show that extracting new knowledge from complex data requires the use of "symbolic data", which are an extension of standard numerical or categorical data. The usual data mining model has two parts: the first concerns the observations, and the second contains their description by several standard variables, numerical or categorical. The Symbolic Data Analysis (SDA) model (see Billard and Diday (2006), Diday and Noirhomme (2008)) needs two more parts: one concerns classes of observations called concepts, and the other concerns their description by symbolic data. A concept is characterized by a set of properties called its intent, and by an extent, the set of observations that satisfy these properties. In order to capture the variation within their extent, these concepts are described by symbolic data, which include standard categorical or numerical data but also intervals, histograms, sequences of weighted values and the like. These new kinds of data are called symbolic because they cannot be manipulated as numbers.
Then, based on this model, new knowledge can be extracted by data mining tools extended to concepts considered as a new kind of observation. In this talk we try to answer the following questions: What are complex data? What are symbolic data? How are symbolic data built? Are symbolic data complex data? In what sense are complex data symbolic data? What is Symbolic Data Analysis? In what sense are conceptual lattices the underlying structure of symbolic data? The talk is illustrated by several industrial applications, including text mining of telephone calls in order to discover "themes". Finally, we outline open directions of research and show that SDA provides a framework for extracting new knowledge from complex data.
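To make the model concrete, here is a minimal illustrative sketch (an assumption of this text, not the SODAS/ASSO software or any published SDA API) of how individual observations can be aggregated into concepts described by symbolic data: an interval for a numerical variable and a frequency histogram for a categorical one. All names and data are invented for the example.

```python
from collections import Counter

# Individual observations: (concept, numeric value, category).
# "clinic_A"/"clinic_B" play the role of concepts whose extents
# are the rows that mention them.
records = [
    ("clinic_A", 37.2, "male"),
    ("clinic_A", 39.1, "female"),
    ("clinic_A", 38.0, "female"),
    ("clinic_B", 36.5, "male"),
    ("clinic_B", 37.0, "male"),
]

def symbolic_description(records):
    """Aggregate each concept's extent into symbolic data:
    an interval for the numeric variable and a weighted
    (relative-frequency) histogram for the categorical one."""
    by_concept = {}
    for concept, value, category in records:
        by_concept.setdefault(concept, []).append((value, category))
    descriptions = {}
    for concept, rows in by_concept.items():
        values = [v for v, _ in rows]
        cats = Counter(c for _, c in rows)
        n = len(rows)
        descriptions[concept] = {
            "interval": (min(values), max(values)),            # interval-valued variable
            "histogram": {c: k / n for c, k in cats.items()},  # weighted categories
        }
    return descriptions

desc = symbolic_description(records)
print(desc["clinic_A"]["interval"])  # (37.2, 39.1)
```

The resulting interval and histogram are exactly the kinds of symbolic values the abstract mentions: they summarize the variation inside a concept's extent rather than a single observation.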

 

Keynote Lecture 2
Process-oriented Semantic Web Search - Using Lightweight Semantic Models for Supporting the Search Process
Rudi Studer
Karlsruhe Institute of Technology (KIT)
Germany


Brief Bio
Rudi Studer is Full Professor in Applied Informatics at the Karlsruhe Institute of Technology (KIT), Institute AIFB. In addition, he is director of the Karlsruhe Service Research Institute (KSRI). His research interests include knowledge management, Semantic Web technologies and applications, ontology management, data and text mining, Service Science and Semantic Grid. He is speaker of the Competence field “Organization and Service Engineering” at KIT.
He obtained a Diploma in Computer Science at the University of Stuttgart in 1975. In 1982 he was awarded a Doctor's degree in Mathematics and Informatics at the University of Stuttgart, and in 1985 he obtained his Habilitation in Informatics at the University of Stuttgart. From 1985 to 1989 he was project leader and manager at the Scientific Center of IBM Germany.
Rudi Studer is also a member of the executive board of the FZI Research Center for Information Technology at the KIT and director of the Information Process Engineering (IPE) research department at the FZI. He is a co-founder of the spin-off company ontoprise GmbH, which develops semantic applications.
He is engaged in various national and international cooperation projects, among others the DFG Graduate School Information Management and Market Engineering (IME), the EU Integrated Projects Active (as technical director) and SOA4All and the THESEUS research program funded by the Federal Ministry of Economics and Technology (BMWi).
He is past president of the Semantic Web Science Association and former Editor-in-chief of the Journal Web Semantics: Science, Services, and Agents on the World Wide Web. He is one of the vice presidents of the Semantic Technology Institute International (STI2).


Abstract
New opportunities and challenges for search arise with the increasing amount of data on the Semantic Web. More complex information needs can be addressed, while the problems of scalability, heterogeneity, uncertainty and usability are exacerbated in the Web setting. I will present the search technologies we have developed over the years. On the basis of concrete demonstrators we built for various settings, I will discuss how we aim to address these challenges.
The focus of the talk will be on the process-oriented view on Semantic Web Search. I will elaborate on how lightweight semantic models can be used throughout the process and thereby improve the efficiency and usability of the entire search experience. In particular, I will show that a semantic model can help to interpret the information needs of the user. Keyword queries entered by the user can be translated to more expressive structured queries. After the translation, the semantic model can be employed to improve the performance of query processing. Also in the final step, query refinement and result presentation can draw upon valuable information captured by the underlying semantic model.
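The keyword-to-structured-query step described above can be sketched as follows. This is an illustrative toy (not the KIT system, and the tiny vocabulary below is invented): a lightweight semantic model maps keywords to classes and properties, from which a SPARQL-like graph pattern is assembled.

```python
# A lightweight "semantic model": keyword -> class / property mappings.
# The ex: vocabulary is a made-up example namespace.
model = {
    "classes": {"person": "ex:Person", "city": "ex:City"},
    "properties": {"born": "ex:birthPlace", "lives": "ex:residence"},
}

def keywords_to_structured(keywords):
    """Interpret each keyword against the semantic model and
    assemble a basic graph pattern for a structured query."""
    patterns = []
    var = "?x"
    for kw in keywords:
        if kw in model["classes"]:
            patterns.append(f"{var} rdf:type {model['classes'][kw]} .")
        elif kw in model["properties"]:
            patterns.append(f"{var} {model['properties'][kw]} ?y .")
        # unmapped keywords could fall back to full-text matching
    return "SELECT * WHERE { " + " ".join(patterns) + " }"

print(keywords_to_structured(["person", "born"]))
# SELECT * WHERE { ?x rdf:type ex:Person . ?x ex:birthPlace ?y . }
```

The same model that drove the translation can then be reused downstream, e.g. to suggest refinements (sibling classes, related properties) or to group results by type.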
In the final part, I will give a brief outlook and discuss some important steps towards addressing the remaining challenges.

 

Keynote Lecture 3
Actionable Mining of Large, Multi-relational Data using Localized Predictive Models
Joydeep Ghosh
University of Texas at Austin
U.S.A.


Brief Bio
Joydeep Ghosh is currently the Schlumberger Centennial Chair Professor of Electrical and Computer Engineering at the University of Texas at Austin. He joined the UT-Austin faculty in 1988 after being educated at IIT Kanpur (B.Tech '83) and The University of Southern California (Ph.D. '88). He is the founder-director of IDEAL (Intelligent Data Exploration and Analysis Lab) and a Fellow of the IEEE. His research interests lie primarily in intelligent data analysis, data mining and web mining, adaptive multi-learner systems, and their applications to a wide variety of complex engineering and AI problems.
Dr. Ghosh has published more than 250 refereed papers and 35 book chapters, and co-edited 20 books. His research has been supported by the NSF, Yahoo!, Google, ONR, ARO, AFOSR, Intel, IBM, Motorola, TRW, Schlumberger and Dell, among others. He received the 2005 Best Research Paper Award from the UT Co-op Society and the 1992 Darlington Award given by the IEEE Circuits and Systems Society for the Best Paper in the areas of CAS/CAD, besides ten other "best paper" awards over the years. He was Conference Co-Chair of Computational Intelligence and Data Mining (CIDM'07), Program Co-Chair for ICPR'08 (Pattern Recognition Track) and the SIAM Int'l Conf. on Data Mining (SDM'06), and Conference Co-Chair for Artificial Neural Networks in Engineering (ANNIE) '93 to '96 and '99 to '03. He is the founding chair of the Data Mining Technical Committee of the IEEE Computational Intelligence Society. He also serves on the program committees of several top conferences on data mining, neural networks, pattern recognition, and web analytics every year. Dr. Ghosh has been a plenary/keynote speaker on several occasions, such as ISIT'08, ANNIE'06, MCS 2002 and ANNIE'97, and has widely lectured on intelligent analysis of large-scale data. He has co-organized workshops on high dimensional clustering (ICDM 2003; SDM 2005), Web Analytics (with SDM 2002), Web Mining (with SDM 2001), and Parallel and Distributed Knowledge Discovery (with KDD-2000).
Dr. Ghosh has served as a co-founder, consultant or advisor to successful startups (Stadia Marketing, Neonyoyo and Knowledge Discovery One) and as a consultant to large corporations such as IBM, Motorola and Vinson & Elkins. At UT, Dr. Ghosh teaches graduate courses on data mining, artificial neural networks, and web analytics. He was voted the Best Professor by the Software Engineering Executive Education Class of 2004.


Abstract
Many large datasets associated with modern predictive data mining applications are quite complex and heterogeneous, possibly involving multiple relations, or exhibiting a dyadic nature with associated “side-information”. For example, one may be interested in predicting the preferences of a large set of customers for a variety of products, given various properties of both customers and products, as well as past purchase history, a social network on the customers, and a conceptual hierarchy on the products. This talk will introduce a broad framework for effectively tackling such scenarios using a simultaneous problem decomposition and modeling strategy that can exploit the wide variety of information available. The properties and capabilities of this framework will be illustrated on several real life applications.
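As a toy illustration of the "localized model" idea (my own minimal sketch, not the speaker's framework), one can decompose the customer population into segments and fit one simple predictor per segment instead of a single global model. Here each local model is just the segment's mean rating; the segmentation function is an invented stand-in for whatever side-information (social network, product hierarchy) drives the decomposition.

```python
def fit_local_models(data, segment_of):
    """data: list of (customer, rating) pairs.
    segment_of: customer -> segment id.
    Returns one local model (the mean rating) per segment."""
    sums, counts = {}, {}
    for customer, rating in data:
        seg = segment_of(customer)
        sums[seg] = sums.get(seg, 0.0) + rating
        counts[seg] = counts.get(seg, 0) + 1
    return {seg: sums[seg] / counts[seg] for seg in sums}

data = [("a1", 4.0), ("a2", 5.0), ("b1", 1.0), ("b2", 2.0)]
# Segment by the first character of the customer id (a placeholder
# for a real decomposition driven by side-information).
models = fit_local_models(data, lambda c: c[0])
print(models)  # {'a': 4.5, 'b': 1.5}
```

The point of the sketch is the structure, not the model class: each segment gets its own predictor, so heterogeneous sub-populations are no longer forced through one global fit.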

 

Keynote Lecture 4
Conceptual Modeling for the Human Genome Challenge
Oscar Pastor
Universidad Politécnica de Valencia
Spain


Brief Bio
Oscar Pastor is Full Professor and Director of the "Centro de Investigación en Métodos de Producción de Software (PROS)" at the Universidad Politécnica de Valencia (Spain). He received his Ph.D. in 1992. He was a researcher at HP Labs, Bristol, UK. He has published more than two hundred research papers in conference proceedings, journals and books, received numerous research grants from public institutions and private industry, and been keynote speaker at several conferences and workshops. Chair of the ER Steering Committee, and member of the Steering Committees of conferences such as CAiSE, ICWE, CIbSE and RCIS, his research activities focus on conceptual modeling, web engineering, requirements engineering, information systems, and model-based software production. He created the object-oriented, formal specification language OASIS and the corresponding software production method OO-METHOD. He led the research and development underlying CARE Technologies, formed in 1996. CARE Technologies has created an advanced MDA-based Conceptual Model Compiler called OlivaNova, a tool that produces a final software product starting from a conceptual schema that represents system requirements. He is currently leading a multidisciplinary project linking Information Systems and Bioinformatics notions, oriented to designing and implementing tools for Conceptual Modeling-based interpretation of the Human Genome information.


Abstract
If we look at living beings as a special kind of program whose "organic-quaternary" (instead of "electric-binary") code is made up of combinations of a predefined set of nucleotides, we may wonder which models these programs represent. If Conceptual Modeling is seen as a basic component for understanding programs that are the result of a sound software production process, an immediate, interesting question is why Conceptual Modeling is not playing a more prominent role in the challenging problem of interpreting and understanding the Human Genome. Research in the Bioinformatics domain is mainly centered on the solution space, exploring tons of data to try to discover patterns of behavior represented in those data. It is reasonable to expect that by working at the problem-space level, the understanding of the human genome would advance much faster. This is the central topic of this talk. When talking about the benefits of Conceptual Modeling, the analogy with Software Engineering provides some interesting insights. If we try to find a simple concept such as a foreign key by looking at the millions of 0s and 1s that constitute a binary-coded program, we can easily conclude that it would be an almost impossible mission. But foreign keys are a very precise, clear and easy-to-identify concept when we deal with conceptual models. The talk will analyze the benefits that working with conceptual models can provide to the genomic domain. In particular, a precise ER model will be introduced, distinguishing among the different relevant views of genome, transcriptome and proteome. A corresponding Genomic Database will be presented, together with the data capture mechanisms intended to use the heterogeneous set of data currently made available by different biological data banks. How to keep the database consistent will be explored, and its exploitation and potential applications will be discussed.
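The foreign-key analogy above can be made concrete with a tiny hypothetical schema fragment (my invention, not the talk's actual ER model): at the conceptual/schema level, the relationship between a gene and its transcripts is explicit and machine-checkable, whereas it would be invisible in an undifferentiated stream of low-level data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
# Table and column names are invented for illustration.
conn.execute("CREATE TABLE gene (gene_id TEXT PRIMARY KEY, symbol TEXT)")
conn.execute("""
    CREATE TABLE transcript (
        transcript_id TEXT PRIMARY KEY,
        gene_id TEXT NOT NULL REFERENCES gene(gene_id)  -- explicit, named relationship
    )""")
conn.execute("INSERT INTO gene VALUES ('g1', 'BRCA2')")
conn.execute("INSERT INTO transcript VALUES ('t1', 'g1')")

# The relationship can be navigated directly because it exists in the model:
row = conn.execute("""
    SELECT g.symbol FROM transcript t
    JOIN gene g ON g.gene_id = t.gene_id
    WHERE t.transcript_id = 't1'""").fetchone()
print(row[0])  # BRCA2
```

With `foreign_keys = ON`, inserting a transcript that points at a non-existent gene is rejected outright; the constraint lives in the schema, which is precisely the problem-space advantage the abstract argues for.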

 

Keynote Lecture 5
Characterisation, Formalisation and Reuse of Knowledge: Models, Methods and Application Cases
Alain Bernard
Ecole Centrale de Nantes
France


Brief Bio
Prof. A. Bernard graduated in 1982 and obtained his PhD in 1989. He worked as an Associate Professor at Ecole Centrale de Paris from 1990 to 1996. From September 1996 to October 2001 he was Professor at CRAN, University of Nancy I, as head of the Integrated Design and Manufacturing team. Since October 2001 he has been Professor at Ecole Centrale de Nantes, where he has served as Deputy Director for Research for the past three years. In the IRCCyN laboratory, he heads the "Virtual Engineering for industrial engineering" team. His main research topics relate to KM and KBE, CAPP, system modeling, interoperability, performance evaluation, virtual engineering, rapid product development and reverse engineering.


Abstract
Knowledge is strategically coupled to enterprise performance. A complete framework based on the knowledge-value concept is introduced to support the mechanisms of knowledge engineering, building on several models (FBS-PPRE), methods (MOKA and MARISKA) and tools (mainly demonstrators and specific applications) that we have proposed through several PhD theses and projects at the IRCCyN laboratory in recent years. The usability of this framework will be demonstrated, and several projects will be presented in different technical domains (such as machining, casting, and additive layer manufacturing), organisational domains (such as PLM, including for example human safety, and mass customization for shoes) and other combined domains (such as industrial archeology). Recent issues will be introduced and discussed with respect to enterprise needs.