Keynote Lectures

Leveraging Knowledge from Humans and from Data: Why Requirements Engineering (Still) Matters
Eric Yu, University of Toronto, Canada

Integrating Big Data, Data Science and Cyber Security with Applications in Cloud and Internet of Things
Bhavani Thuraisingham, University of Texas at Dallas, United States

Validity Concerns for Evaluating Algorithms
Jan Mendling, Humboldt-Universität zu Berlin, Germany

Designing Knowledge Enabled Systems for Interdisciplinary Settings
Deborah L. McGuinness, Rensselaer Polytechnic Institute, United States

Leveraging Knowledge from Humans and from Data: Why Requirements Engineering (Still) Matters

Eric Yu
University of Toronto
Canada
https://ischool.utoronto.ca/profile/eric-yu/

Brief Bio
Eric Yu is Professor at the Faculty of Information at the University of Toronto. His research interests include conceptual modeling, software requirements engineering, information systems engineering, knowledge management, and enterprise modeling. In his PhD work, he developed the i* framework for social modeling. The work has inspired hundreds of research papers, dozens of PhD theses, and many software tools. A version of i* is part of an international standard. He is co-editor of the MIT Press book series on Information Systems and is on the editorial boards of the Requirements Engineering journal, IET Software, and the Journal on Data Semantics. He was Program Co-Chair for ER 2008 and 2014, and for CAiSE 2020. He was the recipient of the 2019 Peter P. Chen Award.


Abstract
A data-driven computing paradigm has risen to rival the previously dominant knowledge-driven paradigm. As modern society becomes ever more digitalized, all kinds of human activities are increasingly mediated by a mix of data-driven and knowledge-driven software. The wondrous technical capabilities of software systems are evoking utopian dreams as well as dystopian nightmares. Users, regulators, and the general public are calling for responsible AI, and for transparency and accountability from software professionals.

In software engineering, requirements analysis techniques have served to mediate between human needs and wants on the one hand, and technical system design on the other. This talk will outline the role of social modeling in requirements engineering, as illustrated by the i* framework. In i*, actors depend on each other to achieve what they want. By applying i* modeling, one can explore the potential impacts of various system designs on stakeholders, in search of design options that would better meet the desires and aspirations of all concerned.

The widespread adoption of data-driven computing in today’s highly digitalized world suggests that requirements analysis needs to engage with the full cognitive cycle – the cycle by which actors sense and interpret data, generate knowledge, execute actions, and in turn produce new data. Various software technologies, data-driven and knowledge-driven, enhance and complement specific aspects of the cognitive cycles of humans and organizations, often bringing about spectacular gains as well as vulnerabilities. Social modeling that encompasses the full cognitive cycle can help system designers and stakeholders jointly explore design options, so as to derive the most benefit while avoiding negative consequences when developing today’s advanced software applications.

Integrating Big Data, Data Science and Cyber Security with Applications in Cloud and Internet of Things

Bhavani Thuraisingham
University of Texas at Dallas
United States

Brief Bio
Dr. Bhavani Thuraisingham is the Founders Chair Professor of Computer Science and the Executive Director of the Cyber Security Research and Education Institute at the University of Texas at Dallas (UTD). She is also a visiting Senior Research Fellow at King's College, University of London, and an elected Fellow of the ACM, the IEEE, the AAAS, the NAI, and the BCS. Her research interests are in integrating cyber security and artificial intelligence/data science, particularly as they relate to the cloud, social media, and the Internet of Things. She has received several technical and leadership awards, including the IEEE CS 1997 Technical Achievement Award, the ACM SIGSAC 2010 Outstanding Contributions Award, the IEEE ComSoc Communications and Information Security 2019 Technical Recognition Award, the IEEE CS Services Computing 2017 Research Innovation Award, the ACM CODASPY 2017 Lasting Research Award, the IEEE ISI 2010 Research Leadership Award, and the ACM SACMAT 10 Year Test of Time Awards for 2018 and 2019 (for papers published in 2008 and 2009). Her 40-year career spans industry (Honeywell), a federal research laboratory (MITRE), the US government (NSF), and US academia. Her work has resulted in 130+ journal articles, 300+ conference papers, 180+ keynote and featured addresses, seven US patents, fifteen books, and podcasts. She received her PhD from the University of Wales, Swansea, UK, and the prestigious earned higher doctorate (D.Eng) from the University of Bristol, UK. She also has a Certificate in Public Policy Analysis from the London School of Economics and Political Science.


Abstract
Big Data, Data Science, and Security are being integrated to solve many security and privacy challenges. For example, machine learning techniques are being applied to security problems such as malware analysis and insider threat detection. However, there is also a major concern that the machine learning techniques themselves could be attacked. Therefore, machine learning techniques are being adapted to handle adversarial attacks, an area known as adversarial machine learning. In addition, the privacy of individuals can be violated through these machine learning techniques, since it is now possible to gather and analyze vast amounts of data; privacy-enhanced data science techniques are therefore being developed.

To assess the developments in the integration of Big Data, Data Science, and Security over the past decade and apply them to the Internet of Transportation, the presentation will focus on four aspects. First, it will examine developments in applying Data Science techniques to cyber security problems such as insider threat detection, as well as advances in adversarial machine learning. It will also discuss developments in privacy-aware and policy-based data management frameworks. Next, it will discuss how cloud technologies may be used to securely and privately share information for various Big Data applications such as the Internet of Things. Finally, it will describe ways in which Big Data, Data Science, and Security could be incorporated into these applications.

Validity Concerns for Evaluating Algorithms

Jan Mendling
Humboldt-Universität zu Berlin
Germany

Brief Bio
Jan Mendling is the Einstein Professor of Process Science in the Department of Computer Science at Humboldt-Universität zu Berlin, Germany. His research interests include various topics in the area of business process management and information systems. He has published more than 450 research papers and articles, among others in Management Information Systems Quarterly, ACM Transactions on Software Engineering and Methodology, IEEE Transactions on Software Engineering, Journal of the Association for Information Systems, and Decision Support Systems. He is a department editor for Business and Information Systems Engineering, a member of the board of the Austrian Society for Process Management, one of the founders of the Berlin BPM Community of Practice, an organizer of several academic events on process management, and a member of the IEEE Task Force on Process Mining. He is co-author of the textbooks Fundamentals of Business Process Management (Second Edition) and Wirtschaftsinformatik (12th Edition), which are extensively used in information systems education.


Abstract
The evaluation of algorithms has been debated in computer science for decades. Recent years have seen an increasing use of different kinds of evaluations based on empirical research methods and sample data. Various applied areas of computing struggle with different concerns of validity, presumably because an overarching theoretical framework has been missing. In this keynote, we introduce a framework for researching algorithms that highlights several validity concerns. We discuss, in turn, research strategies to address these concerns. Our framework can be useful for any type of applied research in computer science that builds on the evaluation of algorithms.

Designing Knowledge Enabled Systems for Interdisciplinary Settings

Deborah L. McGuinness
Rensselaer Polytechnic Institute
United States

Brief Bio
Deborah McGuinness is the Tetherless World Senior Constellation Chair and Professor of Computer, Cognitive, and Web Sciences at RPI. She is also the founding director of the RPI Web Science Research Center. Deborah has been recognized as a fellow of the American Association for the Advancement of Science (AAAS) for contributions to the Semantic Web, knowledge representation, and reasoning environments, and as the recipient of the Robert Engelmore Award from the Association for the Advancement of Artificial Intelligence (AAAI) for leadership in Semantic Web research and in bridging Artificial Intelligence (AI) and eScience, significant contributions to deployed AI applications, and extensive service to the AI community. Deborah currently leads a number of large, diverse, data-intensive resource efforts, and her team is creating next-generation ontology-enabled research infrastructure for work in large interdisciplinary settings. Prior to joining RPI, Deborah was the acting director of the Knowledge Systems, Artificial Intelligence Laboratory and a Senior Research Scientist in the Computer Science Department at Stanford University; before that, she worked in Artificial Intelligence Research at AT&T Bell Laboratories. Deborah has also consulted with numerous large corporations as well as emerging startup companies wishing to plan, develop, deploy, and maintain semantic web and/or AI applications. She has also worked as an expert witness in a number of cases and has deposition and trial experience. Some areas of recent work include data science, next-generation health advisors, ontology design and evolution environments, semantically-enabled virtual observatories, semantic integration of scientific data, context-aware mobile applications, search, eCommerce, configuration, and supply chain management. Deborah holds a Bachelor of Math and Computer Science from Duke University, a Master of Computer Science from the University of California at Berkeley, and a Ph.D. in Computer Science from Rutgers University.


Abstract
There is growing interest in large interdisciplinary data portals to support a wide range of research and research communities. In this talk, we will introduce emerging best practices for building large data and knowledge portals that aim to be long-lived. We will provide examples, from multiple disciplines, of semantic knowledge portals that are collaboratively designed and maintained by interdisciplinary teams.
