KEOD 2024 Abstracts


Full Papers
Paper Nr: 37
Title:

Framework for a Knowledge-Based Course Recommender System Focused on IT Career Needs

Authors:

Pham Thi Xuan Hien, Le Nguyen Hoai Nam and Cuong Pham-Nguyen

Abstract: This paper presents an approach for a knowledge-based recommender system that provides relevant courses based on learners’ profiles, requirements, and career needs. The framework integrates an automatic data collection process, ensuring that the knowledge base reflects the latest job market and course information. The recommendation method relies on a set of rules that combine various matching techniques, incorporating user requirements, skill and knowledge gaps, contextual information, and a course weight indicating each course’s relevance to the career or market. An experiment measured satisfaction with the approach through a survey of users of the system. The results reveal that the approach is deemed acceptable. This framework contributes to ongoing discussions surrounding the application of technology in building recommender systems for education.

Paper Nr: 69
Title:

How to Surprisingly Consider Recommendations? A Knowledge-Graph-Based Approach Relying on Complex Network Metrics

Authors:

Oliver Baumann, Durgesh Nandini, Anderson Rossanez, Mirco Schoenfeld and Julio Cesar dos Reis

Abstract: Traditional recommendation proposals, including content-based and collaborative filtering, usually focus on similarity between items or users. Existing approaches lack ways of introducing unexpectedness into recommendations, prioritizing globally popular items over exposing users to unforeseen items. This investigation aims to design and evaluate a novel layer on top of recommender systems suited to incorporate relational information and rerank items with a user-defined degree of surprise. Surprise in recommender systems refers to the degree to which a recommendation deviates from the user’s expectations, providing an unexpected yet relatable recommendation. We propose a knowledge graph-based recommender system by encoding user interactions on item catalogs. Our study explores whether network-level metrics on knowledge graphs (KGs) can influence the degree of surprise in recommendations. We hypothesize that surprisingness correlates with specific network metrics, treating user profiles as subgraphs within a larger catalog KG. The resulting solution reranks recommendations based on their impact on structural graph metrics. Our research contributes to optimizing recommendations to reflect network-based metrics. We experimentally evaluate our approach on two datasets: LastFM listening histories and synthetic Netflix viewing profiles. We find that reranking items based on complex network metrics leads to a more unexpected and surprising composition of recommendation lists.
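As a rough illustration of the reranking idea the abstract describes (the paper's actual network metrics and scoring are more sophisticated; the graph, the neighbourhood-based surprise score, and the blending weight below are all assumptions for the sketch):

```python
# Toy sketch: rerank recommendations with a user-defined degree of surprise.
# The catalog is a small co-interaction graph (adjacency sets); the user's
# profile is the set of items they interacted with.

def surprise(item, profile, graph):
    """Fraction of the item's neighbours lying OUTSIDE the user profile."""
    neighbours = graph.get(item, set())
    if not neighbours:
        return 1.0
    return len(neighbours - profile) / len(neighbours)

def rerank(candidates, profile, graph, alpha=0.5):
    """Blend base relevance with surprise; alpha sets the degree of surprise."""
    scored = [
        ((1 - alpha) * rel + alpha * surprise(item, profile, graph), item)
        for item, rel in candidates
    ]
    return [item for _, item in sorted(scored, reverse=True)]

graph = {
    "a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"},
    "d": {"c", "e"}, "e": {"d"},
}
profile = {"a", "b"}
candidates = [("c", 0.9), ("d", 0.6), ("e", 0.5)]  # (item, base relevance)

print(rerank(candidates, profile, graph, alpha=0.0))  # relevance only
print(rerank(candidates, profile, graph, alpha=1.0))  # surprise only
```

With `alpha=0` the list is ordered purely by relevance; raising `alpha` promotes items whose graph neighbourhoods lie further from the profile.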

Paper Nr: 79
Title:

Semantic Capability Model for the Simulation of Manufacturing Processes

Authors:

Jonathan Reif, Tom Jeleniewski, Aljosha Köcher, Tim Frerich, Felix Gehlhoff and Alexander Fay

Abstract: Simulations offer opportunities in the examination of manufacturing processes. They represent various aspects of the production process and the associated production systems. However, often a single simulation does not suffice to provide a comprehensive understanding of specific process settings. Instead, a combination of different simulations is necessary when the outputs of one simulation serve as the input parameters for another, resulting in a sequence of simulations. Manual planning of simulation sequences is a demanding task that requires careful evaluation of factors like time, cost, and result quality to choose the best simulation scenario for a given inquiry. In this paper, an information model is introduced, which represents simulations, their capabilities to generate certain knowledge, and their respective quality criteria. The information model is designed to provide the foundation for automatically generating simulation sequences. The model is implemented as an extendable and adaptable ontology. It utilizes Ontology Design Patterns based on established industrial standards to enhance interoperability and reusability. To demonstrate the practicality of this information model, an application example is provided. This example serves to illustrate the model’s capacity in a real-world context, thereby validating its utility and potential for future applications.

Paper Nr: 131
Title:

Learning Knowledge Representation by Aligning Text and Triples via Finetuned Pretrained Language Models

Authors:

Víctor Jesús Sotelo Chico and Julio Cesar dos Reis

Abstract: Representation learning has produced embeddings for structured and unstructured knowledge that are constructed independently and do not share a vectorial space. Alignment between text and RDF triples has been explored in natural language generation, from RDF verbalizers to generative models. Existing approaches have treated the semantics in these data via unsupervised methods proposed to allow semantic alignment, with adequate application studies. The existing text-triple datasets are limited and have only been applied to text-to-triple generation rather than to representation. This research proposes a supervised approach for representing triples. Our approach feeds an existing pretrained model with triple-text pairs, exploring measures for the semantic alignment between the pair elements. Our solution employs a data augmentation technique with contrastive loss to address the dataset limitation. We applied a loss function that requires only positive examples, which is suitable for the explored dataset. Our experimental evaluation measures the effectiveness of the fine-tuned models in two main tasks: ’Semantic Similarity’ and ’Information Retrieval’. These tasks were addressed to measure whether our designed models can learn triple representations while maintaining the semantics learned by the text encoder models. Our contribution paves the way for better embeddings targeting text-triple alignment without requiring huge amounts of data, bridging unstructured text and knowledge graph data.

Paper Nr: 138
Title:

Multidimensional Knowledge Graph Embeddings for International Trade Flow Analysis

Authors:

Durgesh Nandini, Simon Blöthner, Mirco Schoenfeld and Mario Larch

Abstract: Understanding the complex dynamics of high-dimensional, contingent, and strongly nonlinear economic data, often shaped by multiplicative processes, poses significant challenges for traditional regression methods as such methods offer limited capacity to capture the structural changes they feature. To address this, we propose leveraging the potential of knowledge graph embeddings for economic trade data, in particular, to predict international trade relationships. We implement KonecoKG, a knowledge graph representation of economic trade data with multidimensional relationships using SDM-RDFizer and transform the relationships into a knowledge graph embedding using AmpliGraph.

Paper Nr: 148
Title:

Benchmarking the Ability of Large Language Models to Reason About Event Sets

Authors:

Svenja Kenneweg, Jörg Deigmöller, Philipp Cimiano and Julian Eggert

Abstract: The ability to reason about events and their temporal relations is a key aspect of Natural Language Understanding. In this paper, we investigate the ability of Large Language Models (LLMs) to resolve temporal references with respect to longer event sets. Given that events rarely occur in isolation, it is crucial to determine the extent to which LLMs can reason about longer sets of events. Towards this goal, we introduce a novel synthetic benchmark dataset comprising 2,200 questions to test the ability of LLMs to reason about events, using a Question Answering task as a proxy. We compare the performance of four state-of-the-art LLMs on the benchmark, analyzing their performance as a function of the length of the event set and of the explicitness of the temporal reference. Our results show that, while the benchmarked LLMs can successfully answer questions over event sets with a handful of events and explicit temporal references, performance clearly deteriorates as the event set grows and as temporal references become less explicit. The benchmark is available at https://gitlab.ub.uni-bielefeld.de/s.kenneweg/bamer.

Paper Nr: 155
Title:

Collaboration Patterns Ontology for Human-Machine Decision Support

Authors:

Alexander Smirnov and Tatiana Levashova

Abstract: Collaboration between humans and machines that complement each other's capabilities is becoming increasingly relevant. Recurring problems often arise in the collaboration process. Collaboration patterns, which provide reusable, efficient, and proven solutions to recurring problems, are a means to facilitate the organization of joint activities for specific collaboration goals such as decision support. Existing studies on collaboration patterns make it clear that collaboration faces problems of diverse classes. The paper proposes a holistic view of human-machine collaboration relative to the decision support domain, where a human-machine environment processes a task that the user deals with as a decision support problem. During task processing, humans and machines use collaboration patterns when they intend to achieve goals for which the patterns propose solutions. A collaboration patterns ontology supports the choice of the kind of pattern that the collaborators can use to accomplish a specific goal. The present research provides models for organizing pattern-based collaboration and contributes to the problem of human-machine decision support by suggesting a pattern-based decision support process.

Paper Nr: 156
Title:

Adapter-Based Approaches to Knowledge-Enhanced Language Models: A Survey

Authors:

Alexander Fichtl, Juraj Vladika and Georg Groh

Abstract: Knowledge-enhanced language models (KELMs) have emerged as promising tools to bridge the gap between large-scale language models and domain-specific knowledge. KELMs can achieve higher factual accuracy and mitigate hallucinations by leveraging knowledge graphs (KGs). They are frequently combined with adapter modules to reduce the computational load and risk of catastrophic forgetting. In this paper, we conduct a systematic literature review (SLR) on adapter-based approaches to KELMs. We provide a structured overview of existing methodologies in the field through quantitative and qualitative analysis and explore the strengths and potential shortcomings of individual approaches. We show that general-knowledge and domain-specific approaches have been frequently explored, along with various adapter architectures and downstream tasks. We focus in particular on the popular biomedical domain, where we provide an insightful performance comparison of existing KELMs. We outline the main trends and propose promising future directions.

Short Papers
Paper Nr: 12
Title:

Supervised Machine Learning Models and Schema Matching Techniques for Ontology Alignment

Authors:

Faten Abbassi and Yousra Bendaly Hlaoui

Abstract: The diversity of existing representations of the same ontology makes it difficult to manipulate the same knowledge across computational domains. Unifying similar ontologies by reducing their degree of heterogeneity appears to be the appropriate solution to this problem. This solution consists of aligning similar ontologies using a set of existing ontology schema-matching techniques. In this paper, we present an approach to ontology alignment based on these techniques and on machine learning models. To do so, we have developed a matrix construction method based on ontology matching techniques, namely element matching and structure matching techniques implemented by elementary matchers. Once the matrix is constructed, we apply a composite matcher, a classifier that combines the individual degrees of similarity calculated for each pair of ontology elements into a final aggregated similarity value between the two ontologies. This composite matcher is implemented via various supervised machine learning models such as LogisticRegression, GradientBoostingClassifier, GaussianNB and KNeighborsClassifier. To evaluate our alignment method and validate the learning models used, we used the reference ontologies and their alignments for the conference and benchmark tracks provided by the Ontology Alignment Evaluation Initiative (OAEI).
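The element-level matching step can be pictured with a minimal sketch (not the authors' implementation: the toy entity labels are made up, `difflib` stands in for the elementary string matchers, and a fixed threshold stands in for the trained composite matcher):

```python
# Element-level matcher sketch: string similarity between entity labels of
# two toy ontologies, collected into the kind of pairwise similarity matrix
# a composite (learned) matcher would then aggregate.
from difflib import SequenceMatcher

def name_similarity(a, b):
    """Elementary matcher: normalized edit-based similarity of two labels."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def similarity_matrix(onto1, onto2):
    """One entry per entity pair, as (entity1, entity2, score)."""
    return [(e1, e2, name_similarity(e1, e2)) for e1 in onto1 for e2 in onto2]

onto1 = ["Conference", "Author", "PaperTitle"]
onto2 = ["ConferenceEvent", "Writer", "Title"]

for e1, e2, score in similarity_matrix(onto1, onto2):
    if score > 0.6:  # naive threshold in place of the learned classifier
        print(f"{e1} <-> {e2}: {score:.2f}")
```

In the paper's setting, rows of such a matrix (one feature per elementary matcher) feed the supervised classifier, rather than a hand-set threshold.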

Paper Nr: 40
Title:

Personalized Asthma Recommendation System: Leveraging Predictive Analysis and Semantic Ontology-Based Knowledge Graph

Authors:

Ayan Chatterjee

Abstract: Personalized approaches are required for asthma management due to the variability in symptoms, triggers, and patient characteristics. This paper proposes an innovative asthma recommendation system that integrates automatic predictive analysis with semantic knowledge to provide personalized recommendations for asthma management. The system leverages the automatic Tree-based Pipeline Optimization Tool (TPOT) and semantic knowledge represented in an OWL ontology (AsthmaOnto) to predict asthma exacerbations and enhance the recommendations. Furthermore, classifications are explained with Local Interpretable Model-Agnostic Explanations (LIME) to identify feature importance. This conceptual model provides tailored interventions based on individual patient profiles, aiming to improve asthma management. The proposed model has been verified using public asthma datasets, and a public weather and air-quality dataset has been utilized to support ontology development and verification. Within TPOT, the Gaussian Naive Bayes (GaussianNB) classifier outperformed the other supervised machine learning models, with an accuracy of 0.75 on the dataset used. To implement and evaluate the proposed model in clinical settings, further development and validation with more diverse and robust datasets, together with model calibration, are required.

Paper Nr: 49
Title:

Elementary Multiperspective Material Ontology: Leveraging Perspectives via a Showcase of EMMO-Based Domain and Application Ontologies

Authors:

Pierluigi Del Nostro, Jesper Friis, Emanuele Ghedini, Gerhard Goldbeck, Oskar Holtz, Otello Maria Roscioni, Francesco Antonio Zaccarini and Daniele Toti

Abstract: The effectiveness of semantic technologies in ensuring interoperability is often hindered by the preference for internally developed knowledge bases and the presence of diverse conceptual frameworks and implementation choices. Foundational, upper-level ontologies based on FOL and OWL2-DL address interoperability and provide a robust foundation for domain and application ontologies. They emphasize logical rigor and expressiveness, aligning with the idea of shared ontologies for knowledge diffusion and reuse. In scientific and industrial contexts, a framework that accommodates scientific pluralism is essential. The Elementary Multiperspective Material Ontology (EMMO) meets this need, offering a rigorous yet pluralistic representation of knowledge through the mereocausal theory, focusing on parthood (mereology) and causation. EMMO’s adaptable architecture includes discipline-specific modules, enabling the representation of items from multiple perspectives, such as viewing an image as both an ’Object’ and ’Data’. This paper presents EMMO’s perspectives, including the Reductionistic, Holistic, Persistence, Contrast, Structural and Semiotics perspectives. It then proceeds to showcase four recently-developed ontologies based on EMMO, one at the domain level (CHAMEO) and three at the application level (BTO, HPO and MAEO), taking advantage of EMMO’s perspectives and therefore demonstrating its representational capabilities and versatility.

Paper Nr: 50
Title:

A Fuzzy Decision Support System with Semantic Knowledge Graph for Personalized Asthma Monitoring: A Conceptual Modeling

Authors:

Ayan Chatterjee

Abstract: Asthma, a complex chronic respiratory condition, poses significant management challenges, necessitating personalized monitoring for optimal treatment outcomes and individual well-being. This study introduces a Fuzzy Decision Support System (FDSS) for personalized asthma monitoring, leveraging semantic reasoning techniques and SPARQL querying to enhance decision-making accuracy and provide individualized assessments of asthma control and exacerbation risk. By utilizing semantic reasoning, the FDSS captures intricate relationships among asthma parameters, health data, triggers, and treatment outcomes, enabling precise management decisions. Development involves creating an ontology to encapsulate asthma domain knowledge, representing fuzzy logic, integrating crisp and fuzzy clinical variables, and executing SPARQL queries for fuzzy inference. The proposed FDSS demonstrates the feasibility of integrating these techniques for personalized asthma management, offering flexibility and adaptability to improve treatment outcomes and quality of life. Further research is needed to validate its efficacy in real-world healthcare settings.
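The fuzzification step the abstract mentions can be sketched in isolation (the variable, breakpoints, and linguistic labels below are hypothetical; the paper encodes such functions in an ontology and evaluates them via SPARQL rather than in plain Python):

```python
# Minimal fuzzy-logic sketch: a triangular membership function maps a crisp
# clinical reading (here, peak expiratory flow as % of personal best) onto
# fuzzy asthma-control levels.

def triangular(x, a, b, c):
    """Membership degree of x in a triangle with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzify_peak_flow(pct):
    """Illustrative 'poor' / 'partial' / 'good' control fuzzy sets."""
    return {
        "poor": triangular(pct, 0, 40, 60),
        "partial": triangular(pct, 50, 70, 90),
        "good": triangular(pct, 80, 100, 120),
    }

print(fuzzify_peak_flow(85))
```

A reading of 85% thus belongs partially to both the 'partial' and 'good' sets, which is exactly the graded assessment a crisp threshold would lose.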

Paper Nr: 64
Title:

Clustering for Explainability: Extracting and Visualising Concepts from Activation

Authors:

Alexandre Lambert, Aakash Soni, Assia Soukane, Amar Ramdane Cherif and Arnaud Rabat

Abstract: Despite significant advances in computer vision with deep learning models (e.g., classification, detection, and segmentation), these models remain complex, making it challenging to assess their reliability, interpretability, and consistency under diverse conditions. There is growing interest in methods for extracting human-understandable concepts from these models, but significant challenges persist. These include the difficulty of extracting concepts relevant to both model parameters and inference while ensuring the concepts are meaningful to individuals with varying expertise levels, without requiring a panel of evaluators to validate the extracted concepts. To tackle these challenges, we propose concept extraction by clustering activations. Activations represent a model’s internal state based on its training and can be grouped to represent learned concepts. We propose two clustering methods for concept extraction, a metric for evaluating their importance, and a concept visualization technique for concept interpretation. This approach can help identify biases in models and datasets.

Paper Nr: 70
Title:

Developing a Reference OntoUML Conceptual Model for Data Management Plans: Enhancing Consistency and Interoperability

Authors:

Jana Martínková, Marek Suchánek and Robert Pergl

Abstract: The growing significance of Data Management Plans (DMPs) has highlighted the need for standardized and accurate data management practices. Current DMPs often suffer from inconsistent terminology, leading to misunderstandings and reducing their effectiveness. This study proposes the development of a DMP OntoUML conceptual model to address these issues. The model aims to clearly define all relevant concepts and their relationships, ensuring consistency and interoperability, particularly by connecting with the FAIR principles OntoUML model. The research follows a structured approach: specifying necessary concepts using existing templates and ontologies, defining terms and their relationships within the OntoUML model, and verifying the model’s syntax. The resulting conceptual model will standardize terminology, promote interoperability, and support future DMP development and education.

Paper Nr: 71
Title:

SMACS: Stress Management AI Chat System

Authors:

Daiki Mori, Kazuyuki Matsumoto, Xin Kang, Manabu Sasayama and Keita Kiuchi

Abstract: The purpose of this study is to develop a stress management AI chat system that can connect users who want mental health care with counselors. Through this chat system, a conversational AI based on a large language model (LLM) collects data on the user's stressors via text chats with the user. The system is then personalized to the user based on the collected data. This paper describes the nature of the data collected in the preliminary experiment conducted in March 2024 and the results of its analysis, and discusses considerations for the main experiment to be conducted after July 2024. The preliminary experiment was conducted with 11 students over a 3-week period. We discuss the distribution of the collected data and the issues involved in building a model for predicting stress levels.

Paper Nr: 105
Title:

owl2proto: Enabling Semantic Processing in Modern Cloud Micro-Services

Authors:

Christian Banse, Angelika Schneider and Immanuel Kunz

Abstract: The usefulness of semantic technologies in the context of security has been demonstrated many times, e.g., for processing certification evidence, log files, and creating security policies. Integrating semantic technologies, like ontologies, into an automated workflow, however, is cumbersome, since they introduce disruptions between the different technologies and data formats that are used. This is especially true for modern cloud-native applications, which rely heavily on technologies such as protobuf. In this paper we argue that these technology disruptions represent a major hindrance to the adoption of semantic technologies in the cloud and that more effort and research are required to overcome them. We created one such approach, called owl2proto, which provides an automatic translation of OWL ontologies into the protobuf data format. We showcase the seamless integration of an ontology and the transmission of semantic data in an existing cloud micro-service.
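To make the kind of translation concrete, here is a toy illustration of the general OWL-to-protobuf idea (not owl2proto's actual mapping rules: the class name, property names, and the XSD-to-protobuf type table are all assumptions for the sketch):

```python
# Toy sketch: an OWL class with datatype properties becomes a protobuf
# message definition, with each property mapped to a numbered field.

XSD_TO_PROTO = {"xsd:string": "string", "xsd:integer": "int64", "xsd:boolean": "bool"}

def owl_class_to_proto(class_name, properties):
    """properties: list of (property_name, xsd_datatype) pairs."""
    lines = [f"message {class_name} {{"]
    for i, (prop, dtype) in enumerate(properties, start=1):
        lines.append(f"  {XSD_TO_PROTO[dtype]} {prop} = {i};")
    lines.append("}")
    return "\n".join(lines)

proto = owl_class_to_proto(
    "VirtualMachine",
    [("id", "xsd:string"), ("cpu_cores", "xsd:integer"), ("encrypted", "xsd:boolean")],
)
print(proto)
```

A real translation must additionally handle class hierarchies, object properties, and cardinalities, which is where the research effort the authors call for lies.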

Paper Nr: 167
Title:

Quantifying Domain-Application Knowledge Mismatch in Ontology-Guided Machine Learning

Authors:

Pawel Bielski, Lena Witterauf, Sönke Jendral, Ralf Mikut and Jakob Bach

Abstract: In this work, we study the critical issue of knowledge mismatch in ontology-guided machine learning (OGML), specifically between domain ontologies and application ontologies. Such mismatches may arise when OGML uses ontological knowledge that was originally created for different purposes. Even if ontological knowledge improves the overall OGML performance, mismatches can lead to reduced performance on specific data subsets compared to machine-learning models without ontological knowledge. We propose a framework to quantify this mismatch and identify the specific parts of the ontology that contribute to it. To demonstrate the framework’s effectiveness, we apply it to two common OGML application areas: image classification and patient health prediction. Our findings reveal that domain-application mismatches are widespread across various OGML approaches, machine-learning model architectures, datasets, and prediction tasks, and can impact up to 40% of unique domain concepts in the datasets. We also explore the potential root causes of these mismatches and discuss strategies to address them.

Paper Nr: 170
Title:

On the Use of Ontologies for Defining, Generating and Exploring the Resulting Simulations of Application Level Protocols

Authors:

Mieczyslaw M. Kokar and Jakub J. Moskal

Abstract: This paper presents a simulator generator designed to aid in the development of domain-specific protocols that enable semantic communication, where messages include annotations specifying the meaning of individual fields. Furthermore, the field types are dynamic, meaning they are incorporated into messages based on the domain’s underlying ontology. To experiment with such domain-specific semantic protocols, designers require data for protocol evaluation. Simulation is a common solution to this need. Simulations are produced by simulators—software systems that take inputs and generate results. For typical applications, generic simulators are often available. This paper introduces a system, named dg, that generates simulators for semantic communication protocol designers, based on specifications (inputs and constraints) provided in an ontology. Additionally, we demonstrate how the same ontology can be used to explore simulation results. Finally, since our approach involves policies that account for information uncertainty in both simulator implementation and result querying, we propose initiating an effort to develop an uncertainty-related extension to the SPARQL query language.

Paper Nr: 177
Title:

Extending DEMO Action Rule Specifications’ Syntax in a Low Code Platform Based Municipality Hearing System Implementation

Authors:

David Aveiro, Vitor Freitas, Duarte Pinto, Valentim Caires and Dulce Pacheco

Abstract: The current official Design and Engineering Methodology for Organizations (DEMO) Action Rule Specifications are unnecessarily complex and ambiguous. They are also incomplete, lacking the ontological details required to derive a fully functional implementation, while containing mostly unneeded specifications. Additionally, this paper details our progress in developing a metamodel for DEMO's Action Model, using an Extended Backus-Naur Form (EBNF) syntax. These advancements were driven by our experience implementing a system for the case study on a no-code/low-code platform to support a local Municipality Hearings Process. This implementation was done on a low-code platform, developed by our team, that supports the direct execution of DEMO models. Among our contributions are the models and patterns generated from this implementation, which provide reusable solutions that can be adopted by other low-code platforms using a similar approach.

Paper Nr: 38
Title:

Optimization of Methods for Querying Formal Ontologies in Natural Language Using a Neural Network

Authors:

Anicet Lepetit Ondo, Laurence Capus and Mamadou Bousso

Abstract: A well-designed ontology must be capable of addressing all the needs it is intended to satisfy. This complex task involves gathering all the potential questions from future users that the ontology should answer in order to respond precisely to these requests. However, variations in the questions asked by users for the same need complicate the interrogation process. Consequently, the use of a question-answering system seems to be a more efficient option for translating user queries into the formal SPARQL language. Current methods face significant challenges, including their reliance on predefined patterns, the quality of models and training data, ontology structure, resource complexity for approaches integrating various techniques, and their sensitivity to linguistic variations for the same user need. To overcome these limitations, we propose an optimal classification approach to classify user queries into corresponding SPARQL query classes. This method uses a neural network based on Transformer encoder-decoder architectures, improving both the understanding and generation of SPARQL queries while better adapting to variations in user queries. We have developed a dataset on estate liquidation and Python programming, built from raw data collected from specialist forums and websites. Two transformer models, GPT-2 and T5, were evaluated, with the basic T5 model obtaining a satisfactory score of 97.22%.

Paper Nr: 72
Title:

A Methodology for Interpreting Natural Language Questions and Translating into SPARQL Query over DBpedia

Authors:

Davide Varagnolo, Dora Melo and Irene Pimenta Rodrigues

Abstract: This paper presents a methodology that allows a natural language question to be interpreted using an ontology called Query Ontology. From this representation, using a set of mapping description rules, a SPARQL query is generated to query a target knowledge base. In the experiment presented, the Query Ontology and the set of mapping description rules are designed over DBpedia as target knowledge base. The methodology is tested using QALD-9, a dataset of natural language queries widely used to test question-answering systems on DBpedia.
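A single mapping-description rule of the kind the abstract describes can be sketched as follows (the question pattern, SPARQL template, and DBpedia properties below are hypothetical examples, not rules from the paper's Query Ontology):

```python
# Minimal sketch: a recognised natural-language question form is mapped,
# via a description rule, to a SPARQL query template over DBpedia.
import re

RULES = [
    (re.compile(r"who wrote (?P<work>.+)\?", re.I),
     'SELECT ?author WHERE {{ ?work rdfs:label "{work}"@en . '
     '?work dbo:author ?author }}'),
]

def to_sparql(question):
    """Return a SPARQL query for the first matching rule, else None."""
    for pattern, template in RULES:
        match = pattern.match(question)
        if match:
            return template.format(**match.groupdict())
    return None

print(to_sparql("Who wrote The Lord of the Rings?"))
```

The paper's pipeline interposes an ontological representation of the question between these two ends, which is what lets one rule set serve many surface formulations.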

Paper Nr: 93
Title:

Applying the LOT Methodology to Enhance the Cinematic Heritage Archives

Authors:

Alessandro Cosentino, Webert Júnio Araújo and Inês Koch

Abstract: The Locarno Film Festival (LFF) archives represent a valuable collection of cinematic history, providing essential resources for research, education, and the promotion of international film culture. To ensure these resources are easily accessible, it is crucial to develop advanced methods for managing and linking the information they contain. This work focuses on creating a shared way for organizing information, transforming the LFF archives into dynamic, interconnected resources. This transformation is essential for preserving cinematic heritage, improving discoverability, promoting digital transformation, and efficiently managing archives. Using an interdisciplinary approach, we developed the OntoFest following the Linked Open Terms (LOT) Methodology. Significant outcomes of this project include the successful reuse of existing ontologies to manage heterogeneous information, which has improved our ability to understand and retrieve relevant data. This work demonstrates the potential of digital archives in the cinematic field and provides a foundation for future initiatives in digitizing cinematic heritage archives. OntoFest not only contributes to preserving the cinematic cultural heritage of the LFF but also lays the groundwork for new research and creative applications in the digital transformation of film festival archives.

Paper Nr: 98
Title:

Semantic-Aware Validation in Model-Driven Requirements Engineering Using SHACL

Authors:

Artan Markaj, Felix Gehlhoff and Alexander Fay

Abstract: The development and implementation of technical system concepts require validation to ensure that stakeholder needs, goals, and requirements are fulfilled. Model-driven requirements engineering focuses on the automatic transformation of requirements into concepts and can be supported by ontologies for semantically unambiguous specifications. However, automated and systematic requirements validation using ontologies remains a challenging process. In this contribution, we propose a concept consisting of a systematic workflow, algorithm, and templates for semantic-aware validation in model-driven requirements engineering using Shapes Constraint Language, a formal language for constraint-based ontology validation. The workflow begins with the definition of validation use cases from a requirements model. These use cases are modeled as ontologies using the same metamodel as the requirements. By using Shapes Constraint Language templates, shapes can be generated and enriched with use case-specific information. Lastly, engineering concepts are validated against the requirements by using the defined shapes.

Paper Nr: 126
Title:

HTEKG: A Human-Trait-Enhanced Literary Knowledge Graph with Language Model Evaluation

Authors:

Sreejith Sudhir Kalathil, Tian Li, Hang Dong and Huizhi Liang

Abstract: Knowledge Graphs (KGs) are a crucial component of Artificial Intelligence (AI) systems, enhancing AI’s capabilities in literary analysis. However, traditional KG designs in this field have focused more on events, often ignoring character information. To tackle this issue, we created a comprehensive Human-Trait-Enhanced Knowledge Graph, HTEKG, which combines past event-centered KGs with general human traits. The HTEKG enhances query capabilities by mapping the complex relationships and traits of literary characters, thereby providing more accurate and context-relevant information. We tested our HTEKG on three typical literary comprehension methods: traditional Cypher query, integration with a BERT classifier, and integration with GPT-4, demonstrating its effectiveness in literary analysis and its adaptability to different language models.

Paper Nr: 171
Title:

Traffic Detection and Forecasting from Social Media Data Using a Deep Learning-Based Model, Linguistic Knowledge, Large Language Models, and Knowledge Graphs

Authors:

Wasen Melhem, Asad Abdi and Farid Meziane

Abstract: Traffic data analysis and forecasting is a multidimensional challenge that extracts details from sources such as social media and vehicle sensor data. This study proposes a three-stage framework using Deep Learning (DL) and natural language processing (NLP) techniques to enhance the end-to-end pipeline for traffic event identification and forecasting. The framework first identifies relevant traffic data from social media using NLP, context, and word-level embeddings. The second phase extracts events and locations to dynamically construct a knowledge graph using deep learning and slot filling. A domain-specific large language model (LLM), enriched with this graph, improves traffic information relevancy. The final phase integrates Allen's interval algebra and region connection calculus to forecast traffic events based on temporal and spatial logic. This framework’s goal is to improve the accuracy and semantic quality of traffic event detection, bridging the gap between academic research and real-world systems, and enabling advancements in intelligent transport systems (ITS).
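The temporal-logic ingredient of the final phase can be illustrated with a small subset of Allen's thirteen interval relations (the event names and time spans are invented; the paper combines these relations with region connection calculus for the spatial side):

```python
# Sketch: classify the Allen relation between two event time intervals,
# as a forecasting phase might when ordering traffic events.

def allen_relation(a, b):
    """Return one of a small subset of Allen's interval relations between
    intervals a = (start, end) and b = (start, end)."""
    if a[1] < b[0]:
        return "before"
    if a[1] == b[0]:
        return "meets"
    if a[0] < b[0] and b[0] < a[1] < b[1]:
        return "overlaps"
    if a[0] == b[0] and a[1] == b[1]:
        return "equal"
    if a[0] >= b[0] and a[1] <= b[1]:
        return "during"
    return "other"

accident = (10, 20)    # accident interval reported on social media
congestion = (15, 40)  # resulting congestion interval
print(allen_relation(accident, congestion))  # -> overlaps
```

Inferring, say, that an accident "overlaps" the congestion it caused is the kind of qualitative temporal fact the knowledge graph can store and the forecasting logic can chain over.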

Paper Nr: 193
Title:

An Holistic Approach to Diagnostic and Therapeutic Care Pathways Management

Authors:

Domenico Redavid and Stefano Ferilli

Abstract: The European Commission EU4Health program (2021-2027) was launched after the severe health crisis caused by COVID-19 to support member states with long-term health challenges and to build more resilient health systems aimed at reducing inequalities in access to healthcare. In Italy, the PNRR program has among its goals the enhancement of Diagnostic and Therapeutic Care Pathways, in particular their complete informatisation, to reduce the gap currently present at the regional and, in some cases, the hospital level. This paper describes a possible AI framework as a starting point for a potential solution to this goal. The proposed solution involves the use of GraphDB for information persistence and evolved process management methods for the implementation of Care Pathways.