
2024 | Book

Research Challenges in Information Science

18th International Conference, RCIS 2024, Guimarães, Portugal, May 14–17, 2024, Proceedings, Part II

Edited by: João Araújo, Jose Luis de la Vara, Maribel Yasmina Santos, Saïd Assar

Publisher: Springer Nature Switzerland

Book series: Lecture Notes in Business Information Processing


About this book

This book constitutes the proceedings of the 18th International Conference on Research Challenges in Information Science, RCIS 2024, which took place in Guimarães, Portugal, during May 2024.

The scope of RCIS is summarized by the thematic areas of information systems and their engineering; user-oriented approaches; data and information management; business process management; domain-specific information systems engineering; data science; information infrastructures, and reflective research and practice.

The 25 full papers, 12 Forum papers, and 5 Doctoral Consortium papers included in these proceedings were carefully reviewed and selected from 100 submissions. They are organized in the following topical sections:

Part I: Data and information management; conceptual modelling and ontologies; requirements and architecture; business process management; data and process science; security; sustainability; evaluation and experience studies

Part II: Forum papers; doctoral consortium papers.

Table of contents

Frontmatter

Forum Papers

Frontmatter
Transfer Learning for Potato Leaf Disease Detection
Abstract
Deep learning techniques have demonstrated significant potential in the agriculture sector to increase the productivity, sustainability, and efficacy of farming practices. Potato is one of the world's primary staple foods, ranking fourth in global consumption. Detecting potato leaf diseases in their early stages is challenging due to the diversity among crop species, variations in disease symptoms, and the influence of environmental factors. In this study, we implemented five transfer learning models, VGG16, Xception, DenseNet201, EfficientNetB0, and MobileNetV2, for 3-class potato leaf disease classification and detection using a publicly available potato leaf disease dataset. Image preprocessing, data augmentation, and hyperparameter tuning are employed to improve the efficacy of the proposed models. The experimental evaluation shows that VGG16 achieves the highest accuracy of 94.67%, precision of 95.00%, recall of 94.67%, and F1 score of 94.66%. The proposed model produced better results than similar studies and can support better decision-making in the agriculture industry for early detection and prediction of plant leaf diseases.
Shahid Mohammad Ganie, K. Hemachandran, Manjeet Rege
Knowledge Graph Multilevel Abstraction: A Property Graph Reification Based Approach
Abstract
Adding knowledge to data or information is an essential step for new information systems, which need to break down silos in order to support a large diversity of applications. Conventional integration approaches have difficulty meeting the flexibility required by new needs, mainly because they rely on rigid schemas. Graph-based approaches, such as knowledge graphs, are promising, as they allow the use of different graph models such as RDF or property graphs. However, they do not make it easy to describe complex relationships at different levels of abstraction. The reification process, already well studied for RDF, is a promising way to add new representation capabilities. This paper delves into knowledge reification within the property graph model as a novel approach to enhance the expressivity of the knowledge graph model by adding the capability to represent complex relationships and multilevel abstractions. Based on a study of reification models for RDF, we formalize a new model for property graphs by generalizing the different reification techniques.
Selsebil Benelhaj-Sghaier, Annabelle Gillet, Éric Leclercq
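The abstract's central idea, turning an edge into a node so the relationship itself can carry properties and participate in further, higher-level edges, can be illustrated with a toy sketch. The dictionary encoding and all names below are illustrative assumptions for exposition, not the authors' formal model:

```python
# Toy property graph: nodes and edges with labels and properties.
graph = {
    "nodes": {1: {"label": "Person", "props": {"name": "Ada"}},
              2: {"label": "Company", "props": {"name": "Acme"}}},
    "edges": {10: {"src": 1, "dst": 2, "label": "WORKS_FOR",
                   "props": {"since": 2020}}},
}

def reify_edge(graph, edge_id, node_id, in_edge_id, out_edge_id):
    """Reify edge `edge_id`: replace it by a node carrying the edge's label
    and properties, linked to the original endpoints, so the former
    relationship can itself be the endpoint of new edges."""
    e = graph["edges"].pop(edge_id)
    graph["nodes"][node_id] = {"label": e["label"], "props": e["props"]}
    graph["edges"][in_edge_id] = {"src": e["src"], "dst": node_id,
                                  "label": "SOURCE", "props": {}}
    graph["edges"][out_edge_id] = {"src": node_id, "dst": e["dst"],
                                   "label": "TARGET", "props": {}}
    return node_id
```

After reification, the former WORKS_FOR edge is an ordinary node, so a higher-level edge (for example, an annotation about the employment itself) can point at it, which is what enables multilevel abstraction.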
ExpO: Towards Explaining Ontology-Driven Conceptual Models
Abstract
Ontology-driven conceptual models play an explanatory role in complex and critical domains. However, since those models may consist of a large number of elements, including concepts, relations, and sub-diagrams, their reuse or adaptation requires significant effort. While conceptual model engineers tend to be biased against the removal of information from the models, general users struggle to fully understand them. The paper describes ExpO, a prototype that addresses this trade-off with three components: (1) an API that implements model transformations, (2) a software plugin aimed at modelers working with the OntoUML language, and (3) a web application for model exploration designed mostly for domain experts. We describe the characteristics of each component and specify possible usage scenarios.
Elena Romanenko, Diego Calvanese, Giancarlo Guizzardi
Translucent Precision: Exploiting Enabling Information to Evaluate the Quality of Process Models
Abstract
An event log stores information about the activities executed in a process. Conformance-checking techniques measure the quality of a process model with respect to an event log. One of the investigated quality dimensions is precision, which relates the behavior of a log to the behavior of a model. Some event logs also store information about enabled activities besides the actually executed activities; these are called translucent event logs. A common technique for measuring precision is based on escaping arcs. However, this technique does not consider the information on enabled activities contained in a translucent event log. This paper provides a formal definition of how to compute a precision score that takes translucent information into account. We discuss our method using a translucent event log and four different models. Our translucent precision score conveys the underlying concept by considering more information.
Harry Herbert Beyel, Wil M. P. van der Aalst
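As background, the classical escaping-arcs idea that this abstract extends can be sketched in plain Python. This is a simplified illustration of standard escaping-arcs precision, not the authors' translucent variant; `model_enabled` is a hypothetical callback returning the set of activities the model enables after a given trace prefix:

```python
from collections import Counter, defaultdict

def escaping_arcs_precision(log, model_enabled):
    """Escaping-arcs precision: for every prefix observed in the log,
    compare the activities the model enables with those actually observed
    next in the log; model-enabled activities never observed are 'escaping'."""
    prefix_weight = Counter()            # how often each prefix occurs
    observed_next = defaultdict(set)     # activities seen after each prefix
    for trace in log:
        for i in range(len(trace) + 1):
            prefix = tuple(trace[:i])
            prefix_weight[prefix] += 1
            if i < len(trace):
                observed_next[prefix].add(trace[i])
    allowed_total = 0
    escaping_total = 0
    for prefix, w in prefix_weight.items():
        allowed = model_enabled(prefix)
        escaping = allowed - observed_next[prefix]
        allowed_total += w * len(allowed)
        escaping_total += w * len(escaping)
    # Precision is 1 minus the weighted fraction of escaping arcs.
    return 1 - escaping_total / allowed_total if allowed_total else 1.0
```

A "flower" model that enables every activity after every prefix scores far below 1, while a model enabling exactly the observed continuations scores 1.0; the paper's contribution is to replace the observed continuations with the richer enabling information recorded in a translucent log.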
IPMD: Intentional Process Model Discovery from Event Logs
Abstract
Intention Mining is a crucial aspect of understanding human behavior. It focuses on uncovering the hidden intentions and goals that guide individuals in their activities. We propose IPMD (Intentional Process Model Discovery), an approach that combines Frequent Pattern Mining, Large Language Models, and Process Mining to construct intentional process models that capture the human strategies underlying decision-making and activity execution. This combination aims to identify recurrent sequences of actions revealing the strategies (recurring patterns of activities) that users commonly apply to fulfill their intentions. These patterns are used to construct an intentional process model that follows the MAP formalism based on strategy discovery.
Ramona Elali, Elena Kornyshova, Rébecca Deneckère, Camille Salinesi
Forensic-Ready Analysis Suite: A Tool Support for Forensic-Ready Software Systems Design
Abstract
Forensic-ready software systems integrate preparedness for digital forensic investigation into their design. This includes ensuring the production of potential evidence with sufficient coverage and quality to improve the odds of a successful investigation or admissibility. However, designing such software systems is challenging without in-depth forensic readiness expertise. Thus, this paper presents a tool suite to help the designer. It includes a graphical editor for creating system models in the BPMN4FRSS notation, an extended BPMN with forensic readiness constructs, and an analyser utilising the Z3 solver for satisfiability checking of formulas derived from the models. It verifies the models' validity, provides targeted hints to enhance forensic readiness capabilities, and allows for what-if analysis of potential evidence quality.
Lukas Daubner, Sofija Maksović, Raimundas Matulevičius, Barbora Buhnova, Tomáš Sedláček
Control and Monitoring of Software Robots: What Can Academia and Industry Learn from Each Other?
Abstract
Robotic Process Automation (RPA) has witnessed significant growth, becoming widely adopted in practice. This surge in the use of RPA technology has given rise to new challenges, particularly concerning the effective control and monitoring of software robots. Ideally, academia and industry would work together on developing new RPA capabilities, but both domains operate rather separately. In this paper, we employ an explorative approach to examine how academic theories can improve industrial RPA practices and vice versa. By analyzing both academic literature and leading RPA platforms, we present four recommendations for academia, four for industry, and a general recommendation aiming to advance the collaboration between them.
Kelly Kurowski, Antonio Martínez-Rojas, Hajo A. Reijers
Creating a Web-Based Viewer for an ADOxx-Based Modeling Toolkit
Abstract
ADOxx is an environment that allows the creation of a toolkit for a specific modeling technique with limited development resources. The result is a toolkit for professional modelers that runs on Windows, Linux, or macOS. However, such a toolkit is not very friendly for non-IT stakeholders, as it is primarily aimed at developing the models; it also requires non-trivial installation. This paper is devoted to the design of a web-based viewer that allows a stakeholder to view a package of models created in an ADOxx-based toolkit. The viewer discussed in this paper was developed for a specific modeling technique called Fractal Enterprise Model (FEM). However, the discussion is of interest not only to modelers using FEM but also to developers of other ADOxx-based tools. The paper discusses the structure and functionality of the FEM viewer, which can be reused for other toolkits. The authors aim to demonstrate the FEM viewer during the conference.
Ilia Bider, Siim Langel
Ontology-Based Interaction Design for Social-Ecological Systems Research
Abstract
Contemporary social-ecological systems (SESs) research that supports policy and decision-making to tackle sustainability issues requires interdisciplinary and often multistakeholder synergy. Various frameworks have been developed to describe and understand SESs, each producing different kinds of data and knowledge. The resultant lack of interoperability spurred our development of an ontologically grounded SESs integrated conceptual model. This paper explores the deployment of that model and describes techniques for ontology-based interaction design to clarify notions, align perspectives, and negotiate terminologies and semantics in inter- and transdisciplinary collaboration settings. We offer examples of interaction scripts that utilise ontologies, discursive artefacts, and game and play methods, and report on an exploratory workshop playtest that provided preliminary evidence of the potential of ontology-based participatory sense-making for knowledge co-production.
Max Willis, Greta Adamo
Scriptless and Seamless: Leveraging Probabilistic Models for Enhanced GUI Testing in Native Android Applications
Abstract
The growing mobile app market demands effective testing methods. Scriptless testing at the Graphical User Interface (GUI) level allows test automation without traditional scripting. Nevertheless, existing scriptless tools lack efficient prioritization and customization of oracles and require manual effort to add application-specific context, hindering rapid application releases. This paper presents Mint, an alternative tool that addresses these drawbacks. Preliminary results indicate its capability to detect accessibility problems.
Olivia Rodríguez-Valdés, Kevin van der Vlist, Robbert van Dalen, Beatriz Marín, Tanja E. J. Vos
Identifying Relevant Data in RDF Sources
Abstract
The increasing number of RDF data sources published on the web represents an unprecedented amount of information. However, querying these sources to extract the information relevant to a specific need, represented by a target schema, is a complex task, as the alignment between the target and source schemas might be missing or incomplete. This paper presents an approach that aims to automatically populate the classes of a target schema. Our approach relies on a semi-supervised learning algorithm that iteratively identifies instance patterns in the data source representing candidate instances for the target schema. We present preliminary experiments showing the effectiveness of our approach.
Zoé Chevallier, Zoubida Kedad, Béatrice Finance, Frédéric Chaillan
Novelty-Driven Evolutionary Scriptless Testing
Abstract
In recent years, scriptless Graphical User Interface (GUI) testing has been positioned as a complement to traditional testing techniques. Automated scriptless GUI testing approaches use Action Selection Rules (ASRs) to generate on-the-fly test sequences when testing a software system. Currently, random selection is the standard approach in scriptless testing, which causes drawbacks in the testing process, such as test sequences that do not reflect human testing strategies and an inability to deal with multistep tasks. This paper presents an alternative selection approach based on a grammar for designing the ASRs and an Evolutionary Algorithm (EA) with Novelty Search (NS) to direct the evolution process. Preliminary testing shows that the ASRs do evolve in the standard EA process. Further research is needed to show the benefits of the additional NS for the testing process.
Lianne V. Hufkens, Tanja E. J. Vos, Beatriz Marín

Doctoral Consortium Papers

Frontmatter
Strengthening Cloud Applications: A Deep Dive into Kill Chain Identification, Scoring, and Automatic Penetration Testing
Abstract
The need to anticipate and defend against potential threats is paramount in cybersecurity. This study addresses two fundamental questions: what attacks can be performed against my system, and how can these attacks be thwarted?
Addressing the first question, this work introduces an innovative method for generating executable attack programs, showcasing the practicality of potential breach scenarios. This approach not only establishes the theoretical vulnerability of a system but also underscores its susceptibility to exploitation.
To respond to the second question, the proposed approach explores a range of mechanisms to counter and thwart the exposed attack strategies. The aim is to use robust and adaptive defensive strategies, leveraging insights from the demonstrated attack programs. These mechanisms encompass proactive measures, such as automatic penetration testing and behavior analysis, and reactive approaches, such as rapid patch deployment and vulnerability prioritization. The resilience of systems against potential breaches can be enhanced by intertwining attack pathways with comprehensive countermeasures, thereby disrupting the adversary's kill chains. This study aims to contribute to the security of containerized applications deployed in different environments, such as the Cloud, Edge, 5G, Internet of Things (IoT), and Industrial IoT (IIoT), by taking these scenarios as case studies.
This research contributes to the evolution of cyber threat analysis through a Design Science Research (DSR) approach, focusing on developing and validating artifacts, tools, and frameworks. Defenders can anticipate, combat, and ultimately mitigate emerging threats in an increasingly complex digital environment by creating tangible attack programs and formulating effective thwarting mechanisms.
Stefano Simonetto
Improving Understanding of Misinformation Campaigns with a Two-Stage Methodology Using Semantic Analysis of Fake News
Abstract
The Internet and social media are fueling the spread of disinformation on an unprecedented scale. Numerous tactics and techniques, such as Fake News, are employed to seek geopolitical advantages or financial gains. Many studies have focused on the automatic detection of Fake News, particularly using machine learning techniques. However, an informational attack often involves various vectors, targets, authors, and content. Detecting such an attack requires a global analysis of multiple Fake News instances. This research proposal aims to assist specialists, such as intelligence analysts or journalists responsible for combating disinformation, in better characterizing and detecting informational attacks.
We propose a framework based on a two-stage approach. The first stage involves extracting valuable knowledge from each Fake News using both Artificial Intelligence and Natural Language Processing (NLP) techniques. The second stage entails aggregating the collected information using data analysis methods to facilitate the characterization and identification of disinformation campaigns.
Sidbewendin Angelique Yameogo
Automated Scriptless GUI Testing Aligned with Requirements and User Stories
Abstract
Testing is an essential phase of software development for evaluating product quality. Scriptless testing is a prominent technique that makes this phase efficient. However, there is a research gap in automating the testing process from the requirements. In this research, we propose an innovative approach: Automated Scriptless GUI Testing Aligned with Requirements and User Stories. Building on the open-source GUI testing tool TESTAR, we aim to develop an AI-powered extension that enables TESTAR to test software against specified requirements and user stories.
Mohammadparsa Karimi
Towards a Cybersecurity Maturity Model Specific for the Healthcare Sector: Focus on Hospitals
Abstract
The intersection of healthcare and technology has brought unprecedented advancements, improving patient care and enhancing operational efficiency. However, this integration has also exposed the healthcare sector to significant cybersecurity challenges. With the increasing digitization of patient records and the reliance on interconnected systems, healthcare organizations are becoming attractive targets for malicious actors seeking to exploit vulnerabilities for financial gain or to disrupt critical healthcare services. Our main contribution is a cybersecurity maturity model specific to the healthcare sector, with a focus on hospitals, based on a rigorous Design Science Research methodology. In other words, this research aims to investigate and address the multifaceted cybersecurity issues within the healthcare sector, focusing on hospitals, analyzing their cybersecurity profiles, and proposing effective ways to accelerate cyber risk assessment in order to safeguard patient data, maintain system integrity, and ensure the continuity of healthcare services.
Steve Ahouanmenou
Towards a Hybrid Intelligence Paradigm: Systematic Integration of Human and Artificial Capabilities
Abstract
The evolution of Artificial Intelligence from traditional inference-based systems to sophisticated generative models has blurred the boundaries between machine and human capabilities, giving rise to Hybrid Intelligence (HI). HI represents a symbiotic relationship between human and artificial intelligence, integrating human wisdom and expertise with machine intelligence. This work aspires to explore the paradigm shift towards HI, with a focus on integrating human expertise with machine intelligence. It aims to address challenges in human-machine interaction and dynamic task management within HI systems, emphasizing the necessity for seamless integration to fully exploit the capabilities of both entities. Through interdisciplinary collaboration and empirical inquiry, this research endeavors to advance understanding and implementation of HI systems across diverse domains, paving the way for systems that harness the intelligence of humans and machines to tackle complex challenges.
Antoni Mestre
Backmatter
Metadata
Title
Research Challenges in Information Science
Edited by
João Araújo
Jose Luis de la Vara
Maribel Yasmina Santos
Saïd Assar
Copyright year
2024
Electronic ISBN
978-3-031-59468-7
Print ISBN
978-3-031-59467-0
DOI
https://doi.org/10.1007/978-3-031-59468-7
