Articles and conference presentations

Permanent URI for this collection: https://digital.cic.gba.gob.ar/handle/11746/543

Recent submissions

Showing 1 - 20 of 328
  • Conference paper
    Open Access
    RDF Graph GPT - Building Knowledge Graphs with Chat GPT
    (2025) Borrelli Zara, Juana; Fernández, Alejandro; Torres, Diego
    This paper introduces the RDF Graph GPT app, a tool that automatically translates natural language texts into RDF knowledge graphs using Artificial Intelligence. We dissect the various aspects of this core functionality, as well as additional features the app provides, such as the graph visualization of any RDF text with a choice between two graph types. The paper also gives an overview of the connection with the AI, more precisely Chat GPT, and of the prompt structure that leads to the desired results. The app was tested with different types of natural language texts and different prompts to enable a richer analysis of the results; this evaluation was carried out by experts, who drew conclusions about the scope and limitations of the tool. The promising results show both the interest of the task and the potential of the resulting app. The app thus highlights AI as a resource for building powerful tools that translate natural language texts into structured formats such as RDF, a task whose automation is highly desirable given its implications and applications for adding semantics to data.
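As an illustration of the target representation, the sketch below serializes hand-written subject-predicate-object triples as Turtle. The namespace, the triples, and the example sentence are hypothetical stand-ins for what the app's prompts would actually extract:

```python
# Minimal sketch: serializing extracted (subject, predicate, object)
# triples as Turtle-style RDF. The triples are hand-written stand-ins
# for what a language model would extract from free text.
EX = "http://example.org/"  # hypothetical namespace

def to_turtle(triples):
    """Render (s, p, o) triples as simple Turtle statements."""
    lines = [f"@prefix ex: <{EX}> ."]
    for s, p, o in triples:
        lines.append(f"ex:{s} ex:{p} ex:{o} .")
    return "\n".join(lines)

# Stand-in for model output on: "Juana wrote a paper about knowledge graphs."
triples = [("Juana", "wrote", "Paper1"), ("Paper1", "hasTopic", "KnowledgeGraphs")]
print(to_turtle(triples))
```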
  • Conference paper
    Open Access
    Evaluating Information Extraction Approaches in the Construction of a Real Estate Observatory
    (2025) Tanevitch, Luciana; Antonelli, Leandro; Torres, Diego
    A real estate observatory plays a significant role in the aggregation and analysis of real estate market data. The information that lies in real estate advertisements can be leveraged to populate such an observatory. However, this data can present itself in both a structured and an unstructured manner. Unstructured data is a problem to process and extract information from automatically, since it lacks a predefined structure. Thus, there is a need for techniques to give structure to unstructured data. Information Extraction (IE) is the process of deriving structured data from unstructured data. Natural Language Processing techniques enable machines to understand texts, making them particularly significant in the context of IE. This work evaluates both rule-based and machine-learning-based IE approaches to extract features from real estate descriptions within advertisements. Those features are relevant in the context of real estate observatory construction. The performance of each approach is measured using precision, recall and F1-score metrics.
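The evaluation metrics named above can be computed over the sets of extracted features as sketched below; the feature sets are invented examples, not data from the study:

```python
def precision_recall_f1(predicted, gold):
    """Compute IE evaluation metrics over sets of extracted features."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)  # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

# Toy example: features extracted from one listing vs. a gold annotation.
pred = {"3 bedrooms", "garage", "pool"}
gold = {"3 bedrooms", "garage", "garden"}
p, r, f = precision_recall_f1(pred, gold)
print(round(p, 2), round(r, 2), round(f, 2))  # 0.67 0.67 0.67
```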
  • Conference paper
    Open Access
    What the Semantic Web can do for Cognitive Digital Twins: Challenges and Opportunities
    (2025) Lezoche, Mario; Torres, Diego
    The Cognitive Digital Twin (CDT) is an advanced version of the Digital Twin model. It integrates cognitive computing technologies to create systems that not only connect but also reason, learn from past experiences, and make informed decisions. Integrating machine learning algorithms and artificial intelligence allows CDTs to process and interpret data. This cognitive capability enables the digital twin to function with a layer of intelligence that mimics human cognitive abilities, making the system adaptable to its environment and capable of handling complex decision-making processes autonomously. The cognitive features of CDTs are crucial as they enable the system to predict future states, identify potential problems before they occur, and suggest mitigating actions. Furthermore, semantic web technologies can facilitate advanced analytics and machine learning within CDTs. This article offers a rapid analysis of how Semantic Web approaches can support several aspects of CDT models.
  • Conference paper
    Open Access
    Aproximaciones a la alfabetización crítica en IA en la enseñanza de la literatura en el nivel secundario
    (2024) Artopoulos, Alejandro Martín; Lliteras, Alejandra Beatriz
    From a sociomaterial perspective, we analyze the impact of AI on education, arguing that AI changes the paradigm of technology integration in the classroom. It moves beyond digital inclusion to face higher-order cognitive challenges that affect all students equally. Generative AI modifies the sociomateriality of texts, affecting reading comprehension and readers' agency in producing them. This work sets out to advance an agenda of critical AI literacy as a fundamental pillar in the formation of digitally sovereign citizens. It also explores the pedagogies of the digital humanities as a source for developing critical computational thinking. To address the destabilization of reading caused by AI, we propose developing "distant reading" skills as a method for acquiring critical digital competencies in the field of literature. Three distant-reading experiments using the play La Casa de Bernarda Alba are presented. We conclude that triangulating "distant reading" with "close reading" is essential to achieve deep comprehension in reading practices.
  • Conference paper
    Open Access
    Alfadatizando: enseñanza en nivel secundario de pensamiento computacional para humanidades digitales
    (2025) Lliteras, Alejandra Beatriz; Artopoulos, Alejandro Martín; Fernández, Alejandro
    This work presents an ongoing line of research, framed within a doctoral topic, that aims to develop computational thinking through data visualization in the digital humanities at the high school level. For the digital humanities to be adopted at this level, teaching strategies must be established for the different disciplines involving certain computational methods, allowing computational thinking to be developed in a disciplinary way, in order to reflect both on how digital technology is used and on what is done through it, and also to think about how to propose a solution. This not only enables new forms of knowledge and practice in the humanities, but also contributes to a more critical and reflective citizenry.
  • Conference paper
    Open Access
    Incrementally updating Concept Lattices in Arbitrarily Distributed Formal Contexts
    (2025) Leutwyler, Nicolás; Lezoche, Mario; Panetto, Hervé; Torres, Diego
    Decision-making can be fostered by knowledge extraction methods such as Formal Concept Analysis (FCA). In certain contexts, however, information is not available as a whole at all times, for instance when it is distributed and consulting all of it would be too time-consuming. Only one batch algorithm for concept lattice computation does not require full knowledge of the entire set of attributes, and batch algorithms are not well suited to stream processing. For that reason, in this article we present an incremental algorithm for computing a concept lattice from an arbitrarily distributed formal context, and we compare its complexity with that of the existing distributed algorithm.
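For readers unfamiliar with FCA, the sketch below shows the two derivation operators on a tiny, made-up formal context; a formal concept is a pair (extent, intent) closed under both operators:

```python
# Sketch of the two FCA derivation operators on a small formal context.
# The context maps each object to its attribute set (toy data).
context = {
    "o1": {"a", "b"},
    "o2": {"a", "c"},
    "o3": {"a", "b", "c"},
}

def intent(objects, ctx):
    """Attributes shared by all given objects (derivation on objects)."""
    sets = [ctx[o] for o in objects]
    return set.intersection(*sets) if sets else set.union(*ctx.values())

def extent(attrs, ctx):
    """Objects having all given attributes (derivation on attributes)."""
    return {o for o, a in ctx.items() if attrs <= a}

# Applying one operator after the other yields a closed pair, i.e. a concept.
e = extent({"b"}, context)   # objects with attribute b
i = intent(e, context)       # attributes common to those objects
print(sorted(e), sorted(i))
```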
  • Conference paper
    Open Access
    Prospective user interfaces for decision-making processes in multidimensional agriculture
    (2025) Valencia Arango, Erika Andrea; Fernández, Alejandro; Collazos, Cesar
    Prospective user interfaces play a crucial role in supporting decision-making processes within the multifaceted landscape of agriculture. This study aims to develop a software tool that integrates prospective user interfaces with the Analytic Hierarchy Process (AHP) model to assist in selecting agrochemicals based on criteria such as price, environmental impact, and toxicity. By leveraging dynamic visual tools and interactive interfaces, the proposed system empowers farmers and stakeholders to anticipate and evaluate future scenarios effectively in the context of diverse agricultural dimensions. The integration of prospective user interfaces with the AHP model enhances decision-making by providing a structured framework to prioritize criteria and alternatives. This facilitates informed and sustainable agricultural practices while addressing complex challenges related to environmental impact and economic viability. The software tool will enable users to input their preferences, weigh different criteria according to their importance, and generate recommendations based on the AHP analysis. This study highlights the importance of technological innovation in advancing decision-making capabilities within multidimensional agriculture, ultimately contributing to more efficient and responsible agricultural practices.
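The AHP weighting step can be sketched with the standard column-normalization approximation of the priority vector; the criteria and judgment values below are illustrative, not taken from the study:

```python
# Sketch of AHP priority computation by column normalization: normalize
# each column of the pairwise comparison matrix, then average the rows.
criteria = ["price", "environmental impact", "toxicity"]  # illustrative
# pairwise[i][j]: how strongly criterion i is preferred over criterion j
pairwise = [
    [1.0, 3.0, 2.0],
    [1/3, 1.0, 0.5],
    [0.5, 2.0, 1.0],
]

def ahp_priorities(m):
    """Approximate the AHP priority vector (weights sum to 1)."""
    n = len(m)
    col_sums = [sum(m[i][j] for i in range(n)) for j in range(n)]
    normalized = [[m[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
    return [sum(row) / n for row in normalized]

weights = ahp_priorities(pairwise)
print([round(w, 3) for w in weights])  # one weight per criterion, summing to 1
```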
  • Conference paper
    Open Access
    Análisis comparativo de arquitecturas de NLP para detectar similitudes entre escenarios en español
    (2025) Pérez, Gabriela Alejandra; Mostaccio, Catalina Alba; Antonelli, Leandro
    Requirements engineering is a critical phase of software development, since it identifies and defines the system's requirements. It involves clients and developers, who must communicate effectively despite using different vocabularies. One of the artifacts used for this purpose is the scenario, which specifies domain knowledge in natural language. Requirements specification is collaborative work, so it is essential to detect similar scenarios early in order to avoid duplicated effort. A common technique for identifying similarities between sentences is to use LLMs to generate vector representations that capture the semantic meaning of phrases in a high-dimensional space. However, these tend to produce false positives when two sentences use similar terms with different meanings, owing to the superficial proximity of their embeddings in the vector space. In this work, we analyze natural language processing architectures based on encoder-decoder models to detect similarities between scenarios written in Spanish. To address the limitations of traditional encoders, we analyze other strategies that efficiently combine encoding and decoding architectures. This research seeks to determine whether these approaches can increase precision and reduce the false-positive rate in scenarios with diverse terminology.
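The embedding-based similarity described above typically reduces to cosine similarity between sentence vectors. The 3-dimensional vectors below are toy values (real sentence embeddings have hundreds of dimensions); close vectors can share surface terms yet differ in meaning, which is exactly the false-positive risk noted in the abstract:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy 3-d "embeddings" of two scenarios (illustrative values only).
scenario_a = [0.9, 0.1, 0.3]
scenario_b = [0.8, 0.2, 0.4]
print(round(cosine_similarity(scenario_a, scenario_b), 3))
```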
  • Conference paper
    Open Access
    An end-user tool for supporting frequent multisource web search workflows
    (2025) Tacuri, Alex; Ripa, Gonzalo; Fernández, Alejandro; Rossi, Gustavo Héctor; Gutierrez Saravia, Juan Cruz; Firmenich, Sergio
    Current web browser support for search tasks often leads to tedious and time-consuming processes, especially when information needs span various domains and when these tasks are performed frequently. This paper addresses this challenge by proposing an approach that empowers end users to seamlessly conduct complex web search workflows. By prioritizing end-user programming and controllability, our approach enables users to customize and automate their search processes according to individual preferences and needs. We emphasize the importance of maintaining user control and awareness of information sources. Through our approach, we aim to enhance user experience and efficiency in accessing information across diverse online platforms.
  • Conference paper
    Open Access
    Detección de idiomas como tarea de curaduría de datos para repositorios institucionales: desempeño de bibliotecas disponibles y modelos de lenguaje
    (2024) Nusch, Carlos Javier; Cagnina, Leticia Cecilia; Errecalde, Marcelo Luis; Antonelli, Leandro; De Giusti, Marisa Raquel
    - Problem statement: The enormous volume of resources currently stored in digital repositories makes it very difficult to supervise and correct errors or to improve metadata quality. This work focuses on correcting the language metadata field in the abstract records of the SEDICI institutional repository. - Materials and methodology: Starting from a dataset of some 126,081 items exported from the repository, an automatic language detection task was run using several existing zero-shot libraries (langdetect, CLD3, fastText, Polyglot, langid and TextCat). The results were then compared with the languages recorded by the repository's cataloguing staff. To try to improve detection further, a multilingual mBERT model was trained and its performance was compared on the smaller set of items whose language classification differed between humans and the Polyglot library. - Results: In general, all the language detection libraries agreed with the human-catalogued languages about 95% of the time. For the trained mBERT models, agreement was low both with the languages detected automatically by Polyglot and with those catalogued by humans (78.7% and 19.6% respectively). Cataloguing errors attributable to humans were found, but so were errors by the libraries and language models in the detection task.
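As a toy illustration of zero-shot language detection (not the behavior of any of the libraries compared in the study), a stopword-overlap heuristic can be sketched as follows; the word lists are tiny and purely illustrative:

```python
# Naive stopword-overlap language detector, a toy stand-in for zero-shot
# libraries such as langdetect or langid. Word lists are illustrative.
STOPWORDS = {
    "es": {"el", "la", "de", "que", "y", "en", "los", "se"},
    "en": {"the", "of", "and", "to", "in", "is", "that", "it"},
    "pt": {"o", "a", "de", "que", "e", "em", "os", "se"},
}

def detect_language(text):
    """Return the language whose stopwords overlap the text the most."""
    words = text.lower().split()
    scores = {lang: sum(w in sw for w in words) for lang, sw in STOPWORDS.items()}
    return max(scores, key=scores.get)

print(detect_language("el repositorio almacena los metadatos de la colección"))  # es
```

A real system would use character n-gram statistics or a trained model rather than word lists, which is what the compared libraries do internally.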
  • Article
    Open Access
    Research Infrastructures in Citizen Science: State of Knowledge and Taxonomic Framework As a Pathway to Sustainability
    (2025) Soacha-Godoy, Karen; Prakash, Pranesh; Álvarez, Ana; Giraldo Tejada, Adriana; López-Borrull, Alexandre; Salvador, Xavier; Rodero, Carlos; Torres, Diego; Rodero, Iván; Ayure, Mabel; Piera, Jaume
    Citizen science platforms (CSPs) and citizen observatories (COs) are rapidly expanding research infrastructures (RIs) that support the growth of citizen science. These systems have boosted data collection capabilities and broadened participant engagement across spatial and demographic dimensions. Despite their essential role in advancing citizen science, the current state of knowledge of these infrastructures remains largely unexplored, affecting both theoretical understanding and practical implementation. The study discussed herein addresses this knowledge gap through a systematic review of 474 articles, with in-depth analysis of 72 publications spanning a 15-year period across multiple disciplinary domains. The methodological framework integrates bibliometric analysis with qualitative investigation, utilizing the Web of Science and Scopus databases, supplemented by grey literature from Zenodo and Google Scholar. Findings indicate that research in this field has developed across three main waves: technological development and engagement, monitoring systems and openness, and frontier technologies. This evolution reflects a progression in the CSP body of knowledge from technical documentation to complex socio-technological systems. Analysis of 450 articles identified 98 unique terms referring to CSPs, highlighting conceptual fragmentation. To clarify this overlapping landscape, we propose a purpose-based taxonomic framework comprising nine platform categories, contributing to a clearer understanding of the CSPs' role in citizen science. Additionally, our systematic analysis reveals key research trajectories essential for strengthening CSPs and COs as sustainable infrastructures.
  • Article
    Open Access
    Ciencia Ciudadana en el arbolado urbano: integración interdisciplinaria, nuevos aportes
    (2025) Nazarre, Rosalina Amneris; Correa, Natalia; de Antueno, Lucía; Delpino, Juan Pablo; Fernández, Alejandro; Torres, Diego
    This article extends work from 2023 that presented the activities carried out within the extension project of the Facultad de Informática of the Universidad Nacional de La Plata (UNLP) entitled "Ciencia Ciudadana en el censado del arbolado urbano". Here, as a complement, we describe the new contributions proposed from a qualitative and interdisciplinary perspective and the results obtained in the experience.
  • Article
    Open Access
    El uso de la Inteligencia Artificial (IA) y Procesamiento de Lenguaje Natural (PLN) para mejorar la transparencia del servicio de Justicia
    (2024) Giannini, Leandro; Martínez, Diego; Delle Ville, Juliana; Antonelli, Leandro; Grigera, Julián
    This paper presents the results of the second phase of the project implementing artificial intelligence and natural language processing tools in the production of indicators of the functioning of the Supreme Court (CS). The project, developed jointly by members of the Instituto de Derecho Procesal and the Laboratorio de Investigación y Formación en Informática Avanzada, both of the Universidad Nacional de La Plata, seeks to promote the incorporation of AI and NLP tools to foster active transparency practices within the justice system.
  • Article
    Open Access
    Investigating STEM Students’ First-Time Experience with Smart Glasses
    (2022) Santana, Ronny; Rossi, Gustavo Héctor; Méndez, Gonzalo Gabriel; Rybarczyk, Yves; Vera, Francisco; Rodríguez, Andrés
    We study how STEM students experience the use of smart glasses for the first time. We evaluate the glasses' usability, degree of technological acceptance, experience, and elicited emotional response. To this end, we resort to several quantitative instruments and semi-structured interviews. We found that students greatly appreciate the potential and current support that smart glasses and AR provide as educational tools. We discuss our findings and identify opportunities for further research with these devices to support educational activities.
  • Article
    Open Access
    A co-training model based in learning transfer for the classification of research papers
    (2024) Cevallos-Culqui, Alex; Pons, Claudia Fabiana; Rodríguez, Gustavo
    A multitude of scholarly papers can be accessed online, and their continual growth poses challenges in categorization. In diverse academic fields, organizing these documents is important, as it assists institutions, journals, and scholars in structuring their content to improve the visibility of research. In this study, we propose a co-training model based on transfer learning to classify papers according to institutional research lines. We utilize co-training text processing techniques to enhance model learning through transformers, enabling the identification of trends and patterns in document texts. The model is structured with two views (titles and abstracts) for data preprocessing and training. Each input employs different document representation techniques that augment its training using BERT's pre-trained scheme. To evaluate the proposed model, a dataset comprising 898 institutional papers is compiled. These documents undergo classification prediction into five or eleven classes, and the model's performance is compared with individually trained models from each view using the BART pre-trained scheme and with combined models. The best precision level of 0.87 has been achieved, compared to the BERT pre-trained model's metric of 0.78 (five classes). These findings suggest that co-training models can be a valuable approach to improving the predictive performance of text classification.
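A single, highly simplified co-training exchange over the two views (titles and abstracts) can be sketched as follows. The keyword scorers stand in for the transformer encoders of the paper, and all documents, classes, and keywords are invented:

```python
# Simplified one-round co-training sketch over two views. Each "view" is
# a trivial keyword scorer (a stand-in for a BERT-based classifier); the
# more confident view's pseudo-label is kept for each document.
def make_scorer(keywords_by_class):
    def predict(text):
        scores = {c: sum(k in text.lower() for k in kws)
                  for c, kws in keywords_by_class.items()}
        best = max(scores, key=scores.get)
        return best, scores[best]  # (label, confidence)
    return predict

view_titles = make_scorer({"ai": ["neural", "learning"], "bio": ["genome", "cell"]})
view_abstracts = make_scorer({"ai": ["model", "training"], "bio": ["protein", "organism"]})

unlabeled = [
    ("Neural ranking", "Training a model on click data"),
    ("Genome atlas", "Mapping protein families per organism"),
]

labeled = []  # pseudo-labeled pool built from confident predictions
for title, abstract in unlabeled:
    label1, conf1 = view_titles(title)
    label2, conf2 = view_abstracts(abstract)
    label = label1 if conf1 >= conf2 else label2
    labeled.append((title, abstract, label))

print([lbl for _, _, lbl in labeled])
```

In full co-training, each view's confident pseudo-labels would be fed back to retrain the other view over several rounds; this sketch shows only one exchange.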
  • Article
    Open Access
    AGUTER a platform for Automated Generation of User Acceptance TEsts from Requirements Specifications
    (2021) Antonelli, Leandro; Camilleri, Guy; Torres, Diego; Zarate, Pascale
    This article proposes a strategy to make the testing step easier by generating User Acceptance Tests (UATs) automatically from requirements artifacts. [Design/methodology/approach] The strategy is based on two modeling frameworks: scenarios and the Task/Method paradigm. A scenario is a requirements artifact used to describe business processes and requirements, and the Task/Method paradigm is a modeling paradigm from the Artificial Intelligence field. The proposed strategy comprises four steps. In the first step, scenarios are described through a semantic wiki website. The scenarios are then automatically translated into a task/method model (step two). In the third step, the task/method model obtained in step two is executed to produce and store all possible achievements of tasks, and thus scenarios; the stored achievements are saved in a data structure called an execution tree. Finally, from this execution tree (step four), the user acceptance tests are generated. [Findings] The feasibility of this strategy is shown through a case study from the field of agricultural production systems. [Originality/value] Test design approaches generally deal with a small number of variables describing one specific situation, where a decision table or workflow is used to design tests. Our approach can deal with many variables because we rely on scenarios that can be composed to obtain a tree with all the testing paths that can arise from their description.
  • Article
    Open Access
    Alfadatizando 2.0 applied to data visualization at high school level and for digital humanities: empowering digital citizens
    (2025) Lliteras, Alejandra Beatriz; Artopoulos, Alejandra; Ger, Julián; Boza, Gerónimo
    Given the great evolution and transformation of digital technologies and their penetration into different aspects of daily life, there is a need to provide equal possibilities and equal rights of access to them, which implies training digital citizens. Digital technology, together with some methods from the social sciences and computer science for visualizing data, supports what is known as the digital humanities. These add computational thinking and create new kinds of jobs, skills and specific knowledge. There are currently several efforts to teach them at the higher education level; however, there is little evidence of their presence at the high school level. With the aim of promoting the formation of digital citizens by considering aspects of computational thinking through data visualization in the digital humanities, this paper first analyzes articles that consider the teaching of the digital humanities in order to identify the visualization methods applied. A platform for creating educational activities with some of the surveyed methods is presented, a case study based on a curricular design is proposed, and a proof of concept is performed. The results of the survey analysis show, on the one hand, the use of certain data visualization methods in the teaching of the digital humanities at the high school level and, on the other, the feasibility of using the proposed platform for the defined case study and its viability for developing digital skills and computational thinking in students. This is considered a contribution to the empowerment of digital citizens from the high school level.
  • Article
    Open Access
    Rethinking Breath in VR: A Performative Approach to Enhance User Flow with Bio-sensing Wearable Interfaces
    (2025) Duarte, Yesica; Rodríguez, Andrés
    A significant number of Virtual Reality (VR) applications focus on mindfulness, using biosensor technologies (e.g., ECG) to provide real-time feedback on users' physiological states. However, measuring data from the human body is complex. Commercial devices often lack precision, while medical-grade sensors require controlled environments, which can lead to disruptions and break immersion, affecting the flow of VR experiences and thus complicating the evaluation of mindfulness. Pinch To Awaken XR is a VR art game that utilizes wearable interfaces to measure breathing from a holistic perspective. Showcased as an Extended Reality (XR) performance, it uses first-person research methods by embodying both the researcher and performer, placing the body as the central source of inquiry. This case study reveals that integrating a performative approach can enhance flow and engagement in mindfulness VR experiences, while also offering a novel, body-centered approach for evaluating biosensing interfaces in HCI user studies.
  • Conference paper
    Open Access
    Teaching digital humanities at high school level
    (2025) Lliteras, Alejandra Beatriz; Artopoulos, Alejandro Martín
    The incorporation of digital technology in educational environments has usually been approached from a didactic and/or pedagogical perspective. However, the digital transformation driven by smartphones, platforms and generative artificial intelligence showed that curricular and/or epistemological aspects also had to be considered, such as the incorporation of programming and computational thinking in basic education. Within this line of research, it is particularly productive to inquire into the incorporation of the digital humanities at the high school level. The objective of this work is to survey, through a first literature review of academic sources, the implementation of educational experiences in the digital humanities at the high school level, and to present global and local initiatives on innovative aspects of digital education. The analysis describes the fields in which relevant teaching experiences were found and the digital methods used, as well as the emergence of new curricular areas such as Digital Citizenship. The conclusions point out opportunities for developing knowledge areas, topics, applications and didactic materials to continue developing the teaching of humanistic or digital-humanistic computational knowledge at the high school level.
  • Conference paper
    Open Access
    Adversarial image generation using genetic algorithms with black-box technique
    (2023) Pérez, Gabriela Alejandra; Pons, Claudia Fabiana
    Convolutional neural networks are a technique that has demonstrated great success in computer vision tasks, such as image classification and object detection. Like any machine learning model, they have limitations and vulnerabilities that must be carefully considered for safe and effective use. One of the main limitations lies in their complexity and the difficulty of interpreting their internal workings, which can be exploited for malicious purposes. The goal of these attacks is to make deliberate changes to the input data in order to deceive the model and cause it to make incorrect decisions. These attacks are known as adversarial attacks. This work focuses on the generation of adversarial images using genetic algorithms for a convolutional neural network trained on the MNIST dataset. Several strategies are employed, including targeted and untargeted attacks, as well as the presentation of interpretable and non-interpretable images that are unrecognizable to humans but are misidentified and confidently classified by the network. The experiment demonstrates the ability to generate adversarial images in a relatively short time, highlighting the vulnerability of neural networks and the ease with which they can be deceived. These results underscore the importance of developing more secure and reliable artificial intelligence systems capable of resisting such attacks.
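The black-box attack loop described above can be sketched with a toy genetic algorithm. The 16-pixel binary "images" and the linear stub classifier below are stand-ins for MNIST digits and the trained CNN; only the evolutionary loop itself reflects the technique:

```python
import random

random.seed(0)

# Toy stand-in classifier: scores a 16-pixel binary "image"; anything
# with score > 4 is labeled 1, otherwise 0. A real attack would query a
# trained CNN as a black box instead.
WEIGHTS = [1, -1, 1, 1, -1, 1, 1, -1, 1, 1, -1, 1, 1, -1, 1, 1]

def classify(img):
    return 1 if sum(w * p for w, p in zip(WEIGHTS, img)) > 4 else 0

def fitness(img):
    # Untargeted attack objective: push images toward the label 1 region.
    return sum(w * p for w, p in zip(WEIGHTS, img))

def mutate(img, rate=0.1):
    # Flip each pixel with a small probability.
    return [1 - p if random.random() < rate else p for p in img]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

# Evolve a population of random images toward the target label.
population = [[random.randint(0, 1) for _ in range(16)] for _ in range(20)]
for _ in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # elitism: keep the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

best = max(population, key=fitness)
print(classify(best))
```

With elitism preserving the best candidates each generation, the evolved image reliably lands in the classifier's high-confidence region, mirroring how the paper's GA fools the CNN without access to its internals.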