<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0">
    <channel>
        <title>Applied Computer Systems Feed</title>
        <link>https://sciendo.com/journal/ACSS</link>
        <description>Sciendo RSS Feed for Applied Computer Systems</description>
        <lastBuildDate>Sun, 10 May 2026 14:14:00 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <image>
            <title>Applied Computer Systems Feed</title>
            <url>https://sciendo-parsed.s3.eu-central-1.amazonaws.com/64707a0271e4585e08a9e0de/cover-image.jpg</url>
            <link>https://sciendo.com/journal/ACSS</link>
        </image>
        <copyright>All rights reserved 2026, Riga Technical University</copyright>
        <item>
            <title><![CDATA[Physics-Inspired Hamiltonian Particle Swarm Optimisation for Multi-Agent Movement]]></title>
            <link>https://sciendo.com/article/10.2478/acss-2026-0006</link>
            <guid>https://sciendo.com/article/10.2478/acss-2026-0006</guid>
            <pubDate>Sun, 26 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[

This study puts forward a Hamiltonian-inspired modification of the Particle Swarm Optimisation (PSO) algorithm. Since the standard PSO procedure does not take into account physical properties such as particle masses, geometrical sizes, and energy consumption, it is not fully applicable as a navigational and coordination tool in real-world environments. In particular, the generic PSO mechanism cannot prevent particles from colliding. To address these issues, we propose a new PSO formulation based on a Hamiltonian interpretation. This approach brings together the kinetic and potential energy terms with the forces acting on the agents and allows the derivation of agents’ velocities and positions. The potential energy represents attraction toward both personal and global best positions in a spring-like manner. As a component of the conservative forces derived from the potential energy term, we introduce a special repulsive potential function to prevent collisions among agents. The kinetic energy, derived via agent mass and momentum, determines the movement dynamics. To model energy loss, we incorporate a Rayleigh dissipation term that accounts for non-conservative forces. According to the proposed model, agent displacements are computed using the obtained velocity and momentum vectors. Additionally, we introduce individual and swarm energy efficiency metrics to study the agents’ motion in a 2D testing environment. The presented approach enables stable, coordinated, and collision-free multi-agent motion within a physics-inspired optimisation framework.
]]></description>
            <category>ARTICLE</category>
        </item>
        <item>
            <title><![CDATA[User as the System Core: Evolution of the User-Aligned Systems Engineering Framework]]></title>
            <link>https://sciendo.com/article/10.2478/acss-2026-0005</link>
            <guid>https://sciendo.com/article/10.2478/acss-2026-0005</guid>
            <pubDate>Tue, 21 Apr 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[

Efficiency, convenience, and safety are the standard promises of modern engineered systems. However, the execution often fails to match this vision, leading to a disconnect where users become skeptical, frustrated, or simply unwilling to adopt the technology in their daily lives. To bridge this gap between technical potential and user reality, this paper proposes the User-Aligned Systems Engineering Framework (UASEF). While the framework is deeply rooted in the complexities of smart home research, its core principles are designed to be universally applicable, reorienting the engineering process to place the human element at the centre. UASEF mandates a structure built around a central core of security and trust by design, progressing through six iterative phases and moving beyond basic functionality to prioritize deep stakeholder analysis, transparent architecture, and critical factors like cost, accessibility, and embedded security. By deconstructing specific friction points such as usability barriers and privacy concerns, this study demonstrates that the design principles required for a smart home are actually vital for any complex system. Ultimately, UASEF provides developers with actionable guidance to create technology that is not merely functional, but inherently secure, intuitive, and capable of earning long-term user confidence.
]]></description>
            <category>ARTICLE</category>
        </item>
        <item>
            <title><![CDATA[Sensitivity-Oriented YOLOv11 for Robust Multi-Label Lesion Detection in Chest X-rays]]></title>
            <link>https://sciendo.com/article/10.2478/acss-2026-0004</link>
            <guid>https://sciendo.com/article/10.2478/acss-2026-0004</guid>
            <pubDate>Mon, 23 Mar 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[

Chest X-ray lesion detection remains challenging due to severe class imbalance, subtle lesion appearance, and the risk of over-optimistic evaluation caused by improper data splitting. In this study, we propose a sensitivity-oriented detection framework based on YOLOv11 for robust chest X-ray screening under clinically realistic conditions. The proposed approach integrates patient-wise data partitioning, enhanced data augmentation, and prediction fusion to improve generalization while mitigating data leakage. Experiments are conducted on the VinDr-CXR dataset using a strict patient-level split to ensure full separation between training and validation sets. A series of internal fine-tuning scenarios is designed to analyse the trade-offs among precision, recall, and localization accuracy. Based on internal validation, the medium-scale YOLOv11-m configuration (denoted as M3) is selected as the reference model, as it provides the most stable balance between sensitivity and localization performance. Under rigorous evaluation, M3 achieves a precision of 0.431, a recall of 0.416, an mAP@0.5 of 0.387, and an mAP@0.5:0.95 of 0.193. Compared with representative baselines, M3 demonstrates improved robustness under patient-wise evaluation, outperforming transformer-based DETR by a large margin (mAP@0.5: 0.387 vs. 0.232) and achieving performance comparable to YOLOv7 while exhibiting substantially higher sensitivity to small and diffuse lesions. Further comparison with recent studies shows that the proposed method achieves higher overall mAP@0.5 (0.387 vs. 0.362–0.378) while improving detection performance on clinically challenging abnormality classes. These results indicate that the proposed YOLOv11-based framework provides a reliable and clinically meaningful baseline for chest X-ray lesion screening and future methodological advancements.
]]></description>
            <category>ARTICLE</category>
        </item>
        <item>
            <title><![CDATA[A Physics-Stabilized Self-Updating Digital Twin Framework Using Physics-Informed Neural Networks for Thermal Field Prediction]]></title>
            <link>https://sciendo.com/article/10.2478/acss-2026-0003</link>
            <guid>https://sciendo.com/article/10.2478/acss-2026-0003</guid>
            <pubDate>Tue, 17 Mar 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[

Digital twins increasingly rely on autonomous self-updating mechanisms to remain synchronized with physical systems; however, repeated self-updating can lead to error accumulation, numerical instability, and progressive loss of physical consistency when models iteratively learn from their own predictions. To address this challenge, the study proposes a physics-stabilized self-updating digital twin framework based on Physics-Informed Neural Networks (PINNs) and demonstrates its core principles on a canonical thermal field prediction problem. The framework integrates adaptive physics-loss weighting, a physics-only stabilization stage, and second-derivative smoothness regularization within the self-updating loop, enabling controlled data assimilation while explicitly enforcing governing equation constraints. Numerical results show a monotonic reduction in root mean square error (RMSE) from approximately 1 × 10⁻³ in the first update cycle to 8 × 10⁻⁵ after four update cycles, accompanied by effective suppression of model drift and a substantial reduction in partial differential equation (PDE) residuals compared to a naïve self-updating strategy. Furthermore, the analysis reveals the existence of an update saturation point, beyond which additional autonomous updates yield diminishing accuracy improvements, providing a physically motivated stopping criterion for autonomous updating. By establishing a stable and physically interpretable self-updating architecture, the study provides a foundational framework for the development of reliable digital twins, with clear potential for extension to more complex thermal and multiphysics systems.
]]></description>
            <category>ARTICLE</category>
        </item>
        <item>
            <title><![CDATA[Investigation of Electronic Document Management System Usability across User Groups]]></title>
            <link>https://sciendo.com/article/10.2478/acss-2026-0002</link>
            <guid>https://sciendo.com/article/10.2478/acss-2026-0002</guid>
            <pubDate>Sun, 22 Feb 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[

The growing reliance on Electronic Document Management Systems (EDMSs) in public institutions necessitates an improved understanding of usability across diverse user groups. This study evaluates the perceived usability of a widely deployed commercial EDMS used in a public university context by examining differences across gender, age, personnel type (academic/administrative), education level, and prior EDMS experience. A mixed-methods design integrates objective task-based performance measures (completion, time, and perceived difficulty) and the System Usability Scale (SUS-TR) with qualitative feedback collected via open-ended questions. Overall, users valued the system’s contribution to streamlining document workflows; however, they reported notable usability barriers, particularly related to complexity and navigation. No significant differences were observed by gender, age category, or prior EDMS experience, whereas administrative staff and participants with higher education levels reported higher SUS scores. The mean SUS score (53.25) indicates below-average perceived usability when interpreted against established benchmarks, suggesting the need for targeted usability improvements. Qualitative feedback further highlights the need to simplify interaction flows – particularly around document search, dispatch, and leave management – and to enhance training and support resources to address recurring usability issues and reduce user errors. The study offers actionable recommendations for improving EDMS usability in public institutions and underscores the importance of user-centred design in digital transformation initiatives.
]]></description>
            <category>ARTICLE</category>
        </item>
        <item>
            <title><![CDATA[Beyond Accuracy: Cross-Linguistic Equity and Socio-Technical Dimensions of Large Language Models]]></title>
            <link>https://sciendo.com/article/10.2478/acss-2026-0001</link>
            <guid>https://sciendo.com/article/10.2478/acss-2026-0001</guid>
            <pubDate>Wed, 18 Feb 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[

Artificial intelligence (AI) and AI-based systems are rapidly gaining popularity across all areas of daily life. Among these systems, large language models (LLMs), which probabilistically model language to understand and generate text, stand at the forefront. Because language is their primary focus, the ability of LLMs to generate reliable results is of significant technical and social importance. As language diversity increases, the ability of LLMs to produce stable and consistent results trends downwards. This decrease is closely related to the size of the model, the scope of the training data, and the prompt technique used in response generation. To this end, a study was conducted to measure the success of LLMs in different languages. Four LLMs were examined, three of which were open-source (DeepSeek-Coder-6.7B-Instruct, Qwen2.5-Coder-7B-Instruct, Llama-3.1-8B-Instruct) and one closed-source (GPT-5). These models were evaluated using the HumanEval-XL dataset across seven natural languages with different data sources and usage prevalence. Additionally, the effect on the results of the prompt technique used and of the human development index (HDI) values of the countries where the languages are spoken was also analysed. Results show that as LLMs grow, performance differences between languages decrease. It has also been observed that whether a model is open-source or closed-source has a significant impact on performance. Among the open-source LLMs, DeepSeek-Coder-6.7B-Instruct's accuracy rates range from 37 % to 60 %, while Qwen2.5-Coder-7B-Instruct and Llama-3.1-8B-Instruct performed more consistently in the 95–99 % range. GPT-5, a closed-source LLM, demonstrated balanced accuracy across all languages. The findings raise notable implications for ethics, the quantity of linguistic data, and equality of access to technology.
The results also clearly demonstrate the relationship between multilingual accuracy, language prevalence, and prompt techniques. In this way, the study offers a clearer and more comprehensive understanding of the issues surrounding linguistic justice and the generalization of LLMs in the field of AI.
]]></description>
            <category>ARTICLE</category>
        </item>
        <item>
            <title><![CDATA[CrackNet-VGG: A Deep Learning Framework for Automated Detection of Surface Cracks in Concrete Structures]]></title>
            <link>https://sciendo.com/article/10.2478/acss-2025-0020</link>
            <guid>https://sciendo.com/article/10.2478/acss-2025-0020</guid>
            <pubDate>Tue, 23 Dec 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[

The reliable and timely detection of cracks in concrete structures is essential for maintaining the safety, functionality, and longevity of civil infrastructure, including buildings, bridges, highways, and dams. Structural cracks can emerge due to multiple factors such as material fatigue, environmental stressors, seismic activity, and thermal expansion, necessitating accurate and efficient monitoring systems. Traditional inspection techniques, including manual visual inspection and non-destructive testing, are labour-intensive, prone to subjectivity, and often lack scalability. To address these limitations, the research presents CrackNet-VGG, a deep learning-based framework that leverages the VGG16 convolutional neural network architecture for automatic binary classification of surface cracks in concrete images. The proposed model applies transfer learning by fine-tuning the VGG16 architecture on concrete surface datasets, utilising its convolutional layers for robust feature extraction and its fully connected layers for final binary classification. The model is trained and evaluated on publicly available benchmark datasets, categorised into two classes: cracked and non-cracked surfaces. Experimental results demonstrate that CrackNet-VGG achieves a high classification accuracy of 96.07 %, with 95.57 % precision and 95.31 % recall, surpassing several baseline deep learning models in terms of accuracy. These results validate the applicability of CrackNet-VGG as an effective solution for automated concrete crack detection in real-world scenarios.
]]></description>
            <category>ARTICLE</category>
        </item>
        <item>
            <title><![CDATA[Scenario Modelling and Impact Estimation of a Local Pollutant on the Environment]]></title>
            <link>https://sciendo.com/article/10.2478/acss-2025-0021</link>
            <guid>https://sciendo.com/article/10.2478/acss-2025-0021</guid>
            <pubDate>Tue, 23 Dec 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[

The growing problem of air pollution by fine particulate matter (PM2.5) from local sources, such as boiler houses or small industrial facilities, requires effective and accessible assessment tools. A significant gap exists between complex, resource-intensive dispersion models used in research and the practical needs of engineers, ecologists, and regulatory bodies who require instruments for rapid operational analysis. This problem is particularly acute in regions like Ukraine, where access to real-time, high-resolution environmental data is limited, and regulatory practices often rely on legacy methodologies. The paper describes the development and testing of a desktop software application with a graphical user interface (GUI) designed for scenario modelling (“what if” analysis) and quantitative assessment of air pollution levels from a local source. The core of the software tool is based on an adapted Gaussian plume analytical model, which calculates pollutant dispersion considering meteorological conditions and source parameters. The system integrates a developed method for integral impact assessment, categorising the pollution level based on calculated concentrations. The developed software allows the user to interactively input the constructive (stack height, diameter) and operational (emission rate) parameters of a pollution source, as well as current meteorological conditions. The system provides an instantaneous calculation of the expected PM2.5 concentration at a given point and classifies the impact: “Low”, “Moderate”, “High”, or “Very High”. The developed tool brings practical value, supporting the decision-making process. It provides a means for the operational monitoring of environmental impact and the preliminary planning of measures to reduce the ecological load from local pollution sources, making complex analysis accessible to a wider range of specialists, especially in data-scarce environments.
]]></description>
            <category>ARTICLE</category>
        </item>
        <item>
            <title><![CDATA[Comparative Analysis of Pipeline Architecture, Resource Deployment, and Configuration for Cassandra API-Compatible Databases: ScyllaDB vs. Cassandra]]></title>
            <link>https://sciendo.com/article/10.2478/acss-2025-0019</link>
            <guid>https://sciendo.com/article/10.2478/acss-2025-0019</guid>
            <pubDate>Fri, 12 Dec 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[

This work builds a benchmarking pipeline for resource deployment, configuration input, and the evaluation of Cassandra and ScyllaDB performance. It investigates performance under different scenarios, workload types, and internal structures. Insights and future improvements are provided. Comparing the two databases revealed several notable differences between Cassandra and ScyllaDB. ScyllaDB demonstrated superior performance in production-ready materialized views, global secondary indices, lightweight transactions, change data capture, row-based data cache, and adaptive behaviour to real-world workloads. On the other hand, Cassandra exhibited advantages, such as a well-established size-tiered compaction approach and the ability to leverage existing Java Virtual Machine (JVM) tuning techniques.
]]></description>
            <category>ARTICLE</category>
        </item>
        <item>
            <title><![CDATA[Genetic Algorithm for Approximation of Equilibrium Strategies within Finite Space of Actions in Bimatrix Games for Improving Telecommunication Interactions]]></title>
            <link>https://sciendo.com/article/10.2478/acss-2025-0018</link>
            <guid>https://sciendo.com/article/10.2478/acss-2025-0018</guid>
            <pubDate>Sat, 06 Dec 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[

To improve the step-wise process of network interaction, a genetic algorithm is suggested for finding more stable solutions in bimatrix games. The algorithm is based on an approach of successive approximation to an equilibrium situation within a finite space of actions whose size directly depends on the number of game repetitions (network interactions). The algorithm has seven input parameters: the population size, the maximum number of generations, the number of generations for the early stop, the mutation rate, the number of bits per pure strategy, the maximum number of network interactions, and the number of the best chromosomes selected. The algorithm is more efficient for fewer network interactions, when the network peers obtain more stable and consistent strategies that encourage interaction itself rather than withholding information due to instability. The equilibrium concept is strengthened by introducing a criterion of mutual profitability, which reflects good practice and mutual respect in network interactions (particularly in P2P file-sharing networks). This criterion, expressed as a fitness value equal to the negative maximum of potential losses, can be varied to alternatively evaluate the consequence of swerving from a given mixed strategy.
]]></description>
            <category>ARTICLE</category>
        </item>
        <item>
            <title><![CDATA[Whisperer: A Real-Time Prompting System with Multilayered Semantic Matching and Adaptive Speech Synthesis]]></title>
            <link>https://sciendo.com/article/10.2478/acss-2025-0016</link>
            <guid>https://sciendo.com/article/10.2478/acss-2025-0016</guid>
            <pubDate>Wed, 26 Nov 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[

Whisperer is introduced as an intelligent, real-time prompting system that aims to improve the flow and naturalness of speaking in public and on camera. It is different from regular teleprompters because it does not just follow a script. Instead, it uses Google Cloud’s low-latency speech-to-text (STT) and text-to-speech (TTS) services to sync spoken content with a prepared script in real time. The system can handle synonyms, homophones, numeric variations, and spontaneous improvisations because it uses linguistic models such as CMUDict for phoneme-level alignment, FastText for semantic similarity, and BERT for contextual understanding. Whisperer also has adaptive TTS feedback that matches the speaker’s speed. This includes changes made in real time based on how long the speaker pauses and how fast they speak. Testing shows that the speakers’ fluency and consistency of delivery have both improved.
]]></description>
            <category>ARTICLE</category>
        </item>
        <item>
            <title><![CDATA[A Comprehensive Vision-Based Gait Data Collection Framework with a Systematic Multi-Camera Placement Strategy]]></title>
            <link>https://sciendo.com/article/10.2478/acss-2025-0017</link>
            <guid>https://sciendo.com/article/10.2478/acss-2025-0017</guid>
            <pubDate>Wed, 26 Nov 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[

Systematic collection of gait data directly affects the reliability of gait analysis. Therefore, the first priority in gait analysis is to ensure that the collected data are of high quality and reliable. Optimising and standardising the data collection process is a critical requirement that increases the success of the analysis. This study proposes a systematic multi-camera placement strategy and data processing pipeline for collecting gait data in real time with RGB cameras for gait recognition and analysis applications. The proposed method provides an end-to-end framework from the physical setup of the data collection environment to the data processing steps. The camera placement strategy aims to maximise the visibility of all body parts by capturing the participant’s body from different directions during walking. The results of three different methods used for silhouette extraction from the acquired videos were compared. Furthermore, a video-based approach was used to calculate participants’ walking speeds. The theoretical and practical information provided on the data collection and analysis process is detailed to guide future studies. In this respect, the paper aims to serve as a guide for researchers working in the related field.
]]></description>
            <category>ARTICLE</category>
        </item>
        <item>
            <title><![CDATA[Accelerated Planar Development of Convex Free-form Mesh Patches Using a Variable Step-size Energy Dissipation Approach]]></title>
            <link>https://sciendo.com/article/10.2478/acss-2025-0015</link>
            <guid>https://sciendo.com/article/10.2478/acss-2025-0015</guid>
            <pubDate>Fri, 07 Nov 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[

Free-form complex surfaces are prevalent in modern graphic applications. With the increasing prevalence of complex 3D surfaces enabled by advances in range scanning and 3D printing technologies, minimising parameterization times for large meshes has become crucial. This paper proposes an efficient approach for the planar development of convex free-form mesh patches using an improved energy-based technique with a variable step-size algorithm. Building upon the energy model of Wang et al., our study addresses the limitations of conventional energy dissipation algorithms, which employ fixed step sizes. The proposed variable step-size method, particularly suitable for convex or disk-shaped mesh surfaces, dynamically adjusts steps, significantly reducing energy dissipation iterations. Leveraging our previous geometric flattening method, we further enhance planar surface development using an advanced mass-spring-based approach. Here, we show that our method accelerates the mechanical flattening process while maintaining high accuracy, achieving a shape error of 0.400 and an area error of 0.147 after 36 iterations for the Surf1 patch, reducing the required iterations by nearly half compared to the fixed step-size method. This study contributes to advancing the field of surface parameterization and flattening, with potential applications in various industries.
]]></description>
            <category>ARTICLE</category>
        </item>
        <item>
            <title><![CDATA[PathGuard: Dynamic Large Vehicle Detection and Real-time Alerts on Narrow Roads Using Mobile Sensors]]></title>
            <link>https://sciendo.com/article/10.2478/acss-2025-0014</link>
            <guid>https://sciendo.com/article/10.2478/acss-2025-0014</guid>
            <pubDate>Mon, 27 Oct 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[

On a narrow road, an accident is hard to avoid even for a responsible driver. When vehicles are stuck in traffic, driving on a single lane is stressful and time-consuming. Narrow roads pose unique challenges for small vehicles, especially in identifying large vehicles early enough to reduce the likelihood of an accident. The study examines these issues and presents an inventive Intelligent Transportation System (ITS) that aims to enhance safety on narrow roads by integrating mobile sensors. Smartphones are now used by almost everyone, as their prices have fallen. The study evaluates the effectiveness of different machine learning models for classifying vehicle type using accelerometer and gyroscope sensors. The results reveal that the Random Forest model is the most effective, with a mean accuracy of 99.78 %. Moreover, the trained Random Forest model has been combined with an originally developed warning algorithm that integrates geofencing methods for drawing polygons around narrow roads with location data from smartphones. To summarise, this study adds to the development of safety systems in transport and offers useful ideas for developing and implementing real-time safety applications for narrow roads.
]]></description>
            <category>ARTICLE</category>
        </item>
        <item>
            <title><![CDATA[Analysing Software Quality of AI-Translated Code: A Comparative Study of Large Language Models Using Static Analysis]]></title>
            <link>https://sciendo.com/article/10.2478/acss-2025-0013</link>
            <guid>https://sciendo.com/article/10.2478/acss-2025-0013</guid>
            <pubDate>Thu, 23 Oct 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[

Context: Source code translation enables cross-platform compatibility, code reusability, legacy system migration, and developer collaboration. Numerous state-of-the-art techniques have emerged to address the demand for efficient and accurate translation methodologies.
Objective: This study compares code translation capabilities of Large Language Models (LLMs), specifically DeepSeek R1 and ChatGPT 4.1, evaluating their proficiency in translating code between programming languages. We systematically assess model outputs through quantitative and qualitative measures, focusing on translation accuracy, execution efficiency, and coding standard conformity. By examining each model’s strengths and limitations, this work provides insights into their applicability for various translation scenarios and contributes to discourse on LLM potential in software engineering.
Method: We evaluated translation quality from ChatGPT 4.1 and DeepSeek R1 using SonarQube Analyzer to identify strengths and weaknesses through comprehensive software metrics including translation accuracy, code quality, and clean code attributes. SonarQube’s framework enables objective quantification of maintainability, reliability, technical debt, and code smells, which are critical factors in software quality measurement. The protocol involved randomly sampling 500 code instances from 1695 Java programming problems. Java samples were translated to Python by both models, then analysed quantitatively using SonarQube metrics to evaluate adherence to software engineering best practices.
Results: This comparative analysis reveals the capabilities and limitations of state-of-the-art LLM-based translation systems, providing developers, researchers, and practitioners with actionable guidance for model selection. Identified gaps highlight future research directions in automated code translation. Results demonstrate that DeepSeek R1 consistently generates superior software quality compared to ChatGPT 4.1 across SonarQube metrics.
]]></description>
            <category>ARTICLE</category>
        </item>
        <item>
            <title><![CDATA[User Perceptions of a Home Automation System: A TAM-Based Evaluation]]></title>
            <link>https://sciendo.com/article/10.2478/acss-2025-0012</link>
            <guid>https://sciendo.com/article/10.2478/acss-2025-0012</guid>
            <pubDate>Wed, 09 Jul 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[

The study explores user perceptions of a newly developed home automation system, using the Technology Acceptance Model (TAM) as a theoretical framework to guide the evaluation. The system includes advanced features for security, energy efficiency, and user convenience. It was tested in three different residential environments to evaluate its adaptability. Participants interacted with the system in a controlled laboratory setting and completed a structured survey. The study examines core TAM constructs – perceived ease of use, perceived usefulness, and behavioural intention – alongside additional factors such as trust and privacy concerns. The results highlight strong user satisfaction and intention to adopt the system, while also identifying areas for improvement, particularly in privacy assurance and device control. The findings offer practical recommendations for developers to enhance usability, transparency, and user trust. The research contributes to the smart home literature by demonstrating TAM’s applicability to real-world systems and by providing insights into the practical and user-centred factors that influence technology acceptance.
]]></description>
            <category>ARTICLE</category>
        </item>
        <item>
            <title><![CDATA[A Multi-View Fuzzy Clustering Framework for Semantic-Rich Text Data Using SBERT and Ensemble Learning]]></title>
            <link>https://sciendo.com/article/10.2478/acss-2025-0011</link>
            <guid>https://sciendo.com/article/10.2478/acss-2025-0011</guid>
            <pubDate>Thu, 05 Jun 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[

The increasing volume of text data across diverse fields presents substantial challenges for effective clustering and analysis. Traditional methods often struggle to capture the nuanced semantic relationships and high dimensionality of textual data, particularly in noisy or heterogeneous datasets. This study introduces a refined clustering approach leveraging a multi-view ensemble method that integrates Sentence-BERT embeddings, bootstrap bagging, and Fuzzy C-Means clustering. Multiple SBERT embeddings are initially generated to capture various facets of the text data. These embeddings are then aggregated using bootstrap bagging to enhance representation robustness. Dimensionality reduction, using Uniform Manifold Approximation and Projection (UMAP), facilitates visualization and improves cluster analysis. Finally, Fuzzy C-Means clustering is applied to identify nuanced clusters within the data. Evaluation using established metrics like the Silhouette score (0.5205), Davies-Bouldin Index (0.51), and Calinski-Harabasz Index (1 386 143.83) demonstrates significant performance improvements compared to previous methods. These findings hold potential implications for tasks such as topic modelling, sentiment analysis, and information retrieval across various text-based applications. This approach offers a promising solution for navigating the complexities of high-dimensional text data analysis.
]]></description>
            <category>ARTICLE</category>
        </item>
        <item>
            <title><![CDATA[Benchmarking 24 Large Language Models for Automated Multiple-Choice Question Generation in Latvian]]></title>
            <link>https://sciendo.com/article/10.2478/acss-2025-0010</link>
            <guid>https://sciendo.com/article/10.2478/acss-2025-0010</guid>
            <pubDate>Fri, 30 May 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[

Large Language Models (LLMs) are increasingly being used for a wide range of text generation tasks. This paper investigates the generation of Multiple-Choice Questions in Latvian to assess both the ability of LLMs to generate high-quality questions and answers and, more broadly, their capability to process Latvian, a lower-resourced language that has received relatively little attention in LLM research. This study benchmarks 24 different LLMs, specifically those developed by Anthropic, DeepSeek, OpenAI, Google, Meta, Mistral, and Microsoft. The findings highlight the varying capabilities of these models in handling Latvian, producing grammatically correct, coherent, and meaningful text. The best-performing closed-weights model is claude-3.5-sonnet (by Anthropic), the best-performing open-weights model is deepseek-v3 (by DeepSeek), and the best-performing small open-weights model is open-mistral-nemo (by Mistral).
]]></description>
            <category>ARTICLE</category>
        </item>
        <item>
            <title><![CDATA[An Approach using Skeleton-based Representations and Neural Networks for Yoga Pose Recognition]]></title>
            <link>https://sciendo.com/article/10.2478/acss-2025-0009</link>
            <guid>https://sciendo.com/article/10.2478/acss-2025-0009</guid>
            <pubDate>Sat, 24 May 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[

Amid a rapidly developing era, people inevitably face problems such as stress, depression, pressure, or difficulty sleeping due to frequent overthinking. Yoga is an excellent way to address these problems: it helps adjust thoughts and harmonize body and soul, enabling us to relax the mind, retain positive thoughts, push away negativity, and improve our outlook. However, incorrect yoga practice has caused many unwanted injuries to practitioners. Therefore, we present an approach grounded in skeleton-based feature extraction and neural networks for the recognition of yoga postures, creating a premise for research on a smart virtual trainer that supports home workouts, with input image data converted into skeleton data through MoveNet. Classification models were trained to recognise and classify yoga poses. The models were trained and evaluated on a dataset of 3939 images of 10 yoga poses. Experimental results show that the proposed algorithms are well suited to the classification task, achieving good results on metrics such as Precision, Recall, F1-score, and Accuracy.
]]></description>
            <category>ARTICLE</category>
        </item>
        <item>
            <title><![CDATA[VeinKAN: A Finger Vein Recognition Model Based on Kolmogorov–Arnold Networks]]></title>
            <link>https://sciendo.com/article/10.2478/acss-2025-0008</link>
            <guid>https://sciendo.com/article/10.2478/acss-2025-0008</guid>
            <pubDate>Tue, 20 May 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[

Finger vein recognition has become a secure biometric method known for its robustness against spoofing and environmental variations. Traditional methods, which often rely on Multi-layer Perceptrons (MLPs), face limitations in adaptability stemming from fixed activation functions and linear weight constraints. Kolmogorov–Arnold Networks (KANs) offer a novel architecture that enhances nonlinear learning capabilities to improve performance without significantly increasing computational overhead. This study proposes a KAN-based approach for finger vein recognition and evaluates its performance against established Convolutional Neural Network (CNN) models, including InceptionV3, EfficientNet, and MobileNetV3. Experiments on the FV_USM and SDUMLA-HMT benchmark datasets reveal that the proposed model achieves accuracies of 99.3 % and 96.2 %, respectively, surpassing conventional architectures. Despite a higher parameter count (34.81 million), the proposed model maintains an inference time of 1.0096 ms, which is comparable to InceptionV3 (1.006 ms) and notably faster than EfficientNet_B4 (1.349 ms). With a computational complexity of 539.12 MMAC, it supports the feasibility of biometric systems requiring high accuracy and efficient processing. These findings highlight KANs as a promising advancement in biometric recognition technologies.
]]></description>
            <category>ARTICLE</category>
        </item>
    </channel>
</rss>