
Optimize to Open: An Exploratory-Experimental Approach to the Computational Optimization of Open Large Language Models for Educational Access

Open Access | Mar 2026

Figures & Tables

Table 1

Complementary strategies.

TECHNIQUE | EDUCATIONAL PURPOSE
Unstructured Pruning | Reduce the number of redundant parameters by 20%, allowing models to run on on-premises servers of medium capacity (Das, Ma & Shen 2024).
RAG | Ground generated responses in verified open educational sources, ensuring transparency and relevance (Bevara et al. 2025).
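Table 1's pruning figure can be illustrated with a minimal sketch of magnitude-based unstructured pruning, assuming a layer's weights are held in a NumPy array (a real deployment would apply e.g. `torch.nn.utils.prune` to the model's modules; the toy weights below are illustrative only):

```python
import numpy as np

def prune_unstructured(weights: np.ndarray, sparsity: float = 0.2) -> np.ndarray:
    """Zero out the `sparsity` fraction of smallest-magnitude weights."""
    # Threshold below which weights are considered redundant.
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) > threshold
    return weights * mask

w = np.arange(1.0, 11.0)        # toy weight vector 1.0 .. 10.0
pruned = prune_unstructured(w)  # zeroes the two smallest weights (20%)
```

The zeroed entries can then be stored sparsely or skipped at inference time, which is where the memory and latency savings reported in the paper would come from.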
Table 2

Environmental characteristics.

SIMULATED ENVIRONMENT | REPRESENTATIVE INFRASTRUCTURE
Public University | Server with 64 GB RAM + 6 GB VRAM GPU.
Community Center for Digital Literacy | Mid-range laptop with 16 GB RAM, no dedicated GPU.
Self-Organized Rural Classroom | Basic computer with 8 GB RAM and an unstable internet connection.
Figure 1

Reduction in response time after unstructured pruning.

Figure 2

Reduction in resource consumption after model optimization.

Figure 3

Improvement in educational response quality with RAG integration.
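Figure 3 attributes the quality gain to RAG integration. A minimal sketch of the retrieval step, ranking candidate documents by cosine similarity over word-count vectors and prepending the best match as grounding context (the documents and query below are placeholders, not the study's corpus):

```python
import math
import re
from collections import Counter

def tokenize(text: str) -> Counter:
    # Lowercase word-count vector; punctuation is stripped.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document most similar to the query."""
    qv = tokenize(query)
    return max(docs, key=lambda d: cosine(qv, tokenize(d)))

docs = [
    "Photosynthesis converts light energy into chemical energy in plants.",
    "The French Revolution began in 1789.",
]
question = "How do plants use light energy?"
context = retrieve(question, docs)
prompt = f"Context: {context}\nQuestion: {question}"
```

Production RAG pipelines replace the word-count vectors with dense embeddings and a vector index, but the grounding pattern — retrieve, then condition the model on the retrieved text — is the same.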

Figure 4

Overall impact of OLLM optimization on open education metrics.

Table 3

Preliminary results.

DIMENSION | OBSERVED IMPACT
Response Style Learning | After the initial adjustment, 100% of the generated responses consistently followed the template ‘Definition: …’.
Adaptability to New Content | A progressive increase in response relevance was observed, directly related to the diversity of documents uploaded by users.
Computational Sustainability | Each incremental fine-tuning session completed in under 10 minutes on accessible GPUs (e.g., a Colab T4).
Table 4

Comparative performance of OLLMs across optimization metrics.

MODEL | RESPONSE TIME REDUCTION (%) | RAM/VRAM SAVINGS (%) | EDUCATIONAL PRECISION (%) | THROUGHPUT GAIN (%)
Falcon | 11.0 | 19.0 | 95.0 | 30
Bloom | 10.5 | 20.0 | 94.5 | 28
GPT-NeoX | 10.3 | 19.3 | 95.6 | 33
T5 | 11.2 | 19.8 | 94.8 | 29
Flan-T5 | 11.5 | 20.2 | 95.4 | 31
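As a quick summary of Table 4, the per-metric column means can be computed directly from the rows above (the values are copied verbatim from the table):

```python
# Table 4 rows: (time reduction %, RAM/VRAM savings %, precision %, throughput gain %)
table4 = {
    "Falcon":   (11.0, 19.0, 95.0, 30),
    "Bloom":    (10.5, 20.0, 94.5, 28),
    "GPT-NeoX": (10.3, 19.3, 95.6, 33),
    "T5":       (11.2, 19.8, 94.8, 29),
    "Flan-T5":  (11.5, 20.2, 95.4, 31),
}

def column_means(rows: dict) -> list[float]:
    """Mean of each metric column across all models."""
    cols = list(zip(*rows.values()))
    return [round(sum(c) / len(c), 2) for c in cols]

means = column_means(table4)  # [10.9, 19.66, 95.06, 30.2]
```

Across the five models the gains are tightly clustered (roughly 10–12% faster responses, ~20% memory savings, ~95% precision), which is consistent with the paper's claim that the optimization pipeline transfers across OLLM architectures.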
DOI: https://doi.org/10.5334/jime.1051 | Journal eISSN: 1365-893X
Language: English
Submitted on: Apr 30, 2025 | Accepted on: Oct 24, 2025 | Published on: Mar 20, 2026
Published by: Ubiquity Press
In partnership with: Paradigm Publishing Services
Publication frequency: 1 issue per year

© 2026 Iván Miguel García-López, José-Martín Molina-Espinosa, María-Soledad Ramírez-Montoya, published by Ubiquity Press
This work is licensed under the Creative Commons Attribution 4.0 License.