Abstract
The rapid integration of artificial intelligence into open education has intensified global demands for equitable access, sustainable infrastructure, and technological inclusion. This study evaluated the feasibility and educational impact of optimizing Open Large Language Models (OLLMs) for deployment in low-resource learning environments. Five open-source models (Falcon, Bloom, GPT-NeoX, T5, and Flan-T5) were optimized using an exploratory-experimental approach combining unstructured pruning with Retrieval-Augmented Generation (RAG). The intervention was tested in three simulated educational infrastructures (a public university, a community digital center, and a rural classroom) and analyzed using quantitative metrics of system efficiency and educational output. The findings revealed: (a) a reduction of up to 11% in response time across all models; (b) a decrease of approximately 20% in RAM and VRAM usage; (c) a 1.4% improvement in the educational relevance of responses; and (d) a 33% increase in query throughput, indicating greater scalability in open education contexts. These results offer practical and ethical guidance for educators, policymakers, and technology developers by demonstrating how optimized OLLMs can become key enablers of open, inclusive, and sustainable learning ecosystems worldwide.
