Table 1
Complementary strategies.
| TECHNIQUE | EDUCATIONAL PURPOSE |
|---|---|
| Unstructured Pruning | Remove roughly 20% of redundant parameters, allowing models to run on on-premises servers of medium capacity (Das, Ma & Shen 2024). |
| RAG | Ground generated responses in verified open educational sources, providing transparency and relevance (Bevara et al. 2025). |
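The unstructured pruning in Table 1 can be sketched as magnitude-based weight removal. The sketch below is illustrative only: the 20% sparsity level matches the table, but the NumPy implementation and the pruning criterion are assumptions, not the cited work's exact method.

```python
import numpy as np

def prune_unstructured(weights: np.ndarray, sparsity: float = 0.20) -> np.ndarray:
    """Zero out the smallest-magnitude weights (unstructured pruning sketch)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to remove
    if k == 0:
        return weights.copy()
    # Magnitude threshold: the k-th smallest absolute weight.
    threshold = np.partition(flat, k - 1)[k - 1]
    # Keep only weights strictly above the threshold; the rest become zero.
    return weights * (np.abs(weights) > threshold)

# Illustrative weight matrix standing in for one layer of an OLLM.
rng = np.random.default_rng(0)
W = rng.normal(size=(100, 100))
W_pruned = prune_unstructured(W, sparsity=0.20)
print(f"achieved sparsity: {np.mean(W_pruned == 0):.2f}")
```

The zeroed weights can then be stored in sparse form, which is what reduces the memory footprint on medium-capacity servers.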
Table 2
Environmental characteristics.
| SIMULATED ENVIRONMENT | REPRESENTATIVE INFRASTRUCTURE |
|---|---|
| Public University | Server with 64 GB RAM + 6 GB VRAM GPU. |
| Community Center for Digital Literacy | Mid-range laptop with 16 GB RAM, no dedicated GPU. |
| Self-Organized Rural Classroom | Basic computer with 8 GB RAM and an unstable internet connection. |

Figure 1
Reduction in response time after unstructured pruning.

Figure 2
Reduction in resource consumption after model optimization.

Figure 3
Improvement in educational response quality with RAG integration.
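The RAG integration behind Figure 3 follows the usual retrieve-then-prompt pattern: select the most relevant verified source, then condition generation on it. A minimal sketch, with a hypothetical mini-corpus and a bag-of-words retriever standing in for the dense embeddings a real deployment would use:

```python
import numpy as np

# Hypothetical mini-corpus of verified open educational resources (illustrative).
CORPUS = [
    "Photosynthesis converts light energy into chemical energy in plants.",
    "The water cycle describes evaporation, condensation and precipitation.",
    "Newton's second law states that force equals mass times acceleration.",
]

def _tokens(text: str) -> list:
    return [t.strip(".,'?") for t in text.lower().split()]

def embed(text: str, vocab: dict) -> np.ndarray:
    """Bag-of-words vector; a real system would use sentence embeddings."""
    vec = np.zeros(len(vocab))
    for tok in _tokens(text):
        if tok in vocab:
            vec[vocab[tok]] += 1.0
    return vec

def retrieve(query: str, corpus: list, top_k: int = 1) -> list:
    vocab = {tok: i for i, tok in
             enumerate({t for doc in corpus for t in _tokens(doc)})}
    doc_vecs = [embed(d, vocab) for d in corpus]
    q = embed(query, vocab)
    # Cosine similarity between query and each document.
    sims = [float(q @ d) / ((np.linalg.norm(q) * np.linalg.norm(d)) or 1.0)
            for d in doc_vecs]
    ranked = sorted(range(len(corpus)), key=lambda i: -sims[i])
    return [corpus[i] for i in ranked[:top_k]]

question = "What is the water cycle?"
context = retrieve(question, CORPUS)[0]
prompt = f"Answer using only this verified source:\n{context}\n\nQuestion: {question}"
```

Constraining the prompt to the retrieved source is what gives the transparency claimed in Table 1: each answer can be traced back to a specific document.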

Figure 4
Overall impact of OLLM optimization on open education metrics.
Table 3
Preliminary results.
| DIMENSION | OBSERVED IMPACT |
|---|---|
| Response Style Learning | After the initial fine-tuning, 100% of generated responses consistently followed the ‘Definition: …’ template. |
| Adaptability to New Content | A progressive increase in the relevance of the responses was observed, directly related to the diversity of documents that users uploaded. |
| Computational Sustainability | Each incremental fine-tuning session could be completed in less than 10 minutes using accessible GPUs (e.g., Colab T4). |
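An incremental fine-tuning session like those in Table 3 amounts to a short run of gradient updates on each new batch of user-uploaded material. The toy model below (a linear regression trained with SGD, all values illustrative) only sketches that loop; the actual sessions would update adapter weights of the OLLM on a GPU such as a Colab T4.

```python
import numpy as np

def incremental_finetune(W, X, y, lr=0.1, epochs=200):
    """One short fine-tuning session: a few gradient epochs on new data.

    Toy stand-in for the incremental sessions in Table 3; W plays the
    role of the trainable weights being adapted to user-uploaded content.
    """
    for _ in range(epochs):
        pred = X @ W
        grad = X.T @ (pred - y) / len(y)  # gradient of mean squared error
        W = W - lr * grad
    return W

# Illustrative "new batch": features and targets from freshly uploaded documents.
rng = np.random.default_rng(1)
X = rng.normal(size=(32, 4))
true_W = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ true_W

W = np.zeros(4)                      # weights before the session
W = incremental_finetune(W, X, y)    # weights after the session
```

Because each session touches only a small batch, its cost stays bounded, which is the property behind the sub-10-minute sessions reported above.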
