Abstract
The goal of log anomaly detection is to accurately identify system anomalies from logs. As systems grow more complex, log volumes have become too large to analyze manually, and traditional methods often suffer from insufficient generalization and delayed detection when dealing with semantically diverse, loosely structured log data. To address this, this paper proposes FRLog, a log anomaly detection framework based on large language models, which produces contextualized semantic embeddings of log sequences by fusing BERT and LLaMA models, thereby enabling more accurate anomaly detection. In addition, the representation fine-tuning strategy ReFT is introduced, and a three-phase collaborative training mechanism jointly optimizes semantic bootstrapping, representation alignment, and global tuning. Experimental results on three representative log datasets (BGL, HDFS, and Thunderbird) show that FRLog outperforms existing mainstream methods in F1-score, Precision, and Recall, and exhibits stronger anomaly discrimination and sample efficiency in complex scenarios, verifying its superiority and robustness for the log anomaly detection task.