
Designing an LSTM-Based Model for Financial Asset Forecasting Using Machine Learning

Open Access | Feb 2026

Figures & Tables

Figure 1.

LSTM model learning curve
Source: Authors' elaboration (Python, 2025)

Figure 2.

UML model

Figure 3.

Actual and predicted Apple stock prices using LSTM
Source: Authors' elaboration (Python, 2025)

Figure 4.

Simulation of Apple's predicted stock prices using the LSTM model
Source: Authors' elaboration (based on Python implementation)

Figure 5.

Simulation of Microsoft's future stock prices using the LSTM model
Source: Authors' elaboration (based on Python implementation)

Figure 6.

LSTM model performance before and after the market crisis
Source: Authors' elaboration

Figure 7.

SHAP summary plot (Apple)
Source: Authors' elaboration

Figure 8.

SHAP bar plot (Apple/Microsoft)
Source: Authors' elaboration

Figure 9.

SHAP dependence plot (Apple, Lag_1)
Source: Authors' elaboration

Descriptive statistics of Apple stock closing prices (2010–2025)

Metric | Value
Mean Price | $142.78
Standard Deviation | $36.52
Minimum Price | $54.12
Maximum Price | $198.87
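
These statistics are not accompanied by code in this section; as a minimal sketch, they can be reproduced from a pandas series of daily closing prices. The CSV file name and column labels below are placeholders, not the authors' actual data pipeline.

# Minimal sketch: descriptive statistics for a series of closing prices.
# The file name and column names are hypothetical placeholders.
import pandas as pd

data = pd.read_csv("apple_prices.csv", index_col="Date", parse_dates=True)
close = data["Close"]

summary = {
    "Mean Price": close.mean(),
    "Standard Deviation": close.std(),
    "Minimum Price": close.min(),
    "Maximum Price": close.max(),
}
for metric, value in summary.items():
    print(f"{metric}: ${value:,.2f}")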

Metrics results for the Microsoft LSTM model

Metric | Value | Interpretation
RMSE | $3.65 | On average, predictions deviate by $3.65 from the actual closing prices. A lower RMSE indicates higher precision.
MAE | $2.98 | The mean absolute error is $2.98, showing that the model's daily predictions stay close to real market values.
R² | 0.9537 | The model explains 95.37% of the variance in stock prices, indicating excellent generalization capacity.
MAPE | 3.03% | The mean absolute percentage error is 3.03%, confirming that predictions are closely aligned with actual prices.
SMAPE | 3.10% | The symmetric MAPE is 3.10%, reinforcing strong forecast consistency across the prediction range.
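
For orientation, the error metrics above can be computed with NumPy and scikit-learn as sketched below. The arrays are illustrative placeholders; in the authors' workflow they would correspond to the denormalized actual and predicted closing prices (y_test_actual and y_pred in the tables that follow).

# Minimal sketch: standard regression metrics on denormalized prices.
# y_true and y_pred are placeholder arrays; replace with y_test_actual / y_pred.
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

y_true = np.array([310.0, 312.5, 308.0, 315.2])   # actual closing prices (example values)
y_pred = np.array([308.5, 311.0, 310.1, 314.0])   # model predictions (example values)

rmse = np.sqrt(mean_squared_error(y_true, y_pred))
mae = mean_absolute_error(y_true, y_pred)
r2 = r2_score(y_true, y_pred)
mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100
smape = np.mean(2 * np.abs(y_pred - y_true) / (np.abs(y_true) + np.abs(y_pred))) * 100

print(f"RMSE: ${rmse:.2f}  MAE: ${mae:.2f}  R2: {r2:.4f}  MAPE: {mape:.2f}%  SMAPE: {smape:.2f}%")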

LSTM model construction

Layer | Type | Parameters | Details
1 | LSTM | units=LSTM_UNITS, return_sequences=True, input_shape=(SEQUENCE_LENGTH, 1) | First LSTM layer; processes input sequences and outputs the full sequence
2 | Dropout | rate=0.2 | Regularization to prevent overfitting
3 | LSTM | units=LSTM_UNITS, return_sequences=True | Second LSTM layer; builds on the previous output
4 | Dropout | rate=0.2 | Regularization
5 | LSTM | units=LSTM_UNITS, return_sequences=False | Third LSTM layer; outputs the final summary vector
6 | Dropout | rate=0.2 | Regularization
7 | Dense | units=1 | Fully connected output layer for single-step price prediction
Compilation | - | optimizer=Adam(learning_rate=0.001), loss='mean_squared_error' | Model compiled with the Adam optimizer and MSE loss
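
The layer stack in this table corresponds directly to a sequential Keras model. The sketch below is an illustration rather than the authors' exact script: LSTM_UNITS = 64 is an assumed placeholder, while SEQUENCE_LENGTH = 90 follows the 90-day look-back window mentioned in the 60-day forecasting table.

# Sketch of the layer stack described above (Keras / TensorFlow).
# LSTM_UNITS = 64 is an assumed placeholder; SEQUENCE_LENGTH = 90 follows the
# "last 90 days" window mentioned in the 60-day forecasting table.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense
from tensorflow.keras.optimizers import Adam

LSTM_UNITS = 64        # assumption: exact value not given in this table
SEQUENCE_LENGTH = 90   # look-back window (trading days)

model = Sequential([
    LSTM(LSTM_UNITS, return_sequences=True, input_shape=(SEQUENCE_LENGTH, 1)),
    Dropout(0.2),
    LSTM(LSTM_UNITS, return_sequences=True),
    Dropout(0.2),
    LSTM(LSTM_UNITS, return_sequences=False),
    Dropout(0.2),
    Dense(1),
])
model.compile(optimizer=Adam(learning_rate=0.001), loss="mean_squared_error")
model.summary()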

Evaluation metrics results for the Apple LSTM model

Metric | Value | Interpretation
RMSE | $7.03 | On average, the predictions deviate by $7.03 from the actual values.
MAE | $5.50 | The average absolute error is $5.50, indicating high daily accuracy.
R² | 0.9537 | The model explains 95.37% of the price variance, reflecting excellent generalization capability.
MAPE | 3.03% | The average percentage error is very low, indicating predictions are closely aligned with real values.
SMAPE | 3.10% | The symmetric percentage error is also very low, confirming balanced accuracy for both over- and under-predictions.
Sharpe ratio | 1.23 | Indicative of strong risk-adjusted performance.
Hit ratio | 0.68 | The model correctly predicts the direction of price movement in 68% of cases.
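
The Sharpe ratio and hit ratio are not standard regression metrics, so a hedged sketch of one possible computation is given below. The long/short trading rule, the 252-trading-day annualization factor, and the zero risk-free rate are assumptions for illustration, not details stated in the table.

# Sketch: directional hit ratio and an annualized Sharpe ratio derived from
# predicted prices. Assumes daily data, 252 trading days/year, zero risk-free rate.
import numpy as np

y_true = np.array([180.0, 182.1, 181.3, 184.0, 186.2])  # actual prices (example values)
y_pred = np.array([179.2, 181.5, 182.0, 183.1, 185.9])  # predicted prices (example values)

# Hit ratio: share of days where the predicted direction matches the actual one.
actual_dir = np.sign(np.diff(y_true))
pred_dir = np.sign(np.diff(y_pred))
hit_ratio = np.mean(actual_dir == pred_dir)

# Sharpe ratio of a simple strategy that goes long/short following the predicted direction.
actual_returns = np.diff(y_true) / y_true[:-1]
strategy_returns = pred_dir * actual_returns
sharpe = np.sqrt(252) * strategy_returns.mean() / strategy_returns.std()

print(f"Hit ratio: {hit_ratio:.2f}  Sharpe ratio: {sharpe:.2f}")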

Prediction and visualization

Category | Item/Parameter | Value/Description
Prediction Process | Start message | "Prediction generation..."
 | Scaled predictions | y_pred_scaled = model.predict(X_test, verbose=0)
Denormalization | Predicted values | y_pred = scaler.inverse_transform(y_pred_scaled)
 | Actual values | y_test_actual = scaler.inverse_transform(y_test.reshape(-1, 1))
Status message | Completion message | "Denormalized predictions generated"
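
Assembled in order, the steps in this table amount to a few lines of code. The sketch below assumes that model, X_test, y_test, and a fitted scaler (e.g. a MinMaxScaler) already exist from the earlier preprocessing and training steps.

# Sketch: predict on the test set and map results back to the original price scale.
# Assumes `model`, `X_test`, `y_test` and a fitted `scaler` are already defined.
print("Prediction generation...")

y_pred_scaled = model.predict(X_test, verbose=0)                  # predictions on the normalized scale
y_pred = scaler.inverse_transform(y_pred_scaled)                  # back to dollar prices
y_test_actual = scaler.inverse_transform(y_test.reshape(-1, 1))   # actual prices on the same scale

print("Denormalized predictions generated")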

Script for 60-day future prediction using the LSTM model

Category | Item/Parameter | Value/Description
Prediction Initialization | Start message | "Prediction generation (60 days)..."
 | Last sequence | scaled_prices[-SEQUENCE_LENGTH:].reshape(1, SEQUENCE_LENGTH, 1) (uses the last 90 days to predict the next 60)
 | Future predictions list | future_predictions = []
Iterative Prediction (60 days) | Loop | for _ in range(60): (iterates once per future day)
 | Next prediction | next_pred = model.predict(last_sequence, verbose=0)
 | Append prediction | future_predictions.append(next_pred[0, 0])
 | Update sequence | last_sequence = np.roll(last_sequence, -1, axis=1) (shifts the window, dropping the oldest element)
 | Add new prediction | last_sequence[0, -1, 0] = next_pred[0, 0] (inserts the new prediction at the end of the window)
Denormalization | Future predictions (actual scale) | future_predictions = np.array(future_predictions).reshape(-1, 1); future_predictions_actual = scaler.inverse_transform(future_predictions)
Date creation | Last date | last_date = data.index[-1]
 | Future dates | future_dates = pd.bdate_range(start=last_date + pd.Timedelta(days=1), periods=60) (60 business days)
Status message | Completion message | "Prediction generated"
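
Written out as a script, the iterative loop described above looks roughly as follows. As in the table, SEQUENCE_LENGTH is the 90-day look-back window; model, scaler, scaled_prices, and data are assumed to be defined by the preceding steps.

# Sketch: recursive 60-day forecast. Each new prediction is appended to the input
# window so the next step is predicted from it. Assumes `model`, `scaler`,
# `scaled_prices` (normalized closing prices) and `data` (DataFrame with a DatetimeIndex).
import numpy as np
import pandas as pd

SEQUENCE_LENGTH = 90  # look-back window in trading days, per the table above

last_sequence = scaled_prices[-SEQUENCE_LENGTH:].reshape(1, SEQUENCE_LENGTH, 1)
future_predictions = []

for _ in range(60):
    next_pred = model.predict(last_sequence, verbose=0)
    future_predictions.append(next_pred[0, 0])
    last_sequence = np.roll(last_sequence, -1, axis=1)  # drop the oldest value
    last_sequence[0, -1, 0] = next_pred[0, 0]           # insert the new prediction

future_predictions = np.array(future_predictions).reshape(-1, 1)
future_predictions_actual = scaler.inverse_transform(future_predictions)

last_date = data.index[-1]
future_dates = pd.bdate_range(start=last_date + pd.Timedelta(days=1), periods=60)  # 60 business days
print("Prediction generated")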

Model training

Category | Item/Parameter | Value/Description
Callbacks Configuration | early_stopping | Stops training if val_loss does not improve for 10 epochs; restores the best weights.
 | Monitor | val_loss
 | Patience | 10
 | Restore best weights | True
 | Verbose | 1
 | reduce_lr | Reduces the learning rate if val_loss does not improve for 5 epochs.
 | Monitor | val_loss
 | Factor | 0.5
 | Patience | 5
 | Minimum learning rate | 0.0001
 | Verbose | 1
Model Training | history | Stores the training history
 | Model | model
 | Training data (X) | x_train
 | Training data (y) | y_train
 | Epochs | EPOCHS (variable)
 | Batch size | BATCH_SIZE (variable)
 | Validation split | VALIDATION_SPLIT (variable)
 | Callbacks | [early_stopping, reduce_lr]
 | Verbose | 1
Training Status Messages | Start message | "Callback configuration..."
 | Callbacks configured | "Callbacks configured"
 | Training start | "Start training (EPOCHS) epochs..."
 | Training end | "End of the training"
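
A sketch of the callback configuration and training call implied by this table is shown below. EPOCHS, BATCH_SIZE, and VALIDATION_SPLIT are the variable names used in the table; the numeric values assigned to them here are placeholders, and model, x_train, and y_train are assumed to come from the earlier steps.

# Sketch: early stopping and learning-rate reduction callbacks, then model.fit().
# The EPOCHS, BATCH_SIZE and VALIDATION_SPLIT values are placeholders; only the
# variable names appear in the table.
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau

EPOCHS = 100             # placeholder value
BATCH_SIZE = 32          # placeholder value
VALIDATION_SPLIT = 0.1   # placeholder value

print("Callback configuration...")
early_stopping = EarlyStopping(monitor="val_loss", patience=10,
                               restore_best_weights=True, verbose=1)
reduce_lr = ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=5,
                              min_lr=0.0001, verbose=1)
print("Callbacks configured")

print(f"Start training ({EPOCHS}) epochs...")
history = model.fit(x_train, y_train,
                    epochs=EPOCHS,
                    batch_size=BATCH_SIZE,
                    validation_split=VALIDATION_SPLIT,
                    callbacks=[early_stopping, reduce_lr],
                    verbose=1)
print("End of the training")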

Comparative performance of ARIMA, SVR, and LSTM models

Model | RMSE | MAE | R² | MAPE | SMAPE | Sharpe ratio | Hit ratio
ARIMA | 9.50 | 7.10 | 0.88 | 5.70% | 5.85% | 0.74 | 54%
SVR | 8.10 | 6.00 | 0.90 | 4.80% | 4.90% | 0.95 | 60%
LSTM | 7.03 | 5.50 | 0.95 | 3.03% | 3.10% | 1.23 | 68%
DOI: https://doi.org/10.2478/ceej-2026-0001 | Journal eISSN: 2543-6821 | Journal ISSN: 2544-9001
Language: English
Page range: 1 - 23
Submitted on: Jun 17, 2025 | Accepted on: Nov 20, 2025 | Published on: Feb 2, 2026

© 2026 Najlae Yachou, Omar Abahman, Khalid Hakimi, published by Faculty of Economic Sciences, University of Warsaw
This work is licensed under the Creative Commons Attribution 4.0 License.

Volume 13 (2026): Issue 60 (January 2026)