Descriptive statistics of Apple stock closing prices (2010–2025)
| Metric | Value |
|---|---|
| Mean Price | $142.78 |
| Standard Deviation | $36.52 |
| Minimum Price | $54.12 |
| Maximum Price | $198.87 |
Evaluation metrics results for the MICROSOFT LSTM model
| Metric | Value | Interpretation |
|---|---|---|
| RMSE | $3.65 | On average, predictions deviate by $3.65 from the actual closing prices; a lower RMSE indicates higher precision. |
| MAE | $2.98 | The mean absolute error is $2.98, showing the model’s daily prediction is close to real market values. |
| R2 | 0.9537 | The model explains 95.37% of the variance in stock prices, indicating excellent generalization capacity. |
| MAPE | 3.03% | The mean absolute percentage error is 3.03%, confirming that predictions are closely aligned with actual prices. |
| SMAPE | 3.10% | The symmetric MAPE is 3.10%, reinforcing strong forecast consistency across the prediction range. |
LSTM model construction
| Layer | Type | Parameters | Details |
|---|---|---|---|
| 1 | LSTM | units=LSTM_UNITS, return_sequences=True, input_shape=(SEQUENCE_LENGTH, 1) | First LSTM layer processes input sequences; outputs full sequence |
| 2 | Dropout | rate=0.2 | Regularization to prevent overfitting |
| 3 | LSTM | units=LSTM_UNITS, return_sequences=True | Second LSTM layer builds on previous output |
| 4 | Dropout | rate=0.2 | Regularization |
| 5 | LSTM | units=LSTM_UNITS, return_sequences=False | Third LSTM layer outputs final summary vector |
| 6 | Dropout | rate=0.2 | Regularization |
| 7 | Dense | units=1 | Fully connected output layer for single-step price prediction |
| - | Compilation | optimizer=Adam(learning_rate=0.001), loss='mean_squared_error' | Model compiled with Adam optimizer and MSE loss |
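The layer stack in the table above can be sketched in Keras as follows. LSTM_UNITS=50 and SEQUENCE_LENGTH=90 are assumptions for illustration; the table leaves both as variables (the 60-day prediction table mentions a 90-day input window).

```python
# Sketch of the table's architecture; LSTM_UNITS and SEQUENCE_LENGTH are
# assumed values, not confirmed by the source.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense
from tensorflow.keras.optimizers import Adam

LSTM_UNITS = 50        # assumption
SEQUENCE_LENGTH = 90   # assumption, consistent with the 90-day window below

model = Sequential([
    LSTM(LSTM_UNITS, return_sequences=True, input_shape=(SEQUENCE_LENGTH, 1)),
    Dropout(0.2),                                   # regularization
    LSTM(LSTM_UNITS, return_sequences=True),
    Dropout(0.2),
    LSTM(LSTM_UNITS, return_sequences=False),       # final summary vector
    Dropout(0.2),
    Dense(1),                                       # single-step price output
])
model.compile(optimizer=Adam(learning_rate=0.001), loss='mean_squared_error')
```

Stacking three LSTM layers with `return_sequences=True` on all but the last lets each layer consume the full hidden-state sequence of the previous one.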
Evaluation metrics results for the APPLE LSTM model
| Metric | Value | Interpretation |
|---|---|---|
| RMSE | $7.03 | On average, the predictions deviate by $7.03 from the actual values. |
| MAE | $5.50 | The average absolute error is $5.50, indicating high daily accuracy. |
| R2 | 0.9537 | The model explains 95.37% of the price variance, reflecting excellent generalization capability. |
| MAPE | 3.03% | The average percentage error is very low, indicating predictions are closely aligned with real values. |
| SMAPE | 3.10% | The symmetric percentage error is also very low, confirming balanced accuracy for both over- and under-predictions. |
| Sharpe ratio | 1.23 | Indicative of strong risk-adjusted performance. |
| Hit ratio | 0.68 | The model correctly predicts the direction of price movement in 68% of cases. |
Prediction and visualization
| Category | Item/Parameter | Value/Description |
|---|---|---|
| Prediction Process | Start message | "Prediction generation..." |
| | Scaled predictions | y_pred_scaled = model.predict(X_test, verbose=0) |
| Denormalization | Predicted values | y_pred = scaler.inverse_transform(y_pred_scaled) |
| | Actual values | y_test_actual = scaler.inverse_transform(y_test.reshape(-1, 1)) |
| Status Message | Completion message | "Denormalized predictions generated" |
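The denormalization step in the table can be sketched as follows, assuming the scaler is scikit-learn's MinMaxScaler fitted on the closing prices (the toy price array and stand-in scaled outputs are illustrative, not the paper's data).

```python
# Minimal sketch of the denormalization step; `prices` and the scaled
# outputs below are made-up stand-ins for the real data and model.
import numpy as np
from sklearn.preprocessing import MinMaxScaler

prices = np.array([[100.0], [110.0], [120.0], [130.0]])  # toy closing prices
scaler = MinMaxScaler()
scaler.fit(prices)

# stand-in for y_pred_scaled = model.predict(X_test, verbose=0)
y_pred_scaled = np.array([[0.5], [0.75]])

y_pred = scaler.inverse_transform(y_pred_scaled)          # back to dollars
y_test_actual = scaler.inverse_transform(
    np.array([0.45, 0.8]).reshape(-1, 1)                  # stand-in for y_test
)
```

Because the model was trained on min-max-scaled targets, `inverse_transform` maps its outputs back to the original dollar scale before any metric is computed.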
Script for 60-day future prediction using the LSTM model
| Category | Item/Parameter | Value/Description |
|---|---|---|
| Prediction Initialization | Start message | "Prediction generated (60 days)..." |
| | Last sequence | last_sequence = scaled_prices[-SEQUENCE_LENGTH:].reshape(1, SEQUENCE_LENGTH, 1) (uses the last 90 days to predict the next 60) |
| | Future predictions list | future_predictions = [] |
| Iterative Prediction (60 days) | Loop | for _ in range(60): (iterates once per future day) |
| | Next prediction | next_pred = model.predict(last_sequence, verbose=0) |
| | Append prediction | future_predictions.append(next_pred[0, 0]) |
| | Update sequence | last_sequence = np.roll(last_sequence, -1, axis=1) (removes the first element) |
| | Add new prediction | last_sequence[0, -1, 0] = next_pred[0, 0] (adds the new prediction) |
| Denormalization | Array reshape | future_predictions = np.array(future_predictions).reshape(-1, 1) |
| | Actual future predictions | future_predictions_actual = scaler.inverse_transform(future_predictions) |
| Date Creation | Last date | last_date = data.index[-1] |
| | Future dates | future_dates = pd.bdate_range(start=last_date + pd.Timedelta(days=1), periods=60) (60 business days) |
| Status Message | Completion message | "Prediction generated" |
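Assembled from the table rows, the recursive forecast loop can be sketched as below. A stub predictor stands in for the trained LSTM, and the synthetic `scaled_prices` array and hard-coded `last_date` are assumptions so the sketch is self-contained; in the source, `last_date` comes from `data.index[-1]`.

```python
# Sketch of the iterative 60-business-day forecast; StubModel, the
# synthetic scaled_prices, and last_date are stand-ins, not the paper's.
import numpy as np
import pandas as pd

SEQUENCE_LENGTH = 90
scaled_prices = np.linspace(0.0, 1.0, 200).reshape(-1, 1)  # toy scaled series

class StubModel:
    """Placeholder for the trained LSTM: naive drift, illustration only."""
    def predict(self, seq, verbose=0):
        return seq[:, -1, :] + 0.001

model = StubModel()
last_sequence = scaled_prices[-SEQUENCE_LENGTH:].reshape(1, SEQUENCE_LENGTH, 1)
future_predictions = []
for _ in range(60):                                   # one step per future day
    next_pred = model.predict(last_sequence, verbose=0)
    future_predictions.append(next_pred[0, 0])
    last_sequence = np.roll(last_sequence, -1, axis=1)  # drop oldest time step
    last_sequence[0, -1, 0] = next_pred[0, 0]           # append new prediction

future_predictions = np.array(future_predictions).reshape(-1, 1)
last_date = pd.Timestamp("2025-06-30")                # would be data.index[-1]
future_dates = pd.bdate_range(start=last_date + pd.Timedelta(days=1), periods=60)
```

Each predicted value is fed back into the input window, so forecast errors compound over the 60-step horizon; `pd.bdate_range` keeps the forecast index on trading (business) days.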
Model training
| Category | Item/Parameter | Value/Description |
|---|---|---|
| Callbacks Configuration | early_stopping | Stops training if val_loss does not improve for 10 epochs; restores best weights. |
| | - monitor | val_loss |
| | - patience | 10 |
| | - restore_best_weights | True |
| | - verbose | 1 |
| | reduce_lr | Reduces the learning rate if val_loss does not improve for 5 epochs. |
| | - monitor | val_loss |
| | - factor | 0.5 |
| | - patience | 5 |
| | - min_lr | 0.0001 |
| | - verbose | 1 |
| Model Training | history | Stores the training history |
| | - model | model |
| | - training data (X) | X_train |
| | - training data (y) | y_train |
| | - epochs | EPOCHS (variable) |
| | - batch size | BATCH_SIZE (variable) |
| | - validation split | VALIDATION_SPLIT (variable) |
| | - callbacks | [early_stopping, reduce_lr] |
| | - verbose | 1 |
| Training Status Messages | Start message | "Configuring callbacks..." |
| | Callbacks configured | "Callbacks configured" |
| | Training start | "Starting training (EPOCHS epochs)..." |
| | Training end | "End of training" |
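The callback configuration in the table maps directly onto Keras's built-in callbacks; a sketch follows, with the `model.fit` call shown as a comment because `model`, `X_train`, `y_train`, `EPOCHS`, `BATCH_SIZE`, and `VALIDATION_SPLIT` are defined elsewhere in the pipeline.

```python
# Sketch of the callback setup from the table; the fit call is commented
# out because its inputs come from earlier pipeline steps.
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau

early_stopping = EarlyStopping(monitor='val_loss', patience=10,
                               restore_best_weights=True, verbose=1)
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.5,
                              patience=5, min_lr=0.0001, verbose=1)

# history = model.fit(X_train, y_train,
#                     epochs=EPOCHS, batch_size=BATCH_SIZE,
#                     validation_split=VALIDATION_SPLIT,
#                     callbacks=[early_stopping, reduce_lr], verbose=1)
```

The two patience values are staggered on purpose: the learning rate is halved after 5 stagnant epochs, giving the model a chance to escape a plateau before early stopping ends training at 10.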
Comparative performance of ARIMA, SVR, and LSTM models
| Model | RMSE | MAE | R2 | MAPE | SMAPE | Sharpe ratio | Hit ratio |
|---|---|---|---|---|---|---|---|
| ARIMA | 9.50 | 7.10 | 0.88 | 5.70% | 5.85% | 0.74 | 54% |
| SVR | 8.10 | 6.00 | 0.90 | 4.80% | 4.90% | 0.95 | 60% |
| LSTM | 7.03 | 5.50 | 0.95 | 3.03% | 3.10% | 1.23 | 68% |
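The metrics compared above can be computed with the following sketch; the formulas are the standard definitions, and the toy actual/predicted series is illustrative, not the paper's data.

```python
# Standard definitions of the comparison metrics; y/p below are toy series.
import numpy as np

def rmse(y, p):  return float(np.sqrt(np.mean((y - p) ** 2)))
def mae(y, p):   return float(np.mean(np.abs(y - p)))
def r2(y, p):    return float(1 - np.sum((y - p) ** 2) / np.sum((y - y.mean()) ** 2))
def mape(y, p):  return float(np.mean(np.abs((y - p) / y)) * 100)
def smape(y, p): return float(np.mean(2 * np.abs(p - y) / (np.abs(y) + np.abs(p))) * 100)

def hit_ratio(y, p):
    # fraction of days where predicted and actual price moves share a sign
    return float(np.mean(np.sign(np.diff(y)) == np.sign(np.diff(p))))

y = np.array([100.0, 102.0, 101.0, 105.0])   # toy actual closing prices
p = np.array([101.0, 101.5, 102.0, 104.0])   # toy predicted prices
```

Note that RMSE, MAE, and R² are scale-dependent (dollar) measures, while MAPE, SMAPE, and the hit ratio are scale-free, which is why the table reports both kinds.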
