Dataset statistics
| Dataset domain | Total | Positive | Negative |
|---|---|---|---|
| Restaurant from Yelp | 1,000 | 500 | 500 |
| Mobile from Amazon | 1,000 | 500 | 500 |
| Movies from IMDB | 1,000 | 500 | 500 |
Comparison of the proposed HCL-Bi-LSTM model (T = training, V = validation)
| Model | Single-layer Bi-LSTM (Hameed and Garcia-Zapirain, 2020) | | Two-layer Bi-LSTM | | Two-layer HCL-Bi-LSTM | |
|---|---|---|---|---|---|---|
| Dataset | T | V | T | V | T | V |
| Amazon | 0.83 | 0.51 | 0.91 | 0.70 | 0.95 | 0.76 |
| Yelp | 0.84 | 0.70 | 0.85 | 0.72 | 0.86 | 0.75 |
| IMDB | 0.71 | 0.81 | 0.90 | 0.81 | 0.95 | 0.82 |
Various model parameters
| Parameter | Value |
|---|---|
| Vocabulary size | 10,000 |
| Bi-LSTM layers | 2 |
| Dense layers | 1 |
| Activation | Sigmoid |
| Optimizer | Adam |
| Loss function | Binary cross-entropy |
| Input length | 100 |
| Learning rate | 0.002 |
| Epochs | 10 |
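
The parameters table maps directly onto a standard stacked Bi-LSTM build. The following is a minimal sketch assuming a Keras/TensorFlow implementation; the paper does not report the embedding dimension or LSTM layer widths, so `EMBED_DIM` and `LSTM_UNITS` below are illustrative placeholders, and the random arrays merely stand in for the padded review sequences.

```python
# Sketch of the two-layer Bi-LSTM configuration from the parameters table.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

VOCAB_SIZE = 10_000   # vocabulary size (from the table)
INPUT_LEN = 100       # input length (from the table)
EMBED_DIM = 128       # assumption: not specified in the paper
LSTM_UNITS = 64       # assumption: not specified in the paper

model = keras.Sequential([
    keras.Input(shape=(INPUT_LEN,)),
    layers.Embedding(VOCAB_SIZE, EMBED_DIM),
    # Two stacked bidirectional LSTM layers ("Bi-LSTM layers: 2").
    layers.Bidirectional(layers.LSTM(LSTM_UNITS, return_sequences=True)),
    layers.Bidirectional(layers.LSTM(LSTM_UNITS)),
    # Single dense output with sigmoid activation for binary sentiment.
    layers.Dense(1, activation="sigmoid"),
])

# Adam optimizer at the stated learning rate, binary cross-entropy loss.
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=0.002),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)

# Placeholder data with the table's vocabulary size and input length.
x = np.random.randint(0, VOCAB_SIZE, size=(1_000, INPUT_LEN))
y = np.random.randint(0, 2, size=(1_000,))

# 10 epochs as in the table; the final training and validation accuracies
# correspond to the T and V columns of the comparison table.
history = model.fit(x, y, epochs=10, validation_split=0.2, verbose=0)
print("T:", history.history["accuracy"][-1],
      "V:", history.history["val_accuracy"][-1])
```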