
Optimization and Improvement of BP Decoding Algorithm for Polar Codes Based on Deep Learning

By Li Ge and Guiping Li
Open Access | Aug 2023

Figures & Tables

Figure 1. Structure of polar code

Figure 2. Multi-layer structure of deep neural network

Figure 3. Block diagram of the neural-network-based decoder system

Figure 4. Performance of different network structures at N=8

Figure 5. Performance of different network structures at N=16

Figure 6. Performance of different network structures at N=32

Figure 7. Structure diagram of the proposed MLP-BP

Figure 8. Interaction of the BP and DNN blocks

Figure 9. Evolution of the MLP-BP training loss at N=128

Figure 10. BER evolution of MLP-BP and BP at N=128

Figure 11. BER comparison of the two decoding methods at N=32

Figure 12. BER comparison of the two decoding methods at N=128

Decoding Time Delay

Algorithm              BP     MLP-BP
Decoding time delay    380    72

Proposed MLP-BP decoding algorithm

1: Input: y0, y1, ⋯, yN−1
2: Output: u0, u1, ⋯, uN−1
3: Initialization: initialize LLR(yj) using equation (2)
4: for iter ← 1 to itermax do
5:   for i ← n + 1 to nNND do
6:     update L_{i,j}^{iter} using equation (3)
7:   end for
8:   on reaching the NND stage, use the sub-block NNDsub to compute usub
9:   re-encode usub to obtain xsub
10:  if xsub passes the CRC check then
11:    compute R_{nNND,sub}^{iter} using equation (7)
12:  end if
13:  propagate the R messages back:
14:  for i ← nNND to n do
15:    update R_{i+1,j}^{iter} using equation (3)
16:  end for
17: end for
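The CRC gate in steps 8–12 re-encodes the NND sub-block decision and only feeds the result back into the BP graph when the checksum passes. That check can be sketched as plain GF(2) polynomial division; the paper does not say which CRC polynomial is used, so the CRC-8 polynomial below is purely illustrative:

```python
# Bit-level CRC used to validate the re-encoded NND sub-block decision.
# CRC8_POLY is an illustrative choice; the paper does not specify the polynomial.
CRC8_POLY = [1, 0, 0, 0, 0, 0, 1, 1, 1]  # x^8 + x^2 + x + 1

def crc_remainder(bits, poly=CRC8_POLY):
    """Remainder of dividing `bits` (MSB first) by `poly` over GF(2)."""
    work = list(bits) + [0] * (len(poly) - 1)  # append r zero bits
    for i in range(len(bits)):
        if work[i]:
            for j, p in enumerate(poly):
                work[i + j] ^= p
    return work[-(len(poly) - 1):]

def crc_passes(codeword, poly=CRC8_POLY):
    """Step 10 of the algorithm: accept u_sub only if the remainder is zero."""
    return not any(crc_remainder(codeword, poly))

# Attach a CRC to a message, then verify it the way step 10 would.
msg = [1, 0, 1, 1, 0, 0, 1, 0]
codeword = msg + crc_remainder(msg)
```

A codeword built this way always passes, while any single flipped bit fails the check, which is what lets the decoder reject unreliable NND decisions before they re-enter the BP iterations.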

Network Structure

Code length    32-16-8    128-64-32    512-256-128
N=8            1024       11752        169846
N=16           1352       13488        174992
N=32           1352       13488        174992

Parameter Setting

Set options              Value
Test platform            TensorFlow
Encoding                 Polar(32,16), (64,128)
Signal-to-noise ratio    1–5 dB
Loss function            cross-entropy loss
Optimizer                Adam

Parameters Settings

Parameters           Value
Code length          8, 16, 32
Code rate            0.5
Batch size           512
Learning rate        0.001
Training set size    10^6
Epochs               10^3
Network structure    32-16-8, 128-64-32, 512-256-128
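The training setup above implies that each sample pairs a random information word with the channel LLRs the network sees: polar-encode, BPSK-modulate, add AWGN at the chosen SNR, and convert to LLRs. A minimal sketch of that pipeline, assuming the non-bit-reversed transform x = u·F^⊗n and the standard AWGN LLR = 2y/σ² (the paper's exact noise scaling is not given):

```python
import math
import random

def polar_encode(u):
    """x = u * F^(kron n) over GF(2), via the in-place butterfly recursion."""
    x, step, n = list(u), 1, len(u)
    while step < n:
        for i in range(0, n, 2 * step):
            for j in range(i, i + step):
                x[j] ^= x[j + step]
        step *= 2
    return x

def training_sample(n_bits, snr_db, rng):
    """One (label, LLR-input) pair for the decoder network (illustrative scaling)."""
    u = [rng.randint(0, 1) for _ in range(n_bits)]
    x = polar_encode(u)
    sigma = math.sqrt(1.0 / (2 * 10 ** (snr_db / 10)))   # unit-energy BPSK assumption
    y = [(1 - 2 * b) + rng.gauss(0, sigma) for b in x]   # BPSK + AWGN
    llr = [2 * v / sigma ** 2 for v in y]
    return u, llr
```

Because F over GF(2) is an involution, applying `polar_encode` twice returns the original word, which is a convenient sanity check on the butterfly indexing.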

Polar(32,16) Divided into Four Parts

Partition    Information bits             Relative location    Code rate
[0–7]        none                         none                 0
[8–15]       {11,12,13,14}                {3,5,6,7}            0.5
[16–23]      {19,21,22,23}                {3,5,6,7}            0.5
[24–31]      {24,25,26,27,28,29,30,31}    {0,1,2,3,4,5,6,7}    1

Polar(32,16) Divided into Two Parts

Partition    Information bits                          Code rate
[0–15]       {11,12,14,15}                             0.25
[16–31]      {19,21,22,23,24,25,26,27,28,29,30,31}     0.75
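The code-rate column in both partition tables is simply the fraction of information-bit indices that fall inside each partition. That bookkeeping can be reproduced directly; the index sets below are copied from the two tables:

```python
def partition_rates(info_bits, partitions):
    """Code rate of each partition = |info indices in partition| / partition size."""
    info = set(info_bits)
    return [sum(1 for i in range(lo, hi + 1) if i in info) / (hi - lo + 1)
            for lo, hi in partitions]

# Information sets as listed in the four-part and two-part tables, respectively.
info_four = {11, 12, 13, 14, 19, 21, 22, 23,
             24, 25, 26, 27, 28, 29, 30, 31}
info_two = {11, 12, 14, 15, 19, 21, 22, 23,
            24, 25, 26, 27, 28, 29, 30, 31}

four_way = partition_rates(info_four, [(0, 7), (8, 15), (16, 23), (24, 31)])
two_way = partition_rates(info_two, [(0, 15), (16, 31)])
```

Both results reproduce the tables' code-rate columns: 0, 0.5, 0.5, 1 for the four-way split and 0.25, 0.75 for the two-way split.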
Language: English
Page range: 61 - 71
Published on: Aug 16, 2023
In partnership with: Paradigm Publishing Services
Publication frequency: 4 issues per year

© 2023 Li Ge, Guiping Li, published by Xi’an Technological University
This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.