The program features invited talks and a poster session, as well as several networking opportunities. Note that no proceedings will be published.
All the oral sessions will take place in room 1035 of the J.-Armand Bombardier building.
Session 1 – Energy-efficient coding and coding for energy efficiency
Analog Error Correcting Codes for Energy Efficient Computing
Presenter: Cai Kui [REMOTE]
Co-author: Song Wentu
Resistive non-volatile memory (NVM) crossbar arrays are critical for various future data storage applications. They hold even greater potential for next-generation computing such as neuromorphic computing, because resistive memory crossbar arrays can be used naturally in analog computing circuits to implement the vector-matrix multiplication (VMM) operation, a fundamental and computationally expensive operation in deep neural networks (DNNs) and other machine learning algorithms. The corresponding energy consumption is significantly reduced compared to traditional digital computing circuits. However, computing with analog devices is prone to errors caused by impairments of the NVM devices and by circuit noise. It has been found that although small errors are tolerable for many applications such as neural networks, significant outlying errors due to stuck cells, short cells, and cell conductance drift must be detected and corrected. In this work, we propose novel analog error correcting codes (ECCs) that can correct a single outlying error. Different from digital ECCs, our analog ECCs are defined over the real number domain and hence can support the VMM operation in the analog domain. For a fixed-magnitude outlier, our proposed code construction achieves a significant improvement in code rate over the prior-art coding scheme with the same encoding and decoding complexity.
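As a toy illustration of error correction over the reals (a hypothetical construction for this program, not the code proposed in the talk), the sketch below appends two real-valued parity symbols whose syndromes locate and cancel a single large additive outlier among the data symbols:

```python
import numpy as np

def encode(x):
    # Append two real-valued parity symbols: plain sum and index-weighted sum.
    i = np.arange(1, len(x) + 1)
    return np.concatenate([x, [x.sum(), (i * x).sum()]])

def correct_single_outlier(y, k):
    # y: received codeword of length k+2; assumes at most one outlying error,
    # confined to the k data symbols (a simplification for illustration).
    x, p1, p2 = y[:k].copy(), y[k], y[k + 1]
    i = np.arange(1, k + 1)
    d1, d2 = x.sum() - p1, (i * x).sum() - p2
    if abs(d1) > 1e-9:                 # nonzero syndrome: outlier detected
        pos = int(round(d2 / d1)) - 1  # locate it from the syndrome ratio
        x[pos] -= d1                   # subtract the estimated outlier
    return x

data = np.array([0.5, -1.2, 3.3, 0.7])
cw = encode(data)
cw[2] += 10.0                          # inject one large outlying error
print(np.allclose(correct_single_outlier(cw, 4), data))  # True
```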
Coding for power-efficient Binary Neural Networks implemented with noisy memristor crossbars
Presenter: Elsa Dupraz
Memristor crossbar architectures have emerged as innovative resistive storage techniques that allow computation to be performed directly within the memory. Such techniques can reduce data transfers between memories and processors, which often constitute the bottleneck in terms of delay and power consumption of Deep Neural Network (DNN) implementations. Binary Neural Networks (BNNs) are specific DNN structures that involve only binary weights and binary outputs. They have simple hardware implementations and low memory requirements, while offering good learning performance. Therefore, it seems natural to implement BNNs on memristor crossbars in which memristance values can take only two levels. When building a computational system from a memristor crossbar, one of the main difficulties is to set memristance values with arbitrary precision. Hence, in this work, we consider the setting where memristance values are noisy. We first introduce a theoretical analysis to evaluate the effect of noise on the memristor crossbar computation needed by a BNN. This theoretical analysis also allows us to predict the average power consumption of the crossbar. We then propose a coding method that efficiently protects the crossbar computation against errors in the memristance values. This coding method relies on Low-Density Generator-Matrix (LDGM) codes and on a belief propagation decoder over integer values. Our simulation results show that the proposed coding method significantly reduces the power consumption of the memristor crossbar compared to the uncoded case.
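The effect of noisy memristance values on a binarized computation can be illustrated with a toy model (sizes and noise level are arbitrary assumptions, and this omits the LDGM coding scheme of the talk): each binary weight is perturbed before the crossbar accumulates column currents, and the BNN neuron output is the sign of the result.

```python
import numpy as np

# Toy model of a binary (+/-1) vector-matrix multiply on a noisy memristor
# crossbar: each stored weight is perturbed by Gaussian noise, and the BNN
# output is the sign of the accumulated column current.
rng = np.random.default_rng(1)
W = rng.choice([-1.0, 1.0], size=(64, 256))   # binary weights
x = rng.choice([-1.0, 1.0], size=256)         # binary activations

ideal = np.sign(W @ x)
noisy = np.sign((W + rng.normal(scale=0.4, size=W.shape)) @ x)
print("flipped outputs:", int(np.sum(ideal != noisy)), "of", len(ideal))
```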
Enabling Efficient (LDPC) Channel Decoding and DSP on Unreliable Silicon for High-Volume Manufacturing by Restoring the Beauty of Randomness
Presenter: Andreas Burg
Co-author: R. Ghanaathian
The idea of avoiding costly design margins to save power and area in silicon implementations of DSP algorithms has received significant attention in the research community. In particular, many papers have studied the impact of tolerating silicon reliability issues on channel decoders, including LDPC and Turbo codes. In fact, numerous simulation-based studies and even beautiful theoretical results based on information-theoretic metrics and principles predict significant advantages for such concepts (referred to as approximate computing, computing on unreliable silicon, or tolerating noisy hardware), especially in the context of communications, where signals are distorted by noise anyway. Unfortunately, despite very promising theoretical results, industry has so far refused to even remotely consider any idea that involves compromising 100% reliable operation. In this talk, I will explain the reasons for this conservative attitude and the issues that create a gap between theoretical (simulation) models and the reality of high-volume silicon manufacturing. To this end, I will use the example of an ASIC that demonstrates, for the first time, an LDPC channel decoder on unreliable silicon. I will establish a meaningful quality assessment strategy that addresses the concerns of high-volume manufacturing, and I will explain some straightforward but important design modifications that are necessary to render the idea of (approximate) computing on unreliable or noisy silicon viable in practice.
Density Evolution for Hybrid-Schedule LDPC Decoders
Presenter: François Leduc-Primeau
Co-authors: Jérémy Nadal and Elsa Dupraz
We consider relaxed forms of message-passing schedules for LDPC decoding between the typical sequential and flooding schedules, thus called hybrid. The added degree of freedom in the scheduling can be used to increase parallelism, for instance to allow deeper circuit pipelines and thus higher clock rates in implementations. We propose a density evolution analysis for hybrid-schedule decoders to quickly predict the bit-error rate and the number of iterations required by a particular code and decoder. We then use the method to solve relevant optimization problems.
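Density evolution itself can be illustrated in a few lines. The sketch below runs the standard flooding-schedule recursion for a regular (3,6) LDPC ensemble on the binary erasure channel and bisects for the decoding threshold; it is a textbook toy, not the hybrid-schedule analysis proposed in the talk.

```python
# Density evolution for a regular (dv=3, dc=6) LDPC ensemble on the BEC
# under a flooding schedule: the erasure probability of variable-to-check
# messages follows x <- eps * (1 - (1 - x)^(dc-1))^(dv-1).
def de_threshold(dv=3, dc=6, iters=10000, tol=1e-10):
    lo, hi = 0.0, 1.0
    while hi - lo > 1e-6:              # bisect on channel erasure prob eps
        eps = (lo + hi) / 2
        x = eps
        for _ in range(iters):
            x = eps * (1 - (1 - x) ** (dc - 1)) ** (dv - 1)
            if x < tol:
                break
        if x < tol:                    # erasures vanish: below threshold
            lo = eps
        else:
            hi = eps
    return lo

print(de_threshold())  # close to the known (3,6) BEC threshold of about 0.4294
```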
Sign-Preserving Min-Sum: a low-complexity LDPC decoder
Presenter: Emmanuel Boutillon
This presentation covers recent results on an LDPC decoding algorithm called Sign-Preserving Min-Sum (SP-MS). SP-MS significantly improves decoding performance compared to the classical Offset Min-Sum (OMS) decoder when messages are quantized on q = 2, 3, or 4 bits. The particularity of the SP-MS decoder is that messages cannot take the value 0 and thus fully benefit from the q bits of precision. The optimization of the SP-MS decoder is investigated in the asymptotic limit of the code length using density evolution (DE). The study shows that 3-bit SP-MS decoders can achieve the same error-correcting performance as 5-bit OMS decoders. Finite-length simulations confirm the conclusions of the DE analysis for several LDPC codes. Finally, a post-processing method is proposed to mitigate the error floor. Simulation results and hardware complexity are given for several LDPC decoders. In particular, for the 5G LDPC code, it is shown that a fully parallel architecture on FPGA can reach an average decoding throughput of 25 Gbit/s (5 Gbit/s worst case) for all code rates greater than or equal to 1/3 and for a payload of K = 1046 bits, with an SNR degradation limited to 0.5 dB at a FER of 10^-3.
Short Blocklength Polar Codes for Non-Coherent Channels, Probabilistic Shaping, and Dirty-Paper Coding
Presenter: Gerhard Kramer
The talk reviews recent results on polar codes for non-coherent channels, multi-level coding with probabilistic shaping, and dirty-paper coding. The results are based on work by doctoral students at the TUM Institute for Communications Engineering.
Session 2 – Machine learning for code design and decoding algorithms + capacity
Neural Decoding with Optimization of Node Activations
Presenter: Eliya Nachmani [REMOTE]
Co-author: Yair Be’ery
The problem of maximum likelihood decoding with a neural decoder for an error-correcting code is considered. It is shown that the neural decoder can be improved with two novel loss terms on the nodes' activations. The first loss term imposes a sparsity constraint on the nodes' activations, whereas the second loss term mimics the nodes' activations of a teacher decoder with better performance. The proposed method has the same run-time complexity and model size as the neural belief propagation decoder, while improving the decoding performance by up to 1.1 dB on BCH codes.
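A minimal sketch of such a combined objective, with illustrative names and weights rather than the authors' exact formulation: a standard cross-entropy decoding loss, plus an L1 sparsity penalty on the decoder's activations and a distillation term pulling them toward a teacher decoder's activations.

```python
import numpy as np

def decoder_loss(logits, labels, acts, teacher_acts,
                 l_sparse=0.01, l_teach=0.1):
    # Binary cross-entropy between decoder outputs and true bits.
    p = 1.0 / (1.0 + np.exp(-logits))
    bce = -np.mean(labels * np.log(p + 1e-12)
                   + (1 - labels) * np.log(1 - p + 1e-12))
    sparsity = np.mean(np.abs(acts))              # first extra term (L1)
    mimic = np.mean((acts - teacher_acts) ** 2)   # second extra term (teacher)
    return bce + l_sparse * sparsity + l_teach * mimic

logits  = np.array([2.0, -1.0, 0.5])
labels  = np.array([1.0, 0.0, 1.0])
acts    = np.array([0.5, -0.2, 0.0])
teacher = np.array([0.6, -0.1, 0.1])
print(decoder_loss(logits, labels, acts, teacher))
```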
All You Need is Feedback: Communication with Block Attention Feedback Codes
Presenter: Yulin Shao [REMOTE]
Co-authors: Emre Ozfatura, Alberto Perotti, Branislav Popovic, Deniz Gunduz
Deep neural network (DNN)-based channel code designs have recently gained interest as an alternative to conventional coding schemes, particularly for channels where existing codes do not provide satisfactory performance. Coding in the presence of feedback is one such problem, for which promising results have recently been obtained by various DNN-based coding architectures. In this paper, we introduce a novel learning-aided feedback code design, dubbed generalized block attention feedback (GBAF) codes, that achieves order-of-magnitude improvements in block error rate (BLER) compared to existing solutions. More importantly, GBAF codes provide a modular architecture that can be implemented with different neural network architectures, reduce the communication overhead due to switching between the forward and feedback channels thanks to its block structure, and enable much more flexible coding rates than prior schemes.
RELDEC: Reinforcement Learning-Based Decoding of Moderate Length LDPC Codes
Presenter: Joerg Kliewer
Co-authors: Salman Habib, Allison Beemer
We propose RELDEC, a novel approach for sequential decoding of moderate-length low-density parity-check (LDPC) codes. The main idea behind RELDEC is that an optimized decoding policy is obtained via reinforcement learning based on a Markov decision process (MDP). In contrast to our previous work, where an agent learns to schedule only a single check node (CN) within a group (cluster) of CNs per iteration, in this work we train the agent to schedule all CNs in a cluster, and all clusters in every iteration. That is, in each learning step of RELDEC an agent learns to schedule CN clusters sequentially depending on a reward associated with the outcome of scheduling a particular cluster. We also modify the state space representation of the MDP, enabling RELDEC to be suitable for larger block length LDPC codes than those studied in our previous work. Furthermore, to address decoding under varying channel conditions, we propose agile meta-RELDEC (AM-RELDEC), which employs meta-reinforcement learning. The proposed RELDEC scheme significantly outperforms standard flooding and random sequential decoding for a variety of LDPC codes, including codes designed for 5G new radio.
Straggler-resilient coded federated learning
Presenter: Alexandre Graell i Amat
Reed-Muller Codes Achieve Capacity on BMS Channels
Presenter: Henry Pfister
In this talk, I will give an overview of my recent work with Galen Reeves that proves Reed-Muller (RM) codes achieve capacity on binary memoryless symmetric (BMS) channels with respect to bit-error rate. This resolves a long-standing open problem that connects information theory and error-correcting codes. Our approach generalizes some elements of an earlier proof for the binary erasure channel (BEC) but also derives new tools to avoid previous steps that do not generalize. In contrast with the earlier BEC result, the new proof does not rely on hypercontractivity. Instead, it combines a nesting property of RM codes with new information inequalities relating the generalized extrinsic information transfer (GEXIT) function with minimum mean-squared error estimation. A key element in this work is the GEXIT area theorem which was first introduced to analyze the iterative decoding of turbo and low-density parity-check codes.
Session 3 – Coding for media-access control
FASURA: A Scheme for Quasi-Static Unsourced Random Access Channels
Presenter: Jean-François Chamberland
Co-authors: Michail Gkagkos, Krishna Narayanan, and Costas Georghiades
Unsourced random access has recently emerged as a novel wireless paradigm that enables massive device connectivity on the uplink. Along these lines, this work considers a quasi-static Rayleigh fading environment wherein the access point is equipped with multiple receive antennas and every mobile device possesses a single transmit antenna. The design objective is to construct a coding scheme that minimizes the energy-per-bit subject to a maximum probability of error given a fixed message length and a prescribed number of channel uses. The envisioned architecture builds on lessons from past contributions, while concurrently introducing a new approach. In the proposed system, every information message is partitioned into two parts: the first part determines pilot values and spreading sequences, whereas the remaining bits are encoded using a polar code. Likewise, the sent signal contains two distinct sections. The initial portion features pilots and the second portion is composed of coded symbols that are spread and modulated prior to transmission. The receiver consists of three modules: a detection block, which is tasked with recovering the set of active pilot sequences and estimating channel coefficients; a bank of MMSE estimators acting on measurements at the receiver; and polar decoders, each of which seeks to retrieve coded information bits. A successive interference cancellation step is applied to subtract recovered codewords, before the residual signal is fed back to the decoder. Empirical evidence suggests that an appropriate combination of these ideas can outperform state-of-the-art coding techniques in certain regimes.
Coded Compressed Sensing with List Recoverable Codes for Unsourced Random Access
Presenter: Alexey Frolov [REMOTE]
In the talk I will consider a coded compressed sensing approach for unsourced random access and replace the outer tree code proposed by Amalladinne et al. with a list recoverable code capable of correcting t errors. First, I will present a finite-length random coding bound for such codes. Next, two practical constructions of outer codes will be described. The first is a modification of the tree code: it utilizes the same code structure, and the key difference is a decoder capable of correcting up to t errors. The second is based on Reed-Solomon codes and the Guruswami-Sudan list decoding algorithm. The talk will be concluded with numerical results for AWGN, single-antenna quasi-static Rayleigh fading, and MIMO channels, which show that the transition to list recoverable codes correcting t errors improves the performance of the coded compressed sensing scheme compared to the tree-code-based scheme.
Massive uncoordinated access over binary adder channels
Presenter: Alexandre Graell i Amat
Codes for the A-channel with Applications to Unsourced Multiple Access
Presenter: Alexander Fengler
Co-Authors: Alejandro Lancho and Yury Polyanskiy
The A-channel was introduced by Chang and Wolf in 1981 as a model for multiaccess communication on a q-frequency channel without intensity information. The output of the channel is the set of the received frequencies but not how often each frequency was transmitted. In recent years, A-channel codes have been used in conjunction with compressed sensing techniques to construct energy- and spectral-efficient codes for uncoordinated multiple access communication. In this talk we present new finite length random coding bounds for the unsourced A-channel and improved decoding methods that can be applied to a large class of codes including the best known practical A-channel codes. Based on joint work with Alejandro Lancho and Yury Polyanskiy.
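The channel model is easy to state in code. The toy sketch below (with illustrative inputs) shows how the A-channel discards multiplicities and user identities, keeping only the set of transmitted symbols per channel use.

```python
# Toy model of the unsourced A-channel: K users each transmit a q-ary
# symbol per channel use; the receiver observes only the SET of symbols,
# with no multiplicities and no user identities.
def a_channel(transmitted_columns):
    # transmitted_columns[t] = list of symbols sent by all users at time t
    return [set(col) for col in transmitted_columns]

# Two users collide on symbol 3 at t=0; the receiver cannot tell.
out = a_channel([[3, 3, 7], [1, 4, 4]])
print(out)  # [{3, 7}, {1, 4}]
```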
On Unsourced Random Access with MIMO Receivers over Fading Channels
Presenter: Tolga Duman
Co-Authors: Mohammad Javad Ahmadi, Mert Ozates and Mohammad Kazemi
We describe two unsourced random access schemes over fading channels employing multiple antennas at the receiver. Both schemes are based on a slotted structure. The first scheme combines the recently proposed idea of employing multiple stages of orthogonal pilots with power diversity, while the second solution utilizes non-orthogonal pilots detected at the receiver using a generalized orthogonal matching pursuit algorithm. Both schemes employ a suitably designed successive interference cancellation algorithm at the receiver. Numerical results show that both schemes offer excellent performance with reasonable decoding complexity. We also observe that while the second scheme is more favorable at lower active user loads, the first one outperforms the state of the art at higher active user loads by differentiating users through randomly assigned power-interleaver pairs.
Association of Non-Binary code and CCSK modulation
Presenter: Emmanuel Boutillon
This presentation tackles the problem of massive IoT connectivity in wide area networks by defining a frame composed of an outer non-binary (NB) code and a Cyclic Code Shift Keying (CCSK) modulation viewed as an inner code. This new coded modulation scheme can be implemented easily and cost-efficiently. The NB code with CCSK modulation provides several advantages compared to state-of-the-art waveforms. The whole frame can be considered first as a preamble for detection and synchronisation, then as a codeword for error correction. Some results of real-time experiments are given. The second part of the presentation shows that the proposed scheme with truncated CCSK can lead to an efficient single encoder/decoder scheme for a wide range of SNRs.
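As a rough illustration of the inner modulation, the sketch below implements plain CCSK with a randomly drawn ±1 base sequence (an assumption for illustration; the talk's actual sequence and parameters are not given here): a q-ary symbol selects a cyclic shift of the base sequence, and the receiver demodulates by correlating against all shifts.

```python
import numpy as np

q = 64
rng = np.random.default_rng(0)
base = rng.choice([-1.0, 1.0], size=q)   # illustrative +/-1 chip sequence

def ccsk_mod(symbol):
    # A q-ary symbol s is sent as the base sequence cyclically shifted by s.
    return np.roll(base, symbol)

def ccsk_demod(rx):
    # Correlate with every cyclic shift; pick the best match.
    scores = [np.dot(rx, np.roll(base, s)) for s in range(q)]
    return int(np.argmax(scores))

tx = ccsk_mod(5)
noisy = tx + rng.normal(scale=0.5, size=q)
print(ccsk_demod(noisy))  # typically recovers the transmitted symbol 5
```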
Session 4 – (De-)coding on a budget
High Throughput Decoding with GRAND
Presenter: Warren J. Gross
Guessing Random Additive Noise Decoding (GRAND) is a code-agnostic decoding technique for short-length and high-rate channel codes. GRAND tries to guess the channel noise by generating test error patterns repeatedly. In this talk, we describe our recent work on developing variants of GRAND targeted towards various channel types, and the corresponding high-throughput VLSI architectures.
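The guessing principle can be sketched in a few lines. The toy decoder below (an illustration, not one of the presented VLSI architectures) orders binary error patterns by Hamming weight, which is the maximum-likelihood order on a BSC, and stops at the first pattern whose removal yields a valid codeword.

```python
import itertools
import numpy as np

def grand_decode(y, H, max_weight=3):
    # Hard-decision GRAND for a binary linear code with parity-check matrix H:
    # guess error patterns in increasing Hamming weight until the syndrome
    # of the corrected word is zero.
    n = H.shape[1]
    for w in range(max_weight + 1):
        for pos in itertools.combinations(range(n), w):
            e = np.zeros(n, dtype=int)
            e[list(pos)] = 1
            cand = (y + e) % 2
            if not (H @ cand % 2).any():   # noise guess explains the syndrome
                return cand
    return None  # abandon guessing (declare erasure)

# Parity-check matrix of the [7,4] Hamming code
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
c = np.array([1, 1, 0, 0, 1, 1, 0])       # a codeword of this code
y = c.copy(); y[4] ^= 1                   # single bit flip
print(grand_decode(y, H))                 # [1 1 0 0 1 1 0]
```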
High Throughput FEC Decoding under Silicon Implementation Constraints
Presenter: Norbert Wehn
Channel coding is a crucial technology component in any digital baseband processing. Decoder IPs for advanced coding schemes like Turbo, LDPC, and Polar codes are major sources of power consumption and silicon area in baseband Systems-on-Chip and contribute largely to baseband latency and throughput limitations. Current and emerging communication standards require continuous increases in error correction performance and throughput, and lower latencies. Transistor density still follows Moore's law, but the improvement in interconnect delay and power is slowing down considerably. Hence, for use cases with high throughput requirements like B5G, efficient implementation becomes a major bottleneck for the successful application of advanced channel coding from a silicon implementation perspective. In this talk we discuss implementation challenges and solutions for decoders targeting throughputs towards 1 Tb/s, with a focus on LDPC and Polar codes.
High Throughput Turbo Decoding - Ways across the throughput gap
Presenter: Stefan Weithoffer
For next-generation standards, Turbo codes are losing ground to LDPC and Polar codes despite their inherent rate flexibility and recent progress in achieved performance. Indeed, Turbo codes have been seen as being at a disadvantage for high-throughput applications, exhibiting a throughput gap of more than an order of magnitude. This presentation summarizes recent advancements and ways across the throughput gap, moving away from classical Max-Log-MAP decoding.
The poster session will take place in Galerie Rolland in the Main building, the same room that is used for the lunches.
1. Successive-Cancellation Flip Decoding of Polar Codes Under Fixed Channel-Production Rate
Presenter: Ilshat Sagitov
Successive-Cancellation Flip (SCF) decoders of polar codes improve the error-correction performance compared to the pure SC decoder. The main challenge of SCF-based decoders is their variable execution time, with high worst-case and average decoding latency. This characteristic poses a challenge to the design of receivers that have to operate at fixed data rates. By equipping the decoder with a fixed-size input buffer, we propose a multi-threshold mechanism that restrains the delay of the SCF decoder depending on the state of the buffer in order to avoid overflow. We show that the proposed mechanism provides better error-correction performance than a straightforward codeword-dropping mechanism, at the cost of a small increase in complexity.
2. Performance-Complexity-Latency Analysis of Concatenated RS-BCH Codes
Presenter: Alvin Yonathan Sukmadji
The output bit error probabilities of a concatenated coding scheme with a Reed-Solomon outer code and a Bose-Chaudhuri-Hocquenghem inner code are derived in this research. Then, the decoding performance of the concatenated system under constraints on latency and complexity is analyzed. A few examples for fiber-optic applications are discussed.
3. Using Deep Neural Networks to Predict and Improve the Performance of Polar Codes
Presenter: Mathieu Léonardon
Polar codes can theoretically achieve very competitive Frame Error Rates. In practice, their performance may depend on the chosen decoding procedure, as well as other parameters of the communication system they are deployed upon. As a consequence, designing efficient polar codes for a specific context can quickly become challenging. In this paper, we introduce a methodology that consists in training deep neural networks to predict the frame error rate of polar codes based on their frozen bit construction sequence. We introduce an algorithm based on Projected Gradient Descent that leverages the gradient of the neural network function to generate promising frozen bit sequences. We showcase on generated datasets the ability of the proposed methodology to produce codes more efficient than those used to train the neural networks, even when the latter are selected among the most efficient ones.
4. Guessing Random Additive Noise Assisted Decoding
Presenter: Marwan Jalaleddine
Guessing random additive noise decoding (GRAND) is a maximum likelihood (ML) decoder for discrete channels that is useful in applications requiring short, high-rate codes. Rather than relying on knowledge of the structure of the code, GRAND tries to guess the noise impacting the codeword, achieving universal (code-agnostic) decoding. In addition, GRAND-assisted decoding (AGRAND) can be used to reduce the latency of conventional decoders. We propose adding a GRAND stage before a conventional decoder: if GRAND succeeds in decoding the codeword, the message bypasses the conventional decoder stage; if GRAND fails, the conventional decoder is used. We discuss the benefits of using AGRAND with successive cancellation list (SCL) decoding of a CRC-aided polar code [105+11,128]. We show that AGRAND can reduce the average latency per frame of the SCL decoder by 83.7% at Eb/N0 = 5.5 with no degradation in frame error rate (FER) performance. Similar reductions in average latency are also seen using AGRAND with a highly parallelized conventional BM decoder on the BCH code [106,127] and with SC decoding of the polar code [105,128]. These findings open up new avenues for using AGRAND as a latency and power reduction scheme with current state-of-the-art decoders.
5. Iterative Soft-Input Soft-Output Decoding with Ordered Reliability Bits GRAND
Presenter: Carlo Condo
Guessing Random Additive Noise Decoding (GRAND) is a universal decoding algorithm that can be used to perform maximum likelihood decoding. It attempts to find the errors introduced by the channel by generating a sequence of possible error vectors in order of likelihood of occurrence and applying them to the received vector. Ordered reliability bits GRAND (ORBGRAND) integrates soft information received from the channel to refine the error vector sequence. In this work, ORBGRAND is modified to produce a soft output, enabling its use as an iterative soft-input soft-output (SISO) decoder. Three techniques specific to iterative GRAND-based decoding are then proposed to improve the error-correction performance and decrease computational complexity and latency. Using the OFEC code as a case study, the proposed techniques are evaluated, yielding substantial performance gains and complexity reductions of 48% to 85% with respect to the baseline SISO ORBGRAND.
6. Ensemble BP Decoding for Short Block Codes
Presenter: Kira Kraft
Recently, Automorphism Ensemble Decoding (AED) has emerged as an appealing approach for decoding short linear block codes. The idea of AED is to create several permuted instances of an incoming noisy codeword, decode them in parallel, and choose the most probable one by an ML criterion as output, thereby drastically increasing the error-correcting performance of the decoder. While AED was originally designed for Reed-Muller codes, it is applicable to all kinds of codes that possess a non-empty automorphism group. It can be seen as a generalization of the Multiple Bases Belief Propagation (MBBP) algorithm, in which parallel (yet different) instances of a Belief Propagation (BP) decoder are used for decoding. As multiple different instances of BP require multiple different parity-check matrices of the same code, MBBP was restricted to cyclic codes in order to easily generate these matrices. In this work, we present a generalization of MBBP. In contrast to AED, where the decoding algorithm is flexible (BP/SC/SCL/...) but the code is restricted to having a large automorphism group, our approach is restricted to the BP decoder but can be applied to any kind of linear block code, especially those that do not possess a large automorphism group. To evaluate the benefit, we apply our decoding scheme to LTE Turbo codes, a class of codes designed to be as random as possible and which possesses no non-trivial automorphisms. Preliminary results show that our approach outperforms the MAP algorithm for short block lengths.
7. Automorphism-based design of polar-like codes and its decoding solution
Presenter: Charles Pillet
Polar codes, introduced in 2009, were adopted in the 5th generation wireless system (5G) in late 2016. During the standardization process, it was agreed that CRC-aided Successive Cancellation List (CA-SCL) decoding would be adopted as the standard decoding algorithm. However, this algorithm requires comparing the metrics of up to 2L candidate codewords at each decoded bit; to avoid this extra decoding delay, permutation list decoding (pldec) was proposed. Very recently, polar code design for pldec has been gaining momentum. The use of affine automorphisms effectively improves the decoding performance of pldec; the resulting decoder is labeled the automorphism ensemble (AE) decoder. The poster gathers several contributions related to pldec and focuses on the role of automorphisms. In detail, the automorphism group of polar codes, isomorphic to the specific affine group BLTA(S), will be given. Thanks to a proposed design, the feasibility of useful codes under pldec, having both a maximum code distance and a good automorphism group structure, will also be verified. Finally, the decoding schedule of SC permits classifying the automorphisms into equivalence classes, a crucial mathematical aspect for avoiding redundancy in pldec.
8. A Sequence Repetition Node-Based SCL Decoder for 5G Polar Codes: Algorithm and Implementation
Presenter: Yuqing Ren
Due to the low-latency and high-reliability requirements of 5G, low-complexity node-based successive cancellation list (SCL) decoding for polar codes has received considerable attention for use in 5G communications. However, the current node-based SCL decoders lack a more generalized node that can efficiently decode a large number of different constituent codes to optimize the decoding time and hardware utilization further. In this work, we extend a recent generalized node, the sequence repetition (SR) node, to SCL decoding. Moreover, we describe the first VLSI implementation of an SR-List decoder, which is also the first generalized node-based SCL decoder that is fully compliant with 5G-NR polar codes. Finally, by merging SR-List decoding operations, applying optimizations specially for 5G-NR polar codes, and exploring the decoder design space, we show that our SR-List decoder can outperform other state-of-the-art polar decoders in throughput and area efficiency.
9. Finite Blocklength Bounds for the A-channel: Applications to Unsourced Random Access
Presenter: Alejandro Lancho
We will present finite-blocklength achievability bounds and refined asymptotic expansions for the unsourced A-channel. In this multiple-access channel, users noiselessly transmit codewords picked from a common codebook with entries generated from a q-ary alphabet. At each channel use, the receiver observes the set of different transmitted symbols but not their multiplicity. Among others, the A-channel finds a relevant application in unsourced random-access (URA). Indeed, A-channel code constructions have been widely used in the URA literature as outer codes within the coded compressed sensing framework. We will compare our finite blocklength achievability bounds and their refined asymptotic expansions with existing A-channel codes, and we will show how the analytical solution of our bounds provides intuition about how to improve existing A-channel codes.
10. Complexity Reduction and Optimization Techniques for Projection-Aggregation Decoders for Reed-Muller Codes
Presenter: Jiajie Li
Reed-Muller (RM) codes have recently attracted significant attention due to their close relation to capacity-achieving polar codes and their excellent maximum likelihood (ML) decoding performance. The recursive projection-aggregation (RPA) decoder was proposed to decode low-rate, short-length RM codes. The RPA decoder outperforms the SCL decoder for CA-polar codes at various rates and lengths, and it can achieve near-ML decoding performance when extended to a list decoder. The main drawback of RPA decoders is their high computational complexity due to their recursive structure, and research on complexity reduction has been conducted. In this poster, complexity reduction techniques that reduce the average and worst-case complexity are presented. A syndrome-based early stopping criterion is proposed to reduce the average decoding complexity, while a scheduled reduction scheme based on the number of sign changes is proposed to reduce the worst-case complexity. The syndrome-based early stopping check can be performed within a decoding iteration, and an optimal frequency for the syndrome check can be determined by simulations. According to simulation results, the proposed syndrome-based early stopping criterion reduces the average complexity by at least 69% at a frame error rate (FER) of 1e-5. The scheduled reduction scheme is based on the observation that the number of sign changes decreases after the first iteration, which implies that most errors are corrected in the first iteration. Thus, the worst-case complexity can be reduced by removing parts of the projections after the first iteration, and simulation results show that the worst-case complexity can be reduced by more than 50% with negligible performance degradation regardless of the target FER. Furthermore, an optimized pruning method for the collapsed projection-aggregation (CPA) decoder is presented, which transforms the pruning process into a mixed-integer quadratic programming problem.
Sensitivity analyses and FER simulations show that the optimized pruned CPA (PCPA) decoder has better decoding performance than randomly constructed PCPA decoders at the same computational complexity. Lastly, a simplified list decoder for RPA/CPA decoders is proposed, which replaces Reed's decoder with a syndrome check. Since the codeword returned by an RPA/CPA decoder is not guaranteed to be an RM codeword, Reed's decoder is normally used to correct the returned codeword into an RM codeword before the ML selection of the list decoding process is performed. The simplified list decoder instead computes the posterior probability only of returned codewords with an all-zeros syndrome vector, and the codeword with an all-zeros syndrome and the highest posterior probability is selected as the decoded codeword. Compared to the original list decoder, the performance degradation of the simplified list decoder is negligible.
11. Partitioned Guessing Random Additive Noise Decoding
Presenter: Marwan Jalaleddine
Guessing random additive noise decoding (GRAND) is a maximum likelihood (ML) decoder which is useful in applications requiring short and high-rate codes. Rather than relying on knowledge of the structure of the code, GRAND tries to guess the noise impacting the codeword to achieve universal (code-agnostic decoding). Soft GRAND (SGRAND) and ordered reliability bits GRAND (ORBGRAND) are soft-decision variants of GRAND. SGRAND outperforms ORBGRAND since SGRAND generates ML noise patterns while ORBGRAND generates error patterns using integer partitions. Using SGRAND results in a dependency between patterns which introduces additional scheduling complexity making hardware implementations unwieldy. In an effort to improve on the performance of ORBGRAND and reduce the scheduling complexity of SGRAND, we propose partitioned GRAND (PGRAND) which uses the quantized reliability information from the channel to generate the most likely test error patterns. The proposed PGRAND algorithm divides the received signal into several partitions and generates test error patterns for each partition. Monte-Carlo simulations are conducted in an additive white gaussian noise (AWGN) channel to determine a universal sequence of the test error patterns which is only dependent on the order of received channel values. We assess the performance of PGRAND on 5G NR CA-polar code and BCH codes. PGRAND achieves up to a 0.3 dB gain over ORBGRAND at a target frame error rate (FER) of 10^(-5) with comparable average queries per frame. Additionally, PGRAND exhibits lower scheduling complexity compared to SGRAND. This makes PGRAND a desirable candidate as a near maximum likelihood code-agnostic decoder for any short, high-rate code.