This fixes a bug in the calculation of the frame size used by the tagged decoder that affects the CC decoder in terminated mode.
Currently, `tagged_decoder` calls the `set_frame_size()` method of the decoder with a size computed as the input size multiplied by the rate. In the `cc_decoder`, the `frame_size` parameter of `set_frame_size()` is used to calculate and set `d_veclen`. The value of `d_veclen` determines how many forward steps (butterflies) of the Viterbi algorithm are run. The number of input items read in `d_veclen` forward steps is `d_veclen * d_rate` (note that `d_rate` is actually the inverse of the code rate; typically `d_rate == 2`). Therefore, the input buffer should be at least this large, and in most cases exactly this large, because we want the `cc_decoder` to look at all the provided input.
The `d_veclen` variable is related to `d_frame_size`, which indicates how many output bits the decoder produces. `d_frame_size` is equal to the parameter of `set_frame_size()` in all cases except terminated mode with byte padding, in which case the byte padding is subtracted when computing `d_frame_size`. The value of `d_veclen` is equal to `d_frame_size + d_k - 1` in the terminated and streaming modes, and to `d_frame_size` in truncated mode.
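To make this bookkeeping concrete, here is a minimal Python sketch of the relationship described above. The function and mode names are illustrative, not the actual GNU Radio API; a typical constraint length `d_k == 7` and zero byte padding are assumed.

```python
K = 7        # constraint length d_k (typical value, assumed here)
PADDING = 0  # byte padding; nonzero only in terminated mode with byte padding

def compute_veclen(mode, frame_size, k=K, padding=PADDING):
    """Sketch of how cc_decoder's set_frame_size() derives d_veclen."""
    # d_frame_size equals the parameter, except in terminated mode with
    # byte padding, where the padding is subtracted.
    d_frame_size = frame_size - padding if mode == "terminated" else frame_size
    if mode in ("terminated", "streaming"):
        return d_frame_size + k - 1  # extra trellis steps to flush the tail
    return d_frame_size              # truncated mode

print(compute_veclen("terminated", 100))  # 106
print(compute_veclen("truncated", 100))   # 100
```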
The problem with the current code is that `tagged_decoder` calls `set_frame_size()` with `ninput_items * rate`. In the streaming and terminated modes this makes `d_veclen` too large, so the forward steps (butterflies) of the Viterbi algorithm read past the end of the input buffer. This is undefined behaviour and can cause some bit errors at the end of the decoded codeword (the effect of reading past the end of the buffer is essentially extending the codeword with 12 symbols of garbage).
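The over-read can be quantified with a quick back-of-the-envelope calculation. The numbers below are illustrative; the typical defaults `d_k == 7` and `d_rate == 2` are assumed.

```python
RATE_INV = 2  # d_rate, the inverse code rate
K = 7         # constraint length d_k

ninput = 200                     # symbols in the tagged input frame (example)
frame_size = ninput // RATE_INV  # what tagged_decoder currently passes: 100
d_veclen = frame_size + K - 1    # terminated mode: 106
consumed = d_veclen * RATE_INV   # symbols the butterflies actually read: 212
overread = consumed - ninput     # symbols read past the end of the buffer
print(overread)  # 12
```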
This problem does not happen in the `async_decoder`, because it has some code that computes a "diff", which works out to `d_k - 1 + d_padding * rate()` in terminated mode and 0 in all other cases. The diff is subtracted from the frame size. This subtraction gives the correct `d_veclen` calculation, so that the butterflies do not read past the end of the input.
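The corrected calculation can be sketched as follows (function names are hypothetical, mirroring the `async_decoder` "diff"; `rate()` is taken to return the inverse rate, per the note above, and padding is left at zero for simplicity):

```python
K = 7         # constraint length d_k
RATE_INV = 2  # inverse code rate

def diff(mode, k=K, padding=0, rate_inv=RATE_INV):
    # works out to d_k - 1 + d_padding * rate() in terminated mode, else 0
    return (k - 1) + padding * rate_inv if mode == "terminated" else 0

def frame_size_arg(ninput, mode):
    # frame size that should be passed to set_frame_size()
    return ninput // RATE_INV - diff(mode)

ninput = 200
fs = frame_size_arg(ninput, "terminated")  # 100 - 6 = 94
d_veclen = fs + K - 1                      # back to 100
assert d_veclen * RATE_INV == ninput       # butterflies read exactly the input
```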
This commit fixes the `tagged_decoder` when used with the terminated CC decoder by adding the same code as in the `async_decoder` to calculate and subtract this "diff" value.
1. This commit changes the size of the output packets of `tagged_decoder` when used with a terminated CC decoder. Formerly, the output packets were 6 bits longer, as if the tail bits were included in the output. However, these 6 extra bits were just an artifact of looking 12 symbols past the end of the input. The `cc_decoder_impl::chainback_viterbi()` function is not prepared to extract tail bits in the terminated case: the chainback extracts bits "as they exit the shift register on the right", so tail bits cannot be extracted, because they "never exit the shift register". (Here I have in mind the usual depiction of a convolutional encoder, where input bits are shifted from left to right into a shift register.) The `CC_TRUNCATED` case in `cc_decoder_impl::generic_work()` has some additional code to extract the bits which haven't "exited the shift register" yet, because in that case it is mandatory to extract them (they are the last 6 bits of the message). I think there is no good use case for attempting to extract the tail bits in the terminated case, so this modification seems a good way forward. The new behaviour also matches that of `async_decoder`, which does not output tail bits.

   However, this breaks tag propagation of the `tagged_decoder` when used with the terminated CC decoder, because the block calls `set_relative_rate()` with the nominal rate (1/2), which is slightly different from the true rate once we take into account that the tail bits are dropped. This could be fixed by propagating tags manually in the `tagged_decoder`.
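The rate mismatch is easy to see numerically. The values below are hypothetical (`d_k == 7`, a rate-1/2 code, 100-bit messages):

```python
K = 7
frame_bits = 100                     # decoded message bits (example value)
in_items = (frame_bits + K - 1) * 2  # input symbols of a terminated frame: 212
nominal_rate = 0.5                   # what set_relative_rate() is called with
true_rate = frame_bits / in_items    # 100/212, since tail bits are dropped
print(true_rate)
```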
The streaming mode is still broken, both when used with the `async_decoder` and with the `tagged_decoder`. Since in the streaming case the diff is 0 but `d_veclen` includes the `d_k - 1` term, the streaming mode reads past the end of the input buffer. It isn't really possible to run the streaming mode properly with the `async_decoder` or `tagged_decoder` (it should be run with `decoder_impl.cc` instead), because the streaming mode requires history (some overlapping input items between consecutive calls), which is not available with the `async_decoder` or `tagged_decoder`.
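For the streaming case the shortfall is the same `d_k - 1` trellis steps regardless of frame size; a short sketch (with the assumed typical parameters `d_k == 7`, `d_rate == 2`) shows the over-read, which is exactly the history the block would need but does not have:

```python
K = 7
RATE_INV = 2

def streaming_overread(ninput, k=K, rate_inv=RATE_INV):
    # diff is 0 in streaming mode, but d_veclen still includes the k - 1 term
    frame_size = ninput // rate_inv
    d_veclen = frame_size + k - 1
    return d_veclen * rate_inv - ninput  # symbols read past the buffer

# constant over-read: this is the overlap (history) streaming mode requires
print(streaming_overread(200))   # 12
print(streaming_overread(1000))  # 12
```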
The code copied from `async_decoder` has a "watch out" comment stating that it might be over-specializing for the CC decoder in terminated mode. It is true that this code seems specially crafted to cover this very specific case, but as far as I can tell it does not negatively affect any other case (taking into account the other modes of the CC decoder and the other FEC decoders).
I'm especially interested in feedback regarding point 1 above. If we agree that this is a good way forward, I can implement the manual tag propagation. If instead we don't want to change the output size and prefer to keep the 6 tail bits in the output, a different approach would be needed to fix the bug.
Which blocks/areas does this affect?
Affected blocks in gr-fec: Tagged Decoder and CC Decoder.
I could provide an ad-hoc test flowgraph that shows the problem if needed.