Practical Error Correction Design For Engineers, Revised Second Edition, 1991
Page Count: 495
PRACTICAL ERROR CORRECTION
DESIGN FOR ENGINEERS
REVISED SECOND EDITION

[Cover illustration: error-correction shift register circuit with zero-detect logic, data input d1, and corrected d1 output]

Neal Glover and Trent Dudley

PRACTICAL ERROR CORRECTION
DESIGN FOR ENGINEERS
Neal Glover and Trent Dudley
CIRRUS LOGIC - COLORADO
The study of error-correcting codes is now more than forty years old. There are
several excellent texts on the subject, but they were written mainly by coding theorists
and are based on a rigorous mathematical approach. This book is written from a more
intuitive, practical viewpoint. It is intended for practicing engineers who must specify,
architect, and design error-correcting code hardware and software. It is an outgrowth
of a series of seminars presented during 1981 and 1982 on practical error-correction
design.

An engineer who must design an error-control system to meet data recoverability,
data accuracy, and performance goals must become familiar with the characteristics and
capabilities of different types of EDAC codes as well as their implementation alternatives,
including tradeoffs between hardware and software complexity, speed/space/cost, etc.
The goal of this book is to provide this information in a concise manner from a practical
engineering viewpoint. Numerous examples are used throughout to develop familiarity
and confidence in the methods presented. Most proofs and complex derivations have
been omitted; these may be found in theoretical texts on error correction coding.

CIRRUS LOGIC - COLORADO
INTERLOCKEN BUSINESS PARK
100 Technology Drive, Suite 300
Broomfield, Colorado 80021
Telephone (303) 466-5228
FAX (303) 466-5482

If you wish to receive updates to this book, please copy this form, complete it and
send it to the above address.

Please add my name to your permanent mailing list for
updates to the book:

PRACTICAL ERROR CORRECTION DESIGN

FOR ENGINEERS (Revised Second Edition)
Name ______________________________________

Title ______________________________________

Organization _______________________________

Address ____________________________________

Phone ______________________________________

Other Areas of Interest ____________________


PRACTICAL ERROR CORRECTION
DESIGN FOR ENGINEERS

REVISED SECOND EDITION

Neal Glover

and

Trent Dudley

Published By:
Cirrus Logic - Colorado
Broomfield, Colorado 80020
(303) 466-5228

Cirrus Logic believes the information contained in this book to be accurate. However,
neither Cirrus Logic nor its authors guarantee the accuracy or completeness of any
information published herein, and neither Cirrus Logic nor its authors shall be responsible
for any errors, omissions, or damages arising out of use of this information. Cirrus
Logic does not assume any responsibility or liability arising out of the application or
use of any software, circuit, method, technique, or algorithm described herein, nor does
the purchase of this book convey a license under any patent rights, copyrights, trademark
rights, or any other of the intellectual property or trade secret rights of Cirrus
Logic or third parties.
This work is published with the understanding that Cirrus Logic and its authors are
supplying information but are not attempting to render engineering or other professional
services. If such services are required, the assistance of an appropriate professional
should be sought.

Second Edition Revision 1.1
Copyright © 1991 by CIRRUS LOGIC, INC.
Revised Second Edition COPYRIGHT © 1991. Second Edition COPYRIGHT © 1988. First
Edition COPYRIGHT © 1982. ALL RIGHTS RESERVED.
No part of this book may be reproduced in any form or by any means, electronic or
mechanical, including photocopying, recording, or by any information storage and
retrieval system, without the prior written permission of DATA SYSTEMS TECHNOLOGY,
CORP. or CIRRUS LOGIC, INC.
Published by:

CIRRUS LOGIC - COLORADO
INTERLOCKEN BUSINESS PARK
100 Technology Drive, Suite 300
Broomfield, Colorado 80021
Phone (303) 466-5228 FAX (303) 466-5482
Second Printing 1990.
ISBN #0-927239-00-0

To my children,
Rhonda, Karen, Sean, and Robert
Neal Glover

To the memory of my parents,
Robert and Constance
Trent Dudley

PREFACE
The study of error-correcting codes is now more than forty years old. There are
several excellent texts on the subject, but they were written mainly by coding theorists
and are based on a rigorous mathematical approach. This book is written from a more
intuitive, practical viewpoint. It is intended for practicing engineers who must specify,
architect, and design error-correcting code hardware and software. It is an outgrowth
of a series of seminars presented during 1981 and 1982 on practical error-correction
design.
An engineer who must design an error-control system to meet data recoverability,
data accuracy, and performance goals must become familiar with the characteristics and
capabilities of different types of EDAC codes as well as their implementation alternatives,
including tradeoffs between hardware and software complexity, speed/space/cost,
etc. Our goal is to provide this information in a concise manner from a practical
engineering viewpoint. Numerous examples are used throughout to develop familiarity
and confidence in the methods presented. Most proofs and complex derivations have
been omitted; these may be found in theoretical texts on error correction coding.

We would like to thank our friends for their assistance and advice. The engineers
attending DST's seminars also deserve thanks for their suggestions.

Neal Glover
Trent Dudley
Broomfield, Colorado
August 1988

ABOUT CIRRUS LOGIC - COLORADO
Cirrus Logic - Colorado was originally founded in 1979 as Data Systems Technology
(DST) and was sold to Cirrus Logic, Inc., of Milpitas, California, on January 18, 1990.
Cirrus Logic - Colorado provides error detection and correction (EDAC) products and
services to the electronics industries. We specialize in the practical implementation of
EDAC, recording, and data compression codes to enhance the reliability and efficiency of
data storage and transmission in computer and communications systems, and all aspects
of error tolerance, including framing, synchronization, data formats, and error management.
Cirrus Logic - Colorado also develops innovative VLSI products that perform
complex peripheral control functions in high-performance personal computers, workstations and other office automation products. The company develops advanced standard
and semi-standard VLSI controllers for data communications, graphics and mass storage.
Cirrus Logic - Colorado was a pioneer in the development and implementation of
computer-generated codes to improve data accuracy. These codes have become widely
used in magnetic disk systems over the past few years and are now de facto standards
for 5¼-inch Winchester drives. Cirrus Logic - Colorado developed the first low-cost,
high-performance Reed-Solomon code integrated circuits; the codes implemented therein
have become worldwide standards for the optical storage industry. EDAC codes produced
by Cirrus Logic - Colorado have become so associated with high data integrity that
many users include them in their lists of requirements when selecting storage subsystems.
Cirrus Logic - Colorado licenses EDAC software and discrete and integrated circuit
designs for various EDAC codes, offers books and technical reports on EDAC and recording codes, and conducts seminars on error tolerance and data integrity as well as
EDAC, recording, and data compression codes.
PRODUCTS

• Error tolerant controller designs for magnetic and optical storage.

• Turnkey integrated circuit development.

• Low-cost, high-performance EDAC integrated circuit designs.

• Discrete and integrated circuit designs for high-performance Reed-Solomon
  codes, product codes, and computer-generated codes.

• Universal encoder/decoder designs for Reed-Solomon codes including
  bit-serial, time slice, and function sharing designs.

• Multiple-burst EDAC designs for high-end storage devices with high-speed
  parallel interfaces, supporting record lengths beyond 100,000 bytes.

• EDAC designs supporting QIC tape formats.

• Software written for a number of processors to support integrated
  circuits implementing Cirrus Logic - Colorado's EDAC technology.

• Practical Error Correction Design for Engineers, a book on EDAC
  written especially for engineers.

• Polynomials developed by Cirrus Logic - Colorado for use in storage
  products.
CONSULTING SERVICES

Consulting services are offered in the following areas:

• Semiconductor memories and large cache memories

• Magnetic disk devices

• Magnetic tape devices

• Optical storage devices using read-only, write-once, and erasable media

• Smart cards

• Communications

Consulting services offered include:

• Code selection

• Design of discrete hardware and integrated circuits

• Design of software

• Advice in the selection of synchronization, header, and defect
  management strategies

• Complex MTBF computations

• Analysis of existing codes and/or designs

• Establishing EDAC requirements from defect data

• Assistance in system integration of integrated circuits implementing
  Cirrus Logic's EDAC technology

PROLOGUE
THE COMING REVOLUTION
IN ERROR CORRECTION TECHNOLOGY
By: Neal Glover
Presented at ENDL's 1988 Disk/Fest Conference

INTRODUCTION
The changes that are occurring today in error detection and correction, error
tolerance, and failure tolerance are indeed revolutionary. Two major factors are driving
the revolution: need and capability. The need arises from more stringent error and
failure tolerance requirements due to changes in capacity, through-put, and storage
technology. The capability is developing due to continuing increases in VLSI density
and decreases in VLSI cost, along with more sophisticated error-correction techniques.
This prologue discusses the changes in requirements, technology, and techniques that are
presently occurring and those that are expected to occur over the next few years.
Some features of today's error-tolerant systems would have been hard to imagine a
few years ago.
Some optical storage systems now available are so error tolerant that user data is
correctly recovered even if there exists a defect situation so gross that the sector
mark, header and sync mark areas of a sector are totally obliterated along with dozens
of data bytes.
Magnetic disk drive array systems under development today are so tolerant to
errors and failures that simultaneous head crashes on two magnetic disk drives would
neither take the system down nor cause any loss of data. Some of these systems will
also be able to detect and correct many errors that today go undetected, such as transient errors in unprotected data paths and buffers and even software errors that result
in the transfer of the wrong sector. Some magnetic disk drive array systems specify
mean time between data loss (MTBDL) in the hundreds of thousands of hours.
The contrast with prior-generation computer systems is stark. Before entering development I spent some time on a team maintaining a large computer at a plant in California that developed nuclear reactors. I will never forget an occasion when the head
of computer operations pounded his fist on a desk and firmly stated that if we saw a
mushroom cloud over Vallecito it would be the fault of our computer. The mainframe's
core memory was prone to intermittent errors. The only checking in the entire computer was parity on tape. Punch card decks were read to tape twice and compared.


By the mid-seventies, the computer industry had come a long way in improving
data integrity. I had become an advisory engineer in storage-subsystem development,
and in 1975 I was again to encounter a very unhappy operations manager when a microcode bug, which I must claim responsibility for, intermittently caused undetected erroneous data to be transferred in a computer system at an automobile manufacturing plant.
Needless to say, the consequences were disastrous. This experience taught me the importance of exhaustive firmware verification testing and has influenced my desire to
incorporate data-integrity features in Cirrus Logic's designs that are intended to detect
and in some cases even correct for firmware errors as well as hardware errors.
Changes in hardware and software data-integrity protection methods are occurring
today at a truly revolutionary rate and soon the weaknesses we knew of in the past and
those that we live with today will be history forever.

THE CHANGING REQUIREMENTS
Requirements for error and failure tolerance increase with capacity, through-put,
and changing storage technology. Over the years, many storage systems have
specified their non-recoverable read error rate as 1.E-12 events per bit. In many cases
this is no longer acceptable. As more sophisticated applications require ever faster
access to ever larger amounts of information, system integrators will demand that
storage system manufacturers meet much higher data-integrity standards.
As an example of how capacity influences error tolerance requirements, consider a
hypothetical write-once optical storage device employing removable 5 gigabyte cartridges.
Twenty-five such cartridges would hold 1.E+12 bits, so a non-recoverable read
error rate of 1.E-12 would imply the existence of a non-recoverable read error on about
one in twenty-five cartridges. Is this acceptable? Would one non-recoverable read
error in every 250 platters be acceptable?
As an example of how through-put influences error tolerance requirements, consider
a magnetic disk array subsystem which is designed to transfer data simultaneously
from all drives and has no redundant drives. The through-put of ten 10-megabit-per-second
magnetic disk drives operating with a ten percent read duty cycle would be
8.64E+11 bits per day. A 1.E-12 non-recoverable read error rate would imply one
non-recoverable read error every eleven days. Is this acceptable? Would one non-recoverable
read error per year be acceptable?
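The figures quoted in these two examples can be reproduced with a few lines of arithmetic (an illustrative sketch; the numbers are the ones stated above):

```python
# Capacity example: twenty-five removable 5-gigabyte cartridges.
total_bits = 25 * 5e9 * 8               # 1.E+12 bits across the set
expected_errors = total_bits * 1e-12    # at a 1.E-12 non-recoverable error rate
# expected_errors is about 1.0: roughly one error somewhere in 25 cartridges

# Through-put example: ten 10-megabit-per-second drives, 10% read duty cycle.
bits_per_day = 10 * 10e6 * 0.10 * 86400  # 8.64E+11 bits read per day
```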
For new storage technologies, it is often not practical to achieve the low media
defect event rates which we have been accustomed to handling in magnetic storage.
New techniques have been and must continue to be developed and implemented to
accommodate higher defect rates and different defect characteristics.


THE CHANGING TECHNOLOGY
VLSI density continues to increase, allowing us to incorporate logic on a single integrated circuit today that a few years ago would have required several separate boards.
This allows us to implement very complex data-integrity functions within a single IC.
Cirrus Logic's low-cost, high-performance, Reed-Solomon code IC's for optical storage
devices are a good example. As VLSI densities increase, such functions will occupy a
small fraction of the silicon area of a multi-function IC. The ability to place very
complex functions on a single IC and further to integrate multiple complex functions on
a single IC opens the door for greater data integrity. Our ability to achieve greater
data integrity at reasonable cost is clearly one of the forces behind the revolution in
error and failure tolerant technology.
Even with the development of cheaper, higher density VLSI technology, it is often
more economical to split the implementation of high-performance EDAC systems between
hardware and software. Using advanced software algorithms and buffer management
techniques, nearly "on-the-fly" correction performance can be achieved at lower cost
than using an all-hardware approach.

CHANGES IN ERROR CORRECTION
For single-burst correction, Cirrus Logic - Colorado still recommends computer-generated
codes. Most new designs employing computer-generated codes are using
binary polynomials of degree 48, 56, and 64. In many cases, implementations of the
higher degree polynomials include hardware to assist in performing on-the-fly correction.
Economic and technical factors are driving the industry to accommodate higher
defect rates to which single-burst error-correction codes are not suited. Consequently,
Reed-Solomon codes, a class of powerful codes which allow efficient correction of
multiple bursts, are currently being designed into a wide variety of storage products
including magnetic tape, magnetic disk, and optical disk. Reed-Solomon codes were
discovered more than twenty-five years ago but only recently have improved encoding
and decoding algorithms, along with decreased VLSI costs, made them economical to
implement. Using software decoding techniques running on standard processors, Cirrus
Logic - Colorado now routinely achieves correction times for Reed-Solomon codes that
were difficult to achieve with bit-slice designs just a few years ago.
IBM has announced a new version of its 3380 magnetic disk drive that employs
multiple-burst error detection and correction, using Reed-Solomon codes, to achieve
track densities significantly higher than realizable with previous technology. Single-burst
error correction can handle modest defect densities, but defect densities increase
exponentially with track density. On-the-fly, multiple-burst error correction and
error-tolerant synchronization are required to handle these higher defect densities. On earlier
models of the 3380, IBM corrected a single burst in a record of up to several thousand
bytes. Using IBM's 3380K error-correction code, under the right circumstances it would
be possible to correct hundreds of bursts in a record. A unique feature of the 3380K
code is that it can be implemented to perform on-the-fly correction with a data delay
that is roughly 100 bytes.


The impact of this IBM announcement, coupled with the general push toward higher track densities, the success of high-performance error detection and correction on
optical storage devices, and the availability of low-cost, high-performance EDAC IC's,
will stimulate the use of high-performance EDAC codes on a wide range of magnetic
disk products. Cirrus Logic - Colorado itself is currently implementing double-burst
correcting, Reed-Solomon codes on a wide range of magnetic disk products, ranging
from low-end designs which process one bit per clock edge to high-end designs which
process sixteen bits per clock edge.

CHANGES IN ERROR DETECTION
When an error goes undetected, erroneous data is transferred to the user as if it
were error free. The transfer of undetected erroneous data can be one of the most
catastrophic failures of a data storage system. Some causes of undetected erroneous
data transfer are listed below.
• Miscorrection by an error-correction code.
• Misdetection by an error-detection or error-correction code.
• Synchronization failure in an implementation without synchronization
framing error protection.
• Intermittent failure in an unprotected data path on write or read.
• Intermittent failure in an unprotected RAM buffer on write or read.
• A software error resulting in the transfer of the wrong sector.
• Failed hardware, such as a failed error latch that never flags an error.
It is important to understand that no error-correction code is perfect; all are
subject to miscorrection when an error event occurs that exceeds the code's guarantees.
However, it is also important to understand that the miscorrection probability for a
code can be reduced to any arbitrarily low level simply by adding enough redundancy.
As VLSI costs go down, more redundancy is being added to error-detection and
error-correction codes to achieve greater detectability of error events exceeding code
guarantees. New single-burst error-correction code designs use polynomials of degree 48, 56,
and 64 to accomplish the same correctability achieved with degree 32 codes several
years ago, but with significantly improved detectability. If correctability is kept the
same, detectability is improved more than nine orders of magnitude in moving from a
degree 32 code to a degree 64 code.
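A rough check of that figure: if the misdetection probability for an error event beyond the code's guarantees is approximated as 2^-r for r bits of redundancy (a common rule of thumb, assumed here rather than derived), then:

```python
import math

# Approximate probability of missing an error event beyond the guarantees
# as 2**-r for r redundancy bits (a rule-of-thumb assumption, for illustration).
p32 = 2.0 ** -32                     # degree 32 code
p64 = 2.0 ** -64                     # degree 64 code
improvement = math.log10(p32 / p64)  # orders of magnitude gained
# improvement is about 9.63 -- "more than nine orders of magnitude"
```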

Error-detection codes are not perfect either; they are subject to misdetection.
Like miscorrection, misdetection can be reduced to any arbitrarily low level by adding
enough redundancy. Unfortunately, the industry has not, in general, increased the level
of detectability of implemented error-detection codes significantly in the last twenty-five
years. Two degree 16 polynomials, CRC-16 and CRC-CCITT, have been in wide use
for many years. For many storage device applications, there are degree 16 polynomials
with superior detection capability, and moreover, the requirements of many applications


would be better met by error-detection polynomials of degree 32 or greater.
In the last few years, the industry has been doing a better job of avoiding pattern
sensitivities of error-detection and error-correction codes. Cirrus Logic - Colorado
avoids using the Fire code because of its pattern sensitivity, and we use 32-bit auxiliary
error detection codes in conjunction with our Reed-Solomon codes in order to overcome
their interleave pattern sensitivity.

Auxiliary error-detection codes that are used in conjunction with ECC codes to enhance detectability have special requirements. The error-detection code check cannot
be made until after correction is complete. It is undesirable to run corrected data
through error-detection hardware after performing correction due to the delay involved.
It is also not feasible to perform the error-detection code check as data is transferred
to the host after correction, since some standard interfaces have no provision for a
device to flag an uncorrectable sector after the transfer of data has been completed.
To meet these requirements, some error-detection codes developed over the last few
years are specially constructed so that their residues can be adjusted as correction
occurs. When correction is complete, the residue should have been adjusted to zero.
Cirrus Logic - Colorado has been using such error-detection codes since 1982, and such
a code is included within Cirrus Logic - Colorado Reed-Solomon code IC's for optical
storage. IBM's 3380K also uses such an auxiliary error-detection code.
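The adjustment relies on the linearity of polynomial checks over GF(2): the residue of the corrected data equals the residue computed while reading XORed with the residue of the correction pattern alone, so the data need not be re-read. A minimal sketch (the degree-32 polynomial and the data are arbitrary illustrative choices, not the codes described above):

```python
CRC_POLY = 0x104C11DB7  # degree-32 generator (plain remainder, no init/xorout)

def crc_remainder(data: bytes) -> int:
    """Remainder of data(x) * x^32 mod CRC_POLY, computed bit by bit."""
    rem = 0
    for byte in data:
        rem ^= byte << 24
        for _ in range(8):
            rem <<= 1
            if rem & (1 << 32):
                rem ^= CRC_POLY
    return rem & 0xFFFFFFFF

def adjust_residue(rem: int, delta: int, offset: int, length: int) -> int:
    """Adjust the residue when the byte at `offset` is XORed with `delta`."""
    pattern = bytes(offset) + bytes([delta]) + bytes(length - 1 - offset)
    return rem ^ crc_remainder(pattern)  # linearity: crc(a ^ b) = crc(a) ^ crc(b)

bad = bytearray(b"sector data with an errox")
good = bytes(b"sector data with an error")
rem = crc_remainder(bad)            # residue accumulated while reading raw data
delta = bad[-1] ^ good[-1]          # correction pattern chosen by the decoder
assert adjust_residue(rem, delta, len(bad) - 1, len(bad)) == crc_remainder(good)
```

When the appended check redundancy is included in the division, the fully adjusted residue is zero, matching the behavior described above.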
As the requirements for data integrity have increased, Cirrus Logic - Colorado has
tightened its recommendations accordingly. One of the areas needing more attention in
the industry is synchronization framing error protection. To accomplish this protection,
Cirrus Logic - Colorado now recommends either the initialization of EDAC shift registers to a specially selected pattern or the inversion of a specially selected set of EDAC
redundancy bits.
The magnetic disk drive array segment of the industry is making significant gains
in detectability. Some manufacturers are adding two redundant drives to strings of ten
data drives in order to handle the simultaneous failure of any two drives without losing
data. The mean time between data loss (MTBDL) for such a system computed from the
MTBF for individual drives may be in the millions of hours. In order for these systems
to meet such a high MTBDL, all sources of errors and transient failures that could
dominate and limit MTBDL must be identified, and means for detection and correction of
such errors and failures must be developed. For these systems, Cirrus Logic - Colorado
recommends that a four-byte error-detection code be appended and checked at the host
adapter. We also recommend that the logical block number and logical drive number be
included in this check. This allows the detection with high probability of a wide variety of errors and transient failures, including the transfer of a wrong sector or transfer
of a sector from the wrong drive.
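The effect of including addressing information in the check can be sketched as follows (an ordinary CRC-32 and a hypothetical record layout are used purely for illustration; they are not the codes the array manufacturers employ):

```python
import struct
import zlib

def host_adapter_check(data: bytes, block: int, drive: int) -> int:
    """Four-byte check covering the sector data plus its logical addressing."""
    return zlib.crc32(data + struct.pack(">IH", block, drive)) & 0xFFFFFFFF

sector = bytes(512)
tag = host_adapter_check(sector, block=1234, drive=3)

# Identical data arriving from the wrong block or the wrong drive fails the check:
assert host_adapter_check(sector, block=1234, drive=3) == tag
assert host_adapter_check(sector, block=1235, drive=3) != tag
assert host_adapter_check(sector, block=1234, drive=2) != tag
```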

CHANGES IN TRACK-FORMAT ERROR TOLERANCE
In many of today's single-burst-correcting EDAC designs, tolerance to errors is
limited by the ability to handle errors in the track format rather than by the capability
of the data-field EDAC code. In upgrading such designs, it is pointless to change from
single-burst to multiple-burst error correction without also improving track-format error
tolerance. In the future, all magnetic disk products will use error-tolerant synChronization and header strategies.


The optical storage industry has already proved the feasibility of handling error
rates as high as 1.E-4 through track-format error tolerance as well as powerful
data-field EDAC codes. Optical track-format error tolerance has been achieved using multiple
headers, error-tolerant sync marks, and periodic resynchronization within data fields.
Some systems now available are so error tolerant that user data is correctly recovered
even if there exists a defect situation so gross that the sector mark, header, and sync
mark areas of a sector are totally obliterated along with dozens of data bytes.

CHANGES IN DEFECT MANAGEMENT
As track densities increase in magnetic recording, and as erasable optical technology becomes more common, many companies will implement defect skipping to handle
higher defect densities without significantly affecting performance. This technique is
not applicable to write-once optical applications, where sector retirement and reassignment will be used. Such techniques also work well within dynamic defect management
strategies. Combining the two will allow the full power of the EDAC code to be used
for margin against new defects. Dynamic defect management will become more common,
especially for write-once and erasable optical technologies subject to relatively high new
defect rates and defect growth.
As more complex and intelligent device interfaces and controllers are implemented,
more responsibility for defect management will be shifted from the host to the device
controller.

CHANGES IN SELF-CHECKING
As data integrity requirements increase, it becomes very important to detect transient hardware failures. New designs for component IC's for controller implementations
are carrying parity through the data paths of the part when possible, rather than just
checking and regenerating parity. Cirrus Logic - Colorado sees this as a step forward,
but we also look beyond, to the day when all data paths are protected by CRC as well.
It is especially important to detect transient failures in EDAC hardware. Some
companies have implemented parity-predict circuitry to continuously monitor their EDAC
shift registers for proper operation.

When possible, Cirrus Logic - Colorado has incorporated circuitry to divide codewords on write by a factor of the code generator polynomial and check for zero remainder. This function is performed as close to the recording head as possible.
Cirrus Logic - Colorado's 8520 IC uses dynamic cells for the major EDAC shift
registers. To detect transient failures in the shift registers themselves, we incorporated
a feature whereby the parity of all bits going into a shift register is compared with the
parity of all bits coming out of the shift register.

CHANGES IN VERIFICATION AND TESTING
The traditional diagnostic technique for storage-device EDAC circuitry uses write
long and read long. For write-once optical media, this technique has two problems.
Since these are high error rate devices, real errors may be encountered along with
simulated errors. Also, each write long operation uses up write-once media. Cirrus
Logic - Colorado incorporates a special diagnostic mode in its EDAC IC's that allows
the EDAC hardware to be tested without writing to or reading from the media.
The introduction of complex, high-performance hardware and software algorithms
for error correction and track-format error tolerance introduces new verification and
testing challenges. Cirrus Logic - Colorado verifies its error-correction software for
optical storage devices against millions of test cases. To verify the track-format error
tolerance of optical storage devices, Cirrus Logic - Colorado recommends a track format
simulator that allows all forms of errors to be simulated, including slipped PLL cycles.
Cirrus Logic - Colorado plans to market such a track simulator in the future. Cirrus
Logic - Colorado also recommends programmable buggers to allow all forms of errors to
be simulated during the performance of a wide range of operational tasks on real devices.
CHALLENGES FOR THE FUTURE

Many of the factors shaping the future of error correction and error tolerance
have already been discussed. One of the most significant will be carry-through error
detection that will be generated and checked for each sector at the host adapter. The
redundancy for this overall check will include the logical block number and the logical
drive number and will cover the entire path from the host adapter to the media and
back. A logical next step will be for hosts to provide an option for carrying all or
part of the overall check code redundancy through host memory when data is being
moved from one device to another. Looking further into the future, we may also see
the redundancy for the overall check maintained in host memory for those sectors that
are to be updated. In this case, an updatable error-detection code will be used and the
error-detection redundancy will be adjusted for each change made to the contents of
the sector.
An area that needs more attention is verification that we will be able to properly
read back all the data that we write. To avoid adversely impacting performance, we
must be able to accomplish this without following each write with a verify read. At the
closest possible point to the head, we need to verify that the written user data
and associated redundancy constitute a valid codeword. A good forward step in this
direction would be to decode the write-encoded RLL bits back to data bits and either
divide this data stream by the code generator polynomial or compare it to the write
data stream going into the encoder.
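The proposed divisibility check amounts to verifying that the recovered stream is still a multiple of the code generator polynomial. A toy GF(2) sketch with an arbitrary degree-3 generator (for illustration only):

```python
def gf2_mod(dividend: int, divisor: int) -> int:
    """Remainder of polynomial division over GF(2); integer bits are coefficients."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend

g = 0b1011                                      # x^3 + x + 1, a toy generator
data = 0b1101
codeword = (data << 3) | gf2_mod(data << 3, g)  # data plus 3 redundancy bits

assert gf2_mod(codeword, g) == 0                # a valid codeword divides evenly
assert gf2_mod(codeword ^ 0b10000, g) != 0      # a bit corrupted on write is caught
```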


CONTENTS
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . v
About Cirrus Logic - Colorado . . . . . . . . . . . . . . . vi
Prologue . . . . . . . . . . . . . . . . . . . . . . . . . viii

CHAPTER 1 - INTRODUCTION . . . . . . . . . . . . . . . . . . 1
1.1 Introduction to Error Correction . . . . . . . . . . . . 1
1.2 Mathematical Foundations . . . . . . . . . . . . . . . . 8
1.3 Polynomials and Shift Register Sequences . . . . . . . . 16

CHAPTER 2 - EDAC FUNDAMENTALS . . . . . . . . . . . . . . . 49
2.1 Detection Fundamentals . . . . . . . . . . . . . . . . . 49
2.2 Correction Fundamentals . . . . . . . . . . . . . . . . 56
2.3 Decoding Fundamentals . . . . . . . . . . . . . . . . . 67
2.4 Decoding Shortened Cyclic Codes . . . . . . . . . . . . 82
2.5 Introduction to Finite Fields . . . . . . . . . . . . . 87
2.6 Finite Field Circuits . . . . . . . . . . . . . . . . . 103
2.7 Subfield Computation . . . . . . . . . . . . . . . . . 129

CHAPTER 3 - CODES AND CIRCUITS . . . . . . . . . . . . . . 135
3.1 Fire Codes . . . . . . . . . . . . . . . . . . . . . . 135
3.2 Computer-Generated Codes . . . . . . . . . . . . . . . 140
3.3 Binary BCH Codes . . . . . . . . . . . . . . . . . . . 145
3.4 Reed-Solomon Codes . . . . . . . . . . . . . . . . . . 158
3.5 b-Adjacent Codes . . . . . . . . . . . . . . . . . . . 205

CHAPTER 4 - APPLICATION CONSIDERATIONS . . . . . . . . . . 213
4.1 Raw Error Rates and Nature of Error . . . . . . . . . . 213
4.2 Decoded Error Rates . . . . . . . . . . . . . . . . . . 215
4.3 Data Recoverability . . . . . . . . . . . . . . . . . . 223
4.4 Data Accuracy . . . . . . . . . . . . . . . . . . . . . 230
4.5 Performance Requirements . . . . . . . . . . . . . . . 240
4.6 Pattern Sensitivity . . . . . . . . . . . . . . . . . . 241
4.7 K-Bit-Serial Techniques . . . . . . . . . . . . . . . . 243
4.8 Synchronization . . . . . . . . . . . . . . . . . . . . 250
4.9 Interleaved, Product and Redundant Sector Codes . . . . 270

CHAPTER 5 - SPECIFIC APPLICATIONS . . . . . . . . . . . . . 274
5.1 Evolution of EDAC Schemes . . . . . . . . . . . . . . . 274
5.2 Application to Large-Systems Magnetic Disk . . . . . . 278
5.3 Application to Small-Systems Magnetic Disk . . . . . . 293
5.4 Application to Mass-Storage Devices . . . . . . . . . . 350

CHAPTER 6 - TESTING OF ERROR-CONTROL SYSTEMS . . . . . . . 364
6.1 Microdiagnostics . . . . . . . . . . . . . . . . . . . 364
6.2 Host Software Diagnostics . . . . . . . . . . . . . . . 365
6.3 Verifying an ECC Implementation . . . . . . . . . . . . 365
6.4 Error Logging . . . . . . . . . . . . . . . . . . . . . 366
6.5 Hardware Self-Checking . . . . . . . . . . . . . . . . 367

CHAPTER 1 - INTRODUCTION
1.1 INTRODUCTION TO ERROR CORRECTION

1.1.1 A REVIEW OF PARITY
A byte, word, vector, or data stream is said to have odd parity if the number of
'1's it contains is odd. Otherwise, the byte, word, vector, or data stream is said to
have even parity. Parity may be determined with combinational or sequential logic.
The parity of two bits may be determined with an EXCLUSIVE-QR (XOR) gate.
The circled ' +' symbol is understood to represent XOR throughout this book.

dl

dl~

Odd

=)u=..

-OR-

t

dO

dO

Parity across a nibble may be determined with a parity tree.

'~D}~
D
d3

-OR-

dO

en

~

d3

d2

d2

~~

dl0J

-OR-

l

==0t

Odd

•

dl
dO

~

Parity of a bit stream may be determined by a single shift register stage and one
XOR gate. The shift register is assumed to be initialized to zero. The highest numbered bit is always transmitted and received first.

d3 d2 dl dO

p

=

d3 + d2 + dl + dO

.~.
or

- 1-

P

= d3 e

d2

p

e dl e dO

The circuit below determines parity across groups of data stream bits.

d6 d5 d4 d3 d2 d1 dO
PO

= dO

P1

=

+ d3 + d6

d1 + d4
d2 + d5

P2
Note that each bit is included in only one parity check.

The circuit below will also determine parity across groups of data stream bits.

d6 d5 d4 d3 d2 d1 dO

~

------------------~~~

P1

=
=

P2

= d6

PO

d4 + d3 + d2 + dO
d5 + d2 + d1 + dO

+ d3 + d2 + d1

The contribution of each data bit to the finaI shift register state is shown below.
Each data bit affects a unique combination of parity checks.

contribution
Data Bit

P2 P1 PO
100
010
001
101
111
110
all

d6
d5
d4
d3
d2
d1
dO

The contributions to the f'mal shift register state made by several strings of data
bits are shown below.

contribution

string
d6,d4
d3,d2,dO
d4,dO

P2 P1 PO

=>
=>
=>

- 2 -

101
001
010

SUPPLEMENTARY PROBLEMS . . . . . . . . . . . . . . . . . . . . .  370

APPENDIX A. PRIME FACTORS OF 2^K - 1  . . . . . . . . . . . . . . 374

APPENDIX B. METHODS OF FINDING LOGARITHMS AND
            EXPONENTIALS OVER A FINITE FIELD  . . . . . . . . . . 375

ABBREVIATIONS . . . . . . . . . . . . . . . . . . . . . . . . . . 403

GLOSSARY  . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404

BIBLIOGRAPHY  . . . . . . . . . . . . . . . . . . . . . . . . . . 419

INDEX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463

xvii

CHAPTER 1 - INTRODUCTION

1.1 INTRODUCTION TO ERROR CORRECTION

1.1.1 A REVIEW OF PARITY

A byte, word, vector, or data stream is said to have odd parity if the number of
'1's it contains is odd. Otherwise, the byte, word, vector, or data stream is said to
have even parity. Parity may be determined with combinational or sequential logic.
The parity of two bits may be determined with an EXCLUSIVE-OR (XOR) gate.
The circled '+' symbol is understood to represent XOR throughout this book.

[Figure: two-input XOR gate forming the odd parity of bits d1 and d0, shown in
two equivalent drawings]

Parity across a nibble may be determined with a parity tree.

[Figure: two equivalent XOR parity trees across nibble bits d3, d2, d1, d0
producing odd parity]

Parity of a bit stream may be determined by a single shift register stage and one
XOR gate. The shift register is assumed to be initialized to zero. The highest numbered bit is always transmitted and received first.

[Figure: the bit stream d3 d2 d1 d0 enters a single latch whose input is the
XOR of the incoming bit and the latch output]

    P = d3 + d2 + d1 + d0    or    P = d3 ⊕ d2 ⊕ d1 ⊕ d0

- 1 -

The circuit below determines parity across groups of data stream bits.

[Figure: three-stage shift register (no feedback) receiving the bit stream
d6 d5 d4 d3 d2 d1 d0, with stage outputs P0, P1, P2]

    P0 = d0 + d3 + d6
    P1 = d1 + d4
    P2 = d2 + d5

Note that each bit is included in only one parity check.
The circuit below will also determine parity across groups of data stream bits.

[Figure: three-stage shift register with feedback receiving the bit stream
d6 d5 d4 d3 d2 d1 d0, with stage outputs P0, P1, P2]

    P0 = d4 + d3 + d2 + d0
    P1 = d5 + d2 + d1 + d0
    P2 = d6 + d3 + d2 + d1
The contribution of each data bit to the final shift register state is shown below.
Each data bit affects a unique combination of parity checks.

                 Contribution
    Data Bit      P2 P1 P0
      d6          1  0  0
      d5          0  1  0
      d4          0  0  1
      d3          1  0  1
      d2          1  1  1
      d1          1  1  0
      d0          0  1  1

The contributions to the final shift register state made by several strings of data
bits are shown below.

                    Contribution
    String           P2 P1 P0
    d6,d4      =>    1  0  1
    d3,d2,d0   =>    0  0  1
    d4,d0      =>    0  1  0

- 2 -

The contribution to the final shift register state by each string is the XOR sum of
contributions from individual bits of the string, because the circuit is linear. For a
linear function f:

f(x+y) = f(x)+f(y)
The parity function P is linear, and therefore

P(x+y) = P(x)+P(y)
Circuits of this type are the basis of many error-correction systems.
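The linearity property can be checked directly from the per-bit contribution table above; a minimal sketch (not from the book), treating each contribution as a three-bit value P2 P1 P0:

```python
# Contribution of each data bit to the final state (P2 P1 P0) of the
# feedback circuit above, taken from the table in the text.
CONTRIB = {
    "d6": 0b100, "d5": 0b010, "d4": 0b001, "d3": 0b101,
    "d2": 0b111, "d1": 0b110, "d0": 0b011,
}

def final_state(bits):
    """XOR (modulo-2 sum) of the contributions of the listed data bits."""
    state = 0
    for b in bits:
        state ^= CONTRIB[b]
    return state

# Because the circuit is linear, a string's contribution is the XOR sum of
# the individual bit contributions -- matching the string table in the text:
assert final_state(["d6", "d4"]) == 0b101
assert final_state(["d3", "d2", "d0"]) == 0b001
assert final_state(["d4", "d0"]) == 0b010
```

The assertions reproduce the string-contribution table, illustrating P(x+y) = P(x)+P(y).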

-3-

1.1.2 A FIRST LOOK AT ERROR CORRECTION

This discussion presents an introduction to single-bit error correction using a code
that is intuitive and simple. Consider the two-dimensional parity-check code defined
below.

         Check-Bit Generation                  Syndrome Generation

    P0 = d0  + d4  + d8  + d12       S0 = d0  + d4  + d8  + d12 + P0
    P1 = d1  + d5  + d9  + d13       S1 = d1  + d5  + d9  + d13 + P1
    P2 = d2  + d6  + d10 + d14       S2 = d2  + d6  + d10 + d14 + P2
    P3 = d3  + d7  + d11 + d15       S3 = d3  + d7  + d11 + d15 + P3
    P4 = d12 + d13 + d14 + d15       S4 = d12 + d13 + d14 + d15 + P4
    P5 = d8  + d9  + d10 + d11       S5 = d8  + d9  + d10 + d11 + P5
    P6 = d4  + d5  + d6  + d7        S6 = d4  + d5  + d6  + d7  + P6
    P7 = d0  + d1  + d2  + d3        S7 = d0  + d1  + d2  + d3  + P7

         d0   d1   d2   d3  | P7
         d4   d5   d6   d7  | P6
         d8   d9   d10  d11 | P5
         d12  d13  d14  d15 | P4     Row Checks

         P0   P1   P2   P3           Column Checks

One of the eight required check/syndrome circuits is shown below.
[Figure: XOR tree forming S7 from d0, d1, d2, d3 and P7]

    S7 = d0 + d1 + d2 + d3 + P7

On write, each row check bit is selected to make the parity of its row even.
Each column check bit is selected to make the parity of its column even. The data bits
and the parity bits together are called a codeword.
On read, row syndrome bits are generated by checking parity across each row,
including the row check bit. Column syndrome bits are generated in a similar fashion.
Syndrome means symptom of error. For this code, syndrome bits can be viewed as the
XOR differences between read checks and write checks. If there is no error, all syndrome bits are zero.
When a single-bit error occurs, one row and one column will have inverted syndrome bits (odd parity). The bit in error is at the intersection of this row and column.
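The write-side encoding and read-side correction just described can be sketched in software. This is an illustrative implementation (not from the book), using the check-bit equations above:

```python
def encode(d):
    """d: list of 16 data bits d0..d15. Returns check bits P0..P7 (even parity)."""
    P = [0] * 8
    for c in range(4):                       # column checks P0..P3
        P[c] = d[c] ^ d[c + 4] ^ d[c + 8] ^ d[c + 12]
    for r in range(4):                       # row checks: P7 = top row .. P4 = bottom
        P[7 - r] = d[4*r] ^ d[4*r + 1] ^ d[4*r + 2] ^ d[4*r + 3]
    return P

def correct(d, P):
    """Form syndromes (read checks XOR write checks); fix a single-bit data error."""
    S = [p ^ q for p, q in zip(encode(d), P)]
    cols = [c for c in range(4) if S[c]]
    rows = [r for r in range(4) if S[7 - r]]
    if len(cols) == 1 and len(rows) == 1:    # exactly one row and one column inverted
        d[4*rows[0] + cols[0]] ^= 1          # flip the bit at the intersection
    return d

data = [1,0,1,1, 0,0,1,0, 1,1,0,0, 0,1,0,1]
checks = encode(data)
received = data.copy()
received[10] ^= 1                            # inject a single-bit error in d10
assert correct(received, checks) == data     # error located and corrected
```

With no error, every syndrome bit is zero and the data passes through unchanged.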
The circuit above shows the logic necessary for generating the write-check bit and
the syndrome bit for one row. For parallel decoding, this logic is required for each

-4-

row and column. Also, 16 AND gates are required for detecting the intersections of
inverted row and column syndrome bits. In addition, 16 XOR gates are required for
inverting data bits. The correction circuit for one particular data bit is shown below.

[Figure: raw data bit d10 is XORed with the AND of S2, S5, and ALLOW CORRECTION
to produce corrected d10]

Two data bits in error will cause either two rows, two columns, or both to have
inverted syndrome bits (odd parity). This condition can be trapped to give the code the
capability to detect double-bit errors in data.
All single check-bit errors are detected, but not all double check-bit errors. One
row and one column check bit in error will result in miscorrection (false correction). If
an overall check bit across data is added, the code is capable of detecting all double-bit
errors in data and check bits. This includes the case where one data bit and one parity
bit are in error. The overall check bit can be generated by forming parity across all
row or all column check bits. With the overall check bit added, all double-bit errors
are detectable but uncorrectable.
Miscorrection occurs when three bits are in error on three corners of a rectangle.
For example:

[Figure: 4x4 data array with errors on three corners of a rectangle; the fourth
corner is labeled m. Row checks, column checks, and the overall check are shown.]

The three errors illustrated above cause the decoder to respond as if there were a
single-bit error at location m. Miscorrection does not result for all combinations of
three bits in error, only for those where there are errors on three corners of a
rectangle.

Miscorrection probability for three-bit errors is the ratio of three-bit error
combinations that result in miscorrection to all possible three-bit error combinations.

-5 -

Misdetection (error condition not detected at all) occurs when four bits are in
error on the corners of a rectangle. For example:

[Figure: 4x4 data array with errors on all four corners of a rectangle. Row
checks, column checks, and the overall check are shown.]

This error condition leaves all syndrome bits equal to zero.

Misdetection does not result for all combinations of four bits in error, only those
where there are errors on four corners of a rectangle. Misdetection probability for
four-bit errors is the ratio of four-bit error combinations that result in misdetection
to all possible four-bit error combinations.
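The rectangle patterns described above can be counted by brute force. A sketch (not from the book), restricted to errors in the 16 data bits only:

```python
from itertools import combinations

def syndrome(bits):
    """XOR of (row one-hot, column one-hot) for each erroneous data bit d0..d15."""
    rows = cols = 0
    for b in bits:
        rows ^= 1 << (b // 4)
        cols ^= 1 << (b % 4)
    return rows, cols

def popcount(x):
    return bin(x).count("1")

# Three-bit patterns that mimic a single-bit error (one row, one column inverted):
miscorrect = sum(1 for t in combinations(range(16), 3)
                 if popcount(syndrome(t)[0]) == 1 and popcount(syndrome(t)[1]) == 1)

# Four-bit patterns that leave every syndrome bit zero:
misdetect = sum(1 for q in combinations(range(16), 4)
                if syndrome(q) == (0, 0))

assert miscorrect == 144      # 36 rectangles x 4 ways to pick three corners
assert misdetect == 36        # one pattern per rectangle: C(4,2)^2 = 36
```

This gives a data-only miscorrection ratio of 144/560 for three-bit errors and a misdetection ratio of 36/1820 for four-bit errors, consistent with the rectangle argument in the text.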
This discussion introduced the following error-correction concepts:
Check bits
Syndromes
Codeword
Correctable error
Detectable error
Miscorrection
Misdetection
Miscorrection probability
Misdetection probability

- 6 -

PROBLEMS

1.  Write the parity check equations for the circuit below.

    [Figure: shift register circuit receiving d6 d5 d4 d3 d2 d1 d0 with outputs
    P0 and P1]

        P0 =
        P1 =

2.  Write the parity check equations for the circuit below.

    [Figure: shift register circuit receiving d6 d5 d4 d3 d2 d1 d0 with outputs
    P0, P1 and P2]

        P0 =
        P1 =
        P2 =

3.  Generate a chart showing the contribution of each data bit to the final shift
    register state for the circuits shown above.
    If the data stream is zeros except for d3 and d1, what is the final shift
    register state?

- 7 -

1.2 MATHEMATICAL FOUNDATIONS
1.2.1 SOME DEFINITIONS, THEOREMS AND ALGORITHMS FOR INTEGERS

Definition 1.2.1. When we say an integer a divides an integer b we mean a divides
b with zero remainder. "a divides b" is written as a|b. "a does not divide b" is
written as a∤b.

Examples: 3|6, 3∤4, 2∤1

Definition 1.2.2. An integer a is called prime if a is greater than 1 and there are
no divisors of a that are less than a but greater than 1. If an integer a greater than 1
is not prime, then it is called composite.
Examples:

2, 3, and 5 are prime
4, 6, and 8 are composite

Definition 1.2.3. The greatest common divisor (GCD) of a set of integers
{a1,a2,...,an} is the largest positive integer that divides each of a1,a2,...,an. The
greatest common divisor may be written as GCD(a1,a2,...,an).

Algorithm 1.2.1. To find GCD(a1,a2,...,an), express each integer as the product of
prime factors. Form the product of their common factors. For repeated factors,
include in the product the highest power that is a factor of all the given integers.
The GCD is the absolute value of the product. If there are no common factors, the
GCD is one.
Examples:

    GCD(3,9,15)    = GCD(3, 3^2, 3*5)            = 3
    GCD(-165,231)  = GCD(-3*5*11, 3*7*11)        = 33
    GCD(105,165)   = GCD(3*5*7, 3*5*11)          = 15
    GCD(45,63,297) = GCD(3^2*5, 3^2*7, 3^3*11)   = 9

The GCD can also be found using Euclid's Algorithm.
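Euclid's Algorithm replaces factoring with repeated remaindering. A sketch (not from the book), extended to several arguments and reusing the identity GCD(a,b)*LCM(a,b) = |a*b|:

```python
def gcd(a, b):
    """Euclid's Algorithm: replace (a, b) with (b, a mod b) until b is zero."""
    a, b = abs(a), abs(b)
    while b:
        a, b = b, a % b
    return a

def gcd_all(*args):
    g = 0
    for a in args:
        g = gcd(g, a)
    return g

def lcm_all(*args):
    m = 1
    for a in args:
        m = abs(a) * m // gcd(m, a)
    return m

# The worked examples from the text:
assert gcd_all(3, 9, 15) == 3
assert gcd_all(-165, 231) == 33
assert lcm_all(6, 15, 21) == 210
assert lcm_all(30, 42, 66) == 2310
```

No prime factorizations are needed, which matters when the integers are large.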

Definition 1.2.4. The least common multiple (LCM) of a set of integers
{a1,a2,...,an} is the smallest positive integer that is divisible by each of
a1,a2,...,an. The least common multiple may be written LCM(a1,a2,...,an).

Algorithm 1.2.2. To find LCM(a1,a2,...,an), express each integer as a product of
prime factors. Form the product of primes that are a factor of any of the given
integers. Common factors between two or more integers are included in the product
only once. For repeated factors, include in the product the highest power that occurs
in any of the prime factorizations. The LCM is the absolute value of the product.
Examples:

    LCM(6,15,21)   = LCM(2*3, 3*5, 3*7)          = 210
    LCM(30,42,66)  = LCM(2*3*5, 2*3*7, 2*3*11)   = 2310
    LCM(-15,21,11) = LCM(-3*5, 3*7, 11)          = 1155
    LCM(45,63,297) = LCM(3^2*5, 3^2*7, 3^3*11)   = 10395

- 8 -

Theorem 1.2.1. Every integer a > 1 can be expressed as the product of primes (with
at least one factor).

Examples:

    3  = 3
    6  = 2*3
    15 = 3*5

Definition 1.2.5. Integers a and b are relatively prime if their greatest common
divisor is 1.
Examples:

3, 7
3, 4
15, 77

Theorem 1.2.2. Let integers a, b, and c be relatively prime in pairs; then a*b*c
divides d if, and only if, each of a, b, and c divides d.

Examples: 3|15, 5|15, 7∤15, therefore (3*5*7)∤15
          3|210, 5|210, 7|210, therefore (3*5*7)|210
Theorem 1.2.3. Let an integer a be prime; then a divides b*c*d if, and only if, a
divides b or c or d.

Examples: 3|6, therefore 3|(6*5*7)
          3∤5, 3∤7, 3∤11, therefore 3∤385

Definition 1.2.6. Let x be any real number. The integer function of x, written as
INT(x), is the greatest integer less than or equal to x.

Examples:

    INT(1/2)  = 0
    INT(5/3)  = 1
    INT(-1/2) = -1

Definition 1.2.7. Let x and y be any real numbers. x modulo y, written as x MOD y,
is defined as follows:

    x MOD y = x - y*INT(x/y)

Examples:

    5 MOD 3  = 2
    9 MOD 3  = 0
    -5 MOD 7 = 2
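These two functions translate directly into code. A sketch (not from the book); note that this floor-based MOD gives -5 MOD 7 = 2, unlike the truncating remainder of C's `%` operator:

```python
import math

def INT(x):
    """Greatest integer less than or equal to x."""
    return math.floor(x)

def MOD(x, y):
    """x modulo y, per Definition 1.2.7."""
    return x - y * INT(x / y)

assert INT(1/2) == 0 and INT(5/3) == 1 and INT(-1/2) == -1
assert MOD(5, 3) == 2 and MOD(9, 3) == 0 and MOD(-5, 7) == 2
```

The same convention is assumed throughout the Chinese Remainder Method examples that follow.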

1.2.2 SOME DEFINITIONS, THEOREMS AND ALGORITHMS FOR POLYNOMIALS

Definition 1.2.8. A polynomial is said to be monic if the coefficient of the term
with the highest degree is 1.

Definition 1.2.9. The greatest common divisor of two polynomials is the monic
polynomial of greatest degree which divides both.

Definition 1.2.10. The least common multiple of a(x) and b(x) is some c(x) divisible
by each of a(x) and b(x), which itself divides any other polynomial that is divisible
by each of a(x) and b(x).

Definition 1.2.11. If the greatest common divisor of two polynomials is 1, they are
said to be relatively prime.

Definition 1.2.12. A polynomial of degree n is said to be irreducible if it is not
divisible by any polynomial of degree greater than 0 but less than n.

Theorem 1.2.4. Let a(x), b(x), and c(x) be relatively prime in pairs; then
a(x)·b(x)·c(x) divides d(x) if, and only if, a(x) and b(x) and c(x) divide d(x).

Theorem 1.2.5. Let a(x) be irreducible; then a(x) divides b(x)·c(x)·d(x) if, and
only if, a(x) divides b(x) or c(x) or d(x).

Definition 1.2.13. A function f is said to be linear if the properties stated below
hold:

    a.  Linearity:      f(a·x) = a·f(x)
    b.  Superposition:  f(x+y) = f(x)+f(y)

- 10 -

1.2.3 THE CHINESE REMAINDER METHOD

There are times when integer arithmetic in a modular notation is preferred to a
fixed radix notation. The integers are represented by residues modulo a set of
relatively prime moduli.

Example: Assume integers are represented by residues modulo the moduli 3 and 5.

    Integer (k)       Residues (r0,r1)
                   MODULUS 3   MODULUS 5
         0             0           0
         1             1           1
         2             2           2
         3             0           3
         4             1           4
         5             2           0
         6             0           1
         7             1           2
         8             2           3
         9             0           4
        10             1           0
        11             2           1
        12             0           2
        13             1           3
        14             2           4
        15             0           0
        16             1           1

Notice that the integer k has a unique representation in residues from k =0 through
k=14. The integer k=15 has the same representation as k=O. In this case, the total
number of integers that have unique representation is 15. In general, the total number
of integers n having unique representation is given by the equation:
    n = LCM(e0,e1,...)

where the ei are moduli.
There are also times when an integer d must be determined if its residues modulo
a set of moduli are given. This can be accomplished with the Chinese Remainder Method. This method is based on the Chinese Remainder Theorem. See any number theory
text.

- 11 -

METHOD

    ei = Moduli    (the ei must be relatively prime in pairs)
    mi = n/ei
    Ai = Constants such that (Ai*mi) MOD ei = 1
    ri = Residues

    d  = desired integer = (A0*m0*r0 + A1*m1*r1 + ...) MOD n

EXAMPLE

    ei = 3, 5   (e0=3, e1=5)

    n  = LCM(3,5) = 15
    m0 = n/e0 = 15/3 = 5              } This calculation is
    m1 = n/e1 = 15/5 = 3              } performed at
                                      } development time.
    A0*5 MOD 3 = 1, therefore A0 = 2
    A1*3 MOD 5 = 1, therefore A1 = 2

    d = (10*r0 + 6*r1) MOD 15
                                      } This calculation is
    If r0,r1 = 2,3 then d = 8         } performed at
    If r0,r1 = 1,3 then d = 13        } execution time.
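The two-phase split above (constants at development time, a sum at execution time) can be sketched in code. An illustrative implementation (not from the book):

```python
from math import prod

def crm_constants(moduli):
    """Development-time step: n, the mi = n/ei, and the Ai with (Ai*mi) MOD ei = 1."""
    n = prod(moduli)              # equals the LCM when the ei are relatively prime
    m = [n // e for e in moduli]
    A = [next(a for a in range(1, e) if (a * mi) % e == 1)
         for e, mi in zip(moduli, m)]
    return n, m, A

def crm(residues, moduli):
    """Execution-time step: recover the least d with d MOD ei = ri."""
    n, m, A = crm_constants(moduli)
    return sum(a * mi * r for a, mi, r in zip(A, m, residues)) % n

# The worked example from the text (moduli 3 and 5):
assert crm([2, 3], [3, 5]) == 8
assert crm([1, 3], [3, 5]) == 13
```

The small search for each Ai runs once per modulus; in a real product those constants would be tabulated ahead of time, exactly as the text suggests.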

A PROCEDURE FOR PERFORMING THE CHINESE REMAINDER METHOD
WITHOUT USING MULTIPLICA TION
Frequently, the Chinese Remainder Method must be solved on a processor that
does not have a multiply instruction. A procedure using only addition and compare
instructions is described below.
The integer d is to be determined where d is the least integer such that:
    d MOD e0 = r0    and simultaneously    d MOD e1 = r1

or equivalently,

    d/e0 = n0 + r0/e0    and simultaneously    d/e1 = n1 + r1/e1

Rearranging gives

    d = n0*e0 + r0    and simultaneously    d = n1*e1 + r1

Multiplication can be expressed as repeated addition. Therefore,

    d = r0 + e0 + e0 + ... + e0        = r1 + e1 + e1 + ... + e1
             |________________|               |________________|
                  n0 times                         n1 times

A procedure for finding d based on the relationship above is detailed in the
following flowchart.

[Flowchart: two running sums d0 and d1 are initialized to r0 and r1; the smaller
sum is repeatedly incremented (d0 = d0 + e0, or d1 = d1 + e1) until d0 = d1,
which is the desired d.]

- 13 -
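The flowchart procedure uses only addition and comparison. A sketch (not from the book) of the same idea:

```python
def crm_no_multiply(r0, e0, r1, e1):
    """Find the least d with d MOD e0 = r0 and d MOD e1 = r1, add/compare only."""
    d0, d1 = r0, r1
    while d0 != d1:
        if d0 < d1:
            d0 += e0          # d0 = d0 + e0
        else:
            d1 += e1          # d1 = d1 + e1
    return d0

# The worked Chinese Remainder Method example (moduli 3 and 5):
assert crm_no_multiply(2, 3, 3, 5) == 8
assert crm_no_multiply(1, 3, 3, 5) == 13
```

Because each sum stays congruent to its residue, the first value the two sums share satisfies both congruences; for relatively prime moduli the loop always terminates within e0*e1 steps.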

1.2.4 MULTIPLICATION BY SHIFTING, ADDING, AND SUBTRACTING

Many 8-bit processors do not have a multiply instruction. This discussion describes
techniques to minimize the complexity of multiplying a variable by a constant when
these processors are used. These techniques provide another alternative for
accomplishing the multiplications required in performing the Chinese Remainder Method.

On an 8-bit processor any shift that is a multiple of 8 bits can be accomplished
with register moves. Therefore, multiplying by a power of 2 that is a multiple of 8
can be accomplished by register moves. Any string of ones in a binary value can be
represented by the power of 2 that is just greater than the highest power of 2 in the
string, minus the lowest power of 2 in the string. These results can be used to
minimize the complexity of multiplying a variable by a constant using register moves,
shifts, adds and subtracts.
Examples: In all examples, x is less than 256. The results are shown in a
form where register moves and shifts are identifiable.

    y = 255*x
      = (2^8 - 1)*x
      = 2^8*x - x

    y = 257*x
      = (2^8 + 1)*x
      = 2^8*x + x

    y = 992*x
      = (2^9 + 2^8 + 2^7 + 2^6 + 2^5)*x
      = (2^10 - 2^5)*x
      = 2^10*x - 2^5*x

    y = 32131*x
      = (2^14 + 2^13 + 2^12 + 2^11 + 2^10 + 2^8 + 2^7 + 2^1 + 2^0)*x
      = (2^15 - 2^9 - 2^7 + 2^1 + 2^0)*x
      = 2^15*x - 2^9*x - 2^7*x + 2^1*x + 2^0*x
      = 2^8*(2^7*x) - (2^7*x) - 2^8*(2^1*x) + (2^1*x) + x

In the last example, only two unique shift operations are required even though
the original constant contains nine powers of 2. This particular example is from the
Chinese Remainder Method when moduli 255 and 127 are used.
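The decompositions above can be checked exhaustively for 8-bit values. A sketch (not from the book):

```python
# Check the shift-and-add decompositions for every 8-bit value of x.
for x in range(256):
    assert 255 * x == (x << 8) - x                      # 255 = 2^8 - 1
    assert 257 * x == (x << 8) + x                      # 257 = 2^8 + 1
    assert 992 * x == (x << 10) - (x << 5)              # 992 = 2^10 - 2^5
    # 32131*x with only two unique shift operations (x << 7 and x << 1);
    # the shifts by 8 would be register moves on an 8-bit processor:
    a, b = x << 7, x << 1
    assert 32131 * x == (a << 8) - a - (b << 8) + b + x
```

The last line mirrors the final factored form in the text: the multiples-of-8 shifts cost nothing beyond register moves, leaving just two real shifts, two subtracts, and two adds.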

- 14 -

PROBLEMS

1.  Find the GCD of 70 and 15.

2.  Find the GCD of 70 and 11.

3.  Find the LCM of 30 and 42.

4.  Find the LCM of 33 and 10.

5.  Express 210 as a product of primes.

6.  Are 70 and 15 relatively prime?

7.  Are 70 and 11 relatively prime?

8.  Determine a:
        a = INT(7/3)       =
        a = -INT(1/3)      =
        a = INT(-1/3)      =
        a = 10 MOD 3       =
        a = -3 MOD 15      =
        a = 254 MOD 255    =

9.  Is 2·x^2 + 1 a monic polynomial?

10. Write the residues modulo the moduli 5 and 7 of the integer 8.

11. The residues for several integers modulo 5 and 7 are listed below. Compute the Ai
    of the Chinese Remainder Method. Then use the Chinese Remainder Method to
    determine the integers.
        a MOD 5 = 4, a MOD 7 = 6, a = ?
        a MOD 5 = 3, a MOD 7 = 5, a = ?
        a MOD 5 = 0, a MOD 7 = 4, a = ?
    What is the total number of unique integers that can be represented by residues
    modulo 5 and 7?

12. Define a fast division algorithm for dividing by 255 on an 8-bit processor that
    does not have a divide instruction. The dividend must be less than 65536.

13. What is the total number of unique integers that can be represented by residues
    modulo 4 and 11?

- 15 -

1.3 POLYNOMIALS AND SHIFT REGISTER SEQUENCES

1.3.1 INTRODUCTION TO POLYNOMIALS

It is convenient to consider the symbols of a binary data stream to be coefficients
of a polynomial in a variable x, with the powers of x serving as positional indicators.
These polynomials can be treated according to the laws of ordinary algebra with one
exception: coefficients are to be added modulo-2 (EXCLUSIVE-OR sum). The '+'
operator will be used to represent both ordinary addition and modulo-2 addition; when
used to represent modulo-2 addition, it will usually be separated from its operands by
a preceding and a following space.

As with polynomials of ordinary algebra, these polynomials have properties of
associativity, distributivity, and commutativity. These polynomials also factor into prime
or irreducible factors in only one way, just as do those of ordinary algebra.
For now, the value of coefficients will be either '1' or '0' depending on the value
of the corresponding data bit. Such polynomials are said to have binary coefficients or
to have coefficients from the field of two elements. Later, polynomials with coefficients other than '1' and '0' will be discussed. When transmitting and receiving polynomials, the highest order symbol is always transmitted or received first.

MULTIPLICATION OF POLYNOMIALS
Multiplication is just like ordinary multiplication of polynomials, except the addition of coefficients is accomplished with the XOR operation (modulo-2 addition).

Example #1:

           x^3 + x + 1                 1011
         ·         x^3               · 1000
         -------------               ------
       x^6 + x^4 + x^3      -or-    1011000

Example #2:

           x^3 + x + 1                 1011
         ·       x + 1               ·   11
         -------------               ------
           x^3 + x + 1                 1011
     x^4 + x^2 + x                    1011
     -------------------             ------
     x^4 + x^3 + x^2 + 1    -or-      11101

In example #2, unlike in ordinary polynomial multiplication, the two x terms cancel.

- 16 -
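Packing coefficients into the bits of an integer makes this "carry-less" multiplication easy to code. A sketch (not from the book), with bit i holding the coefficient of x^i:

```python
def gf2_mul(a, b):
    """Product of two GF(2) polynomials: XOR of shifted copies, no carries."""
    p = 0
    while b:
        if b & 1:
            p ^= a            # add (XOR) a copy of a for each set coefficient of b
        a <<= 1
        b >>= 1
    return p

# Example #1: x^3 * (x^3 + x + 1) = x^6 + x^4 + x^3
assert gf2_mul(0b1000, 0b1011) == 0b1011000
# Example #2: (x + 1) * (x^3 + x + 1) = x^4 + x^3 + x^2 + 1 (the two x terms cancel)
assert gf2_mul(0b11, 0b1011) == 0b11101
```

The second assertion shows the modulo-2 cancellation: 1011 + 10110 with XOR instead of carries gives 11101.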

DIVISION OF POLYNOMIALS

Division is just like ordinary division of polynomials, except the addition of
coefficients is accomplished with the XOR operation (modulo-2 addition).

Example #1: (x^5 + 1) divided by (x^3 + x + 1)

                       x^2 + 1                       101
               ----------------------           ----------
    x^3+x+1 )  x^5             + 1     1011 )  100001
               x^5 + x^3 + x^2                 1011
               ---------------        -or-     ----
                     x^3 + x^2 + 1               1101
                     x^3       + x + 1           1011
                     -----------------           ----
                           x^2 + x                110

    Quotient = x^2 + 1, Remainder = x^2 + x

Example #2: (x^5 + x^2 + 1) divided by (x^3 + x + 1)

                       x^2 + 1                       101
               ----------------------           ----------
    x^3+x+1 )  x^5       + x^2 + 1     1011 )  100101
               x^5 + x^3 + x^2                 1011
               ---------------        -or-     ----
                     x^3       + 1               1001
                     x^3 + x + 1                 1011
                     -----------                 ----
                           x                      010

    Quotient = x^2 + 1, Remainder = x

- 17 -
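Long division over GF(2) follows the same bit-packed convention as multiplication. A sketch (not from the book):

```python
def gf2_divmod(dividend, divisor):
    """Return (quotient, remainder) of GF(2) polynomial division."""
    q = 0
    while dividend.bit_length() >= divisor.bit_length():
        shift = dividend.bit_length() - divisor.bit_length()
        q |= 1 << shift                  # next quotient coefficient
        dividend ^= divisor << shift     # subtract (XOR) the shifted divisor
    return q, dividend

# Example #1: (x^5 + 1) / (x^3 + x + 1) = x^2 + 1, remainder x^2 + x
assert gf2_divmod(0b100001, 0b1011) == (0b101, 0b110)
# Example #2: (x^5 + x^2 + 1) / (x^3 + x + 1) = x^2 + 1, remainder x
assert gf2_divmod(0b100101, 0b1011) == (0b101, 0b010)
```

Each loop iteration cancels the current leading term, exactly as each row of the long division above does.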

1.3.2 INTRODUCTION TO SHIFT REGISTERS

A linear sequential circuit (LSC) is constructed with three building blocks. Any
connection is permissible as long as a single output arrow of one block is mated to a
single input arrow of another block.

[Figure: the three building blocks - MEMORY CIRCUITS (LATCHES), single input and
single output; MODULO-2 ADDERS (XOR GATES), single output with no restriction on
the number of inputs; and CONSTANT MULTIPLIERS, single input and single output]

Latches are clocked by a synchronous clock. The output of a latch at any point
in time is the binary value that appeared on its input one time unit earlier.

The output of a modulo-2 adder at any point in time is the modulo-2 sum of the
inputs at that time.

For now, a constant multiplier '·a' will be either '·1' or '·0'. If such a constant
multiplier is '·1', a connection exists. No connection exists for a constant
multiplier of '·0'.

AN EXAMPLE OF AN LSC

[Figure: a linear feedback shift register built from latches, XOR gates, and
constant multipliers]

A linear sequential circuit of the above form is also called a linear feedback shift
register (LFSR), a linear shift register (LSR) or simply a shift register (S/R).

AN EQUIVALENT CIRCUIT WHERE a1 = a2 = a3 = 1

[Figure: the same shift register with all constant multipliers equal to 1, drawn
as direct connections]

- 18 -

SHIFT REGISTER IMPLEMENTATION OF MULTIPLICATION

Polynomial multiplication can be implemented with a linear shift register.

The circuit below will multiply any input bit stream (input polynomial) by (x + 1).
The product appears on the output line. The number of shifts required is equal to the
sum of the degrees of the input polynomial and the multiplier polynomial plus one.

[Figure: one-stage shift register; the output is the XOR of the input and the
latch output]

Example #1: Assume the input polynomial to be (x^5 + x^3 + 1).

    Input      Shift Reg    Output
    Bit        State        Bit
    1 (x^5)    1            1 (x^6)
    0          0            1 (x^5)
    1 (x^3)    1            1 (x^4)
    0          0            1 (x^3)
    0          0            0
    1 (1)      1            1 (x)
    0          0            1 (1)

Example #2: Assume the input polynomial to be x^3.

    Input      Shift Reg    Output
    Bit        State        Bit
    1 (x^3)    1            1 (x^4)
    0          0            1 (x^3)
    0          0            0
    0          0            0
    0          0            0

NOTE: The shift register state is shown after the indicated input bit is clocked.

- 19 -

The circuits below will multiply any input bit stream (input polynomial) by
(x^3 + x + 1).

[Figure: Shift Register "A" - a multiplier for x^3 + x + 1]

[Figure: Shift Register "B" - an equivalent multiplier for x^3 + x + 1]

Example #1: Assume the input polynomial to be x^3.

    Input      Shift Reg    Output
    Bit        'B' State    Bit
    1 (x^3)    011          1 (x^6)
    0          110          0
    0          100          1 (x^4)
    0          000          1 (x^3)
    0          000          0
    0          000          0
    0          000          0

Example #2: Assume the input polynomial to be (x + 1).

    Input      Shift Reg    Output
    Bit        'B' State    Bit
    1 (x)      011          1 (x^4)
    1 (1)      101          1 (x^3)
    0          010          1 (x^2)
    0          100          0
    0          000          1 (1)

NOTE: The shift register state is shown after the indicated input bit is clocked.

- 20 -
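The "B"-style register can be simulated in a few lines. A sketch (not from the book); the stage-update rule is an assumption matching the state traces above:

```python
def multiply_lfsr(input_bits, taps):
    """Multiplier shift register. taps = [h_m, ..., h1, h0] of the fixed polynomial."""
    m = len(taps) - 1
    state = [0] * m                      # shift register stages, high stage first
    out = []
    for bit in input_bits + [0] * m:     # m extra shifts flush the register
        out.append(state[0] ^ (bit & taps[0]))
        # each stage takes the next stage's value plus the tapped input bit
        state = [state[i + 1] ^ (bit & taps[i + 1]) for i in range(m - 1)] \
                + [bit & taps[m]]
    return out

# x^3 * (x^3 + x + 1) = x^6 + x^4 + x^3  ->  output bits 1 0 1 1 0 0 0
assert multiply_lfsr([1, 0, 0, 0], [1, 0, 1, 1]) == [1, 0, 1, 1, 0, 0, 0]
# (x + 1) * (x^3 + x + 1) = x^4 + x^3 + x^2 + 1  ->  output bits 1 1 1 0 1
assert multiply_lfsr([1, 1], [1, 0, 1, 1]) == [1, 1, 1, 0, 1]
```

The total number of clocks is the input length plus m, matching the text's rule (degree of input plus degree of multiplier plus one).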

A GENERAL MULTIPLICATION CIRCUIT

[Figure: shift register with multiplier taps h0 through hi]

The circuit shown above multiplies any input polynomial D(x) by a fixed polynomial
P(x). The product appears on the output line.

    P(x) = h_i*x^i + h_(i-1)*x^(i-1) + h_(i-2)*x^(i-2) + ... + h_1*x + h_0

The number of shifts required is equal to the sum of the degrees of the input
polynomial and multiplier polynomial, plus one.

MULTIPLY CIRCUIT EXAMPLES

[Figure: shift register multiplying by x^2 + 1]

[Figure: shift register multiplying by x^4 + x^3 + 1]

[Figure: shift register multiplying by x^5 + x^3 + x^2 + 1]

- 21 -

SHIFT REGISTER IMPLEMENTATION OF DIVISION

Polynomial division can be implemented with an LSR.

The circuit below will divide any input bit stream by (x + 1). One shift is
required for each input bit. The quotient appears on the output line. The final
state of the LSR represents the remainder.

[Figure: one-stage shift register; the latch input is the XOR of the input and
the latch output]

Example #1: Assume the input polynomial to be x^6 + x^5 + x^4 + x^3 + x + 1.

    Input      Shift Reg    Output
    Bit        State        Bit
    1 (x^6)    1            0
    1 (x^5)    0            1 (x^5)
    1 (x^4)    1            0
    1 (x^3)    0            1 (x^3)
    0          0            0
    1 (x)      1            0
    1 (1)      0            1 (1)

Example #2: Assume the input polynomial to be (x^4 + x^3 + 1).

    Input      Shift Reg    Output
    Bit        State        Bit
    1 (x^4)    1            0
    1 (x^3)    0            1 (x^3)
    0          0            0
    0          0            0
    1 (1)      1            0

NOTE: The shift register state is shown after the indicated input bit is clocked.

- 22 -

The circuit below will divide any input bit stream by (x^3 + x + 1).

[Figure: three-stage internal-XOR shift register dividing by x^3 + x + 1]

Example #1: Assume the input polynomial to be (x^5 + 1).

    Input      Shift Reg        Output
    Bit        State            Bit
    1 (x^5)    001 (1)          0
    0          010 (x)          0
    0          100 (x^2)        0
    0          011 (x+1)        1 (x^2)
    0          110 (x^2+x)      0
    1 (1)      110 (x^2+x)      1 (1)

Example #2: Assume the input polynomial to be x^6.

    Input      Shift Reg        Output
    Bit        State            Bit
    1 (x^6)    001 (1)          0
    0          010 (x)          0
    0          100 (x^2)        0
    0          011 (x+1)        1 (x^3)
    0          110 (x^2+x)      0
    0          111 (x^2+x+1)    1 (x)
    0          101 (x^2+1)      1 (1)

NOTE: The shift register state is shown after the indicated input bit is clocked.

- 23 -
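The internal-XOR divider above can be simulated directly. A sketch (not from the book) that reproduces the x^6 example:

```python
def divide_lfsr(input_bits, g, m):
    """Divider shift register. g: divisor as an integer (bit i = coeff of x^i),
    m: degree of g. Returns (quotient bits, remainder = final state)."""
    state = 0
    quotient = []
    for b in input_bits:
        fb = (state >> (m - 1)) & 1          # feedback bit = next quotient bit
        quotient.append(fb)
        state = ((state << 1) ^ b) & ((1 << m) - 1)
        if fb:
            state ^= g & ((1 << m) - 1)      # subtract the low-order taps of g
    return quotient, state

# x^6 divided by x^3 + x + 1:
q, r = divide_lfsr([1, 0, 0, 0, 0, 0, 0], 0b1011, 3)
assert q == [0, 0, 0, 1, 0, 1, 1]            # quotient x^3 + x + 1
assert r == 0b101                            # remainder x^2 + 1
```

The state sequence this produces (001, 010, 100, 011, 110, 111, 101) matches the table above clock for clock.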

A GENERAL DIVISION CIRCUIT

[Figure: internal-XOR shift register with divisor taps g0 through gi]

The circuit above divides any input polynomial D(x) by a fixed polynomial P(x).
The quotient appears on the output line. The remainder is the final shift register
state.

    P(x) = g_i*x^i + g_(i-1)*x^(i-1) + ... + g_1*x + g_0

The number of shifts required is equal to the degree of the input polynomial plus
one.

DIVIDE CIRCUIT EXAMPLES

[Figure: shift register dividing by x^2 + 1]

[Figure: shift register dividing by x^4 + x^2 + 1]

[Figure: shift register dividing by x^6 + x^5 + x^4 + x^3 + 1]

- 24 -

SHIFT REGISTER IMPLEMENTATION OF SIMULTANEOUS
MULTIPLICATION AND DIVISION

It is possible to use a shift register to accomplish simultaneous multiplication and
division. The circuit below will multiply any input bit stream (input polynomial) by
x^3 and simultaneously divide by (x^3 + x + 1). The number of shifts required is equal
to the degree of the input polynomial plus one. The quotient appears on the output
line. The remainder is the final state of the shift register.

[Figure: three-stage internal-XOR shift register with the input entering at the
high-order end, premultiplying by x^3 while dividing by x^3 + x + 1]

Example #1: Assume the input polynomial to be (x^5 + 1).

    Input      Shift Reg        Output
    Bit        State            Bit
    1 (x^5)    011 (x+1)        1 (x^5)
    0          110 (x^2+x)      0
    0          111 (x^2+x+1)    1 (x^3)
    0          101 (x^2+1)      1 (x^2)
    0          001 (1)          1 (x)
    1 (1)      001 (1)          1 (1)

Example #2: Assume the input polynomial to be x^6.

    Input      Shift Reg        Output
    Bit        State            Bit
    1 (x^6)    011 (x+1)        1 (x^6)
    0          110 (x^2+x)      0
    0          111 (x^2+x+1)    1 (x^4)
    0          101 (x^2+1)      1 (x^3)
    0          001 (1)          1 (x^2)
    0          010 (x)          0
    0          100 (x^2)        0

NOTE: The shift register state is shown after the indicated input bit is clocked.

- 25 -

A CIRCUIT TO MULTIPLY AND DIVIDE SIMULTANEOUSLY

A general circuit to accomplish simultaneous multiplication by a polynomial h(x) of
degree three and division by a polynomial g(x) of degree three is shown below. The
multipliers are all '·1' (connection) or '·0' (no connection).

[Figure: three-stage shift register with multiplier taps h0-h3 and divisor taps
g0-g2]

To multiply by x^3, set h3 = 1 and set all other multipliers to 0.

To multiply by 1 and divide by (x^3 + x + 1), set h0=1, g0=1 and g1=1 and set all
other multipliers to 0.

To multiply by x^3 and divide by (x^3 + x + 1), set h3=1, g0=1, and g1=1 and set
all other multipliers to 0. This is a form of simultaneous multiplication and division
that is encountered frequently in error-correction circuits.

To multiply by (x + 1) and divide by x^3, set h0=1 and h1=1 and set all other
multipliers to 0.

- 26 -

A GENERAL CIRCUIT FOR SIMULTANEOUS MULTIPLICATION AND DIVISION

[Figure: shift register with multiplier taps h0 through hi and divisor taps g0
through gi]

The circuit above multiplies any input polynomial by P1(x) and simultaneously
divides by P2(x).

    P1(x) = h_i*x^i + h_(i-1)*x^(i-1) + h_(i-2)*x^(i-2) + ... + h_1*x + h_0
    P2(x) = g_i*x^i + g_(i-1)*x^(i-1) + g_(i-2)*x^(i-2) + ... + g_1*x + g_0

The number of shifts required is equal to the degree of the input polynomial plus
one.

EXAMPLES OF CIRCUITS TO MULTIPLY AND DIVIDE SIMULTANEOUSLY

[Figure: shift register multiplying by x^3 + 1 and dividing by x^4 + x^2 + 1]

[Figure: shift register multiplying by x^5 + 1 and dividing by
x^5 + x^3 + x^2 + 1]

- 27 -

SIMULTANEOUS MULTIPLICATION AND DIVISION
WHEN THE MULTIPLIER POLYNOMIAL HAS A HIGHER DEGREE

The circuit below shows how to construct a shift register to multiply and divide
simultaneously when the multiplier polynomial has a higher degree. The number of
shifts required is equal to the degree of the input polynomial, plus the degree of the
multiplier polynomial, minus the degree of the divider polynomial, plus one. Register
states are labeled below for the multiply polynomial and above for the divide
polynomial.

[Figure: shift register multiplying by x^5 + 1 and dividing by x^3 + x + 1;
stages labeled x^2, x, 1]

SHIFT REGISTER IMPLEMENTATION TO COMPUTE A SUM OF PRODUCTS

A single shift register can be used to compute the sum of the products of
different variable polynomials with different fixed polynomials, e.g.
a(x)·h1(x) + b(x)·h2(x).

The circuit below will multiply an input polynomial a(x) by a fixed polynomial
x^3 + x + 1 and simultaneously multiply an input polynomial b(x) by the fixed
polynomial x^2 + 1 and sum the products. The sum of the products appears on the
output line. The number of shifts required is equal to the sum of the degrees of the
input polynomial of the highest degree and the fixed polynomial of the highest degree
plus one.

[Figure: shared shift register with input a(x) feeding taps for x^3 + x + 1 and
input b(x) feeding taps for x^2 + 1]

Example #1: Assume a(x) to be x^3 and b(x) to be (x^5 + x^3 + 1).

    a(x)       b(x)       Shift Reg    Output
    Input      Input      State        Bit
    0          1 (x^5)    101          0
    0          0          010          1 (x^7)
    1 (x^3)    1 (x^3)    010          1 (x^6)
    0          0          100          0
    0          0          000          1 (x^4)
    0          1 (1)      101          0
    0          0          010          1 (x^2)
    0          0          100          0
    0          0          000          1 (1)

NOTE: The shift register state is shown after the indicated input bit is clocked.

- 29 -

"

-.

"

SHIFT REGISTER IMPLEMENTATION TO COMPUTE A SUM OF PRODUCTS
MODULO A DIVISOR

A single shift register can be used to compute the remainder of the sum of
products of different variable polynomials with different fixed polynomials when
divided by another polynomial, e.g. [a(x)·h1(x) + b(x)·h2(x)] MOD g(x).

The circuit below will multiply an input polynomial a(x) by a fixed polynomial
x^2 + x + 1 and simultaneously multiply an input polynomial b(x) by the fixed
polynomial x^2 + 1 and sum the products. The sum of the products is reduced modulo
x^3 + x + 1. The shift register contents at the end of the operation is the result.
The number of shifts required is equal to the degree of the input polynomial of the
highest degree plus one.

[Figure: three-stage shift register with feedback taps for x^3 + x + 1; input
a(x) feeds taps for x^2 + x + 1 and input b(x) feeds taps for x^2 + 1]

Example #1: Assume a(x) to be x^3 and b(x) to be (x^5 + x^3 + 1).

    a(x)       b(x)       Shift Reg
    Input      Input      State
    0          1 (x^5)    101
    0          0          001
    1 (x^3)    1 (x^3)    000
    0          0          000
    0          0          000
    0          1 (1)      101 (x^2+1)

NOTE: The shift register state is shown after the indicated input bit is clocked.

- 30 -

OTHER FORMS OF THE DIVISION CIRCUIT
The circuit examples below are implemented using the internal-XOR form of shift
register.

OUTPUT

~
t

Premultiply by x 3 and
divide by x 3 + x + 1

OUTPUT

~.
r
Divide by x 3 + x + 1

The circuit shown below can accomplish the circuit function of either of the
circuits shown above. If the gate is enabled for ~e entire input polynomial, the circuit
function is to premultiply by x"3 and divide by (x + x + 1). However, if the gate is
disabled for the last m (~ is 3 in this· case) bits of the input polynomial, the circuit
function is to divide by (x + x + 1) without premultiplying. In the following general
discussion, g(x) is the division polynomial and m is the degree of the division polynomial.

OUTPUT

GATE ENABLED DURING LASTm BITS OF INPUT POLYNOMIAL
The circuit function is premultiply by xm and divide by g(x). The quotient appears
on the output line. The remainder is taken from the shift register.

GATE DISABLED DURING LAST m BITS OF INPUT POLYNOMIAL

The circuit function is to divide by g(x) without premultiplying by x^m. The quotient appears on the output line up to the last m bits of the input polynomial. The remainder appears on the output line during the last m bits of the input polynomial. The remainder can also be taken from the shift register.
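The gated register's premultiply-and-divide behavior can be modeled in software. The sketch below (a hypothetical helper, not a circuit from the text) simulates the internal-XOR form with the gate enabled for the whole input: the quotient appears on the output line and the remainder remains in the register.

```python
def divide_premultiplied(bits, g):
    """Internal-XOR shift register: premultiply by x^m, divide by g(x).
    Feed bits high-order first; returns (quotient bits, remainder)."""
    m = g.bit_length() - 1
    mask = (1 << m) - 1
    low = g & mask                       # g(x) with its x^m term removed
    reg = 0
    quotient = []
    for d in bits:
        fb = d ^ (reg >> (m - 1))        # input XOR high-order stage
        reg = ((reg << 1) & mask) ^ (low if fb else 0)
        quotient.append(fb)              # quotient appears on the output line
    return quotient, reg

# a(x) = x5, g(x) = x3 + x + 1
q, r = divide_premultiplied([1, 0, 0, 0, 0, 0], 0b1011)
print(q, format(r, '03b'))   # [1, 0, 1, 1, 1, 0] 010: quotient x5+x3+x2+x, remainder x
```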


EXTERNAL-XOR FORM OF SHIFT REGISTER DIVIDE CIRCUIT

There is another form of the shift register divide circuit, called the external-XOR form, that in many cases can be implemented with less logic than the internal-XOR form. An example is shown below.

(Figure: external-XOR form of shift register divide circuit)

NOTE: The ODD circuit is a parity tree.

This circuit is sometimes drawn as shown below.

(Figure: alternate drawing of the external-XOR divide circuit)

The external-XOR form of the shift register can be implemented two ways.

1. The shift register input is enabled during the entire read of the input polynomial. In this case, the circuit function is premultiply by x^m and divide by g(x).

2. The shift register input is disabled during the last m bits of the input polynomial. In this case, the circuit function is divide by g(x).


Example #1. Input to shift register enabled during entire read of input polynomial.

Circuit function = a(x)·x^m/g(x), where a(x) = x5 and g(x) = x3 + x + 1.

Clocks with gate enabled during read, to get the quotient:

    DATA    S/R    OUTPUT
    1       001    1
    0       010    0
    0       101    1
    0       011    1
    0       111    1
    0       110    0

    Quotient = x5 + x3 + x2 + x  (high-order bit first)

Clocks with gate disabled after read, to get the remainder:

            S/R    OUTPUT
            100    0
            000    1
            000    0

    Remainder = x

1. During read, the output is the quotient.

2. After read is complete, disable the gate and clock m more times to place the remainder on the output line.

                     x5 + x3 + x2 + x
                 ---------------------------
    x3 + x + 1 ) x8                          (x8 because of premultiply)
                 x8 + x6 + x5
                 ------------
                      x6 + x5
                      x6 + x4 + x3
                      ------------
                           x5 + x4 + x3
                           x5 + x3 + x2
                           ------------
                                x4 + x2
                                x4 + x2 + x
                                -----------
                                          x  (remainder)

- 33-

Example #2. Input to shift register disabled during last m bits of input polynomial.

Circuit function = a(x)/g(x), where a(x) = x5 and g(x) = x3 + x + 1.

    DATA    S/R    OUTPUT
    1       001    1
    0       010    0
    0       101    1

    Quotient = x2 + 1  (high-order bit first)

Gate disabled at this point.

    0       010    1
    0       100    1
    0       000    1

    Remainder = x2 + x + 1

Output:

1. Up to the last m bits, the output is the quotient.

2. During the last m bits, the output is the remainder.

                     x2 + 1
                 ----------------
    x3 + x + 1 ) x5
                 x5 + x3 + x2
                 ------------
                      x3 + x2
                      x3 + x  + 1
                      -----------
                      x2 + x + 1   (remainder)

PERFORMING POLYNOMIAL MULTIPLICATION AND DIVISION
WITH COMBINATORIAL LOGIC

Computing parity across groups of data bits using the circuit below was previously studied.

    a(x) = d6·x6 + d5·x5 + d4·x4 + d3·x3 + d2·x2 + d1·x + d0

(Figure: shift register computing parity across groups of data bits)

    p0 = d4 + d3 + d2 + d0
    p1 = d5 + d2 + d1 + d0
    p2 = d6 + d3 + d2 + d1

Now that polynomials have been introduced, the function of this circuit can be restated. It premultiplies the input polynomial by x3 and divides by (x3 + x + 1). Obviously, the parity check equations can be implemented with combinatorial logic. Therefore, the circuit function can be implemented with combinatorial logic.
(Figure: combinatorial XOR trees computing p0 = d4+d3+d2+d0, p1 = d5+d2+d1+d0, and p2 = d6+d3+d2+d1)

The combinatorial logic circuit above computes the remainder from premultiplying a 7-bit input polynomial by x3 and dividing by (x3 + x + 1).
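The parity check equations can be confirmed exhaustively against direct polynomial division. The brute-force check below (an illustration of ours, not from the text) covers all 128 possible 7-bit inputs.

```python
from itertools import product

def pmod(a, g):
    """Remainder of a(x) divided by g(x) over GF(2); bit i = coeff of x^i."""
    while a and a.bit_length() >= g.bit_length():
        a ^= g << (a.bit_length() - g.bit_length())
    return a

for d6, d5, d4, d3, d2, d1, d0 in product((0, 1), repeat=7):
    a = (d6 << 6) | (d5 << 5) | (d4 << 4) | (d3 << 3) | (d2 << 2) | (d1 << 1) | d0
    r = pmod(a << 3, 0b1011)        # premultiply by x3, divide by x3 + x + 1
    p0 = d4 ^ d3 ^ d2 ^ d0
    p1 = d5 ^ d2 ^ d1 ^ d0
    p2 = d6 ^ d3 ^ d2 ^ d1
    assert r == (p2 << 2) | (p1 << 1) | p0
print("parity equations verified for all 128 inputs")
```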


THE SHIFT REGISTER AS A SEQUENCE GENERATOR

Consider the circuit below:

(Figure: three-stage shift register implementing x3 + x + 1)

If this circuit is initialized to '001' and clocked, the sequence below will be generated:

    001, 010, 100, 011, 110, 111, 101, 001, ...

The sequence repeats every seven shifts. The length of the sequence is seven. The maximum length that a shift register can generate is 2^m - 1, where m is the shift register length. Shift registers do not always generate the maximum length sequence. The sequence length depends on the implemented polynomial. It will be a maximum length sequence only if the polynomial is primitive.


1.3.3 MORE ON POLYNOMIALS

Reciprocal Polynomial. The reciprocal of a polynomial P(x) of degree m with binary coefficients,

    P(x) = pm·x^m + pm-1·x^(m-1) + ··· + p1·x + p0

is defined as:

    x^m·P(1/x) = p0·x^m + p1·x^(m-1) + ··· + pm-1·x + pm

i.e., the coefficients are flipped end-for-end. "Reverse" is a synonym for "reciprocal."

Self-Reciprocal Polynomial. A polynomial is said to be self-reciprocal if it has the same coefficients as its reciprocal polynomial.

Forward Polynomial. A polynomial is called the forward polynomial when it is necessary to distinguish it from its reciprocal (reverse) polynomial. This applies only to polynomials which are not self-reciprocal.

Polynomial Period. The period of a polynomial P(x) is the least positive integer e such that (x^e + 1) is divisible by P(x).

Reducible. A polynomial of degree m is reducible if it is divisible by some polynomial of a degree greater than 0 but less than m.

Irreducible. A polynomial of degree m is said to be irreducible if it is not divisible by any polynomial of degree greater than 0 but less than m. "Prime" is a synonym for "irreducible." The reciprocal polynomial of an irreducible polynomial is also irreducible.

Primitive Polynomial. A polynomial of degree m is said to be primitive if its period is 2^m - 1. A primitive polynomial is also irreducible. The reciprocal polynomial of a primitive polynomial is also primitive.


A PROPERTY OF RECIPROCAL POLYNOMIALS

The reciprocal polynomial can be used to generate a sequence that is the reverse of the sequence generated by the forward polynomial.

Example: Shift register "A" below implements (x3 + x + 1) and shifts left. Shift register "B" implements (x3 + x2 + 1), the reciprocal of (x3 + x + 1), and shifts right.

Initialize shift register "A" to '001' and clock four times.

    Clock    Shift Register "A" Contents
    -        001
    1        010
    2        100
    3        011
    4        110

Transfer the contents of shift register "A" to shift register "B" and clock four times.

    Clock    Shift Register "B" Contents
    -        110
    1        011
    2        100
    3        010
    4        001

Shift register "B" retraces in the reverse direction the states of shift register "A".
The property of reciprocal polynomials described above will be used later for decoding
some types of error-correcting codes.
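The retrace can be simulated. In the sketch below (the helper functions and their feedback masks are ours, chosen so that each step exactly inverts the other), register "A" steps forward through its sequence and register "B", implementing the reciprocal polynomial and shifting the other way, walks back through the same states.

```python
M = 3                                   # register length

def step_a(s):
    """Left-shifting register "A" for x3 + x + 1."""
    msb = s >> (M - 1)
    return ((s << 1) & ((1 << M) - 1)) ^ (0b011 if msb else 0)

def step_b(s):
    """Right-shifting register "B" for the reciprocal, x3 + x2 + 1."""
    lsb = s & 1
    return (s >> 1) ^ (0b101 if lsb else 0)

s = 0b001
trace_a = [s]
for _ in range(4):                      # clock "A" four times
    s = step_a(s)
    trace_a.append(s)

trace_b = [s]
for _ in range(4):                      # transfer to "B" and clock four times
    s = step_b(s)
    trace_b.append(s)

assert trace_b == trace_a[::-1]         # "B" retraces "A" in reverse
print([format(x, '03b') for x in trace_b])   # ['110', '011', '100', '010', '001']
```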


DETERMINING THE PERIOD OF AN IRREDUCIBLE POLYNOMIAL
WITH BINARY COEFFICIENTS

The algorithm described below for determining the period of an irreducible polynomial g(x) with binary coefficients requires a table. The table is used in determining the residues of powers of x up to x^(2^m - 1).

The table is a list of the residues of x, x^2, x^4, ···, x^(2^(m-1)) modulo g(x), where m is the degree of g(x). Each entry in the table can be computed by squaring the prior entry and reducing modulo g(x). The justification is as follows:

    x^(2a) MOD g(x) = (x^a·x^a) MOD g(x)
                    = {[x^a MOD g(x)]·[x^a MOD g(x)]} MOD g(x)
                    = [x^a MOD g(x)]^2 MOD g(x)

The example below illustrates the use of the table for determining the residue of x^50 modulo g(x).

    x^50 MOD g(x) = [x^(32+16+2)] MOD g(x)
                  = [x^32·x^16·x^2] MOD g(x)
                  = {[x^32 MOD g(x)]·[x^16 MOD g(x)]·[x^2 MOD g(x)]} MOD g(x)

Select these residues from the table.

The period of an irreducible polynomial of degree m must be a divisor of (2^m - 1). For each e that is a divisor of 2^m - 1, compute the residue of x^e modulo g(x) by multiplying together and reducing modulo g(x) an appropriate set of residues from the table.

The period of the polynomial is the least e such that the residue of x^e modulo g(x) is one. If the period is 2^m - 1, the polynomial is primitive.
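A software version of the table method might look as follows (a sketch; `period_by_table` is our own name). The table of residues of x^(2^i) is built by repeated squaring, and each divisor e of 2^m - 1 is tested by multiplying together the table entries selected by the binary representation of e.

```python
def pmul(a, b):
    """Carry-less (GF(2)) polynomial multiplication."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def pmod(a, g):
    """Remainder of a(x) divided by g(x) over GF(2)."""
    while a and a.bit_length() >= g.bit_length():
        a ^= g << (a.bit_length() - g.bit_length())
    return a

def period_by_table(g):
    """Period of an irreducible g(x), via the table of residues of x^(2^i)."""
    m = g.bit_length() - 1
    table = [pmod(0b10, g)]                 # residue of x
    for _ in range(m - 1):
        table.append(pmod(pmul(table[-1], table[-1]), g))   # square prior entry
    n = (1 << m) - 1
    for e in sorted(d for d in range(1, n + 1) if n % d == 0):
        residue, i, bits = 1, 0, e
        while bits:                         # multiply residues picked by e's bits
            if bits & 1:
                residue = pmod(pmul(residue, table[i]), g)
            bits >>= 1
            i += 1
        if residue == 1:
            return e

print(period_by_table(0b11111))   # x4+x3+x2+x+1 -> 5 (irreducible, not primitive)
print(period_by_table(0b10011))   # x4+x+1       -> 15 (primitive)
```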


DETERMINING THE PERIOD OF A COMPOSITE POLYNOMIAL
WITH BINARY COEFFICIENTS

Let fi(x) represent the irreducible factors of f(x). If

    f(x) = f1(x)·f2(x)·f3(x)····

and there are no repeating factors, the period e of f(x) is given by:

    e = LCM(e1,e2,e3,···)

where the ei are the periods of the irreducible factors.

Example: The period of (x3 + 1) = (x + 1)·(x2 + x + 1) is 3.

If f(x) is of the form:

    f(x) = [f1(x)]^m1·[f2(x)]^m2·[f3(x)]^m3···

where the mi are the powers of repeating irreducible factors, then the period e of f(x) is given by:

    e = k·LCM(e1,e2,e3,···)

where k is the least power of two which is not less than any of the mi.

Example: The period of (x3 + x2 + x + 1) = (x + 1)^3 is 4.

A SIMPLE METHOD OF COMPUTING PERIOD

A simple method for computing the period of a polynomial is as follows: Initialize a hardware or software shift register implementing the polynomial to '00···01'. Clock the shift register until it returns to the '00···01' state. The number of clocks required is the period of the polynomial.

This method can be used to compute the period of composite as well as irreducible polynomials. However, it can be very time consuming when the period is large.
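The clocking method takes only a few lines of software (a sketch; the Galois-style register below is one of several equivalent implementations):

```python
def period_by_clocking(g):
    """Clock a shift register implementing g(x) until it returns to '00...01'."""
    m = g.bit_length() - 1
    mask = (1 << m) - 1
    low = g & mask                     # feedback taps: g(x) minus its x^m term
    state, clocks = 1, 0
    while True:
        msb = state >> (m - 1)
        state = ((state << 1) & mask) ^ (low if msb else 0)
        clocks += 1
        if state == 1:                 # back at '00...01'
            return clocks

print(period_by_clocking(0b1011))   # x3 + x + 1            -> 7
print(period_by_clocking(0b1111))   # (x + 1)^3 (composite) -> 4
```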


NUMBER OF PRIMITIVE POLYNOMIALS OF GIVEN DEGREE

The divisors (factors) of (x^(2^m - 1) + 1) are the polynomials whose period is 2^m - 1 or whose period divides 2^m - 1. This may include polynomials of degree less than or greater than m.

The divisors (factors) of (x^(2^m - 1) + 1) that are of degree m are the primitive polynomials of degree m.

The number n of primitive polynomials of degree m with binary coefficients is given by:

    n = φ(2^m - 1)/m

where φ(x) is Euler's phi function, the number of positive integers equal to or less than x that are relatively prime to x:

    φ(x) = Π (pi)^(ei-1)·(pi - 1)
           i

where

    pi = the prime factors of x
    ei = the powers of the prime factors pi

Example: There are 30 positive integers that are equal to or less than 31 and relatively prime to 31. Therefore, there are 30/5 = 6 primitive polynomials of degree 5.
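The count can be reproduced directly (a brute-force sketch of φ(2^m - 1)/m, ours, fine for small m):

```python
from math import gcd

def num_primitive(m):
    """phi(2^m - 1) / m: the number of degree-m primitive polynomials over GF(2)."""
    n = (1 << m) - 1
    phi = sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)   # Euler's phi of n
    return phi // m

print(num_primitive(3), num_primitive(4), num_primitive(5))   # 2 2 6
```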


SHIFT REGISTER SEQUENCES USING A NONPRIMITIVE POLYNOMIAL

Previously, a maximum length sequence generated by a primitive polynomial was studied. Nonprimitive polynomials generate multiple sequences.

The state sequence diagram shown below is for the irreducible nonprimitive polynomial x4 + x3 + x2 + x + 1. One of its length-five sequences is:

    0101, 1010, 1011, 1001, 1101, 0101, ...

The state sequence diagram shown below is for the reducible polynomial

    x4 + x3 + x2 + 1 = (x + 1)·(x3 + x + 1)

    0001, 0010, 0100, 1000, 1101, 0111, 1110, 0001, ...

    0011, 0110, 1100, 0101, 1010, 1001, 1111, 0011, ...

plus the two length-one sequences 0000 and 1011.

Each of the four sequences directly above contains states with either an odd number of one bits or an even number of one bits, but not both. This is caused by the (x + 1) factor.


REDUCTION MODULO A FIXED POLYNOMIAL

It is frequently necessary to reduce an arbitrary polynomial modulo a fixed polynomial, or it may be necessary to reduce the result of an operation modulo a fixed polynomial.

The arbitrary polynomial could be divided by the fixed polynomial and the remainder retained as the modulo result.

Another method is illustrated below. Assume the fixed polynomial to be (x3 + x + 1). Reduce all terms of the arbitrary polynomial by repeated application of the following relationship:

    x^(i+3) = x^(i+1) + x^i

Suppose the arbitrary polynomial is x4. Then, using the relationship above with i=1 gives:

    x4 = x2 + x

Other examples of arbitrary polynomials reduced modulo (x3 + x + 1) are shown below.

    x4 + x2 = (x2 + x) + x2
            = x

    x9 = x7 + x6
       = (x5 + x4) + (x4 + x3)
       = x5 + x3
       = (x3 + x2) + x3
       = x2
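The substitution method is easy to mechanize (a sketch; `reduce_by_substitution` is our own helper name). Each pass replaces the current highest term using the low-order terms of g(x) shifted up by x^i.

```python
def reduce_by_substitution(poly, g):
    """Reduce poly modulo g by repeatedly substituting for the highest term.
    For g(x) = x3 + x + 1 the substitution is x^(i+3) = x^(i+1) + x^i."""
    m = g.bit_length() - 1
    low = g ^ (1 << m)                  # g(x) minus its x^m term
    while poly.bit_length() > m:        # while degree >= m
        i = poly.bit_length() - 1 - m
        poly ^= ((1 << m) | low) << i   # remove x^(i+m), add in the low terms
    return poly

g = 0b1011                                                # x3 + x + 1
assert reduce_by_substitution(0b10000, g) == 0b110        # x4      -> x2 + x
assert reduce_by_substitution(0b10100, g) == 0b010        # x4 + x2 -> x
assert reduce_by_substitution(0b1000000000, g) == 0b100   # x9      -> x2
print("all reductions match the worked examples")
```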

DIVIDING BY A COMPOSITE POLYNOMIAL

Sometimes it is necessary to divide a received polynomial C'(x) by a composite polynomial p(x) = p1(x)·p2(x)·p3(x)···, where p1(x), p2(x), p3(x), ··· are relatively prime in pairs. Assume the remainder is to be checked for zero.

The remainder could be checked for zero after dividing the received polynomial by the composite polynomial. However, dividing the received polynomial by the individual factors of the composite polynomial and checking all individual remainders for zero would be equivalent.

Example #1: p(x) = p1(x)·p2(x) = (x + 1)·(x3 + x + 1)

(Figure: shift registers computing the composite remainder r(x) and the individual remainders r1(x) and r2(x))

At other times, when the generator polynomial is composite, individual remainders are required for computation.

The received polynomial could be divided directly by each factor of the composite polynomial to get individual remainders. However, the following two-step procedure would be equivalent.

1. Divide the received polynomial by the composite polynomial to get a composite remainder.

2. Divide the composite remainder by factors of the composite polynomial to get individual remainders.

Step 2 could be accomplished by software, sequential logic, or combinatorial logic. In many cases, a slower process can be used in step 2 than in step 1 because fewer cycles are required in dividing the composite remainder.
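The claimed equivalence can be checked exhaustively for the example factors (a brute-force sketch of ours, run over every 8-bit received polynomial):

```python
def pmul(a, b):
    """Carry-less (GF(2)) polynomial multiplication."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def pmod(a, g):
    """Remainder of a(x) divided by g(x) over GF(2)."""
    while a and a.bit_length() >= g.bit_length():
        a ^= g << (a.bit_length() - g.bit_length())
    return a

p1, p2 = 0b11, 0b1011        # (x + 1) and (x3 + x + 1)
p = pmul(p1, p2)             # composite polynomial x4 + x3 + x2 + 1

for c in range(1 << 8):      # every possible 8-bit received polynomial
    composite_r = pmod(c, p)
    # reducing the composite remainder by each factor equals dividing directly
    assert pmod(composite_r, p1) == pmod(c, p1)
    assert pmod(composite_r, p2) == pmod(c, p2)
print("two-step and direct division agree for all 256 inputs")
```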

The diagram below shows an example of computing individual remainders from a composite remainder using combinatorial logic.

Example #2

(Figure: combinatorial logic deriving the individual remainders r1(x) and r2(x) from the composite remainder r(x))

It is also possible to compute a composite remainder from individual remainders, as shown below.

Example #3

(Figure: combinatorial logic deriving the composite remainder r(x) from the individual remainders r1(x) and r2(x))

In the examples above, the factors of the composite polynomial are assumed to be relatively prime. If this is the case, the Chinese Remainder Theorem for polynomials guarantees a one-to-one mapping between composite remainders and sets of individual remainders.

To understand how the connections in circuit Examples #2 and #3 were determined, study the mappings below. To generate the first mapping, the individual remainders corresponding to each composite remainder are determined by dividing each possible composite remainder by the factors of the composite polynomial. For the second mapping, the composite remainder corresponding to each set of individual remainders is determined by rearranging the first mapping.

The boxed areas of the first mapping establish the circuit connections for Example #2. The boxed areas of the second mapping establish the circuit connections for Example #3. There are other ways to establish these mappings. The method shown here has been selected for simplicity. However, in a practical sense it is limited to polynomials of a low degree.


FIRST MAPPING

    Composite    Corresponding Individual
    Remainder    Remainders r1(x), r2(x)
    0000         000  0
    0001         001  1
    0010         010  1
    0011         011  0
    0100         100  1
    0101         101  0
    0110         110  0
    0111         111  1
    1000         011  1
    1001         010  0
    1010         001  0
    1011         000  1
    1100         111  0
    1101         110  1
    1110         101  1
    1111         100  0

SECOND MAPPING

    Individual                 Corresponding
    Remainders r1(x), r2(x)    Composite Remainder
    000  0                     0000
    000  1                     1011
    001  0                     1010
    001  1                     0001
    010  0                     1001
    010  1                     0010
    011  0                     0011
    011  1                     1000
    100  0                     1111
    100  1                     0100
    101  0                     0101
    101  1                     1110
    110  0                     0110
    110  1                     1101
    111  0                     1100
    111  1                     0111

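Both mappings can be regenerated in a few lines (a sketch of ours), which also confirms the one-to-one property promised by the Chinese Remainder Theorem:

```python
def pmod(a, g):
    """Remainder of a(x) divided by g(x) over GF(2); bit i = coeff of x^i."""
    while a and a.bit_length() >= g.bit_length():
        a ^= g << (a.bit_length() - g.bit_length())
    return a

# Composite polynomial (x + 1)(x3 + x + 1); factors x3 + x + 1 and x + 1.
first = {c: (pmod(c, 0b1011), pmod(c, 0b11)) for c in range(16)}
second = {pair: c for c, pair in first.items()}

assert len(second) == 16                 # the mapping is one-to-one
assert first[0b1000] == (0b011, 1)       # row '1000' of the first mapping
assert second[(0b000, 1)] == 0b1011      # row '000 1' of the second mapping
print("mappings regenerated and verified")
```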

PROBLEMS

1. Write the sequence for the circuit below.

2. Write the polynomial for the circuit above.

3. Perform the multiplication operations below.

    (x3 + x2 + x + 1)·(x3 + 1)

    (x3 + x + 1)·(x + 1)

4. Perform the division operations below. Show the quotient and the remainder.

    (x5 + 1) divided by (x3 + x + 1)

    (x6 + x + 1) divided by (x3 + x + 1)

    (x3 + x) divided by (x3 + x + 1)

5. Determine the period of the following polynomials: x3 + 1, x3 + x2 + 1.

6. Show a circuit to multiply by (x3 + 1).

7. Show a circuit to divide by (x3 + x + 1).

8. Show a circuit to compute a remainder modulo (x3 + x2 + 1) using combinatorial logic. The input polynomial is 7 bits in length.

9. Is (x2 + x + 1) reducible?

10. Compute the reciprocal polynomial of (x4 + 1).

11. How many primitive polynomials are of degree 4?

CHAPTER 2 - ERROR DETECTION AND CORRECTION FUNDAMENTALS

2.1 DETECTION FUNDAMENTALS

MORE ON POLYNOMIAL SHIFT REGISTERS

The shift register form below is used frequently for error detection and correction. This circuit multiplies by x^m and divides by g(x), where m is the degree of g(x) and also the shift register length. g(x) is the generator polynomial of the error detection/correction code being implemented. For this example, g(x) = x3 + x + 1 and m=3.
(Figure: internal-XOR shift register multiplying by x3 and dividing by x3 + x + 1)

Two properties of this form of shift register are discussed below.
Property #1

If the shift register above is receiving a stream of bits, the last m bits (in this case three) must match the shift register contents in order for the final shift register state to be zero. This is because a difference between the input data bit and the high order shift register stage causes at least the low order stage to be loaded with '1'.

Assume an all-zeros data record. Any burst of length m or fewer bits will leave the shift register in a nonzero state. If an error burst of length greater than m bits is to leave the shift register in its zero state, the last m bits of the burst must match the shift register contents created by the error bits which preceded the last m bits of the burst.
Property #2

Assume the shift register is zero. Receiving an error burst of length m or fewer
bits has the same effect as placing the shift register at the state represented by the
sequence of error bits.
When reading an all-zeros data record, an error burst of length m or fewer bits
sets the shift register to a state on its sequence that is b shifts away from the state
representing the error burst, where b is the length of the burst.


SELECTING CHECK BITS

Property #1 implies that for all-zero data, any burst of length m or fewer bits is guaranteed to be detected. Property #2 indicates that for all-zero data, it may be possible to correct some bursts of length less than m bits by clocking the shift register along its sequence until the error burst is contained within the shift register.

Clearly, we must find a way to extend these results to cases of nonzero data if they are to be of any use. The following discussion describes intuitively how check bits must be selected so that on read, the received polynomial leaves the shift register at zero in the absence of error.
Assume a shift register configuration that premultiplies by x^m and divides by g(x). On write, after clocking for all data bits has been completed, the shift register will likely be in a nonzero state if nonzero data bits have been processed. If we transmit, as check bits following the data bits, the contents of the shift register created by processing the data bits, then on read, in the absence of error, the received data bits will create the same pattern in the shift register, and the received check bits will match this pattern, leaving the shift register in its zero state.
The concatenation of the data bits and their associated check bits is called a
codeword polynomial or simply a codeword. A codeword C(x) generated in the manner
outlined above by a shift register implementing a generator polynomial g(x) has the
property:
    C(x) MOD g(x) = 0

This is a mathematical restatement of the condition that processing a codeword
must leave the shift register in its zero state.
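The whole encoding recipe, and the codeword property above, can be sketched in software (polynomials as integers; `encode` is our own helper, not a name from the text):

```python
def pmod(a, g):
    """Remainder of a(x) divided by g(x) over GF(2); bit i = coeff of x^i."""
    while a and a.bit_length() >= g.bit_length():
        a ^= g << (a.bit_length() - g.bit_length())
    return a

def encode(data, g):
    """Append m check bits: the codeword is x^m * D(x) plus the remainder."""
    m = g.bit_length() - 1
    shifted = data << m                  # premultiply D(x) by x^m
    return shifted | pmod(shifted, g)    # check bits fill the low m positions

g = 0b1011                    # x3 + x + 1
codeword = encode(0b1101, g)  # four data bits
print(format(codeword, '07b'))          # 1101001
assert pmod(codeword, g) == 0           # C(x) MOD g(x) = 0
```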
Theorem 2.1.1. The Euclidean Division Algorithm. If D(x) and g(x) are polynomials with coefficients in a field F, and g(x) is not zero, there exist polynomials q(x) (the quotient) and r(x) (the remainder) with coefficients in F such that:

    D(x) = q(x)·g(x) + r(x)

where the degree of r(x) is less than the degree of g(x); r(x) may in fact be zero.


The Euclidean Division Algorithm provides a formal justification for the method of producing check bits outlined above. By the Euclidean Division Algorithm,

    D(x) = q(x)·g(x) + r(x)

where

    D(x) = Data polynomial
    g(x) = Generator polynomial
    q(x) = Quotient polynomial
    r(x) = Remainder polynomial

Rearranging gives

    [D(x) + r(x)]/g(x) = q(x)

This shows that in order to make the data polynomial itself divisible by g(x), r(x) would have to be EXCLUSIVE-OR-ed against D(x). However, this would modify the last m bits of the data polynomial, which is not desirable.

Appending the remainder bits to the input data bits has the effect of premultiplying D(x) by x^m and then dividing by g(x). Then by the Euclidean Division Algorithm we have

    x^m·D(x) = q(x)·g(x) + r(x)

or equivalently,

    [x^m·D(x) + r(x)]/g(x) = q(x)

This shows that if r(x) is EXCLUSIVE-OR-ed against the data polynomial premultiplied by x^m, the resulting polynomial will be divisible by g(x). This is equivalent to appending r(x) to the end of the original input data polynomial, since the coefficients of all x^i terms of x^m·D(x) are zero for i<m, where m is the degree of the generator polynomial.

The fraction of possible error bursts of length b that go undetected is:

    Pmd = 1/2^m       if b > (m+1)

    Pmd = 1/2^(m-1)   if b = (m+1)
When all errors are assumed to be possible and equally probable, Pmd is given by:

    Pmd ≈ 1/2^m
If some particular error bursts are more likely to occur than others (which is
generally the case), then the misdetection probability depends on the particular polynomial and the nature of the errors.

- 54 -

MULTIPLE-SYMBOL ERROR DETECTION

An error-detection code can be constructed from the binary BCH or Reed-Solomon codes to achieve multiple-bit or multiple-symbol error detection. See Sections 3.3 and 3.4.

CAPABILITY OF A PARTICULAR ERROR-DETECTION CODE: THE CRC-CCITT CODE

The generator polynomial for the CRC-CCITT code is:

    x16 + x12 + x5 + 1 = (x + 1)·(x15 + x14 + x13 + x12 + x4 + x3 + x2 + x + 1)

The code's guaranteed capability as determined by its structure is defined below:

a) Detects all occurrences of an odd number of bits in error. (Theorem 2.1.3)

b) Detects all single-, double- and triple-bit errors if the record length (including check bits) is no greater than 32,767 bits. (Theorem 2.1.5)

c) Detects all single-burst errors of sixteen bits or less. (Theorem 2.1.6)

d) Detects 99.99695% of all possible bursts of length 17, and 99.99847% of all possible longer bursts. (Theorem 2.1.8) This property assumes that all errors are possible and equally probable.
The CRC-CCITT polynomial has some double-burst detection capability when used
with short records. This capability cannot be determined by its structure. Computer
evaluation is required.
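Property (d) can be reproduced by exhaustive search (a sketch of ours): among the 2^15 bursts of length exactly 17, only the generator polynomial itself leaves a zero remainder, so the detected fraction is 1 - 2^(-15), i.e. 99.99695%.

```python
G = 0b10001000000100001      # x16 + x12 + x5 + 1

def pmod(a, g):
    """Remainder of a(x) divided by g(x) over GF(2)."""
    while a and a.bit_length() >= g.bit_length():
        a ^= g << (a.bit_length() - g.bit_length())
    return a

undetected = 0
for middle in range(1 << 15):
    burst = (1 << 16) | (middle << 1) | 1     # burst of length exactly 17
    if pmod(burst, G) == 0:                   # zero remainder -> missed
        undetected += 1

print(undetected)   # only one such burst (the generator polynomial itself)
```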
When the code is used with a 2088-bit record, it has a guaranteed detection capability for the following double bursts:

    Length of      Length of
    First Burst    Second Burst
    1              1 to 6
    2              1 to 5
    3              1 to 4
    4              1 to 4
    5              1 to 2
    6              1

2.2 CORRECTION FUNDAMENTALS

This section introduces single-bit and single-burst error correction from the viewpoint of shift register sequences. The examples given use very short records and small numbers of check bits. However, the same techniques apply to longer records and greater numbers of check bits as well.

SINGLE-BIT ERROR CORRECTION

The circuit shown below can be used to correct a single-bit error in a seven-bit record (four data bits and three check bits). Data bits are numbered d3 through d0. Check bits are numbered p2 through p0. Data and check bits are transmitted and received in the following order:

    d3 d2 d1 d0 p2 p1 p0

Both the encode and decode shift registers premultiply by x^m and divide by g(x). Again m is three and g(x) = x3 + x + 1.

ENCODE CIRCUIT

(Figure: encode circuit; write data d3 d2 d1 d0 pass through a MUX, after which check bits p2 p1 p0 are appended)

For encoding, the shift register is first cleared. Data bits d3, d2, d1, and d0 are processed and simultaneously passed through the MUX to be sent to the storage device or channel.

After the data bits are processed, the gate is disabled and the MUX is switched from the data bits to the high order shift register stage. The shift register contents are then sent to the storage device or channel as check bits.
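A bit-serial model of the encoder (a sketch of ours, using the premultiply-by-x^m register described earlier):

```python
def encode(data_bits, g=0b1011):
    """Shift-register encoder: premultiply by x^m, divide by g(x);
    the final register contents become the check bits."""
    m = g.bit_length() - 1
    mask = (1 << m) - 1
    low = g & mask
    reg = 0
    for d in data_bits:                       # process d3, d2, d1, d0
        fb = d ^ (reg >> (m - 1))
        reg = ((reg << 1) & mask) ^ (low if fb else 0)
    checks = [(reg >> i) & 1 for i in range(m - 1, -1, -1)]   # p2, p1, p0
    return data_bits + checks

print(encode([1, 1, 0, 1]))   # [1, 1, 0, 1, 0, 0, 1]
```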


DECODE CIRCUIT

(Figure: decode circuit; raw data enters a 7-bit FIFO buffer and the syndrome shift register, and corrected data leaves the buffer)

Decoding takes place in two cycles: the buffer load cycle and the buffer unload cycle. A syndrome is generated by the shift register circuit as the buffer is loaded. Correction takes place as the buffer is unloaded. The shift register is cleared just prior to the buffer load cycle.

HOW CORRECTION WORKS

Since g(x) is primitive, it has two sequences: a sequence of length seven and the zero sequence of length one.

    001, 010, 100, 011, 110, 111, 101, 001, ...

Assume an all-zeros data record, and assume data bit d1 is in error. The contents of the decode shift register during buffer load would be as shown below.

    Clock         Error    Shift Register
    Number        Bits     Contents
    Initialize             000
    d3                     000
    d2                     000
    d1            1        011
    d0                     110
    p2                     111
    p1                     101
    p0                     001

Notice that after the error is processed, the shift register clocks through its sequence until the end of the record is reached. The final shift register state for this example is '001'. This is the syndrome.


The syndrome remains in the shift register as the buffer unload cycle begins. The shift register is clocked as data bits are unloaded from the buffer. As each clock occurs, the shift register clocks through its sequence. Simultaneously, the gate monitors the shift register contents for the '100' state. Correction takes place on the next clock after the '100' state is detected.

The shift register contents during the buffer unload cycle are shown below.

    Clock         Shift Register
    Number        Contents
    After Read    001
    d3            010
    d2            100 *
    d1            011 **
    d0            110
    p2            111
    p1            101
    p0            001

* The three-input gate enables after this clock because the '100' state is detected.

** Correction takes place on this clock.
Consider what happens on the shift register sequence during the buffer load cycle.

    011   d1 clock: forces the S/R to this point on the sequence.
    110   d0 clock: advances the S/R to this point on the sequence.
    111   p2 clock:     "
    101   p1 clock:     "
    001   p0 clock: the final state of the S/R = the syndrome.
- 58-

Since the data record is all zeros, the shift register remains all zeros until the error bit d1 is clocked. The shift register is then set to the '011' state. As each new clock occurs, the shift register advances along its sequence. There is an advance for d0, p2, p1, and p0. After the p0 clock, the shift register is at state '001'. This is the syndrome for the assumed error.

When the error bit occurs, it has the same effect on the shift register as loading the shift register with '100' and clocking once. Regardless of where the error occurs, the first nonzero state of the shift register is '011'.

Error displacement from the end of the record is the number of states between the '100' state and the syndrome. It is determined by the number of times the shift register is clocked between the error occurrence and the end of record.

Consider what happens on the shift register sequence during the buffer unload cycle. The number of states between the syndrome and the '100' state represents the error displacement from the front of the record. To determine when to correct, it is sufficient to monitor the shift register for state '100'. Correction occurs on the next clock after this state is detected.

    001   The syndrome: initial state of the S/R for unload.
    010   d3 clock: advances the S/R to this point on the sequence.
    100   d2 clock: the gate is enabled by this S/R state.
    011   d1 clock: correction takes place.
    110
    111
    101
Consider the case when the data is not all zero. The check bits would have been selected on write such that when the record (data plus check bits) is read without error, a syndrome of zero results. When an error occurs, the operation differs from the all-zeros data case only while the syndrome is being generated. A given error results in the same syndrome, regardless of data content, because the code is linear. Once a syndrome is computed, the operation is the same as previously described for the all-zeros data case.

The code discussed above is a single-error-correcting (SEC) Hamming code. It can be implemented with combinatorial logic as well as sequential logic.
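The two-cycle decode can be modeled end to end (a sketch of ours; the register arithmetic mirrors the circuit, and the '100' monitor decides when to invert a bit on unload):

```python
def decode(received, g=0b1011):
    """Syndrome decode for the SEC Hamming code above: generate the syndrome
    on 'buffer load', then clock along the sequence on 'buffer unload' and
    correct the bit clocked out right after the '100' state is seen."""
    m = g.bit_length() - 1
    mask = (1 << m) - 1
    low = g & mask
    reg = 0
    for bit in received:                         # buffer load: build syndrome
        fb = bit ^ (reg >> (m - 1))
        reg = ((reg << 1) & mask) ^ (low if fb else 0)
    corrected = list(received)
    for i in range(len(received)):               # buffer unload
        if reg == 1 << (m - 1):                  # '100' seen: correct on this clock
            corrected[i] ^= 1
        msb = reg >> (m - 1)
        reg = ((reg << 1) & mask) ^ (low if msb else 0)
    return corrected

codeword = [1, 1, 0, 1, 0, 0, 1]                 # d3 d2 d1 d0 p2 p1 p0
garbled = codeword[:]
garbled[2] ^= 1                                  # flip data bit d1
assert decode(garbled) == codeword               # single-bit error corrected
assert decode(codeword) == codeword              # error-free record unchanged
print("single-bit error at d1 corrected")
```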

SINGLE-BIT ERROR CORRECTION AND DOUBLE-BIT ERROR DETECTION

If an (x + 1) factor is combined with the polynomial of the previous example, the resulting polynomial

    g(x) = (x + 1)·(x3 + x + 1) = x4 + x3 + x2 + 1

can be used to correct single-bit errors and detect double-bit errors on seven-bit records (three data bits and four check bits). Double-bit errors are detected regardless of the separation between the two error bits.

g(x) has four sequences: two sequences of length one and two sequences of length seven.

    SEQ A: 0001, 0010, 0100, 1000, 1101, 0111, 1110, 0001, ...

    SEQ B: 0011, 0110, 1100, 0101, 1010, 1001, 1111, 0011, ...

plus the two length-one sequences 0000 and 1011.

If a single-bit error occurs, the syndrome will be on sequence A. If a double-bit
error occurs, the syndrome will be on sequence B. This gives the code the ability to
detect double-bit errors.
The circuit below could be used for decoding. Encoding would be performed with a shift register circuit premultiplying by x^m and dividing by g(x).

(Figure: decode circuit with a 7-bit FIFO buffer, gate A monitoring the shift register state, and gate B blocking the feedback)

Gate A detects the '1000' state on the clock prior to the clock that corrects the error. Gate B blocks the shift register feedback on the clock following detection of the '1000' state. This causes the shift register to be cleared.

If a double-bit error occurs, the syndrome is on sequence B. The shift register travels around sequence B as it is clocked during the buffer unload cycle. Since the '1000' state is not on this sequence, gate A will not enable and correction will not take place. Since correction does not occur, the shift register remains nonzero. Since the shift register is nonzero at the end of the buffer unload cycle, a double error is assumed.

If three bit errors occur, the syndrome will be on sequence A. During the buffer unload cycle, the shift register state '1000' is detected and a data bit is falsely cleared or set. This is miscorrection because the bit affected is not one of the bits in error.

This code corrects a single-bit error. It detects all occurrences of an even number of bits in error. When more than one bit is in error and the total number of bits in error is odd, miscorrection results.

This code is a single-error-correcting (SEC), double-error-detecting (DED) Hamming code. It can be implemented with combinatorial logic or with sequential logic.

BURST LENGTH-TWO CORRECTION

The polynomial of the previous example can also be used for burst length-two correction. The circuit is identical except that AND gate A detects '1x00'.

If a burst of length one occurs, the syndrome will be on sequence A. Gate A enables on state '1000'. If a burst of length two occurs, the syndrome will be on sequence B. Gate A enables on state '1100'. When the shift register is clocked from the '1100' state it goes to '1000', due to the action of gate B. Gate A remains enabled. On the next clock, the shift register is cleared due to the action of gate B. Gate A is enabled for two consecutive clock times and therefore two adjacent bits are corrected.


CORRECTION OF LONGER BURSTS

The concepts discussed above can be extended to correction of longer bursts as well.

To construct such a code, select a reducible or irreducible polynomial meeting the following requirements.

1. Each correctable burst must be on a separate sequence.

2. The sequence length must be equal to or greater than the record length (in bits, including check bits) for sequences containing a correctable burst.

3. Any burst that is to be guaranteed detectable must not be on a sequence containing a correctable burst.

Assume a polynomial with multiple sequences and that the bursts '1', '11', '101', and '111' are all on separate sequences of equal length. There may be other sequences as well:

    0···0001
    0···0011
    0···0101
    0···0111

Such a code has at least the following capability: Its correction span can be selected to be one, two, or three bits. In any case, its detection span is guaranteed to be at least three.

Primitive polynomials can also be used for single-burst correction. In this case, the polynomial requirements are:

1. The polynomial period must be equal to or greater than the record length (in bits, including check bits).

2. Correctable bursts must be separated from each other on the sequence by a number of states equal to or greater than the record length (in bits, including check bits).

3. Any burst that is to be guaranteed detectable must be separated from correctable bursts by a number of states equal to or greater than the record length (in bits, including check bits).

It is also possible to state more general requirements for a single-burst correcting code. Any polynomial satisfying either of the two previous sets of requirements would satisfy the more general requirements. Many other polynomials would meet the general requirements as well.


The more general requirements for a single-burst correcting code are:
1. If more than one correctable burst is on a given sequence, these bursts must
be separated by a number of states equal to or greater than the record
length (in bits, including check bits).
2. If one or more bursts that are to be guaranteed detectable are on a sequence
with one or more correctable bursts, they must be separated from each correctable burst by a number of states equal to or greater than the record
length (in bits, including check bits).
3. The sequence length must be equal to or greater than the record length (in
bits, including check bits) for sequences containing a correctable burst.
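For small polynomials these requirements can be checked directly by walking the state sequence. The Python sketch below is an illustrative simulation (not part of the original text); it walks the x^t MOD g(x) sequence of the reducible g(x) = x^4 + x^3 + x^2 + 1 used in later examples. A burst that is never reached from the burst '1' lies on a separate sequence, which is what the correction requirements demand.

```python
# Walk the x^t MOD g(x) state sequence and locate bursts on it.
# Polynomials are bitmasks: bit i is the coefficient of x^i.
# Illustrative sketch; g(x) = x^4 + x^3 + x^2 + 1 from the text's examples.

def poly_mod_step(state, g, deg):
    """One shift of the x^t MOD g(x) sequence (multiply by x, reduce)."""
    state <<= 1
    if state >> deg:                # degree overflow: subtract (XOR) g(x)
        state ^= g
    return state

def sequence_index(burst, g, deg):
    """Return t such that x^t MOD g(x) equals the burst pattern, or None
    if the burst is not on the sequence that contains '1'."""
    state, t = 1, 0
    while True:
        if state == burst:
            return t
        state = poly_mod_step(state, g, deg)
        t += 1
        if state == 1:              # sequence wrapped without finding burst
            return None

g, deg = 0b11101, 4                 # g(x) = x^4 + x^3 + x^2 + 1
i1 = sequence_index(0b1,  g, deg)   # burst '1'  -> index 0
i2 = sequence_index(0b11, g, deg)   # burst '11' -> None: separate sequence
```

Because this g(x) is reducible, '1' and '11' fall on different sequences, so a code built on it can keep both bursts correctable, as described above.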

ACHIEVING DOUBLE-BURST DETECTION
In order for a computer-generated code to have double-burst detection capability,
the following inequality must hold for all i, j, and k such that 0 ≤ i,j,k < n and i ≠ j:

[x^i·b1(x) + x^j·b2(x)] MOD g(x) ≠ [x^k·b3(x)] MOD g(x)

Where
n is the record length (in bits) including check bits
d is the double-burst detection span
s is the single-burst correction span
b1(x) is any burst of length L1 such that 0 < L1 ≤ d
b2(x) is any burst of length L2 such that 0 < L2 ≤ d
b3(x) is any burst of length L3 such that 0 < L3 ≤ s
g(x) is the code generator polynomial

Additionally, if i>j then we require i>(j+s-L1) and i≥(j+L2), while if i<j
then we require i<(j-s+L2) and i≤(j-L1).

DST uses special hardware and software to find codes that satisfy these requirements.
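For small codes the requirement can also be checked exhaustively in software. The Python sketch below is illustrative only (it omits the separation side conditions on i and j, so it is stricter than necessary); it tests every placement of two bursts of span d against every placement of one burst of span s.

```python
# Exhaustive check of the double-burst detection inequality:
#   [x^i*b1(x) + x^j*b2(x)] MOD g(x) != [x^k*b3(x)] MOD g(x)
# for all 0 <= i,j,k < n with i != j.  Polynomials are bitmasks.
# Illustrative sketch; the overlap side conditions are omitted for brevity.

def poly_mod(p, g):
    """Reduce polynomial p modulo g over GF(2)."""
    dg = g.bit_length() - 1
    while p.bit_length() - 1 >= dg:
        p ^= g << (p.bit_length() - 1 - dg)
    return p

def bursts(max_len):
    """All bursts of length 1..max_len (first and last bits are 1)."""
    out = [1]
    for length in range(2, max_len + 1):
        for mid in range(1 << (length - 2)):
            out.append((1 << (length - 1)) | (mid << 1) | 1)
    return out

def double_burst_detects(g, n, d, s):
    """True if no double burst of span d shares a syndrome with any
    correctable single burst of span s."""
    singles = {poly_mod(b3 << k, g) for k in range(n) for b3 in bursts(s)}
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            for b1 in bursts(d):
                for b2 in bursts(d):
                    if poly_mod((b1 << i) ^ (b2 << j), g) in singles:
                        return False
    return True
```

For example, with g(x) = x^4 + x^3 + x^2 + 1 over a 7-bit record, no pair of single-bit errors collides with a correctable single-bit error, so `double_burst_detects(0b11101, 7, 1, 1)` is True.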

- 63 -

SINGLE-BURST CORRECTION VIA STRUCTURED CODES
Fire codes achieve single-burst correction capability by their structure. These
codes are generated by the general polynomial form:

g(x) = c(x)·p(x) = (x^c + 1)·p(x)

where p(x) is any irreducible polynomial of degree z and period e, and e does not divide
c. These codes are capable of correcting single bursts of length b and detecting bursts
of length d ≥ b provided z ≥ b and c ≥ (d+b-1). The maximum record length in bits, including
check bits, is the least common multiple (LCM) of e and c. This is also the period of
the generator polynomial g(x).
The structure of Fire code polynomials causes them to have multiple sequences.
Each correctable burst is on a separate sequence. Burst error correction with polynomials of this type was discussed earlier in this section. See Section 3.1 for more information on Fire codes.
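The structural requirements above can be checked mechanically. The Python sketch below is illustrative (the example parameter values are assumptions, not from the text); it validates the constraints and returns the maximum record length LCM(e, c).

```python
# Fire code parameter check for g(x) = (x^c + 1) * p(x), where p(x) is
# irreducible of degree z and period e.  Illustrative sketch.
from math import gcd

def fire_code_limits(c, z, e, b, d):
    """Return the maximum record length (bits, including check bits) if the
    construction supports correction span b and detection span d, else None."""
    if c % e == 0:                  # e must not divide c
        return None
    if z < b or c < d + b - 1:      # require z >= b and c >= d + b - 1
        return None
    return c * e // gcd(c, e)       # period of g(x) = LCM(e, c)
```

For instance, with p(x) = x^3 + x + 1 (z=3, e=7) and c=5, a correction span of b=2 and detection span of d=4 are supported over records up to LCM(7, 5) = 35 bits.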

SINGLE-BURST CORRECTION VIA COMPUTER-GENERATED CODES
The single-burst correction capability of computer-generated codes is achieved by
testing.
These codes are based on the fact that if a large number of polynomials of a particular degree are picked at random, some will meet previously defined specifications,
provided these specifications are within certain bounds.
There are equations that can be used to predict the probability of success when
searching polynomials of a particular degree against particular criteria.
The advantage these codes have over Fire codes is less pattern sensitivity. If
miscorrection is to be avoided on certain short double bursts, this can be included as an
additional criterion for the computer search. See Section 3.2 for more information on
computer-generated codes.

SINGLE-BURST DETECTION SPAN FOR A BURST-CORRECTING CODE
Let n represent the record length in bits (including check bits). Let m represent
the shift register length in bits. Assume an all-zeros data record. Assume a shift
register configuration that premultiplies by x^m and divides by g(x).

An error burst, m bits or less in length, has the same effect as loading the shift
register with the burst. Therefore, a particular error burst will place the shift register
at a particular point in the sequence.
If the point in the sequence is far away from any correctable pattern, the shift
register will not sequence to a correctable pattern in n shifts and there is no possibility
of miscorrection. However, if the particular error burst places the shift register at a
point in the sequence that is near a correctable pattern, the correctable pattern may be
detected in n shifts and miscorrection will result. It follows that the error bursts of
length m or less that are exposed to miscorrection are those bursts that force
the shift register to points in the sequence near correctable patterns.

- 64 -

The result of having a particular pattern (or state) in the shift register is the
same as if the same pattern were an input-error burst. It follows that the list of shift
register states near the correctable patterns also represents a list of error bursts, of
length m or less, that may result in miscorrection.
The search software shifts a simulated shift register more than n times forward
and reverse from each correctable pattern. After each shift, the burst length in the
shift register is determined. One less than the minimum burst length found over the
entire process represents the single-burst detection span.
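The exposure test itself is easy to simulate. The Python sketch below is illustrative (the polynomial and the "correctable pattern" set are assumptions chosen for demonstration); it loads a burst into a simulated register and reports whether a correctable pattern is reached within n shifts.

```python
# Test whether a given error burst is exposed to miscorrection: load the
# simulated register with the burst and see if a correctable pattern is
# reached within n shifts.  Illustrative sketch; g(x) = x^4 + x + 1 and the
# correctable set below are assumptions for demonstration.

def exposed_to_miscorrection(burst, g, deg, n, correctable):
    """Return the shift count at which a correctable pattern appears,
    or None if the burst is safely detected within n shifts."""
    state = burst
    for shift in range(1, n + 1):
        state <<= 1                 # multiply by x
        if state >> deg:
            state ^= g              # reduce modulo g(x)
        if state in correctable:
            return shift            # miscorrection possible here
    return None

# Example: single-bit pattern justified to the high end of the register
# (an assumed decoder convention), 5-bit record, g(x) = x^4 + x + 1.
g, deg, n = 0b10011, 4, 5
hit  = exposed_to_miscorrection(0b0100, g, deg, n, {0b1000})  # exposed
safe = exposed_to_miscorrection(0b0011, g, deg, n, {0b1000})  # detected
```

The bursts for which this function returns a shift count are exactly the "points in the sequence near correctable patterns" described above.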

PROBABILITY OF MISCORRECTION

Let
b = correction span
n = record length including check bits
m = number of check bits
The total number of possible syndromes is then 2^m. The total number of valid syndromes must be equal to the total number of correctable bursts, which is n·2^(b-1).
Assume that all error bursts are possible and equally probable and that when
random bursts are received, one syndrome is just as likely as another. If all syndromes
have equal probability and there are n·2^(b-1) valid syndromes out of 2^m total possible
syndromes, then the probability of miscorrection for bursts exceeding the code's guaranteed detection capability is:

Pmc = n·2^(b-1) / 2^m

This equation provides a measure for comparing the effect that record length,
correction span, and number of check bits have on miscorrection probability.
One must be careful using this equation. A very simple assumption is made, which
is that all error bursts are possible and equally probable. This is unlikely to be the
case except for particular types of errors such as synchronization errors. To accurately
calculate the probability of miscorrection requires a detailed knowledge of the types of
errors that occur and detailed information on the capability and characteristics of the
polynomial.
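The formula transcribes directly into code; the Python one-liner below is illustrative, and the equal-probability caveat above applies to any number it produces.

```python
# Miscorrection probability for bursts beyond the guaranteed detection
# capability, under the all-bursts-equally-probable assumption:
#   Pmc = n * 2^(b-1) / 2^m

def miscorrection_probability(n, b, m):
    """n: record length in bits (incl. check bits),
    b: correction span in bits, m: number of check bits."""
    return n * 2 ** (b - 1) / 2 ** m
```

For example, a 7-bit record with single-bit correction and 3 check bits gives Pmc = 7/8, which is why real implementations add excess redundancy.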

- 65 -

PATTERN SENSITIVITY OF A BURST-CORRECTING CODE

Some burst-correcting codes have pattern sensitivity. The Fire code, for example,
has a higher miscorrection probability on short double bursts than on all possible error
bursts.
Pattern sensitivity is discussed in greater detail in Sections 4.4 and 4.6.

- 66 -

2.3 DECODING FUNDAMENTALS
The following pages show various examples of decoding single-burst-error-correcting codes. These points will help in understanding the examples.
1.  Forward displacements are counted from the first data bit to the first bit in error.
    The first data bit is counted as zero.

2.  Reverse displacements are counted from the last check bit to the first bit in error.
    The last check bit is counted as zero.

3.  If a negative displacement is computed, add the record length (seven in all
    examples) to the displacement. If a displacement greater than the record length
    minus one is computed, subtract the record length from the displacement.

4.  Shift register states are shown after the indicated clock.

5.  For all examples, the final error pattern is in the register from left to right. The
    left-most bit of the pattern represents the first bit in error from the front of the
    record.

6.  In these simple examples, check bits are corrected as well as data bits.

7.  In these examples, only the read decode circuit is shown. The write circuit always
    premultiplies by x^m and divides by g(x).

8.  Each suffix A example is the same as the prior example, except that a different
    error has been assumed.

9.  In examples 1 through 4A, it is not necessary to have additional hardware that
    detects shift register nonzero at the end of a read. In examples 5 through 8A,
    this additional hardware is required.

10. In these simple examples, if an error occurs that exceeds the correction capability
    of the code, miscorrection results. In a real-world implementation, excess redundancy would be added to keep miscorrection probability low.

11. The following abbreviations are used in the decoding examples.
    CLK  - Clock
    CNT  - Count
    ERR  - Error
    FIFO - First in, first out
    S/R  - Shift register
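The displacement adjustment in point 3 is simply reduction modulo the record length; a minimal illustrative Python sketch:

```python
# Normalize a computed displacement into the range 0..n-1, where n is the
# record length (seven in these examples).  A negative value gains n; a
# value past n-1 loses n.  Equivalent to reduction modulo n.

def normalize_displacement(disp, n=7):
    if disp < 0:
        disp += n
    if disp > n - 1:
        disp -= n
    return disp
```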

- 67 -

Example #1:
- Correction in hardware, forward clocking.
- Single-bit-correcting code, single-bit error, data all zeros.
- Spaced data blocks, on-the-fly correction (data delay = 1 block).
- Internal-XOR form of shift register.
- g(x) = x3 + x + 1.
- Detect zeros in right-most bits of shift register.
- Premultiply by x3 .

[Circuit diagram: raw data is loaded into a 7-bit FIFO buffer while the shift register computes the syndrome; gate A corrects the data as the buffer is unloaded.]

READ CYCLE (BUFFER LOAD)

  CLK   ERR   S/R
  d3     0    000
  d2     1    011
  d1     0    110
  d0     0    111
  p2     0    101
  p1     0    001
  p0     0    010

CORRECT CYCLE (BUFFER UNLOAD)

  CLK   S/R
   -    010   *
  d3    100   **
  d2    011   ***
  d1    110
  d0    111
  p2    101
  p1    001
  p0    010

*   Shift register contents at start of correction cycle.
**  Gate A enables after the d3 clock.
*** Correction takes place on d2 clock.

- 68 -

Example #2:
- Correction in hardware, forward clocking.
- Single-bit-correcting code, single-bit error, data all zeros.
- Spaced data blocks, on-the-fly correction (data delay = 1 block).

- Internal-XOR form of shift register.
- g(x) = x3 + x + 1.
- Detect zeros in left-most bits of shift register.
- No premultiplication.

[Circuit diagram: raw data is loaded into a 7-bit FIFO buffer while the shift register computes the syndrome; gate A corrects the data during the buffer unload cycle.]

READ CYCLE (BUFFER LOAD)

  CLK   ERR   S/R
  d3     0    000
  d2     1    001
  d1     0    010
  d0     0    100
  p2     0    011
  p1     0    110
  p0     0    111

CORRECT CYCLE (BUFFER UNLOAD)

  CLK   S/R
   -    111   *
  d3    101
  d2    001   ** ***
  d1    010
  d0    100
  p2    011
  p1    110
  p0    111

*   Shift register contents at start of correction cycle.
**  Gate A enables after the d2 clock.
*** Correction takes place on d2 delayed clock.

- 69 -

Example #3:
- Correction in hardware, forward clocking.
- Burst length-two correcting code, two-adjacent error, data all zeros.
- Spaced data blocks, on-the-fly correction (data delay = 1 block).
- Internal-XOR form of shift register.
- g(x) = (x + 1)·(x3 + x + 1) = x4 + x3 + x2 + 1.

- Detect zeros in right-most bits of shift register.
- Premultiply by x4.

[Circuit diagram: raw data is loaded into a 7-bit FIFO buffer while the shift register computes the syndrome; gates A and B control correction as the buffer is unloaded.]

READ CYCLE (BUFFER LOAD)

  CLK   ERR   S/R
  d2     0    0000
  d1     1    1101
  d0     1    1010
  p3     0    1001
  p2     0    1111
  p1     0    0011
  p0     0    0110

CORRECT CYCLE (BUFFER UNLOAD)

  CLK   S/R
   -    0110  *
  d2    1100  **
  d1    1000  ***
  d0    0000  ****
  p3    0000
  p2    0000
  p1    0000
  p0    0000

*    Shift register contents at start of correction cycle.
**   Gates A and B enable after the d2 clock.
***  Bit d1 is corrected on the d1 clock.
**** Bit d0 is corrected on the d0 clock.

- 70 -


Example #3A:
- Correction in hardware, forward clocking.
- Burst length-two correcting code, single-bit error, data all zeros.
- Spaced data blocks, on-the-fly correction (data delay = 1 block).
- Internal-XOR form of shift register.
- g(x) = (x + 1)·(x3 + x + 1) = x4 + x3 + x2 + 1.

- Detect zeros in right-most bits of shift register.
- Premultiply by x4.

[Circuit diagram: raw data is loaded into a 7-bit FIFO buffer while the shift register computes the syndrome; gates A and B control correction as the buffer is unloaded.]

READ CYCLE (BUFFER LOAD)

  CLK   ERR   S/R
  d2     0    0000
  d1     0    0000
  d0     1    1101
  p3     0    0111
  p2     0    1110
  p1     0    0001
  p0     0    0010

CORRECT CYCLE (BUFFER UNLOAD)

  CLK   S/R
   -    0010  *
  d2    0100  **
  d1    1000  ***
  d0    0000  ****
  p3    0000
  p2    0000
  p1    0000
  p0    0000

*    Shift register contents at start of correction cycle.
**   Gate A enables after the d2 clock.
***  Gate B enables after the d1 clock. No correction takes place on the d1
     clock because gate B is disabled at the time of the clock.
**** Bit d0 is corrected on the d0 clock.

- 71 -

Example #4:
- Correction in hardware, forward clocking.
- Burst length-two correcting code, two adjacent error, data all zeros.
- Consecutive data blocks, on-the-fly correction (delay = 1 block).
- Internal-XOR form of shift register.
- g(x) = (x + 1)· (x3 + x + 1) = x4 + x3 + x2 + 1.
- Detect zeros in right-most bits of shift register.
- Premultiply by x4.

[Circuit diagram: raw data is loaded into a 7-bit FIFO buffer while the shift register computes the syndrome; gates A and B control correction as the buffer is unloaded.]

READ CYCLE (BUFFER LOAD)

  CLK   ERR   S/R
  d2     0    0000
  d1     1    1101
  d0     1    1010
  p3     0    1001
  p2     0    1111
  p1     0    0011
  p0     0    0110

CORRECT CYCLE (BUFFER UNLOAD)

  CLK   S/R
   -    0110  *
  d2    1100  **
  d1    1000  ***
  d0    0000  ****
  p3    0000
  p2    0000
  p1    0000
  p0    0000

*    Shift register contents at start of correction cycle.
**   Gates A and B enable after the d2 clock.
***  Bit d1 is corrected on the d1 clock.
**** Bit d0 is corrected on the d0 clock.
- 72 -

Example #4A:
- Correction in hardware, forward clocking.
- Burst length-two correcting code, single-bit error, data all zeros.
- Consecutive data blocks, on-the-fly correction (delay = 1 block).
- Internal-XOR form of shift register.
- g(x) = (x + 1)·(x3 + x + 1) = x4 + x3 + x2 + 1.

- Detect zeros in right-most bits of shift register.
- Premultiply by x4.

[Circuit diagram: raw data is loaded into a 7-bit FIFO buffer while the shift register computes the syndrome; gates A and B control correction as the buffer is unloaded.]

READ CYCLE (BUFFER LOAD)

  CLK   ERR   S/R
  d2     0    0000
  d1     1    1101
  d0     0    0111
  p3     0    1110
  p2     0    0001
  p1     0    0010
  p0     0    0100

CORRECT CYCLE (BUFFER UNLOAD)

  CLK   S/R
   -    0100  *
  d2    1000  **
  d1    0000  ***
  d0    0000
  p3    0000
  p2    0000
  p1    0000
  p0    0000

*   Shift register contents at start of correction cycle.
**  Gate B enables after the d2 clock.
*** Bit d1 is corrected on the d1 clock.

- 73 -

Example #5:
- Correction in hardware, forward clocking, software assist.
- Burst length-two correcting code, two adjacent error, data all zeros.
- Time delay required when an error occurs.
- Internal-XOR form of shift register.
- g(x) = (x + 1)· (x3 + x + 1) = x4 + x3 + x2 + 1.
- Detect zeros in right-most bits of shift register.
- Premultiply by x4.
[Circuit diagram: raw data is loaded into a RAM buffer while the shift register computes the syndrome; gate B drives a µP sample line for displacement calculation and an ECC error flag to the µP.]

SOFTWARE CORRECTION ALGORITHM
1. Clock the shift register in a software loop until high output on gate B.
2. Forward displacement to first bit in error is clock count plus one.
3. Pattern is in left-most two bits of shift register.
4. Use pattern and displacement to correct RAM buffer.

READ CYCLE (BUFFER LOAD)

  CLK   ERR   S/R
  d2     0    0000
  d1     0    0000
  d0     0    0000
  p3     0    0000
  p2     1    1101
  p1     1    1010
  p0     0    1001

CORRECT CYCLE (CORRECT BUFFER)

  SOFTWARE CLK CNT   S/R
         -           1001  *
         0           1111
         1           0011
         2           0110
         3           1100  **

*  Shift register contents at start of software algorithm.
** Gate B enables, software stops clocking.

- 74 -

Example #5A:
- Correction in hardware, forward clocking, software assist.
- Burst length-two correcting code, single-bit error, data all zeros.
- Time delay required when an error occurs.
- Internal-XOR form of shift register.
- g(x) = (x + 1)·(x3 + x + 1) = x4 + x3 + x2 + 1.

- Detect zeros in right-most bits of shift register.
- Premultiply by x4.

[Circuit diagram: raw data is loaded into a RAM buffer while the shift register computes the syndrome; gates A and B drive a µP sample line for displacement calculation and an ECC error flag to the µP.]

SOFTWARE CORRECTION ALGORITHM
1. Clock the shift register in a software loop until high output on gate B.
2. Forward displacement to first bit in error is clock count plus one.
3. Pattern is in left-most two bits of shift register.
4. Use pattern and displacement to correct RAM buffer.

READ CYCLE (BUFFER LOAD)

  CLK   ERR   S/R
  d2     0    0000
  d1     0    0000
  d0     0    0000
  p3     0    0000
  p2     1    1101
  p1     0    0111
  p0     0    1110

CORRECT CYCLE (CORRECT BUFFER)

  SOFTWARE CLK CNT   S/R
         -           1110  *
         0           0001
         1           0010
         2           0100
         3           1000  **

*  Shift register contents at start of software algorithm.
** Gate B enables, software stops clocking.

- 75 -

Example #6:
- Correction in hardware, forward clocking, software assist.
- Burst length-two correcting code, two adjacent error, data all zeros.
- Time delay required when an error occurs.
- Internal-XOR form of shift register.
- g(x) = (x + 1)·(x3 + x + 1) = x4 + x3 + x2 + 1.

- Detect zeros in left-most bits of shift register.
- No premultiplication.

[Circuit diagram: raw data is loaded into a RAM buffer while the shift register computes the syndrome; gate A drives a µP sample line for displacement calculation and an ECC error flag to the µP.]

SOFTWARE CORRECTION ALGORITHM
1. Clock the shift register in a software loop until high output on gate A.
2. Forward displacement to first bit in error is clock count minus one.
3. Pattern is in right-most two bits of shift register.
4. Use pattern and displacement to correct RAM buffer.

READ CYCLE (BUFFER LOAD)

  CLK   ERR   S/R
  d2     0    0000
  d1     0    0000
  d0     1    0001
  p3     1    0011
  p2     0    0110
  p1     0    1100
  p0     0    0101

CORRECT CYCLE (CORRECT BUFFER)

  SOFTWARE CLK CNT   S/R
         -           0101  *
         0           1010
         1           1001
         2           1111
         3           0011  **

*  Shift register contents at start of software algorithm.
** Gate A enables, software stops clocking.

- 76 -

Example #6A:
- Correction in hardware, forward clocking, software assist.
- Burst length-two correcting code, single-bit error, data all zeros.
- Time delay required when an error occurs.
- Internal-XOR form of shift register.
- g(x) = (x + 1)· (x3 + x + 1) = x4 + x3 + x2 + 1.
- Detect zeros in left-most bits of shift register.
- No premultiplication.
[Circuit diagram: raw data is loaded into a RAM buffer while the shift register computes the syndrome; gate A drives a µP sample line for displacement calculation and an ECC error flag to the µP.]

SOFTWARE CORRECTION ALGORITHM
1. Clock the shift register in a software loop until high output on gate A.
2. Forward displacement to first bit in error is clock count minus one.
3. Pattern is in right-most two bits of shift register.
4. Use pattern and displacement to correct RAM buffer.

READ CYCLE (BUFFER LOAD)

  CLK   ERR   S/R
  d2     0    0000
  d1     1    0001
  d0     0    0010
  p3     0    0100
  p2     0    1000
  p1     0    1101
  p0     0    0111

CORRECT CYCLE (CORRECT BUFFER)

  SOFTWARE CLK CNT   S/R
         -           0111  *
         0           1110
         1           0001
         2           0010  **

*  Shift register contents at start of software algorithm.
** Gate A enables, software stops clocking.

- 77 -

Example #7:
- Correction in hardware, reverse clocking, software assist.
- Burst length-two correcting code, two adjacent error, data all zeros.
- Time delay required when an error occurs.
- Internal-XOR form of shift register.
- g(x) = (x + 1)· (x3 + x + 1) = x4 + x3 + x2 + 1.
- Detect zeros in right-most bits of shift register.
- Premultiply by x4.

[Circuit diagram: raw data is loaded into a RAM buffer while the shift register computes the syndrome; gate A drives a µP sample line for displacement calculation and an ECC error flag to the µP.]

SOFTWARE CORRECTION ALGORITHM
1. Clock the shift register in a software loop until high output on gate A.
2. Reverse displacement to first bit in error is clock count.
3. Pattern is in left-most two bits of shift register.
4. Use pattern and displacement to correct RAM buffer.

READ CYCLE (BUFFER LOAD)

  CLK   ERR   S/R
  d2     0    0000
  d1     0    0000
  d0     0    0000
  p3     0    0000
  p2     1    1101
  p1     1    1010
  p0     0    1001

CORRECT CYCLE (CORRECT BUFFER)

  SOFTWARE CLK CNT   S/R
         -           1001  *
         0           1010
         1           0101
         2           1100  **

*  Shift register contents at start of software algorithm.
** Gate A enables, software stops clocking.

- 78 -

Example #7A:
- Correction in hardware, reverse clocking, software assist.
- Burst length-two correcting code, single-bit error, data all zeros.
- Time delay required when an error occurs.
- Internal-XOR form of shift register.
- g(x) = (x + 1)·(x3 + x + 1) = x4 + x3 + x2 + 1.

- Detect zeros in right-most bits of shift register.
- Premultiply by x4.
[Circuit diagram: raw data is loaded into a RAM buffer while the shift register computes the syndrome; gate A drives a µP sample line for displacement calculation and an ECC error flag to the µP.]

SOFTWARE CORRECTION ALGORITHM
1. Clock the shift register in a software loop until high output on gate A.
2. Reverse displacement to first bit in error is clock count.
3. Pattern is in left-most two bits of shift register.
4. Use pattern and displacement to correct RAM buffer.

READ CYCLE (BUFFER LOAD)

  CLK   ERR   S/R
  d2     0    0000
  d1     0    0000
  d0     0    0000
  p3     0    0000
  p2     1    1101
  p1     0    0111
  p0     0    1110

CORRECT CYCLE (CORRECT BUFFER)

  SOFTWARE CLK CNT   S/R
         -           1110  *
         0           0111
         1           1101
         2           1000  **

*  Shift register contents at start of software algorithm.
** Gate A enables, software stops clocking.

- 79 -

Example #8:
- Correction in hardware, reverse clocking, software assist.
- Burst length-two correcting code, two adjacent error, data all zeros.
- Time delay required when an error occurs.
- Internal-XOR form of shift register.
- g(x) = (x + 1)· (x3 + x + 1) = x4 + x3 + x2 + 1.
- Detect zeros in left-most bits of shift register.
- No premultiplication.

[Circuit diagram: raw data is loaded into a RAM buffer while the shift register computes the syndrome; gate A drives a µP sample line for displacement calculation and an ECC error flag to the µP.]

SOFTWARE CORRECTION ALGORITHM
1. Clock the shift register in a software loop until high output on gate A.
2. Reverse displacement to first bit in error is clock count plus two.
3. Pattern is in right-most two bits of shift register.
4. Use pattern and displacement to correct RAM buffer.

READ CYCLE (BUFFER LOAD)

  CLK   ERR   S/R
  d2     0    0000
  d1     0    0000
  d0     0    0000
  p3     1    0001
  p2     1    0011
  p1     0    0110
  p0     0    1100

CORRECT CYCLE (CORRECT BUFFER)

  SOFTWARE CLK CNT   S/R
         -           1100  *
         0           0110
         1           0011  **

*  Shift register contents at start of software algorithm.
** Gate A enables, software stops clocking.

- 80 -

Example #8A:
- Correction in hardware, reverse clocking, software assist.
- Burst length-two correcting code, single-bit error, data all zeros.
- Time delay required when an error occurs.
- Internal-XOR form of shift register.
- g(x) = (x + 1)·(x3 + x + 1) = x4 + x3 + x2 + 1.

- Detect zeros in left-most bits of shift register.
- No premultiplication.
[Circuit diagram: raw data is loaded into a RAM buffer while the shift register computes the syndrome; gate A drives a µP sample line for displacement calculation and an ECC error flag to the µP.]

SOFTWARE CORRECTION ALGORITHM
1. Clock the shift register in a software loop until high output on gate A.
2. Reverse displacement to first bit in error is clock count plus two.
3. Pattern is in right-most two bits of shift register.
4. Use pattern and displacement to correct RAM buffer.

READ CYCLE (BUFFER LOAD)

  CLK   ERR   S/R
  d2     0    0000
  d1     0    0000
  d0     0    0000
  p3     1    0001
  p2     0    0010
  p1     0    0100
  p0     0    1000

CORRECT CYCLE (CORRECT BUFFER)

  SOFTWARE CLK CNT   S/R
         -           1000  *
         0           0100
         1           0010  **

*  Shift register contents at start of software algorithm.
** Gate A enables, software stops clocking.

- 81 -

2.4 DECODING SHORTENED CYCLIC CODES

In the decoding examples of the previous section, the record length was equal to
the polynomial period. The method discussed in this section allows forward clocking to
be used in searching for the correctable pattern when the record length is shorter than
the polynomial period. Shortening does not change code properties.
The method assumes that the error pattern is detected when it is justified to the
high order end of the shift register. If this is not the case, the method must be modified.

Let
  g(x)     = the code generator polynomial
  g'(x)    = reciprocal polynomial of g(x)
  Pmult(x) = premultiply polynomial for decoding
  n        = number of information plus check bits
  m        = number of check bits [the degree of g(x)]
  e        = the period of g(x)

Use a shift register to multiply and divide simultaneously. On write, premultiply
by x^m and divide by g(x). On read, premultiply by Pmult(x) and divide by g(x).
Pmult(x) is computed using either of the following equations:

    Pmult(x) = x^(e-n+m) MOD g(x)

or

    Pmult(x) = x^(m-1)·F(1/x)    where F(x) = x^(n-1) MOD g'(x)

i.e. Pmult(x) is the reciprocal polynomial of [(the highest power of x in a codeword)
modulo (the reciprocal polynomial of the code generator polynomial)].
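The first equation can be evaluated mechanically with GF(2) polynomial arithmetic. The Python sketch below is illustrative; polynomials are bitmasks with bit i holding the coefficient of x^i.

```python
# Compute the read premultiplier Pmult(x) = x^(e-n+m) MOD g(x) for a
# shortened cyclic code.  Polynomials are bitmasks (bit i <-> x^i).

def poly_mod(p, g):
    """Reduce p modulo g over GF(2) (XOR-based long division)."""
    dg = g.bit_length() - 1
    while p.bit_length() - 1 >= dg:
        p ^= g << (p.bit_length() - 1 - dg)
    return p

def pmult(g, e, n, m):
    """Premultiply polynomial for a record of n bits with m check bits,
    where e is the period of g(x)."""
    return poly_mod(1 << (e - n + m), g)
```

Run against g(x) = x^4 + x + 1 with e=15 and m=4, this reproduces the worked examples: n=10 gives x^3 + x (bitmask 1010) and n=8 gives x^3 + x^2 + x (bitmask 1110).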

- 82 -

EXAMPLES OF COMPUTING THE MULTIPLIER POLYNOMIAL
FOR SHORTENED CYCLIC CODES

g(x) = x4 + x + 1,   g'(x) = x4 + x3 + 1

Tables of x^r MOD g(x) and x^r MOD g'(x):

   r   x^r MOD g(x)   x^r MOD g'(x)
   0       0001           0001
   1       0010           0010
   2       0100           0100
   3       1000           1000
   4       0011           1001
   5       0110           1011
   6       1100           1111
   7       1011           0111
   8       0101           1110
   9       1010           0101
  10       0111           1010
  11       1110           1101
  12       1111           0011
  13       1101           0110
  14       1001           1100

Example #1: n=10, m=4, e=15

  Pmult = x^(e-n+m) MOD g(x)
        = x^9 MOD (x4 + x + 1)
        = x3 + x
or
  Pmult = x^(m-1)·F(1/x)    where F(x) = x^(n-1) MOD g'(x)
        = x3·F(1/x)               F(x) = x^9 MOD (x4 + x3 + 1) = x2 + 1
        = x3·(x^-2 + 1)
        = x3 + x

Example #2: n=8, m=4, e=15

  Pmult = x^(e-n+m) MOD g(x)
        = x^11 MOD (x4 + x + 1)
        = x3 + x2 + x
or
  Pmult = x^(m-1)·F(1/x)    where F(x) = x^(n-1) MOD g'(x)
        = x3·F(1/x)               F(x) = x^7 MOD (x4 + x3 + 1) = x2 + x + 1
        = x3·(x^-2 + x^-1 + 1)
        = x3 + x2 + x

- 83 -

CORRECTION EXAMPLE FOR A SHORTENED CODE
The code is single-bit correcting only. Interlaced sectors are assumed.

g(x) = x4 + x + 1
g'(x) = x4 + x3 + 1

n = 8, m = 4, e = 15

Pmult = x3 + x2 + x

[Circuit diagram: data is loaded into an 8-bit FIFO buffer while the shift register computes the syndrome; gate A applies the correction as the buffer is unloaded during the following (skipped) sector.]

READ SECTOR (READ CYCLE)

  CLK   ERR   S/R
  d3     0    0000
  d2     0    0000
  d1     1    1110
  d0     0    1111
  p3     0    1101
  p2     0    1001
  p1     0    0001
  p0     0    0010

SKIPPED SECTOR (CORRECT CYCLE)

  CLK   S/R
   -    0010
  d3    0100
  d2    1000  *
  d1    0011  **
  d0    0110
  p3    1100
  p2    1011
  p1    0101
  p0    1010

*  Gate A enables.
** Correction takes place on d1 clock.

- 84 -

CORRECTION EXAMPLE FOR A SHORTENED BURST-LENGTH-TWO CODE
The code of this example corrects bursts of length one or two.
Interlaced sectors assumed.

g(x) = (x + 1)·(x4 + x + 1) = x5 + x4 + x2 + 1
g'(x) = x5 + x3 + x + 1

n = 9, e = 15, m = 5

Pmult(x) = x3 + x2 + x

Tables of x^r MOD g(x) and x^r MOD g'(x):

   r   x^r MOD g(x)   x^r MOD g'(x)
   0      00001          00001
   1      00010          00010
   2      00100          00100
   3      01000          01000
   4      10000          10000
   5      10101          01011
   6      11111          10110
   7      01011          00111
   8      10110          01110
   9      11001          11100
  10      00111          10011
  11      01110          01101
  12      11100          11010
  13      01101          11111
  14      11010          10101

- 85 -

[Circuit diagram: data is loaded into a 9-bit FIFO buffer while the shift register computes the syndrome; gates A and B apply the correction during the buffer unload cycle.]

READ SECTOR (READ CYCLE)

  CLK   ERR   S/R
  d3     0    00000
  d2     1    01110
  d1     1    10010
  d0     0    10001
  p4     0    10111
  p3     0    11011
  p2     0    00011
  p1     0    00110
  p0     0    01100

SKIPPED SECTOR (CORRECTION CYCLE)

  CLK   S/R
   -    01100
  d3    11000  *
  d2    10000  **
  d1    00000  ***
  d0    00000
  p4    00000
  p3    00000
  p2    00000
  p1    00000
  p0    00000

*   Gates A and B enable on this clock.
**  Bit d2 is corrected on the d2 clock.
*** Bit d1 is corrected on the d1 clock.

- 86 -

2.5 INTRODUCTION TO FINITE FIELDS
A knowledge of finite fields is required for the study of many codes, including
BCH and Reed-Solomon codes.
Before discussing finite fields, the definition of a field must be stated. This definition is reprinted from NTIS document AD717205.

DEFINITION OF A FIELD. A field is a set F of at least two elements together with a
pair of operations, (+) and (·), which have the following properties:

a.  Closure: For all x and y ∈ F,
    (x + y) ∈ F and (x·y) ∈ F

b.  Associativity: For all x, y, and z ∈ F,
    (x + y) + z = x + (y + z) and (x·y)·z = x·(y·z)

c.  Commutativity: For all x and y ∈ F,
    x + y = y + x and x·y = y·x

d.  Distributivity: For all x, y and z ∈ F,
    x·(y + z) = (x·y) + (x·z)

e.  Identities: There exist an additive identity, zero (0), and a multiplicative
    identity, one (1), ∈ F such that for all x ∈ F,
    x + 0 = x and x·1 = x

f.  Inverses: For each x ∈ F, there exists a unique element y ∈ F such that
    x + y = 0
    and for each non-zero x ∈ F, there exists a unique element y ∈ F such that
    x·y = 1

The set of positive and negative rational numbers together with ordinary addition
and multiplication comprise a field with an infinite number of elements, therefore it is
called an infinite field. The set of positive and negative real numbers together with
ordinary addition and multiplication and the set of complex numbers together with
complex addition and multiplication also comprise infinite fields.

- 87 -

FINITE FIELDS
Fields with a finite number of elements are called finite fields. These fields are
also called Galois fields, in honor of the French mathematician Evariste Galois.
The order of a finite field is the number of elements it contains. A finite field of
order p^n, denoted GF(p^n), exists for every prime p and every positive integer n. The
prime p of a finite field GF(p^n) is called the characteristic of the field. The field
GF(p) is referred to as the ground field and GF(p^n) is called an extension field of
GF(p). The field GF(p^n) can also be denoted GF(q), where q = p^n.
Let β represent an arbitrary field element, that is, an arbitrary power of α. Then
the order e of β is the least positive integer for which β^e = 1. More simply, the order
of β is the number of terms in the sequence (β^1, β^2, β^3, ···) before it begins to repeat.
Elements of order 2^n − 1 in GF(2^n) are called primitive elements. They are also called
generators of the field. Do not confuse the order of a field element with the order of
a field, which is defined in the previous paragraph.
Two fields are said to be isomorphic if one can be obtained from the other by
some appropriate one-to-one mapping of elements and operations. Any two finite fields
with the same number of elements (the same order) are isomorphic. Therefore, for
practical purposes there is only one finite field of order p^n.
FIELDS OF CHARACTERISTIC TWO
Most error-correcting codes of practical interest are defined over fields of
characteristic two. Such fields have interesting properties. First, every element is its
own additive inverse, i.e. x + x = 0. Second, the square and square root functions
are linear, i.e.

f(x + y + ···) = f(x) + f(y) + ···

Therefore, in a field of characteristic two the following identities hold:

(x + y + ···)^2 = x^2 + y^2 + ···

(x + y + ···)^(2^k) = x^(2^k) + y^(2^k) + ···

(x + y + ···)^(1/2) = x^(1/2) + y^(1/2) + ···

(x + y + ···)^(1/2^k) = x^(1/2^k) + y^(1/2^k) + ···

These identities will be helpful in performing finite field computations in fields
GF(2^n).
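These identities can be spot-checked in a small field. The Python sketch below is illustrative; it verifies the squaring identity over GF(8) built from p(x) = x^3 + x + 1, the field constructed in the next subsection.

```python
# Verify (x + y)^2 = x^2 + y^2 for every pair of elements of GF(8),
# built from p(x) = x^3 + x + 1.  Elements are 3-bit vectors; addition
# is bit-wise XOR.

def gf8_mul(a, b, p=0b1011, deg=3):
    """Carry-less multiply, then reduce modulo p(x) over GF(2)."""
    prod = 0
    for i in range(deg):
        if (b >> i) & 1:
            prod ^= a << i
    while prod.bit_length() - 1 >= deg:
        prod ^= p << (prod.bit_length() - 1 - deg)
    return prod

# Squaring is linear in characteristic two:
assert all(gf8_mul(x ^ y, x ^ y) == gf8_mul(x, x) ^ gf8_mul(y, y)
           for x in range(8) for y in range(8))
```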

- 88 -

GENERATION OF A FIELD

The finite field GF(2) has only two elements (0,1). Larger fields can be defined by
polynomials with coefficients from GF(2).
Let p(x) be a polynomial of degree n with coefficients from GF(2). Let α be a
root of p(x). If p(x) is primitive, the powers of α up through α^(2^n − 2) will all be unique.
Appropriately selected operations of addition and multiplication together with the field
elements:

0, 1, α, α^2, ···, α^(2^n − 2)

define a field of 2^n elements, GF(2^n).
Assume a finite field is defined by p(x) = x3 + x + 1. Since α is a root of p(x),
p(α) = 0. Therefore,

α^3 + α + 1 = 0   and   α^3 = α + 1

The field elements for this field are:

0
α^0                                               = 1
α^1                                               = α^1
α^2                                               = α^2
α^3 MOD (α^3 + α + 1)                             = α + 1
α^4 = α·α^3 = α·(α + 1)                           = α^2 + α^1
α^5 = α·α^4 = α·(α^2 + α) = α^3 + α^2             = α^2 + α^1 + 1
α^6 = α·α^5 = α·(α^2 + α + 1) = α^3 + α^2 + α     = α^2 + 1
α^7 = α·α^6 = α·(α^2 + 1) = α^3 + α               = 1 = α^0
α^8                                               = α^1

The elements of the field can be represented in binary fashion by using one bit to
represent each of the three powers of α whose sum comprises an element. For the field
constructed above, we generate the following table:

        α^2  α^1  α^0
  0      0    0    0
  α^0    0    0    1
  α^1    0    1    0
  α^2    1    0    0
  α^3    0    1    1
  α^4    1    1    0
  α^5    1    1    1
  α^6    1    0    1

Figure 2.5.1
- 89 -

This list can also be viewed as the zero state plus the sequential nonzero states of
a shift register implementing the polynomial

x3 + x + 1
The number of elements in the field of Figure 2.5.1, including the zero element, is
eight. This field is called GF(8) or GF(2^3).
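The shift-register view translates directly into code. The small Python sketch below is illustrative; it generates the nonzero elements of Figure 2.5.1 in order.

```python
# Generate the nonzero elements of GF(8) as successive states of a shift
# register implementing x^3 + x + 1.  Each state is alpha^i as a 3-bit
# vector (alpha^2, alpha^1, alpha^0).

def gf_elements(p=0b1011, deg=3):
    """Return [alpha^0, alpha^1, ..., alpha^(2^deg - 2)] as bitmasks."""
    elems, state = [], 1
    for _ in range((1 << deg) - 1):
        elems.append(state)
        state <<= 1                  # multiply by alpha
        if state >> deg:             # reduce: alpha^3 = alpha + 1
            state ^= p
    return elems
```

Calling `gf_elements()` yields the vectors '001', '010', '100', '011', '110', '111', '101', matching Figure 2.5.1 row by row.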
OPERATIONS IN A FIELD GF(2^n) (Examples use GF(8))

+  Addition: Form the modulo-2 (EXCLUSIVE-OR) sum of the components of the
   addends to obtain the components of the sum, e.g.:

       α^0 + α^3 = '001' ⊕ '011'
                 = '010'
                 = α^1

-  Subtraction: In GF(2^n), subtraction is the same as addition, since each element is its own additive inverse. This is not the case in all finite fields.

·  Multiplication: If either multiplicand is zero, the product is zero. Otherwise
   add exponents modulo seven (the field size minus one), e.g.:

       0·α^4 = 0

       α^3·α^5 = α^((3+5) mod 7)
               = α^1

/  Division: If the divisor is zero, the quotient is undefined. If the dividend is
   zero, the quotient is zero. Otherwise subtract exponents modulo seven, e.g.:

       α^5/α^3 = α^(5-3) = α^2

       α^3/α^5 = α^(3-5) = α^-2
               = α^(-2+7) = α^5

By convention, multiplication and division take precedence over addition and subtraction except where parentheses are used.

LOG      Logarithm: Take the logarithm to the base α, e.g.:

             LOG(α^n) = n

ANTILOG  Antilogarithm: Raise α to the given power, e.g.:

             ANTILOG(n) = α^n
- 90-

FINITE FIELD COMPUTATION
From the list of field elements above, a^3 represents the vector '011' and a^5 represents the vector '111'. The integer 6 is the exponent of a^6.
The log function in this field produces an exponent from a vector, while the antilog function produces a vector from an exponent. The log of a^4 ('110') is 4. The
antilog of 3 is a^3 ('011'). The familiar properties of logarithms hold.
Finite field computation is frequently performed by a computer. At times, field
elements are stored in the computer in vector form. At other times, the logs of field
elements are stored instead of the field elements themselves. For example, consider
finite field math implemented on a computer with an eight-bit wide data path. Assume
the finite field of Figure 2.5.1. If a memory location storing a^4 is examined, the binary
value '0000 0110' is observed. This binary value represents the vector '110', or a^2 + a.
If a memory location storing the log of a^4 is examined, the binary value '0000 0100' is
observed. This value represents the integer 4, which is the exponent and log of a^4.
Finite field computers frequently employ log and antilog tables to convert from one
representation to the other.
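The table construction and the two representations can be sketched in Python for the field of Figure 2.5.1 (the function name and bit encodings are ours, not the book's):

```python
# Build the log and antilog tables for GF(8) generated by x^3 + x + 1.
def build_tables():
    antilog = []                 # antilog[i] = vector form of a^i
    v = 0b001                    # a^0
    for _ in range(7):
        antilog.append(v)
        v <<= 1                  # multiply by a (advance the shift register)
        if v & 0b1000:           # a^3 feeds back as a + 1
            v ^= 0b1011          # i.e. reduce modulo x^3 + x + 1
    log = {vec: i for i, vec in enumerate(antilog)}
    return antilog, log

antilog, log = build_tables()

# a^4 is stored as the vector '110'; the log of a^4 is the integer 4.
print(bin(antilog[4]))   # 0b110
print(log[0b110])        # 4
```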
Finite field addition for this field is modulo-2 addition (bit-wise EXCLUSIVE-OR
operation). The sum of a^4 ('110') and a^5 ('111') is a^0 ('001'). The sum of a^3 ('011')
and a^6 ('101') is a^4 ('110'). Subtraction in this field, as in all finite fields of characteristic two, is the same as addition. The '+' symbol will be used to represent
modulo-2 addition (bit-wise EXCLUSIVE-OR operation). The '+' symbol will also continue to be used for ordinary addition, such as adding exponents. In most cases, when
'+' represents modulo-2 addition, it will be preceded and followed by a space, and when
used to represent ordinary addition, its operands will immediately precede and follow it.
Usage should be clear from the context.
There are two basic ways to accomplish finite field multiplication for the field of
Figure 2.5.1. The vectors representing the field elements can be multiplied and the
result reduced modulo (x^3 + x + 1). Alternatively, the product may be computed by
first taking logs of the finite field elements being multiplied, then taking the antilog of
the sum of the logs modulo 7 (field size minus one). The '·' symbol will be used to
represent finite field multiplication. The '*' symbol will be used to represent ordinary
multiplication, such as for multiplying an exponent, which is an ordinary number and not
a finite field element, by another ordinary number.

The examples below multiply a^4 ('110') times a^5 ('111') using the methods described
above.

Example #1

1. Multiply the vectors '110' (a^4) and '111' (a^5) to get the vector '10010'.

2. Reduce the vector '10010' modulo x^3 + x + 1 to get the vector '100' (a^2).

Example #2

1. Take the logs base a of a^4 and a^5 to get exponents 4 and 5.

2. Add exponents 4 and 5 modulo 7 to get the exponent 2.

3. Take the antilog of the exponent 2 to get the vector '100' (a^2).
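Both methods can be sketched in Python for the field of Figure 2.5.1 (the helper names are ours):

```python
# Two ways to multiply in GF(8) generated by x^3 + x + 1.
def mul_vectors(u, v):
    # Example #1 style: carry-less multiply of the vectors, then reduce
    # the product modulo x^3 + x + 1.
    prod = 0
    for i in range(3):
        if v & (1 << i):
            prod ^= u << i
    for i in range(4, 2, -1):            # clear the x^4 term, then x^3
        if prod & (1 << i):
            prod ^= 0b1011 << (i - 3)    # x^3 + x + 1
    return prod

def mul_logs(u, v, antilog, log):
    # Example #2 style: add logs modulo 7, then take the antilog.
    if u == 0 or v == 0:
        return 0
    return antilog[(log[u] + log[v]) % 7]

antilog = [0b001, 0b010, 0b100, 0b011, 0b110, 0b111, 0b101]
log = {vec: i for i, vec in enumerate(antilog)}

print(bin(mul_vectors(0b110, 0b111)))             # a^4 · a^5 = 0b100 (a^2)
print(bin(mul_logs(0b110, 0b111, antilog, log)))  # same result via logs
```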

Division is accomplished by inverting (multiplicative inversion) and multiplying.
The inverse of any element in the field of Figure 2.5.1, other than the zero element, is
given by:

    1/a^j = a^((-j) MOD 7)

The inverse of the zero element is undefined. a^0 is its own inverse.
Inversion Example:

    1/a^3 = a^((-3) MOD 7) = a^4

Division Example:

    a^2/a^3 = a^2·(1/a^3)
            = a^2·a^4
            = a^6

Examples of finite field computation in the field of Figure 2.5.1 are shown below.
To provide greater insight, some examples use different approaches than others, with
various levels of detail being shown. Note that all operations on exponents are performed modulo 7 (field size minus one).

    y = a^3 + a^4
      = '011' + '110'
      = '101'
      = a^6

    y = a^1·a^4
      = '010'·'110'
      = (a^1)·(a^2 + a^1)
      = a^3 + a^2
      = (a + 1) + a^2
      = a^2 + a + 1
      = '111'
      = a^5

    y = a^2·a^6
      = a^((2+6) MOD 7)
      = a^1

    y = 1/a^4
      = a^((-4) MOD 7)
      = a^3

    y = a^2/a^5
      = a^((2-5) MOD 7)
      = a^4

    (x + a^0)·(x + a^1) = x^2 + (a^0 + a^1)·x + a^0·a^1
                        = x^2 + a^3·x + a^1

The modulo operations shown above for adding and subtracting exponents are
understood for finite field computation and will not be shown for the remainder of the
book.


Other examples are:

    Y = a^2·a^6
      = a^(2+6)
      = a^1

    Y = a^1 + a^2
      = a^4

    Y = LOG_a[a^2/a^5]
      = LOG_a(a^((2-5) MOD 7))
      = LOG_a(a^4)
      = 4

    Y = LOG_a[a^2/a^5]
      = LOG_a(a^2) - LOG_a(a^5)
      = (2-5) MOD 7
      = 4

    Y = a^2·(a^4 + a^3)
      = a^2·a^6
      = a^1

    Y = a^3/a^1
      = a^(3-1)
      = a^2

    Y = (a^3)^3
      = a^(3*3)
      = a^2

    Y = (x + a^0)·(x + a^1)·(x + a^2)
      = x^3 + (a^0 + a^1 + a^2)·x^2 + (a^0·a^1 + a^0·a^2 + a^1·a^2)·x + a^0·a^1·a^2
      = x^3 + a^5·x^2 + a^6·x + a^3

FIELD PROPERTY EXAMPLES

ASSOCIATIVITY

         (x + y) + z   =   x + (y + z)
       (a^2 + a^3) + a^4   =   a^2 + (a^3 + a^4)
    ('100' + '011') + '110'   =   '100' + ('011' + '110')
         '111' + '110'   =   '100' + '101'
              '001'   =   '001'

         (x · y) · z   =   x · (y · z)
       (a^4 · a^5) · a^6   =   a^4 · (a^5 · a^6)
    a^((4+5) MOD 7) · a^6   =   a^4 · a^((5+6) MOD 7)
          a^2 · a^6   =   a^4 · a^4
    a^((2+6) MOD 7)   =   a^((4+4) MOD 7)
               a^1   =   a^1

COMMUTATIVITY

          x + y   =   y + x
       a^3 + a^4   =   a^4 + a^3
    '011' + '110'   =   '110' + '011'
           '101'   =   '101'

          x · y   =   y · x
       a^5 · a^6   =   a^6 · a^5
    a^((5+6) MOD 7)   =   a^((6+5) MOD 7)
            a^4   =   a^4

DISTRIBUTIVITY

         x · (y + z)   =   (x · y) + (x · z)
      a^4 · (a^5 + a^6)   =   (a^4 · a^5) + (a^4 · a^6)
    a^4 · ('111' + '101')   =   a^((4+5) MOD 7) + a^((4+6) MOD 7)
          a^4 · '010'   =   a^2 + a^3
          a^4 · a^1   =   '100' + '011'
    a^((4+1) MOD 7)   =   '111'
              a^5   =   a^5

SIMULTANEOUS LINEAR EQUATIONS IN A FIELD

Simultaneous linear equations in GF(2^n) can be solved by determinants. For example, given:

    a·x + b·y = c
    d·x + e·y = f

where x and y are independent variables and a, b, c, d, e, and f are constants, then:

        | c  b |                       | a  c |
        | f  e |    c·e + b·f          | d  f |    a·f + c·d
    x = ------- = -----------     y = ------- = -----------
        | a  b |    a·e + b·d          | a  b |    a·e + b·d
        | d  e |                       | d  e |
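The determinant solution can be sketched in Python over GF(8); the log-table helpers and the particular coefficient values are our choices for illustration:

```python
# Solve a 2x2 linear system over GF(8) by Cramer's rule.
ANTILOG = [0b001, 0b010, 0b100, 0b011, 0b110, 0b111, 0b101]
LOG = {v: i for i, v in enumerate(ANTILOG)}

def gf_mul(u, v):
    return 0 if 0 in (u, v) else ANTILOG[(LOG[u] + LOG[v]) % 7]

def gf_div(u, v):
    return 0 if u == 0 else ANTILOG[(LOG[u] - LOG[v]) % 7]

def solve(a, b, c, d, e, f):
    # a·x + b·y = c ;  d·x + e·y = f   (subtraction = XOR in GF(2^n))
    det = gf_mul(a, e) ^ gf_mul(b, d)
    x = gf_div(gf_mul(c, e) ^ gf_mul(b, f), det)
    y = gf_div(gf_mul(a, f) ^ gf_mul(c, d), det)
    return x, y

# Pick x = a^1, y = a^3 and coefficients, then recover them:
a, b, d, e = 0b001, 0b010, 0b100, 0b010
x0, y0 = 0b010, 0b011
c = gf_mul(a, x0) ^ gf_mul(b, y0)
f = gf_mul(d, x0) ^ gf_mul(e, y0)
print(solve(a, b, c, d, e, f))  # (2, 3)  i.e. x = '010', y = '011'
```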

POLYNOMIALS IN A FIELD

Polynomials can be written with variables and coefficients from GF(2^n) and manipulated in much the same manner as polynomials involving rational or real numbers.

Polynomial Multiplication Example:

        a^4·x^2 + a^5·x + a^1
    ·                   x + a
    -------------------------
        a^5·x^2 + a^6·x + a^2
    a^4·x^3 + a^5·x^2 + a^1·x
    -------------------------
    a^4·x^3 + a^5·x + a^2

Polynomial Division Example:

    x^3 + a^4·x^2 + a^1·x + a^2
    x^3           + a^4·x + a^2
    ---------------------------
          a^4·x^2 + a^2·x

Thus

    (x^3 + a^4·x^2 + a^1·x + a^2) MOD (x^3 + a^4·x + a^2) = a^4·x^2 + a^2·x
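Polynomial remaindering with GF(8) coefficients can be sketched in Python; the helper names and the integer encoding of the vector coefficients are ours:

```python
# Remainder of one GF(8)-coefficient polynomial divided by another.
ANTILOG = [1, 2, 4, 3, 6, 7, 5]       # a^0 .. a^6 in vector form
LOG = {v: i for i, v in enumerate(ANTILOG)}

def gf_mul(u, v):
    return 0 if 0 in (u, v) else ANTILOG[(LOG[u] + LOG[v]) % 7]

def poly_mod(num, den):
    # num, den: coefficient lists, highest degree first, vector form.
    num = num[:]
    for i in range(len(num) - len(den) + 1):
        if num[i]:
            q = gf_mul(num[i], ANTILOG[(-LOG[den[0]]) % 7])  # num[i] / den[0]
            for j, d in enumerate(den):
                num[i + j] ^= gf_mul(q, d)                   # subtract q·den
    return num[-(len(den) - 1):]

# (x^3 + a^4·x^2 + a^1·x + a^2) MOD (x^3 + a^4·x + a^2):
rem = poly_mod([1, 6, 2, 4], [1, 0, 6, 4])
print(rem)   # [6, 4, 0]  ->  a^4·x^2 + a^2·x
```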

QUADRATIC SOLUTION DIFFICULTY IN A FIELD OF CHARACTERISTIC 2

The correspondence between algebra in finite fields of characteristic 2 and algebra involving real numbers does not include the quadratic formula:

         -b ± √(b^2 - 4·a·c)
    x = --------------------
                2·a

The 2 in the denominator must be interpreted as an integer, but:

    2·a = a + a = 0

and division by zero is undefined.
DIFFERENTIATION IN A FIELD OF CHARACTERISTIC 2

The derivative of x^n in GF(2^n) is:

    n·x^(n-1)

where n is interpreted as an integer, not as a finite field element. Thus the derivative
of any even power is zero and the derivative of any odd power x^n is x^(n-1). For example,

    d(x^2)/dx = 2x = x + x = 0

etc.

FINITE FIELDS AND SHIFT REGISTER SEQUENCES

The shift register below implements the polynomial x^3 + x + 1, which defines the
field of Figure 2.5.1.

    [shift register diagram]
    Figure 2.5.2

This shift register has two sequences, a sequence of length seven and the zero sequence of length one.

    STATE NUMBER    SHIFT REGISTER CONTENTS
         0                   001
         1                   010
         2                   100
         3                   011
         4                   110
         5                   111
         6                   101

    (zero sequence)          000

Notice the similarity of the sequences above to the field definition of Figure 2.5.1.
The consecutive shift register states correspond to the consecutive list of field elements. The state numbers correspond to the exponents of powers of a.

Advancing the shift register once is identical to multiplying its contents by a.
Advancing the shift register twice is identical to multiplying its contents by a^2, and so
on.

COMPUTING IN A SMALLER FIELD

We have been representing powers of a by components. For example, in the field
of Figure 2.5.1, the components of a^3 are a and 1. The components of a^4 are a^2 and a.
An arbitrary power of a can also be represented by its components. Let X represent
any arbitrary power of a from the field of Figure 2.5.1; then

    X = X2·a^2 + X1·a + X0

The coefficients X2, X1, and X0 are from GF(2), the field of two elements, 0 and
1.

In performing finite field operations in a field such as GF(2^3), it is frequently

necessary to perform multiple operations in a smaller field such as GF(2). For example,
multiplication of an arbitrary field element X by a might be accomplished as follows:

    Y = a·X
      = a·(X2·a^2 + X1·a + X0)
      = X2·a^3 + X1·a^2 + X0·a

But a^3 = a + 1, so

    Y = X2·(a + 1) + X1·a^2 + X0·a
      = X1·a^2 + (X2 + X0)·a + X2

The result Y can also be expressed in component form, therefore:

    Y2·a^2 + Y1·a + Y0 = X1·a^2 + (X2 + X0)·a + X2

Equating coefficients on like powers of a gives

    Y2 = X1
    Y1 = X2 + X0
    Y0 = X2

These results have been used to design the combinatorial logic circuit shown below.
This circuit uses a compute element (modulo-2 adder) from GF(2) to construct a circuit
to multiply any arbitrary field element from the field of Figure 2.5.1 by a.
    [Circuit diagram: Y2 = X1; Y1 = X2 + X0 (modulo-2 adder); Y0 = X2;
     Y = a·X. The '+' element is finite field addition in GF(2).]
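The component equations can be checked in Python against the shift-register view of multiplication by a (function names are ours):

```python
# Multiply a GF(8) element by a, two ways.
def times_a_components(x):
    # The three GF(2) equations: Y2 = X1, Y1 = X2 + X0, Y0 = X2.
    x2, x1, x0 = (x >> 2) & 1, (x >> 1) & 1, x & 1
    y2, y1, y0 = x1, x2 ^ x0, x2
    return (y2 << 2) | (y1 << 1) | y0

def times_a_shift(x):
    # Advance the shift register once: shift, reduce by x^3 + x + 1.
    x <<= 1
    return x ^ 0b1011 if x & 0b1000 else x

print(bin(times_a_components(0b110)))  # a·a^4 = a^5 = 0b111
```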

ANOTHER LOOK AT THE SHIFT REGISTER

The shift register of Figure 2.5.2 has been redrawn below to show that it contains
a circuit to multiply by a.

Original Circuit
Same circuit redrawn
MORE ON FIELD GENERATION

Let β represent the primitive element a^2 from the field of Figure 2.5.1. The field
can be redefined as follows:

          a^2  a^1  a^0
    0      0    0    0
    β^0    0    0    1
    β^1    1    0    0
    β^2    1    1    0
    β^3    1    0    1
    β^4    0    1    0
    β^5    0    1    1
    β^6    1    1    1

All the properties of a field still apply. A multiply example:

    β^2·β^4 = ('110')·('010')
            = (a^2 + a)·(a)
            = a^3 + a^2

But a^3 = a + 1, so

    β^2·β^4 = a^2 + a + 1
            = '111'
            = β^6

This definition of the field could be viewed as having been generated by the
circuit below.

    [shift register circuit diagram]

A similar redefinition of the field could be accomplished by letting β represent any
primitive element of the field of Figure 2.5.1.

DEFINING FIELDS WITH POLYNOMIALS OVER FIELDS LARGER THAN GF(2)

A polynomial over GF(q), where q = p^n, is a polynomial with coefficients from GF(q).
So far, we have worked with a field GF(8) that is defined by the polynomial x^3 + x + 1
over GF(2). It is also possible to define a field by a polynomial over GF(4) or GF(8),
and so on.

A primitive polynomial of degree m over GF(2^n) can define a field GF(2^(m*n)).
Fields GF(2^(2*n)) are particularly interesting. Operations in these fields can be accomplished by performing several simple operations in GF(2^n). These fields will be
studied in Section 2.7.

COMPUTING IN GF(2)

Consider the field of two elements:

    0      0
    a^0    1

An element of this field is either 0 or 1. The result of a multiplication is either 0
or 1. The result of raising any element to a power is either 0 or 1, and so on.

Let a and b represent arbitrary elements of this field; then,

    a·b = 0   if either a = 0 or b = 0
    a·b = 1   if both a = 1 and b = 1

Clearly, multiplication in GF(2) can be accomplished with an AND gate:

    [AND gate symbol: inputs a, b; output a·b]

Let b' represent the logical NOT of b; then, in GF(2),

    a·b' = a·(1 + b)
         = a + a·b

    [circuit: an AND gate with one inverted input computes a·b' = a + a·b]

Let V represent the INCLUSIVE-OR operation; then, in GF(2),

    a V b = a + a·b + b

    [circuit: an OR gate computes a V b = a + a·b + b]

2.6 FINITE FIELD CIRCUITS FOR FIELDS OF CHARACTERISTIC 2

This section introduces finite field circuits for finite fields of characteristic 2 with
examples. The notation for various GF(8) finite field circuits is shown below. The field
of Figure 2.5.1 is assumed.

    Fixed field element adder                                  y = x + a^i
    Arbitrary field element adder                              y = w + x
    Fixed field element multiplier                             y = a^i·x
    Fixed field element multiplier                             y = a^-i·x
    Arbitrary field element multiplier (GF(8) Multiplier)      y = w·x
    Multiplicative inversion (GF(8) Inverter)                  y = 1/x
    Square an arbitrary field element (GF(8) Square)           y = x^2
    Cube an arbitrary field element (GF(8) Cube)               y = x^3
    Compute log_a of an arbitrary field element (GF(8) Log)    j = log_a(x)
    Compute antilog_a of an arbitrary integer (GF(8) Antilog)  y = antilog_a(j)
    Compute the remainder from dividing O(x) by (x + a^i)      y = O(x) MOD (x + a^i)
    Binary Adder: add logs of finite field elements
    modulo the field size minus one                            k = i+j

    [Each function above is drawn in the text as a labeled circuit-block symbol.]

COMBINING FINITE FIELD CIRCUITS

Finite field circuits can be combined for computing. For illustration, assume that:

           X
    Y = -------
        X + W^3

must be computed. This can be accomplished with the circuit below:

    [circuit diagram built from GF Cube, GF Multiply, and GF Invert blocks]

Another circuit solution becomes obvious when the equation is rearranged as
follows:

    1   X + W^3       W^3         W^3
    - = ------- = 1 + ---  = a^0 + ---
    Y      X           X            X

              1
    Y = -----------
        W^3
        --- + a^0
         X

    [circuit diagram built from GF Cube, GF Multiply, and GF Invert blocks,
     with a fixed a^0 adder]
Another example of combining finite field circuits in GF(2^3) is shown below.

    [circuit diagram]

    Y = a·X + X
      = (a + 1)·X
      = a^3·X

This example shows how a circuit to multiply by the fixed field element a^3 can be
constructed using two other GF(2^3) circuits: a circuit to add two arbitrary field elements and a circuit to multiply an arbitrary field element by a. Later, circuits will be
shown that accomplish this type of operation with GF(2) circuits.

Still another example of combining finite field circuits follows:

    [array multiplier circuit diagram: w and the components x2, x1, x0 feed
     AND/adder stages;  y = w·x]

This circuit is called an array multiplier and is based on the following finite field
math:

    y = x·w
      = (x2·a^2 + x1·a + x0)·w
      = x2·(a^2·w) + x1·(a·w) + x0·(w)

IMPLEMENTING GF(8) FINITE FIELD CIRCUITS WITH GF(2) CIRCUITS

Fixed field element adder:

    y = x + a^3
      = (x2·a^2 + x1·a + x0) + (a + 1)
      = x2·a^2 + (x1 + 1)·a + (x0 + 1)

But y can also be expressed in component form, therefore:

    y2·a^2 + y1·a + y0 = x2·a^2 + (x1 + 1)·a + (x0 + 1)

Equating coefficients on like powers of a gives:

    y2 = x2
    y1 = x1 + 1
    y0 = x0 + 1

This is realized by the following circuit:

    [circuit diagram: x2 → y2; x1 and x0 each pass through a modulo-2 adder
     whose other input is 1;  y = x + a^3]

A simpler fixed field element adder:

    y = x + a^3
      = (x2·a^2 + x1·a + x0) + (a + 1)
      = x2·a^2 + (x1 + 1)·a + (x0 + 1)

But (x1 + 1) = x1' and (x0 + 1) = x0', so:

    y = x2·a^2 + x1'·a + x0'

Again expressing y in component form, we have:

    y2·a^2 + y1·a + y0 = x2·a^2 + x1'·a + x0'

and equating coefficients of like powers of a gives:

    y2 = x2
    y1 = x1'
    y0 = x0'

which is realized by the following circuit:

    [circuit diagram: x2 → y2; x1 and x0 pass through inverters to give y1
     and y0;  y = x + a^3]

The arbitrary finite field adder:

    [circuit diagram: three modulo-2 adders;  y2 = x2 + w2, y1 = x1 + w1,
     y0 = x0 + w0;  y = x + w]

may be implemented using bit-serial techniques:

    [bit-serial circuit diagram: the X and W shift registers feed one
     modulo-2 adder;  y = x + w]

Fixed field element multiplier to multiply by a.

    y = a·x
      = a·(x2·a^2 + x1·a + x0)
      = x2·a^3 + x1·a^2 + x0·a

But a^3 = a + 1, so:

    y = x1·a^2 + (x2 + x0)·a + x2

Expressing y in component form:

    y2·a^2 + y1·a + y0 = x1·a^2 + (x2 + x0)·a + x2

Equating coefficients of like powers of a:

    y2 = x1
    y1 = x2 + x0
    y0 = x2

    [circuit diagram:  y = a·x]

Fixed field element multiplier to multiply by a^-1.

    y = a^-1·x
      = a^6·x
      = a^6·(x2·a^2 + x1·a + x0)
      = x2·a^8 + x1·a^7 + x0·a^6
      = x0·a^2 + x2·a + (x1 + x0)

Expressing y in component form:

    y2·a^2 + y1·a + y0 = x0·a^2 + x2·a + (x1 + x0)

Equating coefficients:

    y2 = x0
    y1 = x2
    y0 = x1 + x0

    [circuit diagram:  y = a^-1·x]

Fixed field element multiplier to multiply by a^2. The finite field math for this circuit
is similar to the math for the a and a^-1 multipliers above.

    [circuit diagram:  y = a^2·x]

Fixed field element multiplier to multiply by a^2 using two circuits that multiply by a:

    [circuit diagram: two multiply-by-a stages in cascade;  y = a^2·x]

Fixed field element multiplier to multiply by a using bit-serial techniques.

    [circuit diagram:  y = a·x]

PROCEDURE:
1. Clear the Y register.
2. Load the X register.
3. Apply three clocks. In GF(2^n) apply n clocks.
4. Accept the result from the Y register.

Fixed field element multiplier to multiply by a^4. To understand the input connections,
recall that a^4 = a^2 + a.

    [circuit diagram:  y = a^4·x]

PROCEDURE:
Same as above.

Finite field circuit to compute Y = a·X + a^4·W using bit-serial techniques.

    [circuit diagram:  Y = a·X + a^4·W]

Arbitrary field element multiplier using combinatorial logic.

    [circuit diagram:  Y = X·W]

    Y = X·W
      = (X2·a^2 + X1·a + X0)·(W2·a^2 + W1·a + W0)
      = (X2·W2)·a^4 + (X2·W1 + X1·W2)·a^3
        + (X2·W0 + X1·W1 + X0·W2)·a^2 + (X1·W0 + X0·W1)·a + X0·W0

But a^4 = a^2 + a and a^3 = a + 1, so

    Y = (X2·W2 + X2·W0 + X1·W1 + X0·W2)·a^2
        + (X2·W2 + X2·W1 + X1·W2 + X1·W0 + X0·W1)·a
        + (X2·W1 + X1·W2 + X0·W0)

Expressing Y in component form and equating coefficients on
like powers of a gives:

    Y2 = X2·W2 + X2·W0 + X1·W1 + X0·W2
    Y1 = X2·W2 + X2·W1 + X1·W2 + X1·W0 + X0·W1
    Y0 = X2·W1 + X1·W2 + X0·W0
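These three GF(2) equations can be checked exhaustively in Python against log-table multiplication (helper names are ours):

```python
# Gate-level GF(8) multiplier vs. log-table multiplier.
ANTILOG = [1, 2, 4, 3, 6, 7, 5]
LOG = {v: i for i, v in enumerate(ANTILOG)}

def mul_table(u, v):
    return 0 if 0 in (u, v) else ANTILOG[(LOG[u] + LOG[v]) % 7]

def mul_gates(x, w):
    x2, x1, x0 = (x >> 2) & 1, (x >> 1) & 1, x & 1
    w2, w1, w0 = (w >> 2) & 1, (w >> 1) & 1, w & 1
    y2 = (x2 & w2) ^ (x2 & w0) ^ (x1 & w1) ^ (x0 & w2)
    y1 = (x2 & w2) ^ (x2 & w1) ^ (x1 & w2) ^ (x1 & w0) ^ (x0 & w1)
    y0 = (x2 & w1) ^ (x1 & w2) ^ (x0 & w0)
    return (y2 << 2) | (y1 << 1) | y0

ok = all(mul_gates(x, w) == mul_table(x, w) for x in range(8) for w in range(8))
print(ok)  # True
```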

Array multiplier - another arbitrary field element multiplier using combinatorial logic.

    [array multiplier circuit diagram: inputs W2, W1, W0 and X2, X1, X0;
     outputs Y2, Y1, Y0]

    Y = X·W
      = (X2·a^2 + X1·a + X0)·W
      = X2·a^2·W + X1·a·W + X0·W
      = X2·(a^2·W) + X1·(a·W) + X0·(W)

Arbitrary field element multiplier using bit-serial techniques.

    [circuit diagram:  Y = X·W]

The X register is a shift register. The W register is composed of flip-flops that hold
their value until reloaded.

PROCEDURE:
1. Clear the Y register.
2. Load the W register with the multiplicand.
3. Load the X register with the multiplier.
4. Clock the circuit three times. For GF(2^n), clock n times.
5. Accept the result from the Y register.

DEVELOPMENT

    Y = X·W
      = (X2·a^2 + X1·a + X0)·W
      = X2·a^2·W + X1·a·W + X0·W

Another arbitrary field element multiplier using bit-serial techniques.

    [circuit diagram:  Y = X·W]

PROCEDURE:
1. Clear the Y register.
2. Load the W register with the multiplicand.
3. Load the X register with the multiplier.
4. Clock the circuit three times. For GF(2^n), clock n times.
5. Accept the result from the Y register.

DEVELOPMENT

    Y = X·W
      = (X2·a^2 + X1·a + X0)·W
      = X2·a^2·W + X1·a·W + X0·W
      = X2·(a^2·W) + X1·(a·W) + X0·(W)

Arbitrary field element multiplier using log and antilog tables.

    [circuit diagram: X and W each feed a LOG ROM and a ZERO DETECT; the logs
     feed a BINARY ADDER MOD (2^n - 1)*, whose output feeds an ANTILOG ROM; the
     zero detects drive an OUTPUT ENABLE so that Y = 0 when either input is zero;
     Y = X·W]

DEVELOPMENT

    Y = X·W

    IF (X=0) OR (W=0) THEN
        Y = 0
    ELSE
        Y = ANTILOG_a[ LOG_a(X·W) ]
          = ANTILOG_a[ (LOG_a(X) + LOG_a(W)) MOD (2^n - 1)* ]
    END IF

* For n-bit symbols, 2^n is the field size of GF(2^n), so (2^n - 1) is the field size
minus one.

Circuit to cube an arbitrary field element.

    [circuit diagram:  Y = X^3]

DEVELOPMENT

    Y = X^3
      = (X2·a^2 + X1·a + X0)^3
      = (X2·a^2 + X1·a + X0)^2·(X2·a^2 + X1·a + X0)
      = [(X2·a^2)^2 + (X1·a)^2 + (X0)^2]·(X2·a^2 + X1·a + X0)
      = (X2·a^4 + X1·a^2 + X0)·(X2·a^2 + X1·a + X0)
      = X2·a^6 + X1·X2·a^5 + X0·X2·a^4 + X1·X2·a^4
        + X1·a^3 + X0·X1·a^2 + X0·X2·a^2 + X0·X1·a + X0
      = X2·(a^2 + 1) + X1·X2·(a^2 + a + 1) + X0·X2·(a^2 + a)
        + X1·X2·(a^2 + a) + X1·(a + 1) + X0·X1·(a^2)
        + X0·X2·(a^2) + X0·X1·(a) + X0
      = (X2 + X0·X1)·a^2
        + (X0·X2 + X1 + X0·X1)·a
        + (X2 + X1·X2 + X1 + X0)

Expressing Y in component form and equating components of like powers of a gives:

    Y2 = X2 + X0·X1
    Y1 = X0·X2 + X1 + X0·X1 = X0·X2 + X1·(1 + X0)
       = X0·X2 + X1·X0'
    Y0 = X2 + X1·X2 + X1 + X0 = (X2 V X1) + X0

where V is the INCLUSIVE-OR operator.
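The cube equations can be checked in Python against two table multiplications (helper names are ours):

```python
# Gate-level cube circuit for GF(8) vs. repeated table multiplication.
ANTILOG = [1, 2, 4, 3, 6, 7, 5]
LOG = {v: i for i, v in enumerate(ANTILOG)}

def mul(u, v):
    return 0 if 0 in (u, v) else ANTILOG[(LOG[u] + LOG[v]) % 7]

def cube_gates(x):
    x2, x1, x0 = (x >> 2) & 1, (x >> 1) & 1, x & 1
    y2 = x2 ^ (x0 & x1)
    y1 = (x0 & x2) ^ (x1 & (1 ^ x0))      # = X0·X2 + X1·X0'
    y0 = (x2 | x1) ^ x0                   # INCLUSIVE-OR, as in the text
    return (y2 << 2) | (y1 << 1) | y0

print(all(cube_gates(x) == mul(x, mul(x, x)) for x in range(8)))  # True
```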

IMPLEMENTING GF(2^n) FINITE FIELD CIRCUITS WITH ROMS

In many cases, finite field circuits can be implemented with ROMs. For example, a
GF(256) inverter is an 8-bit-in, 8-bit-out function and can be implemented with a 256:8
ROM.

Other examples:

1. The square function in GF(256) can be implemented with a 256:8 ROM. The
same is true for any power or root function in GF(256).

2. A GF(16) arbitrary field element multiplier can be implemented with a 256:4
ROM. A GF(256) arbitrary field element multiplier can be implemented with a
65536:8 ROM. It is also possible to implement a GF(256) multiplier with four
256:4 ROMs and several finite field adders. (See Section 2.7.)

3. A GF(256) fixed field element multiplier can be implemented with a 256:8
ROM.

When back-to-back functions are required, it is sometimes possible to combine
them in a single ROM. For example, the equation:

    Y = [1/X]^3 · a^2

in GF(256) can be solved for Y when X is known with a single 256:8 ROM.

SOLVING FINITE FIELD EQUATIONS

Finding a power of a finite field element results in a single solution, but the same
solution may be obtained by raising other finite field elements to the same power.
Finding the root(s) of a finite field element may result in a single solution, multiple solutions, or no solution. Finding the root(s) of a finite field equation may result in a single solution, multiple solutions, or no solution.

FINDING ROOTS OF FINITE FIELD EQUATIONS

In decoding the Reed-Solomon and binary BCH codes, it is frequently necessary to
find the roots of nonlinear equations whose coefficients are from a finite field. These
roots provide error-location information. The degree of the equation and the number of
roots are equal to the number of errors that occur. Examples of these equations are
shown below:

    x + σ1 = 0
    x^2 + σ1·x + σ2 = 0
    x^3 + σ1·x^2 + σ2·x + σ3 = 0
    x^4 + σ1·x^3 + σ2·x^2 + σ3·x + σ4 = 0

One way to find the roots of such an equation is to substitute all possible finite
field values for x. The equation evaluates to zero for any finite field elements that are
roots.

Two methods which perform the substitution will be discussed. The first method
uses "brute force", and is shown only to illustrate the idea of substitution.

The second method is the Chien search. This is a practical method that can be
used to find the roots of equations of low degree or high degree.

After discussing the Chien search, alternatives will be explored for finding roots of
nonlinear equations of low degree.

SUBSTITUTION METHOD - BRUTE FORCE

Assume the roots of

    x^3 + σ1·x^2 + σ2·x + σ3 = 0

must be found. The circuit below could be used:

    [circuit diagram: x feeds SQUARE and MULTIPLY blocks to form x^3, σ1·x^2,
     and σ2·x; these terms and σ3 are summed into a ZERO DETECT]

Each possible finite field value must be substituted for x while checking the output
of the zero detector.

This circuit is easy to understand, although it is not practical because of circuit
complexity.

SUBSTITUTION METHOD - CHIEN SEARCH

Assume the roots of

    x^3 + σ1·x^2 + σ2·x + σ3 = 0

must be found. The Chien search circuit below could be used:

    [Chien search circuit diagram: registers initialized to σ1, σ2, and σ3, each
     clocked through a fixed multiplier; the register outputs and the constant 1
     are summed into a ZERO DETECT]

The circuit is initialized as shown. If the zero detect output is immediately asserted, a^0 is a root. The circuit is clocked. If the zero detect output is then asserted,
a^1 is a root. The circuit is clocked again. If the zero detect output is then active, a^2
is a root. Operation continues as described until all finite field values have been
substituted and all roots recorded.

This method uses less complex circuits than the "brute force" method.

The example circuit above finds roots of finite field equations of degree three.
The circuit can be extended in a logical fashion to find the roots of equations of a
higher degree.
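The Chien search can be sketched in Python; here the monic x^3 term is tracked in a fourth register rather than as a constant input, and the test polynomial is the product (x + a^0)·(x + a^1)·(x + a^2) = x^3 + a^5·x^2 + a^6·x + a^3 computed earlier:

```python
# Chien search over GF(8): evaluate the cubic at a^0 .. a^6 by clocking
# each term's register through a fixed multiplier.
ANTILOG = [1, 2, 4, 3, 6, 7, 5]
LOG = {v: i for i, v in enumerate(ANTILOG)}

def mul(u, v):
    return 0 if 0 in (u, v) else ANTILOG[(LOG[u] + LOG[v]) % 7]

def chien_roots(s1, s2, s3):
    # Roots of x^3 + s1·x^2 + s2·x + s3 (coefficients in vector form).
    regs = [1, s1, s2, s3]                       # term values at x = a^0
    mults = [ANTILOG[3], ANTILOG[2], ANTILOG[1], ANTILOG[0]]
    roots = []
    for k in range(7):
        if regs[0] ^ regs[1] ^ regs[2] ^ regs[3] == 0:
            roots.append(k)                      # a^k is a root
        regs = [mul(r, m) for r, m in zip(regs, mults)]
    return roots

print(chien_roots(0b111, 0b101, 0b011))  # [0, 1, 2]
```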

RECIPROCAL ROOTS

There are times when the reciprocals of roots of finite field equations are required. If

    x^3 + σ1·x^2 + σ2·x + σ3 = 0

is an equation for which reciprocal roots are required, then

    σ3·x^3 + σ2·x^2 + σ1·x + 1 = 0

is an equation whose roots are the reciprocals of the roots of the first equation. The
Chien search circuit below can be used to find reciprocal roots.

    [circuit diagram]

In this circuit, the inputs to the XOR circuit are from the multipliers instead of
the registers because the equation is evaluated at a^1 first.

FINDING ROOTS OF EQUATIONS OF DEGREE 2 - AN EXAMPLE

We illustrate the method by generating a quadratic table for solving y^2 + y = C in
the field GF(2^3) generated by the polynomial x^3 + x + 1 over GF(2).

First generate the antilog table for the field. Next construct a table giving C
when y is known. Then construct a table giving y when C is known (Table A below).

    Antilog Table          'y' is known         Table A: 'C' is known
    Exponent  Vector        y      C             C      y
       0       001         000    000           000    000,001
       1       010         001    000           001    No solution
       2       100         010    110           010    100,101
       3       011         011    110           011    No solution
       4       110         100    010           100    110,111
       5       111         101    010           101    No solution
       6       101         110    100           110    010,011
                           111    100           111    No solution

We may verify the validity of Table A by using it to solve the following equations:

    y^2 + y = a^2  =>  y = 110, 111
    y^2 + y = a^5  =>  y = No solution
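Generating Table A can be sketched in Python (helper names are ours):

```python
# Tabulate C = y^2 + y for every y in GF(8), then invert the map.
ANTILOG = [1, 2, 4, 3, 6, 7, 5]
LOG = {v: i for i, v in enumerate(ANTILOG)}

def square(y):
    return 0 if y == 0 else ANTILOG[(2 * LOG[y]) % 7]

table_a = {}
for y in range(8):
    c = square(y) ^ y                 # C = y^2 + y
    table_a.setdefault(c, []).append(y)

print(sorted(table_a[0b100]))  # [6, 7]  i.e. y = '110' or '111'
print(table_a.get(0b111))      # None: y^2 + y = a^5 has no solution
```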

FINITE FIELD PROCESSORS

Finite field processors are programmable or microprogrammable processors which
are designed especially for finite field computation. An example for computing in
GF(256) is shown below. Except where noted, all paths are eight bits wide.

    [processor block diagram: work buffer, registers A through H, XOR, LOG and
     ANTILOG ROMs, and an 8-BIT BINARY ADDER MOD 255]

Adding two finite field elements from the work buffer consists of the following
steps.

1. Transfer the first element to the A register.
2. Transfer the second element to the B register.
3. XOR the contents of the A and B registers and set the result in the C register.
4. Transfer the C register to the work buffer.
Each of these steps can be a separate instruction or part of a single instruction.
Multiplying finite field elements from the work buffer consists of the following
steps:

1. Transfer the first element to the D register.
2. Transfer the second element to the E register.
3. Add logs of the finite field elements and place the antilog of the results in
the F register.
4. Transfer the F register to the work buffer.
As in finite field addition, each step can be a separate instruction or part of a
single instruction.
If either multiplication operand is zero, the result must be zero. Since the log of
zero is undefined, this case must receive special attention. It is handled by the zero-detect circuits connected to the D and E registers and controlling the gate at the input
of the F register.

For the processor under consideration, logs must be added modulo 255. Eight-bit
binary adders add modulo 256. They can be used to add modulo 255 by connecting
"carry out" to "carry in". For the antilog table, the contents of location 255 are the
same as location zero.
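The end-around-carry trick can be sketched in Python (the function name is ours):

```python
# An 8-bit adder with "carry out" wired to "carry in" adds modulo 255.
def add_mod255(i, j):
    s = i + j                    # 8-bit add producing a carry bit
    if s > 255:
        s = (s & 0xFF) + 1       # feed the carry back in
    return s

print(add_mod255(200, 100))  # 45  (300 mod 255)
print(add_mod255(254, 1))    # 255 (equivalent to 0; the antilog table's
                             #      location 255 repeats location zero)
```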
Finite field division is accomplished with the same steps used for finite field multiplication, except logs are subtracted.
The log operation could be implemented as follows:
1. Load the finite field value in register G.

2. Move the log of the finite field value from the ROM tables to register H.
3. Store register H in the work buffer.
There are many design options available when designing a finite field processor.
The options selected depend on the logic family to be used, cost, performance and other
design considerations. The options selected for an LSI design would differ from those
selected for a discrete design.


A partial list of operations that have been implemented on real world finite-field
processors is shown below.

- Finite field addition
- Finite field multiplication
- Finite field division
- Logarithm
- Antilogarithm
- Fetch one root of the equation y^2 + y + C = 0

- Take cube root
- Compare finite field values
- Branch unconditional
- Branch conditional
Non-finite-field operations that may be implemented include:
- Binary addition and subtraction
- Logical AND and inclusive-OR operations
- Operations for controlling error-correction hardware.
A finite field processor implementing subfield multiplication is shown in Section 5.4.


2.7 SUBFIELD COMPUTATION

In this section, a large field, GF(2^(2*n)), generated by a small field, GF(2^n), is discussed. Techniques are developed to accomplish operations in the large field by performing several operations in the small field.

Let elements of the small field be represented by powers of β. Let elements of
the large field be represented by powers of a.

The small field is defined by a specially selected polynomial of degree n over
GF(2). The large field is defined by the polynomial:

    x^2 + x + β

over the small field.

Each element of the large field, GF(2^(2*n)), can be represented by a pair of elements from the small field, GF(2^n). Let x represent an arbitrary element from the large
field. Then:

    x = x1·a + x0

where x1 and x0 are elements from the small field, GF(2^n). The element x from the
large field can be represented by the pair of elements (x1,x0) from the small field.
This is much like representing an element from the field of Figure 2.5.1 with three
elements from GF(2), (x2,x1,x0).

Let a be any primitive root of:

    x^2 + x + β

Then:

    a^2 + a + β = 0

Therefore:

    a^2 = a + β

The elements of the large field, GF(2^(2*n)), can be defined by the powers of a. For
example:

    a^2 = a + β
    a^3 = a·a^2
        = a·(a + β)
        = a^2 + a·β
        = a + β + a·β
        = (β + 1)·a + β

This list of elements can be denoted

    0        0      0
    a^0      0      1
    a^1      1      0
    a^2      1      β
    a^3    β + 1    β

The large field, GF(2^(2*n)), can be viewed as being generated by the following shift
register. All paths are n bits wide.

    [shift register diagram]

This shift register implements the polynomial x^2 + x + β over GF(2^n).

Methods for accomplishing finite field operations in the large field by performing
several simpler operations in the small field are developed below.

ADDITION

Let x and w be arbitrary elements from the large field. Then:

    y = x + w
      = (x1·a + x0) + (w1·a + w0)
      = (x1 + w1)·a + (x0 + w0)

MULTIPLICATION

The multiplication of two elements from the large field can be accomplished with
several multiplications and additions in the small field. This is illustrated below:

    y = x·w
      = (x1·a + x0)·(w1·a + w0)
      = x1·w1·a^2 + x1·w0·a + x0·w1·a + x0·w0

But a^2 = a + β, so

    y = x1·w1·(a + β) + x1·w0·a + x0·w1·a + x0·w0
      = (x1·w1 + x1·w0 + x0·w1)·a + (x1·w1·β + x0·w0)
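The multiplication formula can be sketched in Python using GF(4) as the small field; the choice β = w (a primitive element of GF(4), so that x^2 + x + β is primitive over GF(4)) and all names are our assumptions, not the book's:

```python
# GF(16) multiplication built from GF(4) operations, per the formula above.
# GF(4) = {0, 1, w, w^2} encoded as 0..3 with w = 2, w^2 = 3; addition = XOR.
G4_MUL = [[0, 0, 0, 0],
          [0, 1, 2, 3],
          [0, 2, 3, 1],          # w·w = w^2, w·w^2 = 1
          [0, 3, 1, 2]]
B = 2                            # the constant β in x^2 + x + β

def big_mul(x, wv):
    x1, x0 = x
    w1, w0 = wv
    p = G4_MUL[x1][w1]                            # x1·w1
    y1 = p ^ G4_MUL[x1][w0] ^ G4_MUL[x0][w1]      # (x1·w1 + x1·w0 + x0·w1)
    y0 = G4_MUL[p][B] ^ G4_MUL[x0][w0]            # (x1·w1·β + x0·w0)
    return (y1, y0)

# a = (1, 0) should generate all 15 nonzero elements of GF(16):
elem, seen = (1, 0), set()
for _ in range(15):
    seen.add(elem)
    elem = big_mul(elem, (1, 0))
print(len(seen), elem)  # 15 (1, 0)
```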

Methods for accomplishing other operations in the large field can be developed in
a similar manner. The methods for several additional operations are given below without
the details of development.

INVERSION

    y = 1/x

                 x1                           x1 + x0
      = ----------------------·a + ----------------------
        x1^2·β + x1·x0 + x0^2      x1^2·β + x1·x0 + x0^2

LOGARITHM

    L = LOG_a(x)

Let,

    J = LOG_β[ x1^2·β + x1·x0 + x0^2 ]

    K = 0             if x1 = 0
      = 1             if x1 ≠ 0 and x0 = 0
      = f1(x0/x1)     if x1 ≠ 0 and x0 ≠ 0

Then,

    L = {the integer whose residue modulo (2^n - 1) is J and whose
         residue modulo (2^n + 1) is K}

This integer can be determined by the application of the Chinese Remainder
Method. See Section 1.2 for a discussion of the Chinese Remainder Method.

The function f1 can be accomplished with a table of 2^n entries, which can be generated with the following algorithm:

    BEGIN
        Set table location f1(0) = 0
        FOR I = 2 to 2^n
            Calculate the GF(2^(2*n)) element Y = a^I = Y1·a + Y0
            Calculate the GF(2^n) element Y0/Y1
            Set f1(Y0/Y1) = I
        NEXT I
    END

ANTILOGARITHM

    x = ANTILOG_a(L)

      = ANTILOG_β(INT(L/(2^n+1)))           if [L MOD (2^n+1)] = 0
      = [ANTILOG_β(INT(L/(2^n+1)))]·a       if [L MOD (2^n+1)] = 1
      = x1·a + x0                           if [L MOD (2^n+1)] > 1

where x1 and x0 are determined as follows. Let

    a = ANTILOG_β[ L MOD (2^n - 1) ]
    b = f2[ (L MOD (2^n+1)) - 2 ]

Then,

    x1 = [ a / (b^2 + b + β) ]^(1/2)
    x0 = b·x1

The function f2 can be accomplished with a table of 2^n entries. This table can be
generated with the following algorithm:

    BEGIN
        Set f2(2^n - 1) = 0
        FOR I = 0 to 2^n - 2
            Calculate the GF(2^(2*n)) element Y = a^(I+2) = Y1·a + Y0
            Calculate the GF(2^n) element Y0/Y1
            Set f2(I) = Y0/Y1
        NEXT I
    END

APPLICATIONS

In this section, techniques were introduced for performing operations in a large
field, GF(2^(2*n)), by performing several simpler operations in a small field, GF(2^n).

One application of these techniques is computing in a very large finite field.
Assume that it is necessary to perform computation in GF(65536). A multiplication
operation might be accomplished by fetching logs from a log table, adding logs modulo
65535, and fetching an antilog. The log and antilog tables would each be 65536 locations of 16 bits each. The total storage space required for these tables would be a
quarter million bytes. An alternative is to define GF(65536) as described in this section
and to perform operations in GF(65536) by performing several simpler operations in
GF(256). These GF(256) operations could be performed with 256-byte log and antilog
tables.

Another application is performing finite field multiplication directly with ROMs
for double-bit-memory correction. Instead of using one ROM with 2^(2*n) locations, use
four ROMs each with 2^n locations. An example application to multiplier ROMs is
shown below.
A GF(256) MULTIPLIER USING A SINGLE ROM

[Figure: a 65536x8 ROM; the 16 input address lines carry the two 8-bit operands X and W, and the 8 output lines carry Y = X·W.]

A GF(256) MULTIPLIER USING FOUR SMALLER ROMS

[Figure: the same GF(256) product formed from four smaller ROMs.]

CHAPTER 3 - CODES AND CIRCUITS
3.1 FIRE CODES
Fire codes are linear cyclic single-burst-correcting codes defined by generator
polynomials of the form:

    g(x) = c(x)·p(x) = (x^c + 1)·p(x)

where

    c    = Degree of the c(x) factor of g(x)
    p(x) = Any irreducible polynomial with period e, where e does not divide c

Let:

    z = Degree of the p(x) factor of g(x)
    m = Degree of g(x) = total number of check bits = c + z
    n = Record length in bits including check bits; n ≤ LCM(e,c)
    b = Guaranteed single-burst correction span in bits
    d = Guaranteed single-burst detection span in bits

The maximum record length in bits, including check bits, is equal to the period of
g(x), which is the least common multiple of e and c. The guaranteed single-burst
correction and detection spans for the Fire codes are subject to the following inequalities:

    b ≤ z
    b ≤ d
    b + d ≤ c + 1

These inequalities provide a lower bound for d. When the record length is much
less than the period of the polynomial, this bound for d is conservative. In this case,
the true detection span should be determined by a computer search.
Given a fixed and limited total number of check bits, selecting the degrees of p(x)
and c(x) will involve a tradeoff. Increasing the degree of p(x) will provide more
protection against miscorrection on double-bit errors (less pattern sensitivity), while
increasing the degree of c(x) will provide a greater correction span and/or detection
span. The degree of c(x) should not be used to adjust the period of a Fire code unless
the effects of pattern sensitivity are fully understood.
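The bookkeeping implied by the definitions and inequalities above can be collected into a small helper. This is a hypothetical sketch, not from the original text, and the parameters in the usage note are illustrative only.

```python
# Hypothetical helper collecting the Fire-code relations: maximum record
# length LCM(e, c) and the guaranteed detection span implied by b + d <= c + 1.

from math import gcd

def fire_code_limits(e, c, z, b):
    """For g(x) = (x^c + 1)*p(x) with p(x) irreducible of degree z and
    period e (e must not divide c), return (max record length, guaranteed
    detection span) for a chosen correction span b."""
    assert c % e != 0, "e must not divide c"
    assert b <= z, "correction span cannot exceed the degree of p(x)"
    d = c + 1 - b                  # largest d satisfying b + d <= c + 1
    assert b <= d, "correction span cannot exceed detection span"
    return e * c // gcd(e, c), d
```

For instance, with a degree-11 p(x) of period 89 and c = 21 (illustrative numbers), the code period would be LCM(89, 21) = 1869 bits, with a guaranteed detection span of 11 bits at a correction span of 11 bits.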
Overall miscorrection probability for a Fire code for bursts exceeding the guaranteed detection capability is given by the equation below, assuming all errors are possible and equally probable:

    Pmc ≈ n·2^(b-1) / 2^m

Miscorrection probability for double-bit errors separated by more than the guaranteed detection span, assuming all errors of this type are possible and equally probable,
is given by:

    Pmcdb ≈ (b-1)·2·n / (c·(2^z - 1))

This equation is applicable only when the product of Pmcdb and the number of possible
double-bit errors is much greater than one. When this is not true, a computer search
should be used to determine the actual Pmcdb.
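As a minimal sketch, the overall miscorrection probability can be evaluated directly from the Pmc equation above; the function name and sample parameters are illustrative only.

```python
# Minimal sketch: evaluating the Pmc approximation Pmc ~ n * 2^(b-1) / 2^m
# for record length n bits, correction span b, and m check bits.

def pmc(n, b, m):
    """Approximate overall miscorrection probability."""
    return n * 2.0 ** (b - 1) / 2.0 ** m
```

For a 4096-bit record with 32 check bits (n = 4128) and a correction span b = 8, this gives roughly 1.2e-4.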
An advantage of the Fire code is simplicity. A disadvantage is pattern sensitivity.
The (x^c + 1) factor of the Fire code generator polynomial causes the code to be susceptible to miscorrection on short double-bursts. The Pmcdb equation given above
provides a number for this susceptibility for one particular short double-burst (the
double-bit error). For more information on the Fire code's pattern sensitivity see
Sections 4.4 and 4.6.
The pattern sensitivity of the Fire Code can be reduced to any arbitrary level by
adding sufficient redundancy to the p(x) factor.
There are at least five ways to perform the correction step:
1. Clock around the full period of the polynomial.
2. Shorten the code by performing simultaneous multiplication and division of
the data polynomial. A computer search may be required to minimize the
complexity of feedback logic. The period after shortening can be selected to
be precisely the required period.
3. Select a nonprimitive polynomial for p(x). This method yields a less complex
feedback structure than method 2. However, it is only possible to select a
period that is close to the required period. A computer search is required.
4. Perform the correction function with the reciprocal polynomial. This requires
that either a self-reciprocal polynomial be used, or that the feedback terms
be modified during correction. In addition, the contents of the shift register
must be flipped end-for-end before performing the corrections.
This method differs from methods 1 through 3 because the maximum number
of shifts during correction depends on the record length instead of the polynomial period. Therefore, correction is faster for the case when the record
length is shorter than the polynomial period.
5. Decode using the Chinese Remainder Method. This method requires only a
fraction of the number of shifts required by the other methods. Thus, significant improvements in decoding speed can be obtained.
Any of the methods above may be implemented in hardware or software. However,
for software, methods 4 and 5 are the most applicable. Methods 4 and 5 are more
flexible for handling variable record lengths than the other methods.
The Fire code may be implemented with bit-serial, byte-serial, or k-bit-serial logic.
See Section 4.1 for k-bit-serial techniques.

BIT SERIAL
Fire-code circuit implementations using bit-serial techniques are less complex than
those using byte-serial techniques.
Less logic is required for the shift register as well as for detecting the correctable
pattern.
Polynomial selection is easier for the bit-serial implementation.
The disadvantage of bit-serial circuit implementations is shift rate limitations.
BYTE SERIAL
Byte-serial circuit implementations have speed as their advantage.
One disadvantage is greater logic complexity compared to bit-serial implementations. More logic is required to implement the shift register and to detect the correctable pattern. Pattern detection is more complex because the pattern is never justified
to one end of the shift register. The problem is to determine, within one shift (byte
time), whether a pattern unjustified in several byte-wide registers is of length b bits or less.
Another disadvantage of byte-serial implementations is that a computer search may
be required for polynomial selection if the feedback logic is to be minimized.
Both bit-serial and byte-serial logic may be implemented in either hardware or
software.
Byte-serial implementations in software usually require look-up tables (for
effective speed).


DECODING ALTERNATIVES FOR THE FIRE CODE

The Fire code can be decoded with the methods described in Section 2.3. Two examples of real world decoding of the Fire code are discussed in Sections 5.2.2 and 5.2.3.
The internal-XOR or external-XOR forms of shift registers may be used for implementing Fire codes. The decoding methods of Section 5.2 apply to the Fire code as
well as to computer-generated codes.
In many cases, logic can be saved by using sequential logic to determine if the
shift register is nonzero at the end of a read.

It is possible to use a counter to detect the correctable pattern.
The counter
counts the number of zeros preceding the error pattern. For the internal-XOR form of
shift register the counter can monitor the high order shift register stage. A one clears
the counter. A zero bumps the counter. The counter function can also be accomplished
by a software routine commanding shifts and monitoring the high order shift register
stage.
It is harder to detect the correctable pattern for byte-serial implementations than
for bit-serial implementations. The second flowchart of Section 5.3.3 shows a software
algorithm for detecting the correctable pattern for a byte-serial software implementation.
The following page shows a method for accomplishing this for a byte-serial
hardware implementation.


[Figure: byte-wide shift register stages feeding an OR gate across the high-order bits; a NOR gate output signals CORRECTABLE PATTERN FOUND.]

Figure 3.1.1  Byte-serial Hardware Correction

Correction span is assumed to be eight bits. When the correctable pattern
first appears in the shift register, at least one bit of the pattern will be in the
low order byte.


3.2 COMPUTER-GENERATED CODES

Computer-generated codes are based on the fact that if a large number of polynomials of a particular degree are picked at random, some will meet previously defined
specifications, provided the specifications are within certain bounds.
There are equations that predict the probability of success when evaluating polynomials against a particular specification.
For computer-generated codes, correction and detection spans are determined by
computer evaluation. Overall miscorrection probability, assuming all errors possible and
equally probable, is given by:

    Pmc ≈ n·2^(b-1) / 2^m

where,

    b = Guaranteed single-burst correction span in bits
    n = Record length in bits including check bits
    m = Total number of check bits

In some cases, tens of thousands of computer-generated polynomials have been
evaluated in order to find a polynomial with particular characteristics.
Properly selected computer-generated codes do not have the pattern sensitivity of
the Fire code. It is possible to select computer-generated codes that have a guaranteed
double-burst-detection span. The miscorrecting patterns of these codes are more random than those of the Fire code.
The decoding alternatives for the computer-generated code are the same as those
previously described for the Fire code.


COMPUTER SEARCH RUN
This run evaluates polynomials for use with 512-byte records and correction spans
to 8 bits. This run is for illustration only. The polynomials below which have a good
single-burst detection span may not test well against other criteria.

Polynomial      Single-burst detection spans for given
(octal)         correction span of 1 through 8:

40001140741
41040103211
42422242001
42010100127
42200301203
40110425041
40442115001
44104042501
40030201415
40030070211
40006241441
40430250401
44401144041
41442001203
44431120001
40056110021
40200211701
40001201163
40410423003
42000027421
40001741005
42000045065
41114210201
44011511001
41200103203
43140224001

[Table: the single-burst detection-span columns for correction spans 1 through 8 did not survive this reproduction; values in the run range from about 22 down to 7.]
COMPUTER SEARCH RUN (CONTINUED)

Polynomial      Single-burst detection spans for given
(octal)         correction span of 1 through 8:

40000074461
40527200001
40342100221
40400264411
44001140305
41450040051
40060405013
41030210031
40201202131
41024021025
40006052403
40152014401
46200002341
44501404011
40250002053
43012104011
42012430201
42114023001
43300020241
40001403207
40214020503
40260302005
40252200241
40004560111
40000404347
42200036011
42202210241
40504100431
42012401111
43041105001
40022044225
40500001465

[Table: the single-burst detection-span columns for correction spans 1 through 8 did not survive this reproduction.]
13

SPECTRUM OF DETECTION SPANS FOR COMPUTER SEARCH RUN

[Three histograms of polynomial counts versus detection span (7 through 17 on the horizontal axis):]

CORRECTION SPAN 6: AVERAGE DETECTION SPAN = 13.7

CORRECTION SPAN 7: AVERAGE DETECTION SPAN = 12.9

CORRECTION SPAN 8: AVERAGE DETECTION SPAN = 11.9


MOST PROBABLE DETECTION SPAN
The equation below gives an approximation for the most likely single-burst detection span of a single polynomial picked at random.

    d ≈ [0.5287 - ln(-ln(1 - (n·2^b)/2^m))]/0.6932 + 1

where,

    b = Single-burst correction span
    d = Single-burst detection span
    n = Number of information plus check bits
    m = Number of check bits

PROBABILITY OF SUCCESS
The equation below gives an approximation for the probability that a single polynomial picked at random will meet specified criteria.

where n, m, b, and d are as defined above.


3.3 BINARY BCH CODES
Binary BCH codes correct random bit errors. Coefficients of the data polynomial
and check symbols are from GF(2), i.e., they are binary '0' or '1', but computation of
error locations and values is performed using w-bit symbols in a finite field GF(2^w),
where w is greater than one.

BINARY BCH CODE SUMMARY
Let:

    w     = Number of bits required to represent each element of GF(2^w),
            the field wherein computations are performed
    n     = Selected record length in bits, including check bits
    t     = Number of bits the code is capable of correcting
    d     = Minimum Hamming distance
    m     = Degree of code generator polynomial = Number of check bits
    mi(x) = Minimum polynomial in GF(2) of α^i in GF(2^w)
    g(x)  = Code generator polynomial = LCM[m1(x),m3(x),...,m2·t-1(x)]
    k     = Number of factors of g(x) [typically t]
    D(x)  = Data polynomial
    W(x)  = Write redundancy polynomial = [x^m·D(x)] MOD g(x)
    C(x)  = Transmitted codeword polynomial = x^m·D(x) + W(x)
    E(x)  = Error polynomial = x^L1 + x^L2 + ...
    C'(x) = Received codeword polynomial = C(x) + E(x)

Then the following relationships hold:

    n ≤ 2^w - 1
    d = 2·t + 1
    m ≤ w·t

THE GENERATOR POLYNOMIAL
The generator polynomial for a t-error-correcting binary BCH code is:

    g(x) = LCM[m1(x),m3(x),...,m2·t-1(x)]

where mi(x) is the minimum polynomial in GF(2) of α^i in GF(2^w); see the glossary for
the definition of a minimum polynomial. The LCM function above accounts for the fact
that if the minimum polynomials of two or more powers of α are identical, only one
copy of the polynomial is multiplied into g(x). In most cases no duplicate polynomials
exist, and g(x) is the product of them all:

    g(x) = m1(x)·m3(x)·...·m2·t-1(x)
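As an illustration of this construction, the sketch below builds g(x) for the t=2 code over GF(2^4) worked out in the example later in this section. The primitive polynomial x^4 + x + 1 is assumed, and the helper names are ours.

```python
# Constructing g(x) = m1(x)*m3(x) for a t=2 binary BCH code over GF(2^4).
# GF(2)[x] polynomials are held as integer bit masks (bit i = coeff of x^i).

EXP = [1]
for _ in range(14):
    v = EXP[-1] << 1
    if v & 0x10:
        v ^= 0b10011                    # reduce by x^4 + x + 1 (assumed)
    EXP.append(v)
LOG = {v: i for i, v in enumerate(EXP)}

def gf16_mul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 15]

def polymul(a, b):                      # product in GF(2)[x] (carry-less)
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        b >>= 1
    return p

def minimal_poly(i):
    """Minimum polynomial of alpha^i: product of (x + alpha^j) over the
    conjugates j = i, 2i, 4i, ... (mod 15).  Result is a GF(2) bit mask."""
    conjugates = []
    j = i
    while j not in conjugates:
        conjugates.append(j)
        j = 2 * j % 15
    coeffs = [1]                        # GF(2^4) coefficients, high degree first
    for j in conjugates:
        new = [0] * (len(coeffs) + 1)
        for k, c in enumerate(coeffs):
            new[k] ^= c                           # c * x
            new[k + 1] ^= gf16_mul(c, EXP[j])     # c * alpha^j
        coeffs = new
    mask = 0
    for c in coeffs:                    # conjugate products have GF(2) coeffs
        mask = mask << 1 | c
    return mask

m1 = minimal_poly(1)                    # x^4 + x + 1
m3 = minimal_poly(3)                    # x^4 + x^3 + x^2 + x + 1
g = polymul(m1, m3)                     # x^8 + x^7 + x^6 + x^4 + 1
```

The conjugates of α^1 are α^1, α^2, α^4, α^8, which is why m1(x) = m2(x) = m4(x) and only odd-indexed minimum polynomials need be multiplied into g(x).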

ENCODING

Encoding for a binary BCH code can be performed with a bit-serial shift register
implementing the generator polynomial of the form shown below. All paths and storage
elements are bit-wide. Multipliers comprise either a connection or no connection.

[Figure: bit-serial encoder shift register implementing g(x); a GATE controls the feedback path and a MUX selects between write data and check bits on the WRITE DATA/CHECK BITS output.]
For applications such as error correction on semiconductor memory, an encoder implementing combinatorial logic is preferable to one implementing sequential logic. Such
an encoder includes a parity tree for each bit of redundancy. The parity tree for a
coefficient Wi of the x^i term of the write redundancy polynomial W(x) includes each
data bit Dj for which the coefficient of the x^i term of

    [x^m · x^j] MOD g(x)

is one.

PARITY TREES

[Figure: parity trees forming the redundancy bits W(m-1) through W0 from the data bits D(n-m-1) through D0.]

An example of a combinatorial-logic encoder is given in the BINARY BCH CODE
EXAMPLE below.


DECODING
Decoding generally requires 5 steps:

1. Generate the syndromes.
2. Calculate the coefficients of an error locator polynomial.
3. Find the roots of the error locator polynomial to determine error location
vectors.
4. Calculate logs of error location vectors to obtain error locations.
5. Invert bits in error.

SYNDROME GENERATION
The syndromes contain information about the locations of errors:

    S1 = α^L1 + α^L2 + ...
    S3 = α^(3·L1) + α^(3·L2) + ...
    S5 = α^(5·L1) + α^(5·L2) + ...
     ...
    Sk = α^(k·L1) + α^(k·L2) + ...

It is possible to compute the syndromes directly from the received codeword polynomial C'(x) with the following equation:

    Si = C'(α^i)

The above equation can be implemented with either sequential or combinatorial logic.
The syndromes can also be computed by computing the residues of the received
codeword when divided by each factor of the generator polynomial. Let:

    ri(x) = C'(x) MOD mi(x)

then the resulting residues may be used to compute the syndromes:

    Si = ri(α^i)

The above equations can be implemented sequentially, combinatorially, or with a mixture
of sequential and combinatorial logic. An example of each of the above methods is shown in the BINARY BCH CODE EXAMPLE below.


COMPUTING COEFFICIENTS OF ERROR LOCATOR POLYNOMIALS
The error locator polynomial has the following form:

    x^e + σ1·x^(e-1) + ... + σe-1·x + σe = 0

The coefficients of the error locator polynomial are related to the syndromes by
the following system of linear equations, called Newton's identities:

    σ1 = S1
    σ1·S2 + σ2·S1 + σ3 = S3
    σ1·S4 + σ2·S3 + σ3·S2 + σ4·S1 + σ5 = S5
     ...
    σ1·S2t-2 + ... + σ2t-2·S1 + σ2t-1 = S2t-1

For error locator polynomials of low degree, the coefficients of the error locator
polynomial are computed by solving Newton's identities using determinants. For error
locator polynomials of high degree, the coefficients are computed by solving Newton's
identities with Berlekamp's iterative algorithm.

FINDING THE ROOTS OF ERROR LOCATOR POLYNOMIALS
The roots of error locator polynomials are error location vectors. The logs of
error location vectors are error locations.

The error locator polynomial of degree one is:

    x + σ1 = 0

The single root of this equation is simply:

    x = σ1

The error locator polynomial of degree two is:

    x^2 + σ1·x + σ2 = 0

This equation can be solved using a precomputed look-up table by first applying a
substitution to transform it into the following form (see Sections 2.6 and 3.4 for more
details):

    y^2 + y + c = 0

There are similar approaches to solving other low-degree error locator polynomials.
The Chien search is used to solve error locator polynomials of high degree.
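The idea behind the Chien search can be sketched in brute-force form: evaluate the error locator polynomial at every nonzero field element and keep the logs of the roots. This simplified sketch assumes GF(2^4) built from x^4 + x + 1; a production Chien search updates each term with one constant multiply per shift instead of re-evaluating from scratch.

```python
# Brute-force sketch of the Chien-search idea over GF(2^4).

EXP = [1]
for _ in range(14):
    v = EXP[-1] << 1
    if v & 0x10:
        v ^= 0b10011                    # reduce by x^4 + x + 1 (assumed)
    EXP.append(v)
LOG = {v: i for i, v in enumerate(EXP)}

def mul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 15]

def chien_roots(sigma):
    """sigma = [1, s1, ..., se] for x^e + s1*x^(e-1) + ... + se.
    Returns the logs of the roots, i.e. the error locations."""
    roots = []
    for i in range(15):
        acc = 0
        for c in sigma:                 # Horner evaluation at alpha^i
            acc = mul(acc, EXP[i]) ^ c
        if acc == 0:
            roots.append(i)
    return roots
```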


BINARY BCH CODE EXAMPLE
Assume a two-error-correcting code over GF(2^4). The generator polynomial is:

    g(x) = m1(x)·m3(x)
         = (x^4 + x + 1)·(x^4 + x^3 + x^2 + x + 1)
         = x^8 + x^7 + x^6 + x^4 + 1

The codeword length is limited to 2^4 - 1 = 15 bits, so the code may be used to protect a
seven-bit data polynomial.

SEQUENTIAL LOGIC ENCODER

[Figure: shift register encoder dividing x^8·D(x) by g(x); a GATE controls the feedback and a MUX selects D(x) or check bits onto C(x).]
COMBINATORIAL LOGIC ENCODER
The write-redundancy polynomial coefficients are given by the following parity
trees. Each coefficient Wi is formed as the XOR sum of those data bits Dj whose
row contains a '1' in Wi's column.

                           W7 W6 W5 W4 W3 W2 W1 W0

    D6  [x^8·x^6] MOD g(x)  1  1  1  0  1  0  0  0
    D5  [x^8·x^5] MOD g(x)  0  1  1  1  0  1  0  0
    D4  [x^8·x^4] MOD g(x)  0  0  1  1  1  0  1  0
    D3  [x^8·x^3] MOD g(x)  0  0  0  1  1  1  0  1
    D2  [x^8·x^2] MOD g(x)  1  1  1  0  0  1  1  0
    D1  [x^8·x^1] MOD g(x)  0  1  1  1  0  0  1  1
    D0  [x^8·x^0] MOD g(x)  1  1  0  1  0  0  0  1

SYNDROME GENERATION

SEQUENTIAL CIRCUIT FOR S1

[Figure: shift register dividing C'(x) by m1(x).]

SEQUENTIAL CIRCUIT FOR S3

[Figure: shift register dividing C'(x) by m3(x), with α and α^2 constant-multiplier feedback taps.]

ALTERNATIVE SEQUENTIAL CIRCUIT FOR S3

[Figure: circuit computing S3 = C'(α^3) directly.]

COMBINATORIAL LOGIC SYNDROME CIRCUITS
The parity tree for a coefficient Sij of the x^j term of syndrome Si includes each
received codeword bit Ck for which the coefficient of the x^j term of

    [x^(k·i)] MOD mi(x)

is one.

[Figure: parity trees forming the syndrome bits from the received codeword bits.]

COMPUTING THE COEFFICIENTS OF THE TWO-ERROR LOCATOR POLYNOMIAL
For the two-error case the system of linear equations below must be solved.
These equations follow from Newton's identities.

    (1)·σ1  + (0)·σ2  = S1
    (S2)·σ1 + (S1)·σ2 = S3

Solving by determinants (the determinant of the coefficient matrix is S1):

    σ1 = (S1)^2/S1 = S1

    σ2 = (S3 + S1·S2)/S1 = (S3 + S1·(S1)^2)/S1 = (S3 + (S1)^3)/S1

FINDING ROOTS OF THE TWO-ERROR LOCATOR POLYNOMIAL
The algorithm below defines a fast method for finding roots of the error locator
polynomial in the two-error case. This algorithm can be performed by a finite field
processor. For double-bit memory correction it is performed by combinatorial logic.
The two-error locator polynomial is

    x^2 + σ1·x + σ2 = 0

where

    σ1 = S1   and   σ2 = (S3 + (S1)^3)/S1

Substitute

    x = σ1·y

to obtain

    y^2 + y + C = 0

where

    C = σ2/(σ1)^2 = ((S1)^3 + S3)/(S1)^3

Fetch Y1 from TBLA (see Section 2.6) using C as the index. Then form

    Y2 = Y1 + α^0

Apply reverse substitution of

    y = x/σ1

to obtain

    X1 = α^L1 = σ1·Y1 = S1·Y1

and

    X2 = α^L2 = σ1·Y2 = S1·Y2

Finally, calculate the error locations:

    L1 = LOG_α(X1)
    L2 = LOG_α(X2)

For a binary BCH code, the error values are by definition equal to '1'.
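The table-driven procedure above can be sketched as follows for GF(2^4). The field polynomial x^4 + x + 1 is assumed, and TBLA is built here by exhaustive search rather than by the method of Section 2.6.

```python
# Two-error root finding via the TBLA lookup: TBLA maps C to a root Y1 of
# y^2 + y = C; C values with no root indicate an uncorrectable error.

EXP = [1]
for _ in range(14):
    v = EXP[-1] << 1
    if v & 0x10:
        v ^= 0b10011                    # reduce by x^4 + x + 1 (assumed)
    EXP.append(v)
LOG = {v: i for i, v in enumerate(EXP)}

def mul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 15]

def div(a, b):
    return 0 if a == 0 else EXP[(LOG[a] - LOG[b]) % 15]

TBLA = {}
for y in range(16):
    TBLA.setdefault(mul(y, y) ^ y, y)   # record one root of y^2 + y = C

def locate_two_errors(s1, s3):
    """Return the two bit locations (L1, L2) from syndromes S1 and S3."""
    sigma2 = div(s3 ^ mul(s1, mul(s1, s1)), s1)   # (S3 + S1^3)/S1
    c = div(sigma2, mul(s1, s1))                  # C = sigma2/sigma1^2
    if c not in TBLA:
        raise ValueError("uncorrectable error")
    y1 = TBLA[c]
    y2 = y1 ^ 1                                   # Y2 = Y1 + alpha^0
    x1, x2 = mul(s1, y1), mul(s1, y2)             # X = sigma1*Y
    return LOG[x1], LOG[x2]
```

For a double-bit error at locations 3 and 9, the syndromes are S1 = α^3 + α^9 and S3 = α^9 + α^12, and the routine recovers both locations.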

BCH CODE DOUBLE-BIT MEMORY CORRECTION - EXAMPLE #1

[Figure: S1 and S3 feed log circuits; a binary adder* forms LOG(S3) - LOG(S1^3); a zero-detect on that difference raises the ALARM; table lookups produce Y1 and Y2, and binary adders* form the error locations.]

*Binary addition modulo field size minus one.

This example is shown in a form that is easier to understand. Example #2 uses
the same approach but combines some of the functions.

BCH CODE DOUBLE-BIT MEMORY CORRECTION - EXAMPLE #2

[Figure: a combined CUBE/LOG circuit forms LOG(S3) - LOG(S1^3); a zero-detect raises the ALARM; binary adders* produce LOG(Y1) and LOG(Y2) and, from them, the error locations.]

*Binary addition modulo field size minus one.

This example uses the same approach as Example #1 but several functions have
been combined.

BCH CODE DOUBLE-BIT MEMORY CORRECTION - EXAMPLE #3

[Figure: a combined CUBE/INVERT circuit with zero-detect ALARM; a GF multiplier produces Y1 and Y2.]

BCH CODE DOUBLE-BIT MEMORY CORRECTION - EXAMPLE #4

[Figure: a combined CUBE/INVERT circuit with zero-detect ALARM; Y1 feeds GF multipliers producing the error location vectors X1 and X2.]

BCH CODE DOUBLE-BIT MEMORY CORRECTION - EXAMPLE #5

The mathematical basis for this example is developed by operating on the error
locator polynomial:

    x^2 + σ1·x + σ2 = 0

First substitute for σ1 and σ2 using expressions developed above:

    x^2 + S1·x + ((S1)^3 + S3)/S1 = 0

Next multiply by S1:

    S1·x^2 + (S1)^2·x + (S1)^3 + S3 = 0

Add zero in the form of (x^3 + x^3):

    (x^3 + x^3) + S1·x^2 + (S1)^2·x + (S1)^3 + S3 = 0

Finally, rearrange and combine terms to obtain a useful relation:

    (S1 + x)^3 + (S3 + x^3) = 0

[Figure: for each raw data bit position x, a cuber forms x^3; a zero-detect comparing (S1 + x)^3 with (S3 + x^3) inverts the raw data bit to produce the corrected data bit.]

One such circuit is required for each bit of the memory word.
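A quick numeric check of this relation, assuming GF(2^4) with field polynomial x^4 + x + 1 and two hypothetical error locations: a bit position x = α^i satisfies (S1 + x)^3 = S3 + x^3 exactly when i is one of the two error locations.

```python
# Numeric check of the (S1 + x)^3 + (S3 + x^3) = 0 relation in GF(2^4).

EXP = [1]
for _ in range(14):
    v = EXP[-1] << 1
    if v & 0x10:
        v ^= 0b10011                    # reduce by x^4 + x + 1 (assumed)
    EXP.append(v)

def mul(a, b):                          # GF(2^4) multiply, bit-serial
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x10:
            a ^= 0b10011
        b >>= 1
    return p

def cube(a):
    return mul(a, mul(a, a))

L1, L2 = 6, 11                          # hypothetical double-bit error
S1 = EXP[L1] ^ EXP[L2]
S3 = cube(EXP[L1]) ^ cube(EXP[L2])

flagged = [i for i in range(15)
           if cube(S1 ^ EXP[i]) == S3 ^ cube(EXP[i])]
```

Only the two erroneous positions are flagged, which is why one comparator per memory bit suffices.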

3.4 REED-SOLOMON CODES
Reed-Solomon codes are random single- or multiple-symbol error-correcting codes
operating on symbols which are elements of a finite field. The coefficients of the data
polynomial and the check symbols are elements of the field, and all encoding, decoding,
and correction computations are performed in the field. Reed-Solomon codes are inherently symbol oriented and the circuits implementing them are typically clocked once
per data symbol, although bit-serial techniques are also employed.

We shall use the Galois field with eight elements (i.e., GF(8) or GF(2^3)), introduced
in Section 2.5, in illustrating the properties and implementation of Reed-Solomon codes.


REED-SOLOMON CODE SUMMARY
Let:

    w     = Number of bits per symbol; each symbol is an element of GF(2^w)
    m     = Degree of generator polynomial = number of check symbols
    n     = Selected record length in symbols, including check symbols
    d     = Minimum Hamming distance of the code
    t     = Number of symbol errors correctable by the code
    ec    = Selected number of symbol errors to be corrected
    ed    = Number of symbol errors which the code is capable of detecting
            beyond the number selected for correction
    b     = Burst length in bits
    c     = Number of bursts correctable by the code
    A(x)  = Any polynomial in the field
    G(x)  = The code generator polynomial
    gi(x) = Any of the m factors of G(x) = (x + α^i) when m0=0
    D(x)  = Data polynomial
    W(x)  = Write redundancy polynomial = [x^m·D(x)] MOD G(x)
    C(x)  = Transmitted codeword polynomial = x^m·D(x) + W(x)
    E(x)  = Error polynomial = e1·x^L1 + e2·x^L2 + ...
    C'(x) = Received codeword polynomial = C(x) + E(x)
    R(x)  = Remainder polynomial = C'(x) MOD G(x)
    Si    = ith syndrome = C'(x) MOD gi(x)

Then the following relationships hold:

    n ≤ 2^w - 1
    d = m + 1
    ec ≤ t = INT[(d-1)/2] = INT[m/2]
    ed = dmin - 2·ec - 1 = m - 2·ec
    b ≤ (ec-1)·w + 1
    c = ec/(1 + INT[(b+w-2)/w])
REED-SOLOMON CODE SUMMARY (CONT.)

    A(x) MOD gi(x) = [A(x) MOD G(x)] MOD gi(x)                   (1)
    A(x) MOD gi(x) = A(α^i)                                      (2)
    C(x) MOD G(x)  = 0                                           (3)
    C(x) MOD gi(x) = 0                                           (4)

    R(x) = C'(x) MOD G(x)
         = [C(x) + E(x)] MOD G(x)          {by definition of C'}
         = E(x) MOD G(x)                   {by equation (3)}     (5)

    Si = C'(x) MOD gi(x)
       = [C(x) + E(x)] MOD gi(x)           {by definition of C'}
       = E(x) MOD gi(x)                    {by equation (4)}
       = E1·α^(i·L1) + E2·α^(i·L2) + ...   {by equation (2)}
       = [E(x) MOD G(x)] MOD gi(x)         {by equation (1)}
       = R(x) MOD gi(x)                    {by equation (5)}

CONSTRUCTING THE CODE GENERATOR POLYNOMIAL
The generator polynomial of a Reed-Solomon code is given by:

           m-1
    G(x) =  ∏  (x + α^(m0+i))
           i=0

where m is the number of check symbols and m0 is an offset, often zero or one. In
the interest of simplicity, we take m0 equal to zero for the remainder of the discussion.
Note that many expressions derived below must be modified for cases where m0 is not
zero. Let m=4; the code will be capable of correcting:

    t = INT(m/2) = 2

symbol errors in a codeword. The generator polynomial is:

            3
    G(x) =  ∏  (x + α^i)
           i=0

         = (x + α^0)·(x + α^1)·(x + α^2)·(x + α^3)

         = x^4 + (α^0 + α^1 + α^2 + α^3)·x^3
              + (α^0α^1 + α^0α^2 + α^0α^3 + α^1α^2 + α^1α^3 + α^2α^3)·x^2
              + (α^0α^1α^2 + α^0α^1α^3 + α^0α^2α^3 + α^1α^2α^3)·x + (α^0α^1α^2α^3)

         = x^4 + α^2·x^3 + α^5·x^2 + α^5·x + α^6
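The product above can be checked mechanically in the field of Figure 2.5.1 (x^3 + x + 1, so α^3 = α + 1):

```python
# Multiply out G(x) = (x + a^0)(x + a^1)(x + a^2)(x + a^3) in GF(8).
# Coefficients are kept high degree first.

EXP = [1]
for _ in range(6):
    v = EXP[-1] << 1
    if v & 0b1000:
        v ^= 0b1011                     # reduce by x^3 + x + 1
    EXP.append(v)
LOG = {v: i for i, v in enumerate(EXP)}

def mul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

G = [1]
for i in range(4):                      # multiply by (x + alpha^i)
    new = [0] * (len(G) + 1)
    for k, c in enumerate(G):
        new[k] ^= c                     # c * x
        new[k + 1] ^= mul(c, EXP[i])    # c * alpha^i
    G = new
# G is now [1, a^2, a^5, a^5, a^6]
```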

VERIFYING THE CODE GENERATOR POLYNOMIAL
The code generator polynomial evaluates to zero at each of its roots. This fact
can be used to prove the computations used in generating it:

    G(α^1) = α^0·(α^1)^4 + α^2·(α^1)^3 + α^5·(α^1)^2 + α^5·α^1 + α^6 = 0

    G(α^2) = α^0·(α^2)^4 + α^2·(α^2)^3 + α^5·(α^2)^2 + α^5·α^2 + α^6 = 0

    G(α^3) = α^0·(α^3)^4 + α^2·(α^3)^3 + α^5·(α^3)^2 + α^5·α^3 + α^6 = 0

FINITE FIELD CONSTANT MULTIPLIERS

To design a constant multiplier to implement

    y = α^n·x

in GF(2^w), fill in the diagram below with the binary representations of
α^n, α^(n+1), ..., α^(n+w-1).

[Diagram: rows x(w-1) down to x0 hold the binary representations of α^(n+w-1) down to α^n; the column outputs are y(w-1) down to y0.]

Then construct parity trees down columns. The parity tree for a given y bit
includes each x bit with a '1' at the intersection of the corresponding column and row.

Example: Using the field of Figure 2.5.1, construct a constant multiplier to compute:

    y = α^3·x

    x2:  α^5 =  1  1  1
    x1:  α^4 =  1  1  0
    x0:  α^3 =  0  1  1
               y2 y1 y0

    y  = α^3·x
    y2 = x1 + x2
    y1 = x0 + x1 + x2
    y0 = x0 + x2
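The same column-wise construction can be expressed in code (again using the x^3 + x + 1 field assumed for Figure 2.5.1); the helper names are ours.

```python
# Deriving the parity trees for y = alpha^3 * x in GF(8): row i of the
# matrix is the binary representation of alpha^(3+i).

EXP = [1]
for _ in range(6):
    v = EXP[-1] << 1
    if v & 0b1000:
        v ^= 0b1011                     # reduce by x^3 + x + 1
    EXP.append(v)
LOG = {v: i for i, v in enumerate(EXP)}

def tree(out_bit):
    """Input bits XORed together to form output bit out_bit of y."""
    return {i for i in range(3) if (EXP[3 + i] >> out_bit) & 1}

def apply_trees(x):
    """Evaluate the parity trees on a 3-bit input x."""
    y = 0
    for b in range(3):
        parity = 0
        for i in tree(b):
            parity ^= (x >> i) & 1
        y |= parity << b
    return y

def ref_mul_a3(x):                      # reference: multiply via log tables
    return 0 if x == 0 else EXP[(LOG[x] + 3) % 7]
```

The derived trees match the worked example: y2 = x1 + x2, y1 = x0 + x1 + x2, y0 = x0 + x2.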

ENCODING OF REED-SOLOMON CODES
Encoding is typically, but not always, performed using an internal-XOR shift register with symbol-wide data paths, implementing the form of generator polynomial shown
above. Other encoding alternatives will be discussed later in this section.
The following circuit computes C(x) for our example field and code in symbol-serial
fashion:

    C(x) = x^m·D(x) + [x^m·D(x)] MOD G(x)

[Figure: internal-XOR shift register with constant multipliers for the coefficients of G(x); an AND gate and a MUX select D(x) during data time and the register contents during redundancy time.]

The circuit above multiplies the data polynomial D(x) by x^m and divides by G(x).
All paths are symbol-wide (three bits for this example). The AND gate and the MUX
are fed by a signal which is low during data time and high during redundancy time.
The circuit below performs the same function.

[Figure: an equivalent encoder circuit.]

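A behavioral sketch of the encoder, written as polynomial long division rather than a shift register (the helper names are ours; GF(8) from x^3 + x + 1):

```python
# C(x) = x^4*D(x) + [x^4*D(x)] MOD G(x) over GF(8), with
# G(x) = x^4 + a^2*x^3 + a^5*x^2 + a^5*x + a^6.  Symbols are field elements.

EXP = [1]
for _ in range(6):
    v = EXP[-1] << 1
    if v & 0b1000:
        v ^= 0b1011                     # reduce by x^3 + x + 1
    EXP.append(v)
LOG = {v: i for i, v in enumerate(EXP)}

def mul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

G = [1, EXP[2], EXP[5], EXP[5], EXP[6]]     # coefficients, high degree first

def encode(data):
    """Append the four check symbols: remainder of x^4*D(x) divided by G(x)."""
    rem = list(data) + [0, 0, 0, 0]         # x^4 * D(x)
    for i in range(len(data)):              # polynomial long division
        q = rem[i]
        for j in range(5):
            rem[i + j] ^= mul(q, G[j])
    return list(data) + rem[-4:]

def poly_eval(poly, x):                     # Horner evaluation
    acc = 0
    for c in poly:
        acc = mul(acc, x) ^ c
    return acc

codeword = encode([EXP[1], EXP[4], EXP[6]])   # three arbitrary data symbols
```

A valid codeword evaluates to zero at every root α^0 through α^3 of G(x), which is what the syndrome circuits below check.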

DECODING OF REED-SOLOMON CODES

Decoding generally requires five steps:

1. Compute syndromes.
2. Calculate the coefficients of the error locator polynomial.
3. Find the roots of the error locator polynomial. The logs of the roots are the
   error locations.
4. Calculate the error values.
5. Correct the symbols in error.

The following circuit computes the syndromes for our example field and code in
symbol-serial fashion:

    Si = C'(x) MOD gi(x) = C'(x) MOD (x + α^i)

[Figure: four symbol-wide registers S0 through S3, each closed through an α^i constant multiplier and clocked with C'(x).]

This circuit computes the syndromes by dividing the received codeword C'(x) by
the factors of G(x). All paths are symbol-wide (three bits for this example). After all
data and redundancy symbols have been clocked, the registers contain the syndromes Si.


COMPUTING COEFFICIENTS OF ERROR LOCATOR POLYNOMIALS
Recall the syndrome equations derived above:

    Si = E1·α^(i·L1) + E2·α^(i·L2) + ...

These form a system of nonlinear equations with error values and error location
vectors as unknowns. More easily solved is the error locator polynomial, which contains
only error location information. Error locator polynomials have the following form:

     e
     ∏  (x + α^Li)  =  x^e + σ1·x^(e-1) + ... + σe-1·x + σe  =  0
    i=1

where e is the number of errors. The coefficients of the error locator polynomial are
related to the syndromes by the following system of linear equations, called Newton's
generalized identities:

    S0·σe + S1·σe-1 + ... + Se-1·σ1 = Se
    S1·σe + S2·σe-1 + ... + Se·σ1 = Se+1
     ...
    Sm-e-1·σe + Sm-e·σe-1 + ... + Sm-2·σ1 = Sm-1

where m is the number of syndromes.
When computation of the error location polynomial is begun, the number of errors,
and thus the degree of the error locator polynomial, is unknown. One method of computing coefficients of the error locator polynomial first assumes a single error. If this
assumption is found to be incorrect, the number of assumed errors is increased to two,
and so on. This method is fastest for the least number of errors. This is desirable
because in most cases few errors are more likely than many.
For error locator polynomials of low degree, the coefficients σi are computed by
directly solving the above system of equations using determinants. Examples are worked
out below.
For error locator polynomials of high degree, the coefficients σi are computed by
solving the system of equations above using Berlekamp's iterative algorithm. One version of the iterative algorithm is outlined, and an example is worked out, below.


ITERATIVE ALGORITHM
(0) Initialize a table as shown below; the parenthesized superscript on σ(x) is a
counter and not an exponent.

    n    σ(n)(x)    dn    Ln    n-Ln
    -1   1          1     0     -1
     0   1          S0    0      0

The table will be completed in the steps below. Initialize n to zero.
(1) If dn = 0 then set Ln+1 = Ln, set σ(n+1)(x) = σ(n)(x), and go to Step (3).
(2) Find a row k where k ...

[Steps (2) through (4) and the intervening pages are incomplete in this reproduction.]

=> TWO ERRORS

COMPUTE ERROR LOCATIONS

    σ1 = (S0·S3 + S1·S2)/(S0·S2 + (S1)^2) = α^3

    σ2 = ((S2)^2 + S1·S3)/(S0·S2 + (S1)^2) = α^0

    C = σ2/(σ1)^2 = α^1

    Y1 = α^2                          Y2 = α^6
    X1 = σ1·Y1 = α^3·α^2 = α^5        X2 = σ1·Y2 = α^3·α^6 = α^2
    L1 = LOG(X1) = 5                  L2 = LOG(X2) = 2

COMPUTE ERROR VALUES

ITERATIVE ALGORITHM EXAMPLE
Use the iterative algorithm to generate σ(x) for the case above.

TABLE GENERATED BY ITERATIVE ALGORITHM

    n    σ(n)(x)               dn     Ln    n-Ln
    -1   1                     1      0     -1
     0   1                     α^4    0      0
     1   x + α^4               0      1      0
     2   x + α^4               α^5    1      1
     3   x^2 + α^4·x + α^1     α^4    2      1
     4   x^2 + α^3·x + α^0            2      2

TRACE OF ITERATIVE ALGORITHM

n=0  (1) d0=α^4 <> 0 => Go to (2).
     (2) k = -1.  d0/d-1 = α^4/1 = α^4.  L1 = MAX[0,0+0-(-1)] = 1.
         σ(1)(x) = x^1·(1) + α^4·(1) = x + α^4.
     (3) (n+1):m => 1<4 => continue.
     (4) d1 = σ0·S1 + σ1·S0 = 1·α^1 + α^4·α^4 = 0.
         n = 0+1 = 1.  Go to (1).

n=1  (1) d1=0 => L2 = L1 = 1.  σ(2)(x) = σ(1)(x) = x + α^4.  Go to (3).
     (3) (n+1):m => 2<4 => continue.
     (4) d2 = σ0·S2 + σ1·S1 = 1·0 + α^4·α^1 = α^5.
         n = 1+1 = 2.  Go to (1).

n=2  (1) d2=α^5 <> 0 => Go to (2).
     (2) k = 0.  d2/d0 = α^5/α^4 = α^1.  L3 = MAX[1,0+2-0] = 2.
         σ(3)(x) = x^1·(x + α^4) + α^1·(1) = x^2 + α^4·x + α^1.
     (3) (n+1):m => 3<4 => continue.
     (4) d3 = σ0·S3 + σ1·S2 + σ2·S1 = 1·α^1 + α^4·0 + α^1·α^1 = α^4.
         n = 2+1 = 3.  Go to (1).

n=3  (1) d3=α^4 <> 0 => Go to (2).
     (2) k = 2.  d3/d2 = α^4/α^5 = α^6.  L4 = MAX[2,1+3-2] = 2.
         σ(4)(x) = x^0·(x^2 + α^4·x + α^1) + α^6·(x + α^4)
                 = x^2 + α^3·x + α^0.
     (3) (n+1):m => 4=4 => stop.
         σ(x) = σ(4)(x) = x^2 + α^3·x + α^0; same as case above.
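One software rendition of the iterative algorithm, checked against the trace above: syndromes S0..S3 = α^4, α^1, 0, α^1 should yield σ(x) = x^2 + α^3·x + α^0. This is a sketch of the general idea, not a transcription of the tabular method.

```python
# Berlekamp's iterative algorithm over GF(8) (x^3 + x + 1).

EXP = [1]
for _ in range(6):
    v = EXP[-1] << 1
    if v & 0b1000:
        v ^= 0b1011
    EXP.append(v)
LOG = {v: i for i, v in enumerate(EXP)}

def mul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

def inv(a):
    return EXP[-LOG[a] % 7]

def berlekamp(S):
    """Return [1, sigma1, ..., sigmaL] such that every window of the
    syndrome sequence satisfies sum_k sigma_k * S_(n-k) = 0."""
    sigma, B = [1], [1]                 # current and saved connection polys
    L, shift, b = 0, 1, 1               # b = last nonzero discrepancy
    for n in range(len(S)):
        d = 0                           # discrepancy d_n
        for k in range(min(len(sigma), n + 1)):
            d ^= mul(sigma[k], S[n - k])
        if d == 0:
            shift += 1
            continue
        coef, prev = mul(d, inv(b)), list(sigma)
        sigma += [0] * (len(B) + shift - len(sigma))
        for k, c in enumerate(B):       # sigma += (d/b) * x^shift * B
            sigma[k + shift] ^= mul(coef, c)
        if 2 * L <= n:                  # degree (length) change
            L, B, b, shift = n + 1 - L, prev, d, 1
        else:
            shift += 1
    return sigma
```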

UNCORRECTABLE ERROR EXAMPLE

    C(x)  = α^2·x^6 + α^1·x^5 + α^5·x^4 + 0·x^3 + α^3·x^2 + α^1·x + 0
    E(x)  =           α^2·x^5 + α^2·x^4 +         α^1·x^2
    C'(x) = α^2·x^6 + α^4·x^5 + α^3·x^4 + 0·x^3 + α^0·x^2 + α^1·x + 0

COMPUTE SYNDROMES

[Table: shift-register contents as the coefficients of C'(x) are clocked; the final values are the syndromes:]

    S0 = α^1    S1 = α^5    S2 = α^3    S3 = α^3

COMPUTE σ

    σ = S1/S0 = α^5/α^1 = α^4

VERIFY NEWTON'S IDENTITIES

    S1·σ ?= S2
    α^5·α^4 = α^((5+4) MOD 7) = α^2 ≠ α^3   => TWO ERRORS

COMPUTE ERROR LOCATIONS

    σ1 = (S0·S3 + S1·S2)/(S0·S2 + (S1)^2) = α^3

    σ2 = ((S2)^2 + S1·S3)/(S0·S2 + (S1)^2) = α^6

    C = σ2/(σ1)^2 = α^6/α^6 = α^0

The equation y^2 + y + α^0 = 0 has no roots in GF(8), so there is no TBLA entry for
C = α^0 and the error is flagged uncorrectable.
- 175 -
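The syndrome computation and the uncorrectability test in this example can be checked mechanically. The sketch below assumes the same GF(2^3) representation (p(x) = x^3 + x + 1) and simply searches all eight field elements for roots of y^2 + y + c; the names are illustrative.

```python
# Reproduce the syndrome table and the uncorrectable verdict.
# GF(2^3), p(x) = x^3 + x + 1; a^i is EXP[i].
EXP = [1, 2, 4, 3, 6, 7, 5]
LOG = {v: i for i, v in enumerate(EXP)}

def mul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

def div(a, b):
    return 0 if a == 0 else EXP[(LOG[a] - LOG[b]) % 7]

def syndrome(poly, alpha):
    s = 0
    for coeff in poly:             # Horner's rule, one row of the table per step
        s = mul(s, alpha) ^ coeff
    return s

# C'(x) = a^2 x^6 + a^4 x^5 + a^3 x^4 + 0 x^3 + a^0 x^2 + a^1 x + 0
cprime = [EXP[2], EXP[4], EXP[3], 0, EXP[0], EXP[1], 0]
S = [syndrome(cprime, EXP[i]) for i in range(4)]        # a^1, a^5, a^3, a^3

den = mul(S[0], S[2]) ^ mul(S[1], S[1])
sigma1 = div(mul(S[0], S[3]) ^ mul(S[1], S[2]), den)    # a^3
sigma2 = div(mul(S[2], S[2]) ^ mul(S[1], S[3]), den)    # a^6
c = div(sigma2, mul(sigma1, sigma1))                    # a^0
roots = [y for y in range(8) if mul(y, y) ^ y ^ c == 0]
print(roots)   # [] -- no roots, so the error is uncorrectable
```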

MISCORRECTION EXAMPLE

C(x)  = α^2·x^6 + α^1·x^5 + α^5·x^4 + 0·x^3 + α^3·x^2 + α^1·x + 0
E(x)  =           α^2·x^5 +                   α^1·x^2 +         α^4
C'(x) = α^2·x^6 + α^4·x^5 + α^5·x^4 + 0·x^3 + α^0·x^2 + α^1·x + α^4

COMPUTE SYNDROMES

    C'(x)     S0     S1     S2     S3
    INIT       0      0      0      0
    α^2       α^2    α^2    α^2    α^2
    α^4       α^1    α^6     0     α^0
    α^5       α^6    α^4    α^5    α^2
     0        α^6    α^5    α^0    α^5
    α^0       α^2    α^2    α^6    α^3
    α^1       α^4    α^0     0     α^5
    α^4        0     α^2    α^4    α^2

COMPUTE a

    S0 = 0  =>  TWO ERRORS

COMPUTE ERROR LOCATIONS

    σ1 = (S0·S3 + S1·S2) / (S0·S2 + (S1)^2) = α^6/α^4 = α^2

    σ2 = ((S2)^2 + S1·S3) / (S0·S2 + (S1)^2) = α^2/α^4 = α^5

    c = σ2/(σ1)^2 = α^5/α^4 = α^1

    Y1 = α^2    X1 = σ1·Y1 = α^2·α^2 = α^4    L1 = LOG(X1) = 4
    Y2 = α^6    X2 = σ1·Y2 = α^2·α^6 = α^1    L2 = LOG(X2) = 1

COMPUTE ERROR VALUES

    E1 = (α^L2·S0 + S1) / (α^L1 + α^L2) = (α^1·0 + α^2)/(α^4 + α^1) = α^2/α^2 = α^0

    E2 = S0 + E1 = 0 + α^0 = α^0

The decoder "corrects" locations 4 and 1 with value α^0, but the actual error
E(x) affected locations 5, 2, and 0: the triple error has been miscorrected.
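The complete two-error decode of this example can be traced with a short program. The sketch below follows the same steps (syndromes, σ1/σ2, root search, error values); the field representation and names are the same illustrative ones used earlier, not the book's.

```python
# Two-error decode of the miscorrected word. GF(2^3), p(x) = x^3 + x + 1.
EXP = [1, 2, 4, 3, 6, 7, 5]
LOG = {v: i for i, v in enumerate(EXP)}

def mul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

def div(a, b):
    return 0 if a == 0 else EXP[(LOG[a] - LOG[b]) % 7]

# C'(x) = a^2 x^6 + a^4 x^5 + a^5 x^4 + 0 x^3 + a^0 x^2 + a^1 x + a^4
cprime = [EXP[2], EXP[4], EXP[5], 0, EXP[0], EXP[1], EXP[4]]
S = []
for i in range(4):                        # syndromes by Horner's rule
    s = 0
    for coeff in cprime:
        s = mul(s, EXP[i]) ^ coeff
    S.append(s)                           # S0 = 0, S1 = a^2, S2 = a^4, S3 = a^2

den = mul(S[0], S[2]) ^ mul(S[1], S[1])
sigma1 = div(mul(S[0], S[3]) ^ mul(S[1], S[2]), den)   # a^2
sigma2 = div(mul(S[2], S[2]) ^ mul(S[1], S[3]), den)   # a^5
c = div(sigma2, mul(sigma1, sigma1))                   # a^1
ys = [y for y in EXP if mul(y, y) ^ y ^ c == 0]        # roots a^2, a^6
locs = sorted(LOG[mul(sigma1, y)] for y in ys)         # error locations 1 and 4
e1 = div(mul(EXP[locs[0]], S[0]) ^ S[1], EXP[locs[1]] ^ EXP[locs[0]])
e2 = S[0] ^ e1
print(locs, e1, e2)   # [1, 4] 1 1 -- both "corrections" have value a^0
```

The decode succeeds arithmetically, which is exactly the hazard: the true error polynomial had three bursts, at locations 5, 2, and 0.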

REFERENCE TABLES

ADDITION TABLE

     +  |   0   α^0  α^1  α^2  α^3  α^4  α^5  α^6
    ----+-----------------------------------------
     0  |   0   α^0  α^1  α^2  α^3  α^4  α^5  α^6
    α^0 |  α^0   0   α^3  α^6  α^1  α^5  α^4  α^2
    α^1 |  α^1  α^3   0   α^4  α^0  α^2  α^6  α^5
    α^2 |  α^2  α^6  α^4   0   α^5  α^1  α^3  α^0
    α^3 |  α^3  α^1  α^0  α^5   0   α^6  α^2  α^4
    α^4 |  α^4  α^5  α^2  α^1  α^6   0   α^0  α^3
    α^5 |  α^5  α^4  α^6  α^3  α^2  α^0   0   α^1
    α^6 |  α^6  α^2  α^5  α^0  α^4  α^3  α^1   0

MULTIPLICATION TABLE

     ·  |   0   α^0  α^1  α^2  α^3  α^4  α^5  α^6
    ----+-----------------------------------------
     0  |   0    0    0    0    0    0    0    0
    α^0 |   0   α^0  α^1  α^2  α^3  α^4  α^5  α^6
    α^1 |   0   α^1  α^2  α^3  α^4  α^5  α^6  α^0
    α^2 |   0   α^2  α^3  α^4  α^5  α^6  α^0  α^1
    α^3 |   0   α^3  α^4  α^5  α^6  α^0  α^1  α^2
    α^4 |   0   α^4  α^5  α^6  α^0  α^1  α^2  α^3
    α^5 |   0   α^5  α^6  α^0  α^1  α^2  α^3  α^4
    α^6 |   0   α^6  α^0  α^1  α^2  α^3  α^4  α^5

BINARY REPRESENTATION (p(x) = x^3 + x + 1)

     0  = 000      α^3 = 011
    α^0 = 001      α^4 = 110
    α^1 = 010      α^5 = 111
    α^2 = 100      α^6 = 101

ROOT TABLE (roots of y^2 + y + c)

     c     ROOTS
     0     0, α^0
    α^0    --
    α^1    α^2, α^6
    α^2    α^4, α^5
    α^3    --
    α^4    α^1, α^3
    α^5    --
    α^6    --
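The reference tables can be regenerated directly from the field polynomial. A short sketch, assuming p(x) = x^3 + x + 1 as used throughout these examples:

```python
# Regenerate the GF(2^3) reference tables from p(x) = x^3 + x + 1.
EXP = [1]
for _ in range(6):
    v = EXP[-1] << 1
    EXP.append(v ^ 0b1011 if v & 0b1000 else v)

def name(v):
    return " 0" if v == 0 else "a%d" % EXP.index(v)

elems = [0] + EXP
print(" + " + " ".join(name(e) for e in elems))
for x in elems:                        # addition in GF(2^m) is bitwise XOR
    print(name(x) + " " + " ".join(name(x ^ y) for y in elems))

def sq(y):                             # y^2 in GF(2^3)
    return 0 if y == 0 else EXP[(2 * EXP.index(y)) % 7]

print("\nroots of y^2 + y + c:")
for i in range(7):                     # c = a^0 .. a^6
    roots = [name(y) for y in elems if sq(y) ^ y ^ EXP[i] == 0]
    print("c = a%d: %s" % (i, ", ".join(roots) if roots else "--"))
```

Only half the values of c have roots; the other half (here α^0, α^3, α^5, α^6) correspond to uncorrectable two-error syndromes, as in the earlier example.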

AN INTUITIVE DISCUSSION OF THE SINGLE-ERROR CASE

The following discussion provides an intuitive description of how the Reed-Solomon
code single-error case works. A particular example is used in order to make the discussion more understandable. Finite field theory is intentionally avoided.
Consider a single-error correcting Reed-Solomon code operating on 8-bit symbols
and employing, on read, the binary polynomials:
    P0 = x^8 + 1

    P1 = x^8 + x^6 + x^5 + x^4 + 1

The correction algorithm requires residues of a function of the data, f(DATA),
modulo P0 and P1 where:

    for P0,  f(DATA) = SUM[i=0 to m-1] Di(x)

    for P1,  f(DATA) = SUM[i=0 to m-1] x^i·Di(x)

and m is the number of data bytes. Di represents the individual data byte polynomials.
D0 is the lowest order data byte (last byte to be transmitted or received).
The residues are computed by hardware implementing the logical circuits shown
below. These logical circuits are clocked once per byte.

[Figure: P0 and P1 shift register circuits, clocked once per byte]

The shift register for P0 computes an XOR sum of all data bytes including the
check bytes. Since P1 is primitive, its shift register generates a maximum-length
sequence (255 states). When the P1 shift register is non-zero, but its input is
zero, each clock sets it to the next state of its sequence.

CORRECTION ALGORITHM

Consider what happens when the data record is all zeros and a byte in error is
received.

Both shift registers will remain zero until the byte in error arrives. The error
byte is XOR'd into the P0 and P1 shift registers. Since the P0 shift register preserves
its current value as long as zeros are received, the error pattern remains in it until the
end of record. XOR'ing the error byte into the P1 shift register places the shift register at a particular state in its sequence. As each new byte of zeros is received the
P1 shift register is clocked along its sequence, one state per byte.
The terminal states of the P0 and P1 shift registers are sufficient for determining
displacement. To find displacement, it is necessary to determine the number of shifts
of the P1 shift register that occurred between the occurrence of the error byte and the
end of record.

To better understand the correction algorithm, consider a sequence of 255 states as
represented by the circle in the drawing on the following page. Let S1 be the ending
state of the P1 shift register and let S0 be the ending state of the P0 shift register (S0
is also the initial state of the P1 shift register). Let Sr be the reference state
'00000001'. The number of states between S0 and S1 must be determined. There are several
ways to do this. In this description a table method is used.

Refer again to the diagram on the following page. What we need to know is the
number of states between S0 and S1. We construct a table. The table is addressed by
S0 and S1, and contains the distance along the P1 sequence between the reference state
and any arbitrary state Sx.

First, S0 is used to address the table to fetch distance d1. Next, S1 is used to
address the table to fetch distance d2. The desired distance (d) between S0
and S1 is computed as follows:

    d = d2 - d1;  if d < 0 then d = d + 255
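A minimal sketch of the table method, using the P1 polynomial given above. The table maps each of the 255 register states to its distance from the reference state; the demonstration values (error byte, displacement) are assumptions for illustration, not from the book.

```python
# Distance table for the maximum-length sequence of
# P1 = x^8 + x^6 + x^5 + x^4 + 1 (0x171).
POLY = 0x171

def step(state):
    # one byte-time of the P1 register with zero input: multiply by x mod P1
    state <<= 1
    return state ^ POLY if state & 0x100 else state

DIST = {}                      # state -> distance from reference state 0x01
s = 0x01
for i in range(255):
    DIST[s] = i
    s = step(s)
assert len(DIST) == 255        # P1 is primitive: all 255 nonzero states appear

def displacement(s0, s1):
    """Bytes from the error byte to end of record, given the error pattern
    s0 (the ending P0 state) and the ending P1 state s1."""
    return (DIST[s1] - DIST[s0]) % 255

# Inject a hypothetical error byte 37 bytes before end of record
err, s1 = 0x5A, 0x5A
for _ in range(37):
    s1 = step(s1)
print(displacement(err, s1))   # 37
```

In hardware the same lookup is typically done with a ROM or a software log table; the modulo-255 subtraction implements the "if d < 0 then d = d + 255" step above.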

PROBABILITY OF TWO OR MORE BIT ERRORS IN A BLOCK OF n BITS

    SUM[r=2 to n] Pr = 1 - P0 - P1

DECODED ERROR PROBABILITIES FOR A BIT-CORRECTING CODE

    n  = Block length in bits
    e  = Number of bits corrected per block
       = 0 for an error-detection-only code
    Pe = Raw bit error probability (units: bit errors per bit)

    BLOCK ERRORS / BLOCK  ≈  SUM[i=e+1 to n] C(n,i)*(Pe)^i*(1-Pe)^(n-i)

    BLOCK ERRORS / BIT    ≈  (1/n) * SUM[i=e+1 to n] C(n,i)*(Pe)^i*(1-Pe)^(n-i)

    BIT ERRORS / BLOCK    ≈  SUM[i=e+1 to n] (i+e)*C(n,i)*(Pe)^i*(1-Pe)^(n-i)

    BIT ERRORS / BIT      ≈  (1/n) * SUM[i=e+1 to n] (i+e)*C(n,i)*(Pe)^i*(1-Pe)^(n-i)

where C(n,i) denotes the binomial coefficient (n choose i).

BURST-ERROR PROBABILITIES

Let Pe be the raw burst-error rate, defined as the ratio of burst error occurrences
to total bits transferred, with units of burst errors per bit. The equations below give
the probabilities for various numbers of burst errors occurring in a block of n bits. It
is assumed that burst length is short compared to block length.

PROBABILITY OF EXACTLY r BURST ERRORS IN A BLOCK OF n BITS

    Pr = C(n,r)*(Pe)^r*(1-Pe)^(n-r)

PROBABILITY OF ZERO BURST ERRORS IN A BLOCK OF n BITS

    P0 = (1-Pe)^n

PROBABILITY OF ONE BURST ERROR IN A BLOCK OF n BITS

    P1 = n*Pe*(1-Pe)^(n-1)

PROBABILITY OF AT LEAST ONE BURST ERROR IN A BLOCK OF n BITS

    SUM[r=1 to n] Pr = 1 - P0

PROBABILITY OF TWO OR MORE BURST ERRORS IN A BLOCK OF n BITS

    SUM[r=2 to n] Pr = 1 - P0 - P1

DECODED ERROR PROBABILITIES FOR A BURST-CORRECTING CODE

    n  = Block length in bits
    e  = Number of bursts corrected per block
       = 0 for an error-detection-only code
    Pe = Raw burst error probability (units: burst errors per bit)

    BLOCK ERRORS / BLOCK  ≈  SUM[i=e+1 to n] C(n,i)*(Pe)^i*(1-Pe)^(n-i)

    BLOCK ERRORS / BIT    ≈  (1/n) * SUM[i=e+1 to n] C(n,i)*(Pe)^i*(1-Pe)^(n-i)

    BURST ERRORS / BLOCK  ≈  SUM[i=e+1 to n] (i+e)*C(n,i)*(Pe)^i*(1-Pe)^(n-i)

    BURST ERRORS / BIT    ≈  (1/n) * SUM[i=e+1 to n] (i+e)*C(n,i)*(Pe)^i*(1-Pe)^(n-i)

SYMBOL-ERROR PROBABILITIES

Let Pe be the raw symbol-error rate, defined as the ratio of symbol error occurrences to total symbols transferred, with units of symbol errors per symbol. The equations below give probabilities for various numbers of symbol errors occurring in a block
of n symbols.

PROBABILITY OF EXACTLY r SYMBOL ERRORS IN A BLOCK OF n SYMBOLS

    Pr = C(n,r)*(Pe)^r*(1-Pe)^(n-r)

PROBABILITY OF ZERO SYMBOL ERRORS IN A BLOCK OF n SYMBOLS

    P0 = (1-Pe)^n

PROBABILITY OF ONE SYMBOL ERROR IN A BLOCK OF n SYMBOLS

    P1 = n*Pe*(1-Pe)^(n-1)

PROBABILITY OF AT LEAST ONE SYMBOL ERROR IN A BLOCK OF n SYMBOLS

    SUM[r=1 to n] Pr = 1 - P0

PROBABILITY OF TWO OR MORE SYMBOL ERRORS IN A BLOCK OF n SYMBOLS

    SUM[r=2 to n] Pr = 1 - P0 - P1

DECODED ERROR PROBABILITIES FOR A SYMBOL-CORRECTING CODE

    n  = Block length in symbols
    e  = Number of symbols corrected per block
       = 0 for an error-detection-only code
    Pe = Raw symbol error probability (units: symbol errors per symbol)
    w  = Symbol width in bits

    BLOCK ERRORS / BLOCK    ≈  SUM[i=e+1 to n] C(n,i)*(Pe)^i*(1-Pe)^(n-i)

    BLOCK ERRORS / SYMBOL   ≈  (1/n) * SUM[i=e+1 to n] C(n,i)*(Pe)^i*(1-Pe)^(n-i)

    BLOCK ERRORS / BIT      ≈  (1/(w*n)) * SUM[i=e+1 to n] C(n,i)*(Pe)^i*(1-Pe)^(n-i)

    SYMBOL ERRORS / BLOCK   ≈  SUM[i=e+1 to n] (i+e)*C(n,i)*(Pe)^i*(1-Pe)^(n-i)

    SYMBOL ERRORS / SYMBOL  ≈  (1/n) * SUM[i=e+1 to n] (i+e)*C(n,i)*(Pe)^i*(1-Pe)^(n-i)

    SYMBOL ERRORS / BIT     ≈  (1/(w*n)) * SUM[i=e+1 to n] (i+e)*C(n,i)*(Pe)^i*(1-Pe)^(n-i)

    * BIT ERRORS / BIT      ≈  (1/(2*n)) * SUM[i=e+1 to n] (i+e)*C(n,i)*(Pe)^i*(1-Pe)^(n-i)

    * Assuming a symbol error results in w/2 bit errors.

DECODED ERROR PROBABILITIES FOR A SYMBOL-CORRECTING CODE
WHEN ERASURE POINTERS ARE AVAILABLE FOR SYMBOL ERRORS

    n  = Block length in symbols
    e  = Number of symbols corrected per block
       = 0 for an error-detection-only code
    Pe = Raw symbol error probability (units: symbol errors per symbol)
    w  = Symbol width in bits

    BLOCK ERRORS / BLOCK    ≈  SUM[i=e+1 to n] C(n,i)*(Pe)^i*(1-Pe)^(n-i)

    BLOCK ERRORS / SYMBOL   ≈  (1/n) * SUM[i=e+1 to n] C(n,i)*(Pe)^i*(1-Pe)^(n-i)

    BLOCK ERRORS / BIT      ≈  (1/(w*n)) * SUM[i=e+1 to n] C(n,i)*(Pe)^i*(1-Pe)^(n-i)

    SYMBOL ERRORS / BLOCK   ≈  SUM[i=e+1 to n] i*C(n,i)*(Pe)^i*(1-Pe)^(n-i)

    SYMBOL ERRORS / SYMBOL  ≈  (1/n) * SUM[i=e+1 to n] i*C(n,i)*(Pe)^i*(1-Pe)^(n-i)

    SYMBOL ERRORS / BIT     ≈  (1/(w*n)) * SUM[i=e+1 to n] i*C(n,i)*(Pe)^i*(1-Pe)^(n-i)

    * BIT ERRORS / BIT      ≈  (1/(2*n)) * SUM[i=e+1 to n] i*C(n,i)*(Pe)^i*(1-Pe)^(n-i)

    * Assuming a symbol error results in w/2 bit errors.
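The difference between the pointer and no-pointer cases is just the (i+e) versus i weighting, which a short sketch makes concrete; the block parameters below are illustrative only, not from the book.

```python
# Decoded symbol errors per block, with and without erasure pointers.
# Without pointers a decoder that miscorrects can add up to e more bad
# symbols, hence the (i + e) weight; with pointers only the i uncorrected
# symbols remain, hence the i weight.
from math import comb

def symbol_errors_per_block(n, e, pe, erasure_pointers=False):
    w = (lambda i: i) if erasure_pointers else (lambda i: i + e)
    return sum(w(i) * comb(n, i) * pe**i * (1 - pe)**(n - i)
               for i in range(e + 1, n + 1))

n, e, pe = 255, 8, 1e-3        # illustrative Reed-Solomon-like block
print(symbol_errors_per_block(n, e, pe))
print(symbol_errors_per_block(n, e, pe, erasure_pointers=True))
```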

4.3 DATA RECOVERABILITY
Error correction is used in storage device subsystems to improve data recoverability. There are other techniques that improve data recoverability as well. Some of
these techniques are discussed in this section.
System manufacturers may want to
include data recovery techniques on their list of criteria for comparing subsystems.

DATA RECOVERY TECHNIQUES
Some storage device subsystems attempt data recovery with the techniques below
when ECC is unsuccessful.
a.
b.
c.
d.
e.
f.
g.

Head offset.
Detection window shift.
VFO bandwidth change.
Detector threshold change.
Rezero and reread.
Remove and reinsert media then reread.
Move media to another device and reread.

DATA SEPARATOR
The design of the data separator will have a significant influence on data recoverability.
Some devices have built-in data separators.
Other devices require a data
separator in the controller.
Controller manufacturers should consult their device vendors for recommendations
when designing a controller for devices which require external data separators.
Circuit layout and parts selection are very important for data separators. Even if
one has a circuit recommended by a drive vendor, it may be advisable to use a highly
experienced read/write consultant for the detailed design and layout.

WRITE VERIFY
Another technique that can improve the probability of data recovery is write verify
(read back after write). Write verify can be very effective for devices using magnetic
media due to the nature of defects in this media. One may write/read over a defect
hundreds of times without an error. An error will result only when the write occurs
with the proper phasing across the defect. Once the error occurs, it may then have a
high incidence rate until the record is rewritten. Hundreds of writes may be required
before the error occurs again.

When an error is detected by write verify, the record is rewritten or retired or
defect skipping is applied. This reserves error correction for errors that develop with
time or usage. Since it affects performance, write verify should be optional.
DEFECT SKIPPING

Defect-skipping techniques include alternate-sector assignment, header move functions, and defect skipping within a data field. These techniques are used to handle
media defects detected during formatting and persistent errors detected on read.
Under alternate-sector assignment, a defective sector may be retired and logically
replaced with a sector physically located elsewhere. Space for alternate sector(s) may
be reserved on each track or cylinder, or one or more tracks or cylinders may be
reserved exclusively for alternate sectors.
The header contains an alternate-sector
assignment field; when a sector is retired, this field in its header is written to point to
the alternate sector which is to logically replace it.
An assigned alternate sector
typically has a field which points back to the retired sector that it is replacing.
When a header-move function is implemented, a defect falling in a header is avoided by moving the header further along the track. Space may be allotted in the track
format to allow a normal-length data field to follow a moved header, or the moved
header may contain a field pointing to an assigned alternate sector. In the latter case,
since the data field following a moved header is not used, it need not be of normal
length; it may or may not actually be written, depending on implementation alternatives.
Defect skipping within a data field is used in some high-capacity magnetic disk
subsystems employing variable-length records as a means of handling known defects.
Each record has a count field which records information on the locations of defects
within the track. Writing is interrupted when the current byte displacement from the
index corresponds to the starting offset of a skip as recorded in the count field. When
the recording head passes beyond the known length of the defect, a preamble pattern
and sync mark are written, then writing of data re-commences. Some IBM devices allow
up to seven defects per track to be skipped in this manner.
Defect skipping within a data field is also used on magnetic devices employing
fixed-length records. In this case, each sector header records displacement information
for defects in that sector. Some implementations write a preamble pattern and sync
mark at the end of a skip as discussed above for variable-length records while others
do not. The former practice handles defects which can cause loss of sync. If a preamble pattern and sync mark are not written, some other method must be used to map
out defects which can cause loss of sync.

Devices employing defect skipping within a data field must allocate extra media
area for each sector, track, or cylinder, depending on whether or not embedded servoing
is used and on other implementation choices. In devices using embedded servoing, the
space allotted for each sector must allow room for the maximum-length defect(s) which
may be skipped. In devices not using embedded servo techniques, the track format need
accommodate only some maximum number of skips per track, which may be much less
than one per sector.
When defect-skipping techniques are used and skip or alternate-sector information
is stored in headers, care must be taken to make sure that the storage of information
in headers other than track and sector number does not weaken the error tolerance of
the headers.
A different method for alternate-sector assignment, which avoids this
complication, is sector slipping. Each track or cylinder contains enough extra area to
write one or more extra sectors. When a sector must be retired, it and each succeeding
sector are slipped one sector-length along the track or cylinder. This method has the
additional advantage that sectors remain consecutive and no additional seek time is
required to find an alternate sector at the end of the track or cylinder, or on a different track or cylinder. This method is discussed in more detail under A
HEADER STRATEGY EXAMPLE below.
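The slipping computation itself is simple. A hypothetical sketch of the logical-to-physical mapping (the interface and names are ours, not the book's):

```python
# Sector slipping: map a logical sector to its physical slot on a track or
# cylinder, given the physical slots that have already been retired.
def physical_slot(logical, retired):
    slot = logical
    for r in sorted(retired):
        if r <= slot:
            slot += 1      # this sector and all that follow slip down one slot
    return slot

# Two retired slots (2 and 5); logical sectors remain consecutive around them.
print([physical_slot(i, [2, 5]) for i in range(5)])   # [0, 1, 3, 4, 6]
```

Because the surviving sectors stay physically consecutive, no extra seek or rotational latency is incurred, which is the advantage claimed for this method above.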
ERROR-TOLERANT TRACK FORMATS
Achieving error tolerance in the track format is a major consideration when architecting a storage device and controller for high error rate media. All special fields
and all special bytes of the track format must be error-tolerant. This includes but is
not limited to preambles, sync marks, header fields, sector marks, and index marks.
Experience shows that designing an error-tolerant track format (one that does not
dominate the uncorrectable sector event rate) to support high defect densities can be
even more difficult than selecting a high performance ECC code.
SYNCHRONIZATION
For high defect rate devices, it is essential that the device/controller architectures
include a high degree of tolerance to defects that fall within sync marks. There are
several synchronization strategies that achieve this. The selection will be influenced by
the nature of the device and the nature of defects (e.g., length distribution, growth
rate, etc.). Both false detection and detection failure probabilities must be considered.
Synchronization is discussed in detail in Section 4.8.1; some high points are briefly
covered below.
One method for achieving tolerance to defects that fall within sync marks is to
employ error-tolerant sync marks. Error-tolerant sync marks have been used in the
past that can be detected at the proper time even if several small error bursts or one
large error burst occurs within the mark. See Section 4.8.1 for a more in-depth discussion of synchronization codes.

Another strategy is to replicate sync marks with some number of bytes between.
The number of bytes between replications would be determined by the maximum defect
length to be accommodated. A different code is used for each replication so that the
detected code identifies the true start of data. The number of replications required is
selected to achieve a high probability of synchronization for the given rate and nature
of defects.
Mark lengths, codes, and detection qualification criteria are selected to
achieve an acceptable rate of false sync mark detection.
If synchronization consists of several steps, each must be error-tolerant. If sector
marks (also called address marks) and preambles precede sync marks they must also be
error tolerant.
Today, in some implementations correct synchronization will not be
achieved if an error occurs in the last bit or last few bits of a preamble. Such sensitivities must be avoided. Section 4.8.1 discusses how error tolerance can be achieved
in the clock-phasing step of synchronization as well as in the byte-synchronization step.

MAINTAINING SYNCHRONIZATION THROUGH LARGE DEFECTS
Obviously, it is desirable to maximize the defect length that the PLL can flywheel
through without losing synchronization.
Engineers responsible for defect handling
strategy will want to influence the device's rotational speed stability and PLL flywheeling characteristics. One technique that has been used to extend the length of bursts
the PLL can flywheel through is to coast the PLL through defects by using some criteria (run-length violation, loss of signal amplitude, etc.) to temporarily shut off updating of the PLL's frequency and phase memory.

FALSE SYNC MARK DETECTION
The false detection of a sync mark can result in synchronization failure. The
probability of false mark detection must be kept low by careful selection of mark lengths, codes, and qualification criteria.
In some architectures, once data acquisition has been achieved, sync mark detection is qualified with a timing window in order to minimize the probability of false
detection. In such an architecture, it is desirable to generate the timing window from
the reference clock; if the timing window is generated from the data clock and the PLL
loses sync while clocking over a large defect in a known defective sector, the following
good sector may be missed due to the subsequent mispositioning of the timing window.

HEADERS
For high error-rate devices, header strategy is influenced by defect event rates,
growth rates, length distributions, performance requirements, and write prerequisites.
One header strategy requires replication. A number of contiguous headers with
CRC are written, then on read one copy must be read error-free. Another strategy is
to allow a data field to be recovered even if its header is in error. This requires that
headers consist solely of address information such as track and sector number. If a
header is in error, such information can be generated from known track orientation.
Some devices combine this strategy with header replication in order to minimize the
frequency at which address information is generated rather than read. In any case,
devices using high error-rate media must be insensitive to defects falling into the
headers of several consecutive sectors. When address information is generated rather
than read, the data field can be further qualified by subsequent headers.
Using error correction on the header field as well as the data field will increase
the probability of recovering data. However, one must either be able to store and
correct both a header and the associated data field, or provide a way to space over a
defective header in order to recover the associated data field on a succeeding revolution.
An alternative to correcting the header is to keep only address information in the
header and to provide a way to space over a defective header. When a defective header
is detected, record address is computed from track orientation. A disadvantage of this
method is that it does not allow flags to be part of the header field.
Some devices also include address information within the highly protected data
field to use as a final check that the proper data field was recovered. This check must
take place after error correction. The best time to perform it may be just before
releasing the sector for transfer to the host.
A HEADER STRATEGY EXAMPLE

A typical error-tolerant header and sector-retirement strategy might be: Store in
the header only track and sector address information. Reserve K sectors at the end of
each cylinder for spare sectors. When a sector must be retired, slip all data sectors
down the cylinder by one sector position and write a special "sector-retired" flag in
place of the sector number in the header of the retired sector. On searches if a header is read error-free and the "sector-retired" flag is found instead of a sector number,
adjust the sector number in the known orientation and continue searching.

If a header-in-error is encountered during a search then it is either the header of
a sector that had been previously retired or it is a header containing a temporary error
or a new hard defect.
The sector number sequence encountered in continuing the
search can be used to determine which is the case. If the header-in-error was that of
an already-retired sector, the sector number sequence should be adjusted and the search
continued. Otherwise the search should still be continued unless the header-in-error
was that of the desired sector, in which case the search should be interrupted and a
re-read attempted. If the error is not present on re-read, assume it was a temporary
error and proceed to read the data field. If the error persists on re-read, assume a
new hard defect: orient on the preceding sector, skip the header-in-error, and read the
desired data field. A sector whose header contains a new hard defect should be retired
as soon as possible.
Note that the error-tolerant header strategy outlined above will not work if it is
necessary to store control data, such as location information for defect skipping, within
headers.

SERVO SYSTEMS
In many devices, the ability to handle large defects is limited by the servo system(s). Engineers responsible for defect handling strategy must understand the limits of
the servo system(s) relative to defect tolerance. In particular, any testing of defect
handling capabilities should include the servo system(s).

MODULATION CODES
The modulation code selected will affect EDAC performance by influencing noise-generated error rates, the extension of error bursts, the ability to acquire synchronization, the ability to hold synchronization through defects, the ability to generate erasure
pointers, and the resolution of erasure pointers.
The following summarizes the results of an analysis of the error propagation
performance of the (2,7) code described in U.S. Patent #4,115,768, inventors Eggenberger
and Hodges, assignee IBM (1978). Analysis was confined to cases of single-bit errors
defined below:

Drop-in: A code-bit '1' where '0' was encoded
Drop-out: A code-bit '0' where '1' was encoded
Shift:
A code-bit '1' where '0' was encoded, coincident
with an adjacent code-bit '0' where '1' was encoded
Error propagation length is defined as the inclusive number of data-bits between
the first data-bit in error and the last data-bit in error caused by a given code-bit
error case.
Random fifteen-data-bit sequences were generated and encoded using the encoder
described in the patent. Drop-in, drop-out, and shift errors were created in turn in the
twelfth through the eighteenth bits of the resulting code-bit sequences. The corrupted
code-bit sequences were decoded using the decoder described in the patent, the resulting data-bit sequences were analyzed, and the error propagation lengths recorded.
Results of 2000 trials are shown below:

                    ERROR PROPAGATION LENGTH
    ERROR
    TYPE           0     1     2     3     4     5   TOTAL
    DROP-IN   #  2674  5009  1220   195   127     0    9225
              %    29    54    13     2     1     0
    DROP-OUT  #   201  1258   776   448    92     0    2775
              %     7    45    28    16     3     0
    SHIFT     #   150  1955  1496  1242   508   125    5476
              %     3    36    27    23     9     2
    TOTAL     #  3025  8222  3492  1885   727   125   17476
              %    17    47    20    11     4     1
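An experiment of this kind is easy to reproduce in outline. The sketch below uses the widely published (2,7) data-word-to-codeword table, which may differ in detail from the patent's sequential encoder, and demonstrates only the encode/decode round trip, not the error-propagation measurement itself.

```python
# Round-trip sketch of the commonly published IBM (2,7) RLL mapping.
TABLE = {
    "10":   "0100",
    "11":   "1000",
    "000":  "000100",
    "010":  "100100",
    "011":  "001000",
    "0010": "00100100",
    "0011": "00001000",
}
INV = {v: k for k, v in TABLE.items()}

def encode(bits):
    out, i = [], 0
    while i < len(bits):
        for w in (2, 3, 4):                  # data words are 2 to 4 bits
            if bits[i:i + w] in TABLE:
                out.append(TABLE[bits[i:i + w]])
                i += w
                break
        else:
            raise ValueError("trailing bits do not form a complete data word")
    return "".join(out)

def decode(code):
    out, i = [], 0
    while i < len(code):
        for w in (4, 6, 8):                  # codewords are 4, 6, or 8 bits
            if code[i:i + w] in INV:
                out.append(INV[code[i:i + w]])
                i += w
                break
        else:
            raise ValueError("not a valid (2,7) stream")
    return "".join(out)

data = "10110000100011"
assert decode(encode(data)) == data
```

Corrupting single code bits before decoding, as described above, then comparing the decoded data bits against the originals, yields the propagation-length statistics tabulated in the text.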

4.4 DATA ACCURACY

Data accuracy is one of the most important considerations in error correction
system design. The following discussion on data accuracy is concerned primarily with
magnetic disk applications. However, the concepts are extendable to many other error
correction applications.
The transfer of undetected erroneous data can be one of the most catastrophic
failures of a data storage system; consider the consequences of an undetected error in
the money field of a financial instrument or the control status of a nuclear reactor.
Most users of disk subsystems consider data accuracy even more important than data
recoverability. Nevertheless, many disk subsystem designers are unaware of the factors
determining data accuracy.
Some causes of undetected erroneous data transfer are listed below.

    - Miscorrection by an error-correcting code.

    - Misdetection by an error-detecting or error-correcting code.

    - Synchronization framing errors in an implementation without synchronization
      framing error protection.

    - Occasional failure on an unprotected data path on write or read.

    - Occasional failure on an unprotected RAM buffer within the data path on
      write or read.

    - A software error resulting in the transfer of the wrong sector.

    - A broken error latch which never flags an error; other broken hardware.

Some other factors impacting data accuracy are discussed below.

POLYNOMIAL SELECTION
In disk subsystems, the error-correction polynomial has a significant influence on
data accuracy. Fire code polynomials, for example, have been widely used on disk controllers, yet they provide less accuracy than carefully selected computer-generated
codes.
Many disk controller manufacturers have employed one of the following Fire code
polynomials:
    (x^21 + 1)*(x^11 + x^2 + 1)   or   (x^21 + 1)*(x^11 + x^9 + 1)
The natural period of each polynomial is 42,987. Burst correction and detection
spans are both eleven bits for record lengths, including check bits, no greater than the
natural period. These codes are frequently used to correct eleven-bit bursts on record
lengths of 512 bytes.
When used for correction of eleven-bit bursts on a 512-byte record, these codes
miscorrect ten percent of all possible double bursts where each burst is a single bit in
error. With the same correction span and record length, the miscorrection probability
for all possible error bursts is one in one thousand. The short double burst, with each
burst a single bit in error, has a miscorrection probability two orders of magnitude
greater.
Such codes have a high miscorrection probability on other short double bursts as
well. Double bursts are not as common as single bursts. However, due to error clustering, they occur frequently enough to be a problem.
The data accuracy provided by the above Fire codes for all possible error bursts is
comparable to that provided by a ten-bit CRC code. The data accuracy for all possible
double-bit errors is comparable to that provided by a three-bit or four-bit CRC code.
Fire codes are defined by generator polynomials of the form:
    g(x) = c(x)*p(x) = (x^c + 1)*p(x)

where p(x) is any irreducible polynomial of degree z and period e, and e does not divide
c.

The period of the generator polynomial g(x) is the least common multiple of c and
e. For record lengths (including check bits) not exceeding the period of g(x), these
codes are guaranteed to correct single bursts of length b bits and detect single bursts
of length d bits where d >= b, provided z >= b and c >= (d+b-1).
The composite form of the generator polynomial g(x) is used for encoding.
Decoding can be performed with a shift register implementing the composite generator
polynomial g(x) or by two shift registers implementing the factors of the generator
polynomial, c(x) and p(x). Code performance is the same in either case.
The p(x) factor of the Fire code generator polynomial carries error displacement
information. The c(x) factor carries error pattern information. It is this factor that is
responsible for the Fire code's pattern sensitivity.
To understand the pattern sensitivity, assume that decoding is performed with shift registers implementing the individual factors of the generator polynomial. For a particular error burst to result in
miscorrection, it must leave in the c(x) shift register a pattern that qualifies as a
correctable error pattern. A high percentage of short double bursts do exactly that.
For example, two bits in error, (c+ 1) bits apart, would leave the same pattern in the
c(x) shift register as an error burst of length two. The same would be true of two bits
in error separated by any multiple of (c+ 1) bits.
If p(x) has more redundancy than required by the Fire code formulas, the excess
redundancy reduces the miscorrection probability for short double bursts, as well as the
miscorrection probability for all possible error bursts.
The overall miscorrection probability (Pmc) for a Fire code is given by the following equation, assuming all errors are possible and equally probable:

    Pmc ≈ (n * 2^(b-1)) / 2^m                        (1)

where,

    n = record length in bits including check bits.
    b = guaranteed single-burst correction span in bits.
    m = total number of check bits.
For many Fire codes, the miscorrection probability for double bursts where each
burst is a single bit in error is given by the following equation, assuming all such
errors are possible and equally probable.

    Pmcdb ≈ (2 * n * (b-1)) / (c^2 * (2^z - 1))      (2)

where,

    n and b are as defined above.
    c = degree of the c(x) factor of the Fire code polynomial.
    z = degree of the p(x) factor of the Fire code polynomial.
This equation is unique to the Fire Code. It is applicable only when the product
of Pmcdb and the number of possible double-bit errors is much greater than one. When
this is not true, a computer search should be used to determine Pmcdb.
The ratio of Pmcdb to Pmc provides a measure of pattern sensitivity for one particular double burst (each burst a single bit in error). Remember that the Fire code is
sensitive to other short double bursts as well.
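Equations (1) and (2) reproduce the figures quoted earlier for the 32-bit Fire codes on a 512-byte record:

```python
# Check the miscorrection figures quoted above for the 32-bit Fire codes
# (x^21 + 1)(x^11 + x^2 + 1): 512-byte record, 11-bit correction span.
n = 512 * 8 + 32      # record length in bits, including check bits
b, m = 11, 32         # correction span, total check bits
c, z = 21, 11         # degrees of the c(x) and p(x) factors

pmc = n * 2**(b - 1) / 2**m                      # Equation (1)
pmcdb = 2 * n * (b - 1) / (c**2 * (2**z - 1))    # Equation (2)
print(f"Pmc   = {pmc:.2e}")    # about 1e-3: one in one thousand
print(f"Pmcdb = {pmcdb:.2e}")  # about 9e-2: roughly ten percent
```

The two-orders-of-magnitude gap between Pmcdb and Pmc is the pattern sensitivity the text describes.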
Properly selected computer-generated codes do not exhibit the pattern sensitivity
of Fire codes. In fact, it is possible to select computer-generated codes that have a
guaranteed double-burst detection span. The miscorrecting patterns of these codes are
more random than those of Fire codes. They are selected by testing a large number of
random polynomials of a particular degree.
Provided the specifications are within
certain bounds, some polynomials will satisfy them.
There are equations that predict the number of polynomials one must evaluate to
meet a particular specification.
In some cases, thousands of computer-generated polynomials must be evaluated to
find a polynomial with unique characteristics.

For a computer-generated code, correction and detection spans are determined by
computer evaluation. Overall miscorrection probability is given by Equation #1.
To increase data accuracy, many disk controller manufacturers are switching from
Fire codes to computer-generated codes.

ERROR RECOVERY STRATEGY
Error recovery strategies also have a significant influence on data accuracy. A
strategy that requires data to be reread before attempting correction provides more
accurate data than a strategy requiring the use of correction before rereading.
An equation for data inaccuracy is given below:

Pued = Pe * Pc * Pmc                                         (3)

where,
Pued = Probability of undetected erroneous data
       Ratio of undetected erroneous data occurrences to total bits transferred.
       This is a measure of data inaccuracy.
Pe = Raw burst error rate
       Ratio of raw burst error occurrences to total bits transferred.
Pc = Catastrophic probability
       Probability that a given error occurrence exceeds the guaranteed
       capabilities of a code.
Pmc = Miscorrection probability
       Probability that a given error occurrence, exceeding the guaranteed
       capabilities of a code, will result in miscorrection, assuming
       all errors are possible and equally probable.
It is desirable to keep the probability of undetected erroneous data (Pued) as low
as possible. The burst error rate, catastrophic probability, or miscorrection probability
must be reduced to reduce Pued. (See Equation #3).

Miscorrection probability (Pmc) can be reduced by decreasing the record length
and/or the correction span, or by increasing the number of check bits. Catastrophic
probability (Pc) can be reduced by increasing the guaranteed capabilities of the code, or
by reducing the percentage of error bursts that exceed the guaranteed code capabilities.
Burst error rate (Pe) can be reduced by using reread. Most disk products exhibit
soft burst error rates several orders of magnitude higher than hard burst error rates.
Rereading before attempting correction makes Pe (in Equation #3) the hard burst error
rate instead of the soft burst error rate, reducing Pued by orders of magnitude.
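A hedged numeric sketch of Equation #3 makes the effect concrete; all of the rates below are illustrative assumptions, not measured values.

```python
# Illustrative sketch of Equation #3: Pued = Pe * Pc * Pmc.
# The rates are hypothetical, chosen only to show how rereading before
# correction (substituting the hard rate for the soft rate) scales Pued.

def pued(pe: float, pc: float, pmc: float) -> float:
    return pe * pc * pmc

pc = 1e-4        # fraction of bursts exceeding the guaranteed code capability
pmc = 1e-6       # miscorrection probability for such a burst

pe_soft = 1e-9   # soft burst error rate, per bit transferred
pe_hard = 1e-12  # hard burst error rate, orders of magnitude lower

pued_correct_first = pued(pe_soft, pc, pmc)  # correction before reread
pued_reread_first = pued(pe_hard, pc, pmc)   # reread before correction
print(pued_correct_first, pued_reread_first)
```

Because Equation #3 is a simple product, Pued drops by exactly the ratio of the soft to hard error rates, three orders of magnitude in this sketch.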

- 233-

Rereading before attempting correction provides additional improvement in Pued
due to the different distributions of long error bursts and multiple error bursts in hard
and soft errors.
Another strategy that reduces Pued is to reread until an error disappears, or until
there has been an identical syndrome for the last two reads. Correction is then attempted only after a consistent syndrome has been received.
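The consistent-syndrome strategy can be sketched as a retry loop. The callables read_sector, compute_syndrome, and correct are hypothetical stand-ins for controller operations, not an interface from the text.

```python
# Hedged sketch of the strategy: reread until the error disappears or the
# syndrome is identical on two consecutive reads; only then attempt
# correction. The three callables are hypothetical hooks.

def recover(read_sector, compute_syndrome, correct, max_rereads=8):
    prev_syndrome = None
    for _ in range(max_rereads):
        data = read_sector()
        syndrome = compute_syndrome(data)
        if syndrome == 0:
            return data                      # error disappeared on reread
        if syndrome == prev_syndrome:
            return correct(data, syndrome)   # consistent syndrome: correct now
        prev_syndrome = syndrome             # inconsistent: reread again
    raise IOError("no consistent syndrome after rereads")
```

A soft error whose syndrome changes from read to read keeps the loop rereading; only a stable (hard) syndrome reaches the corrector.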

- 234-

DESIGN PARAMETERS
For data accuracy, a low miscorrection probability is desirable. Miscorrection
probability can be reduced by decreasing the record length and/or correction span, or
by increasing the number of check bits.
For most Winchester media, a five-bit correction span has been considered adequate. A longer correction span is needed if the drive uses a read/write modulation
method that maps a single encoded bit in error into several decoded bits in error, such
as group coded recording (GCR) and run-length limited (RLL) codes.
For several years, 32-bit codes were considered adequate for sectored Winchester
disks provided that the polynomial was selected carefully, record lengths were short,
correction span was low, correction was used only on hard errors, and the occurrence
rate for hard errors exceeding the guaranteed capability of the code was low.
More recently, most disk controller developers have been using 48-, 56- and 64-bit
codes in their new designs. Using more check bits increases data accuracy and provides
flexibility for increasing the correction span when the product is enhanced. Using more
check bits also allows other error-recovery strategies to be considered, such as
on-the-fly correction.
Disk controller developers are also implementing redundant sector techniques and
Reed-Solomon codes. Redundant sector techniques allow very long bursts to be corrected. Reed-Solomon codes allow multiple bursts to be corrected.

ECC CIRCUIT IMPLEMENTATION
Cyclic codes provide very poor protection when frame synchronization is lost, i.e.,
when synchronization occurs early or late by one or more bits.
One way to protect against this type of error is to initialize the shift register to
a specially selected nonzero value. The same initialization constant must be used on
read and write. Another method is to invert a specially selected set of check bits on
write and read. Each method gives the ECC circuit another important feature - nonzero
check bits are written for an all-zeros data record. This allows certain logic failures to
be detected before inaccurate data is transferred. See Section 4.8.2 for further discussion of synchronization framing errors.
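The all-zeros-record property is easy to demonstrate with a small CRC model. The sketch below uses a generic 32-bit polynomial as a stand-in for the ECC code, and the initialization constant is an arbitrary assumption: a nonzero init yields nonzero check bits for an all-zeros record, while the read-side residue is still zero when the same constant is used on read.

```python
# Sketch with a generic CRC-32 polynomial (not the book's ECC code):
# a nonzero shift-register initialization constant produces nonzero check
# bits for an all-zeros record, yet the read-side residue is still zero
# when the same constant is used on read.

POLY = 0x04C11DB7
MASK = 0xFFFFFFFF

def crc32_reg(data: bytes, init: int) -> int:
    """Bitwise MSB-first CRC-32 register, no reflection, no final XOR."""
    reg = init & MASK
    for byte in data:
        reg ^= byte << 24
        for _ in range(8):
            reg = ((reg << 1) ^ POLY) & MASK if reg & 0x80000000 else (reg << 1) & MASK
    return reg

record = bytes(512)           # all-zeros data record
INIT = 0xA5A5A5A5             # hypothetical nonzero initialization constant

assert crc32_reg(record, 0) == 0       # zero init: all-zero check bits
check = crc32_reg(record, INIT)
assert check != 0                      # nonzero init: nonzero check bits

# Read side: same init, data followed by check bits, residue must be zero.
assert crc32_reg(record + check.to_bytes(4, "big"), INIT) == 0
```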
Still, some ECC circuit failures can result in transferring inaccurate data. If the
probability of ECC logic failure contributes significantly to the probability of transferring inaccurate data, include some form of self-checking. See Section 6.5.

- 235-

DEFECT MANAGEMENT STRATEGY
All defects should have alternate sectors assigned, either by the drive manufacturer
or subsystem manufacturer, before the disk subsystem is shipped to the end user.
There are problems with a philosophy that leaves defects to be corrected by ECC
on each read, instead of assigning alternate sectors. First, if correction before reread
is used, a higher level of miscorrection results. This is because a soft error in a sector
with a defect results in a double burst. Once a double burst occurs that exceeds the
double-burst-detection span, miscorrection is possible.
In the second case, if reread before correction is used, revolutions will be lost each
time a defective sector is read.

ERROR RATES
Clearly, disk drive error rates also significantly influence data accuracy. If errors
exceeding the guaranteed capability of the code never occurred, inaccurate data would
never be transferred.
When a data separator is part of the controller, its design affects error rate and
therefore data accuracy. While most drive manufacturers provide recommended data
separator designs, there are also well-qualified consultants who specialize in this area.

SPECIFYING DATA ACCURACY
The probability of undetected erroneous data (Pued) is a measure of data inaccuracy. Sophisticated developers of disk subsystems are now targeting 1.E-20 or less
for Pued.
Even when Pe and Pc are high, one can still achieve any arbitrarily low Pued by
carefully selecting the correction span, record length, and number of check bits. (See
Equations #1 and #3).

ACHIEVING HIGHER DATA INTEGRITY
The following first appeared in slightly different form in the March 1988 issue of
the ENDL Newsletter.
Horror stories about the consequences of a storage subsystem transferring undetected erroneous data have been circulating since the dawn of the computer age. As
the computer industry matures, data integrity requirements for storage subsystems have
increased along with capacity, throughput, and uptime requirements. To meet these
higher demands, both the probability of uncorrectable error and the probability of
transferring undetected erroneous data must decrease. As more and more powerful
error detection and correction systems are implemented to protect data from higher
media-related error rates, errors arising in other areas of the subsystem will come to
dominate unless equivalent protection is provided. The most powerful media EDAC
system is useless against errors occurring anywhere in the write path from the host
interface to the input of the EDAC encoder or in the read path from the output of the
EDAC decoder to the host interface.

- 236-

One example of undetected erroneous data which the media EDAC system is powerless to detect is a single-bit soft error occurring in an unprotected data buffer after
the EDAC system has corrected the data but before the data are transferred to the
host. Another example is a subtle subsystem software error which causes a request for
the wrong sector to be executed. The actual sector fetched may contain no media-related errors and so be accepted as correct by the media EDAC system, yet it is not
the data which the host requested.
Data Systems Technology, Corp. (DST) has proposed a method to combat errors not
covered by the media EDAC system. DST recommends that the host append a CRC
redundancy field to each logical sector as it is sent to the storage subsystem and
perform a CRC check on each logical sector as it is received from the storage subsystem. DST further recommends that a logical identification number containing at least
the logical sector number, and perhaps the logical drive number as well, be placed
within each logical sector written to a storage subsystem and that this number be
required to match that requested when each logical sector is received from the storage
subsystem.
It is possible to combine these two functions so that only four extra bytes per
logical sector are needed to provide both thirty-two-bit CRC protection and positive
sector/drive identification.
Three methods are outlined below; whatever method is
chosen for implementing the two functions, it must be selected with multiple-sector
transfers in mind.
(1) Append to each logical sector within the host's memory a four-byte logical
sector number field. Design the host adapter so that as each logical sector of a multiple-sector write is fetched from the host's memory, four bytes of CRC redundancy are
computed across the data portion of the logical sector and then EXCLUSIVE-OR summed
with the logical sector number field and transferred to the storage subsystem immediately behind the data. During a multiple-sector read, the host adapter would compute CRC redundancy over the data portion of each received logical sector and EXCLUSIVE-OR sum it with the received sum of the logical identification number and CRC
redundancy generated on write, then store the result after the data portion of the
logical sector in the host's memory. The host processor would then have to verify that
the result for each logical sector of a multiple-sector transfer matches the identification
number of the respective requested logical sector. If an otherwise undetected error
occurs anywhere in a logical sector anywhere beyond the host interface which exceeds
the guarantees of the host CRC code, including the fetching of the wrong sector, the
logical identification number within the host's memory will be incorrect with probability
1-(2.33E-10).
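Method (1) can be modeled in a few lines. The sketch below uses binascii.crc32 as a stand-in for whichever host CRC code is actually chosen; only the field layout (CRC of the data XORed with the logical sector number) is being illustrated.

```python
import binascii

# Hedged model of method (1): the four-byte field appended to each logical
# sector is CRC(data) XOR logical-sector-number. binascii.crc32 stands in
# for the host CRC code; any 32-bit CRC would serve.

def write_field(data: bytes, lsn: int) -> bytes:
    return ((binascii.crc32(data) ^ lsn) & 0xFFFFFFFF).to_bytes(4, "big")

def read_check(data: bytes, field: bytes, requested_lsn: int) -> bool:
    # Recompute CRC over the data and XOR with the received field; the
    # result must equal the identification number of the requested sector.
    return (binascii.crc32(data) ^ int.from_bytes(field, "big")) == requested_lsn

sector = bytes(range(256)) * 2
field = write_field(sector, lsn=1234)

assert read_check(sector, field, requested_lsn=1234)        # correct sector
assert not read_check(sector, field, requested_lsn=1235)    # wrong sector fetched
corrupted = bytes([sector[0] ^ 1]) + sector[1:]
assert not read_check(corrupted, field, requested_lsn=1234) # data error caught
```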

(2) Keep data contiguous in the host's memory by instead recording the identification numbers of all of the logical sectors in a multiple sector transfer within the host
adapter's memory, but process the data and identification numbers for the CRC code in
the same manner as in (1). The host adapter would have the responsibility for checking
that identification numbers match those requested. Equivalent error detection is achieved.
(3) Initialize the CRC shift register at the host interface with the identification
number of each logical sector before writing or reading each logical sector of a multiple-sector transfer. The host adapter would require that on read the CRC residue for
each logical sector be zero. Again equivalent error detection is achieved.
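Method (3) can be sketched with a bitwise CRC register: initialize the register with the sector's identification number before writing or reading, and require a zero residue on read. The CRC-32 polynomial here is a generic stand-in, not a specified host code.

```python
# Hedged sketch of method (3): initialize the CRC shift register with the
# logical sector's identification number and require a zero residue on
# read. Generic CRC-32 polynomial; the host code could be any 32-bit CRC.

POLY = 0x04C11DB7
MASK = 0xFFFFFFFF

def crc32_reg(data: bytes, init: int) -> int:
    """Bitwise MSB-first CRC-32 register, no reflection, no final XOR."""
    reg = init & MASK
    for byte in data:
        reg ^= byte << 24
        for _ in range(8):
            reg = ((reg << 1) ^ POLY) & MASK if reg & 0x80000000 else (reg << 1) & MASK
    return reg

sector = bytes(range(256)) * 2
lsn = 1234

# Write: register initialized with the sector's ID before computing check bits.
check = crc32_reg(sector, init=lsn).to_bytes(4, "big")

# Read: same initialization; the residue over data plus check must be zero.
assert crc32_reg(sector + check, init=lsn) == 0

# Fetching the wrong sector (wrong ID on read) leaves a nonzero residue.
assert crc32_reg(sector + check, init=lsn + 1) != 0
```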

- 237-

To implement the CRC/ID field approach toward achieving higher data integrity,
computer builders will have to support the generation and checking of the extra four
bytes of CRC redundancy. Storage subsystem suppliers accustomed to sector lengths
which are powers of two will have to accommodate sector lengths which are greater by
four bytes. If the storage subsystem architecture includes its own auxiliary CRC field
of thirty-two or fewer bits, an option to disable it should be provided in order to
minimize overhead when the storage subsystem is connected to a host which implements
the CRC/ID field. The scope of coverage of the host CRC/ID field is much greater
than that of an equivalent-length auxiliary CRC field which protects only against media
errors, so data integrity can be greatly improved at no increase in overhead if the
subsystem auxiliary CRC code is disabled and the host CRC/ID field is used instead.
Procedures like those outlined above can have a profound impact on data integrity
in computer systems. They allow the computer builder to be in control of the integrity
of data throughout the entire system without being concerned with the detailed designs
of the storage subsystems connected to the system.

- 238-

SUMMARY
When designing error correction for a disk controller, keep data accuracy high by
using the techniques listed below:
- Use a computer-generated code to avoid pattern sensitivity.

- Reread before attempting error correction.

- Use the lowest possible correction span meeting the requirements of supported
  drives.

- Ensure that the ECC circuit provides adequate protection from sync framing
  errors.

- Design the ECC circuit to generate nonzero check bits for an all-zeros data
  record.

- Include self-checking, if it is required to meet the specification for probability of undetected erroneous data (Pued).
[Listing: READ SIMULATION RUN #4 (continued). The hardware part of the simulation shifts the record (R bits, including check bits and overhead) through the serial external-XOR form of the shift register, shifting left, showing registers R1-R4 for bits R-96 through R-1 along with the injected error burst. After the data and check bytes are read, the shift-register input is degated and the PIN 9 output is gated to the deserializer so the syndrome is stored. The correction procedure then simulates the internal-XOR form of the shift register, shifting the syndrome right with software; a correctable pattern is found at bit displacement -13, and byte alignment gives a byte displacement of 1, counting from the end of the record, where the last byte is byte zero.]
[Listing: READ SIMULATION RUN #5, simulation of hardware and software. The hardware part shifts the record through the serial external-XOR form of the shift register, shifting left, for bits R-96 through R-1; after the data and check bytes are read, the shift-register input is degated and the PIN 9 output is gated to the deserializer so the syndrome is stored. The correction procedure then simulates the internal-XOR form of the shift register, shifting the syndrome right with software eight bits at a time; a correctable pattern is found with a byte displacement of 13, counting from the end of the record, where the last byte is byte zero.]

[Listing: READ SIMULATION RUN #6, simulation of hardware and software. The hardware part shifts the record through the serial external-XOR form of the shift register, shifting left, for bits R-96 through R-1; after the data and check bytes are read, the shift-register input is degated, the PIN 9 output is gated to the deserializer, and the hardware part completes with the syndrome stored.]
-3
-3
-2
-2
-2
-2
-2
-2
-2
-2
-1
-1
-1
-1
-1
-1
-1
-1
9
9
9
9
9
9
9
0

PIN 9= 1
PIN 9= 1
PIN 9= 1
PIN 9= 0
PIN 9= 1
PIN 9= 1
PIN 9= :1.
PIN 9= :1.
PIN 9= 1
PIN 9= 1
PIN 9= 0
PIN 9= 0
PIN 9= 1
PIN 9= 0
PIN 9= 1
PIN 9= 0
PIN 9= ~
PIN 9= 1
PIN 9= 1
PIN 9= 1
PIN 9= 0
PIN 9= 1
PIN 9= 0
PIN 9= 1
PIN 9= 9
PIN 9= ~
PIN 9= 0
PIN 9= 9
PIN 9= 0
PIN 9= 0
PIN 9= 0
PIN 9= :1.

SIMULATION RUN..

6 CONTINUED

SI:f"1IJLATION OF CORRECTION PROCEDURE
BEGIN SHIFTING SYNDROME
THIS PART SIMULATES INTERNAL XOR FORf1 OF SHIFT REG
(SHIFTING RIGHT WITH SOFTWARE 8 BITS AT A TIME)
0
X

131.81.81381.
13131.1.13131.13
1313131.131.1313
001.1.1.1381.
01.1.13131.1.1.
l.el<:t1.1.1.ee
13131301.01.1.
e0Beoeee

R-Z2
R-48
R-48
R-56
R-64
R-72
R-se
R-88

CORI':ECTABLE

PATTER~J

1.1.1.81.881.
1.0eee1.0e
ee1.1.e1.1.B
01.01.1.1.81.
1.131.1.1.131.13
01.13131.1.1.0
ee000e013
e0ee1.01.1.

1.1.001.801.
013131313131.1.
1313131301.1.13
1.1301.001.0
1.13131.1.1.013
0131.01.1.1.0
eeeee013B
01300ee13e

31.
X
1.131.1.81.81.
13131313013131.
!31.01.eB1.0
1.1.1.013131313
131313131.131.1.
1.1.1.1301.1.1.
eeBe0eB13
000eee00

FOUND.

B'T'TE DISPLACEMENT IS 1.13.
COUNTING FRot1 END OF RECORD.

LAST BYTE IS ZERO.

SIf'1ULATION CONPLETE.

- 343 -

READ SIMULATION RUN NO. 7

[Shift-register state listing omitted; the bit patterns are not reliably recoverable from the scan. The legible framing is:

SIMULATION OF HARDWARE AND SOFTWARE
BEGIN HDW PART OF SIMULATION
(SHIFTING LEFT, SERIAL EXTERNAL XOR FORM OF SHIFT REG)
Columns: BIT NO., DATA BITS, ERROR BURST, R1-R4, BYTE NO.
(SEE SIMULATION RUN #1 FOR FIRST 49 SHIFTS)
(R IS RECORD LEN IN BITS INCLUDING CHK AND OVERHD)
FINISHED READING DATA BYTES. NOW READ CHECK BYTES. INPUT TO SHIFT REGISTER NOW DEGATED. PIN 9 OUTPUT IS GATED TO DESERIALIZER TO BE STORED AS SYNDROME.
HDW PART NOW COMPLETE - SYNDROME HAS BEEN STORED.
SIMULATION OF CORRECTION PROCEDURE. BEGIN SHIFTING SYNDROME. THIS PART SIMULATES INTERNAL XOR FORM OF SHIFT REG (SHIFTING RIGHT WITH SOFTWARE 8 BITS AT A TIME).
CORRECTABLE PATTERN FOUND. BYTE DISPLACEMENT IS [illegible], COUNTING FROM END OF RECORD. LAST BYTE IS ZERO. SIMULATION COMPLETE.]

5.3.7 RECIPROCAL POLYNOMIAL TABLES
The byte-serial software algorithm requires four 256-byte tables. These tables are
listed on the following pages. Since data entry is error prone, the tables should be
regenerated by computer.
To regenerate the tables, implement a right-shifting internal-XOR serial shift
register in software, using the reciprocal polynomial. For each address of the tables
(0-255), place the address in the eight most significant (right-most) bits of the shift
register and clear the remaining bits. Shift eight times, then store the four bytes of
shift register contents in tables T1 through T4 at the location indexed by the current
address. The coefficient of x^31 is stored as the high-order bit of T1; the coefficient of
x^0 is stored as the low-order bit of T4. Check the resulting tables against those on
the following pages.
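A sketch of this regeneration procedure in software follows. The 32-bit reciprocal-polynomial feedback mask and the exact bit-ordering conventions are assumptions here (the actual polynomial is defined elsewhere in the book), so take the loop structure from this sketch, not the mask value.

```python
def make_tables(poly_mask):
    """Regenerate T1-T4 with a right-shifting internal-XOR shift register.

    poly_mask is the 32-bit feedback mask of the reciprocal polynomial
    (a placeholder here; substitute the book's polynomial in practice).
    """
    t1, t2, t3, t4 = [0] * 256, [0] * 256, [0] * 256, [0] * 256
    for addr in range(256):
        reg = addr                 # address in the right-most byte, rest cleared
        for _ in range(8):
            lsb = reg & 1          # bit shifted out the right end
            reg >>= 1
            if lsb:
                reg ^= poly_mask   # internal-XOR feedback
        t1[addr] = (reg >> 24) & 0xFF   # high-order byte of the register -> T1
        t2[addr] = (reg >> 16) & 0xFF
        t3[addr] = (reg >> 8) & 0xFF
        t4[addr] = reg & 0xFF           # low-order byte -> T4
    return t1, t2, t3, t4
```

Because the address-to-register map is linear over GF(2), each table satisfies T[a XOR b] = T[a] XOR T[b]; that identity is a convenient cross-check on a regenerated table regardless of which polynomial is used.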


RECIPROCAL POLYNOMIAL TABLE T1

[256-entry hex table omitted; the values are not reliably recoverable from the scan and should be regenerated by computer as described in Section 5.3.7.]

RECIPROCAL POLYNOMIAL TABLE T2

[256-entry hex table omitted; regenerate by computer as described in Section 5.3.7.]

RECIPROCAL POLYNOMIAL TABLE T3

[256-entry hex table omitted; regenerate by computer as described in Section 5.3.7.]

RECIPROCAL POLYNOMIAL TABLE T4

[256-entry hex table omitted; regenerate by computer as described in Section 5.3.7.]

5.4 APPLICATION TO MASS STORAGE DEVICES
This section describes an interleaved Reed-Solomon code implementation that is
suitable for many mass storage devices. It is a composite of several real-world implementations, including the implementation described in U.S. Patent #4,142,174, Chen, et
al. (1979).
The implementation has triple-symbol error-correction capability and is interleaved
to depth 32. Symbols are one byte wide.
Key features of the implementation are:
- Corrects up to 3 random symbol errors in each interleave.
- Corrects a single burst up to 96 bytes in length.
- The data format includes a resync field after every 32 data bytes. This limits
the length of an error burst resulting from synchronization loss.

The media data format is shown below. Data is transferred to and from the media
one row at a time. Checking is performed in the column dimension.

[Media data format diagram omitted. It shows 32 interleaves (columns 0, 1, ..., 30, 31) separated by resync fields; each interleave contains 65 data symbols followed by 6 check symbols.]
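The burst-correction claim above can be checked with a short counting sketch: a burst of up to 96 consecutive bytes, distributed round-robin over 32 interleaves, touches at most 3 symbols in any one interleave, which is within the triple-error capability of each codeword. The round-robin byte-to-interleave assignment below is an assumption (and resync-field overhead bytes are ignored).

```python
def max_symbols_per_interleave(burst_len, start, n_interleaves=32):
    # Count how many bytes of a burst fall into each interleave,
    # assuming byte i of the record belongs to interleave i % n_interleaves.
    counts = [0] * n_interleaves
    for i in range(start, start + burst_len):
        counts[i % n_interleaves] += 1
    return max(counts)

# Any 96-byte burst, wherever it starts, hits exactly 3 symbols per interleave.
assert all(max_symbols_per_interleave(96, s) == 3 for s in range(32))
```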

The following pages show, for the implementation:
- The write encoder circuit.
- The syndrome circuits.
- The finite field processor.
- An algorithm for determining the number of errors occurring and for generating coefficients of the error locator polynomial.
- Algorithms for finding the roots of the error locator polynomial in the
single-, double-, and triple-error cases.
- Algorithms for determining error values for the single-, double-, and triple-error cases.
- ROM tables for taking logarithms and antilogarithms, for finding the roots of
the equation y^2 + y + c = 0, and for taking the cube root of a finite field element.


ENCODE POLYNOMIAL

    (x + 1)·(x + α)·(x + α^2)·(x + α^3)·(x + α^4)·(x + α^5)

      = x^6 + α^94·x^5 + α^10·x^4 + α^136·x^3 + α^15·x^2 + α^104·x + α^15

WRITE ENCODER

[Encoder circuit diagram omitted; labeled signals: GATE, WRITE DATA, WRITE DATA/CHECK BYTES.]

SYNDROME CIRCUITS

There are six circuits (i = 0 to 5) and each circuit is interleaved to depth 32.

[Syndrome circuit diagram omitted; labeled signal: READ DATA/CHECK BYTES.]

FINITE FIELD PROCESSOR

Except where noted, all paths are eight bits wide.

[Processor block diagram omitted; labeled elements: LOG ROM, 8-BIT BINARY ADDER MOD 255.]
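The datapath above (two LOG ROMs feeding an 8-bit adder mod 255, then an antilog ROM) is the classic table-lookup multiplier. A minimal software model follows; the generator polynomial 0x11D (x^8 + x^4 + x^3 + x^2 + 1) is an assumption for illustration and may differ from the field generator used in this implementation.

```python
# Build antilog/log tables for GF(256) from an assumed generator polynomial.
POLY = 0x11D  # assumption; the book's field generator may differ

ANTILOG = [0] * 256
LOG = [0] * 256
x = 1
for n in range(255):
    ANTILOG[n] = x   # ANTILOG[n] = alpha^n
    LOG[x] = n       # LOG[alpha^n] = n
    x <<= 1          # multiply by alpha
    if x & 0x100:
        x ^= POLY    # reduce modulo the generator polynomial

def gf_mul(a, b):
    """Multiply in GF(256): log both operands, add mod 255, antilog."""
    if a == 0 or b == 0:
        return 0
    return ANTILOG[(LOG[a] + LOG[b]) % 255]
```

The `% 255` step is exactly the hardware's "8-bit binary adder mod 255".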

DETERMINE NUMBER OF ERRORS AND GENERATE ERROR LOCATOR POLYNOMIAL

[Flowchart omitted; its decision branches (the YES/NO tests and the UNCORRECTABLE exit) are not recoverable from the scan. The legible computation boxes include:

    D1 = S1·S1 + S0·S2
    σ1 = (S1·S2 + S0·S3)/D1
    σ2 = (S1·S3 + S2·S2)/D1
    D2 = S4 + σ1·S3 + σ2·S2

    D = S0·S3 + S1·S2
    σ1 = (S1·S3 + S0·S4)/D
    σ2 = (S2·(S3 + σ1·S2) + S0·(S5 + σ1·S4))/D
    σ3 = (S3 + σ1·S2 + σ2·S1)/S0

    σ1 = S3/S2
    σ2 = (S4 + σ1·S3)/S2
    σ3 = (S5 + σ1·S4 + σ2·S3)/S2

    σ1' = (σ2·S3 + σ1·S4 + S5)/D2
    σ2' = σ1'·σ1 + (S1·S3 + S0·S4)/D1
    σ3' = σ1'·σ2 + (S1·S4 + S2·S3)/D1
    σ1 = σ1', σ2 = σ2', σ3 = σ3' ]

COMPUTE ERROR LOCATIONS AND ERROR VALUES

[Flowchart omitted; the legible computation boxes, for the single-, double-, and triple-error branches respectively, are:

Single error:
    X1 = α^L1 = σ1
    L1 = LOGα(X1)
    E1 = S0

Double error:
    C = σ2/(σ1)^2
    Y1 = TBLA(C)
    Y2 = Y1 + α^0
    X1 = α^L1 = σ1·Y1
    X2 = α^L2 = σ1·Y2
    L1 = LOGα(X1), L2 = LOGα(X2)
    E1 = (X2·S0 + S1)/(X1 + X2)
    E2 = E1 + S0

Triple error:
    K = (σ1)^2 + σ2
    C = K^3/(σ1·σ2 + σ3)^2
    V1 = TBLA(C)
    U1 = V1·(σ1·σ2 + σ3)
    T1 = TBLB(U1)
    T2 = T1·α^85
    T3 = T2·α^85
    X1 = α^L1 = σ1 + T1 + K/T1
    X2 = α^L2 = σ1 + T2 + K/T2
    X3 = α^L3 = σ1 + T3 + K/T3
    L1 = LOGα(X1), L2 = LOGα(X2), L3 = LOGα(X3)
    E1 = (S2 + S1·(X2 + X3) + S0·X2·X3)/((X1 + X2)·(X1 + X3))
    E2 = (S0·X3 + S1 + E1·(X1 + X3))/(X2 + X3)
    E3 = S0 + E1 + E2

FINISHED ]
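The double-error value formulas are easy to sanity-check in software: construct syndromes from known error locations and values, then apply the formulas and confirm the values come back. The GF(256) arithmetic below assumes generator polynomial 0x11D, which may differ from the field used in the book's implementation.

```python
POLY = 0x11D  # assumed GF(256) generator polynomial

def gf_mul(a, b):
    # carry-less multiply with modular reduction
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= POLY
        b >>= 1
    return p

def gf_inv(a):
    # a^254 = 1/a in GF(256) (since a^255 = 1); square-and-multiply
    r, base, e = 1, a, 254
    while e:
        if e & 1:
            r = gf_mul(r, base)
        base = gf_mul(base, base)
        e >>= 1
    return r

def double_error_values(s0, s1, x1, x2):
    # E1 = (X2*S0 + S1)/(X1 + X2),  E2 = E1 + S0
    e1 = gf_mul(gf_mul(x2, s0) ^ s1, gf_inv(x1 ^ x2))
    return e1, e1 ^ s0
```

Substituting S0 = E1 + E2 and S1 = E1·X1 + E2·X2 into the formula makes the E2·X2 terms cancel (characteristic 2), leaving E1·(X1 + X2)/(X1 + X2) = E1, which is why the round trip must be exact.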

SOLVING THE THREE-ERROR LOCATOR POLYNOMIAL IN GF(2^8)
The three-error locator polynomial is

    x^3 + σ1·x^2 + σ2·x + σ3 = 0

First, substitute w = x + σ1 to obtain

    w^3 + ((σ1)^2 + σ2)·w + (σ1·σ2 + σ3) = 0

Second, apply the substitution

    w = t + ((σ1)^2 + σ2)/t

to obtain

    t^3 + (σ1·σ2 + σ3) + ((σ1)^2 + σ2)^3/t^3 = 0

and thus

    t^6 + (σ1·σ2 + σ3)·t^3 + ((σ1)^2 + σ2)^3 = 0

Third, substitute u = t^3 to obtain

    u^2 + (σ1·σ2 + σ3)·u + ((σ1)^2 + σ2)^3 = 0

Finally, substitute

    v = u/(σ1·σ2 + σ3)

to obtain

    v^2 + v + ((σ1)^2 + σ2)^3/(σ1·σ2 + σ3)^2 = 0

Now fetch a root V1 from the table developed for the two-error case:

    V1 = TBLA[ ((σ1)^2 + σ2)^3 / (σ1·σ2 + σ3)^2 ]

Next, apply the reverse substitution u = v·(σ1·σ2 + σ3) to obtain

    U1 = V1·(σ1·σ2 + σ3)

Apply the reverse substitution t = u^(1/3) to obtain

    T1 = (V1·(σ1·σ2 + σ3))^(1/3)

T1 may be fetched from a table of cube roots in GF(2^8):

    T1 = TBLB[ V1·(σ1·σ2 + σ3) ]

Each element in GF(2^8) which has a cube root has three cube roots; the other two may
be computed:

    T2 = T1·α^k
    T3 = T2·α^k

where k = (2^8 - 1)/3 = 85.

Now reverse the substitution w = t + ((σ1)^2 + σ2)/t to obtain

    W1 = T1 + ((σ1)^2 + σ2)/T1
    W2 = T2 + ((σ1)^2 + σ2)/T2
    W3 = T3 + ((σ1)^2 + σ2)/T3

And finally, apply the reverse substitution x = w + σ1 to obtain the roots of the
original three-error locator polynomial:

    X1 = α^L1 = T1 + ((σ1)^2 + σ2)/T1 + σ1
    X2 = α^L2 = T2 + ((σ1)^2 + σ2)/T2 + σ1
    X3 = α^L3 = T3 + ((σ1)^2 + σ2)/T3 + σ1

The error locations L1, L2, and L3 are the logs base α of X1, X2, and X3, respectively.
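The two lookup tables used above, TBLA for the quadratic y^2 + y + c = 0 and TBLB for cube roots, can be generated by brute force once field multiplication is available. A sketch, again assuming generator polynomial 0x11D (the book's field may differ):

```python
POLY = 0x11D  # assumed GF(256) generator polynomial

def gf_mul(a, b):
    # carry-less multiply with modular reduction
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= POLY
        b >>= 1
    return p

# TBLA: for input c, one root y of y^2 + y + c = 0
# (entry 0 => no root; the other root is y + 1)
TBLA = [0] * 256
for y in range(256):
    c = gf_mul(y, y) ^ y
    if TBLA[c] == 0:
        TBLA[c] = y

# TBLB: for input u, a cube root t with t^3 = u (entry 0 => no root)
TBLB = [0] * 256
for t in range(256):
    TBLB[gf_mul(gf_mul(t, t), t)] = t
```

Since the two roots of y^2 + y = c differ by 1, a regenerated TBLA can be checked by confirming that TBLA[y^2 + y] is y or y + 1 for every y; TBLB is checked by cubing its output.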


ANTILOG TABLE
(INPUT IS n, OUTPUT IS α^n)

[256-entry hex table omitted; the values are not reliably recoverable from the scan and should be regenerated by computer.]

LOG TABLE
(INPUT IS α^n, OUTPUT IS n)

[256-entry hex table omitted; regenerate by computer.]

QUADRATIC SOLUTION TABLE
FOR FINDING SOLUTIONS TO y^2 + y + c = 0
(INPUT IS c, OUTPUT IS Y1; Y1 = 0 => NO SOLUTION, ELSE Y2 = Y1 + α^0)

[256-entry hex table omitted; the values are not reliably recoverable from the scan and should be regenerated by computer.]

CUBE ROOT TABLE
(INPUT IS α^n, OUTPUT IS α^(n/3); EXCEPT FOR α^0, OUTPUT = 0 => NO ROOT)

[256-entry hex table omitted; the values are not reliably recoverable from the scan and should be regenerated by computer.]

AN ALTERNATIVE FINITE FIELD PROCESSOR DESIGN
The finite field processor shown below could be used instead of the one shown
earlier in this section. It uses subfield multiplication; see Section 2.7 for more information. The timing for finite field multiplication includes only one ROM delay. This path
for the other processor included two ROM delays and a binary adder delay. Inversion is
accomplished with a ROM table.

[Block diagram omitted; labeled element: GF(256) SUBFIELD MULTIPLIER USING 4 ROMS; SEE SECTION 2.7.]

The following pages show, for this alternative finite field processor:
- A ROM table for the four multipliers comprising the GF(256) subfield multiplier.
- A ROM table for accomplishing inversion.
- ROM tables for taking logarithms and antilogarithms.
- A ROM table for finding roots of the finite field equation y^2 + y + c = 0.
- A ROM table for finding cube roots.

SUBFIELD MULTIPLICATION TABLE
(INPUT IS TWO 4-BIT NIBBLES, OUTPUT IS ONE 4-BIT NIBBLE)

[16x16 nibble table omitted; the values are not reliably recoverable from the scan and should be regenerated by computer.]
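The 16x16 nibble table above can be regenerated in a few lines. The sketch below assumes GF(16) is built on x^4 + x + 1; this matches the legible rows of the scanned table (for example row 3 reads 0 3 6 5 C F A 9 ...), but the subfield polynomial actually used in Section 2.7 should be confirmed there.

```python
def gf16_mul(a, b, poly=0b10011):  # assumed subfield polynomial x^4 + x + 1
    # shift-and-add multiply of two 4-bit field elements
    p = 0
    for _ in range(4):
        if b & 1:
            p ^= a
        b >>= 1
        a <<= 1
        if a & 0x10:
            a ^= poly   # reduce modulo x^4 + x + 1
    return p

# row r, column c of the subfield multiplication table
TABLE = [[gf16_mul(r, c) for c in range(16)] for r in range(16)]
```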

INVERSE TABLE FOR ALTERNATIVE FINITE FIELD PROCESSOR
(INPUT IS α^n, OUTPUT IS 1/α^n)

[256-entry hex table omitted; the values are not reliably recoverable from the scan and should be regenerated by computer.]

ANTILOG TABLE FOR ALTERNATIVE FINITE FIELD PROCESSOR
(INPUT IS n, OUTPUT IS α^n)

[256-entry hex table omitted; the values are not reliably recoverable from the scan and should be regenerated by computer.]

LOG TABLE FOR ALTERNATIVE FINITE FIELD PROCESSOR
(INPUT IS α^n, OUTPUT IS n)

[256-entry hex table omitted; the values are not reliably recoverable from the scan and should be regenerated by computer.]

QUADRATIC SOLUTION TABLE FOR ALTERNATIVE FINITE FIELD PROCESSOR
FOR FINDING SOLUTIONS TO y^2 + y + c = 0
(INPUT IS c, OUTPUT IS Y1; Y1 = 0 => NO SOLUTION, ELSE Y2 = Y1 + α^0)

[256-entry hex table omitted; the values are not reliably recoverable from the scan and should be regenerated by computer.]

CUBE ROOT TABLE FOR ALTERNATIVE FINITE FIELD PROCESSOR
(INPUT IS α^n, OUTPUT IS α^(n/3); EXCEPT FOR α^0, OUTPUT = 0 => NO ROOT)

[256-entry hex table omitted; the values are not reliably recoverable from the scan and should be regenerated by computer.]

CHAPTER 6 - TESTING OF ERROR-CONTROL SYSTEMS
This chapter is concerned primarily with diagnostic capability for storage device
applications. However, the techniques described are adaptable to semiconductor memory,
communications, and other applications.
6.1 MICRODIAGNOSTICS
There are several approaches for implementing diagnostics for storage device error-correction circuits. Two approaches are discussed here. The first approach requires
the implementation of "read long" and "write long" commands in the controller.
The "read long" command is identical to the normal read command except that
check bytes are read as if they were data bytes. The "write long" command is identical
to the normal write command except that check bytes to be written are supplied, not
generated. They are supplied immediately behind the data bytes.
Use the "read long" command to read a known defect-free data record and its
check bytes. XOR into the record a simulated error condition. Write the modified data
record plus check bytes back to the storage device using the "write long" command. On
read back, using the normal read command, an ECC error should be detected and the
correction routines should generate the correct response for the error condition simulated. Repeat the test for several simulated error conditions, correctable and uncorrectable.
It is often desirable to reserve one or more diagnostic records for the testing of
error-correction functions. It is important for any diagnostic routines testing these
functions to first verify that the diagnostic record is error free.

In some cases, hardware computes syndromes but is not involved in the correction
algorithm. The correction algorithm is totally contained in software. In this case, it is
easy to get a breakdown between hardware and software failures by testing the software
first. Supply syndromes to the software, for which proper responses have been recorded.
Using the second diagnostic approach, the hardware is designed so that, under
diagnostic control, data records can be written with the check bytes forced to zero. A
data record is selected that would normally cause all check bytes to be zero. Simulated
error conditions are XOR'd into this record. The record is then written to the storage
device under diagnostic control and check bytes are forced to zero. On normal read back
of this record, an error should be detected and the proper responses generated.

- 364-

These techniques apply to error-control systems employing very complex codes as
well as those employing simple codes. They apply to the interleaved Reed-Solomon code
as well as the Fire code.
6.2 HOST SOFTWARE DIAGNOSTICS
Host testing of error-correction functions can be accomplished by implementing at
the host software level either of the diagnostic approaches discussed in Section 6.1.
If the controller corrects data before it is transferred to the host, the host diagnostic software must check that the simulated error condition is corrected in the test
record. The entire test record must be checked to verify that the error is corrected
and that correct data is not altered. Alternatively, the controller could have a diagnostic status or sense command that transfers error pattern(s) and displacement(s) to the
host for checking. However, this is not as protective as checking corrected data.
6.3 VERIFYING AN ECC IMPLEMENTATION
Error-correction implementations should be carefully verified to avoid incorrect
operation and the transfer of undetected erroneous data under subtle circumstances.
This verification should be performed at the host software level using host level diagnostic commands.

FORCING CORRECTABLE ERROR CONDITIONS
Use the "read long" command to read a known error-free data record and its
check bytes. XOR into this record a simulated error condition that is guaranteed to be
correctable. Write the data record plus check bytes back to the storage device using
the "write long" command.
Read back the record just written using the normal read command. Verify that
the controller corrected the simulated error condition. Repeat, using many random
guaranteed-correctable error conditions.
Some nonrandom error conditions should be forced as well. Select a set of error
conditions that is known to test all paths of the error-correction implementation.

- 365-

FORCING DETECTABLE ERROR CONDITIONS
Repeat the test defined under FORCING CORRECTABLE ERROR CONDITIONS,
except use simulated error conditions that exceed guaranteed correction capability but
not guaranteed detection capability. An uncorrectable error should be detected for each
simulated error condition.

FORCING ERRORS THAT EXCEED DETECTION CAPABILITY
Repeat the test defined under FORCING CORRECTABLE ERROR CONDITIONS,
except use simulated error conditions that far exceed both the guaranteed correction
and guaranteed detection capabilities. Count the number of correctable and uncorrectable errors reported by the error-correction implementation. The ratio of the correctable (miscorrected) count to the total count
should be approximately equal to the miscorrection probability of the code. Repeat for
error conditions known to have a higher miscorrection probability.
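The ratio test can be rehearsed with a toy code before it is run against real hardware. The sketch below assumes a shortened Hamming code with 12 positions and 4 check bits (an illustrative choice, not a code from this chapter); random triple-bit errors exceed its single-error correction capability, and the fraction that still looks correctable to the decoder estimates the miscorrection probability, on the order of n/2^r.

```python
# Monte Carlo estimate of miscorrection probability for a toy shortened
# Hamming code: 12 positions, 4 check bits, column p of H = binary value p.
import random

random.seed(1)
n = 12                            # positions in the shortened code
trials = 100_000
corr = uncorr = 0
for _ in range(trials):
    positions = random.sample(range(1, n + 1), 3)   # a 3-bit error pattern
    s = 0
    for p in positions:
        s ^= p                    # syndrome = XOR of the hit columns
    if s == 0:
        continue                  # error is undetected (syndrome zero)
    if s <= n:
        corr += 1                 # syndrome points at a position: miscorrection
    else:
        uncorr += 1               # syndrome outside the record: uncorrectable

pmc_est = corr / (corr + uncorr)
assert 0.5 < pmc_est < 0.95       # near n / 2**4 = 0.75 for this toy code
```

The same harness structure applies to a real implementation: replace the toy syndrome computation with the actual encode/decode path and compare the measured ratio with the code's predicted Pmc.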
6.4 ERROR LOGGING
For implementations where the data is actually corrected by the controller, it may
be desirable to include an error-logging capability within the controller. A minimum error-logging capability would count the errors recovered by reread and the errors recovered by error correction. Logging requires the controller to have a method of signaling
the host when the counters overflow and a command for offloading counts to the host.
A more sophisticated error log would also store information useful for:
- Reassigning areas of media for repeated errors.
- Retiring media when the number of reassignments exceeds a threshold.
- Isolation of devices writing marginal media. This may require that the physical address of the writing device be part of each record written.
- Hardware failure isolation.
It may be desirable to reserve space for error logging on each storage device.

- 366-

6.5 SELF-CHECKING

HARDWARE SELF-CHECKING
Hardware self-checking can limit the amount of undetected erroneous data transferred when error-correction circuits fail.
Self-checking should be added to the design if the probability of error-correction
circuit failure contributes significantly to the probability of transferring undetected
erroneous data. One self-checking method duplicates the error-correction circuits and,
on read, verifies that the error latches for both circuits agree. No circuits from the
two sets of error-correction hardware share the same IC package. This concept can be
extended by having separate sources and/or paths for clocks, power, and ground.
Another self-checking method is called parity predict. It is used for the
self-checking of shift registers that are part of an error-correction implementation. On
each clock, new parity for each shift register is predicted. The actual parity of each
shift register is continuously monitored and at each clock, is compared to the predicted
parity. If a difference is found, a hardware check flag is set.
The diagrams below define when parity is predicted to change for four shift-register configurations.

DIVIDE BY g(x). ODD NUMBER OF FEEDBACKS

[Shift-register diagram]

The parity of the shift register will flip each time the data bit is '1'.

DIVIDE BY g(x). EVEN NUMBER OF FEEDBACKS

[Shift-register diagram]
The parity of the shift register will flip if a '1' is shifted out of the shift register, or (exclusive) if the data bit is '1'.

- 367-

MULTIPLY BY x^m AND DIVIDE BY g(x). ODD # OF FEEDBACKS

[Shift-register diagram]
The parity of the shift register will flip if the data bit is '1'.

MULTIPLY BY x^m AND DIVIDE BY g(x). EVEN # OF FEEDBACKS

The parity of the shift register will flip if a '1' is shifted out of the shift register.
An m-bit shift register circuit using parity predict for self-checking is shown on
the following page. An odd number of feedbacks and premultiplication by x^m is assumed. It is also assumed that the feedbacks are disabled during write check-bit time
but not during read check-bit time. While writing data bits, reading data bits, and
reading check bits, parity of the shift register is predicted to change for each data bit
that is '1'. While writing check bits, parity is predicted to change for each '1' that is
shifted out of the shift register.
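The four parity-predict rules can be checked empirically with a small simulation. The register models below follow one common drawing of these circuits (the data bit entering the low-order stage for the divide-by-g(x) form, and XORed into the high-order feedback for the premultiplied form); these wiring conventions are assumptions for illustration and may differ in detail from the figures in this book.

```python
# Empirical check of the four parity-predict rules, using LFSR models.
# State bit r-1 is the stage a bit "shifts out" of.
import random

def parity(v):
    return bin(v).count("1") & 1

def step_divide(state, d, taps, r):
    out = (state >> (r - 1)) & 1               # bit shifted out
    state = ((state << 1) ^ d) & ((1 << r) - 1)  # data enters low-order stage
    if out:
        state ^= taps                          # feedback taps of g(x) below x^r
    return state, out

def step_premult(state, d, taps, r):
    out = (state >> (r - 1)) & 1
    fb = out ^ d                               # data enters via the feedback
    state = (state << 1) & ((1 << r) - 1)
    if fb:
        state ^= taps
    return state, out

random.seed(0)
r = 4
odd_taps = 0b1101                              # g = x^4+x^3+x^2+1: 3 feedbacks
even_taps = 0b0011                             # g = x^4+x+1: 2 feedbacks
for step, taps, rule in [
    (step_divide,  odd_taps,  lambda d, out: d),        # flip iff data bit is 1
    (step_divide,  even_taps, lambda d, out: d ^ out),  # flip iff out XOR data
    (step_premult, odd_taps,  lambda d, out: d),        # flip iff data bit is 1
    (step_premult, even_taps, lambda d, out: out),      # flip iff 1 shifted out
]:
    state = 0
    for _ in range(1000):
        d = random.getrandbits(1)
        new, out = step(state, d, taps, r)
        assert parity(new) ^ parity(state) == rule(d, out)
        state = new
```

A hardware parity-predict checker implements exactly the `rule` column: it predicts the next parity from the data bit (and, for even feedback counts, the bit shifted out) and raises a check flag when the monitored parity disagrees.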

[Figure: m-bit shift register with WRITE and DATA inputs, a parity tree, a J-K flip-flop holding predicted parity, and a MUX; the comparison output is the PARITY PREDICT ERROR flag]

- 368-

Another technique that aids the detection of error-correction hardware failures is
to design the circuits so that nonzero check bytes result when the data is all zeros.

SELF-CHECKING WITH MICROCODE AND/OR SOFTWARE
Periodic microcode and/or software checking is another approach that can be used
to limit the amount of undetected erroneous data transferred in case of an error-correction circuit failure. Diagnostic microcode or software could be run on a subsystem
power-up and during idle times. These routines would force ECC errors and check for
proper detection and correction. In some cases, this approach is the only form of
self-checking incorporated in an implementation, even though it is not as protective as
self-checking hardware. In other cases, this approach is used to supplement self-checking hardware.

- 369-

SUPPLEMENTARY PROBLEMS

1.

Write the syndrome equations for a three-error-correcting Reed-Solomon code.

2.

Write out the error-locator polynomial for errors at locations 0, 3, and 5 for a
Reed-Solomon code operating over GF(2^4) defined by x^4 + x + 1.

3.

Show a Chien search circuit to solve the error-locator polynomial from problem 2.

4.

Once error locations for a Reed-Solomon code are known, the syndrome equations
become a system of simultaneous linear equations with the error values as unknowns. The error-location vectors are coefficients of the unknown error values.
Solve this set of simultaneous linear equations for the two error case.

5.

Write out the encode polynomial for a two-error-correcting Reed-Solomon code
using GF(2^4) generated by x^4 + x + 1.

6.

Given a small field generated by the rule β^3 = β + 1 and a large field generated by
α^2 = α + β, develop the rule for accomplishing the square of any element in the
large field by performing computation in the small field.

7.

Show a complete decoder (on-the-fly, spaced data blocks) for a burst-length-2
correcting, shortened cyclic code, using the polynomial (x^4 + 1)·(x^4 + x + 1).
Record length is 20 bits, including check bits. Data and check bits are to be
buffered in a 20-bit FIFO (first in, first out) circuit.

8.

Find a polynomial for a code of length 7 that has single-, double-, and triple-bit
error detection.

9.

For detection of random bit errors on a 32-bit memory word, would it be better to
place parity on each byte or use a degree four error-detection polynomial across
the entire 32-bit word?

10.

A device using a 2048-bit record, including 16 check bits, has a random bit error
rate of 1E-4. The 16 check bits are defined by the polynomial below. Can the
device meet a 1E-15 specification for Pued (probability of undetected erroneous
data)?

x^16 + x^12 + x^5 + 1 = (x + 1)·(x^15 + x^14 + x^13 + x^12 + x^4 + x^3 + x^2 + x + 1)

11.

Compute the probability of three or more error bursts in a block of 256 bytes
when the raw burst error rate is 1E-7.

- 370-

12.

Compute the block error probability for a channel using a detection-only code
when the raw burst error rate is 1E-10.

13.

Design a circuit to solve the equation y^2 + y + C = 0 for y when C is given. The
field is generated by x^4 + x + 1.

14.

There is a Fire code in the industry defined by:

x^24 + x^17 + x^14 + x^10 + x^3 + 1

a) For a correction span of four, determine the detection span using the inequalities for a Fire code.
b) Determine the miscorrection probability for correction span four and record
length 259 bytes (data plus check bytes).
15.

For an error-detection code using the shift register below for encoding and decoding of 2048 byte records:

DATA -> [32-BIT SHIFT REGISTER]

a) Determine the misdetection probability for all possible error bursts.
b) Determine the misdetection probability for all possible double-bit errors.
16.

Which of the pairs of numbers below are relatively prime?
15, 45
9, 31
7, 11
14, 127

17.

Write the integer 18 as residues of moduli 5 and 7.

18.

Listed below are residues for several integers modulo 5 and 9. Compute the Ai
and mi of the Chinese Remainder Method. Then use the Chinese Remainder
Method to determine the integers.

a) a MOD 5 = 4, a MOD 9 = 6, a = ?
b) a MOD 5 = 3, a MOD 9 = 5, a = ?
c) a MOD 5 = 0, a MOD 9 = 4, a = ?

What is the total number of unique integers that can be represented by residues
modulo 5 and 9?

- 371 -

19. Define a fast division algorithm for dividing by 255 on an 8-bit processor that
does not have a divide instruction. The dividend must be less than 65,536.

20.

What is the total number of unique integers that can be represented by residues
modulo 6 and 8?

21.

Which of the finite field functions listed below are linear?

Square          Log            Antilog
Square Root     Cube           Cube Root
Eighth Root     Sixth Power    Inverse
                Modulo

22.

Determine the period of the following polynomials:
a) x^4 + x^3 + 1
b) x^12 + x + 1

23.

Compute the reciprocal polynomial of x^3 + x + 1.

24.

How many primitive polynomials are there of degree eight?

25.

Compute the residue of x^7 MOD x^3 + x + 1.

26.

For a small-systems magnetic disk, list several factors influencing data accuracy.

27.

Is it possible for a polynomial with an even number of terms to be primitive?

Theorem 1. Let β be a primitive element of GF(p^n) and let p(x) = x^2 - x + β be
irreducible over GF(p^n), where GF(p^2n) is an extension field of GF(p^n). If α is a
root of p(x), i.e., p(α) = 0, and p^n + 1 is a prime, then α is primitive in GF(p^2n).

Proof. If α is a root of p(x), its conjugate ᾱ is also a root of p(x), where:

ᾱ = α^(p^n)
It follows that:

α + ᾱ = 1,  α·ᾱ = β                                        (1)

and:

α^(p^n + 1) = β                                            (2)

Now (p^2n - 1) = (p^n + 1)·(p^n - 1) and p^n + 1 is a prime by hypothesis. Hence, any number r
such that r | (p^2n - 1) implies that r | (p^n - 1). Then, from (2):

α^((p^2n - 1)/r) = β^((p^n - 1)/r)                         (3)

Since β is primitive over GF(p^n), p^n - 1 is the least integer such that β^(p^n - 1) = 1. Hence:

β^((p^n - 1)/r) ≠ 1 unless r = 1.

- 378-

Thus, by (3), α^((p^2n - 1)/r) ≠ 1 unless r = 1. Therefore, the order of α is p^2n - 1 and α
is primitive in GF(p^2n).
Q.E.D.
The above theorem guarantees the root α to be primitive in the extension field
only when p^n + 1 is a prime. To show that the theorem is not generally true for p^n + 1
not a prime, consider the following counterexample: Let GF(11^2) be the extension field
of GF(11). It is verified readily that β = 1/2 ∈ GF(11) and is a primitive element in this
field. Also:

p(x) = x^2 - x + 1/2

is irreducible in GF(11). Suppose α is a solution to p(x) and α ∈ GF(11^2). Then
α^2 - α + 1/2 = 0. From this equation it is seen that α^4 = -1/4 ∈ GF(11). Since
α^4 ∈ GF(11), (α^4)^10 = 1. Thus, α^40 = 1 and α is not a primitive element in GF(11^2).

Definition 2. For α ∈ GF(p^2n) and a + αb ∈ GF(p^2n), where a, b ∈ GF(p^n), the norm
of a + αb is:

||a + αb|| = (a + αb)·(a + ᾱb)

Using the results of Theorem 1, Definition 1, and Definition 2, the following
theorem is demonstrated.

Theorem 2. Let β be a primitive element in GF(p^n) such that the quadratic polynomial x^2 + x + β is irreducible over GF(p^n). Suppose that p^n + 1 is prime. Next let α be
the root of this polynomial in the extension field GF(p^2n) = {a + αb | a, b ∈ GF(p^n)} of
GF(p^n). Suppose α^m = a + αb ∈ GF(p^2n). The following holds:

||α^m|| = a^2 + ab + b^2·β = β^(m mod (p^n - 1))

(The proof is given in Section V.)

- 379-

By Theorem 2 one can construct a log_β table of p^n - 1 elements by storing the
value m1 = m mod (p^n - 1), where:

1 ≤ m1 ≤ p^n - 1,

at location a^2 + ab + b^2·β such that α^m = a + αb. Then with a and b known, one can find m1
using the log_β table. A log_β table is given in Section VI for p^n - 1 = 15. Similarly, the
antilog_β table is constructed by storing the binary representation of a^2 + ab + b^2·β at location m1 such that α^m = a + αb and:

β^m1 = a^2 + ab + b^2·β                                    (4)

An antilog_β table is also given in Section VI for p^n - 1 = 15. Next, the construction of
tables of p^n + 1 elements is shown.

Theorem 3. Let τ = α^(p^n - 1) ∈ GF(p^2n), where α is primitive in GF(p^2n). Suppose
α^m = a + αb ∈ GF(p^2n) for some a, b ∈ GF(p^n). Then:

log_τ [ (a + ᾱb)/(a + αb) ] = m mod (p^n + 1)

(The proof is given in Section V.)
Using the results of Theorem 3, let:

f(a/b) = τ^m = (a + ᾱb)/(a + αb) = ((a/b) + ᾱ)/((a/b) + α)

- 380-

To construct the log_τ table, notice that when a = 0:

f(a/b) = ᾱ/α = τ

and m = 1. For m2 ≡ m mod (p^n + 1), one has m2 = 1 when a = 0. When b = 0:

f(a/b) = (a + 0)/(a + 0) = 1.

Thus, m = 0 and m2 = 0. The remaining part of the log_τ table can then be constructed by
storing the value m2 ≡ m mod (p^n + 1) at location a/b for α^m = a + αb, where 2 ≤ m2 ≤ p^n. A log_τ
table for p^n + 1 = 17 is given in Section VI. Also given there is an antilog_τ table for
p^n + 1 = 17. It is constructed by storing the binary representation of a/b ∈ {β^1, β^2, ..., β^15}
at the corresponding location i = m2 for 2 ≤ i ≤ 16. Thus:

antilog_τ(m2) = a/b = y                                    (5)

From (4) and (5) the following two simultaneous equations need to be solved for a and
b in order to reconstruct α^m = a + αb:

a^2 + ab + b^2·β = x
a/b = y                                                    (6)

- 381 -

Relations (6) yield the following solution:

b^2 = x/(y^2 + y + β)                                      (7)

a = b·y                                                    (8)

For b ∈ GF(p^n) it is verified readily that:

b = antilog_β [ log_β(z) / 2 ]                             (9)

where:

z = x/(y^2 + y + β)

Now, the logarithm of α^m = a + αb ∈ GF(p^2n), where a, b ∈ GF(p^n) and α ∈ GF(p^2n) is
primitive, can be found in terms of m1 and m2 by using the tables of p^n - 1 elements and
p^n + 1 elements, respectively. Then the Chinese Remainder theorem warrants that:

m = [m1·n1·(n1)^-1 + m2·n2·(n2)^-1] mod (p^2n - 1)         (10)

where:

n1 = p^n + 1,  n2 = p^n - 1

- 382-

and (n1)^-1 and (n2)^-1 are the smallest numbers such that:

n1·(n1)^-1 ≡ 1 mod (p^n - 1),  n2·(n2)^-1 ≡ 1 mod (p^n + 1)

To recapitulate, the following algorithms for the log and antilog are given:

(a) THE LOG ALGORITHM

Given α^m = a + αb, find m as follows:

1. Compute:

   x = a^2 + ab + b^2·β and y = a/b

2. Use the log_β table to find m1 = log_β(x) and the log_τ table to find
   m2 = log_τ(y), for a ≠ 0, b ≠ 0.

- 383 -

3. By equation (10):

   m = [m1·n1·(n1)^-1 + m2·n2·(n2)^-1] mod (p^2n - 1)

(b) THE ANTILOG ALGORITHM

Given m, recover α^m = a + αb as follows:

1. Compute:

   m1 = m mod (p^n - 1) and m2 = m mod (p^n + 1).

2. Use the antilog tables to find:

   antilog_β(m1) = x,  antilog_τ(m2) = y = a/b, for m2 ≠ 0, 1.

3. Use equation (9):

   b = antilog_β [ log_β(z) / 2 ]

   where:

   z = x/(y^2 + y + β)

   Then:

   a = b·y
To illustrate the above procedures, the following examples are given over GF(2^8).

- 384-

Example 1:

Given α^127 = (0,1,1,0) + α(1,1,1,0) ∈ GF(2^8). Then, a = (0,1,1,0) and b = (1,1,1,0).
By the LOG algorithm:

x = (1,1,1,0),  y = a/b = (1,1,1,1)

Now use Tables VI.1 and VI.3 to find m1 and m2, respectively. The results
are m1 = 7 and m2 = 8. For this example, n1 = 17, n2 = 15, (n1)^-1 = 8, (n2)^-1 = 8,
n1·(n1)^-1 = 136 and n2·(n2)^-1 = 120. By equation (10):

m = (7·136 + 8·120) mod 255 = 127

Example 2.

Given m = 127, find α^127 = a + αb ∈ GF(2^8). Using the ANTILOG algorithm:

m1 = m mod (p^n - 1) = 7

m2 = m mod (p^n + 1) = 8

Then use Tables VI.2 and VI.4 to find x and y, respectively. The results are:

x = (1,1,1,0) and y = (1,1,1,1).

Thus:

z = x/(y^2 + y + β) = (0,0,1,1)

- 385-

By equation (9):

b = antilog_β [ log_β(z) / 2 ]

and:

b = (1,1,1,0)

Thus:

a = b·y = (0,1,1,0)

Therefore:

α^127 = (0,1,1,0) + α(1,1,1,0).

- 386-

III. A GENERAL ALGORITHM FOR FINDING LOG AND ANTILOG OVER GF(q)

Consider a Galois field GF(q) and suppose that:

q - 1 = n1·n2· ... ·nk

where pi is prime and ni = (pi)^ci for 1 ≤ i ≤ k. Let α ∈ GF(q) be primitive. Then any field
element of GF(q) can be represented by α^i for some i, where 1 ≤ i ≤ q - 1. By the Chinese
Remainder theorem an exponent i is mapped onto (i mod n1, i mod n2, ..., i mod nk).
Then a primitive element α is expressed in the notation of the Chinese Remainder
theorem as follows:

α^1 = α^(1 mod n1, 1 mod n2, ..., 1 mod nk)
    = α^(1,0,0,...,0)·α^(0,1,0,...,0)· ... ·α^(0,0,...,0,1)        (11)

Here:

τj = α^(0,0,...,0,1,0,...,0)                                       (12)

where the integer 1 in the exponent is in location j. Element τj is an nj-th root
of unity. It follows from (11) that:

α^m = (τ1)^(m mod n1)·(τ2)^(m mod n2)· ... ·(τk)^(m mod nk)        (13)

for all integers m, where τj is a primitive nj-th root of unity.

By (12) and the reconstruction of the Chinese Remainder theorem:

τj = α^(mj·(mj)^-1)

where:

mj·(mj)^-1 ≡ 1 mod nj                                              (14)

- 387 -

and:

nj·mj = q - 1                                                      (15)

Now, suppose one computes (α^m)^(mj·(mj)^-1) for any j such that 1 ≤ j ≤ k,
where cj ≡ m mod nj for 1 ≤ j ≤ k and m = cj + a·nj for some integer a. Observe by (15)
that:

(α^m)^(mj·(mj)^-1) = (α^(cj + a·nj))^(mj·(mj)^-1) = α^(cj·mj·(mj)^-1)     (16)

Then, by (14) it follows that:

(α^m)^(mj·(mj)^-1) = [α^(mj·(mj)^-1)]^(cj mod q-1) = (τj)^(cj)            (17)

Therefore, by the use of equation (17) one can compute (τj)^(cj) from α^m for 1 ≤ j ≤ k.
Note that k small tables, each containing the value cj, for 1 ≤ cj ≤ nj, at location
(τj)^(cj), can be used to find the k exponents c1, c2, ..., ck, respectively. Once
the cj are found, the Chinese Remainder theorem is used to compute the logarithm of
α^m as follows:

m = [Σ(i=1..k) ci·mi·(mi)^-1] mod (q - 1)                          (18)

- 388 -

By (13), (14), (16), and (17), the antilog of m is computed as:

α^m = (τ1)^(c1)·(τ2)^(c2)· ... ·(τk)^(ck)                          (19)

where ci = m mod ni for 1 ≤ i ≤ k. Tables of (τj)^(cj) for 1 ≤ j ≤ 3 for q - 1 = 255 =
3x5x17 = n1·n2·n3 are given in Section VII.

Example 3:

To demonstrate the general algorithm above, the logarithm of α^20 is computed for α, α^20 ∈ GF(2^8), where α satisfies x^8 + x^4 + x^3 + x^2 + 1. With the exponent of 20 given, the
antilog α^20 is recovered. In this case, n1 = 3, n2 = 5,
n3 = 17, m1·(m1)^-1 = 85, m2·(m2)^-1 = 51, and m3·(m3)^-1 = 120.

LOGARITHM

Using the tables in Section VII, one finds c1, c2, and c3 from the following
computations:

(α^20)^85 = (τ1)^(c1),  (α^20)^51 = (τ2)^(c2),  (α^20)^120 = (τ3)^(c3)

Thus, c1 = 2, c2 = 0, and c3 = 3. The logarithm m is obtained by:

m = (2·85 + 0·51 + 3·120) mod 255 = 20.

- 389-

ANTILOG

From m = 20, one computes:

c1 = 20 mod 3 = 2
c2 = 20 mod 5 = 0
c3 = 20 mod 17 = 3

Using the tables in Section VII gives:

(τ1)^2 = α^170,  (τ2)^0 = 1,  (τ3)^3 = α^105

Then the antilog of m = 20 is recovered as:

α^20 = α^((170 + 0 + 105) mod 255)
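The Chinese Remainder reconstruction used in Example 3 is easy to check mechanically. The sketch below uses the weights 85, 51, and 120 from the example (each one ≡ 1 modulo its own factor of 255 and ≡ 0 modulo the other two).

```python
# Check of the Chinese Remainder reconstruction for q - 1 = 255 = 3·5·17.
weights = {3: 85, 5: 51, 17: 120}   # the constants m_i·(m_i)^-1 of the text

def crt_reconstruct(c1, c2, c3):
    return (c1 * weights[3] + c2 * weights[5] + c3 * weights[17]) % 255

# Example 3: the residues of 20 are (2, 0, 3), and 20 is recovered.
assert crt_reconstruct(20 % 3, 20 % 5, 20 % 17) == 20

# The mapping is one-to-one over the whole exponent range 0..254.
for m in range(255):
    assert crt_reconstruct(m % 3, m % 5, m % 17) == m
```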

- 390 -

IV. CONCLUSION

To find the log and antilog of an element in a finite field GF(q), if q = p^2n for
some prime p, the technique shown in Section II can be used to reduce the table memory requirement from 2(p^2n - 1) elements to 4p^n elements. A further memory reduction
can be achieved, i.e. from (q - 1) elements to:

n1 + n2 + ... + nk

elements, by using the general method shown in Section III, however, at the expense of
a greater number of operations. A comparison of the number of operations needed in
these methods is given in Table IV.1. It is evident from Table IV.1 that the number of
multiplications required in the general case can be prohibitively large in some situations.
Thus, the technique shown in Section II has better potential than the general method
for many practical applications.

- 391 -

Table IV.1
A Complexity Comparison of the Alternative Approaches
for Computing Logs and Antilogs over GF(q).

                        When q-1 = p^2n - 1        General Method for
                        and p^n + 1 is prime       q-1 = n1·n2· ... ·nk
                        LOG        ANTILOG         LOG                      ANTILOG
No. of
Multiplications         7          5               k + Σj mj·(mj)^-1 *      k-1
Additions               4          2               k-1                      0
Table Look-Ups          2          4               k                        k
Modulus operations      0          2               1                        k

* mj·(mj)^-1 ≡ 1 mod nj

- 392-

V. PROOF OF THEOREMS

Proof of Theorem 2. Since x^2 + x + β is irreducible over GF(p^n), it has roots α and
ᾱ = α^(p^n) in the extension field GF(p^2n). By Theorem 1, α is primitive in GF(p^2n). By
Definition 2 and relations (1) and (2), one has the following:

||a + αb|| = (a + αb)(a + ᾱb) = (a + αb)(a + αb)^(p^n)              (V.1)

If c + αd is any other element in GF(p^2n) and c, d ∈ GF(p^n), then:

[(a + αb)(c + αd)]^(p^n) = (a + αb)^(p^n)·(c + αd)^(p^n)            (V.2)

Thus, by (V.2) and the definition of the norm, one has:

||(a + αb)(c + αd)|| = (a + αb)(c + αd)·[(a + αb)(c + αd)]^(p^n)
                     = ||a + αb||·||c + αd||                        (V.3)

Observe next by (2) that:

||α|| = α·ᾱ = β

so that the theorem is true for m = 1. For purposes of induction, assume that:

||α^k|| = β^k                                                       (V.4)

- 393-

for all k such that 1 ≤ k ≤ m. Then by (V.3), for k = m + 1:

||α^(m+1)|| = ||α^m·α|| = ||α^m||·||α|| = β^m·β = β^(m+1)

Hence, the induction is complete and (V.4) is true for all k.
Represent α^m by a + αb for some a, b ∈ GF(p^n). Then, by (V.1) and (V.4):

β^m = ||α^m|| = ||a + αb|| = a^2 + ab + b^2·β                       (V.5)

The theorem follows by the definition of the logarithm and the fact that β has order
p^n - 1.

Q.E.D.
Proof of Theorem 3. Since α is primitive in GF(p^2n) and τ = α^(p^n - 1), the order of τ
is p^n + 1. By the definition of the norm, one has:

τ = α^(p^n)/α = ||α||/α^2                                           (V.6)

For purposes of induction, assume that:

τ^k = ||α^k||/(α^k)^2                                               (V.7)

for 1 ≤ k ≤ m. Then, by (V.3) for k = m + 1:

τ^(m+1) = τ^m·τ = [||α^m||·||α||] / [(α^m)^2·α^2] = ||α^(m+1)||/(α^(m+1))^2

Hence, the induction is complete and (V.7) is true for all k.

- 394-

Representing α^m by a + αb for some a, b ∈ GF(p^n), it follows from (V.7) that:

τ^m = ||a + αb||/(a + αb)^2                                         (V.8)

Multiplying both sides of (V.8) by (a + αb)^2 yields:

τ^m·(a + αb)^2 = ||a + αb|| = (a + αb)(a + ᾱb)

Therefore, from the definition of the norm:

τ^m = (a + ᾱb)/(a + αb)                                             (V.9)

The theorem follows by the definition of the logarithm and the fact that the order of
τ is p^n + 1.

Q.E.D.

- 395-

VI. Let p(x) = x^4 + x^3 + 1 be irreducible over GF(2) and let β ∈ GF(2^4) be a solution of p(x).
Then:

β^4  = β^3 + 1
β^5  = β^3 + β + 1
β^6  = β^3 + β^2 + β + 1
β^7  = β^2 + β + 1
β^8  = β^3 + β^2 + β
β^9  = β^2 + 1
β^10 = β^3 + β
β^11 = β^3 + β^2 + 1
β^12 = β + 1
β^13 = β^2 + β
β^14 = β^3 + β^2
β^15 = 1

- 396 -
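The list of powers of β above can be regenerated mechanically. The sketch below represents elements of GF(2^4) as 4-bit integers, with bit i holding the coefficient of β^i (an assumed convention), and repeatedly multiplies by β, reducing with x^4 = x^3 + 1.

```python
# Generate the powers of β in GF(2^4) for p(x) = x^4 + x^3 + 1.
def beta_powers():
    p_taps = 0b1001              # x^4 ≡ x^3 + 1 (terms of p(x) below x^4)
    v, out = 1, []
    for _ in range(15):
        v <<= 1                  # multiply by β
        if v & 0x10:             # degree-4 overflow: reduce with x^4 = x^3 + 1
            v = (v & 0xF) ^ p_taps
        out.append(v)
    return out                   # [β^1, β^2, ..., β^15]

powers = beta_powers()
assert powers[3] == 0b1001       # β^4 = β^3 + 1, as listed above
assert powers[14] == 1           # β^15 = 1, so β is primitive
```

All 15 nonzero elements appear exactly once, which is what makes the log/antilog tables of Section VI well defined.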

Table VI.1
Log_β

Location    Content
0 0 0 1     3
0 0 1 0     2
0 0 1 1     14
0 1 0 0     1
0 1 0 1     10
0 1 1 0     13
0 1 1 1     8
1 0 0 0     15
1 0 0 1     4
1 0 1 0     9
1 0 1 1     11
1 1 0 0     12
1 1 0 1     5
1 1 1 0     7
1 1 1 1     6

- 397 -

Table VI.2
Antilog_β

Location    Content
0 0 0 0     0 0 0 0
0 0 0 1     0 1 0 0
0 0 1 0     0 0 1 0
0 0 1 1     0 0 0 1
0 1 0 0     1 0 0 1
0 1 0 1     1 1 0 1
0 1 1 0     1 1 1 1
0 1 1 1     1 1 1 0
1 0 0 0     0 1 1 1
1 0 0 1     1 0 1 0
1 0 1 0     0 1 0 1
1 0 1 1     1 0 1 1
1 1 0 0     1 1 0 0
1 1 0 1     0 1 1 0
1 1 1 0     0 0 1 1
1 1 1 1     1 0 0 0

- 398 -

For the τ table, note the following:

i) If a = 0, then f(a/b) = ᾱ/α = τ. Thus, m = 1 and m2 = 1.

ii) If b = 0, then f(a/b) = 1. Thus, m = 0 and m2 = 0.

- 399-

Table VI.3
Log_τ

Location    Content
0 0 0 1     14
0 0 1 0     12
0 0 1 1     6
0 1 0 0     2
0 1 0 1     10
0 1 1 0     4
0 1 1 1     9
1 0 0 0     16
1 0 0 1     3
1 0 1 0     5
1 0 1 1     11
1 1 0 0     15
1 1 0 1     7
1 1 1 0     13
1 1 1 1     8

- 400 -

Table VI.4
Antilog_τ

Location      Content
0 0 0 1 0     0 1 0 0
0 0 0 1 1     1 0 0 1
0 0 1 0 0     0 1 1 0
0 0 1 0 1     1 0 1 0
0 0 1 1 0     0 0 1 1
0 0 1 1 1     1 1 0 1
0 1 0 0 0     1 1 1 1
0 1 0 0 1     0 1 1 1
0 1 0 1 0     0 1 0 1
0 1 0 1 1     1 0 1 1
0 1 1 0 0     0 0 1 0
0 1 1 0 1     1 1 1 0
0 1 1 1 0     0 0 0 1
0 1 1 1 1     1 1 0 0
1 0 0 0 0     1 0 0 0

- 401 -

VII. Tables of (τj)^(cj) for n1 = 3, n2 = 5, and n3 = 17, where q - 1 = 255 = n1·n2·n3,
when α^8 + α^4 + α^3 + α^2 + 1 = 0.

c1    (τ1)^(c1)
0     1       (0 0 0 0 0 0 0 1)
1     α^85    (0 1 1 0 1 0 1 1)
2     α^170   (0 1 1 0 1 0 1 0)

c2    (τ2)^(c2)
0     1       (0 0 0 0 0 0 0 1)
1     α^51    (0 0 0 0 0 1 0 1)
2     α^102   (0 0 1 0 0 1 0 1)
3     α^153   (0 1 0 0 1 0 0 1)
4     α^204   (0 1 1 0 1 0 0 0)

c3    (τ3)^(c3)
0     1       (0 0 0 0 0 0 0 1)
1     α^120   (1 0 0 1 0 0 1 1)
2     α^240   (0 0 0 1 0 1 1 0)
3     α^105   (0 0 0 0 1 1 0 1)
4     α^225   (0 1 1 1 0 1 1 0)
5     α^90    (1 1 1 0 0 0 0 1)
6     α^210   (1 0 1 0 0 0 1 0)
7     α^75    (0 0 0 0 1 0 0 1)
8     α^195   (0 0 1 1 0 0 1 0)
9     α^60    (1 0 0 0 1 1 0 0)
10    α^180   (0 1 0 0 1 0 1 1)
11    α^45    (1 1 1 0 1 1 1 0)
12    α^165   (1 1 0 0 0 1 1 0)
13    α^30    (0 0 1 1 0 0 0 0)
14    α^150   (1 0 1 0 0 1 0 0)
15    α^15    (0 0 1 0 0 1 1 0)
16    α^135   (1 1 0 1 1 0 1 0)

- 402 -

ABBREVIATIONS
BCH     Bose-Chaudhuri-Hocquenghem (code)
BER     Bit-Error Rate
CLK     Clock
CNT     Count
CRC     Cyclic Redundancy Check (code)
DBER    Decoded Bit-Error Rate
DEC     Double-Error Correction
DED     Double-Error Detection
ECC     Error Correcting Code
EDAC    Error Detection And Correction
FBSR    Feedback Shift Register
FEC     Forward Error Correction
FWD     Forward
GF      Galois Field
LFSR    Linear Feedback Shift Register
LSC     Linear Sequential Circuit
LSR     Linear Shift Register
LRC     Longitudinal Redundancy Check
RS      Reed-Solomon (code)
REV     Reverse
SR      Shift Register
SEC     Single-Error Correction
TED     Triple-Error Detection
VRC     Vertical Redundancy Check

- 403-

GLOSSARY

ALARM
Any condition detected by a correction algorithm that prevents correction, such as
error-correction capability exceeded. In some cases, alarms will cause the error-control
system to try another approach, for example using a different set of pointers.
BINARY SYMMETRIC CHANNEL
A channel in which an error is equally probable whether the transmitted bit is a 1 or a 0.
BLOCK CODE
A block code is a code in which the check bits cover only the immediately preceding
block of information bits.
BURST ERROR RATE
The number of burst-error occurrences divided by total bits transferred.
BURST LENGTH
The number of bits between and including the first and last bits in error; not all of the
bits in between are necessarily in error.
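A minimal sketch of this definition in code:

```python
# Burst length: the span from the first to the last bit in error, inclusive;
# bits in between need not themselves be in error.
def burst_length(error_bits):
    positions = [i for i, b in enumerate(error_bits) if b]
    if not positions:
        return 0
    return positions[-1] - positions[0] + 1

assert burst_length([0, 1, 0, 0, 1, 0]) == 4   # errors at bits 1 and 4
assert burst_length([0, 0, 0, 0, 0, 0]) == 0
```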
CATASTROPHIC ERROR PROBABILITY (Pc)
The probability that a given defect event causes an error burst which exceeds the
correction capability of a code.
CHARACTERISTIC
See Ground Field.

-404-

CODE POLYNOMIAL
See Codeword.
CODE RATE
See Rate.
CODE VECTOR
See Codeword.
CODEWORD
A set of data symbols (i.e. information symbols or message symbols) together with its
associated redundancy symbols; also called a code vector or a code polynomial.
CONCATENATION
A method of combining an inner code and an outer code, to form a larger code. The
inner code is decoded first. An example would be a convolutional inner code and a
Reed-Solomon outer code.
CONVOLUTIONAL CODE
A code in which the check bits check information bits of prior blocks as well as the
immediately preceding block.
CORRECTABLE ERROR
One that can be corrected without rereading.
CORRECTED ERROR RATE
Error rate after correction.

- 405 -

CORRECTION SPAN
The maximum length of an error burst which is guaranteed to be corrected by a burstcorrecting code.
CYCLIC CODE
A linear code with the property that each cyclic (end-around) shift of each codeword is
also a codeword.
CYCLIC REDUNDANCY CHECK (CRC)
An error-detection method in which check bits are generated by taking the remainder
after dividing the data bits by a cyclic code polynomial.
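As a concrete illustration, the check-bit generation can be sketched as a bitwise polynomial division. The routine below uses the 16-bit polynomial x^16 + x^12 + x^5 + 1 (the CRC-CCITT polynomial that appears in the supplementary problems); it is a minimal model with zero initial value and no bit reflection, not a production CRC implementation.

```python
# CRC check bits as the remainder of dividing data·x^16 by g(x) over GF(2).
def crc_remainder(data: bytes, poly=0x1021, width=16):
    reg = 0
    for byte in data:
        for i in range(7, -1, -1):           # feed bits MSB first
            bit = (byte >> i) & 1
            fb = ((reg >> (width - 1)) & 1) ^ bit
            reg = (reg << 1) & ((1 << width) - 1)
            if fb:
                reg ^= poly                  # poly holds the terms below x^16
    return reg

msg = b"123456789"
check = crc_remainder(msg)
# Appending the remainder makes the whole codeword divide evenly by g(x):
assert crc_remainder(msg + check.to_bytes(2, "big")) == 0
```

The final assertion is the defining property of a cyclic redundancy check: message plus check bits together form a multiple of the code polynomial, so recomputing the division over the received codeword yields zero when no error occurred.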
DEFECT
A permanent fault on the media which causes an error burst.
DEFECT EVENT
A single occurrence of a defect regardless of the number of bits in error caused by the
defect.
DEFECT EVENT RATE (Pe)
The ratio of total defect events to total bits, having the units of defect events per bit.
DETECTION SPAN
For a single-burst detection code, the single-burst detection span is the maximum length
of an error burst which is guaranteed to be detected.
For a single-burst correction code, the single-burst detection span is the maximum
length of an error burst which is guaranteed to be detected without possibility of
miscorrection.

-406 -

If a correction code has a double-burst detection span, then each of two bursts is
guaranteed to be detected without possibility of miscorrection, provided neither burst
exceeds the double-burst detection span.
DISCRETE MEMORYLESS CHANNEL
A channel for which noise affects each transmitted symbol independently, for example,
the binary symmetric channel (BSC).
DISTANCE
See Hamming Distance.
ELEMENTARY SYMMETRIC FUNCTIONS
Elementary symmetric functions are the coefficients of the error locator polynomial.
ERASURE
An erratum for which location information is known. An erasure has a known location
but an unknown value.

ERASURE CORRECTION
The process of correcting errata when erasure pointers are available. A Reed-Solomon
code can correct more errata when erasure pointers are available. It is not necessary
for erasure pointers to be available for all errata when erasure correction is employed.
ERASURE LOCATOR POLYNOMIAL
A polynomial whose roots provide erasure-location information.
ERASURE POINTER
Information giving the location of an erasure. Internal erasure pointers might be
derived from adjacent interleave error locations. External erasure pointers might be
derived from run-length violations, amplitude sensing, timing sensing, etc.
-407 -

ERRATA LOCATOR POLYNOMIAL
A polynomial whose roots provide errata-location information.
ERRATUM
Either an error or an erasure.
ERROR
An erratum for which location information is not known. In general, an error represents
two unknowns, error location and value. In the binary case, the only unknown is the
location.
ERROR BURST
A clustered group of bits in error.
ERROR LOCATION OR DISPLACEMENT

The distance by some measure (e.g., bits or bytes) from a reference point (e.g., beginning or end of sector or interleave) to the burst.

For Reed-Solomon codes, the error

location is the log of the error-location vector and is the symbol displacement of the
error from the end of the codeword.
ERROR LOCATION VECTOR
Vector form of error location (antilog of error location).
ERROR LOCATOR POLYNOMIAL
A polynomial whose roots provide error-location information.
ERROR VALUE
The error value is the bit pattern which must be exclusive-or-ed (XOR-ed) against the
data at the burst location in order to correct the error.

-408 -

EXPONENT
See Period.
EXTENSION FIELD
See Ground Field.
FIELD
Refer to Section 2.8 for the definition of a field.
FINITE FIELD
A field with a finite number of elements; also called a Galois field and denoted as GF(n)
where n is the number of elements in the field.
FORWARD-ACTING CODE
An error-control code that contains sufficient redundancy for correcting one or more
symbol errors at the receiver.
FORWARD POLYNOMIAL
A polynomial is called the forward polynomial when it is necessary to distinguish it
from its reciprocal polynomial.
GROUND FIELD
A finite field with q elements, GF(q), exists if, and only if, q is a power of a prime.
Let q = p^n where p is a prime and n is an integer; then GF(p) is referred to as the
ground field and GF(p^n) as the extension field of GF(p).
The prime p is called the characteristic of the field.

-409-

GROUP CODE
See Linear Code.
HAMMING DISTANCE
The Hamming distance between two vectors is the number of corresponding symbol
positions in which the two vectors differ.
HAMMING WEIGHT
The Hamming weight of a vector is the number of nonzero symbols in the vector.
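Both definitions translate directly into code; a minimal sketch:

```python
# Hamming weight and Hamming distance, for vectors of symbols.
def hamming_weight(v):
    return sum(1 for s in v if s != 0)

def hamming_distance(u, v):
    return sum(1 for a, b in zip(u, v) if a != b)

assert hamming_weight([0, 1, 0, 2, 3]) == 3
assert hamming_distance([1, 0, 1, 1], [1, 1, 0, 1]) == 2
```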
HARD ERROR
An error condition that persists on re-read; a hard error is assumed to be caused by a
defect on the media.
IRREDUCIBLE
A polynomial of degree n is said to be irreducible if it is not divisible by any polynomial of degree greater than zero but less than n.
ISOMORPHIC
If two fields are isomorphic they have the same structure. That is, one can be obtained
from the other by some appropriate one-to-one mapping of elements and operations.
LINEAR (GROUP) CODE
A code wherein the EXCLUSIVE-OR sum of every pair of codewords is also a codeword.
LINEAR FUNCTION
A function is said to be linear if the properties below hold:
a. Linearity: f(a·x) = a·f(x)
b. Superposition: f(x + y) = f(x) + f(y)

LINEARLY DEPENDENT
A set of n vectors vi is linearly dependent if, and only if, there exists a set of n
scalars ci, not all zero, such that:

c1·v1 + c2·v2 + ... + cn·vn = 0
LINEARLY INDEPENDENT
A set of vectors is linearly independent if they are not linearly dependent. See
Linearly Dependent.
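Over GF(2) the scalars are 0 and 1 and vector addition is XOR, so dependence can be
tested exhaustively for small sets; an illustrative sketch (not from the original text):

```python
from itertools import product

def linearly_dependent(vectors):
    """True if some nonzero 0/1 combination of the bit-vectors XORs to zero."""
    n = len(vectors)
    for coeffs in product((0, 1), repeat=n):
        if any(coeffs):
            total = 0
            for c, v in zip(coeffs, vectors):
                if c:
                    total ^= v
            if total == 0:
                return True
    return False

print(linearly_dependent([0b110, 0b011, 0b101]))  # v1 ^ v2 ^ v3 = 0 -> True
print(linearly_dependent([0b100, 0b010, 0b001]))  # basis vectors    -> False
```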
LONGITUDINAL REDUNDANCY CHECK (LRC)
A check byte or check word at the end of a block of data bytes or words, selected to
make the parity of each column of bits odd or even.
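For even column parity the LRC byte is simply the XOR of all data bytes; a minimal
sketch (not from the original text):

```python
def lrc(block: bytes) -> int:
    """Check byte that makes every bit column of the block have even parity."""
    check = 0
    for byte in block:
        check ^= byte
    return check

data = bytes([0b10110000, 0b01110001, 0b00000011])
check = lrc(data)
# Appending the check byte drives every column's parity (the XOR) to zero:
print(lrc(data + bytes([check])))  # 0
```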
MAJORITY LOGIC
A majority logic gate has an output of one if, and only if, more than half its inputs are
ones.
MAJORITY LOGIC DECODABLE CODE
A code that can be decoded with majority logic gates. See Majority Logic.
MINIMUM DISTANCE OF A CODE
The minimum Hamming distance between all possible pairs of codewords. The minimum
distance of a linear code is equal to its minimum weight.
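This equality can be verified exhaustively on a toy example, here the (3,1) binary
repetition code (an illustrative sketch, not from the original text):

```python
from itertools import product

# The (3,1) repetition code has codewords {000, 111}.
codewords = [0b000, 0b111]

def weight(v: int) -> int:
    return bin(v).count("1")

min_weight = min(weight(c) for c in codewords if c != 0)
min_distance = min(weight(a ^ b)
                   for a, b in product(codewords, repeat=2) if a != b)
print(min_weight, min_distance)  # 3 3
```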
MINIMUM FUNCTION
See Minimum Polynomial.

MINIMUM POLYNOMIAL OF β
The monic polynomial m(x) of smallest degree with coefficients in a ground field such
that m(β) = 0, where β is any element of an extension field. The minimum polynomial of
β is also called the minimum function of β.
MINIMUM WEIGHT OF A CODE
The minimum weight of a linear (group) code's non-zero codewords.
MISCORRECTION PROBABILITY (pmc)
The probability that an error burst which exceeds the guaranteed capabilities of a code
will appear correctable to a decoder. In this case, the decoder actually increases the
number of errors by changing correct data. Miscorrection probability is determined by
record length, total redundancy, and correction capability of the code.
Pmc usually represents the miscorrection probability for all possible error bursts, assuming all errors are possible and equally probable. Some codes, such as the Fire code,
have a higher miscorrection probability for particular error bursts than for all possible
error bursts.
MISDETECTION PROBABILITY (pmd)
The probability that an error burst which exceeds the correction and detection capabilities of a code will cause all syndromes to be zero and thereby go undetected. Misdetection probability is determined by the total number of redundancy bits, assuming
that all errors are possible and equally probable.
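Under the stated all-errors-equally-probable assumption, both probabilities can be
roughly estimated from the redundancy alone. A sketch for a binary (n,k) code
correcting up to t bit errors; the code parameters used below are hypothetical, chosen
only for illustration:

```python
from math import comb

def pmc_estimate(n: int, k: int, t: int) -> float:
    """Fraction of the 2^(n-k) syndrome space claimed by correctable patterns."""
    correctable = sum(comb(n, i) for i in range(1, t + 1))
    return correctable / 2 ** (n - k)

def pmd_estimate(n: int, k: int) -> float:
    """Probability that a random error pattern yields the all-zero syndrome."""
    return 2.0 ** -(n - k)

# Hypothetical double-error-correcting code with 32 redundancy bits:
print(pmc_estimate(1040, 1008, 2))
print(pmd_estimate(1040, 1008))
```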
MONIC POLYNOMIAL
A polynomial is said to be monic if the coefficient of the highest degree term is one.
(n,k) CODE

A block code with k information symbols, n-k check symbols, and n total symbols
(information plus check symbols).
A convolutional code with constant length n, code rate R (efficiency), and information
symbols k=Rn.

Number of combinations of n objects taken r at a time, without regard to order:

C(n,r) = n! / (r!(n-r)!)
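The formula can be checked directly (an illustrative sketch, not from the original
text):

```python
from math import comb, factorial

# C(10,3) = 10! / (3! * 7!) = 120
n, r = 10, 3
print(comb(n, r))                                      # 120
print(factorial(n) // (factorial(r) * factorial(n - r)))  # 120
```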
n-TUPLE
An ordered set of n field elements ai, denoted by (a1, a2, ..., an).
ORDER OF A FIELD
The order of a field is the number of elements in the field. The number of elements
may be infinite (infinite field) or finite (finite field).

ORDER OF A FIELD ELEMENT
The order e of a field element β is the least positive integer for which β^e = 1.
Elements of order 2^n - 1 in GF(2^n) are called primitive elements.
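The order can be found by repeated multiplication in the field. An illustrative sketch
(not from the original text) for GF(2^3), built with the primitive polynomial
x^3 + x + 1:

```python
POLY = 0b1011  # x^3 + x + 1, a standard primitive polynomial for GF(8)

def gf8_mul(a: int, b: int) -> int:
    """Carry-less multiply of two GF(8) elements, reduced modulo x^3 + x + 1."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0b1000:   # degree reached 3: reduce
            a ^= POLY
        b >>= 1
    return p

def order(beta: int) -> int:
    """Least positive e with beta^e = 1."""
    e, acc = 1, beta
    while acc != 1:
        acc = gf8_mul(acc, beta)
        e += 1
    return e

# alpha = x (binary 010) has order 2^3 - 1 = 7, so it is primitive:
print(order(0b010))  # 7
```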

PARITY
The property of being odd or even. The parity of a binary vector is the parity of the
number of ones the vector contains. Parity may be computed by summing modulo-2 the
bits of the vector.
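The modulo-2 sum is just an XOR of the bits; a minimal sketch (not from the original
text):

```python
def parity(v: int) -> int:
    """Modulo-2 sum of the bits: 1 for odd parity, 0 for even parity."""
    p = 0
    while v:
        p ^= v & 1
        v >>= 1
    return p

print(parity(0b1011))  # three ones -> 1 (odd)
print(parity(0b1001))  # two ones   -> 0 (even)
```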

PARITY CHECK CODE
A code in which the encoder accepts a block of information bits and computes for
transmission a set of modulo-2 sums (XOR) across various of these information bits and
possibly information bits in prior blocks. A decoder at the receiving point reconstructs
the original information bits from the set of modulo-2 sums. Every binary parity-check
code is also a linear, or group, code. See also Block Code and Convolutional Code.
PERFECT CODE
An e-error-correcting code over GF(q) is said to be perfect if every vector is at a
distance no greater than e from the nearest codeword. Examples are the Hamming and
Golay codes.
PERIOD
The period of a polynomial P(x) is the least positive integer e such that x^e + 1 is
divisible by P(x).
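Equivalently, e is the least exponent with x^e mod P(x) = 1, which can be found by
repeatedly multiplying by x modulo P(x). An illustrative sketch (not from the original
text) for binary polynomials of degree at least 2, with coefficients packed into an
integer:

```python
def period(poly: int, degree: int) -> int:
    """Least e such that x^e mod poly = 1 over GF(2); assumes degree >= 2."""
    r, e = 0b10, 1          # r starts as the polynomial x
    while r != 1:
        r <<= 1             # multiply by x
        if (r >> degree) & 1:
            r ^= poly       # reduce modulo P(x)
        e += 1
    return e

print(period(0b1011, 3))  # x^3+x+1 is primitive: period 2^3 - 1 = 7
print(period(0b1111, 3))  # x^3+x^2+x+1 = (x+1)^3 divides x^4+1: period 4
```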
POINTER
Location information for an erasure. This information is normally provided by special
hardware.
POLYNOMIAL CODE
A linear block code whose codewords can be expressed in polynomial form and are
divisible by a generator polynomial. This class of codes includes the cyclic and
shortened cyclic codes.
POWER SUM SYMMETRIC FUNCTIONS
The power sum symmetric functions are the syndromes.
PRIME FIELD
A field is called prime if it possesses no subfields except that consisting of the whole
field.

PRIME SUBFIELD
The prime subfield of a field is the intersection of all subfields of the field.
PRIME POLYNOMIAL
See Irreducible.
PRIMITIVE POLYNOMIAL
A polynomial is said to be primitive if its period is 2^m - 1, where m is the degree of
the polynomial.
RANDOM ERRORS

For the purposes of this book, the term 'random errors' refers to an error distribution
in which error bursts (defect events) occur at random intervals and each burst affects
only a single symbol, usually one bit or one byte.

RATE
The code rate, or rate (R), of a code is the ratio of information bits (k) to total
bits (n), information bits plus redundancy. It is a measure of code efficiency:

R = k/n
RAW BURST ERROR RATE

Burst error rate before correction.
READABLE ERASURE

A suspected erasure that contains no errors.

RECIPROCAL POLYNOMIAL
The reciprocal of a polynomial F(x) is defined as x^m · F(1/x), where m is the degree
of F(x).
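For a binary polynomial, x^m · F(1/x) simply reverses the coefficient vector; an
illustrative sketch (not from the original text):

```python
def reciprocal(poly: int, degree: int) -> int:
    """Reverse the degree+1 coefficient bits of a binary polynomial."""
    return int(bin(poly)[2:].zfill(degree + 1)[::-1], 2)

# x^3 + x + 1 (1011) -> x^3 + x^2 + 1 (1101)
print(bin(reciprocal(0b1011, 3)))  # 0b1101
# A self-reciprocal polynomial equals its own reciprocal, e.g. x^2 + x + 1:
print(bin(reciprocal(0b111, 2)))   # 0b111
```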

RECURRENT CODE
See Convolutional Code.
REDUCIBLE
A polynomial of degree n is said to be reducible if it is divisible by some polynomial
of degree greater than 0 but less than n.
RELATIVELY PRIME
If the greatest common divisor of two polynomials is 1, they are said to be relatively
prime.
SELF-RECIPROCAL POLYNOMIAL
A polynomial that is its own reciprocal.
SHORTENED CYCLIC CODE
A linear code formed by deleting leading information digits from the codewords of a
cyclic code. Shortened cyclic codes are not cyclic.

SOFT ERROR
An error that disappears or becomes correctable on re-read; a soft error is assumed to
be due, at least in part, to a transient cause such as electrical noise.

SUBFIELD
A subset of a field which satisfies the definition of a field. See Section 2.8 for the
definition of a field.
SYNC FRAMING ERROR
An error condition in which synchronization occurs early or late by one or more bits.
SYNDROME
A syndrome is a vector (a symbol or set of symbols) containing information about an
error or errors. Some codes use a single syndrome while others use multiple syndromes.
A syndrome is usually generated by taking the EXCLUSIVE-OR sum of two sets of
redundant bits or symbols, one set generated on write and one set generated on read.
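A simplified single-check-byte sketch of that write/read comparison (illustrative only,
not from the original text):

```python
def check_byte(data: bytes) -> int:
    """XOR of all data bytes, standing in for a code's redundancy."""
    c = 0
    for b in data:
        c ^= b
    return c

written = bytes([0x12, 0x34, 0x56])
stored_check = check_byte(written)           # redundancy generated on write

read = bytes([0x12, 0x30, 0x56])             # byte 1 picked up bit errors
syndrome = stored_check ^ check_byte(read)   # XOR with redundancy from read
print(hex(syndrome))                         # nonzero: an error is present
print(stored_check ^ check_byte(written))    # error-free read: zero syndrome
```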
SYSTEMATIC CODE
A code in which the codewords are separated into two parts, with all information
symbols occurring first and all redundancy symbols following.
UNCORRECTABLE ERROR
An error situation which exceeds the correction capability of a code. An uncorrectable
error caused by a soft error on read will become correctable on re-read.
UNCORRECTABLE SECTOR
A sector which contains an uncorrectable error.
UNCORRECTABLE SECTOR EVENT RATE
The ratio of total uncorrectable sectors to total bits, having the units of uncorrectable
sector events per bit.

UNDETECTED ERRONEOUS DATA PROBABILITY (Pued)
The probability that erroneous data will be transferred and not detected, having the
units of undetected erroneous data events per bit. Pued for a code that does not have
pattern sensitivity is the product of the miscorrection probability (Pmc) of the error-correcting code (if present), the misdetection probability (Pmd) of the error-detecting
code (if present), and the probability of having an error that exceeds the guaranteed
capabilities of the code (Pe·Pc).
A code with pattern sensitivity will have two undetected erroneous data rates: one for
all possible error bursts, and a higher one for the sensitive patterns.
UNREADABLE ERASURE
A suspected erasure that actually contains an error.
UNRECOVERABLE ERROR
Same as hard error.
VERTICAL REDUNDANCY CHECK (VRC)
Check bit(s) on a byte or word selected to make total byte or word parity odd or even.
WEIGHT
The weight of a codeword is the number of non-zero symbols it contains.

BIBLIOGRAPHY
BOOKS

Abramson, N., Information Theory and Coding, McGraw-Hill, New York, 1963.
Aho, A., et. al., The Design and Analysis of Computer Algorithms, Addison-Wesley,
Massachusetts, 1974.
Albert, A., Fundamental Concepts of Higher Algebra, 1st ed., University of Chicago
Press, Chicago, 1956.
Artin, E., Galois Theory, 2nd ed., University of Notre Dame Press, Notre Dame, 1944.
Ash, R., Information Theory, Wiley-Interscience, New York, 1965.
Berlekamp, E. R., Algebraic Coding Theory, McGraw-Hill, New York, 1968.
Berlekamp, E. R., A Survey of Algebraic Coding Theory, Springer-Verlag, New York, 1970.

Berlekamp, E. R., Key Papers in The Development of Coding Theory, IEEE Press,
New York, 1974.
Bhargava, V., et. al., Digital Communications by Satellite, Wiley, New York, 1981.
Birkhoff, G. and T. C. Bartee, Modern Applied Algebra, McGraw-Hill, New York, 1970.
Birkhoff, G. and S. MacLane, A Survey of Modern Algebra, 4th ed., Macmillan,
New York, 1977.
Blake, I. F., Algebraic Coding Theory: History and Development, Dowden, Hutchinson &
Ross, Pennsylvania, 1973.
Blake, I. F. and R. C. Mullin, The Mathematical Theory of Coding, Academic Press,
New York, 1975.
Burton, D. M., Elementary Number Theory, Allyn & Bacon, Boston, 1980.
Cameron, P. and J. H. Van Lint, Graphs, Codes and Designs, Cambridge University Press,
Cambridge, 1980.
Campbell, H. G., Linear Algebra With Applications, 2nd ed., Prentice-Hall,
New Jersey, 1980.

Carlson, A. B., Communication Systems, 2nd ed., McGraw-Hill, New York, 1968.
Clark, Jr., G. C. and J. B. Cain, Error-Correction Coding for Digital Communications,
Plenum Press, New York, 1981.
Cohen, D. I. A., Basic Techniques of Combinatorial Theory, Wiley, New York, 1978.

Crouch, R. and E. Walker, Introduction to Modern Algebra and Analysis, Holt, Rinehart
& Winston, New York, 1962.
Davies, D. W. and D. L. A. Barber, Communication Networks for Computers, Wiley,
New York, 1973.
Davisson, L. D. and R. M. Gray, Data Compression, Dowden, Hutchinson & Ross,
Pennsylvania, 1976.
Doll, D. R., Data Communications: Facilities, Networks, and Systems Design, Wiley,
New York, 1978.

Durbin, J. B., Modern Algebra: An Introduction, Wiley, New York, 1979.
Feller, W., An Introduction to Probability Theory and Its Applications, 2nd ed., Wiley,
New York, 1971.
Fisher, J. L., Application-Oriented Algebra, Harper & Row, New York, 1977.
Folts, H. C., Data Communications Standards, 2nd ed., McGraw-Hill, New York, 1982.
Forney, Jr., G. D., Concatenated Codes, M.I.T. Press, Massachusetts, 1966.
Gallager, R. G., Information Theory and Reliable Communication, Wiley, New York, 1968.
Gere, J. M. and W. W. Williams, Jr., Matrix Algebra for Engineers, Van Nostrand,
New York, 1965.
Gill, A., Linear Sequential Circuits, McGraw-Hill, New York, 1967.

Gill, A., Applied Algebra for the Computer Sciences, Prentice-Hall, New Jersey, 1976.
Golomb, S., et. al., Digital Communications with Space Applications, Peninsula Publishing,
Los Altos, California, 1964.
Golomb, S., et. al., Shift Register Sequences, Aegean Park Press, Laguna Hills, California, 1982.
Gregg, W. D., Analog and Digital Communication, Wiley, New York, 1977.
Hamming, R. W., Coding and Information Theory, Prentice-Hall, New Jersey, 1980.
Hardy, G. and E. M. Wright, An Introduction to the Theory of Numbers, 5th ed.,
Clarendon Press, Oxford, 1979.
Herstein, I. N., Topics in Algebra, 2nd ed., Wiley, New York, 1975.
Jayant, N. S., Waveform Quantization and Coding, IEEE Press, New York, 1976.
Jones, D. S., Elementary Information Theory, Clarendon Press, Oxford, 1979.
Kaplansky, I., Fields and Rings, 2nd ed., The University of Chicago Press, Chicago, 1965.

Khinchin, A. I., Mathematical Foundations of Information Theory, Dover, New York,
1957.
Knuth, D. E., The Art of Computer Programming, Vol. 1, 2nd ed., Addison-Wesley,
Massachusetts, 1973.
Knuth, D. E., The Art of Computer Programming, Vol. 2, Addison-Wesley, Massachusetts,
1969.
Knuth, D. E., The Art of Computer Programming, Vol. 3, Addison-Wesley, Massachusetts,
1973.
Kuo, F. F., Protocols and Techniques for Data Communication Networks, Prentice-Hall,
New Jersey, 1981.
Lathi, B. P., An Introduction to Random Signals and Communication Theory, International Textbook Company, Pennsylvania, 1968.
Lin, S., An Introduction to Error-Correcting Codes, Prentice-Hall, New Jersey, 1970.
Lipson, J. D., Elements of Algebra and Algebraic Computing, Addison-Wesley,
Massachusetts, 1981.
Lucky, R. W., et. al., Principles of Data Communication, McGraw-Hill, New York, 1968.
MacWilliams, F. J. and N. J. A. Sloane, The Theory of Error-Correcting Codes, Vol. 16,
North-Holland, Amsterdam, 1977.
Martin, J., Communications Satellite Systems, Prentice-Hall, New Jersey, 1978.
Martin, J., Telecommunications and the Computer, Prentice-Hall, New Jersey, 1969.
McEliece, R. J., The Theory of Information and Coding, Addison-Wesley, Massachusetts,
1977.
McNamara, J. E., Technical Aspects of Data Communication, Digital Press, Massachusetts,
1978.
Niven, I., An Introduction to the Theory of Numbers, 4th ed., Wiley, New York, 1960.
Owen, F. E., PCM and Digital Transmission Systems, McGraw-Hill, New York, 1982.
Peterson, W. W., and E. J. Weldon, Jr., Error-Correcting Codes, 2nd ed., MIT Press,
Massachusetts, 1972.
Pless, V., Introduction to the Theory of Error-Correcting Codes, Wiley, New York, 1982.
Rao, T. R. N., Error Coding for Arithmetic Processors, Academic Press, New York, 1974.
Sawyer, W. W., A Concrete Approach to Abstract Algebra, Dover, New York, 1959.

Sellers, Jr., F. F., et. al., Error Detecting Logic for Digital Computers, McGraw-Hill,
New York, 1968.
Shanmugam, K. S., Digital and Analog Communication Systems, Wiley, New York, 1979.
Shannon, C. E. and W. Weaver, The Mathematical Theory of Communication, University
of Illinois Press, Chicago, 1980.
Slepian, D., Key Papers in The Development of Information Theory, IEEE Press,
New York, 1974.
Spencer, D. D., Computers in Number Theory, Computer Science Press, Maryland, 1982.
Stafford, R. H., Digital Television: Bandwidth Reduction and Communication Aspects,
Wiley-Interscience, New York, 1980.
Stark, H. M., An Introduction to Number Theory, The MIT Press, Cambridge, 1970.
Tanenbaum, A. S., Computer Networks, Prentice-Hall, New Jersey, 1981.
Viterbi, A. J., Principles of Digital Communication and Coding, McGraw-Hill, New York,
1979.
Wakerly, J., Error Detecting Codes, Self-Checking Circuits and Applications, North-Holland, New York, 1978.
Wiggert, D., Error-Control Coding and Applications, Artech House, Massachusetts, 1978.
Ziemer, R. E., and W. H. Tranter, Principles of Communications: Systems, Modulation,
and Noise, Houghton Mifflin, Boston, 1976.

IBM TECHNICAL DISCLOSURE BULLETIN
(Chronologically Ordered)
D. C. Bossen, et. al., "Intermittent Error Isolation in a Double-Error Environment."
15 (12), 3853 (May 1973).
W. C. Carter, "Totally Self-Checking Error for K Out of N Coded Data." 15 (12),
3867-3870 (May 1973).
L. R. Bahl and D. T. Tang, "Shortened Cyclic Code With Burst Error Detection and
Synchronization Recovery Capability." 16 (6), 2026-2027 (Nov. 1973).
L. R. Bahl and D. T. Tang, "Shortened Cyclic Code With Burst Error Detection and
Synchronization Recovery Capability." 16 (6), 2028-2030 (Nov. 1973).

P. Hodges, "Error Detecting Code With Enhanced Error Detecting Capability." 16 (11),
3749-3751 (Apr. 1974).
D. M. Oldham and A. M. Patel, "Cyclical Redundancy Check With a Nonself-Reciprocal
Polynomial." 16 (11), 3501-3503 (Apr. 1974).
G. H. Thompson, "Error Detection and Correction Apparatus." 17 (1), 7-8 (June 1974).
R. A. Healey, "Error Checking and Correction of Microprogram Control Words With a
Late Branch Field." 17 (2), 374-381 (July 1974).
A. M. Patel, "Coding Scheme for Multiple Selections Error Correction." 17 (2), 473-475
(July 1974).
D. C. Bossen and M. Y. Hsiao, "Serial Processing of Interleaved Codes." 17 (3), 809-810
(Aug. 1974).
K. B. Day and H. C. Hinz, "Error Pointing in Digital Signal Recording." 17 (4), 977-978
(Sept. 1974).

W. C. Carter and A. B. Wadia, "Contracted Reed-Solomon Codes With Combinational
Decoding." 17 (5), 1505-1507 (Oct. 1974).
W. D. Brodd and R. A. Donnan, "Cyclic Redundancy Check for Variable Bit Code Widths."
17 (6), 1708-1709 (Nov. 1974).

T. A. Adams, et. al., "Alternate Sector Assignment." 17 (6),1738-1739 (Nov. 1974).
W. C. Carter, et. al., "Practical Length Single-Bit Error Correction/Double-Bit Error
Detection Codes for Small Values of b." 17 (7), 2174-2176 (Dec. 1974).
I. E. Dohermann, "Defect Skipping Among Fixed Length Records in Direct Access
Storage Devices." 19 (4),1424-1426 (Sept. 1976).

R. E. Cummins, "Displacement Calculation of Error Correcting Syndrome Bytes by Table
Lookup." 22 (8b), 3809-3810 (Jan. 1980).

R. C. Cocking, et. al., "Self-Checking Number Verification and Repair Techniques."
22 (10), 4673-4676 (Mar. 1980).
P. Hodges, "Error-Detecting Code for Buffered Disk." 22 (12), 5441- 5443 (May 1980).
F. G. Gustavson and D. Y. Y. Yun, "Fast Computation of Polynomial Remainder
Sequences." 22 (12), 5580-5581 (May 1980).
V. Goetze, et. al., "Single Error Correction in CCD Memories." 23 (1), 215-216 (June
1980).
J. W. Barrs and J. C. Leininger, "Modified Gray Code Counters." 23 (2), 460-462 (July
1980).
J. L. Rivero, "Program for Calculating Error Correction Code." 23 (3), 986-988 (Aug.
1980).
N. N. Nguyen, "Error Correction Coding for Binary Data." 23 (4), 1525-1527 (Sept. 1980).
J. C. Mears, Jr., "High-Speed Error Correcting Encoder/Decoder." 23 (4), 2135-2136 (Oct.
1980).
G. W. Kurtz, et. al., "Odd-Weight Error Correcting Code for 32 Data Bits and 13 Check
Bits." 23 (6), 2338 (Nov.1980).
J. R. Calva, et. al., "Distributed Parity Check Function." 23 (6),2451-2456 (Nov. 1980).
J. R. Calva and B. J. Good, "Fail-Safe Error Detection With Improved Isolation of I/O
Faults." 23 (6), 2457-2460 (Nov. 1980).
S. G. Katsafouros and D. A. Kluga, "Memory With Selective Use of Error Detection and
Correction Circuits." 23 (7a), 2866-2867 (Dec. 1980).
R. A. Forsberg, et. al., "Error Detection for Memory With Partially Good Chips."
23 (7b), 3272-3273 (Dec. 1980).
R. H. Linton, "Detection of Single Bit Failures in Memories Using Longitudinal Parity."
23 (8), 3603-3604 (Jan. 1981).
C. L. Chen, "Error Correcting Code for Multiple Package Error Detection." 23 (8),
3808-3810 (Jan. 1981).
D. C. Bossen, et. al., "Separation of Error Correcting Code Errors and Addressing
Errors." 23 (9), 4224 (Feb. 1981).
G. S. Sager and A. J. Sutton, "System Correction of Alpha-Particle-Induced Uncorrectable Error Conditions by a Service Processor." 23 (9), 4225-4227 (Feb. 1981).
W. G. Bliss, et. al., "Error Correction Code." 23 (10),4629-4632 (Mar. 1981).
W. G. Bliss, "Circuitry for Performing Error Correction Calculations on Baseband
Encoded Data to Eliminate Error Propagation." 23 (10), 4633-4634 (Mar. 1981).

P. A. Franaszek, "Efficient Code for Digital Magnetic Recording." 23 (11), 5229-5232
(Apr. 1981).
C. L. Chen, "Error Checking of ECC Generation Circuitry." 23 (11), 5055-5057 (Apr.
1981).
C. L. Chen and B. L. Chu, "Extended Error Correction With an Error Correction Code."
23 (11), 5058-5060 (Apr. 1981).
G. G. Langdon, Jr., "Table-Driven Decoder Involving Prefix Codes." 23 (12), 5559-5562
(May 1981).
D. F. Kelleher, "Error Detection for All Errors in a 9-Bit Memory Chip." 23 (12), 5441
(May 1981).
S. W. Hinkel, "Utilization of CRC Bytes for Error Correction on Multiple Formatted Data
Strings." 24 (1b), 639-643 (June 1981).
D. A. Goumeau and S. W. Hinkel, "Error Correction as an Extension of Error Recovery
on Information Strings." 24 (1b), 651-652 (June 1981).
J. D. Dixon, et. al., "Parity Mechanism for Detecting Both Address and Data Errors."
24 (1b), 794 (June 1981).
A. M. Patel, "Dual-Function ECC Employing Two Check Bytes Per Word." 24 (2),
1002-1004 (July 1981).
D. Meltzer, "CCD Error Correction System." 24 (3), 1392-1396 (Aug. 1981).
I. Jones, "Variable-Length Code-Word Encoder/Decoder." 24 (3), 1514-1515 (Aug. 1981).

D. B. Convis, et. al., "Sliding Window Cross-Hatch Match Algorithm for Spelling Error
Correction. " 24 (3), 1607-1609 (Aug. 1981).
N. N. Heise and W. G. Verdoorn, "Serial Implementation of b-Adjacent Codes." 24 (5),
2366-2370 (Oct. 1981).
S. R. McBean, "Error Correction at a Display Terminal During Data Verification." 24 (5),
2426-2427 (Oct. 1981).
D. T. Tang and P. S. Yu, "Error Detection With Imbedded Forward Error Correction."
24 (5), 2469-2472 (Oct. 1981).
R. W. Alexander and J. L. Mitchell, "Uncompressed Mode Trigger." 24 (5), 2476-2480
(Oct. 1981).
V. A. Albaugh, et. al., "Sequencer for Converting Any Shift Register Into a Shift
Register Having a Lesser Number of Bit Positions." (Oct. 1981).
A. R. Barsness, W. H. Cochran, W. A. Lopour and L. P. Segar, "Longitudinal Parity
Generation for Single- Bit Error Correction." 24 (6),2769-2770 (Nov. 1981).
S. Lin and P. S. Yu, "Preventive Error Control Scheme." 24 (6), 2886-2891 (Nov. 1981).
D. T. Tang and P. S. Yu, "Hybrid Go-Back-N ARQ With Extended Code Block." 24 (6),
2892-2896 (Nov. 1981).
F. Neves and A. K. Uht, "Memory Error Correction Without ECC." 24 (7a), 3471 (Dec.
1981).
E. S. Anolick, et. al., "Alpha Particle Error Correcting Device." 24 (8), 4386 (Jan. 1982).
W. H. McAnney, "Technique for Test and Diagnosis of Shift-Register Strings." 24 (8),
4387-4389 (Jan. 1982).
F. J. Aichelmann, Jr. and L. K. Lange, "Paging Error Correction for Intermittent Errors."
24 (9), 4782-4783 (Feb. 1982).
R. E. Starbuck, "Self-Correcting DASD." 24 (10), 4916 (Mar. 1982).
W. H. Cochran and W. A. Lopour, "Optimized Error Correction/Detection for Chips
Organized Other Than By-1." 24 (10), 5275-5276 (Mar. 1982).
S. Bederman, et. al., "Codes for Accumulated-Error Channels." 24 (11a), 5744-5748 (Apr.
1982).
M. P. Deuser, et. al., "Correcting Errors in Cached Storage Subsystems." 24 (11a),
5347-6214 (Apr. 1982).

P. T. Burton, "Method for Enhancement of Correctability of Recording Data Errors in
Computer Direct-Access Storage Devices." 24 (11b), 6213 (Apr. 1982).
A. R. Barsness, et. al., "ECC Memory Card With Built-in Diagnostic Aids and Multiple
Usage." 24 (11b), 6173 (Apr. 1982).

NATIONAL TECHNICAL INFORMATION SERVICE
Altman, F. J., et. al., "Satellite Communications Reference Data Handbook." AD-746 165,
(July 1972).
Assmus, Jr., E. F. and H. F. Mattson, Jr., "Research to Develop the Algebraic Theory of
Codes." AD-678 108, (Sept. 1968).
Assmus, Jr., E. F., et. al., "Error-Correcting Codes." AD-754 234, (Aug. 1972).
Assmus, Jr., E. F., et. al., "Cyclic Codes." AD-634 989, (Apr. 1966).
Assmus, Jr., E. F., et. al., "Research to Develop the Algebraic Theory of Codes."
AD-656 783, (June 1967).
Bahl, L. R., "Correction of Single and Multiple Bursts of Error." AD-679 877, (Oct.
1968).

Benelli, G., "Multiple-Burst-Error-Correcting-Codes." N78-28316, (Apr. 1977).
Benice, R. J., et. al., "Adaptive Modulation and Error Control Techniques." AD-484 188,
(May 1966).
Brayer, K., "Error Patterns and Block Coding for the Digital High-Speed Autovon
Channel." AD-A022 489, (Feb. 1976).
Bussgang, J. J. and H. Gish, "Analog Coding." AD-721 228, (Mar. 1971).
Cetinyilmaz, N., "Application of the Computer for Real Time Encoding and Decoding of
Cyclic Block Codes." AD/A-021 818, (Dec. 1975).
Chase, D., et. al., "Troposcatter Interleaver Study Report." AD/A-008 523, (Feb. 1975).
Chase, D., et. al., "Coding/MUX Overhead Study." AD/A-009 174, (Mar. 1975).
Chase, D., et. al., "Multi-Sample Error Protection Modulation Study." AD/A-028 985,
(May 1976).
Chase, D., et. al., "Demod/Decoder Integration." AD/A-053 685, (Apr. 1978).
Chien, R. T. and S. W. Ng., "L-Step Majority Logic Decoding." AD-707 877, (June 1970).
Chien, R. T., et. al., "Hardware and Software Error Correction Coding." AD/A-017 377,
(Aug. 1975).
Choy, D. M-H., "Application of Fourier Transform Over Finite Fields to Error-Correcting
Codes." AD-778 102, (Apr. 1974).
Covitt, A. L., "Performance Analysis of a Frequency Hopping Modem." AD-756 840, (Dec.
1972).
Donnally, W., "Error Probability in Binary Digital FM Transmission Systems."
AD/A-056 237, (Feb. 1978).

Ellison, J. T., "Universal Function Theory and Galois Logic Studies." AD-740 849, (Mar.
1972).
Ellison, J. T. and B. Kolman, "Galois Logic Design." AD-717 205, (Oct. 1970).
Forney, Jr., G., "Study of Correlation Coding." AD-822 106, (Sept. 1967).
Gilhousen, K. S., et. al., "Coding Systems Study for High Data Rate Telemetry Links."
N71-27786, (Jan. 1971).
Gish, H., "Digital Modulation Enhancement Study." AD-755 939, (Jan. 1973).
Hamalaninen, J. R. and E. N. Skoog, "Error Correction Coding With NMOS Microprocessors: a 6800-Based 7,3 Reed-Solomon Decoder." AD/A-073 088, (May 1979).
Horn, F. M., "Design Study of Error-Detecting and Error-Correcting Shift Register."
N65-21302, (Apr. 1965).
Janoff, N. S., "Computer Simulation of Error-Correcting Codes." AD-777 198, (Sept.
1973).
Kindle, J. T., "Map Error Bit Decoding of Convolutional Codes." AD/A-061 639, (Aug.
1977).
Lee, L., "Concatenated Coding Systems Employing a Unit-Memory Convolutional Code
and a Byte-Oriented Decoding Algorithm." N76-31932, (July 1976).
Liu, K. Y., et. al., "The Fast Decoding of Reed-Solomon Codes Using High-Radix Fermat
Theoretic Transforms." N77-14057.
Martin, A. F., "Investigation of Bit Interleaving Techniques for Use with Viterbi
Decoding Over Differentially Coded Satellite Channels." AD/A-003 807, (July 1974).
Marver, J. M., "Complexity Reduction in Galois Logic Design." AD/A-056 190, (Dec.
1977).
Massey, J. L., "Joint Source and Channel Coding." AD/A-045 938, (Sept. 1977).
Mitchell, M. E., "Coding for Turbulent Channels." AD-869 973, (Apr. 1970).
Mitchell, M. E. and Colley, L. E., "Coding for Turbulent Channels." AD-869 942, (Apr.
1970).
Mitchell, M. E., et. al., "Coding for Turbulent Channels." AD-869 941, (Apr. 1970).
Morakis, J. C., "Shift Register Generators and Applications to Coding." X-520-68-133,
(Apr. 1963).
Muggia, A., "Effect of the Reduction of the Prandtl in the Stagnation Region Past an
Axisymmetric Blunt Body in Hypersonic Flow." AD-676 388, (July 1968).
McEliece, R. J., et. al., "Synchronization Strategies for RFI Channels." N77-21123.
Nesenbergs, M., "Study of Error Control Coding for the U. S. Postal Service Electronic
Message System." PB-252 689, (May 1975).
Oderwalder, J. P., et. al., "Hybrid Coding Systems Study Final Report." N72-32206, (Sept.
1972).
Paschburg, R. H., "Software Implementation of Error-Correcting Codes." AD-786 542,
(Aug. 1974).
Pierce, J. N., "Air Force Cambridge Research Laboratories." AD-744 069, (Mar. 1972).
Reed, I. S., "kth-Order Near-Orthogonal Codes." AD-725 901, (1971).
Reed, I. S. and T. K. Truong, "A Simplified Algorithm for Correcting Both Errors and
Erasures of R-S Codes." N79-16012, (Sept./Oct. 1978).
Roome, T. F., "Generalized Cyclic Codes Finite Field Arithmetic." AD/A-070 673, (May
1979).
Rudolph, L. D., "Decoding Complexity Study." AD/A-002 155, (Nov. 1974).
Rudolph, L. D., "Decoding Complexity Study II." AD/A-039 023, (Mar. 1977).
Sarwate, D. V., "A Semi-Fast Fourier Transform Algorithm Over GF(2^m)." AD/A-034 982,
(Sept. 1976).
Schmandt, F. D., "The Application of Sequential Code Reduction." AD-771 587, (Oct.
1973).
Sewards, A., et. al., "Forward Error-Correction for the Aeronautical Satellite Communications Channel." N79-19193, (Feb. 1979).
Skoog, E. N., "Error Correction Coding with NMOS Microprocessors: Concepts."
AD/A-072 982, (May 1979).
Solomon, G., "Error Correcting Codes for the English Alphabet and Generalizations."
AD-774 850, (July 1972).
Solomon G. and D. J. Spencer, "Error Correction/Multiplex for Megabit Data Channels."
AD-731 567, (Sept. 1971).
Solomon, G., et. al., "Error Correction/Multiplex for Megabit Data Channels."
AD-731 568, (Sept. 1971).

Stutt, C. A., "Coding for Turbulent Channels." AD-869 979, (Apr. 1970).
Tomlinson, M. and B. H. Davies, "Low Rate Error Correction Coding for Channels with
Phase Jitter." AD/A-044 658, (Feb. 1977).
Viterbi, A. J., et. al., "Concatenation of Convolutional and Block Codes." N71-32505,
(June 1971).
Welch, L. R., et. al., "The Fast Decoding of Reed-Solomon Codes Using Fermat Theoretic
Transforms and Continued Fractions." N77-14056.
Wong, J. S. L., et. al., "Review of Finite Fields: Applications to Discrete Fourier Transforms and Reed-Solomon Coding." N77-33875, (July 1977).
....., "Coding Investigation for Time Division Multiple Access Communications."
AD-766 540, (July 1973).
....., "Feedback Communications." AD/A-002 284, (Oct. 1974).
....., "Majority Decoding Apparatus for Geometric Codes." AD-D003 369, (Oct. 1976).

AUDIO ENGINEERING SOCIETY PREPRINTS
Adams, R. W., "Filtering in the Log Domain." 1470 (B-5), (May 1979).
Doi, T. T., "Channel Codings for Digital Audio Recordings." 1856 (1-1), (Oct./Nov. 1981).
Doi, T. T., "A Design of Professional Digital Audio Recorder." 1885 (G-2), (Mar. 1982).
Doi, T. T., et. al., "Cross Interleave Code for Error Correction of Digital Audio
Systems." 1559 (H-4), (Nov. 1979).
Doi, T. T., et. al., "A Long Play Digital Audio Disc System." 1442 (G-4), (Mar. 1979).
Doi, T. T., et. al., "A Format of Stationary-Head Digital Audio Recorder Covering Wide
Range of Application." 1677 (H-6), (Oct./Nov. 1980).
Engberg, E. W., "A Digital Audio Recorder Format for Professional Applications."
1413 (F-1), (Nov. 1978).
Fukuda, G. and T. Doi, "On Dropout Compensation of PCM Systems-Computer Simulation
Method and a New Error-Correcting Code (Cross Word Code)." 1354 (E-7), (May
1978).
Fukuda, G., et. al., "On Error Correctability of EIAJ-Format of Home Use Digital Tape
Recorders." 1560 (G-5), (Nov. 1979).
Furukawa, T., et. al., "A New Run Length Limited Code." 1839 (1-2), (Oct./Nov. 1981).
Inoue, T., et. al., "Comparison of Performances Between IPC Code and RSC Code When
Applied to PCM Tape Recorder." 1541 (H-5), (Nov. 1979).
Ishida, Y., et. al., "A PCM Digital Audio Processor for Home Use VTR's." 1528 (G-6),
(Nov. 1979).
Kosaka, M., et. al., "A Digital Audio System Based on a PCM Standard Format."
1520 (G-4), (Nov. 1979).
Lagadec, R., et. al., "A Digital Interface for the Interconnection of Professional
Digital Audio Equipment." 1883 (G-6), (Mar. 1982).
Locanthi, B. N. and M. Komamura, "Computer Simulation for Digital Audio Systems."
1653 (K-4), (May 1980).
Muraoka, T., et. al., "A Group Delay Analysis of Magnetic Recording Systems."
1466 (A-5), (May 1979).
Nakajima, H., et. al., "A New PCM Audio System as an Adapter of Video Tape
Recorders." 1352 (B-l1), (May 1978).
Nakajima, H., et. al., "Satellite Broadcasting System for Digital Audio." 1855 (L-8),
(Oct./Nov. 1981).

Odaka, K., et. al., "LSIs for Digital Signal Processing to be Used in "Compact Disc
Digital Audio" Players." 1860 (G-5), (Mar. 1982).
Sadashige, K. and H. Matsushima, "Recent Advances in Digital Audio Recording Technique." 1652 (K-5), (May 1980).
Seno, K., et. al., "A Consideration of the Error Correcting Codes for PCM Recording
System." 1397 (H-4), (Nov. 1978).
Tanaka, K., et. al., "2-Channel PCM Tape Recorder for Professional Use." 1408 (F-3),
(Nov. 1978).
Tanaka, K., et. al., "Improved Two Channel PCM Tape Recorder for Professional Use."
1533 (G-3), (Nov. 1979).
Tanaka, K., et. al., "On a Tape Format for Reliable PCM Multi-Channel Tape Recorders."
1669 (K-l), (May 1980).
Tanaka, K., et. aI., "On PCM Multi-Channel Tape Recorder Using Powerful Code
Format." 1690 (H-5), (Oct.lNov. 1980).
Tsuchiya, Y., et. al., "A 24-Channel Stationary-Head Digital Audio Recorder." 1412 (F-2) ,
(Nov. 1978).
Van Gestel, W. J, et. al., " A Multi-Track Digital Audio Recorder for Consumer Applications." 1832 (1-4), (Oct. 1981).
Vries, L. B., "The Error Control System of Philips Compact Disc." 1548 (G-8), (Nov.
1979).
Vries, L. B., et. aI., "The Compact Disc Digital Audio System: Mudulation and ErrorCorrection." 1674 (H-8), (Oct. 1980).
White, L., et. al., "Refinements of the Threshold Error Correcting Algorithm."
1790 (B-5), (May 1981).
Yamada, Y., et. al., "Professional-Use PCM Audio Processor With a High Efficiency Error
Correction System." 1628 (G-7), (May 1980).


PATENTS

2,864,078, "Phased, Timed Pulse Generator," Seader, (1958).
2,957,947, "Pulse Code Transmission System," Bowers, (1960).
3,051,784, "Error-Correcting System," Neumann, (1962).
3,162,837, "Error Correcting Code Device With Modulo-2 Adder and Feedback Means,"
Meggitt, (1964).
3,163,848, "Double Error Correcting System," Abramson, (1964).
3,183,483, "Error Detection Apparatus," Lisowski, (1965).
3,226,685, "Digital Recording Systems Utilizing Ternary, N Bit Binary and Other Self-Clocking Forms," Potter, et al., (1965).
3,227,999, "Continuous Digital Error-Correcting System," Hagelbarger, (1966).
3,242,461, "Error Detection System," Silberg, et al., (1966).
3,264,623, "High Density Dual Track Redundant Recording System," Gabor, (1966).
3,278,729, "Apparatus For Correcting Error-Bursts In Binary Code," Chien, (1966).
3,281,804, "Redundant Digital Data Storage System," Dirks, (1966).
3,281,806, "Pulse Width Modulation Representation of Paired Binary Digits," Lawrance,
et al., (1966).
3,291,972, "Digital Error Correcting Systems," Helm, (1966).
3,319,223, "Error Correcting System," Helm, (1967).
3,372,376, "Error Control Apparatus," Helm, (1968).
3,374,475, "High Density Recording System," Gabor, (1968).
3,387,261, "Circuit Arrangement for Detection and Correction of Errors Occurring in
the Transmission of Digital Data," Betz, (1968).
3,389,375, "Error Control System," (1968).
3,398,400, "Method and Arrangement for Transmitting and Receiving Data Without
Errors," Rupp, et al., (1968).

3,402,390, "System for Encoding and Decoding Information Which Provides Correction of
Random Double-Bit and Triple-Bit Errors," Tsimbidis, et al., (1968).
3,411,135, "Error Control Decoding System," Watts, (1968).
3,413,599, "Handling Of Information With Coset Codes," Freiman, (1968).
- 433 -

3,416,132, "Group Parity Handling," MacSorley, (1968).
3,418,629, "Decoders For Cyclic Error-Correcting Codes," (1968).
3,421,147, "Buffer Arrangement," Burton, et al., (1969).
3,421,148, "Data Processing Equipment," (1969).
3,423,729, "Anti-Fading Error Correction System," Heller, (1969).
3,437,995, "Error Control Decoding System," Watts, (1969).
3,452,328, "Error Correction Device For Parallel Data Transmission System," Hsiao,
et al., (1969).
3,457,562, "Error Correcting Sequential Decoder," (1969).
3,458,860, "Error Detection By Redundancy Checks," Shimabukuro, (1969).
3,465,287, "Burst Error Detector," (1969).
3,475,723, "Error Control System," Burton, et al., (1969).
3,475,724, "Error Control System," Townsend, et al., (1969).
3,475,725, "Encoding Transmission System," Frey, Jr., (1969).
3,478,313, "System For Automatic Correction Of Burst-Errors," Srinivasan, (1969).
3,504,340, "Triple Error Correction Circuit," Allen, (1970).
3,506,961, "Adaptively Coded Data Communications System," Abramson, et aI., (1970).
3,508,194, "Error Detection and Correction System," Brown, (1970).
3,508,195, "Error Detection and Correction Means," Sellers, Jr., (1970).
3,508,196, "Error Detection and Correction Features," Sellers, Jr., et al., (1970).
3,508,197, "Single Character Error and Burst-Error Correcting Systems Utilizing
Convolution Codes," (1970).
3,508,228, "Digital Coding Scheme Providing Indicium at Cell Boundaries Under
Prescribed Circumstances to Facilitate Self-Clocking," Bishop, (1970).
3,519,988, "Error Checking Arrangement for Data Processing Apparatus," Grossman,
(1970).
3,533,067, "Error Correcting Digital Coding and Decoding Apparatus," (1970).
3,534,331, "Encoding-Decoding Array," Kautz, (1970).


3,542,756, "Error Correcting," Gallager, (1970).
3,557,356, "Pseudo-Random 4-Level m-Sequences Generators," Baiza, et al., (1971).
3,559,167, "Self-Checking Error Checker for Two-Rail Coded Data," (1971).
3,559,168, "Self-Checking Error Checker for k-Out-of-n Coded Data," (1971).
3,560,925, "Detection and Correction of Errors in Binary Code Words," Ohnsorge, (1971).

3,560,942, "Clock for Overlapped Memories With Error Correction," Enright, Jr., (1971).
3,562,711, "Apparatus for Detecting Circuit Malfunctions," Davis, et al., (1971).
3,568,148, "Decoder for Error Correcting Codes," Clark, Jr., (1971).
3,573,728, "Memory With Error Correction for Partial Store Operation," Kolankowsky,
et al., (1971).
3,576,952, "Forward Error Correcting Code Telecommunicating System," VanDuuren,
et al., (1971).
3,577,186, "Inversion-Tolerant Random Error Correcting Digital Data Transmission
System," Mitchell, (1971).
3,582,878, "Multiple Random Error Correcting System," (1971).
3,582,881, "Burst-Error Correcting Systems," Burton, (1971).
3,585,586, "Facsimile Transmission System," Harmon, et al., (1971).
3,587,090, "Great Rapidity Data Transmission System," Labeyrie, (1971).
3,601,798, "Error Correcting and Detecting Systems," Haize, (1971).
3,601,800, "Error Correcting Code Device for Parallel-Serial Transmissions," Lee, (1971).
3,622,982, "Method and Apparatus for Triple Error Correction," Clark, Jr., et al., (1971).
3,622,984, "Error Correcting System and Method," (1971).
3,622,985, "Optimum Error-Correcting Code Device for Parallel-Serial Transmissions in
Shortened Cyclic Codes," Ayling, et al., (1971).
3,622,986, "Error-Detecting Technique for Multilevel Precoded Transmission," Tang,
et al., (1971).
3,623,155, "Optimum Apparatus and Method for Check Bit Generation and Error Detection, Location and Correction," (1971).


3,624,637, "Digital Code to Digital Code Conversions," Irwin, (1971).
3,629,824, "Apparatus for Multiple-Error Correcting Codes," Bossen, (1971).
3,631,428, "Quarter-Half Cycle Coding for Rotating Magnetic Memory System," King,
(1971).
3,634,821, "Error Correcting System," (1972).
3,638,182, "Random and Burst Error-Correcting Arrangement with Guard Space Error
Correction," Burton, et al., (1972).
3,639,900, "Enhanced Error Detection and Correction for Data Systems," Hinz, Jr.,
(1972).

3,641,525, "Self-Clocking Five Bit Record-Playback System," Milligan, (1972).
3,641,526, "Intra-Record Resynchronization," Bailey, et al., (1972).
3,648,236, "Decoding Method and Apparatus for Bose-Chaudhuri-Hocquenghem Codes,"
Burton, (1972).
3,648,239, "System for Translating to and From Single Error Correction-Double Error
Detection Hamming Code and Byte Parity Code," (1972).
3,649,915, "Digital Data Scrambler-Descrambler Apparatus for Improved Error Performance," Mildonian, Jr., (1972).
3,662,337, "Mod 2 Sequential Function Generator for Multibit Binary Sequence," Low,
et al., (1972).
3,662,338, "Modified Threshold Decoder for Convolutional Codes," (1972).
3,665,430, "Digital Tape Error Recognition Method Utilizing Complementary Information," Hinrichs, et al., (1972).
3,668,631, "Error Detection and Correction System with Statistically Optimized Data
Recovery," Griffith, et al., (1972).
3,668,632, "Fast Decode Character Error Detection and Correction System," Oldham III,
(1972).
3,671,947, "Error Correcting Decoder," (1972).
3,675,200, "System for Expanded Detection and Correction of Errors in Parallel Binary
Data Produced by Data Tracks," (1972).
3,675,202, "Device for Checking a Group of Symbols to Which a Checking Symbol is
Joined and for Determining This Checking Symbol," Verhoeff, (1972).
3,685,014, "Automatic Double Error Detection and Correction Device," (1972).


3,685,016, "Array Method and Apparatus for Encoding, Detecting, and/or Correcting
Data," (1972).
3,688,265, "Error-Free Decoding for Failure-Tolerant Memories," (1972).
3,689,899, "Run-Length-Limited Variable-Length Coding with Error Propagation Limitation," Franaszek, (1972).
3,697,947, "Character Correcting Coding System and Method for Deriving the Same,"
(1972).
3,697,948, "Apparatus for Correcting Two Groups of Multiple Errors," Bossen, (1972).
3,697,949, "Error Correction System for Use With a Rotational Single-Error Correction,
Double-Error Detection Hamming Code," (1972).
3,697,950, "Versatile Arithmetic Unit for High Speed Sequential Decoder," (1972).
3,699,516, "Forward-Acting Error Control System," Mecklenburg, (1972).
3,701,094, "Error Control Arrangement for Information Comparison," Howell, (1972).
3,714,629, "Double Error Correcting Method and System," (1973).
3,718,903, "Circuit Arrangement for Checking Stored Information," Oiso, et al., (1973).
3,725,859, "Burst Error Detection and Correction System," Blair, et al., (1973).
3,728,678, "Error-Correcting Systems Utilizing Rate 1/2 Diffuse Codes," (1973).
3,742,449, "Burst and Single Error Detection and Correction System," Blair, (1973).
3,745,525, "Error Correcting System," (1973).
3,745,526, "Shift Register Error Correcting System," Hong, et al., (1973).
3,745,528, "Error Correction for Two Tracks in a Multi-Track System," Patel, (1973).
3,753,227, "Parity Check Logic for a Code Reading System," Patel, (1973).
3,753,228, "Synchronizing Arrangement for Digital Data Transmission Systems," Nickolas,
et al., (1973).
3,753,230, "Methods and Apparatus for Unit-Distance Counting and Error-Detection,"
Hoffner II, (1973).
3,755,779, "Error Correction System for Single-Error Correction, Related-Double-Error
Correction and Unrelated-Double-Error Detection," (1973).
3,764,998, "Methods and Apparatus for Removing Parity Bits from Binary Words,"
Spencer, (1973).


3,766,521, "Multiple B-Adjacent Group Error Correction and Detection Codes and
Self-Checking Translators Therefor," (1973).
3,768,071, "Compensation for Defective Storage Positions," Knauft, et al., (1973).
3,771,126, "Error Correction for Self-Synchronized Scramblers," (1973).
3,771,143, "Method and Apparatus for Providing Alternate Storage Areas on a Magnetic
Disk Pack," Taylor, (1973).
3,774,154, "Error Control Circuits and Methods," Devore, et al., (1973).
3,775,746, "Method and Apparatus for Detecting Odd Numbers of Errors and Burst
Errors of Less Than a Predetermined Length in Scrambled Digital
Sequences," Boudreau, et al., (1973).
3,777,066, "Method and System for Synchronizing the Transmission of Digital Data
While Providing Variable Length Filler Code," Nicholas, (1973).
3,780,271, "Error Checking Code and Apparatus for an Optical Reader," (1973).
3,780,278, "Binary Squaring Circuit," Way, (1973).
3,781,109, "Data Encoding and Decoding Apparatus and Method," Mayer, Jr., et al.,
(1973).
3,781,791, "Method and Apparatus for Decoding BCH Codes," Sullivan, (1973).
3,786,201, "Audio-Digital Recording System," (1974).
3,786,439, "Error Detection Systems," McDonald, et al., (1974).
3,794,819, "Error Correction Method and Apparatus," Berding, (1974).
3,794,821, "Memory Protection Arrangements for Data Processing Systems," (1974).
3,798,597, "System and Method for Effecting Cyclic Redundancy Checking," Frambs,
et al., (1974).
3,800,281, "Error Detection and Correction Systems," Devore, et al., (1974).
3,801,955, "Cyclic Code Encoder/Decoder," Howell, (1974).
3,810,111, "Data Coding With Stable Base Line for Recording and Transmitting Binary
Data, " (1974).
3,814,921, "Apparatus and Method for a Memory Partial-Write of Error Correcting
Encoded Data," Nibby, et al., (1974).
3,818,442, "Error-Correcting Decoder for Group Codes," (1974).
3,820,083, "Coded Data Enhancer, Synchronizer, and Parity Remover System," Way,
(1974).

3,825,893, "Modular Distributed Error Detection and Correction Apparatus and Method,"
(1974).
3,828,130, "Data Transmitting System," Yamaguchi, (1974).
3,831,142, "Method and Apparatus for Decoding Compatible Convolutional Codes," (1974).
3,831,143, "Concatenated Burst-Trapping Codes," Trafton, (1974).
3,832,684, "Apparatus for Detecting Data Bits and Error Bits In Phase Encoded Data,"
Besenfelder, (1974).
3,842,400, "Method and Circuit Arrangement for Decoding and Correcting Information
Transmitted in a Convolutional Code," (1974).
3,843,952, "Method and Device for Measuring the Relative Displacement Between Binary
Signals Corresponding to Information Recorded on the Different Tracks of a
Kinematic Magnetic Storage Device," Husson, (1974).
3,851,306, "Triple Track Error Correction," (1974).
3,858,119, "Error Detection Recording Technique," (1974).
3,859,630, "Apparatus for Detecting and Correcting Errors in Digital Information
Organized into a Parallel Format by Use of Cyclic Polynomial Error Detecting and Correcting Codes," Bennett, (1975).
3,863,228, "Apparatus for Detecting and Eliminating a Transfer of Noise Records to a
Data Processing Apparatus," Taylor, (1975).
3,866,110, "Binary Transmission System Using Error-Correcting Code," Verzocchi, (1975).
3,868,632, "Plural Channel Error Correcting Apparatus and Methods," Hong, et al.,
(1975).
3,872,431, "Apparatus for Detecting Data Bits and Error Bits in Phase Encoded Data,"
Besenfelder, et al., (1975).
3,876,978, "Archival Data Protection," (1975).
3,878,333, "Simplex ARQ System," Shimizu, et al., (1975).
3,882,457, "Burst Error Correction Code," En, (1975).
3,891,959, "Coding System for Differential Phase Modulation," Tsuji, et al., (1975).
3,891,969, "Syndrome Logic Checker for an Error Correcting Code Decoder,"
Christensen, (1975).

3,893,070, "Error Correction and Detection Circuit With Modular Coding Unit," (1975).
3,893,071, "Multi Level Error Correction System for High Density Memory," (1975).

3,893,078, "Method and Apparatus for Calculating the Cyclic Code of a Binary
Message," Finet, (1975).
3,895,349, "Pseudo-Random Binary Sequence Error Counters," Robson, (1975).
3,896,416, "Digital Telecommunications Apparatus Having Error-Correcting Facilities,"
Barrett, et al., (1975).
3,903,474, "Periodic Pulse Check Circuit," (1975).
3,909,784, "Information Coding With Error Tolerant Code," Raymond, (1975).
3,913,068, "Error Correction of Serial Data Using a Subfield Code," (1975).

3,920,976, "Information Storage Security System," Christensen, et al., (1975).
3,921,210, "High Density Data Processing System," Halpern, (1975).
3,925,760, "Method of and Apparatus for Optical Character Recognition, Reading and
Reproduction," Mason, et al., (1975).
3,928,823, "Code Translation Arrangement," (1975).
3,930,239, "Integrated Memory," Salters, et al., (1975).
3,938,085, "Transmitting Station and Receiving Station for Operating With a Systematic
Recurrent Code," (1976).
3,944,973, "Error Syndrome and Correction Code Forming Devices," Masson, (1976).
3,949,380, "Peripheral Device Reassignment Control Technique," Barbour, et al., (1976).
3,958,110, "Logic Array with Testing Circuitry," Hong, et al., (1976).
3,958,220, "Enhanced Error Correction," (1976).
3,982,226, "Means and Method for Error Detection and Correction of Digital Data,"
(1976).
3,983,536, "Data Signal Handling Arrangements," Telfer, (1976).
3,988,677, "Space Communication System for Compressed Data With a Concatenated
Reed-Solomon-Viterbi Coding Channel," Fletcher, et al., (1976).
3,996,565, "Programmable Sequence Controller," Nakao, et al., (1976).
3,997,876, "Apparatus and Method for Avoiding Defects in the Recording Medium within
a Peripheral Storage System," Frush, (1976).
4,001,779, "Digital Error Correcting Decoder," Schiff, (1977).


4,009,469, "Loop Communications System with Method and Apparatus for Switch to
Secondary Loop," Boudreau, et al., (1977).
4,013,997, "Error Detection/Correction System," Treadwell III, (1977).
4,015,238, "Metric Updater for Maximum Likelihood Decoder," (1977).
4,020,461, "Method of and Apparatus for Transmitting and Receiving Coded Digital
Signals," Adams, et al., (1977).
4,024,498, "Apparatus for Dead Track Recovery," (1977).
4,030,067, "Table Lookup Direct Decoder for Double-Error Correcting (DEC)
BCH Codes Using a Pair of Syndromes," Howell, et al., (1977).
4,030,129, "Pulse Code Modulated Digital Audio System," (1977).
4,032,886, "Concatenation Technique for Burst-Error Correction and Synchronization,"
En, et al., (1977).
4,035,767, "Error Correction Code and Apparatus for the Correction of Differentially
Encoded Quadrature Phase Shift Keyed Data (DQPSK)," Chen, et al., (1977).
4,037,091, "Error Correction Circuit Utilizing Multiple Parity Bits," (1977).
4,037,093, "Matrix Multiplier in GF(2^m)," (1977).
4,044,328, "Data Coding and Error Correcting Methods and Apparatus," Herff, (1977).
4,044,329, "Variable Cyclic Redundancy Character Detector," (1977).
4,047,151, "Adaptive Error Correcting Transmission System," Rydbeck, et al., (1977).
4,052,698, "Multi-Parallel-Channel Error Checking," (1977).
4,054,921, "Automatic Time-Base Error Correction System," (1977).
4,055,832, "One-Error Correction Convolutional Coding System," (1977).
4,058,851, "Conditional Bypass of Error Correction for Dual Memory Access Time
Selection," Scheuneman, (1977).
4,063,038, "Error Coding Communication Terminal Interface," Kaul, et al., (1977).
4,064,483, "Error Correcting Circuit Arrangement Using Cube Circuits," (1977).
4,072,853, "Apparatus and Method for Storing Parity Encoded Data from a Plurality of
Input/Output Sources," Barlow, et al., (1978).
4,074,228, "Error Correction of Digital Signals, " (1978).
4,077,028, "Error Checking and Correcting Device," Lui, et al., (1978).


4,081,789, "Switching Arrangement for Correcting the Polarity of a Data Signal
Transmitted With a Recurrent Code," (1978).
4,087,787, "Decoder for Implementing an Approximation of the Viterbi Algorithm Using
Analog Processing Techniques," (1978).
4,092,713, "Post-Write Address Word Correction in Cache Memory System," Scheuneman,
(1978).
4,099,160, "Error Location Apparatus and Methods," (1978).
4,105,999, "Parallel-Processing Error Correction System," Nakamura, (1978).
4,107,650, "Error Correction Encoder and Decoder," Luke, et al., (1978).
4,107,652, "Error Correcting and Controlling System," (1978).
4,110,735, "Error Detection and Correction," Maxemchuk, (1978).
4,112,502, "Conditional Bypass of Error Correction for Dual Memory Access Time
Selection," Scheuneman, (1978).
4,115,768, "Sequential Encoding and Decoding of Variable Word Length, Fixed Rate Data
Codes," Eggenberger, et al., (1978).
4,117,458, "High Speed Double Error Correction Plus Triple Error Detection System,"
(1978).
4,119,945, "Error Detection and Correction," Lewis, Jr., et al., (1978).
4,129,355, "Light Beam Scanner with Parallelism Error Correction," Noguchi, (1978).
4,138,694, "Video Signal Recorder/Reproducer for Recording and Reproducing Pulse
Signals, " (1979).
4,139,148, "Double Bit Error Correction Using Single Bit Error Correction, Double Bit
Error Detection Logic and Syndrome Bit Memory," (1979).
4,141,039, "Recorder Memory With Variable Read and Write Rates," Yamamoto, (1979).
4,142,174, "High Speed Decoding of Reed-Solomon Codes," Chen, et al., (1979).
4,145,683, "Single Track Audio-Digital Recorder and Circuit for Use Therein Having
Error Correction," Brookhart, (1979).
4,146,099, "Signal Recording Method and Apparatus," Matsushima, et al., (1979).
4,146,909, "Sync Pattern Encoding System for Run-Length Limited Codes," Beckenhauer,
et al., (1979).
4,151,510, "Method and Apparatus for an Efficient Error Detection and Correction
System," Howell, et al., (1979).


4,151,565, "Discrimination During Reading of Data Prerecorded in Different Codes,"
Mazzola, (1979).
4,156,867, "Data Communication System With Random and Burst Error Protection and
Correction," Bench, et al., (1979).
4,157,573, "Digital Data Encoding and Reconstruction Circuit," (1979).
4,159,468, "Communications Line Authentication Device," Barnes, et al., (1979).
4,159,469, "Method and Apparatus for the Coding and Decoding of Digital Information,"
(1979).
4,160,236, "Feedback Shift Register," Oka, et al., (1979).
4,162,480, "Galois Field Computer," (1979).
4,163,147, "Double Bit Error Correction Using Double Bit Complementing," (1979).
4,167,701, "Decoder for Error-Correcting Code Data," Kuki, et al., (1979).
4,168,468, "Radio Motor Control System," Mabuchi, et al., (1979).
4,168,486, "Segmented Error-Correction System," (1979).
4,175,692, "Error Correction and Detection Systems," (1979).
4,181,934, "Microprocessor Architecture with Integrated Interrupts and Cycle Steals
Prioritized Channel," Marenin, (1980).
4,183,463, "Ram Error Correction Using Two Dimensional Parity Checking," (1980).
4,185,269, "Error Correcting System for Serial by Byte Data," (1980).
4,186,375, "Magnetic Storage Systems for Coded Numerical Data with Reversible
Transcoding into High Density Bipolar Code of Order N," Castellani, et al.,
(1980).
4,188,616, "Method and System for Transmitting and Receiving Blocks of Encoded Data
Words to Minimize Error Distortion in the Recovery of Said Data Words,"
Kazami, et al., (1980).
4,189,710, "Method and Apparatus for Detecting Errors in a Transmitted Code," Iga,
(1980).
4,191,970, "Interframe Coder for Video Signals," Witsenhausen, et al., (1980).
4,193,062, "Triple Random Error Correcting Convolutional Code," En, (1980).
4,196,445, "Time-Base Error Correction," (1980).
4,201,337, "Data Processing System Having Error Detection and Correction Circuits,"
Lewis, et al., (1980).

4,201,976, "Plural Channel Error Correcting Methods and Means Using Adaptive
Reallocation of Redundant Channels Among Groups of Channels," (1980).

4,202,018, "Apparatus and Method for Providing Error Recognition and Correction of
Recorded Digital Information," Stockham, Jr., (1980).
4,204,199, "Method and Means for Encoding and Decoding Digital Data," Isailovic,
(1980).
4,204,634, "Storing Partial Words in Memory," (1980).
4,205,324, "Methods and Means for Simultaneously Correcting Several Channels in Error
in a Parallel Multi Channel Data System Using Continuously Modifiable
Syndromes and Selective Generation of Internal Channel Pointers," (1980).
4,205,352, "Device for Encoding and Recording Information with Peak Shift Compensation," Tomada, (1980).
4,206,440, "Encoding for Error Correction of Recorded Digital Signals," Doi, et al.,
(1980).
4,209,809, "Apparatus and Method for Record Reorientation Following Error Detection
in a Data Storage Subsystem," Chang, et al., (1980).
4,209,846, "Memory Error Logger Which Sorts Transient Errors From Solid Errors,"
Seppa, (1980).
4,211,996, "Error Correction System for Differential Phase-Shift-Keying," Nakamura,
(1980).
4,211,997, "Method and Apparatus Employing an Improved Format for Recording and
Reproducing Digital Audio," Rudnick, et al., (1980).
4,213,163, "Video-Tape Recording," Lemelson, (1980).
4,214,228, "Error-Correcting and Error-Detecting System," (1980).
4,215,402, "Hash Index Table Hash Generator Apparatus," Mitchell, et al., (1980).
4,216,532, "Self-Correcting Solid-State Mass Memory Organized by Words for a
Stored-Program Control System," Garetti, et al., (1980).
4,216,540, "Programmable Polynomial Generator," McSpadden, (1980).
4,216,541, "Error Repairing Method and Apparatus for Bubble Memories," Clover, et al.,
(1980).
4,223,382, "Closed Loop Error Correct," Thorsrud, (1980).
4,225,959, "Tri-State Bussing System," (1980).
4,234,804, "Signal Correction for Electrical Gain Control Systems," Bergstrom, (1980).

4,236,247, "Apparatus for Correcting Multiple Errors in Data Words Read From a
Memory," Sundberg, (1980).
4,238,852, "Error Correcting System," Iga, et al., (1980).
4,240,156, "Concatenated Error Correcting System," (1980).
4,241,446, "Apparatus for Performing Single Error Correction and Double Error
Detection," Trubisky, (1980).
4,242,752, "Circuit Arrangement for Coding or Decoding of Binary Data," Herkert,
(1980).

4,249,253, "Memory With Selective Intervention Error Checking and Correcting Device,"
Gentili, et al., (1981).
4,253,182, "Optimization of Error Detection and Correction Circuit," (1981).
4,254,500, "Single Track Digital Recorder and Circuit for Use Therein Having Error
Correction," Brookhart, (1981).
4,255,809, "Dual Redundant Error Detection System for Counters," Hillman, (1981).
4,261,019, "Compatible Digital Magnetic Recording System," McClelland, (1981).
4,271,520, "Synchronizing Technique for an Error Correcting Digital Transmission
System," (1981).
4,275,466, "Block Sync Signal Extracting Apparatus," (1981).
4,276,646, "Method and Apparatus for Detecting Errors in a Data Set," Haggard, et al.,
(1981).
4,276,647, "High Speed Hamming Code Circuit and Method for the Correction of Error
Bursts," Thacker, et al., (1981).
4,277,844, "Method of Detecting and Correcting Errors in Digital Data Storage
Systems," (1981).

4,281,355, "Digital Audio Signal Recorder," Wada, et al., (1981).
4,283,787, "Cyclic Redundancy Data Check Encoding Method and Apparatus," (1981).
4,291,406, "Error Correction on Burst Channels by Sequential Decoding," (1981).
4,292,684, "Format for Digital Tape Recorder," Kelley, et al., (1981).
4,295,218, "Error-Correcting Coding System," Tanner, (1981).
4,296,494, "Error Correction and Detection Systems," Ishikawa, et al., (1981).
4,298,981, "Decoding Shortened Cyclic Block Codes," Byford, (1981).

4,300,231, "Digital System Error Correction Arrangement," Moffitt, (1981).
4,306,305, "PCM Signal Transmitting System With Error Detecting and Correcting
Capability," Doi, et al., (1981).
4,309,721, "Error Coding for Video Disc System," (1982).
4,312,068, "Parallel Generation of Serial Cyclic Redundancy Check," Goss, et al., (1982).
4,312,069, "Serial Encoding-Decoding for Cyclic Block Codes," Ahamed, (1982).
4,317,201, "Error Detecting and Correcting RAM Assembly," Sedalis, (1982).
4,317,202, "Circuit Arrangement for Correction of Data," Markwitz, (1982).
4,319,356, "Self-Correcting Memory System," Kocol, et al., (1982).
4,319,357, "Double Error Correction Using Single Error Correcting Code," Bossen,
(1982).
4,320,510, "Error Data Correcting System," Kojima, (1982).
4,328,580, "Apparatus and an Improved Method for Processing of Digital Information,"
Stockham, Jr., et al., (1982).
4,330,860, "Error Correcting Device," Wada, et al., (1982).
4,334,309, "Error Correcting Code System," Bannon, et al., (1982).
4,335,458, "Memory Incorporating Error Detection and Correction," Krol, (1982).
4,336,611, "Error Correction Apparatus and Method," Bernhardt, et al., (1982).
4,336,612, "Error Correction Encoding and Decoding System," Inoue, et al., (1982).
4,337,458, "Data Encoding Method and System Employing Two-Thirds Code Rate with
Full Word Look-Ahead," Cohn, et al., (1982).
4,344,171, "Effective Error Control Scheme for Satellite Communications," Lin, et al.,
(1982).
4,345,328, "ECC Check Bit Generation Using Through Checking Parity Bits," White,
(1982).
4,355,391, "Apparatus and Method of Error Detection and/or Correction in a Data Set,"
Alsop IV, (1982).
4,355,392, "Burst-Error Correcting System," Doi, et al., (1982).
4,356,566, "Synchronizing Signal Detecting Apparatus," Wada, et al., (1982).
4,357,702, "Error Correcting Apparatus," Chase, et al., (1982).

4,358,848, "Dual Function ECC System with Block Check Byte," Patel, (1982).
4,359,772, "Dual Function Error Correcting System," Patel, (1982).
4,360,916, "Method and Apparatus for Providing for Two Bits-Error Detection and
Correction," Kustedjo, et al., (1982).
4,360,917, "Parity Fault Locating Means," Sindelar, et al., (1982).
4,365,332, "Method and Circuitry for Correcting Errors in Recirculating Memories,"
Rice, (1982).
4,368,533, "Error Data Correcting System," Kojima, (1983).
4,369,510, "Soft Error Rewrite Control System," Johnson, et al., (1983).
4,375,100, "Method and Apparatus for Encoding Low Redundancy Check Words from
Source Data," Tsuji, et al., (1983).
4,377,862, "Method of Error Control in Asynchronous Communications," Koford, et al.,
(1983).
4,377,863, "Synchronization Loss Tolerant Cyclic Error Checking Method and
Apparatus," Legory, et al., (1983).

4,380,071, "Method and Apparatus for Preventing Errors in PCM Signal Processing
Apparatus," Odaka, (1983).
4,380,812, "Refresh and Error Detection and Correction Technique for a Data
Processing System," Ziegler II, et al., (1983).

4,382,300, "Method and Apparatus for Decoding Cyclic Codes Via Syndrome Chains,"
Gupta, (1983).
4,384,353, "Method and Means for Internal Error Check in a Digital Memory,"
Varshney, (1983).
4,388,684, "Apparatus for Deferring Error Detection of Multibyte Parity Encoded Data
Received From a Plurality of Input/Output Data Sources," Nibby, Jr., et al.,
(1983).
4,393,502, "Method and Apparatus for Communicating Digital Information Words by
Error-Correction Encoding," Tanaka, et al., (1983).
4,394,763, "Error-Correcting System," Nagano, et al., (1983).
4,395,768, "Error Correction Device for Data Transfer System," Goethals, et al., (1983).
4,397,022, "Weighted Erasure Codec for the (24,12) Extended Golay Code," Weng, et al.,
(1983).


4,398,292, "Method and Apparatus for Encoding Digital with Two Error-Correcting
Codes," Doi, et al., (1983).
4,402,045, "Multi-Processor Computer System," Krol, (1983).
4,402,080 "Synchronizing Device for a Time Division Multiplex System," Mueller, (1983).
4,404,673, "Error Correcting Network," Yamanouchi, (1983).
4,404,674, "Method and Apparatus for Weighted Majority Decoding of FEC Codes Using
Soft Detection," Rhodes, (1983).
4,404,675, "Frame Detection and Synchronization System for High Speed Digital Transmission Systems," Karchevski, (1983).
4,404,676, "Partitioning Method and Apparatus Using Data-Dependent Boundary-Marking
Code Words," DeBenedictis, (1983).
4,412,329, "Parity Bit Lock-On Method and Apparatus," Yarborough, Jr., (1983).
4,413,339, "Multiple Error Detecting and Correcting System Employing Reed-Solomon
Codes," Riggle, et al., (1983).
4,413,340, "Error Correctable Data Transmission Method," Odaka, et aI., (1983).
4,414,667, "Forward Error Correcting Apparatus," Bennett, (1983).
4,417,339, "Fault Tolerant Error Correction Circuit," Cantarella, (1983).
4,418,410, "Error Detection and Correction Apparatus for a Logic Array," Goetze, et aI.,
(1983).
4,425,644, "PCM Signal System," Scholz, (1984).
4,425,645, "Digital Data Transmission with Parity Bit Word Lock-On," Weaver, et al.,
(1984).
4,425,646, "Input Data Synchronizing Circuit," Kinoshita, et al., (1984).
4,429,390, "Digital Signal Transmitting System," Sonoda, et al., (1984).
4,429,391, "Fault and Error Detection Arrangement," Lee, (1984).
4,433,348, "Apparatus and Method for Requiring Proper Synchronization of a Digital
Data Flow," Stockham, Jr., et al., (1984).
4,433,415, "PCM Signal Processor," Kojima, (1984).
4,433,416, "PCM Signal Processor," Kojima, (1984).
4,434,487, "Disk Format for Secondary Storage System," Rubinson, et al., (1984).
4,435,807, "Orchard Error Correction System," Scott, et al., (1984).

4,441,184, "Method and Apparatus for Transmitting a Digital Signal," Sonoda, et al.,
(1984).
4,447,903, "Forward Error Correction Using Coding and Redundant Transmission,"
Sewerinson, (1984).
4,450,561, "Method and Device for Generating Check Bits Protecting a Data Word,"
Gotze, et al., (1984).
4,450,562, "Two Level Parity Error-Correction System," Wacyk, et al., (1984).
4,451,919, "Digital Signal Processor for Use in Recording and/or Reproducing
Equipment," Wada, et al., (1984).

4,451,921, "PCM Signal Processing Circuit," Odaka, (1984).
4,453,248, "Fault Alignment Exclusion Method to Prevent Realignment of Previously
Paired Memory Defects, " Ryan, (1984).
4,453,250, "PCM Signal Processing Apparatus," Hoshimi, et al., (1984).
4,453,251, "Error-Correcting Memory with Low Storage Overhead and Fast Correction
Mechanism," Osman, (1984).
4,454,600, "Parallel Cyclic Redundancy Checking Circuit," LeGresley, (1984).
4,454,601, "Method and Apparatus for Communication of Information and Error
Checking," Helms, et al., (1984).
4,455,655, "Real Time Fault Tolerant Error Correction Mechanism," Galen, et al., (1984).
4,456,996, "Parallel/Series Error Correction Circuit," Haas, et al., (1984).
4,458,349, "Method for Storing Data Words in Fault Tolerant Memory to Recover
Uncorrectable Errors," Aichelmann, Jr., et al. (1984).
4,459,696, "PCM Signal Processor with Error Detection and Correction Capability
Provided by a Data Error Pointer," Kojima, (1984).
4,462,101, "Maximum Likelihood Error Correcting Technique," Yasuda, etal., (1984).
4,462,102, "Method and Apparatus for Checking the Parity of Disassociated Bit Groups,"
Povlick, (1984).
4,464,752, "Multiple Event Hardened Core Memory, " Schroeder, et al., (1984).
4,464,753, "Two Bit Symbol SECIDED Code," Chen, (1984).
4,464,754, "Memory System with Redundancy for Error Avoidance," Stewart, et al.,
(1984).


4,464,755, "Memory System with Error Detection and Correction," Stewart, et al., (1984).
4,468,769, "Error Correcting System for Correcting Two or Three Simultaneous Errors in a Code," Koga, (1984).
4,468,770, "Data Receivers Incorporating Error Code Detection and Decoding," Metcalf, et al., (1984).
4,472,805, "Memory System with Error Storage," Wacyk, et al., (1984).
4,473,902, "Error Correcting Code Processing System," Chen, (1984).
4,476,562, "Method of Error Correction," Sako, et al., (1984).
4,477,903, "Error Correction Method for the Transfer of Blocks of Data Bits, a Device for Performing such a Method, a Decoder for Use with such a Method, and a Device Comprising such a Decoder," Schouhamer Immink, et al., (1984).
4,494,234, "On-The-Fly Multibyte Error Correcting System," Patel, (1985).
4,495,623, "Digital Data Storage in Video Format," George, et al., (1984).
4,497,058, "Method of Error Correction," Sako, et al., (1985).
4,498,174, "Parallel Cyclic Redundancy Checking Circuit," LeGresley, (1985).
4,498,175, "Error Correcting System," Nagumo, et al., (1985).
4,498,178, "Data Error Correction Circuit," Ohhashi, (1985).
4,502,141, "Circuits for Checking Bit Errors in a Received BCH Code Succession by the Use of Primitive and Non-Primitive Polynomials," Kuki, (1985).
4,504,948, "Syndrome Processing Unit for Multibyte Error Correcting Systems," Patel, (1985).
4,506,362, "Systematic Memory Error Detection and Correction Apparatus and Method," Morley, (1985).
4,509,172, "Double Error Correction - Triple Error Detection Code," Chen, (1985).
4,512,020, "Data Processing Device for Processing Multiple-Symbol Data-Words Based on a Symbol-Correcting Code and Having Multiple Operating Modes," Krol, et al., (1985).
4,519,058, "Optical Disc Player," Tsurushima, et al., (1985).
4,525,838, "Multibyte Error Correcting System Involving a Two-Level Code Structure," Patel, (1985).


4,525,840, "Method and Apparatus for Maintaining Word Synchronization After a Synchronizing Word Dropout in Reproduction of Recorded Digitally Encoded Signals," Heinz, et al., (1985).
4,527,269, "Encoder Verifier," Wood, et al., (1985).
4,538,270, "Method and Apparatus for Translating a Predetermined Hamming Code to an Expanded Class of Hamming Codes," Goodrich, Jr., et al., (1985).
4,541,091, "Code Error Detection and Correction Method and Apparatus," Nishida, et al., (1985).
4,541,092, "Method for Error Correction," Sako, et al., (1985).
4,541,093, "Method and Apparatus for Error Correction," Furuya, et al., (1985).
4,544,968, "Sector Servo Seek Control," Anderson, et al., (1985).
4,546,474, "Method of Error Correction," Sako, et al., (1985).
4,549,298, "Detecting and Correcting Errors in Digital Audio Signals," Creed, et al., (1985).
4,554,540, "Signal Format Detection Circuit for Digital Radio Paging Receiver," Mori, et al., (1985).
4,555,784, "Parity and Syndrome Generation for Error Detection and Correction in Digital Communication Systems," Wood, (1985).
4,556,977, "Decoding of BCH Double Error Correction - Triple Error Detection (DEC-TED) Codes," Olderdissen, et al., (1985).
4,559,625, "Interleavers for Digital Communications," Berlekamp, et al., (1985).
4,562,577, "Shared Encoder/Decoder Circuits for Use with Error Correction Codes of an Optical Disk System," Glover, et al., (1985).
4,564,941, "Error Detection System," Woolley, et al., (1986).
4,564,944, "Error Correcting Scheme," Arnold, et al., (1986).
4,564,945, "Error-Correction Code for Digital Data on Video Disc," Glover, et al., (1986).
4,566,105, "Coding, Detecting or Correcting Transmission Error System," Oisel, et al., (1986).
4,567,594, "Reed-Solomon Error Detecting and Correcting System Employing Pipelined Processors," Deodhar, (1986).
4,569,051, "Methods of Correcting Errors in Binary Data," Wilkinson, (1986).
4,573,171, "Sync Detect Circuit," McMahon, Jr., et al., (1986).

4,583,225, "Reed-Solomon Code Generator," Yamada, et al., (1986).
4,584,686, "Reed-Solomon Error Correction Apparatus," Fritze, (1986).
4,586,182, "Source Coded Modulation System," Gallager, (1986).
4,586,183, "Correcting Errors in Binary Data," Wilkinson, (1986).
4,589,112, "System for Multiple Error Detection with Single and Double Bit Error
Correction," Karim, (1986).
4,592,054, "Decoder with Code Error Correcting Function," Namekawa, et al., (1986).
4,593,392, "Error Correction Circuit for Digital Audio Signal," Kouyama, (1986).
4,593,393, "Quasi Parallel Cyclic Redundancy Checker," Mead, et al., (1986).
4,593,394, "Method Capable of Simultaneously Decoding Two Reproduced Sequences," Tomimitsu, (1986).
4,593,395, "Error Correction Method for the Transfer of Blocks of Data Bits, a Device for Performing such a Method, a Decoder for Use with such a Method, and a Device Comprising such a Decoder," Schouhamer Immink, et al., (1986).
4,597,081, "Encoder Interface with Error Detection and Method Therefor," Tassone, (1986).
4,597,083, "Error Detection and Correction in Digital Communication Systems," Stenerson, (1986).
4,598,402, "System for Treatment of Single Bit Error in Buffer Storage Unit," Matsumoto, et al., (1986).
4,604,747, "Error Correcting and Controlling System," Onishi, et al., (1986).
4,604,750, "Pipeline Error Correction," Manton, et al., (1986).
4,604,751, "Error Logging Memory System for Avoiding Miscorrection of Triple Errors," Aichelmann, Jr., et al., (1986).
4,606,026, "Error-Correcting Method and Apparatus for the Transmission of Word-Wise Organized Data," Baggen, (1986).
4,607,367, "Correcting Errors in Binary Data," Ive, et al., (1986).
4,608,687, "Bit Steering Apparatus and Method for Correcting Errors in Stored Data, Storing the Address of the Corrected Data and Using the Address to Maintain a Correct Data Condition," Dutton, (1986).
4,608,692, "Error Correction Circuit," Nagumo, et al., (1986).


4,617,664, "Error Correction for Multiple Bit Output Chips," Aichelmann, Jr., et al., (1986).
4,623,999, "Look-up Table Encoder for Linear Block Codes," Patterson, (1986).
4,627,058, "Code Error Correction Method," Moriyama, (1986).
4,630,271, "Error Correction Method and Apparatus for Data Broadcasting System," Yamada, (1986).
4,630,272, "Encoding Method for Error Correction," Fukami, et al., (1986).
4,631,725, "Error Correcting and Detecting System," Takamura, et al., (1986).
4,633,471, "Error Detection and Correction in an Optical Storage System," Perera, et al., (1986).
4,637,023, "Digital Data Error Correction Method and Apparatus," Lounsbury, et al., (1987).
4,639,915, "High Speed Redundancy Processor," Bosse, (1987).
4,642,808, "Decoder for the Decoding of Code Words which are Blockwise Protected Against the Occurrence of a Plurality of Symbol Errors within a Block by Means of a Reed-Solomon Code, and Reading Device for Optically Readable Record Carriers," Baggen, (1987).
4,646,301, "Decoding Method and System for Doubly-Encoded Reed-Solomon Codes," Okamoto, et al., (1987).
4,646,303, "Data Error Detection and Correction Circuit," Narusawa, et al., (1987).


PERIODICALS

Abramson, N., "Cascade Decoding of Cyclic Product Codes." IEEE Trans. on Comm.
Tech., Com-16 (3), 398-402 (June 1968).
Alekar, S. V., "M6800 Program Performs Cyclic Redundancy Checks." Electronics, 167 (Dec. 1979).
Bahl, L. R. and R. T. Chien, "Single- and Multiple-Burst-Correcting Properties of a Class
of Cyclic Product Codes." IEEE Trans. on Info. Theory, IT-17 (5), 594-600 (Sept.
1971).
Bartee, T. C. and D. I. Schneider, "Computation with Finite Fields." Info. and Control,
6, 79-98 (1963).
Basham, G. R., "New Error-Correcting Technique for Solid-State Memories Saves
Hardware." Computer Design, 110-113 (Oct. 1976).
Baumert, L. D. and R. J. McEliece, "Soft Decision Decoding of Block Codes." DSN
Progress Report 42-47, 60-64 (July/Aug. 1978).
Beard, Jr., J., "Computing in GF(q)." Mathematics of Comp., 28 (128), 1159-1166 (Oct.
1974).
Berlekamp, E. R., "On Decoding Binary Bose-Chaudhuri-Hocquenghem Codes." IEEE Trans.
on Info. Theory, IT-11 (4), 577-579 (Oct. 1965).
Berlekamp, E. R., "The Enumeration of Information Symbols in BCH Codes." The Bell
Sys. Tech. J., 1861-1880 (Oct. 1967).
Berlekamp, E. R., "Factoring Polynomials Over Finite Fields." The Bell Sys. Tech. J.,
1853-1859 (Oct. 1967).
Berlekamp, E. R., "Factoring Polynomials Over Large Finite Fields." Mathematics of
Comp., 24 (111), 713-735 (July 1970).
Berlekamp, E. R., "Algebraic Codes for Improving the Reliability of Tape Storage."
National Computer Conference, 497-499 (1975).
Berlekamp, E. R., "The Technology of Error-Correcting Codes." Proceedings of the IEEE,
68 (5), 564-593 (May 1980).
Berlekamp, E. R. and J. L. Ramsey, "Readable Erasures Improve the Performance of
Reed-Solomon Codes." IEEE Trans. on Info. Theory, IT-24 (5), 632-633 (Sept.
1978).
Berlekamp, E. R., et al., "On the Solution of Algebraic Equations Over Finite Fields."
Info. and Control, 10, 553-564 (1967).
Blum, R., "More on Checksums." Dr. Dobb's J., (69), 44-45 (July 1982).
Bossen, D. C., "b-Adjacent Error Correction." IBM J. Res. Develop., 402-408 (July 1970).


Bossen, D. C. and M. Y. Hsiao, "A System Solution to the Memory Soft Error Problem."
IBM J. Res. Develop., 24 (3), 390-397 (May 1980).
Bossen, D. C. and S. S. Yau, "Redundant Residue Polynomial Codes." Info. and Control,
13, 597-618 (1968).
Boudreau, P. E. and R. F. Steen, "Cyclic Redundancy Checking by Program." Fall Joint
Computer Conference, 9-15 (1971).
Brown, D. T. and F. F. Sellers, Jr., "Error Correction for IBM 800-Bit-Per-Inch Magnetic
Tape." IBM J. Res. Develop., 384-389 (July 1970).
Bulthuis, K., et al., "Ten Billion Bits on a Disk." IEEE Spectrum, 18-33 (Aug. 1979).
Burton, H. O., "Some Asymptotically Optimal Burst-Correcting Codes and Their Relation
to Single-Error-Correcting Reed-Solomon Codes." IEEE Trans. on Info. Theory,
IT-17 (1), 92-95 (Jan. 1971).
Burton, H. O., "Inversionless Decoding of Binary BCH Codes." IEEE Trans. on Info.
Theory, IT-17 (4), 464-466 (July 1971).
Carter, W. C. and C. E. McCarthy, "Implementation of an Experimental Fault-Tolerant
Memory System." IEEE Trans. on Computers, C-25 (6), 557-568 (June 1976).
Chen, C. L. and R. A. Rutledge, "Error Correcting Codes for Satellite Communication
Channels." IBM J. Res. Develop., 168-175 (Mar. 1976).
Chien, R. T., "Cyclic Decoding Procedures for Bose-Chaudhuri-Hocquenghem Codes."
IEEE Trans. on Info. Theory, 357-363 (Oct. 1963).
Chien, R. T., "Block-Coding Techniques for Reliable Data Transmission." IEEE Trans. on
Comm. Tech., Com-19 (5), 743-751 (Oct. 1971).
Chien, R. T., "Memory Error Control: Beyond Parity." IEEE Spectrum, 18-23 (July 1973).
Chien, R. T. and B. D. Cunningham, "Hybrid Methods for Finding Roots of a
Polynomial-With Application to BCH Decoding." IEEE Trans. on Info. Theory,
329-335 (Mar. 1969).
Chien, R. T., et al., "Correction of Two Erasure Bursts." IEEE Trans. on Info. Theory,
186-187 (Jan. 1969).
Comer, E., "Hamming's Error Corrections." Interface Age, 142-143 (Feb. 1978).
Davida, G. I. and J. W. Cowles, "A New Error-Locating Polynomial for Decoding of BCH
Codes." IEEE Trans. on Info. Theory, 235-236 (Mar. 1975).
Delsarte, P., "On Subfield Subcodes of Modified Reed-Solomon Codes." IEEE Trans. on
Info. Theory, 575-576 (Sept. 1975).
Doi, T. T., et al., "A Long-Play Digital Audio Disk System." Journal of the Audio
Eng. Soc., 27 (12), 975-981 (Dec. 1979).


Duc, N. Q., "On the Lin-Weldon Majority-Logic Decoding Algorithm for Product Codes."
IEEE Trans. on Info. Theory, 581-583 (July 1973).
Duc, N. Q. and L. V. Skattebol, "Further Results on Majority-Logic Decoding of Product
Codes." IEEE Trans. on Info. Theory, 308-310 (Mar. 1972).
Forney, Jr., G. D., "On Decoding BCH Codes." IEEE Trans. on Info. Theory, IT-11 (4),
549-557 (Oct. 1965).
Forney, Jr., G. D., "Coding and Its Application in Space Communications." IEEE
Spectrum, 47-58 (June 1970).
Forney, Jr., G. D., "Burst-Correcting Codes for the Classic Bursty Channel." IEEE Trans.
on Comm. Tech., Com-19 (5), 772-781 (Oct. 1971).
Gorog, E., "Some New Classes of Cyclic Codes Used for Burst-Error Correction."
IBM J., 102-111 (Apr. 1963).
Greenberger, H., "An Iterative Algorithm for Decoding Block Codes Transmitted Over a
Memoryless Channel." DSN Progress Report 42-47, 51-59 (July/Aug. 1978).
Greenberger, H. J., "An Efficient Soft Decision Decoding Algorithm for Block Codes."
DSN Progress Report 42-50, 106-109 (Jan./Feb. 1979).
Gustavson, F. G., "Analysis of the Berlekamp-Massey Linear Feedback Shift-Register
Synthesis Algorithm." IBM J. Res. Develop., 204-212 (May 1976).
Gustlin, D. P. and D. D. Prentice, "Dynamic Recovery Techniques Guarantee System
Reliability." Fall Joint Computer Conference, 1389-1397 (1968).
Hartmann, C. R. P., "A Note on the Decoding of Double-Error-Correcting Binary BCH
Codes of Primitive Length." IEEE Trans. on Info. Theory, 765-766 (Nov. 1971).
Hellman, M. E., "Error Detection in the Presence of Synchronization Loss." IEEE Trans.
on Comm., 538-539 (May 1975).
Herff, A. P., "Error Detection and Correction for Mag Tape Recording." Digital Design,
16-18 (July 1978).
Hindin, H. J., "Error Detection and Correction Cleans Up Wide-Word Memory Act."
Electronics, 153-162 (June 1982).
Hodges, D. A., "A Review and Projection of Semiconductor Components for Digital
Storage." Proceedings of the IEEE, 63 (8), 1136-1147 (Aug. 1975).
Hong, S. J. and A. M. Patel, "A General Class of Maximal Codes for Computer
Applications." IEEE Trans. on Computers, C-21 (12), 1322-1331 (Dec. 1972).
Hsiao, M. Y. and K. Y. Sih, "Serial-to-Parallel Transformation of Linear-Feedback
Shift-Register Circuits." IEEE Trans. on Elec. Comp., 738-740 (Dec. 1964).
Hsu, H. T., et al., "Error-Correcting Codes for a Compound Channel." IEEE Trans. on
Info. Theory, IT-14 (1), 135-139 (Jan. 1968).

Imamura, K., "A Method for Computing Addition Tables in GF(p^n)." IEEE Trans. on Info.
Theory, IT-26 (3), 367-368 (May 1980).
Iwadare, Y., "A Class of High-Speed Decodable Burst-Correcting Codes." IEEE Trans. on
Info. Theory, IT-18 (6), 817-821 (Nov. 1972).
Johnson, R. C., "Three Ways of Correcting Erroneous Data." Electronics, 121-134 (May
1981).
Justesen, J., "A Class of Constructive Asymptotically Good Algebraic Codes." IEEE Trans.
Info. Theory, IT-18, 652-656 (Sept. 1972).
Justesen, J., "On the Complexity of Decoding Reed-Solomon Codes." IEEE Trans. on Info.
Theory, 237-238 (Mar. 1976).
Kasami, T., and S. Lin, "On the Construction of a Class of Majority-Logic Decodable
Codes." IEEE Trans. on Info. Theory, IT-17 (5), 600-610 (Sept. 1971).
Kobayashi, H., "A Survey of Coding Schemes for Transmission or Recording of Digital
Data." IEEE Trans. on Comm. Tech., Com-19 (6), 1087-1100 (Dec. 1971).
Koppel, R., "Ram Reliability in Large Memory Systems-Improving MTBF With ECC."
Computer Design, 196-200 (Mar. 1979).
Korodey, R. and D. Raaum, "Purge Your Memory Array of Pesky Error Bits." EDN,
153-158 (May 1980).
Laws, Jr., B. A. and C. K. Rushforth, "A Cellular-Array Multiplier for GF(2^m)." IEEE
Trans. on Computers, 1573-1578 (Dec. 1971).
Leung, K. S. and L. R. Welch, "Erasure Decoding in Burst-Error Channels." IEEE Trans.
on Info. Theory, IT-27 (2), 160-167 (Mar. 1981).
Levine, L. and W. Meyers, "Semiconductor Memory Reliability With Error Detecting and
Correcting Codes." Computer, 43-50 (Oct. 1976).
Levitt, K. N. and W. H. Kautz, "Cellular Arrays for the Parallel Implementation of
Binary Error-Correcting Codes." IEEE Trans. on Info. Theory, IT-15 (5), 597-607
(Sept. 1969).
Liccardo, M. A., "Polynomial Error Detecting Codes and Their Implementation." Computer
Design, 53-59 (Sept. 1971).
Lignos, D., "Error Detection and Correction in Mass Storage Equipment." Computer
Design, 71-75 (Oct. 1972).
Lim, R. S. and J. E. Korpi, "Unicon Laser Memory: Interlaced Codes for Multiple-Burst-Error Correction." Wescon, 1-6 (1977).
Lim, R. S., "A (31,15) Reed-Solomon Code for Large Memory Systems." National
Computer Conf., 205-208 (1979).


Lin, S. and E. J. Weldon, "Further Results on Cyclic Product Codes." IEEE Trans. on
Info. Theory, IT-16 (4), 452-459 (July 1970).
Liu, K. Y., "Architecture for VLSI Design of Reed-Solomon Encoders." IEEE Transactions
on Computers, C-31 (2), 170-175 (Feb. 1982).
Locanthi, B., et al., "Digital Audio Technical Committee Report." J. Audio Eng. Soc.,
29 (1/2), 56-78 (Jan./Feb. 1981).
Lucy, D., "Choose the Right Level of Memory-Error Protection." Electronics Design,
ss37-ss39 (Feb. 1982).
Maholick, A. W. and R. B. Freeman, "A Universal Cyclic Division Circuit." Fall Joint
Computer Conf., 1-8 (1971).
Mandelbaum, D., "A Method of Coding for Multiple Errors." IEEE Trans. on Info. Theory,
518-521 (May 1968).
Mandelbaum, D., "On Decoding of Reed-Solomon Codes." IEEE Trans. on Info. Theory,
IT-17 (6), 707-712 (Nov. 1971).
Mandelbaum, D., "Construction of Error Correcting Codes by Interpolation." IEEE Trans.
on Info. Theory, IT-25 (1), 27-35 (Jan. 1979).
Mandelbaum, D. M., "Decoding of Erasures and Errors for Certain RS Codes by
Decreased Redundancy." IEEE Trans. on Info. Theory, IT-28 (2), 330-335 (Mar.
1982).
Massey, J. L., "Shift-Register Synthesis and BCH Decoding." IEEE Trans. on Info.
Theory, IT-15 (1), 122-127, (Jan. 1969).
Matt, H. J. and J. L. Massey, "Determining the Burst-Correcting Limit of Cyclic Codes."
IEEE Trans. on Info. Theory, IT-26 (3), 289-297 (May 1980).
Miller, R. L. and L. J. Deutsch, "Conceptual Design for a Universal Reed-Solomon
Decoder." IEEE Trans. on Comm., Com-29 (11), 1721-1722 (Nov. 1981).
Miller, R. L., et al., "A Reed-Solomon Decoding Program for Correcting Both Errors and
Erasures." DSN Progress Report 42-53, 102-107 (July/Aug. 1979).
Miller, R. L., et al., "An Efficient Program for Decoding the (255, 223) Reed-Solomon
Code Over GF(2^8) with Both Errors and Erasures, Using Transform Decoding." IEEE
Proc., 127 (4), 136-142 (July 1980).
Morris, D., "ECC Chip Reduces Error Rate in Dynamic Rams." Computer Design, 137-142
(Oct. 1980).
Naga, M. A. E., "An Error Detecting and Correcting System for Optical Memory."
Cal. St. Univ., Northridge, (Feb. 1982).
Oldham, I. B., et al., "Error Detection and Correction in a Photo-Digital Storage
System." IBM J. Res. Develop., 422-430 (Nov. 1968).


Patel, A. M., "A Multi-Channel CRC Register." Spring Joint Computer Conf., 11-14
(1971).
Patel, A. M., "Error Recovery Scheme for the IBM 3850 Mass Storage System." IBM J.
Res. Develop., 24 (1), 32-42 (Jan. 1980).
Patel, A. M. and S. J. Hong, "Optimal Rectangular Code for High Density Magnetic
Tapes." IBM J. Res. Develop., 579-588 (Nov. 1974).
Peterson, W. W., "Encoding and Error-Correction Procedures for the Bose-Chaudhuri
Codes." IRE Trans. on Info. Theory, 459-470 (Sept. 1960).
Peterson, W. W. and D. T. Brown, "Cyclic Codes for Error Detection." Proceedings of the
IRE, 228-235 (Jan. 1961).
Plum, T., "Integrating Text and Data Processing on a Small System." Datamation, 165-175
(June 1978).
Pohlig, S. C. and M. E. Hellman, "An Improved Algorithm for Computing Logarithms
Over GF(p) and Its Cryptographic Significance." IEEE Trans. on Info. Theory, IT-24
(1), 106-110 (Jan. 1978).
Poland, Jr., W. B., et al., "Archival Performance of NASA GFSC Digital Magnetic Tape."
National Computer Conf., M68-M73 (1973).
Pollard, J. M., "The Fast Fourier Transform in a Finite Field." Mathematics of Computation, 23 (114), (Apr. 1971).
Promhouse, G. and S. E. Tavares, "The Minimum Distance of All Binary Cyclic Codes of
Odd Lengths from 69 to 99." IEEE Trans. on Info. Theory, IT-24 (4), 438-442 (July
1978).
Reddy, S. M., "On Decoding Iterated Codes." IEEE Trans. on Info. Theory, IT-16 (5),
624-627 (Sept. 1970).
Reddy, S. M. and J. P. Robinson, "Random Error and Burst Correction by Iterated
Codes." IEEE Trans. on Info. Theory, IT-18 (1), 182-185 (Jan. 1972).
Reed, I. S. and T. K. Truong, "The Use of Finite Fields to Compute Convolutions."
IEEE Trans. on Info. Theory, IT-21 (2), 208-213 (Mar. 1975).
Reed, I. S. and T. K. Truong, "Complex Integer Convolutions Over a Direct Sum of
Galois Fields." IEEE Trans. on Info. Theory, IT-21 (6), 657-661 (Nov. 1975).
Reed, I. S. and T. K. Truong, "Simple Proof of the Continued Fraction Algorithm for
Decoding Reed-Solomon Codes." Proc. IEEE, 125 (12), 1318-1320 (Dec. 1978).
Reed, I. S., et al., "Simplified Algorithm for Correcting Both Errors and Erasures of
Reed-Solomon Codes." Proc. IEEE, 126 (10), 961-963 (Oct. 1979).
Reed, I. S., et al., "The Fast Decoding of Reed-Solomon Codes Using Fermat Theoretic
Transforms and Continued Fractions." IEEE Trans. on Info. Theory, IT-24 (1),
100-106 (Jan. 1978).

Reed, I. S., et al., "Further Results on Fast Transforms for Decoding Reed-Solomon
Codes Over GF(2^n) for n=4,5,6,8." DSN Progress Report 42-50, 132-155 (Jan./Feb.
1979).
Reno, C. W. and R. J. Tarzaiski, "Optical Disc Recording at 50 Megabits/Second."
SPIE, 177, 135-147 (1979).
Rickard, B., "Automatic Error Correction in Memory Systems." Computer Design, 179-182
(May 1976).
Ringkjob, E. T., "Achieving a Fast Data-Transfer Rate by Optimizing Existing Technology." Electronics, 86-91 (May 1975).
Sanyal, S. and K. N. Venkataraman, "Single Error Correcting Code Maximizes Memory
System Efficiency." Computer Design, 175-184 (May 1978).
Sloane, N. J. A., "A Survey of Constructive Coding Theory, and a Table of Binary Codes
of Highest Known Rate." Discrete Mathematics, 3, 265-294 (1972).
Sloane, N. J. A., "A Simple Description of an Error-Correcting Code for High-Density
Magnetic Tape." The Bell System Tech. J., 55 (2), 157-165 (Feb. 1976).
Steen, R. F., "Error Correction for Voice Grade Data Communication Using a
Communication Processor." IEEE Trans. on Comm., Com-22 (10), 1595-1606 (Oct.
1974).
Stiffler, J. J., "Comma-Free Error-Correcting Codes." IEEE Trans. on Info. Theory,
107-112 (Jan. 1965).
Stone, H. S., "Spectrum of Incorrectly Decoded Bursts for Cyclic Burst Error Codes."
IEEE Trans. on Info. Theory, IT-17 (6), 742-748 (Nov. 1971).
Stone, J. J., "Multiple Burst Error Correction." Info. and Control, 4, 324-331 (1961).
Stone, J. J., "Multiple-Burst Error Correction with the Chinese Remainder Theorem."
J. Soc. Indust. Appl. Math., 11 (1), 74-81 (Mar. 1963).
Sundberg, C. E. W., "Erasure and Error Decoding for Semiconductor Memories." IEEE
Trans. on Computers, C-27 (8), 696-705 (Aug. 1978).
Swanson, R., "Understanding Cyclic Redundancy Codes." Computer Design, 93-99 (Nov.
1975).
Tang, D. T. and R. T. Chien, "Coding for Error Control." IBM Syst. J., (1), 48-83 (1969).
Truong, T. K. and R. L. Miller, "Fast Technique for Computing Syndromes of B.C.H. and
Reed-Solomon Codes." Electronics Letters, 15 (22), 720-721 (Oct. 1979).
Ullman, J. D., "On the Capabilities of Codes to Correct Synchronization Errors." IEEE
Trans. on Info. Theory, IT-13 (1), 95-105 (Jan. 1967).
Ungerboeck, G., "Channel Coding With Multilevel/Phase Signals." IEEE Trans. on Info.
Theory, IT-28 (1), 55-67 (Jan. 1982).
Van Der Horst, J. A., "Complete Decoding of Triple-Error-Correcting Binary BCH Codes."
IEEE Trans. on Info. Theory, IT-22 (2), 138-147 (Mar. 1976).
Wainberg, S., "Error-Erasure Decoding of Product Codes." IEEE Trans. on Info. Theory,
821-823 (Nov. 1972).
Wall, E. L., "Applying the Hamming Code to Microprocessor-Based Systems." Electronics,
103-110 (Nov. 1979).
Welch, L. R. and R. A. Scholtz, "Continued Fractions and Berlekamp's Algorithm." IEEE
Trans. on Info. Theory, IT-25 (1), 19-27 (Jan. 1979).
Weldon, Jr., E. J., "Decoding Binary Block Codes on Q-ary Output Channels." IEEE
Trans. on Info. Theory, IT-17 (6), 713-718 (Nov. 1971).
Weng, L. J., "Soft and Hard Decoding Performance Comparisons for BCH Codes." IEEE,
25.5.1-25.5.5 (1979).
White, G. M., "Software-Based Single-Bit I/O Error Detection and Correction Scheme."
Computer Design, 130-146 (Sept. 1978).
Whiting, J. S., "An Efficient Software Method for Implementing Polynomial Error
Detection Codes." Computer Design, 73-77 (Mar. 1975).
Willett, M., "The Minimum Polynomial for a Given Solution of a Linear Recursion."
Duke Math. J., 39 (1), 101-104 (Mar. 1972).
Willett, M., "The Index of an M-Sequence." Siam J. Appl. Math., 25 (1), 24-27 (July
1973).
Willett, M., "Matrix Fields Over GF(Q)." Duke Math. J., 40 (3), 701-704 (Sept. 1973).
Willett, M., "Cycle Representations for Minimal Cyclic Codes." IEEE Trans. on Info.
Theory, 716-718 (Nov. 1975).

Willett, M., "On a Theorem of Kronecker." The Fibonacci Quarterly, 14 (1), 27-30 (Feb.
1976).
Willett, M., "Characteristic m-Sequences." Math. of Computation, 30 (134), 306-311 (Apr.
1976).
Willett, M., "Factoring Polynomials Over a Finite Field." Siam J. Appl. Math., 35 (2),
333-337 (Sept. 1978).
Willett, M., "Arithmetic in a Finite Field." Math. of Computation, 35 (152), 1353-1359
(Oct. 1980).

Wimble, M., "Hamming Error Correcting Code." BYTE Pub. Inc., 180-182 (Feb. 1979).


Wolf, J., "Nonbinary Random Error-Correcting Codes." IEEE Trans. on Info. Theory,
236-237 (Mar. 1970).
Wolf, J. K., et al., "On the Probability of Undetected Error for Linear Block Codes."
IEEE Trans. on Comm., Com-30 (2), 317-324 (Feb. 1982).
Wong, J., et al., "Software Error Checking Procedures for Data Communication
Protocols." Computer Design, 122-125 (Feb. 1979).
Wu, W. W., "Applications of Error-Coding Techniques to Satellite Communications."
Comsat Tech. Review, 1 (1), 183-219 (Fall 1971).
Wyner, A. D., "A Note on a Class of Binary Cyclic Codes Which Correct Solid-Burst
Errors." IBM J., 68-69 (Jan. 1964).
Yencharis, L., "32-Bit Correction Code Reduces Errors on Winchester Disks." Electronics
Design, 46-47 (Mar. 1981).
Ziv, J., "Further Results on the Asymptotic Complexity of an Iterative Coding Scheme."
IEEE Trans. on Info. Theory, IT-12 (2), 168-171 (Apr. 1966).


INDEX
Accuracy, data, 230-239, 274, 372
Alarm, 153-156, 404
Antilogarithm, 90-92, 104, 118, 125-134, 351, 358, 360, 362, 376-398
b-Adjacent codes, 205-212
BCH code, 121, 145-157, 377, 403
Bit-error rate, 216, 403
Berlekamp's iterative algorithm, 148, 165-166, 174
Binary symmetric channel, 404
Burst error correction, 56, 61-64, 135-144, 231-232, 242
Burst error rate, 233, 404
Burst length, 54, 61, 65, 70-81, 85, 159, 195, 213, 218, 251, 281, 370, 404
Byte serial, 137-139, 243-249
Catastrophic error probability (Pc), 404
Characteristic of a field, 88, 91, 97, 103, 404, 409
Check bits, 4-6, 50-67, 282-283, 294-298, 300-301, 309, 368, 370, 404-406, 418
Chien search, 121-124, 148, 167, 370
Chinese Remainder Method, 11-17, 46, 132, 210, 274-280
Code
Block, 404
Cyclic, 82-83, 406
Linear, 410
RLL, 274
Systematic, 3, 417
Code polynomial, 64, 231-232, 405
Code rate, 405
Code vector, 405
Codeword, 4, 6, 50-51, 82, 145-171, 186-204, 256-273, 405
Computer-generated codes, 63-64, 140-144, 231-233, 274, 293
Concatenation, 50, 405
Convolutional code, 405
Correctable error, 5-6, 62-65, 82, 161, 166, 170, 175, 194, 197, 232, 291, 306, 308, 310, 325, 365-366, 405
Corrected error rate, 405
Correction algorithm, 56-63, 74-81, 178-179, 185, 190, 195, 199, 258-272, 280, 283, 289, 290-292, 306-324
Correction span, 61-62, 65, 135, 139-144, 231-239, 274, 278-279, 281, 286, 293-294, 306, 309, 311-312, 406
CRC codes, 49-55
Cyclic redundancy check (CRC), 227, 231, 237-238, 271-273, 403, 406


Decoded bit-error rate, 403
Decoded error rates, 215-222
Decoding, 4, 38, 57, 60, 67-82, 121, 136-140, 147, 158, 164-170, 181, 185, 193-204, 231, 250-252, 271, 279, 281, 371
Defect, 406
Defect event, 406
Defect event rate (Pe), 406
Defect skipping, 224-225, 228
Detectable error slip,
Detection span, 53-54, 62-65, 135, 140-144, 231-236, 278-279, 281, 286, 293-294, 371, 406
Diagnostics, 325, 364-369
Discrete memoryless channel, 407
Division circuits, 22-24, 31
Double-error correction, 403
Double-error detection, 54-55, 61, 286, 289, 403
EDAC, 403
Elementary symmetric functions, 407
Erasure, 195-204, 214, 222, 228, 271-272, 275, 372
Erasure, definition of, 407
Erasure correction, 195-196, 222, 228, 271-272, 407
Erasure locator polynomial, 407
Erasure pointer, 195-196, 222, 228, 271-272, 275, 407
Errata, definition of, 408
Errata locator polynomial, 408
Erratum, 408
Error burst, 408
Error correction, 4-6, 146, 170, 209-210, 223-224, 227, 230, 242, 257-261, 263, 264, 271
Single bit, 56-61, 274, 289
Burst, 61-63, 135-144, 231-232, 242
Multiple bit, 145-157
Multiple symbol, 158-204, 274-276, 350, 373
Error correcting code, 275
Error, definition of, 408
Error detection, 49-55, 202, 236-237, 241, 250, 256, 257-258, 260, 267-269, 271, 274-275, 286, 289, 294, 304, 370
Error displacement, 57-59, 208, 210-211, 231, 283, 289-290, 305-324
Error location, 145-149, 152, 164-176, 195-198, 214, 274, 287, 289, 305, 355, 357, 370, 372, 408
Error location vector, 147-148, 164, 167, 408
Error locator polynomial, 408
Error logging, 366
Error rates, 195, 213-222, 228, 233, 236, 240
Error values, 61, 138-139, 152, 164-170, 173, 176, 196, 208, 351, 355, 370, 408
Euclidean division algorithm, 8, 50-51

Feedback shift register, 18, 280, 282-284, 295-305,
367-368, 403
Field, 409
Finite field
Circuits, 103-128, 134
Computation in, 91-97, 129-133
Definition of, 87, 409
Extension, 88, 407, 409
Ground, 88, 409
Order, 88, 413
Processor, 126-128, 152, 351, 353, 360
Roots of equations, 121-125, 148, 167
Fire code, 64, 66, 135-149, 231-233, 242-243, 274, 279, 365, 371
Forward-acting code, 409
Forward error correction, 304-305, 403
Forward polynomial, 37-38, 293, 301, 409
Galois field, see finite field
Greatest common divisor
Integers, 8
Polynomials, 10
Ground field, 409

Minimum weight of a code, 412
Miscorrection, 5, 6, 61, 64-67, 135-136, 166, 176, 196, 201, 202-204, 230-236
Probability, 5, 65-67, 135-136, 140, 200-204, 231-235, 242, 258, 275, 281, 286, 294, 366, 371, 373, 412
Misdetection, 5, 6, 230, 252, 267
Probability, 6, 54, 241, 250, 281, 286, 371, 412
Modulo function, 9, 372
Monic polynomial, 9-10, 15, 412
Multiplication circuits, 19-21, 281-284
Nibble, 1, 195, 361
On-the-fly correction, 68-73, 235, 370
Order
of a finite field, 88, 413
of a finite field element, 88, 413
Parity, 1-7, 32, 35, 52, 54, 275, 370, 413
Parity check code, 205, 414
Parity predict, 240, 280, 283, 367-369
Parity sector, 272
Parity tree, 146, 149, 151, 162
Pattern sensitivity, 64-66, 135-136, 140, 230-232, 239, 241-242, 274-275
Perfect code, 414
Period, 414
Pointer, 214, 222, 228, 271-272, 274-275, 414
Polynomials, 10, 16-17, 29-30, 35-48, 135, 140-141, 145, 186, 197, 205, 210-211, 231-232
Binary, 16, 37, 178
Code, 414
Definitions, 10, 37, 101
Division, 17
Error locator, 147-148, 151-152, 157, 164-170, 196, 351, 354-357, 370
Irreducible, 10, 16, 37-42, 62-64, 135, 210, 231, 410
Monic, 9-10, 15
Multiplication, 16, 281-282, 304
Non-Primitive, 136, 210
Period, 37, 39-40, 62, 82, 136, 414
Primitive, 37, 41-48, 62, 101, 286, 372, 384, 415
Reciprocal, 37-38, 48, 82, 136, 247, 281, 293, 304, 306, 324, 347-349, 372, 416
Self-reciprocal, 37, 136, 416
Power sum symmetric functions, 414
Prime fields, 414
Prime subfields, 414
Probability
Miscorrection, 5-6, 16, 65-71, 135-136, 140, 200-204, 222, 231-235, 242, 258, 275, 281, 286, 294, 366, 371, 373

Hamming code, 59, 61
Hamming distance, 145, 159,410
Hamming weight, 410
Hard error, 235, 240, 274, 293, 410
Integer function, 9
Interleaving, 202, 265-267, 270, 272, 285, 350
Inversion, 92, 103, 131,261-264,266,280,360
Isomorphic, 88, 410
k-bit serial, 136, 243-249
Least common multiple, 64
Integers, 8
Polynomials, 10,281
Linear feedback shift register, 18, 403
Linear function, 3, 10,410
Linear sequential circuit, 18, 403
Linear shift register, 18-19, 403
Linearly dependent, 411
Linearly independent, 411
Logaritlun,9~91, 103, 128, 132,351,358,360,362
Longitudinal Redundancy Check (LRC), 403, 411
Magnetic disk, 205, 224, 230-239, 241, 274-349, 372
Majority logic, 411
Majority logic decodable code, 41 I
Mass storage devices, 35~363
Minimum function, 41 I
Minimum polynomial of ai, 412

- 464 -

Misdetection, 6, 54, 241, 250, 278, 281, 286, 371
Undetected erroneous data, 230, 233, 236, 239, 240,
250,256,370,418
Random errors, 201-202, 204, 415
Raw burst error rate, 233, 415
Readable erasure, 415
Recoverability, data, 215, 223-230, 240
Recurrent code, 415
Reed-Solomon code, 55, 87, 158-204, 257, 265, 270, 274, 276-277, 370, 377, 403
Relatively prime, 416
  Integer, 9, 371
  Polynomial, 10, 279, 416
RS codes, see Reed-Solomon code
Self-checking logic, 280, 283, 367-369
Shift register, 403
  External-XOR, 32-35, 68-81, 138, 181, 243, 296, 298, 301, 303
  Internal-XOR, 31, 52, 138, 181, 243, 247, 295, 297, 299
Sequences, 16, 36, 42, 56-63, 98
Shortened codes, 82-85, 281, 416
Soft errors, 213, 234, 416
Subfield
  Computation, 129-134
  Definition of, 417
Sync framing error, 213, 239, 256-269, 280, 325, 417
Syndrome, 4-6, 57-65, 126, 147-151, 159, 164-165, 167-176, 185-198, 203-206, 211, 234, 270, 279-319, 351-353, 360, 364, 370, 373, 417
Triple-error detection, 403
Uncorrectable error, 417
Uncorrectable sector, 195, 225, 417
Uncorrectable sector event rate, 225, 417
Undetected erroneous data probability, 418
Unreadable erasure, 418
Unrecoverable error, 418
Vertical redundancy check, 403, 418
Weight, 418


Neal Glover is widely recognized as one of the world's leading experts on the practical application of error-correcting codes and holds several patents in the field.

Trent Dudley is involved in a broad range of development projects at Cirrus Logic - Colorado. His knowledge of electrical engineering, computer science, error-correcting codes, and recording codes has contributed substantially to Cirrus Logic - Colorado's success.

· ABOUT CIRRUS LOGIC - COLORADO ·
Cirrus Logic - Colorado was originally founded in 1979 as Data System Technology
(DST) and was sold to Cirrus Logic, Inc., of Milpitas, California, on January 18, 1990.
Cirrus Logic - Colorado provides error detection and correction (EDAC) products and
services to the electronics industries. We specialize in the practical implementation of
EDAC, recording, and data compression codes to enhance the reliability and efficiency of
data storage and transmission in computer and communications systems, and in all aspects
of error tolerance, including framing, synchronization, data formats, and error management.
Cirrus Logic - Colorado also develops innovative VLSI products that perform
complex peripheral control functions in high-performance personal computers, workstations, and other office automation products. The company develops advanced standard
and semi-standard VLSI controllers for data communications, graphics, and mass storage.
Cirrus Logic - Colorado was a pioneer in the development and implementation of
computer-generated codes to improve data accuracy. These codes have become widely
used in magnetic disk systems over the past few years and are now de facto standards
for 5¼-inch Winchester drives. Cirrus Logic - Colorado developed the first low-cost,
high-performance Reed-Solomon code integrated circuits; the codes implemented therein
have become worldwide standards for the optical storage industry. EDAC codes produced by Cirrus Logic - Colorado have become so associated with high data integrity that
many users include them in their lists of requirements when selecting storage subsystems.
Cirrus Logic - Colorado licenses EDAC software and discrete and integrated circuit
designs for various EDAC codes, offers books and technical reports on EDAC and recording codes, and conducts seminars on error tolerance and data integrity as well as
EDAC, recording, and data compression codes.

ISBN 0-927239-00-0



