1991 TI Digital Control Applications with the TMS320 Family

Page Count: 460

TEXAS INSTRUMENTS

Digital Control Applications
With the TMS320 Family

1991

Digital Signal Processing Products


Digital Control Applications
With the TMS320 Family

Edited by
Irfan Ahmed
Digital Signal Processing-Semiconductor Group
Texas Instruments Incorporated


TEXAS

INSTRUMENTS

IMPORTANT NOTICE
Texas Instruments (TI) reserves the right to make changes to or to discontinue any
semiconductor product or service identified in this publication without notice. TI advises its customers to obtain the latest version of the relevant information to verify,
before placing orders, that the information being relied upon is current.
TI warrants performance of its semiconductor products to current specifications in
accordance with TI's standard warranty. Testing and other quality control techniques are utilized to the extent TI deems necessary to support this warranty. Unless mandated by government requirements, specific testing of all parameters of
each device is not necessarily performed.
TI assumes no liability for TI applications assistance, customer product design,
software performance, or infringement of patents or services described herein. Nor
does TI warrant or represent that any license, either express or implied, is granted under
any patent right, copyright, mask work right, or other intellectual property right of
TI covering or relating to any combination, machine, or process in which such semiconductor products or services might be or are used.
Texas Instruments products are not intended for use in life-support appliances, devices, or systems. Use of a TI product in such applications without the written consent of the appropriate TI officer is prohibited.
TRADEMARKS

Apollo is a trademark of Apollo Computer, Inc.
Apple and Macintosh are trademarks of Apple Computer Corp.
CROSSTALK is a trademark of Microstuf, Inc.
DEC, VAX, and VMS are trademarks of Digital Equipment Corp.
IBM, OS/2, PC, PC-DOS, PC/XT, and PS/2 are trademarks of IBM Corp.
Intel is a trademark of Intel Corporation
MS-DOS and MS OS/2 are registered trademarks of Microsoft Corp.
NEC is a trademark of NEC Corp.
Power-14 and Power-Source are trademarks of Teknic, Inc.
Sun is a trademark of Sun Microsystems, Inc.
UNIX is a registered trademark of AT&T Bell Laboratories, Inc.
VMEbus is a trademark of Motorola, Inc.
XOS is a trademark of Texas Instruments, Inc.

Copyright © 1991, Texas Instruments Incorporated

CONTENTS

Preface ......................................................................... ix
PART I

Introduction To Digital Controllers

DSP-Based Control Systems ............................................. 3
  Control Systems ..................................................... 3
    Analog Control Systems ............................................ 3
    Digital Control Systems ........................................... 4
    Analog Versus Digital Controllers ................................. 4
  Processor Requirements for Digital Controllers ...................... 5
    Architecture ...................................................... 6
    Performance ....................................................... 6
    Peripheral Integration ............................................ 6
  DSP Architectures ................................................... 7
  TMS320 Digital Signal Processors .................................... 9
    TMS320 Fixed-Point DSPs .......................................... 10
    TMS320 Floating-Point DSPs ....................................... 10
  TMS320C14 - An Optimal Solution .................................... 10
  Summary ............................................................ 12
  References ......................................................... 12

Digital Signal Processors Simplifying High-Performance Control ....... 13
(Irfan Ahmed and Steven Lindquist; reprinted from Machine Design, Sept. 10, 1987)
Taking Control with DSPs .. . . . . .. . .. . . . . . . . . .. . . . .. .. .. . . . .. . . . . . . . . . . . . .. .. . . . . .. 19
(Irfan Ahmed and Tom Bucella; reprinted from Machine Design, Oct. 12, 1989)
Using Digital Signal Processors for Control. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. 27
(Herbert Hanselmann; reprinted from IECON '86, 1986)

PART II

Design of Digital Controllers

Designing Control Systems ............................................ 35
  Discrete Systems ................................................... 35
    z-Transforms ..................................................... 35
    Discretization Methods for Analog Systems ........................ 37
      Step Invariant Method .......................................... 37
      Ramp Invariant Method .......................................... 38
      Matched Pole-Zero .............................................. 38
      Backward Difference ............................................ 38
      Bilinear Transformation ........................................ 39
      Other Methods .................................................. 39


    Behavior of Poles in z-Domain .................................... 39
  Plant Modelling .................................................... 40
  Digital Controller Design .......................................... 43
    Control Algorithms ............................................... 44
      Compensation Techniques ........................................ 44
      PID ............................................................ 44
      Deadbeat ....................................................... 44
      State Space Model .............................................. 44
      Observer Model ................................................. 44
      Optimal Control ................................................ 44
      Kalman Filter .................................................. 44
      Adaptive Control ............................................... 44
    Performance Specifications ....................................... 45
      Step Response .................................................. 45
      Frequency Response ............................................. 47
      Additional Criteria for Performance Specification .............. 48
    PID Controller ................................................... 48
      Controller Design .............................................. 49
      Implementation Considerations .................................. 52
    Deadbeat Controller .............................................. 53
      Controller Design .............................................. 53
      Implementation Considerations .................................. 54
    State Space Model ................................................ 56
      State Controller Design ........................................ 56
      Implementation Considerations .................................. 58
    Observer Model ................................................... 58
      Observer Model and Estimator Designs ........................... 59
      Transfer Function Form ......................................... 60
      State Controller and Estimator with Reference Input ............ 63
      Implementation Considerations .................................. 63
    Optimal Control and Estimation ................................... 64
      Linear Quadratic Regulator ..................................... 64
      Kalman Filter .................................................. 65
      Implementation Considerations .................................. 70
  Summary ............................................................ 70
  References ......................................................... 70
  Appendix 1 ......................................................... 71
  Appendix 2 ......................................................... 74
  Appendix 3 ......................................................... 76
  Appendix 4 ......................................................... 79

Matrix Oriented Computation Using Matlab ............................. 83
(Jeffrey C. Kantor)
Modeling and Analysis of a 2-Degree-of-Freedom Robot Arm ............. 93
(Integrated Systems Inc.; reprinted from Application Note brochure)
Simnon - A Simulation Language for Nonlinear Systems ................ 103
(Tomas Schönthal)

PART III

Implementation of Digital Controllers

Implementing Digital Controllers .................................... 111
  Fixed-Point Versus Floating-Point ................................. 111
  Binary Arithmetic ................................................. 112
  Finite Word-Length Effects ........................................ 113
    Coefficient Quantization ........................................ 114
    Signal Quantization ............................................. 114
      A/D and D/A Quantization Effects .............................. 114
      Truncation and Round-Off Effects .............................. 114
      Overflow Effects .............................................. 114
  Scaling ........................................................... 115
  Controller Structures ............................................. 116
    Transfer Function Forms ......................................... 117
    State Space Form ................................................ 119
  Computational Delay ............................................... 120
  Sampling Rate Selection ........................................... 121
  Antialiasing Filters .............................................. 122
  Controller Design Tools ........................................... 122
  Algorithm Development ............................................. 122
  Software Development .............................................. 122
    High-Level Languages ............................................ 123
    Assembly Language ............................................... 123
    Signal Processing Languages ..................................... 123
    Code Generation Software ........................................ 124
  Device Simulators ................................................. 124
  Hardware Design ................................................... 124
  Summary ........................................................... 125
  References ........................................................ 125
  Appendix 1 ........................................................ 126

Hardware/Software-Environment for DSP-Based Multivariable Control ... 141
(H. Hanselmann, H. Henrichfreise, H. Hostmann, and A. Schwarte; reprinted from
Proceedings of 12th IMACS Conference)
Implementation of Digital Controllers - A Survey .................... 145
(H. Hanselmann; reprinted from Automatica, Vol. 23, No. 1, 1987)
The Programming Language DSPL ....................................... 171
(Albert Schwarte and Herbert Hanselmann; reprinted from PCIM, June 25-28, 1990)
Application of Kalman Filtering in Motion Control Using TMS320C25 ... 185
(Dr. S. Meshkat)

Implementation of a PID Controller on a DSP ....................................... 205
(Karl Astrom and Hermann Steingrimsson)
DSP Implementation of a Disk Drive Controller ....................... 239
(Hermann Steingrimsson and Karl Astrom)


PART IV

Applications of Digital Controllers with the TMS320

Digital Control Applications with the TMS320 ........................ 257
  Computer Peripherals .............................................. 257
    Disk Drives ..................................................... 257
    Tape Drives ..................................................... 257
  Power Electronics ................................................. 257
    AC Servo Drives ................................................. 257
    UPSs and Power Converters ....................................... 257
  Robotics and Motion Control ....................................... 258
  Automotive ........................................................ 258
    Active Suspension ............................................... 258
    Anti-Skid Braking ............................................... 258
    Engine Control .................................................. 258

Computer Peripherals
DSP Helps Keep Disk Drives on Track. . . . .. . . . . . . . . . . . . . . . . . .. . . . .. . . .. . . . .. . . . . . .. 259
(James Corliss and Richard Neubert; reprinted from Computer Design, June 15, 1988)
LQG - Control of a Highly Resonant Disk Drive Head Positioning Actuator ... 265
(Herbert Hanselmann and Andreas Engelke; reprinted from IEEE Transactions on
Industrial Electronics, Vol. 35, No. 1, Feb. 1988)

High Bandwidth Control of the Head Positioning Mechanism in a Winchester Disc Drive ... 271
(Herbert Hanselmann and Wolfgang Moritz; reprinted from IECON 1986, 1986)
Fast Access Control of the Head Positioning Using a Digital Signal Processor ............. 277
(S. Hasegawa, Y. Mizoshita, T. Ueno, and K. Takaishi; reprinted from SPIE Proceedings,
Vol. 1248, 1990)
Motion Control and Robotics
Implementation of a MRAC for a Two Axis Direct Drive Robot Manipulator Using a
Digital Signal Processor ...................................................... 287
(G. Anwar, R. Horowitz, and M. Tomizuka; reprinted from Proceedings ofAmerican
Control Conference, June 1988)
Implementation of a Self-Tuning Controller Using Digital Signal Processor Chips ......... 291
(K.H. Gurubasavaraj; reprinted from IEEE Control Systems Magazine, June 1989)
Motion Controller Employs DSP Technology ........................................ 297
(Robert van der Kruk and John Scannell; reprinted from PCIM, Sept. 1988)
Power Electronics
Using DSPs in AC Induction Motor Drives .......................................... 303
(Dr. S. Meshkat and Mr. I. Ahmed; reprinted from Control Engineering, Feb. 1988)
Microprocessor-Controlled AC-Servo Drives with Synchronous or Induction Motors:
Which is Preferable? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. 307
(R. Lessmeier, W. Schumacher, and W. Leonhard; reprinted from IEEE Transactions on
Industry Applications, Vol. IA-22, No.5, Sept./Oct. 1986)

vi

A Microcomputer-Based Control and Simulation of an Advanced IPM Synchronous Machine
Drive System for Electric Vehicle Propulsion ..................................... 315
(Bimal K. Bose and Paul M. Szczesny; reprinted from IEEE Transactions on Industrial
Electronics, Vol. 35, No.4, Nov. 1988)
DSP-Based Adaptive Control of a Brushless Motor ................................... 329
(Nobuyuki Matsui and Hironori Ohashi; reprinted from Conference Record of the 1988
IEEE Industry Applications Society)
High Precision Torque Control of Reluctance Motors ................................. 335
(Nobuyuki Matsui, Norihiko Abo, and Tomoo Wakino; reprinted from Conference Record
of the 1989 IEEE Industry Applications Society)
High Resolution Position Control Under 1 Sec. of an Induction Motor with Full Digitized
Methods ................................................................... 341
(Isao Takahashi and Makoto Iwata; reprinted from Conference Record of the 1989 IEEE
Industry Applications Society)
A TMS32010 Based Near Optimized Pulse Width Modulated Waveform Generator ......... 349
(R.J. Chance and J.A. Taufiq; reprinted from Third International Conference on Power
Electronics and Variable Speed Drives, Conference Publication Number 291, July 1988)
Design and Implementation of an Extended Kalman Filter for the State Estimation of a
Permanent Magnet Synchronous Motor. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. 355
(Rached Dhaouadi, Ned Mohan, and Lars Norum; reprinted from Proceedings of Power
Electronic Specialists Conference, June 1990)
Automotive
Trends of Digital Signal Processing in Automotive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. 363
(Kun-Shan Lin; reprinted from Proceedings of Convergence '88, Oct. 1988)
Application of the Digital Signal Processor to an Automotive Control System ............. 375
(D. Williams and S. Oxley)
Dual-Processor Controller with Vehicle Suspension Applications. . . . . . . . . . . . . . . . . . . . . . .. 383
(Kamal N. Majeed; reprinted from IEEE Transactions on Vehicular Technology, Vol. 39,
No.3, Aug. 1990)
An Advanced Racing Ignition System .............................................. 389
(T. Mears and S. Oxley; reprinted from IMechE, 1989)
Active Reduction of Low-Frequency Tire Impact Noise Using Digital Feedback Control .... 395
(Mark H. Costin and Donald R. Elzinga; reprinted from IEEE Control Systems Magazine,
Aug. 1989)
Specialized Applications
Implementation of a Tracking Kalman Filter on a Digital Signal Processor ............... 399
(Jimfron Tan and Nicholas Kyriakopoulos; reprinted from IEEE Transactions on
Industrial Electronics, Vol. 35, No. 1, Feb. 1988)
A Stand-Alone Digital Protective Relay for Power Transformers. . . . . . . . . . . . . . . . . . . . . . .. 409
(Ivi Hermanto, Y.V.V.S. Murty, and M.A. Rahman; reprinted from IEEE Transactions on
Power Delivery, Vol. 6, No. 1, Jan. 1991)


A Real-Time Digital Simulation of Synchronous Machines: Stability Considerations and
Implementation . . . .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. 421
(Jonathan Pratt and Sheldon Gruber; reprinted from IEEE Transactions on Industrial
Electronics, Vol. IE-34, No.4, Nov. 1987)
Real-Time Dynamic Control of an Industrial Manipulator Using a Neural-Network-Based
Learning Controller ......................................................... 433
(W. Thomas Miller, III, Robert P. Hewes, Filson H. Glanz, and L. Gordon Kraft, III;
reprinted from IEEE Transactions on Robotics and Automation, Vol. 6, No. 1, Feb. 1990)

BIBLIOGRAPHY
TMS320 Bibliography ................................................. 445
  Automotive ........................................................ 445
  Control ........................................................... 445
  Industrial ........................................................ 447

Preface

Using digital methods for controlling motors, robotic arms, or disk drives is not new. But technical advances in digital signal processing and high-performance digital signal processors (DSPs) such as the
TMS320 family are rapidly moving digital control from the laboratory to the marketplace. Personal
computers, automated manufacturing equipment, automobiles, military weapons, toys, and games are
examples of products that are enhanced by the application of digital control technology.
This book introduces the reader to the concepts of signal processing and DSPs as they apply to digital
control theory. It also presents a collection of published articles that review selected applications within
the broad spectrum of digital control. The book is divided into four parts and a bibliography:
PART I

Introduction to Digital Controllers

PART II

Design of Digital Controllers

PART III

Implementation of Digital Controllers

PART IV

Applications of Digital Controllers with the TMS320

BIBLIOGRAPHY
Each part is introduced by the editor so that readers can gain insight into its purpose. The bibliography
is furnished for those who wish to seek additional studies in the areas of automotive, control, and industrial applications.
Opportunities to design digital control systems have grown enormously over the past few years. This
book is being published to aid practicing control engineers in becoming familiar and comfortable with
digital control theory. It can also be a valuable tool for teaching at the undergraduate and graduate levels. The book brings together the latest concepts and applications in digital control theory to meet the
needs of both new and experienced designers.
The editor, authors, and I hope that you enjoy this application book and gain valuable information to
assist you in designing new digital control systems as well as modifying current systems.

Gene A. Frantz
Applications Manager
Digital Signal Processing
Texas Instruments Incorporated


PART I
Introduction to Digital Controllers

DSP-Based Control Systems ........................................................ 3
Digital Signal Processors Simplifying High-Performance Control ........................ 13
(Irfan Ahmed and Steven Lindquist)
Taking Control with DSPs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. 19
(Irfan Ahmed and Tom Bucella)
Using Digital Signal Processors for Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. 27
(Herbert Hanselmann)

DSP-Based Control Systems
Digital signal processors (DSPs) are making digital control more practical. The special architecture and
high performance of DSPs make it possible to implement a wide variety of digital control algorithms previously reserved for research work and simulation studies in laboratories. This general introduction discusses these aspects and uses of DSPs in digital control systems. It is followed by papers that discuss the
suitability of DSPs for implementing digital controllers.

Control Systems
A control system commands or regulates a process in order to achieve a desired output from the process.
As shown in Figure 1, a simple control system consists of three main components: sensors, actuators, and
a controller. Sensors measure the behavior of the system or the process and provide feedback to the controller. Some of the sensors used in control systems are resolvers, shaft encoders, and current sensors. Actuators
supply the driving and corrective forces to achieve a desired output. Typical actuators are AC/DC motors
and valves.
The controller generates actuator commands in response to the commands received from the operator and
to the feedback provided by the sensors. The controller consists of computation elements that process these
signals to achieve a desired response from the entire system. The function of the controller is to ensure that
the actuator responds to the commands as quickly as possible and at the same time to ensure that the system
remains stable under all operating conditions. Typically, a controller will modify the frequency response
of the system. The computational elements of the controller are implemented with either analog or digital
components.
Figure 1. Control System
[Block diagram: the operator's reference command enters the controller; the controller drives the actuator to produce the output, and a sensor (encoder) feeds the measured output back to the controller.]

Analog Control Systems: Control systems have traditionally been implemented with analog components like operational amplifiers, resistors, and capacitors. Figure 2 shows a simple analog controller. These elements are used to implement filter-like structures that modify the frequency response of the system. Although more powerful analog processing elements like multipliers are available, they are generally not used because of their high cost. In spite of the simpler processing elements, analog controllers can be used to implement high-performance systems.
Most analog systems use single-purpose characteristics of an error signal like P (proportional), I (integral),
D (derivative), or a combination of these characteristics. This limits most analog systems to designs based
on classical control theory.
Figure 2. Analog Controller
[Schematic of an op-amp controller built from resistors and capacitors omitted.]

Digital Control Systems: With the high performance and increasing reliability of microprocessors, digital controllers are taking over many applications from analog controllers. In the digital control system shown in Figure 3, a DSP (TMS320C14) processes the feedback/error signal [y(n)] in relation to the input/reference signal [r(n)]. A digital-to-analog converter (D/A) changes the digital output of the processor into an analog signal to drive the power amplifier (PA) and actuator. The D/A is typically represented by a ZOH (zero-order hold). Similarly, on the input side, an analog-to-digital converter (A/D) interfaces the sensor's signal to the DSP. In addition, memory is required to store the commands necessary for the operation of the system; the TMS320C14 uses its on-chip memory for that purpose.
Figure 3. Digital Control System

Analog Versus Digital Controllers: Several tradeoffs have to be made in selecting a controller. Analog controllers continuously process a signal and can be used for very high bandwidth systems. They also
give very high resolution of a measured signal and thus provide precise control. Analog controllers have been around for a long time, are well understood, and are easy to design. They can be implemented with relatively inexpensive components.
On the negative side, analog controllers suffer from component aging and temperature drift. Even a perfectly designed controller will exhibit undesired characteristics after a while. Analog controllers are hard-wired solutions, making modifications or upgrades in the design difficult. Analog controllers are also limited to simpler algorithms from classical control theory, like PID and compensation techniques.
Most processes are analog in nature. Digital systems can only attempt to approximate them. The accuracy of this approximation determines the performance of the digital system. Digital controllers sample the signal at discrete time intervals. This limits the bandwidth that can be handled by the controller. The accuracy of the signal and coefficients that can be represented is limited by the resolution or the word length of the processor. Digital controllers require additional components like A/Ds and D/As, although newer processors include these components on the same chip. Digital controllers are relatively new, and their behavior is not thoroughly understood. Thus, designing high-performance digital controllers can be challenging.
However, digital controllers have some major advantages. They are not affected by component aging or temperature drift, and they provide stable performance. Designing in the z-domain helps to control their behavior more precisely. Digital controllers can be used to implement more sophisticated techniques from modern control theory, such as state controllers, optimal control, and adaptive control. They can also handle nonlinear systems. Digital controllers are programmable and make it easy to upgrade and maintain design investment. They can be time-shared to implement additional functions like notch filters and system control to reduce system cost. If digital controllers are designed properly, their advantages greatly outweigh their disadvantages. Table 1 compares analog and digital controllers.

Table 1. Analog Versus Digital Controllers

                 Analog Controllers               Digital Controllers
  Advantages     High bandwidth                   Programmable solution
                 High resolution                  Insensitive to environment
                 Ease of design                   Shows precise behavior
                                                  Implements advanced algorithms
                                                  Capable of additional functions
  Disadvantages  Component aging                  Creates numerical problems
                 Temperature drift                Must use high-performance processor
                 Hard-wired design                Difficult to design
                 Good only for simpler designs

Processor Requirements for Digital Controllers
The choice of processor is critical in determining the performance and behavior of the digital controller. The poor performance of a digital system can generally be traced to selection of the wrong type of processor. Available choices are microcontrollers, general-purpose microprocessors, and DSPs. In addition, reduced instruction set computer (RISC) processors and bit-slice processors can be used, although their usage is not practical in most cases because of high cost. The following factors must be considered when selecting a processor:
• Architecture
• Performance
• Peripheral Integration

Architecture: Processor architecture is probably the most important factor. A control system is a demanding, realtime signal processing system. Control theory essentially deals with proper techniques for
processing control signals. Processing signals in realtime raises numerical issues that must be resolved
correctly to ensure that performance from a digital controller is acceptable. Some of the problems resulting
from inadequate processor architectures are quantization noise, truncation noise, limit cycles, and overflow handling.
Quantization noise results from representing a signal in discrete or quantized magnitude levels. The signals
and gain coefficients must be represented accurately without any loss of resolution for the smallest and
largest magnitudes. A processor should support a large word length and scaling shifters to provide the
resolution and dynamic range needed. This allows the signals and coefficients to be scaled to the full resolution of the processor. In some cases, floating-point support may be necessary if gain coefficients and signals
are time-varying variables and have large dynamic ranges.
Truncation noise results from the processing of signals in realtime. Either a higher resolution or a larger word
length is needed for interim results. For example, the result of a 16 x 16 multiplication is 32 bits. If only
16 bits of storage are available for the 32-bit result, the loss of the lower 16 bits is known as truncation error. A processor should be able to support a larger intermediate word length for interim results.
Limit cycles usually result from quantization and truncation errors. Insufficient resolution of the output
causes the output to oscillate around the actual value without being able to reach it. Minimization of quantization and truncation errors reduces limit cycles.
Realtime processing requires a large number of mathematical operations. Sometimes the results will exceed
the range handled by registers. When registers overflow, they may make a positive number turn negative.
A processor should be able to handle this overflow situation without significant change in the value of the
result.
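The truncation effect described above can be made concrete in a few lines of Python. This sketch assumes Q15 fixed-point format (the usual 16-bit DSP convention); the function names are illustrative, not taken from any TI tool:

```python
def q15(x):
    """Convert a float in [-1, 1) to 16-bit Q15 fixed point."""
    return int(round(x * 32768))

def mul_q15_trunc(a, b):
    """16 x 16 multiply; keep only the upper 16 bits of the 32-bit product."""
    return (a * b) >> 15   # the discarded low bits are the truncation error

a, b = q15(0.123), q15(0.456)
err = abs(mul_q15_trunc(a, b) / 32768 - 0.123 * 0.456)
# err is bounded by roughly one Q15 LSB (2**-15); a wider
# intermediate register would keep those low-order bits instead
```

A 32-bit accumulator, as on the TMS320, holds the full product through a chain of multiply-accumulates, so truncation occurs only once, at the final store.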
Performance: Performance is another important criterion in selecting a processor for a digital controller.
Sampling the signal at discrete time intervals imposes certain performance requirements on the processor. The sampling rate should be at least 10 to 20 times the system bandwidth. The processor must finish
processing the signal before the arrival of the next sample, or information will be lost. The processing requirement is also dependent upon the controller structure and the algorithm.
Another aspect of performance is the computational delay. The processor should finish processing the signal as soon as possible. Too much delay in calculation will add phase delay and will affect the phase margin
and stability of the system. The processor should have a fast instruction cycle time. It should also have a very
fast multiply time because multiplication is the basic element in discrete representation of all signal processing control algorithms.
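The sampling-rate rule of thumb above translates directly into an instruction budget between samples; a small Python sketch (the numbers are chosen purely for illustration):

```python
def instruction_budget(bandwidth_hz, oversample, cycle_s):
    """Instructions available per sample at fs = oversample * bandwidth."""
    fs = oversample * bandwidth_hz       # e.g., 20x the loop bandwidth
    return round((1.0 / fs) / cycle_s)   # cycles per sampling period

# 1-kHz loop bandwidth, 20x oversampling, 100-ns instruction cycle:
budget = instruction_budget(1e3, 20, 100e-9)   # 500 instructions per sample
```

The controller, plus any time-shared functions such as notch filters, must fit inside this budget with margin left over, since computational delay also eats into the loop's phase margin.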

Peripheral Integration: The final consideration is the amount of peripheral integration on the system.
Peripheral integration is important from a system cost, ease of design/interface, and board space point of
view. Typical peripherals are on-chip timers for sample rate selection; D/A or PWM (pulse-width modulation) circuitry to drive the actuators; and either an A/D converter or an interface to optical encoders or other
sensors. In addition, bit I/O pins are required to look at system flags and other conditions.
Digital controllers have not been widely used, because most processors lack appropriate architectures for
signal processing. Microcontrollers have been designed primarily to replace hard-wired logic, to handle data
acquisition, and to implement logical decisions. On the other hand, microprocessors have been designed
primarily to act as computing elements in computer systems. Thus, both types of architecture have failed
to meet the requirements of signal processing; nevertheless, they have been used for it. Only DSP architectures can solve the fundamental problems encountered in control and other signal processing applications.


DSP Architectures
The TMS320 DSP architecture has been optimized for signal processing systems. Figure 4 shows the
typical architecture of a basic DSP. Some of the key elements are multiple buses, 16-bit architecture, 32-bit
registers, and hard-wired implementation of various functions. This architecture minimizes numerical problems in signal
processing and meets the bandwidth requirements of high-performance systems using sophisticated
techniques. The features and benefits of the TMS320 architecture are shown in Table 2.

Figure 4. DSP Architecture
[Block diagram of a first-generation TMS320 DSP: 4K-word program ROM/EPROM and 144/256-word data RAM on separate program and data buses, with a 16-bit external data bus (D15-D0) and address/port lines (A11-A0/PA2-PA0). Legend: ACC = accumulator; ARP = auxiliary register pointer; AR0, AR1 = auxiliary registers 0 and 1; DP = data page pointer; PC = program counter; P = P register; T = T register.]

Table 2. TMS320 Architectural Features

Feature                       Benefit
Single-cycle instructions     Executes advanced control algorithms in realtime
Pipelined architecture        Controls high-bandwidth systems
Harvard architecture          Simultaneously accesses data and instructions
Hardware multiplier           Minimizes computational delays
Hardware shifters             Provide larger dynamic range
16-bit word length            Minimizes quantization errors
32-bit registers              Minimizes truncation errors
Hardware stack                Supports fast interrupt processing
Saturation mode               Prevents wrap-around of accumulator

To minimize numerical problems, the fixed-point TMS320 architecture has a 16-bit word length with a 32-bit
accumulator and other 32-bit registers. The TMS320 DSPs include hardware shifters, which allow scaling, prevent overflows, and keep the required precision. These shifters allow shifting to take place simultaneously
with other operations and without additional execution time.
Also, the instruction set has been optimized for signal processing. The DMOV instruction implements the
z⁻¹ operator. The MACD instruction implements four operations simultaneously: it multiplies two values,
moves data, accumulates the previous result, and loads the T register. To handle overflow during arithmetic operations, an overflow mode is included. This allows the accumulator to saturate at the most positive or most negative value (similar to analog circuits), instead of rolling over and varying between positive and negative
values.
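The saturation behavior just described can be modeled in a few lines. This Python sketch of a 32-bit saturating accumulator is illustrative only, not a description of the TMS320 hardware path:

```python
ACC_MAX = 2**31 - 1          # most positive 32-bit value
ACC_MIN = -(2**31)           # most negative 32-bit value

def sat_add(acc, x):
    """Add with saturation: clamp at the rails instead of wrapping around."""
    s = acc + x
    return max(ACC_MIN, min(ACC_MAX, s))

# With saturation, an overflowing sum clamps near full scale,
# much as an analog amplifier clips at its supply rail:
clamped = sat_add(ACC_MAX, 1000)   # stays at ACC_MAX
```

Without saturation, the same addition would wrap to a large negative number, which in a control loop shows up as a violent, wrong-signed actuator command.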


Several features of the DSP architecture provide the performance necessary to implement digital controllers.
All functions are performed internally in hard-wired logic, so most functions execute in a single cycle. Processors not optimized for signal processing usually perform functions in microcode and
require numerous cycles to do so. The TMS320 devices employ an internal multiple-bus architecture that
allows simultaneous fetching of instructions and data operands.
The TMS320 DSPs contain a hardware multiplier that performs a 16 x 16 multiplication in a single cycle.
This minimizes the computation delay time and allows very fast sampling rates to be implemented for
high-bandwidth systems. An on-chip hardware stack reduces interrupt response time and minimizes stack
pointer manipulations. Table 3 compares the architectural features of a DSP and a microprocessor/microcontroller (µP/µC).

Table 3. DSP Versus Microprocessor/Microcontroller

                  DSP                                µP/µC
Advantages        Signal processing architecture     On-chip peripherals
                  High performance                   Supervisory functions
                  Advanced control techniques        Familiar architecture
                  Additional functions
Disadvantages     Limited peripherals                Low performance
                                                     Computation delay
                                                     Numerical problems

Table 4. Feature Comparison

Feature                            320C14   320C25   80C196   68000   68020   Unit
Instruction cycle time             160      100      333      400     120     ns
Frequency                          25       40       12       10      24      MHz
Multiply (16 x 16) -> 32           0.16     0.1      2.2      7.0     1.0     µs
PID loop                           2.2      1.3      27.0     25.0    4.8     µs
Matrix multiply (3 x 3)(3 x 1)     4.3      2.7      24.3     65.2    9.5     µs

Many on-chip DSP features enhance system integration; peripherals include RAM, ROM/EPROM, serial
ports, timers, PWM, encoder interface, and parallel I/O. Table 4 compares performance characteristics of
the TMS320C14, TMS320C25, and several µCs and µPs.

TMS320 Digital Signal Processors
The TMS320 family consists of five generations of fixed-point and floating-point devices (see
Figure 5), offering different performance ranges. Members of each generation are object-code compatible and, in some
cases, pin compatible.

Figure 5. TMS320 Family Roadmap

[Chart of performance versus generation, distinguishing fixed-point and floating-point generations. The first (fixed-point) generation shown includes the TMS320C10, TMS320C10-14/-25, TMS320C14, TMS320E14, TMS320C15, TMS320E15, TMS320C16, TMS320C17, and TMS320E17.]

TMS320 Fixed-Point DSPs: There are three generations of TMS320 fixed-point DSPs: TMS320C1x,
TMS320C2x, and TMS320C5x. All fixed-point DSPs have a 16-bit architecture with a 32-bit ALU and accumulator. They are based upon a Harvard architecture with separate buses for program and data, allowing
instructions and operands to be fetched simultaneously. They also feature a 16 x 16 = 32 hardware multiplier
for single-cycle multiply operations and a hardware stack for fast context-save operations. An overflow
saturation mode prevents wrap-around. All instructions (except branches) are executed in a single cycle.
Performance ranges from 5 MIPS (million instructions per second) to 28.5 MIPS.
The TMS320C1x generation is based on the first DSP, the TMS32010, introduced in 1982. It includes
144/256 words of on-chip RAM and 4K words of address space. Instruction cycle time is 160 ns. Members
of this generation include the TMS320C10, the TMS320C14 and its EPROM version the TMS320E14,
the TMS320C15/E15, and the TMS320C17/E17. All these devices have expanded memory of 256 words of on-chip RAM and 4K words of on-chip ROM/EPROM. The TMS320C14/E14 has been optimized for digital
control applications. An additional member, the TMS320C16, has an expanded memory address space of 64K
words. Low-power versions are also available for 3-V systems.
The TMS320C2x generation is based on the TMS320C25, featuring 544 words of on-chip RAM and 4K
words of on-chip ROM. Total address space is expanded to 64K words for both data and program. The instruction set has been considerably enhanced over the TMS320C1x instruction set, and the instruction
cycle time is reduced to 100/80 ns. Other members include the TMS320E25 (an EPROM version of the TMS320C25),
the TMS32020, and the TMS320C26.
The TMS320C5x generation includes the TMS320C50 with 10K words of on-chip RAM and 2K words
of on-chip ROM and the TMS320C51 with 2K words of on-chip RAM and 8K words of on-chip ROM.
With an instruction set even more enhanced than the TMS320C2x instruction set, a TMS320C5x device
is designed to execute an instruction in 35 ns. New features include a separate PLU, shadow registers for
fast context save, JTAG serial scan emulation, and software wait states.

TMS320 Floating-Point DSPs: There are two generations of TMS320 floating-point DSPs: the
TMS320C3x and the TMS320C4x (the first DSP designed for parallel processing). All floating-point devices
have a 32-bit architecture with 40-bit extended-precision registers and are based on a von Neumann architecture. Multiple buses have been added for even faster throughput than the traditional Harvard architecture (program and data memory in separate spaces). Features include a hardware floating-point multiplier
and a floating-point ALU.
The TMS320C3x generation is based on the TMS320C30, featuring 2K x 32 words of on-chip RAM, 4K
x 32 words of on-chip ROM, and a 64-word instruction cache. Other features include a separate DMA, two
serial ports, two timers, two external 32-bit data buses, and a 16M-word address space. Instruction cycle
time is 60 ns, and the device is capable of performing up to 33 MFLOPS (million floating-point operations
per second). Another member of the TMS320C3x generation is the TMS320C31.
The TMS320C4x generation includes the TMS320C40, a parallel digital signal processor. It includes six
communication ports, a self-programmable six-channel DMA coprocessor, a development/debugging analysis module, two independent 32-bit memory interfaces, a 16G-byte address space, and two timers. Other features include two 4K-byte RAM blocks, one 16K-byte ROM block, and a 512-byte instruction cache.
This generation is designed to execute an instruction in 40 ns, perform up to 275 MOPS (million operations
per second), and provide a 320-Mbyte/s throughput.

TMS320C14 - An Optimal Solution
The TMS320C14 is the first device that provides an optimal solution for implementing digital controllers
on a single chip. Its TMS320C15 CPU meets the architectural and processing requirements for controllers,
and it incorporates all the I/O peripherals needed in controllers and typically found in 16-bit microcontrol-


Figure 6. TMS320C14/E14 Key Features

CPU: 32-bit ALU; 32-bit accumulator; 0-, 1-, 4-bit shifters; 16 x 16-bit multiplier with 32-bit P register; 2 auxiliary registers; 4-level hardware stack; on-chip memory.

Key features:
•	160-ns instruction cycle
•	100% object code compatible with TMS320C15
•	4 16-bit timers
	- 2 general-purpose timers
	- 1 watchdog timer
	- 1 baud-rate generator
•	16 individual bit-selectable I/O pins
•	Serial port - UART
•	Event manager with 6-channel PWM D/A capability
•	CMOS technology
•	68-pin PLCC and CLCC packages
lers. These peripherals include 16 pins of bit I/O, four timers, six channels of PWM, four capture inputs
for optical encoder interface, a serial port with UART mode, and 15 interrupts. Figure 6 shows the key features of the TMS320C14.
The TMS320C14 can address 4K words of on-chip ROM or EPROM or off-chip memory, and 256 words
of on-chip RAM. It has an on-chip hardware multiplier that performs a 16 x 16 = 32 multiplication in 160
ns. The TMS320C14 has a 32-bit ALU and a 32-bit accumulator. It contains two hardware shifters and a
four-deep on-chip hardware stack. Two auxiliary registers provide indirect and autoincrement addressing
modes. The TMS320C14 has a general-purpose and DSP-specific instruction set and is 100% object code
compatible with the TMS320C15 and other members of the TMS320C1x generation. The TMS320C14 has
16 pins of bit I/O that can be individually selected as inputs or outputs. In addition, each bit can be individually controlled without affecting the others. The 16-bit I/O port has the capability to detect and match
patterns on the input pins and generate an interrupt when a specific pattern is detected.


The TMS320C14 contains four 16-bit timers. Two of the timers can be used as event counters with internal
or external clocks. A third timer can be used as a watchdog timer and can also give a pulse output to drive
external circuitry to indicate a time-out. The fourth timer can be used as a baud-rate generator for the serial
port. Each timer is associated with a 16-bit period register and can also generate a separate maskable interrupt to the CPU.
The TMS320C14 has an event manager that consists of a compare subsystem and a capture subsystem. The
compare subsystem has six compare registers that constantly compare their contents with one of the
timers. Associated with each compare register is an action register that controls all of the six output pins
and two interrupt pins. The action registers determine the action that takes place on the output pins in case of
a match between the timer and a compare register. The compare subsystem can also be configured to generate six channels of high-precision PWM using a high-speed timer mode. In this mode, the compare subsystem can generate a PWM output that can be varied from 8 bits of resolution at 100 kHz to 14 bits of
resolution at 1.6 kHz.
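The resolution/frequency tradeoff quoted here follows from a fixed timer clock: each added bit of resolution doubles the counts per PWM period and therefore halves the attainable carrier frequency. A Python sketch of the relation (the 25.6-MHz effective timer rate is inferred from the two figures in the text, not taken from a data sheet):

```python
def pwm_carrier_hz(timer_hz, resolution_bits):
    """Carrier frequency when one PWM period spans 2**bits timer counts."""
    return timer_hz / (2 ** resolution_bits)

TIMER_HZ = 25.6e6                      # assumed effective timer clock
f8 = pwm_carrier_hz(TIMER_HZ, 8)       # 100 kHz at 8 bits of resolution
f14 = pwm_carrier_hz(TIMER_HZ, 14)     # about 1.6 kHz at 14 bits
```

Both quoted operating points fall out of the same assumed clock, which is why the device offers them as ends of one continuous tradeoff rather than as separate modes.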
The event manager also contains four capture inputs that capture the value of a timer in a four-deep FIFO
when a certain transition is detected on a capture input pin. Each capture input can detect pulses as narrow
as 160 ns and can also generate a maskable interrupt to the CPU.
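Captured timer values of this kind are typically used to time encoder edges for velocity estimation; a hedged Python sketch (the timer rate and encoder line count below are illustrative assumptions, not TMS320C14 specifics):

```python
def speed_rev_per_s(delta_counts, timer_hz, lines_per_rev):
    """Speed from the timer counts elapsed between two encoder edges."""
    dt = delta_counts / timer_hz        # seconds between captured edges
    return 1.0 / (lines_per_rev * dt)   # revolutions per second

# 625 counts at an assumed 6.25-MHz timer, 1000-line encoder:
speed = speed_rev_per_s(625, 6.25e6, 1000)
```

Because the FIFO holds the raw timer values, the CPU can service the capture interrupt at its leisure without losing edge-timing precision.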
The TMS320C14 serial port is capable of full-duplex asynchronous operation with a transmission/reception rate of up to 400K bps. The serial port has a separate dedicated timer for generation of baud rates. The
serial port also supports two industry-standard protocols for interprocessor communication.
Finally, the TMS320C14 has a total of 15 internal/external interrupts, which can be individually masked.
All the interrupts trigger a master interrupt that is controlled by the INTM bit in the status register.

Summary
The TMS320 family of DSPs solves many of the fundamental problems of signal processing in digital servo
control systems. With their processing power, it is now possible to implement advanced concepts from
modern control theory in cost-effective control systems. DSPs provide the precision and bandwidth of analog systems and at the same time provide the reliability of digital systems. Newer DSPs like the
TMS320C14 provide a single-chip solution for the majority of servo control applications.

References
1. Texas Instruments, TMS320C1x User's Guide, 1989.
2. Texas Instruments, TMS320C14/E14 User's Guide, 1988.
3. Texas Instruments, TMS320C2x User's Guide, 1990.
4. Texas Instruments, TMS320C3x User's Guide, 1990.
5. Texas Instruments, TMS320C4x User's Guide, 1991.
6. Texas Instruments, TMS320C5x User's Guide, 1990.
7. Texas Instruments, Digital Signal Processing Applications with the TMS320 Family, 1986.
8. Texas Instruments, Digital Signal Processing Applications with the TMS320 Family, Vol. 2, 1990.
9. Texas Instruments, Digital Signal Processing Applications with the TMS320 Family, Vol. 3, 1990.


APPLIED TECHNOLOGY

DIGITAL SIGNAL PROCESSORS

Simplifying high-performance control

Modern control algorithms often demand real-time speed that ordinary microcontrollers cannot provide. Digital signal processors are optimized to handle such tasks.

IRFAN AHMED
STEVEN LINDQUIST
Texas Instruments Inc.
Houston, TX

Electronic control systems of a few years ago were frequently
designed around a general-purpose microprocessor or microcontroller. But though conventional micros are versatile, they
sometimes fall short when applied
to high-speed tasks in telecommunications and computers, and in
electromechanical tasks such as automotive engine control.
The problem is that advanced
control algorithms, as used in digital filtering and discrete Fourier
transforms, demand numerous
multiplications and additions.
When done in software on an ordi-

[Figure caption] Many digital signal processors are built with a Harvard architecture, where
data and instructions occupy separate memories and travel over separate
buses to speed program execution. The two buses are evident in this
simplified block diagram of a TMS320C25, a second-generation CMOS
processor. Other features of note on the 68-pin chip include eight auxiliary
registers and a hardware multiplier specially designed to handle complex
arithmetic.

Reprinted, with permission, from Machine Design, Sept. 10, 1987.


[Figure: analog and digital control-loop block diagrams. In the analog loop, the input summing junction feeds an analog controller, which drives the controlled device (plant); the plant output y(t) returns through a sensor and loop delay (s⁻¹) to the summing junction. In the digital loop, the digital controller output u(n) passes through a D/A converter to produce u(t), which drives the plant; the sensed output y(t) is digitized by an A/D converter to y(n) before closing the loop through a delay (z⁻¹).]

When reduced to a block diagram, traditional analog control
systems resemble the digital counterpart. But analog controller qualities
are determined by circuit elements, while those of digital counterparts
are programmed in a few lines of code.

nary processor, these operations
can consume too much time to provide high-speed control.
Most new classes of control algorithms, along with other algorithms
such as state modeling, state estimation, Kalman filtering, and optimal control, can be implemented
with analog circuitry. In practice,
however, it is difficult to design
analog hardware that offers the precise and often nonlinear behavior
required in such approaches. In addition, it is often expensive to build
in the needed stability and temperature range.
The modification of a control algorithm implemented in hardware
can also be complicated. Changes
may sometimes be made simply by
substituting a simple component,
but can also involve redesigning
part of the control system.
An approach to solving the speed
requirements associated with modern control algorithms is to use a
special kind of processor chip. Digital signal processors (DSPs) are
constructed to speedily perform the

DEADBEAT CONTROLLER
Some advantages of a DSP become clear when implementing functions that are difficult or impossible to realize
in analog controllers. A deadbeat controller serves as an
example.
In principle, analog controllers require an infinite time to
settle to a reference input signal. In practice, they usually
approach the reference quickly enough for most purposes.
But when extremely fast settling is needed, a digital deadbeat
controller may be preferred.
As a review, deadbeat controllers are those that settle to a
steady state in as few samples as possible. If n is the order of
the controller, deadbeat controllers reach steady state in
n + 1 samples. They are constructed by selecting the proper
elements for the feedback loop. Control theory says that this
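The idea can be made concrete with a toy example. For a plant modeled as a pure discrete-time integrator, choosing the feedback so that the closed loop cancels the error drives the output to the reference in a single sample. This Python sketch is purely illustrative (the plant model and gain are assumptions, not a general deadbeat design procedure):

```python
def plant_step(y, u):
    """Assumed plant: a discrete-time integrator, y(k+1) = y(k) + u(k)."""
    return y + u

def deadbeat_u(r, y):
    """Feedback chosen so the closed loop cancels the error in one period."""
    return r - y

y, r = 0.0, 5.0
y = plant_step(y, deadbeat_u(r, y))   # output reaches the reference
```

For a higher-order plant the same idea (placing every closed-loop pole at z = 0) settles in n + 1 samples, matching the statement above; no analog network can produce this finite-settling behavior.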




kinds of arithmetic operations
associated with digital filtering and
processing. Most DSPs are built
with what is called a Harvard architecture. This configuration is
unlike conventional computer architectures in that it employs sepa-

rate data and instruction memories
that are accessed by separate buses.
The benefit of this arrangement is
increased speed because instructions and data can move in
parallel instead of sequentially.
In addition, these ICs generally
carry high-speed hardware multipliers and fast on-chip memories
that eliminate delays associated
with shuttling information on and
off chip to peripheral devices. This
promotes fast program execution.
For example, a DSP can fetch an
instruction and an operand in the same machine cycle.

APPLYING DSPs IN SIMPLE CONTROL
A PID loop provides a simple example of how DSPs can be
applied to common control problems. A basic analog PID
(proportional-integral-differential) control algorithm is
frequently defined by
u(t) = Kp e(t) + Ki ∫ e(t) dt + Kd de/dt

where e = the input error voltage, which varies over time, u = the
output voltage, and Kp, Ki, and Kd are constants. This
equation indicates that output voltage is proportional to the
sum of an input error voltage, the time integral of the error
voltage, and the time rate of change of the error voltage.
For the sake of review, PID control functions as follows.
The integral term is added to the basic proportional term to
reduce the steady-state error to zero. It makes possible a
nonzero control output even when the error signal (controller input) is zero. In this manner, it serves to anticipate
increasing error and apply a correction faster than would
normally be the case.
The derivative term is added to improve the stability of
the feedback loop. It allows the system to provide more
correction for a faster rate of change of error. The three K constants are usually chosen using standard s-plane techniques such as root-locus diagrams, the Routh-Hurwitz criterion, Bode plots, and state variable techniques.
A typical approach to implementing a digital control
algorithm is to write the analog transfer function in the
usual way using Laplace transforms, and then convert the
equation into a sampled-data version through use of z transforms. Next, the digital transfer function is converted to a
difference equation in the time domain. A program is then
written for a DSP that implements this time-domain difference equation.
The two most widely used analog/digital transformation
methods are the matched pole-zero (also called matched
z-transform) and the bilinear transformation. Though the
former method is simpler, it is somewhat heuristic and does
not always produce a suitable controller. The bilinear transformation is more complex but mimics analog functions
more closely. This is because it uses the trapezoidal rule
instead of rectangular areas to solve the differential equation specifying the transfer characteristic.
The bilinear transformation converts expressions in Laplace transforms into corresponding equations in z using the
identity

s = (2/T)(z - 1)/(z + 1)
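The identity can be checked numerically: evaluating it on the unit circle at z = e^(jωT) shows that the bilinear transform maps the digital frequency ω to the analog frequency (2/T)tan(ωT/2), which is exactly the frequency-warping effect discussed next. A short Python check (the values of T and ω are chosen only for illustration):

```python
import cmath
import math

T = 1e-3                     # 1-ms sample period (illustrative)
w = 2 * math.pi * 50         # 50-Hz digital frequency, in rad/s

z = cmath.exp(1j * w * T)              # point on the unit circle
s = (2 / T) * (z - 1) / (z + 1)        # bilinear image of that point
w_warped = (2 / T) * math.tan(w * T / 2)

# s comes out purely imaginary, equal to j * w_warped, and
# w_warped sits slightly above w itself: the warping is mild at
# low frequencies and grows severe near half the sampling rate
```

This is why low frequencies map accurately while high frequencies need the prewarping correction described in the text.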

where T = sample period. Under the bilinear transformation, parallel or cascaded control elements retain their respective structures. Overall frequency response is treated less faithfully, however. Low frequencies map accurately, but high frequencies do not. For that reason, a frequency prewarping scheme is usually employed with this technique. Here, a single critical frequency is matched in the analog and digital domains by replacing s with (ω/ωp)s, where ω is the frequency (in rad/s) to be matched in the digital transfer function and

ωp = (2/T) tan(ωT/2)

To summarize, the design of any digital control function usually begins with the specification of a few critical frequencies (ωs) and magnitude requirements (Ks). These are prewarped into a set of analog specifications by plugging each ω into the prewarping formula. The resulting frequencies are then used in deriving the Laplace transform version of the transfer function. This function in s is derived in the usual way, and then is converted to a digital transfer function in z, generally by means of the bilinear transformation. Finally, an inverse z transformation applied to this expression yields a difference equation that is expressed in terms of sample times. This equation can then be coded into a DSP.
The procedure can be readily applied to the equations defining a PID loop. The exact sequence of operations is too lengthy to be given here, but the resulting difference equation is

u(n) = u(n-2) + K1 e(n) + K2 e(n-1) + K3 e(n-2)

K1 = Kp + 2Kd/T + Ki T/2
K2 = Ki T - 4Kd/T
K3 = 2Kd/T - Kp + Ki T/2

where T = sample period. Here e(n) is the nth input sample of the controller, the nth sample of the error voltage; u(n) is the nth output sample of the controller, u(n-1) is the (n-1)st sample, and so forth.
Because this equation represents quantities in terms of sample number rather than as functions of time, it can be easily implemented in software for a DSP. The accompanying 13-instruction program for the 32010 processor executes the above PID difference equation in about 2.6 µs when the processor runs at 20 MHz. In contrast, a similar program running on a general-purpose processor such as a 10-MHz 68000 would take considerably longer.

y(n) = Σ a_k y(n-k) + Σ b_k x(n-k)
(first sum over k = 1 to N, second over k = 0 to M)
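The PID difference equation and coefficient formulas derived in the sidebar can be sketched directly in Python. This is an illustrative port for checking the algebra, not the 13-instruction DSP program mentioned in the text:

```python
def run_pid(Kp, Ki, Kd, T, errors):
    """Apply u(n) = u(n-2) + K1*e(n) + K2*e(n-1) + K3*e(n-2)."""
    K1 = Kp + 2 * Kd / T + Ki * T / 2
    K2 = Ki * T - 4 * Kd / T
    K3 = 2 * Kd / T - Kp + Ki * T / 2
    u1 = u2 = e1 = e2 = 0.0      # u(n-1), u(n-2), e(n-1), e(n-2)
    out = []
    for e in errors:
        u = u2 + K1 * e + K2 * e1 + K3 * e2
        out.append(u)
        u2, u1 = u1, u           # age the output history
        e2, e1 = e1, e           # age the error history
    return out

# Sanity checks with T = 1: a pure proportional gain passes the error
# through scaled, and a pure integral term accumulates a constant
# error by the trapezoidal rule (increments of Ki*T per sample):
p_only = run_pid(2.0, 0.0, 0.0, 1.0, [1.0, 1.0, 1.0])   # [2.0, 2.0, 2.0]
i_only = run_pid(0.0, 2.0, 0.0, 1.0, [1.0, 1.0, 1.0])   # [1.0, 3.0, 5.0]
```

The loop body is one multiply-accumulate per coefficient plus the history shuffle, which is exactly the pattern the MACD-style DSP instructions execute in a single cycle each.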

This equation basically says that
any output y can be expressed as a
weighted sum of the input x at the
present time n, past inputs x(n - k)
for some number of past samples k,
and past outputs y(n-k). Terms a_k
and b_k are the weighting factors. A
computer optimized to quickly
synthesize this equation must be
able to store an input, multiply it by
a weighting factor, and sum it with
previous inputs.
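The weighted-sum relation described here can be written out directly. A naive Python version follows; it has none of the DSP's parallelism and is purely for illustration:

```python
def difference_eq(a, b, x_seq):
    """y(n) = sum_{k=1..N} a[k-1]*y(n-k) + sum_{k=0..M} b[k]*x(n-k)."""
    y = []
    for n in range(len(x_seq)):
        acc = 0.0
        for k in range(1, len(a) + 1):     # feedback: past outputs
            if n - k >= 0:
                acc += a[k - 1] * y[n - k]
        for k in range(len(b)):            # feedforward: current/past inputs
            if n - k >= 0:
                acc += b[k] * x_seq[n - k]
        y.append(acc)
    return y

# First-order recursion y(n) = 0.5*y(n-1) + x(n), driven by an impulse:
impulse = difference_eq([0.5], [1.0], [1.0, 0.0, 0.0])   # [1.0, 0.5, 0.25]
```

Each `acc +=` step is one store-multiply-accumulate, the exact operation DSP hardware collapses into a single machine cycle.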
DSP architecture provides these
functions by incorporating a large
degree of parallelism, carrying out
multiple operations per machine
cycle. The ability to perform parallel fetches from two registers and
store the contents in two memory
locations is an example. In addition,
the memory on chip is extremely
fast and constructed in ways designed to facilitate data transfers.
For example, the Harvard architecture on the TMS320 DSP family

Compared to first-generation DSPs such as the EPROM-version 320E15,
second-generation devices sport higher speed and more on-chip features.
The CMOS 320C25, for example, provides a 100-ns cycle time and 544
words of on-chip RAM.

contains provisions for transferring
information between data and instruction memories.
Because DSPs typically do not
need to store large programs or
blocks of data, they usually lack the
extensive memory-management
circuitry found in general-purpose
microprocessors. Nevertheless,
DSPs have become very powerful.
The first such chips had only
limited instruction sets and memory, and were limited to fixed-point
(integer) calculations.
In contrast, DSP chips today are
second- and third-generation devices that eliminate such problems.
They typically use clock rates of 20
MHz, and 40-MHz clocks are not
unheard of. Newer DSPs also provide on-board functions such as
serial ports, analog/digital and digital/analog converters, EPROM,
bit I/O, timers, and similar functions that enhance capability.
The cost of single-chip DSPs is
on the order of a few dollars, comparable to that of conventional microprocessors used in control applications. Recently developed
DSPs tend to provide sophisticated
functions that enable them to operate with video and radar-frequency signals. Examples of such
functions can be found in the
TMS320C30, a third-generation
chip. The device provides floating-point math capability, facilities for
handling off-chip memory as well as
on-chip RAM and ROM, a more
extensive instruction set, and clock
cycle times of about 60 ns.



Taking Control with DSPs

New DSP microcontrollers offer many improvements over current analog and digital control systems.

TOM BUCELLA
Teknic Inc.
Rochester, NY

IRFAN AHMED
Texas Instruments
Houston, TX

In many control systems, digital
signal processors (DSPs) are relegated to computational chores that
bog down conventional processors.
But their limited role is expected to
increase because new DSPs can
manage I/O tasks as well.
These revolutionary ICs are basically microcontrollers with on-chip
digital signal-processing hardware.
They make possible single-chip
control for real-time multiaxis systems. In addition, software and
hardware support tools simplify
their use in motion applications.

Analog to digital
Digital signal processors have enabled control systems to advance
from analog to full-digital implementations. Microprocessor-based
systems are only a halfway point.
They are an improvement over
analog controllers, but lack the processing speed to totally displace
older technology. DSPs, on the
other hand, have powerful arithmetic logic units (ALUs) capable of
high-speed processing.
Early solid-state controls consisted of hard-wired analog networks
built around operational amplifiers. Analog controls offer two distinct advantages over digital systems. First, they provide higher
speed control by processing input
data in real time. They also have
higher resolution over wider bandwidths because of infinite sampling
rates. However, they have several
drawbacks.
Analog component values vary
with age and temperature, necessitating periodic adjustments to
maintain consistent operation. For
example, high-gain amplifier parameters such as offset and gain
can drift by as much as 20% in their
lifetime. Such fluctuations can
cause major changes in the fre-

Reprinted, with permission, from Machine Design, Oct. 12, 1989.

quency response of band-pass and
band-reject filters.
Other weaknesses stem from the
construction of analog hardware.
Reliability can be a problem because analog systems typically
have high part counts. Also, component lot tolerances frequently
complicate design and may introduce error. And field upgrades are
nearly impossible, often requiring
redesign and repackaging of the
hard-wired circuits.
In contrast, microprocessor-based motion systems offer many
improvements over their analog
counterparts. Drift is eliminated
because most functions are performed digitally. Upgrading or
modifying a digital system usually
involves rewriting the software;
hardware does not need to be replaced. And single-chip solutions
for simple applications are possible
with microcontrollers that have on-

19

chip hardware for I/o operations.
Even the best microcontrollers, however, have limitations. In many applications, they are too slow. Processor time is largely spent managing system I/O, leaving little time for data manipulation. Also, microcontroller ALUs are not suited for high-speed processing. Only simple control algorithms can be supported. Real-time, adaptive, or multiaxis control is inefficient and often impossible because computations overload the processor.
Most processor-based systems employ lookup tables to avoid calculations. But interpolation and round-off errors reduce precision. Also, lookup tables can consume vast memory space, often limiting algorithms to only one variable.
To reduce table size, data word lengths are sometimes shortened. But this approach may introduce limit cycling. Cycling occurs when output commands have fewer significant bits [...]. These hardware limitations slow the processor and ultimately reduce sampling rates.
DSP microcontrollers, on the other hand, are geared for high-speed control applications. A dual-bus (Harvard) architecture allows simultaneous processing of program instructions and data. The ALU features hardware multipliers that handle multiply/accumulate operations in a single instruction cycle. This is particularly important for motion-control applications because control algorithms are dominated by multiply and accumulate instructions.
While general-purpose processors take from 5 to 20 µs to multiply two 16-bit numbers, DSPs need only 60 to 150 ns, about 100 times faster. Such speed improvements make possible sampling rates of over 20 kHz. They also allow controllers to extract more information from feedback data during the time between sampling periods. For instance, DSPs can provide speed control by calculating velocity from encoder position data. Microprocessor-based systems, on the other hand, are too slow to estimate velocity and typically use tachometers for feedback.
Other hardware enhancements include barrel registers. Barrel registers allow DSPs to scale numbers in a single instruction cycle. Scaling pushes all insignificant zeros to the right side of the number field by shifting the data string to the left. These maneuvers increase precision by making room for less significant bits during calculations. They also minimize truncation errors. Conventional processors scale numbers in software, shifting them one bit at a time. A one-bit word in a 16-bit field may eat up 15 clock cycles.
Improvements also result from reduced instruction sets oriented toward signal processing. For example, a single DSP command called MACD multiplies two numbers, adds the product to an accumulator, and shifts the data to an adjacent register. This sequence of operations synthesizes a digital filter pole or zero. Commands such as MACD simplify software development by reducing the number of code lines.
DSPs, furthermore, allow controllers to provide functions impossible with analog or microprocessor systems. For instance, they can produce sharp-cutoff notch filters that eliminate narrow-band mechanical resonances [...].

THE BASICS OF CONVERSION

A basic DSP controller consists of an [...] a succession of amplitude-modulated, zero-width pulses whose envelope conforms to the analog signal.
Accuracy of digitized information is a function of the number of data points sampled per second. The higher this number, the better the [...]. Choosing a sampling rate that provides distortion-free [...]. A filter that limits the analog signal's bandwidth is another [...], but aliasing cannot be totally prevented because of the filter's nonideal qualities and high-frequency noise components in the analog signal.
[...] aperture time, the [...] of time the sampler [...] the analog input. Its maximum value
depends on required accuracy and analog signal slew rate. Signals with high slew rates need shorter aperture times to maintain accuracy.
After sampling, the quantizer (a/d converter) changes the data to a digital format. Rounding signal magnitude up or down to the nearest threshold level introduces quantization error. Threshold levels are discrete values that digital strings can assume. Quantization error is the difference between the actual analog signal and the nearest threshold value. Maximum quantization error for a linear ramp signal, for instance, is one-half the separation between adjacent threshold levels.
As threshold levels move closer together, resolution increases and the discrepancy between the analog input and the quantized output decreases. Quantization error can only be reduced by increasing the number of discrete [...]
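The half-step error bound described in the sidebar can be checked numerically. The sketch below (the threshold spacing and ramp are illustrative assumptions, not values from the text) quantizes a finely sampled ramp and measures the worst-case error:

```python
# Numerical check of the quantization-error bound: a ramp rounded to
# the nearest threshold level spaced "delta" apart has a worst-case
# error of about delta/2. All values here are illustrative.
def quantize(x, delta):
    """Round x to the nearest multiple of the threshold spacing delta."""
    return delta * round(x / delta)

delta = 0.1                               # spacing between threshold levels
ramp = [i * 0.001 for i in range(1000)]   # finely sampled linear ramp
max_err = max(abs(x - quantize(x, delta)) for x in ramp)

print(max_err)  # close to delta/2 = 0.05
```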
PART II

Design of Digital Controllers
Designing Control Systems ........................................................ 35
Matrix Oriented Computation Using Matlab ................................... 83
(Jeffrey C. Kantor)
Modeling and Analysis of a 2-Degree-of-Freedom Robot Arm .................. 93
(Integrated Systems Inc.)
Simnon - A Simulation Language for Nonlinear Systems ..................... 103
(Tomas Schönthal)


Designing Control Systems
The design of a control system involves two major steps: (1) the process or plant must be put into a mathematical form so that its behavior can be analyzed and evaluated (i.e., a plant model must be derived), and (2) an appropriate controller must be designed so that the plant gives the desired response under the influence of the control system. Designing a controller requires selecting an appropriate structure and specifying performance requirements for the control system. This introduction gives a brief overview of discrete systems, tells how to model a plant and convert it into a discrete mathematical form, and describes how to design different types of controllers. Most of the following information can be found in the textbooks appearing within the Reference section. The articles that follow this introductory material describe several of the commercially available CAD packages that may be used for designing and simulating either the controller or the entire control system.

Discrete Systems
A system must be represented in its discrete form in order to be implemented on a DSP or a microprocessor. Discrete representation involves two elements. First, the signal is represented by its samples at discrete time intervals. These time intervals depend upon the sampling rate of the system. Second, the magnitude of the signal and its samples is also represented by discrete magnitudes. The resolution of this magnitude depends upon the word length of the processing element. Here, only the sampling rate affects our treatment of this subject. However, in Part III's introduction, where we are concerned about the actual implementation, the effects of magnitude representation on a processor will greatly influence our treatment of that subject.
z-Transforms: In the continuous time domain, the system is represented with differential equations, and the analysis is carried out with Laplace transforms. Similarly, in the discrete time domain, a system is represented with difference equations, and the analysis is carried out with z-transforms. The z-transform of a signal is a representation of that signal as a sequence of samples as shown in Figure 1. Mathematically, it is given as a power series in z^-n with coefficients equal to the values of that signal, or

X(z) = Z(x(t)) = x0 + x1·z^-1 + x2·z^-2 + ... + xn·z^-n     (1)

Z represents the z-transform; z^-n represents the delay of n samples, where n represents the position (0, 1, 2, ..., ∞) in time; x0, x1, x2, ..., and xn represent the magnitudes of signal x(t) at those times.
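As a small illustration of equation (1), the power series can be evaluated directly for a finite sample sequence. The sequence and evaluation point below are assumptions chosen for demonstration, not values from the text:

```python
# Evaluating the z-transform power series X(z) = sum_n x_n * z^(-n)
# for a finite sample sequence. The sequence and the evaluation
# point are illustrative assumptions.
def z_transform(samples, z):
    """Evaluate X(z) for a finite sequence of samples x_0, x_1, ..."""
    return sum(x * z ** (-n) for n, x in enumerate(samples))

samples = [1.0, 0.5, 0.25, 0.125]   # x(t) sampled at t = 0, T, 2T, 3T
X = z_transform(samples, z=2.0)
print(X)   # 1 + 0.5/2 + 0.25/4 + 0.125/8 = 1.328125
```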

Figure 1. z-Transform

""".-.--//,
I

:
:

."""".,

;-.. ......,.
I

s(s + a)[θ(s)] = b[U(s)]

where (J = inertia, B = viscous friction, Kt = torque constant, Ke = back-emf constant, R = armature resistance)

a = (1/J)(B + Kt·Ke/R)
b = (1/J)(Kt/R)

If

U(s) = V(s)

then

θ(s)/V(s) = b/[s(s + a)]     (10)

Equation (10) is the final form of the transfer function of the motor in continuous form. This must be converted into a discrete form. The zero-order hold (ZOH) transformation is used.
The zero-order hold states that

G(z) = (1 - z^-1) · Z{ L^-1 [ G(s)/s ] }     (11)

Then,

G(s)/s = b/[s · s(s + a)] = b/[s^2(s + a)]

Expanding as partial fractions, the above can be expressed as

G(s)/s = A1/s + A2/s^2 + A3/(s + a)

Solving for A1, A2, and A3 gives

G(s)/s = (-b/a^2)/s + (b/a)/s^2 + (b/a^2)/(s + a)

When multiplying by (1 - z^-1) and using tables to derive the z-transform,

G(z) = (b/a^2) · { [(e^-aT - 1) + aT]·z^-1 + [(1 - e^-aT) - aT·e^-aT]·z^-2 } / { (1 - z^-1)(1 - e^-aT·z^-1) }

[...] Values of the damping ratio ζ greater than 0.8 make the response sluggish.
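The coefficients of this z-domain transfer function can be computed numerically. The sketch below mirrors the closed-form result above; the values of a, b, and T are illustrative (in practice they come from the motor constants, as in the appendix listings):

```python
import math

# ZOH discretization of G(s) = b / (s(s + a)), following the
# closed-form z-transform result. Values of a, b, T are illustrative.
def discretize_motor(a, b, T):
    c = math.exp(-a * T)
    ab = b / a**2
    b1 = ab * (c - 1 + a * T)        # z^-1 numerator coefficient
    b2 = ab * (1 - c - a * T * c)    # z^-2 numerator coefficient
    a1 = -(1 + c)                    # z^-1 denominator coefficient
    a2 = c                           # z^-2 denominator coefficient
    return (b1, b2), (1.0, a1, a2)

num, den = discretize_motor(a=2.0, b=1.0, T=0.1)
print(num, den)
```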
{ ~ [(eaT -1) + aT]z-'} + {~ [(I-eaT) - (aTe- 0.8, make the response sluggish.
A constant natural frequency COn in the s-plane maps as a straight line emanating from the origin. Figure 9
shows the loci of constant ~ and line of constant con in the unit circle in the z-plane.

Figure 9. Root Locus of Constants ~ and COn
1m axis

n

lVn

-1.0

='T
-0.8

NOTE: T

46

-0.6

-0.4

=sampling period

Frequency Response: If the performance specifications are specified in terms of the frequency response, they are given in terms of phase margin, gain margin, and cross-over frequency ωc as shown in Figure 10 -- essentially specifying the bandwidth of the closed-loop system.
The cross-over frequency ωc is defined as the frequency where the phase angle ∠GH(jω) of an open-loop system equals -180°.
The gain margin is defined as the magnitude |GH(jω)| (in decibels) that lies both below 0 dB and at the cross-over frequency.
The phase margin is defined as the phase ∠GH(jω) (in degrees) that lies both above -180° and at the zero-gain frequency.
To directly use frequency response methods, the z-plane is mapped into the w-plane by using the inverse bilinear transformation given by

w = (2/T) · (z - 1)/(z + 1)

The w-plane mathematics is similar to the s-plane mathematics. The controller is transformed to the w-plane, and most of the classical techniques like Bode analysis can be carried out in the w-plane. Once the compensator is designed in the w-plane, it can be transformed back into the z-plane.
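A quick numerical sketch of this mapping (the sampling period and frequency are illustrative assumptions): applying w = (2/T)(z - 1)/(z + 1) to a unit-circle point z = e^(jωT) gives a purely imaginary w whose magnitude is (2/T)·tan(ωT/2), which approximates ω at frequencies well below the sampling rate:

```python
import cmath, math

# Map a z-plane point on the unit circle into the w-plane using
# w = (2/T)(z - 1)/(z + 1). Values of T and omega are illustrative.
def w_transform(z, T):
    return (2.0 / T) * (z - 1.0) / (z + 1.0)

T = 0.001                      # sampling period, s
omega = 100.0                  # rad/s, well below the Nyquist rate
z = cmath.exp(1j * omega * T)  # point on the unit circle
w = w_transform(z, T)

print(w.real)  # essentially zero: w is purely imaginary on the unit circle
print(w.imag)  # equals (2/T)*tan(omega*T/2), close to omega here
```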

Figure 10. Frequency Response Curves
[Bode magnitude (dB) and phase (deg) curves versus frequency ω, indicating the gain margin, phase margin, and cross-over frequency ωc]

Additional Criteria for Performance Specification: Some of the other performance requirements can be specified as:

•	Disturbance rejection
•	Control effort
•	Sensitivity to parameter changes

One of the primary goals of a control system is to reject disturbances while maintaining stability under a wide variety of operating conditions. In fact, without disturbances, there would be no need for closed-loop control systems. The feedback gains in a control loop act to minimize disturbances. For example, if a disturbance is constant, then integral action will cause the steady-state error to be zero. However, if the disturbance is of a different nature, then additional steps may have to be taken. It is important to take into account the source of the disturbance and make the preceding gain large. If the disturbance is outside the control loop and affects the measurement or reference input, then a feedforward path can minimize the disturbance. If the disturbance is inside the loop and affects the plant itself, then the loop gain must be made large.
Sensitivity to parameter changes can be an important consideration, especially if the plant has slowly varying parameters due to drift. Minimizing these effects is similar to handling disturbances. However, some controller structures, like deadbeat controllers that perform pole-zero cancellations, are more sensitive to parameter variations and should be avoided. If parameter variation is an extremely critical consideration, then adaptive control should be used.
Sometimes it is necessary to minimize either the control effort or other parameter(s) in the system. Optimal control techniques can be used to determine a control law and do pole placement. They are discussed in the subsection Optimal Control and Estimation. In general, a system with either a minimum response time or a high bandwidth requires higher control effort.

PID Controller: This topic describes the design and implementation of a PID controller. Figure 11 shows a block diagram of a control system using the PID controller. PID is a commonly used technique in classical control. In designing controllers, it is often found that just minimizing a term proportional to the error is not sufficient. The inclusion of the integral of the error term will reduce the steady-state error to zero because it represents the accumulated error. To further improve stability and plant dynamics, a differential of the error term is introduced. This term represents the error rate. A PID controller that includes all three terms can give very good results. It can be used in its discrete form with digital control systems. If both low-frequency and high-frequency responses are modified, this controller can be viewed as a special lead-lag compensator.

Figure 11. Block Diagram of a Control System Using PID Controller
[block diagram: reference uref and feedback form the error e(t); proportional, integral, and derivative (de/dt) paths are summed to produce the control u(t)]

Controller Design: The trapezoidal approximation is used for conversion of the PID into discrete form. Usually, the trapezoidal approximation is used for the integral term, and the backward difference is used for the differential term. However, when the design is carried out in the z-domain, the approximation techniques are not important. The design is carried out as a compensator with a pole at z = 1 to ensure integral behavior. Hence, the following design is done directly in the z-domain using pole placement techniques.
The analog PID algorithm is given by:

u(t) = Kp·e(t) + Ki·∫e dt + Kd·(de/dt)     (14)

where
Kp, Ki, and Kd = PID constants
u(t) = output of controller
e(t) = error signal

In a trapezoidal approximation, also called the Tustin transformation, the area of the integral ∫e dt is given by the summation of small trapezoids; see Figure 12.
The integral ∫e dt can also be solved by taking the Laplace transform of equation (14) and substituting for the s. The Laplace transform of (14) gives

U(s) = (Kp + s·Kd + Ki/s)[E(s)]
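A discrete version of equation (14), using the trapezoidal (Tustin) approximation for the integral term and a backward difference for the derivative term as described above, can be sketched as follows. The gains, sampling period, setpoint, and the toy first-order plant are illustrative assumptions, not values from the text:

```python
# Discrete PID: trapezoidal (Tustin) integral, backward-difference
# derivative. Gains, sampling period, and the first-order plant
# below are illustrative assumptions.
def pid_step(e, e_prev, integral, Kp, Ki, Kd, T):
    integral += T * (e + e_prev) / 2.0   # trapezoid area under the error
    derivative = (e - e_prev) / T        # backward difference
    u = Kp * e + Ki * integral + Kd * derivative
    return u, integral

# Drive a simple first-order plant x' = -x + u toward a setpoint.
Kp, Ki, Kd, T = 2.0, 1.0, 0.05, 0.01
x, setpoint, integral, e_prev = 0.0, 1.0, 0.0, 0.0
for _ in range(2000):
    e = setpoint - x
    u, integral = pid_step(e, e_prev, integral, Kp, Ki, Kd, T)
    e_prev = e
    x += T * (-x + u)                    # Euler step of the plant

print(x)  # settles near the setpoint thanks to the integral term
```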

Figure 12. Trapezoidal Approximation
[plot of amplitude versus time: the area under the error curve is approximated by a summation of small trapezoids]

[...]

[step-response plot: position versus time in # of samples]

Implementation Considerations: Implementation considerations for LQR controllers and Kalman filters are not essentially different from those for state controllers and estimators. Their structures are the same; only the design approach is different. Still, the following should be taken into account.
When designing an LQR controller, some weight should be placed on the R matrix, as control signals could become excessively large. Note that the LQR approach to controlling does not necessarily guarantee that the optimum solution will be found. Still, the Q and R matrices do allow the designer to trade off between control effort and speed of the response while, at the same time, guaranteeing a stable system.
When designing a Kalman filter, Rv can usually be chosen realistically since some information on sensor characteristics and accuracy is available from the manufacturer. Rw is more difficult to choose. If it is chosen to be zero due to lack of information, the Kalman filter's gain is zero; the estimator runs open-loop. As a result, no adjustment is made to the estimated states. This causes the model to slowly drift. Again, as in the case of Q and R for the LQR controller, the designer can trade off between reliability of measurements and plant model. As Rw increases, more reliance is placed upon the measurements, while less reliance is placed upon the plant model. As Rv increases, more reliance is placed upon the plant model, and less reliance is placed upon the measurements.
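The Rv/Rw trade-off can be seen in a one-state sketch: iterating the standard discrete Kalman covariance recursion for a scalar random-walk model (all numbers illustrative, not the design from the appendices) shows the steady-state gain rising toward 1 as Rw grows and falling toward 0 as Rv grows:

```python
# Steady-state Kalman gain for the scalar model
#   x(n+1) = x(n) + w(n),  y(n) = x(n) + v(n)
# with process-noise covariance Rw and measurement-noise covariance Rv.
# Iterate the covariance recursion until it converges. Values are
# illustrative.
def steady_state_gain(Rw, Rv, iters=500):
    P = 1.0
    for _ in range(iters):
        P = P + Rw            # time update (A = 1)
        K = P / (P + Rv)      # Kalman gain
        P = (1.0 - K) * P     # measurement update
    return K

low_trust_model  = steady_state_gain(Rw=1.0,  Rv=0.01)  # gain near 1
high_trust_model = steady_state_gain(Rw=0.01, Rv=1.0)   # gain near 0
print(low_trust_model, high_trust_model)
```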

Summary
This paper has given a basic overview of digital control theory without going into too much mathematical detail. The use of CAD tools like PC-Matlab, Matrix-X, and Simnon is strongly recommended in order to eliminate some of the drudgery in the math calculations and to provide simulation of a system under design.
The choice of the appropriate controller structure will depend largely upon the user's background and application. Classical control techniques have been practiced for a long time, and people have acquired an intuitive feel for the behavior of those designs. Modern control theory now gives more capabilities to these systems; but, at the same time, most of the theoretical/implementation information is still fairly new and unfamiliar.
However, it should be emphasized that the behavior of any system in actual practice largely depends upon the implementation and not upon the elegance of its design. Elegant theories are attractive; but a simple design, when properly implemented, can yield superior performance, higher reliability, and better manufacturability than a sophisticated design that is poorly implemented.
In general, it is advised that modern control theory be used. With their powerful simulation capabilities, today's new CAD design tools can eliminate much of the user's fear and uncertainty, along with the laborious mathematical calculations. At the same time, powerful processors like DSPs are able to implement complex designs in practical and cost-effective systems.

References
1. Astrom, K., and Wittenmark, B., Computer Controlled Systems, Prentice-Hall, 1984.
2. Phillips, C., and Nagle, H., Digital Control Systems, Prentice-Hall, 1984.
3. Isermann, R., Digital Control Systems, Springer-Verlag, 1981.
4. Franklin, G., Powell, D., and Workman, M., Digital Control of Dynamic Systems, Addison-Wesley, 1990.
5. Jacquot, R., Digital Control Systems, Marcel Dekker, 1981.
6. Katz, P., Digital Control Using Microprocessors, Prentice-Hall, 1981.
7. Lewis, F., Optimal Control, John Wiley, 1986.
8. Lewis, F., Optimal Estimation, John Wiley, 1986.
9. Astrom, K., and Hagglund, T., Automatic Tuning of PID Controllers, Instrument Society of America, 1988.


Appendix 1

% This program will do simulation of a PID controller using
% trapezoidal approximation and a pole placement technique
%
% If the plant transfer function is G(z) = A/B
% and the controller function is given by H(z) = C/D
%
% then the closed loop response is given by
%
%       G(z)H(z)           A*C
%    -------------- = -----------
%     1 + G(z)H(z)     A*C + B*D
%
ggg=1;
while ggg==1        % run simulation continuously

%
% This section will implement simulation of a dc servo motor
% the motor used in the example is a Pittman motor, model 9412
%
Kt=0.0207;          % Torque constant
Ke=Kt;              % Back emf constant
j=0.00006;          % Armature inertia + assumed load inertia
R=6.4;              % Resistance
input('input sampling period in milliseconds')
T=ans/1000;         % get sampling period
a=(Kt^2)/(R*j)      % a and b give the transfer function in s-domain
b=Kt/(j*R)
pause
ab=b/(a^2);         % Calculate values to transfer into z-domain
c=exp(-a*T);
d1=a*T;
d=(c-1+d1);
e=(1-c-(c*d1));
input('input numerator gain ')
Kg=ans;             % get numerator gain
b1=ab*d*Kg;         % numerator terms
b2=ab*e*Kg;
a1=-(1+c);          % denominator terms
a2=c;
num=[0 b1 b2]       % numerator of transfer function in z-domain
den=[1 a1 a2]       % denominator of transfer function in z-domain
[A,B,C,D]=tf2ss(num,den)
%


% This section will design a PID controller using pole placement
% techniques. Desired pole locations have to be input. The PID
% is converted into discrete form using trapezoidal approximation
%
% Enter desired pole locations in the next step
'Enter the location of your poles'
input('Input location of pole 1: ')
p1=ans;
input('Input location of pole 2: ')
p2=ans;
input('Input location of pole 3: ')
p3=ans;
input('Input location of pole 4: ')
p4=ans;
p=[p1 p2 p3 p4];
% The desired characteristic polynomial is found as
Q(1:5)=poly(p)
% The coefficients of different powers are given by
q2=Q(:,2);
q3=Q(:,3);
q4=Q(:,4);
q5=Q(:,5);
% The system polynomial is given by
% (K1*z**2 + K2*z + K3)*(b1*z + b2) + (z - 1)*(z - r)*(z**2 + a1*z + a2)
% Equating coefficients of different powers we get
% four linear equations. The next few steps will solve for
% K1, K2, K3 and r, where r is an arbitrary location of one of the
% poles of the controller.
D = [ b1   0    0    -1
      b2   b1   0    1-a1
      0    b2   b1   a1-a2
      0    0    b2   a2 ];
%

D1 = [ q2+1-a1    0    0    -1
       q3+a1-a2   b1   0    1-a1
       q4+a2      b2   b1   a1-a2
       q5         0    b2   a2 ];
%
D2 = [ b1   q2+1-a1    0    -1
       b2   q3+a1-a2   0    1-a1
       0    q4+a2      b1   a1-a2
       0    q5         b2   a2 ];
%
D3 = [ b1   0    q2+1-a1    -1
       b2   b1   q3+a1-a2   1-a1
       0    b2   q4+a2      a1-a2
       0    0    q5         a2 ];
%
D4 = [ b1   0    0    q2+1-a1
       b2   b1   0    q3+a1-a2
       0    b2   b1   q4+a2
       0    0    b2   q5 ];

d=det(D);
d1=det(D1);
d2=det(D2);
d3=det(D3);
d4=det(D4);
K1=d1/d
K2=d2/d
K3=d3/d
r=d4/d
%
% This section will implement closed loop simulation of
% PID controller and the DC motor
%
num1=[K1 K2 K3];            % numerator of PID controller
R1=[1,r];                   % poles of the PID controller
den1=poly(R1);              % calculate denominator
compnum=num1;
compden=den1;
procnum=num;
procden=den;
num5=conv(num1,num);        % Multiply numerators
den5=conv(den1,den);        % Multiply denominators
input('specify the time in secs over which you want to see the step: ')
t=ans;
n=t/T;                      % Calculate number of samples to see simulation
input('input a loop gain: ')
g=ans;                      % Enter any additional loop gain
u=ones(n,1);                % unit step over the simulation interval
closnum=g*num5              % numerator of closed loop system transfer function
closden=g*num5+den5         % denominator of closed loop system transfer function
y=dlsim(closnum,closden,u); % do discrete simulation
plot(y)
title('Position Step Response')
xlabel('Time in # of samples')
ylabel('Position in radians')
grid
pause
end
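The four linear equations solved above by Cramer's rule come from matching coefficients in (K1·z² + K2·z + K3)(b1·z + b2) + (z − 1)(z − r)(z² + a1·z + a2) = z⁴ + q2·z³ + q3·z² + q4·z + q5. A quick cross-check (with illustrative plant and controller numbers, not values from the listing) that the matrix D encodes this identity:

```python
# Verify the coefficient-matching system behind the Cramer's-rule
# solution in the listing above. All numbers are illustrative.
def polymul(p, q):
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

def polyadd(p, q):
    n = max(len(p), len(q))
    p = [0.0] * (n - len(p)) + p
    q = [0.0] * (n - len(q)) + q
    return [a + b for a, b in zip(p, q)]

b1, b2, a1, a2 = 0.2, 0.1, -1.4, 0.5      # plant coefficients
K1, K2, K3, r = 3.0, -2.0, 0.5, 0.3       # controller unknowns (chosen)

# Closed-loop characteristic polynomial, highest power first:
char = polyadd(polymul([K1, K2, K3], [b1, b2]),
               polymul(polymul([1, -1], [1, -r]), [1, a1, a2]))
q2, q3, q4, q5 = char[1], char[2], char[3], char[4]

# The linear system D*[K1 K2 K3 r]' = rhs used in the listing:
D = [[b1, 0,  0,  -1],
     [b2, b1, 0,  1 - a1],
     [0,  b2, b1, a1 - a2],
     [0,  0,  b2, a2]]
rhs = [q2 + 1 - a1, q3 + a1 - a2, q4 + a2, q5]
lhs = [sum(c * v for c, v in zip(row, [K1, K2, K3, r])) for row in D]

print(all(abs(l - t) < 1e-9 for l, t in zip(lhs, rhs)))  # True
```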


Appendix 2

%
% This file will do simulation of a closed loop deadbeat controller
%
% If the plant transfer function is G(z) = A/B
% and the controller function is given by H(z) = C/D
%
% then the closed loop response is given by
%
%       G(z)H(z)           A*C
%    -------------- = -----------
%     1 + G(z)H(z)     A*C + B*D
%
ggg=1;
while ggg==1        % Keep doing

%
% The next section will implement simulation of a dc servo motor
% the motor used in the example is a Pittman motor, model 9412
%
Kt=0.0207;          % Torque constant
Ke=Kt;
j=0.00006;          % Armature inertia + assumed load inertia
R=6.4;              % Resistance
input('input sampling period in milliseconds')
T=ans/1000;         % get sampling period
a=(Kt^2)/(R*j)      % a and b give the transfer function in s-domain
b=Kt/(j*R)
pause
ab=b/(a^2);         % Calculate values to transfer into z-domain
c=exp(-a*T);
d1=a*T;
d=(c-1+d1);
e=(1-c-(c*d1));
input('input numerator gain ')
Kg=ans;             % get numerator gain
b1=ab*d*Kg;         % numerator terms
b2=ab*e*Kg;
a1=-(1+c);          % denominator terms
a2=c;
num=[0 b1 b2]       % numerator of transfer function in z-domain
den=[1 a1 a2]       % denominator of transfer function in z-domain
[A,B,C,D]=tf2ss(num,den)
%

% This section will implement design of a deadbeat controller
% The form of the controller is given by the following equation
%
%             p0 + p1*z**-1 + p2*z**-2 + p3*z**-3 + ... + pn*z**-n
%   G  (z) = ------------------------------------------------------
%    db       q0 + q1*z**-1 + q2*z**-2 + q3*z**-3 + ... + qn*z**-n
%
% If the plant transfer function is given by
%
%             b0 + b1*z**-1 + b2*z**-2 + b3*z**-3 + ... + bn*z**-n
%   G (z) =  ------------------------------------------------------
%    p        a0 + a1*z**-1 + a2*z**-2 + a3*z**-3 + ... + an*z**-n
%
% then the following procedure can be used to design a
% deadbeat controller
p0=1/(b1+b2);       % reciprocal of the sum of plant numerator terms
p1=a1*p0;
p2=a2*p0;
q0=1;
q1=-b1*p0;
q2=-b2*p0;
%
% This section will implement closed loop simulation of the
% deadbeat controller and the DC motor
%
num1=[p0 p1 p2];    % Numerator of the controller
den1=[q0 q1 q2];    % denominator of controller
compnum=num1;
compden=den1;
procnum=num;
procden=den;
num5=conv(num1,num); % multiply both numerators
den5=conv(den1,den); % multiply both denominators
input('specify the time in secs over which you want to see the step: ')
t=ans;
n=t/T;               % Calculate number of samples to see simulation
input('input a loop gain: ')
g=ans;               % Enter additional closed loop gain
u=ones(n,1);
closnum=g*num5;      % numerator of closed loop system
closden=g*num5+den5; % Calculate denominator of closed loop system
y=dlsim(closnum,closden,u); % Do closed loop simulation
plot(y)
title('Position Step Response')
xlabel('Time in # of samples')
ylabel('Position in radians')
grid
pause
end
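The deadbeat property of the design in the comments above can be cross-checked in a few lines: with controller numerator p0·[1 a1 a2] and denominator [1, −b1·p0, −b2·p0], the closed loop algebraically reduces to p0·(b1·z⁻¹ + b2·z⁻²), so a unit step settles after exactly two samples. The plant coefficients below are illustrative assumptions, not the Pittman-motor values:

```python
# Cross-check of the deadbeat design: the closed loop reduces to
# p0*(b1*z^-1 + b2*z^-2). Plant coefficients are illustrative.
b1, b2 = 0.4, 0.3
a1, a2 = -1.5, 0.6
p0 = 1.0 / (b1 + b2)

# Closed-loop difference equation y(n) = p0*(b1*u(n-1) + b2*u(n-2))
u = [1.0] * 10                  # unit step reference
y = []
for n in range(10):
    un1 = u[n - 1] if n >= 1 else 0.0
    un2 = u[n - 2] if n >= 2 else 0.0
    y.append(p0 * (b1 * un1 + b2 * un2))

print(y[2:])  # samples from n = 2 onward equal 1.0 (within rounding)
```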


Appendix 3

% This file will do simulation of a closed loop system with
% a DC servo motor and a state controller/estimator. The estimator
% will make a full estimate of states from position measurement.
% The state controller is given by the following equations
%
% x(n+1) = A*x(n) + B*u(n) + L*[y(n) - C*x(n)]  --- state estimation
% y = C*x(n)          --- estimation of measured variable
% u = -K*x(n)         --- control law
%
% States of the system will be position and velocity
%
aaa=1;
while aaa==1        % Do simulation continuously - to exit do CTRL C
%
% The first section will build model of dc motor
% The motor used in this example is a Pittman motor, model 9412
%
clear;
Kt=0.207;           % Torque constant
Ke=Kt;              % Back emf constant
j=0.0006;           % Armature inertia + assumed load inertia
R=6.4;              % resistance
a=(Kt^2)/(R*j)      % a and b give the transfer function in s-domain
b=Kt/(j*R)
pause
num=[0 0 b]         % define numerator and denominator of transfer function
den=[1 a 0]
pause
F=[0 1; 0 -a]       % state representation of motor in continuous time
G=[0; b]
% convert state model to discrete form
input('Input sampling period in milliseconds ')
T=ans/1000;         % get sampling period
[A,B]=c2d(F,G,T)
C=[1 0]             % Assume position measurement
%
% The next section will implement design of the state controller
% and observer using pole placement techniques. Pole locations will
% have to be input for the controller. The estimator poles will be
% chosen to be faster than the controller.
%
'Enter 0 if you will have complex poles'
input(' and 1 if you will have real poles: ')
X=ans;
if X==0
input('input real part of pole location: ')
r1real=ans;
input('input imaginary part of pole location: ')
r1imag=ans;
i=sqrt(-1);
r=[r1real+i*r1imag; r1real-i*r1imag];
end
if X==1
input('input location of pole 1: ')
r1=ans;
input('input location of pole 2: ')
r2=ans;
r=[r1; r2];
end
K=place(A,B,r)      % do pole placement for controller
l=r/2               % choose observer poles at 1/2 distance from origin
ll=place(A',C',l)   % do pole placement for observer
L=ll'
%
% The next section will do simulation of the closed loop system
%
D=[0]               % direct link matrix is 0
input('input reference signal in radians: ')
re=ans;
N=[1;0]             % position command will be input
xr=N*re             % reference state
input('specify time in sec over which you want to see step: ')
t=ans;
n=t/T;              % calculate number of samples
x=[0;0]             % actual states - initial values
xe=[0;0]            % estimated states - initial values
u=0                 % control action - initial value
%
% This section will do simulation of the motor
%
for i=1:n,
x = A*x + B*u;      % simulation of actual plant
y(i)= C*x;
%
% This section will do simulation of the controller and estimator
%
u = -K*xe + K*xr;   % implement control law
yu(i) = u;
xe = A*xe + B*u + L*(y(i)-C*xe);  % do state estimation
ye(i) = C*xe;       % estimated position
end
clg
plot(y)             % plot actual position
hold on
plot(ye,'+g')       % plot estimated position
ylabel('Position')
xlabel('Time in # of samples')
title('Step response of State Controller/Estimator')
text(0.60,0.40,'---- actual position','sc')
text(0.60,0.30,'**** estimated position','sc')
grid
pause
hold off
clg
subplot(211),plot(y),title('Step response - Actual Position'),
subplot(212),plot(ye),title('Step response - Estimated Position'),
pause
plot(yu),title('Control effort'),
grid
ylabel('u')
xlabel('Time in # of samples')
end
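The estimator update used in the listing, x̂(n+1) = A·x̂(n) + B·u(n) + L·(y(n) − C·x̂(n)), can be sketched outside MATLAB as well. The 2-state model and observer gain below are illustrative assumptions (a stable A − L·C), not the discretized Pittman motor:

```python
# Sketch of the observer update xe(n+1) = A*xe + B*u + L*(y - C*xe).
# Matrices are illustrative; the estimation error decays because
# A - L*C has eigenvalues inside the unit circle.
def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

A = [[1.0, 0.1], [0.0, 0.9]]
B = [0.0, 0.1]
C = [1.0, 0.0]
L = [0.5, 0.5]                 # observer gain (stable error dynamics)

x  = [1.0, -0.5]               # true states (position, velocity)
xe = [0.0,  0.0]               # estimated states start at zero
u = 1.0
for _ in range(100):
    y = C[0] * x[0] + C[1] * x[1]              # measured position
    innov = y - (C[0] * xe[0] + C[1] * xe[1])  # innovation
    Ax, Axe = matvec(A, x), matvec(A, xe)
    x  = [Ax[0] + B[0] * u, Ax[1] + B[1] * u]
    xe = [Axe[0] + B[0] * u + L[0] * innov,
          Axe[1] + B[1] * u + L[1] * innov]

err = max(abs(x[0] - xe[0]), abs(x[1] - xe[1]))
print(err)  # estimation error has decayed essentially to zero
```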


Appendix 4

%
% This program will do simulation of Linear Quadratic Regulator (LQR)
% and a stationary Kalman Filter.
% The controller and estimator are given by the following equations:
%
% x(n+1) = A*x(n) + B*u(n) + Lp*[y(n) - C*x(n)]  --- state equation
% y = C*x(n)          --- estimation of measured variable
% u = -K*x(n)         --- control law
%
% K is optimal gains and Lp is kalman gains
%
aaa=1;
while aaa==1        % run simulation continuously - to exit use CTRL C
%
% This section will build model of a dc servo motor
% The motor used in the example is a Pittman motor, model 9412
%
clear;
Kt=0.207;           % Torque constant
Ke=Kt;              % back e.m.f. constant
j=0.0006;           % rotor inertia + assumed load inertia
Res=6.4;            % resistance
a=(Kt^2)/(Res*j)
b=Kt/(j*Res)
F=[0,1;0,-a]        % state representation in continuous time
G=[0;b]
input('Input sampling period in milliseconds ')
T=ans/1000;         % get sampling period
[A,B]=c2d(F,G,T)    % convert state model to discrete time
C=[1 0]             % Assume position measurement
%
% The next section will design the LQ Regulator and the
% Kalman filter. The cost functions will be input
% to calculate the optimal gains, and noise characteristics
% will be input to calculate Kalman gains
%
input('enter cost function matrix Q:')
Q=ans;
input('enter cost function R: ')
R=ans;
input('enter measurement noise covariance Rv:')
Rv=ans;
input('enter disturbance matrix g:')
g=ans;
input('enter disturbance covariance matrix Rw:')
Rw=ans;
K=dlqr(A,B,Q/T,R*T)       % calculate optimal gains
Lp=dlqe(A,g,C,Rw*T,Rv/T)  % calculate Kalman gains
pause
%


%
% The next section will do simulation of the closed loop
% system
%
D=[0]               % no direct link input
input('input reference signal in radians: ')
re=ans;
N=[1;0]             % position command will be assumed
xr=N*re             % reference state
input('specify time in sec over which you want to see step: ')
t=ans;
n=t/T;              % calculate number of samples to do simulation
x=[0;0];            % actual states - initial values
xe=[0;0];           % estimated states - initial values
yv=rand('normal');  % characteristics for injected sensor noise
yv=rand(n,1);
uv=rand('normal');  % characteristics for disturbance noise
uv=rand(n,1);
u=0;                % control signal - initial value
%
% Next section will do simulation of the motor
%
for i=1:n,
x = A*x + B*u;          % simulation of actual plant
y(i)= C*x + yv(i,1);    % measured position
%
% Next section will simulate regulator and kalman filter
%
u = -K*x + K*xr + uv(i,1);  % control action with disturbance
yu(i) = u;
xe = A*xe + B*u + Lp*(y(i)-C*xe);  % state estimator
ye(i) = C*xe;           % estimated position
end
clg
plot(y,'r');            % plot actual position
hold on
plot(ye,'.g')           % plot estimated position
title('Measured position vs Estimated position')
grid
text(0.60,0.24,'---- measured position','sc')
text(0.60,0.18,'.... estimated position','sc')
xlabel('Time in # of samples')
ylabel('position')
pause
hold off
clg
plot(y),title('Step response - Measured Position'),
grid
xlabel('Time in # of samples')
ylabel('position')
pause
clg
plot(ye),title('Step response - Estimated Position'),
grid
xlabel('Time in # of samples')
ylabel('position')
pause
clg
plot(yu),title('Control effort with disturbance'),
xlabel('Time in # of samples')
ylabel('Control u')
grid
end



Matrix Oriented Computation Using Matlab
Jeffrey C. Kantor
Department of Chemical Engineering
University of Notre Dame
Notre Dame, IN 46556
Phone: (219) 239 5797
Email: jeff@ndcheg.cheg.nd.edu
Fax: (219) 239 8007
Matlab is a tool for interactive numerical computation. It contains as built-in functions essentially all of the numerical linear algebra algorithms in LINPACK and EISPACK. Coupled with a programmable interpreter and good scientific graphics capability, Matlab can be used for algorithm development in many areas of engineering and science.
To demonstrate some of its functionality, I've included in this article several examples where Matlab has proven useful in my own teaching and research activities. These examples are not comprehensive since they neither fully exploit all of the features of Matlab nor do they show all of our applications. The examples were chosen only because they seemed to be relatively straightforward and self-contained illustrations

1    Some Background

Matlab was originally conceived by Cleve Moler just over a decade ago while he was teaching numerical methods at the University of New Mexico. He found it frustrating to simultaneously teach numerical methods and the programming tricks it takes to implement them. The effort required to write numerically sophisticated FORTRAN code can simply overwhelm a student and not leave much time left over for doing applications. So to address the problem, Cleve Moler wrote a simple interpreter in portable FORTRAN for a high-level matrix oriented language. The interpreter was based on one given by N. Wirth for a model language called PL/0 [12]. Naturally, the numerical algorithms were based on the recently completed Linpack and Eispack projects to which Cleve Moler had made substantial contributions. This primitive Matlab interpreter was evidently quite successful and ported to a number of machines during the late 1970's and early 1980's, undergoing minor revisions in the process.
Several companies subsequently adopted Matlab as

Reprinted, with permission, from author.

a platform for developing and delivering commercial control synthesis and analysis software. Systems Control Technology produced a package called Control-C, and at about the same time, Matrix-X was developed by Integrated Systems, Inc. Both companies found many shortcomings in the original Matlab interpreter including workspace constraints, lack of function definitions, and overall performance. The Matlab interpreter was largely rewritten at each of these companies to support their products.
A few of the professional staff from these companies joined together to form a new company called the MathWorks, Inc. There they produced an entirely new version of Matlab written in C for portability and efficiency. The interpreter was greatly enhanced to include an ability for the user to program Matlab functions. They also developed an integrated facility for producing a basic set of publication quality scientific graphs. The MathWorks currently markets this version of Matlab for a variety of hardware platforms; the details are given at the end of this article.
Beyond the basic interpreter, there are several
Beyond the basic interpret.er, there are several
'toolboxes' intended for specific application areas. A
'toolbox' is typically a collection of functions and
scripts that implement specialized numerical algorithms. These generally are not finished applications
in the sense of a well-developed user interface with
a lot menUs and the like, but are rather integrated
collections of algorithms that you either can use directly or build into your own script.s. It is sort of
like using a FORTRAN subroutine library, but with
the advantage of being able to directly execute the
routines in the interactive MaUab environment. The
Math Works distributes a Signal Processing Toolbox
with Matlab, and markets several others including
a Control Design Toolbox, Robust Control Toolbox,
System Identification Toolbox, a Chemometrics Toolbox. There are also tool boxes commercially available
-from third parties, in addition to a number that University researchers may have put together for their


own purposes.
Now for the confusing part. There is a 'public domain' IBM PC version of Matlab. In addition, several universities sell very low cost versions of Matlab available for the Macintosh and IBM PC. These are based on Moler's original FORTRAN code, sometimes with enhanced graphics and macro writing facilities. A person should be careful with these since they are not of the same calibre as the MathWorks version and simply don't include the tools necessary for doing real work. Nor will the toolboxes cited above work with these versions. A corollary of this advice is to not let an exposure to these other versions color your view of Matlab.

2    What is Matlab?

Figure 1: Noisy image of a disk.

In some ways, the Matlab interpreter vaguely resembles a cross between BASIC and APL in the sense that it is programmable and endowed with a rich set of operators for matrix manipulations. The key distinction is that Matlab incorporates well-developed and reliable algorithms for numerical linear algebra. Moreover, the built-in graphics capability is often entirely sufficient for presenting results in final published form. (The graphics in this article, for example, were pasted in directly from Matlab.)
Let me give an example of how these capabilities can be used for day-to-day 'scratchpad' kind of calculations that pop up. A few days ago a colleague of mine walked into my office with an idea for processing video images to enhance the edges of discs that appear in the picture. He acquires these images in his experiments on concentrated suspensions of noncolloidal particles. He started off by saying (roughly) 'Suppose you have a noisy image of a disc' at which point I stopped him, turned on my computer, and typed the following commands in Matlab

x = -1:.1:1;             % X mesh
y = -1:.1:1;             % Y mesh
[xx,yy] = meshdom(x,y);  % 2D mesh
x = sqrt(xx.^2+yy.^2)

or a package of FORTRAN subroutines such as given in Press
et al. [11]. On the one hand, FORTRAN remains as
the principal programming language for numerically intensive engineering applications, therefore a facility with FORTRAN is highly desirable. Moreover, our students all take a required Freshman Engineering course that teaches the elements of FORTRAN.
On the other hand, it is significantly faster to write and test small codes using the high-level Matlab interpreter. The students also indicated a strong preference for microcomputer-based software tools which could be used on various workstation clusters about campus rather than be tied to a single minicomputer located in the Engineering College.
On balance, I felt that a more productive environment would allow the course to survey more topics with more emphasis on applications, so I chose to use Matlab. I have been pleased to note how students have transferred their new computational skills to other courses. They continue to use Matlab to do routine laboratory calculations, data fitting, and computations in their Senior Design courses.
Recent textbooks have appeared which incorporate various amounts of Matlab into the text and exercises. The third edition of the classic linear algebra text by Noble [10] contains a number of Matlab exercises and examples. Another linear algebra textbook by Hill is basically centered on Matlab, with chapters regarding programming technique [7]. It is so complete that it could serve as a low-cost Matlab manual for students. The Handbook for Matrix Computations is useful to


anyone doing numerical linear algebra, and includes a survey of relevant Fortran, BLAS, Linpack, as well as Matlab [4].
Lennart Ljung's book on system identification [9] is closely coupled to the System Identification Toolbox. The toolbox, in fact, was written by Ljung, and the text provides excellent technical documentation.
The following two sections present two examples of
incorporating Matlab into classroom activities.

4    Classroom Example: Linear Programming

Three years ago our Department introduced a new required course for our undergraduate majors entitled Computer Methods for Chemical Engineers. This course is normally taken by Spring semester Juniors after having completed the normal Mathematics sequence, and before commencing the two-semester Senior design sequence. The course covers elements of numerical methods with application to problems in chemical engineering.
Linear programming is discussed in some detail in the course because it is one of those skills that an engineer can transfer to a wide variety of problem areas. A key teaching goal is for the student to be able to recognize a problem as a linear program, and then to formulate the requisite objective and constraints. I prefer to use the Active Set method as outlined by Fletcher [5] to teach the principles behind linear programming. It seems to leave the student with a more intuitive understanding of the role of constraints and their sensitivities than does the usual presentation of the Simplex method. If the students can understand the relatively simple strategy to solving a linear program, it is then much easier to motivate and teach the numerical tricks it takes to implement an efficient algorithm.
The linear programming problem is formulated as minimizing the linear objective

    min_x  z = c^T x

where x is an n-vector, subject to m linear constraints

    a_i^T x >= b_i,   i = 1, 2, ..., m

where n <= m. If positivity constraints are present, then these are explicitly included in the constraint list. It is easy to show that if the feasible region is bounded, then the optimum will always be found at a vertex defined by the intersection of n active constraints.

The basic algorithm is, firstly, to find any active set of n constraints forming a feasible vertex, then to move systematically from one vertex to another so as to reduce the value of the objective function at each step. Each step of the algorithm is defined by just two rules. The first rule identifies a constraint to throw out of the active constraint set in order to decrease the objective. The second rule determines which constraint to add to the active set to establish a new feasible vertex.

Let A be the set of active constraints that determine a feasible vertex. The vertex is given by solving a set of linear equations to give

    x = A_A^{-1} b_A

where A_A and b_A are constructed from the coefficients of the active constraints. Now suppose the right hand side of each active constraint is altered by a small positive amount e_i. Positive values of the e_i correspond to feasible perturbations, while negative values would cause constraint violations. As a result of a feasible perturbation, the vertex then shifts from x to x_e where

    x_e = A_A^{-1} b_A + A_A^{-1} e

Substituting x_e into the objective function yields

    z = c^T A_A^{-1} b_A + c^T A_A^{-1} e

The second term shows the change in the objective function due to independent perturbations in the active constraint set. Thus the elements of the row vector

    lambda = c^T A_A^{-1}

play the role of 'sensitivity coefficients' revealing how the objective function responds to feasible perturbations in the active constraint set. If any element of lambda is negative, then the objective function can be reduced by removing that constraint from the active set. Just as in the Simplex method, we choose to remove the constraint corresponding to the most negative element of lambda.

Let lambda_p be the most negative element of lambda. Then the effect of removing the p-th active constraint is given by

    x' = x + e_p s_p

where s_p is the p-th column of A_A^{-1}. How large can e_p be before some other constraint becomes active? This can be computed explicitly as

    e_p = min_i (b_i - a_i^T x) / (a_i^T s_p)

86

The search is done over all constraints not in the active set (i not in A), but only for those constraints in which the right hand side becomes smaller as e_p increases (a_i^T s_p < 0).
The constraint which realizes the minimum e_p is exactly the one to be added to the active constraint set. Having done that, the procedure repeats itself until no further improvement in the objective is possible, i.e., until all of the sensitivity coefficients are non-negative.
This basic algorithm cleanly translates to the following Matlab function. The function lp takes four arguments specifying the coefficients on the left and right hand sides of the constraints, coefficients of the objective function, and an initial feasible constraint set. The function returns the optimal value of the objective function, the optimal solution for the decision variables, the value of the sensitivity coefficients, and the final active constraint set.
function [z,x,lamb,activ] = lp(a,b,c,feas)

% Initialization
[m,n] = size(a);
activ = feas(:);

% Compute Initial Vertex
ainv = inv(a(activ,:));
x = ainv*b(activ,:);
lamb = c*ainv;

while any(lamb < 0),

  % Find which constraint to drop, p
  [tmp,p] = min(lamb);
  sp = ainv(:,p);

  % Find which constraint to add, q
  alpha = Inf;
  q = 0;
  for i=1:m,
    if ~any(i==activ),
      den = a(i,:)*sp;
      if den < 0,
        tmp = (b(i)-a(i,:)*x)/den;
        if tmp < alpha,
          alpha = tmp;
          q = i;
        end
      end
    end
  end

  % Recompute x, lamb, and z
  activ(p) = q;
  ainv = inv(a(activ,:));
  x = ainv*b(activ,:);
  lamb = c*ainv;
end

z = c*x;    % Compute objective function

This example uses several of the Matlab control structures to simplify the coding process. The construction

while any(lamb < 0),
  [ ... ]
end

controls the main iteration over vertices of the feasible region. The iteration continues as long as any element of the vector lamb is less than zero. Nested within this loop is an iteration

for i=1:m,
  [ ... ]
end

which specifies a conventional indexed iteration loop where i successively takes values between 1 and m. Within this loop are several nested conditional statements such as

if ~any(i==activ),
  [ ... ]
end

In this case, the conditional code is executed if 'not any' of the elements of the vector activ are equal to i. The practice of indenting nested control structures graphically reveals program flow and is strongly urged on the students.
This function is a zeroth-order cut at a practical algorithm for linear programming; it will work for small problems but will be inefficient and error prone when applied to larger problems. As exercises, the students are asked to correct several of the glaring deficiencies. Foremost is to avoid the repeated inversions of the active constraint matrix with a more efficient procedure using rank-one updates (i.e., the Sherman-Morrison formula). Having done this, the algorithm is then identical to the usual revised simplex method as discussed in most textbooks. Other exercises in algorithm development could include writing a code to identify an initial feasible constraint set, or to modify the algorithm to handle equality constraints.
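For readers without Matlab, the two rules of the active-set method translate almost line for line into pure Python. The sketch below is an illustrative transcription, not the article's code: it fixes n = 2 decision variables so that the active-set inverse can be written out by hand, and it omits any unboundedness check.

```python
# Pure-Python transcription of the active-set LP (illustrative only).
# Solves: min c^T x  subject to  a_i^T x >= b_i, with n = 2 variables.

def inv2(M):
    """Inverse of a 2x2 matrix given as nested lists."""
    (p, q), (r, s) = M
    d = p * s - q * r
    return [[s / d, -q / d], [-r / d, p / d]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def vecmat(v, M):
    return [sum(v[i] * M[i][j] for i in range(2)) for j in range(2)]

def lp(a, b, c, feas):
    activ = list(feas)                      # indices of active constraints
    while True:
        ainv = inv2([a[i] for i in activ])
        x = matvec(ainv, [b[i] for i in activ])
        lamb = vecmat(c, ainv)              # sensitivity coefficients
        if min(lamb) >= 0:                  # all non-negative: optimal
            break
        p = lamb.index(min(lamb))           # rule 1: constraint to drop
        sp = [row[p] for row in ainv]
        alpha, q = float("inf"), None
        for i in range(len(a)):             # rule 2: constraint to add
            if i in activ:
                continue
            den = a[i][0] * sp[0] + a[i][1] * sp[1]
            if den < 0:
                tmp = (b[i] - (a[i][0] * x[0] + a[i][1] * x[1])) / den
                if tmp < alpha:
                    alpha, q = tmp, i
        activ[p] = q
    z = c[0] * x[0] + c[1] * x[1]
    return z, x, lamb, activ
```

For example, maximizing x1 + x2 over the unit box (constraints -x1 >= -1, -x2 >= -1, x1 >= 0, x2 >= 0 with objective c = (-1, -1)) starts from the vertex (0, 0) and terminates at (1, 1) with z = -2 after two active-set exchanges.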


5    Classroom Example: Process Control

The next example illustrates the use of several toolboxes to do model identification and a simple control design. Students taking a graduate course in Advanced Process Control during Fall, 1987, were assigned a homework project in which they were to analyze input-output data for a small gas furnace. They were to first obtain a transfer function model, then use the model to design PID, minimum variance, and optimal LQG controllers. The three controllers were to be evaluated by simulation. The students were given one week to complete the assignment.
The gas furnace data was adapted from Appendix B of Box and Jenkins [2] and consisted of 300 pairs of input-output measurements {u(k), y(k)} obtained at 9-second intervals. The manipulated input is gas flowrate, and the measured output is the percentage of CO2 in the stack gas. These data were given to the students as a Matlab file called GasFurnaceData. The file can be read and plotted using the following commands to produce the plots shown in Figure 3.
Figure 3: Input-Output data for a gas furnace. The data is adapted from Appendix B of Box and Jenkins. [plots omitted]

% Read data record
GasFurnaceData;
udata = u;
ydata = y;

% Plot input-output data
subplot(211); % Specifies upper plot
plot(udata);
title('Gas Flow (Input)');
ylabel('CFM');

subplot(212); % Specifies lower plot
plot(ydata);
title('C02 Composition (Output)');
ylabel('% C02');
xlabel('Time');

The first task for the students was to identify a discrete-time transfer function model for the gas furnace. A non-parametric spectral analysis provides a starting point for estimating model order. This is done with the following commands:

y = detrend(ydata);
u = detrend(udata);
z = [y u];
g0 = spa(z);     % System_ID toolbox
bodeplot(g0);    % System_ID toolbox

The function detrend (from the Signal Processing Toolbox) is used to remove means and linear trends from the input and output data series. Then spa (from the System Identification Toolbox) is applied to construct a transfer function estimate that is stored as g0. The transfer function is displayed using bodeplot to give the result shown in Figure 4.
There are a number of possible models that could be used to describe this data. Of these, an ARMAX model in the form


    y(t) = [B(q)/A(q)] u(t-nk) + [C(q)/A(q)] e(t)

or, explicitly, as

    y(t) = (b_1 q^-1 + ... + b_nb q^-nb)/(1 + a_1 q^-1 + ... + a_na q^-na) u(t-nk)
         + (1 + c_1 q^-1 + ... + c_nc q^-nc)/(1 + a_1 q^-1 + ... + a_na q^-na) e(t)

does an adequate job (q^-1 is the backward shift operator). The following commands use functions in the System Identification Toolbox to fit an ARMAX model for the case na = nb = nc = 2, nk = 1. The fitted transfer function is then evaluated and a Bode plot is displayed to compare the fitted transfer function to the previous non-parametric estimate.

Figure 4: A nonparametric estimate of the transfer function between the input and output of the gas furnace based on the data in Figure 3. The results are computed using the System Identification Toolbox. [plots omitted]

th = armax(z, [2 2 2 1]);
g = trf(th);
bodeplot([g g0]);
The resulting Bode plots shown in Figure 5 demonstrate a reasonable fit of the data using a second order model. 'Goodness of fit' can also be explored by computing an estimated autocorrelation function for the residuals, and an estimated cross correlation between the residuals and the input. This is done with the command e = resid(z,th) to produce the results shown in Figure 6.
These plots indicate that there is little significant correlation left in the residuals, so there is no statistical justification for employing higher order models. (Attempting to fit a first-order model to this data provides an example where statistically significant correlations do remain in the residuals.) The fitted model coefficients are displayed as follows:

present(th)
This matrix was created by the command
ARMAX on 2/28 1989 at 10:47
Loss fcn: 0.09217    Akaike's FPE: 0.09593
Sampling interval 1

Figure 5: Comparison between the nonparametric estimate of the gas furnace transfer function, and the transfer function obtained by fitting a second order ARMAX model. A good fit is obtained except at relatively high frequencies where noise is expected to be the dominant contribution. [plots omitted]

The polynomial coefficients and their standard deviations are

B =
         0   -6.3133   16.9243
         0    1.9007    2.3403

A =
    1.0000   -1.3899    0.6299
         0    0.0618    0.0480

C =
    1.0000    0.1386    0.1307
         0    0.0858    0.0659
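The fitted polynomials define a simple difference equation that can be simulated directly. The pure-Python sketch below reads the coefficients from the table above (the ordering of the two B coefficients is read from a degraded reproduction and should be treated as an assumption, though the steady-state gain B(1)/A(1) does not depend on it):

```python
# Deterministic step response of the fitted ARMAX plant with e(t) = 0:
#   y(t) = -a1*y(t-1) - a2*y(t-2) + b1*u(t-1) + b2*u(t-2)
# Coefficient values come from the present(th) table above; the
# ordering within B is an assumption.

a = [1.0, -1.3899, 0.6299]     # A(q) = 1 + a1 q^-1 + a2 q^-2
b = [0.0, -6.3133, 16.9243]    # B(q) = b1 q^-1 + b2 q^-2

def step_response(a, b, nsteps):
    """Simulate A(q) y(t) = B(q) u(t) for a unit step in u."""
    y = [0.0] * nsteps
    u = [1.0] * nsteps
    for t in range(nsteps):
        acc = 0.0
        for k in range(1, len(a)):          # autoregressive part
            if t - k >= 0:
                acc -= a[k] * y[t - k]
        for k in range(1, len(b)):          # input part
            if t - k >= 0:
                acc += b[k] * u[t - k]
        y[t] = acc
    return y

y = step_response(a, b, 300)
gain = sum(b) / sum(a)    # steady-state gain B(1)/A(1)
```

The A roots have modulus sqrt(0.6299), about 0.79, so the response settles; the final value approaches the static gain of roughly 44.2.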

At this point in the exercise, the student has developed a transfer function model for the gas furnace that can be used for designing simple control systems. Omitting the details, an optimal LQG controller can be designed to minimize the loss function

    J_lq = E[ y^2(k) + rho u^2(k) ]

by the computational method outlined in Chapter 12

Figure 6: The auto- and cross-correlation functions of the residuals obtained after fitting a parametric model provides a simple test of model fit. In this case, a second-order ARMAX model appears to adequately account for all of the essential correlations in the gas furnace data. The horizontal lines mark the 96% confidence intervals for the null hypothesis. [plots omitted]
of Astrom and Wittenmark [1]. The necessary calculations are encapsulated in the function dlqg given below. This function makes use of others defined in the Control Systems Toolbox. These are dlqr, which computes a solution to the algebraic discrete time Riccati equation, and ss2tf, which converts a state-space model representation to a transfer function description.
function [s,r] = dlqg(th,rho)
%DLQG
%  [s,r] = DLQG(theta,rho) computes
%  the LQ optimal controller to
%  minimize the objective function
%
%        2            2
%    E[ y (k) + rho*u (k) ]
%
%  The resulting controller is given
%  in transfer function form
%
%              S(q)
%    u(k) = - ------ y(k)
%              R(q)
%
%  The plant model is given by theta
%  in the standard form of the System
%  Identification Toolbox.
%
%  Ref: Chapter 12, Astrom and Wittenmark

%  J.C. Kantor, 3 December 1987

[a,b,c,d,f] = polyform(th);
a = conv(a,f);
na = length(a)-1;
nb = length(b)-1;
nc = length(c)-1;
n = max([na,nb,nc]);
A = [zeros(n,1),[eye(n-1); zeros(1,n-1)]];
A(1:na,1) = -a(2:na+1)';
B = zeros(n,1);
B(1:nb,1) = b(2:nb+1)';
K = zeros(n,1);
K(1:nc,1) = c(2:nc+1)';
K = K + A(:,1);
C = [1,zeros(1,n-1)];
L = real(dlqr(A,B,C'*C,rho));
[s,r] = ss2tf(A-K*C-B*L,K,L,[0],1);

Letting rho = 10^-5 gives an approximation to minimum variance control. The resulting controller is given by u(t) = -Gc(q)y(t) where

         S(q)      0.0762 q^-1 - 0.0512 q^-2
    Gc = ---- = ------------------------------
         R(q)    1 + 1.1555 q^-1 + 1.6364 q^-2

Finally, the student can compute the simulated response of the closed-loop gas furnace control system. The closed-loop transfer function between the output and exogenous disturbances e(t) is given by

                C(q)R(q)
    y(t) = ------------------- e(t)
           A(q)R(q) + B(q)S(q)

The following sequence of commands computes the products of polynomials using the Matlab convolution operator conv, does a simulation of the closed-loop plant models, and displays the results.

% Compute control and closed-loop
% transfer functions
[s,r] = dlqg(th,0.00001);
[a,b,c] = polyform(th);
p = conv(a,r) + conv(b,s);
qy = conv(c,r);
qu = conv(c,s);

% Construct a white noise input
rand('normal');
w = 0.1*rand(200,1);

% Output simulation
subplot(211);
plot(dlsim(qy,p,w));
title('Output');
ylabel('C02');

% Control simulation
subplot(212);
plot(dlsim(qu,p,w));
title('Control Action');
ylabel('Gas Flow');

Figure 7: Response of the gas furnace with LQ control in place. [plots omitted]

The simulated performance of the closed-loop regulator results in a 20.5% reduction in the variance of the CO2 stack gas composition compared to the case of no control. The results are shown in Figure 7. Many additional aspects of the problem can be readily treated using simple Matlab procedures.
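The polynomial products A(q)R(q) and B(q)S(q) used above are plain convolutions of coefficient sequences, so Matlab's conv has a short pure-Python analogue. The sample polynomials below are made up for illustration and do not reproduce the fitted gas furnace model:

```python
def conv(p, q):
    """Polynomial product of coefficient lists, like Matlab's conv."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

def polyadd(p, q):
    """Coefficient-wise sum, padding the shorter list with zeros."""
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0.0) + (q[i] if i < len(q) else 0.0)
            for i in range(n)]

# Closed-loop characteristic polynomial A(q)R(q) + B(q)S(q),
# with made-up low-order coefficients standing in for the fitted ones.
a, b = [1.0, -1.4, 0.63], [0.0, 1.0]
r, s = [1.0, 0.5], [0.2, -0.1]
p = polyadd(conv(a, r), conv(b, s))   # coefficients of q^0 .. q^-3
```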

6    Summary Remarks (Why Matlab Can't be Used for Everything?)

In spite of its many useful features, Matlab is not an appropriate tool for all applications. While it is difficult to draw precise boundaries, there are some general guidelines.

• Matlab is useful for prototyping algorithms.

Matlab is a high-level language with a large number of primitives so that even complex algorithms can be written in a minimal number of lines. The interpreter provides a convenient mechanism for debugging numerical algorithms. For example, simply by deleting the semicolon at the end of a line, the intermediate results of any computation are printed. There are also facilities for introducing keyboard interrupts and monitoring intermediate values.

• Matlab is useful when you need results fast.

In addition to the points given above, the available toolboxes and graphics facilities are often sufficient for solving problems from start to finish, including the production of publication graphics.

• Matlab is useful when your problems are 'vectorizable'.

Matlab exhibits excellent floating point performance when using its matrix oriented primitive operations. However, because it is an interpreted (not compiled) language, it suffers some performance degradation on scalar and non-numeric operations. Some algorithms, such as for integrating ordinary differential equations, can be quite slow in Matlab for this reason.

• Matlab does not replace either FORTRAN or specialized application software.

Matlab is not a replacement for a FORTRAN compiler and a good package of scientific subroutines. It is not suited to truly large scale computation, nor can it be used effectively in a batch mode. Linear programming provides an example of the tradeoffs. Straightforward Matlab LP codes might be useful for problems with, say, up to a few hundred constraints. This is no match for commercial codes that can handle many thousands of constraints.

• Matlab is not very effective for non-numerical algorithms.

Matlab treats essentially all information as matrices of real or complex floating point numbers. The simple facilities for handling textual data in Matlab are inadequate for anything beyond manipulating titles and labels. It would be a mistake to use Matlab to do data base programming, for example, or for writing compilers.

7    Where to Obtain Matlab

Academic institutions can purchase Matlab directly from the MathWorks, Inc. Their address is

The MathWorks, Inc.
21 Eliot Street
South Natick, MA 01760
Phone: (508) 653-1415
Fax: (508) 653-2997
E-mail: tung@mathworks.com

The MathWorks has special licensing provisions for classroom and educational use. For commercial uses, Matlab is also distributed by

MGA, Inc.
73 Junction Square Dr.
Concord, MA 01742
Phone: (508) 369-5115

Versions of Matlab are available for IBM PC, AT, and 80386 platforms, including Weitek support. Also for the Apple Macintosh (with and without support for the 68881), Sun and Apollo workstations, DEC Vax, Gould, and Ardent machines. The Ardent version has facilities for 3D solids rendering.

References

[1] Astrom, Karl J., and Bjorn Wittenmark (1984). Computer Controlled Systems. Prentice-Hall, Englewood Cliffs, NJ.
[2] Box, George E. P., and Gwilym M. Jenkins (1976). Time Series Analysis: Forecasting and Control. Rev. Edition. Holden-Day, San Francisco.
[3] Chapra, Steven C., and Raymond P. Canale (1988). Numerical Methods for Engineers, Second Edition. McGraw-Hill, New York.
[4] Coleman, Thomas F., and Charles Van Loan (1988). Handbook for Matrix Computations. SIAM, Philadelphia.
[5] Fletcher, R. (1987). Practical Methods of Optimization, Second Edition. John Wiley & Sons, New York.
[6] Forsythe, George E., Michael A. Malcolm, and Cleve B. Moler (1977). Computer Methods for Mathematical Computations. Prentice-Hall, Englewood Cliffs, NJ.
[7] Hill, David R. (1988). Experiments in Computational Matrix Algebra. Random House, New York.
[8] Kahaner, David, Cleve Moler, and Stephen Nash (1989). Numerical Methods and Software. Prentice-Hall, Englewood Cliffs, NJ.
[9] Ljung, Lennart (1987). System Identification: Theory for the User. Prentice-Hall, Englewood Cliffs, NJ.
[10] Noble, Ben, and James W. Daniel (1988). Applied Linear Algebra, Third Edition. Prentice-Hall, Englewood Cliffs, NJ.
[11] Press, William H., Brian P. Flannery, Saul A. Teukolsky, and William T. Vetterling (1986). Numerical Recipes - The Art of Scientific Computing. Cambridge University Press, Cambridge.
[12] Wirth, Niklaus (1976). Algorithms + Data Structures = Programs. Prentice-Hall, Englewood Cliffs, NJ.

ISI PRODUCT FAMILY
Application Note

Modeling and Analysis of a 2-Degree-of-Freedom Robot Arm

This application note describes the modeling and analysis of a two-linkage robot arm using MATRIXx™ and SystemBuild™. The nonlinear equations of motion of the system are presented, followed by the SystemBuild block diagram description of those equations. The SystemBuild model is linearized and an optimal regulator is designed based on the linearized model. The response of the closed-loop system is found through simulation and the results are plotted.

Modeling

Consider the two-linkage robot arm shown in Figure 1. Both links are assumed to be perfectly rigid and are connected by a frictionless pin joint. The system thus has two degrees of freedom, θ1 and θ2. There are two control inputs to the system, the motor torques τ1 and τ2 at the rotating joints. For a particular set of arm masses, lengths, and inertias, the nonlinear
equations of motion for the system are:

    θ1'' = (1/I) [ τ1 - τ2 + 0.01 θ1' θ2' sin 2θ2 ]    (1)

    θ2'' = τ2/0.01 - (1/2) θ1'^2 sin 2θ2               (2)

where I is given by:

    I = 0.07 + 0.06 cos^2 θ2 + 0.05 sin^2 θ2           (3)

Figure 1: Two-Degree-of-Freedom (2-DOF) Robot Arm

These nonlinear dynamic equations can be represented in SystemBuild, the interactive block diagram modeling facility of MATRIXx, using combinations of algebraic and dynamic blocks. Block diagrams constructed in SystemBuild are hierarchical. Each node in the hierarchy is represented by a SuperBlock, which can contain up to 99 other blocks, including other SuperBlocks.

Figure 2 illustrates how the dynamics of the two-linkage robot arm can be modeled in SystemBuild. The ROBOT super-block shown in Figure 2 contains two algebraic general expression blocks, four Nth order integrator blocks, two trigonometric function blocks, and one gain block. Figure 3 gives the necessary details required to define each block.

General expression blocks are defined by passing text strings in the block form. The following text string was used in defining the block with an ID of (32) and having inertia as an output (see Figures 2 and 3):

Y = 0.07 + 0.06 * U2 * U2 + 0.05 * U1 * U1

This block calculates inertia as defined in Equation (3). The strings used to calculate θ1'' and θ2'' in the algebraic expression block with an ID of (12) are:

Y1 = (U1 - U2 + 0.01 * U3 * U4 * U5)/U6;
Y2 = U2/0.01 - 1/2 * U3 * U3 * U5;

Note: Y1 is the calculation of θ1'' and Y2 is the calculation of θ2'' as defined in Equations (1) and (2).

Once all of the blocks have been defined as illustrated in Figure 2, the system can be analyzed through the ANALYZE option of SystemBuild. When this option is selected under the BUILD menu, SystemBuild creates an internal simulation model by assembling all of the SuperBlocks in the hierarchy. A reference map is then created which displays the structure of the super-block hierarchy:

Super-Block Reference Map:

ROBOT

All super-blocks identified
System Built with 0 error(s) and 0 warning(s).
Use SIM ('IALG') to set the integration algorithm

The ANALYZE option in SystemBuild returns the user to the MATRIXx command level where he can simulate the system, linearize it, or issue any MATRIXx command.

Reprinted, with permission, from Application Note brochure.
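Outside SystemBuild, Equations (1)-(3) are easy to integrate directly for a quick sanity check. The forward-Euler sketch below is an illustration only; the step size and torque values are arbitrary choices, not part of the application note:

```python
import math

def robot_deriv(state, tau1, tau2):
    """State derivatives for the 2-DOF arm, Equations (1)-(3).
    state = (th1, w1, th2, w2), where w denotes an angular rate."""
    th1, w1, th2, w2 = state
    inertia = 0.07 + 0.06 * math.cos(th2)**2 + 0.05 * math.sin(th2)**2  # (3)
    a1 = (tau1 - tau2 + 0.01 * w1 * w2 * math.sin(2 * th2)) / inertia   # (1)
    a2 = tau2 / 0.01 - 0.5 * w1**2 * math.sin(2 * th2)                  # (2)
    return (w1, a1, w2, a2)

def euler_step(state, tau1, tau2, dt):
    """One forward-Euler step (dt is an arbitrary choice here)."""
    d = robot_deriv(state, tau1, tau2)
    return tuple(s + dt * ds for s, ds in zip(state, d))

# With zero torque and zero initial rates, the arm should stay put.
state = (0.0, 0.0, 0.0, 0.0)
for _ in range(100):
    state = euler_step(state, 0.0, 0.0, 0.001)
```

At the zero equilibrium, I = 0.13, so a unit τ1 produces θ1'' = 1/0.13, about 7.69 — the same 7.6923 entry that appears in the linearized input matrix later in the note.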

Figure 2: Robot Arm Dynamics [block diagram omitted; blocks are labeled with the Y1 expression above, THETA 1 DOUBLE DOT, and COS]

Linearization and Controller Design

In MATRIXx the continuous state space model is described by a system matrix, S, and the number of states, NS. The S matrix is defined as the concatenation of the four matrices (A, B, C, and D) used in describing a linear system as given by the following relation between the system output, y, and the inputs, u:

    x' = Ax + Bu
    y  = Cx + Du      where x(0) = x0

and

    S = [ A  B ]
        [ C  D ]

Once back at the MATRIXx command level (the <> prompt), the system built in SystemBuild can be linearized with the LIN command:

<> [SL,NSL] = LIN(.1)

where the argument of the LIN command is the size of the perturbation to be applied to all system states and inputs when the partials are computed numerically. MATRIXx returns the
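Numerically, what LIN(.1) does can be sketched as forming difference quotients of the state-derivative function with the given perturbation. The generic helper below is an illustration, not ISI's implementation (it uses central differences; LIN's exact scheme is not documented here):

```python
def linearize(f, x0, u0, delta):
    """Central-difference Jacobians A = df/dx and B = df/du at (x0, u0),
    where f(x, u) returns the state-derivative vector as a list."""
    n, m = len(x0), len(u0)

    def column(i, which):
        xp, up = list(x0), list(u0)
        xm, um = list(x0), list(u0)
        if which == 'x':
            xp[i] += delta; xm[i] -= delta
        else:
            up[i] += delta; um[i] -= delta
        fp, fm = f(xp, up), f(xm, um)
        return [(hi - lo) / (2.0 * delta) for hi, lo in zip(fp, fm)]

    acols = [column(j, 'x') for j in range(n)]
    bcols = [column(j, 'u') for j in range(m)]
    A = [[acols[j][i] for j in range(n)] for i in range(n)]
    B = [[bcols[j][i] for j in range(m)] for i in range(n)]
    return A, B

# Check on a known linear system: xdot = [x2, -2*x1 + 3*u]
f = lambda x, u: [x[1], -2.0 * x[0] + 3.0 * u[0]]
A, B = linearize(f, [0.0, 0.0], [0.0], 0.1)
```

For a linear test system the difference quotients are exact, so A and B come back as [[0, 1], [-2, 0]] and [[0], [3]] regardless of the perturbation size.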


system state space Jrultrix, SL, and the number of states, NSL,
which represent the linearized system:

NSL

  4.

SL

  0.0000   0.0000   0.0000   0.0000   7.6923  -7.6923
  1.0000   0.0000   0.0000   0.0000   0.0000   0.0000
  0.0000   0.0000   0.0000   0.0000   0.0000 100.0000
  0.0000   0.0000   1.0000   0.0000   0.0000   0.0000
  1.0000   0.0000   0.0000   0.0000   0.0000   0.0000
  0.0000   1.0000   0.0000   0.0000   0.0000   0.0000
  0.0000   0.0000   1.0000   0.0000   0.0000   0.0000
  0.0000   0.0000   0.0000   1.0000   0.0000   0.0000

The system has four states (the θ̇k and θk), two inputs (the
motor torques), and four outputs (which are the states).

Once the ROBOT SuperBlock has been linearized, one can design a linear regulator controller. The REGULATOR command computes the optimal constant-gain, state-feedback matrices for continuous-time systems under the assumption of full
state feedback.

Inputs into the REGULATOR command include the A and B
(plant and input) components of the system matrix, S, and the
design weighting matrices Rxx, Ruu, and Rxu, where Rxu is optional. The design weighting matrices provide weights on the
states, x, and controls, u, as defined by the following quadratic
cost function:

    COST = ∫ (x'·Rxx·x + u'·Ruu·u + x'·Rxu·u + u'·Rxu'·x) dt

Note: Ruu must be positive definite and Rxx must be positive
semi-definite.

The A and B parts of the system matrix, SL, can be extracted
with the SPLIT command:

<> [A,B] = SPLIT(SL,NSL)

A

  0.  0.  0.  0.
  1.  0.  0.  0.
  0.  0.  0.  0.
  0.  0.  1.  0.

B

    7.6923   -7.6923
    0.0000    0.0000
    0.0000  100.0000
    0.0000    0.0000

Diagonal state (RXX) and control (RUU) weighting matrices
are defined for the purpose of designing an optimal regulator.

<> RXX = DIAG([10 100 1 100])

RXX

   10.    0.    0.    0.
    0.  100.    0.    0.
    0.    0.    1.    0.
    0.    0.    0.  100.

<> RUU = DIAG([20 100])

RUU

   20.    0.
    0.  100.

[Figure 3: Block Form Details (ROBOT SuperBlock). Block forms for the algebraic general expression blocks (ALGEBRAIC EQUATIONS (ALG), TYPE: GENERAL EXPRESSION, e.g., THETA1 DOUBLE DOT), a gain block (TYPE: GAIN BLOCK, INPUTS: 1, OUTPUTS: 1, GAIN: 2), and a trigonometric function block (TRIG FUNCTIONS (TRIG), TYPE: COSINE, Y = COS(U)).]


The optimal regulator is designed with the REGULATOR
command:

<> [EV,KR] = REGULATOR(A,B,RXX,RUU)

MATRIXx returns the closed-loop eigenvalues (of the linearized system) and the optimal regulator state feedback gains, KR.

KR

   1.0365   2.2261   0.0683   0.2112
  -0.0298  -0.0944   0.1727   0.9955

EV

  -8.7233 + 4.8199i
  -8.7233 - 4.8199i
  -4.0127 + 1.1024i
  -4.0127 - 1.1024i

Closed-Loop Simulation

The closed-loop system can now be completed in SystemBuild
as the SuperBlock SYSTEM, which is illustrated in Figure 4.
This SuperBlock includes the SuperBlock ROBOT (the open-loop plant), the gain block, MINUS KR, and two summing
junctions. Rectangular gain matrices are defined in
SystemBuild as state space systems with zero states. Thus the
gain block MINUS KR is defined as a state space system with
zero states, four inputs, and two outputs, and with the gain
matrix -KR passed from the MATRIXx stack (note: input as
[-KR]). The SuperBlock SYSTEM has six external inputs: the
first four are the reference states, and the last two are reference (disturbance) torques. SYSTEM also has four external
outputs, which are the actual states. The summing junction in
the top left of Figure 4 computes the difference between the
reference states and the actual states. This error vector goes to
the gain block, which computes the control torques. These
control torques are differenced from the reference torques in
the summing junction just below the MINUS KR
block in Figure 4. The outputs of this summing junction are the
actual torques, which are inputs to the differential equations in
the ROBOT SuperBlock. Figure 5 gives the details necessary to
fill out each of the block forms in the SYSTEM SuperBlock.

The closed-loop system can be analyzed through the
ANALYZE option of SystemBuild. Selecting SYSTEM for
analysis results in:

Super-Block Reference Map:

SYSTEM
ROBOT

All super-blocks identified
System Built with 0 error(s) and 0 warning(s).
Use SIM ('IALG') to set the integration algorithm

[Figure 4: Closed-Loop System. SystemBuild block diagram; visible labels include CONTROL TORQUE 1, CONTROL TORQUE 2, ROBOT, and the THETA outputs.]

ALGEBRAIC BLOCK (ALG)
TYPE: SUM OF VECTORS
INPUTS: 8
OUTPUTS: 4
#INPUT VECTORS: 2

DYNAMIC SYSTEMS (DYN)
TYPE: STATE-SPACE SYSTEM
INPUTS: 4
OUTPUTS: 2
STATES: 0
STATE-SPACE MATRIX: -KR

ALGEBRAIC BLOCK (ALG)
TYPE: SUM OF VECTORS
INPUTS: 4
OUTPUTS: 2
#INPUT VECTORS: 2

SUPER BLOCK (SUP)
NAME: ROBOT
INPUTS: 2
OUTPUTS: 4

Figure 5: Block Form Details (SYSTEM SuperBlock)

After receiving the above message, you will be at the MATRIXx
command level. The time vector used for simulation is defined
as starting at 0 and going to 10 seconds in steps of 0.1 seconds:

<> T = [0:0.1:10]';

The reference states call for step rotations of both joints at
constant angular velocities (0.05 and 0.0375 radians/second)
from 0 to 2 seconds, after which the final angles (0.1 and 0.075
radians) are to be held. The reference states are then defined as:

<> THE1DOT = [0.05*ONES(21,1); 0*ONES(80,1)];
<> THE1 = [0.05*T(1:21); 0.10*ONES(80,1)];
<> THE2DOT = [0.0375*ONES(21,1); 0*ONES(80,1)];
<> THE2 = [0.0375*T(1:21); 0.075*ONES(80,1)];
<> USTATE = [THE1DOT THE1 THE2DOT THE2];

The reference (disturbance) torques are defined as:

<> TAU1 = TAU2 = 0*ONES(T);
<> UTORQ = [TAU1 TAU2];

The reference states and torques can be combined to define the
system input matrix:

<> USYS = [USTATE UTORQ];

The reference states can be plotted by typing the following
command (see Figure 6):

<> PLOT(T, USTATE, 'STRIP REPORT XLAB/TIME (sec)/ ...
   YLAB/THETA1 DOT|THETA1|THETA2 DOT|THETA2/ ...
   TITLE/REFERENCE INPUT VS TIME/')

The closed-loop response is then simulated with the SIM command as follows:

<> Y = SIM(T, USYS)

Figure 7 illustrates the system response to the system inputs.
This plot can be generated by typing the following:

<> PLOT(T, Y, 'STRIP REPORT XLAB/TIME (sec)/ ...
   YLAB/THETA1 DOT|THETA1|THETA2 DOT|THETA2/ ...
   TITL/SYSTEM RESPONSE VS TIME/')
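What SIM does for the linearized model can be mimicked with a few lines of C. This is only an illustrative sketch using forward Euler (not the MATRIXx integration algorithm); the A, B, and KR values are transcribed from the listings in this article, and euler_step is our own name:

```c
/* Forward-Euler simulation of the linearized closed-loop robot arm.
   State vector x = [th1dot, th1, th2dot, th2];
   control law u = -KR*(x - xref) + tauref. */
static const double A[4][4] = {
    {0, 0, 0, 0},
    {1, 0, 0, 0},
    {0, 0, 0, 0},
    {0, 0, 1, 0},
};
static const double B[4][2] = {
    {7.6923, -7.6923},
    {0.0,      0.0},
    {0.0,    100.0},
    {0.0,      0.0},
};
static const double KR[2][4] = {
    { 1.0365,  2.2261, 0.0683, 0.2112},
    {-0.0298, -0.0944, 0.1727, 0.9955},
};

void euler_step(double x[4], const double xref[4],
                const double tauref[2], double h)
{
    double u[2], xdot[4] = {0, 0, 0, 0};
    for (int i = 0; i < 2; i++) {              /* state-feedback control law */
        u[i] = tauref[i];
        for (int j = 0; j < 4; j++)
            u[i] -= KR[i][j] * (x[j] - xref[j]);
    }
    for (int i = 0; i < 4; i++) {              /* xdot = A x + B u */
        for (int j = 0; j < 4; j++) xdot[i] += A[i][j] * x[j];
        for (int j = 0; j < 2; j++) xdot[i] += B[i][j] * u[j];
    }
    for (int i = 0; i < 4; i++) x[i] += h * xdot[i];
}
```

In this linearized sketch the states settle to the reference values with no steady-state error, consistent with the stable closed-loop eigenvalues returned by REGULATOR.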


We can compare the commanded and the actual trajectory by
plotting both the input and the response on the same plot (see
Figure 8):

<> PLOT(T, [USYS Y], 'STRIP2 REPORT XLAB/TIME (sec)/ ...
   YLAB/THETA1 DOT|THETA1|THETA2 DOT|THETA2/ ...
   TITL/REFERENCE INPUT & SYSTEM RESPONSE VS TIME/')

The response of the system over a larger (more nonlinear) state
trajectory can be computed with:

<> Y2 = SIM(T, 2*USYS);

The results are shown in Figures 9 through 11. Note that the
responses are similar to those obtained with the smaller
trajectory.

Alternate Methods

We have described an approach to modeling and analyzing a
two-linkage robot arm using the ISI Product Family. Many
different modeling approaches could also be taken. Using
algebraic loops, the joint angular accelerations could be written
in terms of themselves, i.e.:

    θ̈1 = f1(θ̈1, θ̈2, θ̇1, θ̇2, θ1, θ2, τ1, τ2)
    θ̈2 = f2(θ̈1, θ̈2, θ̇1, θ̇2, θ1, θ2, τ1, τ2)

This approach could be useful if the equations are hard to
separate. FORTRAN blocks could also be used to define the
dynamic equations. This would allow one to include existing
FORTRAN simulation code in SystemBuild, where the dynamics could be analyzed and controllers designed. One could
use a symbolic manipulation program to generate the dynamic
equations and the FORTRAN code to simulate them.

The controller design presented is a very simple continuous-time linear one. In practice, robot controllers tend to be
nonlinear and multi-rate digital. Designing nonlinear multi-rate controllers is very easy with SystemBuild, as there is a
wide variety of nonlinear blocks available. Sampling rates are
defined at the SuperBlock level. Different sampling rates can
be used for different SuperBlocks, without restricting the rates
to being multiples of each other. Adaptive controllers could
also be designed.

[Figure 6: Reference Input vs. Time. Strip plot of THETA1 DOT, THETA1, THETA2 DOT, and THETA2.]

[Figure 7: System Response vs. Time. Strip plot of the four simulated states.]

[Figure 8: Reference Input & System Response vs. Time.]

[Figure 9: Two x Reference Input vs. Time.]

[Figure 10: System Response to Two x Reference Input.]

[Figure 11: Two x Reference Input & System Response vs. Time.]

Integrated Systems Inc.

Corporate Headquarters:
Integrated Systems Inc.
2500 Mission College Blvd.
Santa Clara, CA 95054-1233
Tel: (408) 980-1500

European Office:
Integrated Systems Inc. Limited
274 Cambridge Science Park
Milton Road, Cambridge CB4 4WE
England
Tel: 0223 420999

MATRIXx is a registered trademark and SystemBuild is a trademark of Integrated Systems Inc.

Simnon - A Simulation Language for
Nonlinear Systems

Tomas Schonthal
Department of Automatic Control
Lund Institute of Technology
S-221 00 Lund, Sweden

Abstract. This paper presents Simnon, an interactive simulation environment for nonlinear systems, developed by the Department of Automatic Control, Lund Institute of Technology, Lund, Sweden. The following topics are covered: system descriptions, interactive facilities, examples, application areas, and technical features.

1. Introduction

Simnon is a modular high-level language for
describing dynamical systems with continuous
and/or discrete time. Equally important, it is an
interactive command language, a "software laboratory", designed to organize and carry out simulation runs, vary circumstances (i.e., parameters,
initial values, or the models themselves), and display results graphically or numerically. A macro
facility permits developers to pack models and
command sequences into "turn-key" applications.

The first version of Simnon appeared as the result of a master's thesis in 1972. At that time digital simulation meant expensive batch runs on
mainframes, or writing your own dedicated Fortran programs, since there hardly existed any
interactive systems with reasonably flexible input formats for the type of computers that a
small research group could afford. Simnon soon
became a standard tool at Automatic Control,
Lund. In the years that followed, Simnon went through
several stages of development. Today Simnon is
used worldwide by many universities for research
and education in several disciplines and is equally
popular in industry. Thanks to the MS-DOS version, Simnon is rapidly finding new users in both
large and small organizations.

2. System Descriptions

The key concept is the system, which corresponds to a mathematical model of the reality being studied. In Simnon a system is a
sequence of statements in a special modeling
language. There are continuous systems (differential equations) and discrete systems (difference equations). A third type of system, the
connecting system, is used to form compound
systems from continuous and discrete systems.


A system has inputs u, states x, and outputs y.

Continuous system:

    ẋ = f(x, u, t)
    y = g(x, u, t)

Discrete system:

    x(t_k+1) = f(x(t_k), u(t_k), t_k)
    y(t_k)   = g(x(t_k), u(t_k), t_k)
    t_k+1    = h(x(t_k), u(t_k), t_k)

As we shall see later, describing a process (continuous system) controlled by a digital regulator
(discrete system) is very natural in Simnon, but
Simnon as such has no "built-in control theory".
The approach is "open architecture", deferring all
that is specific to a particular discipline to the
user-written models.

The statements of the system description language are: declarations (type of system, type of
variable), assignments of variables, initial values, and parameters. Variable assignment: variable = [IF condition THEN expression ELSE]
expression. Expressions are formed by the common arithmetic operators and elementary functions. Random numbers, time delays, interpolation, and the ability to drive a simulation by an
external data file are also provided. Please refer to
'Technical Features, Compiler' for more details.

3. Interactive Facilities

Once a system has been written according to the
rules of the system description language, the user
can, with the aid of the command language, begin
to experiment with it. First of all, the system has
to be translated by Simnon's compiler. Then variables are selected for plotting, and a simulation
over a selected time interval is started.

Simulation, in general, is very much a trial-and-error process. If the results differ from those expected, it is easy, with Simnon, to change a parameter, an initial value, or even an equation and
repeat the simulation. In the meantime Simnon
can accumulate raw material for a report. All this
can be accomplished conveniently with only a few
of Simnon's 43 commands. In addition to this, operating system commands may be executed from
within Simnon.

The interaction mode is command driven, i.e.,
commands can be entered in arbitrary order, as
when you communicate with conventional operating systems such as MS-DOS, Unix or VMS. In
'Technical Features, Macros' it is indicated how
the user may influence this situation.

4. Examples

4.1 Chaos

In 1963 Lorenz derived a set of ordinary differential equations to approximate the behavior of
atmospheric air currents:

    ẋ = a(y - x)
    ẏ = bx - y - xz
    ż = xy - cz

These equations can be represented by the following Simnon system:

continuous system Lorenzeq
state x y z                      "States
der dx dy dz                     "Derivatives
dx = a*(y-x)                     "Computations
dy = b*x - y - x*z
dz = x*y - c*z
a: 10                            "Parameters
b: 28
c: 2.667
x: -8                            "Initial values
y: -8
z: 24
end

To solve the equations we type:

syst Lorenzeq                    "Translate the system
store x y z                      "Store the solution
error 1e-6                       "Demand higher accuracy
simu 0 20                        "Simulate
ashow z(y)                       "Plot z vs y
text 'Simulare Necesse Est!'     "Add a title
hcopy                            "Print the diagram

which produces a phase-plane plot of z versus y.

[Figure: z plotted against y, titled 'Simulare Necesse Est!']
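For comparison, the same Lorenzeq dynamics can be reproduced outside Simnon. The C sketch below uses a fixed-step fourth-order Runge-Kutta integrator; Simnon's own integration algorithm and error control differ, so this is only an illustration of what the model describes:

```c
#include <math.h>

/* Lorenz equations from the Simnon model Lorenzeq. */
typedef struct { double x, y, z; } State;

static const double a = 10.0, b = 28.0, c = 2.667;

static State deriv(State s)
{
    State d;
    d.x = a * (s.y - s.x);
    d.y = b * s.x - s.y - s.x * s.z;
    d.z = s.x * s.y - c * s.z;
    return d;
}

/* One classical RK4 step of size h. */
State rk4_step(State s, double h)
{
    State k1 = deriv(s), k2, k3, k4, t;
    t.x = s.x + 0.5*h*k1.x; t.y = s.y + 0.5*h*k1.y; t.z = s.z + 0.5*h*k1.z;
    k2 = deriv(t);
    t.x = s.x + 0.5*h*k2.x; t.y = s.y + 0.5*h*k2.y; t.z = s.z + 0.5*h*k2.z;
    k3 = deriv(t);
    t.x = s.x + h*k3.x; t.y = s.y + h*k3.y; t.z = s.z + h*k3.z;
    k4 = deriv(t);
    s.x += h/6.0 * (k1.x + 2*k2.x + 2*k3.x + k4.x);
    s.y += h/6.0 * (k1.y + 2*k2.y + 2*k3.y + k4.y);
    s.z += h/6.0 * (k1.z + 2*k2.z + 2*k3.z + k4.z);
    return s;
}
```

Starting from the initial values (-8, -8, 24), the trajectory stays on the bounded Lorenz attractor while never settling, which is the chaotic behavior the plot above shows.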

4.2 Control

A simple example of a nonlinear control is one
that respects the saturation limits of its regulator. This model is represented in Simnon as a continuous process called
proc, a discrete regulator called pireg, and a connecting system called regsys.
The discrete PI regulator has logic to limit saturation, or windup, on its integrator.

To simulate the model without anti-windup (default), type:

syst proc pireg regsys
store yr y[proc]
simu 0 40
split 2 1
ashow y yr
text 'Output and set-point'
ashow uclip
text 'Control signal'

[Figure: 'Output and set-point' and 'Control signal' plots, without anti-windup]

Now we can activate the anti-windup by setting
the low and high values of the control. We then
specify overplotting and repeat the simulation:

par ulow: -.1
par uhigh: .1
plot y[proc]:1 1 uclip:2 1
simu

[Figure: 'Output and set-point' and 'Control signal' plots, with anti-windup overplotted]

This gives far better performance. If we instead
wish to try an adaptive regulator in this environment, we could replace the module pireg with
a "plug compatible" (i.e., having the same inputs
and outputs) module adaptreg, and repeat the
above commands, except, of course, that the parameter tunings would look different.

5. Application Areas

Simnon is used for education and research in
such diverse disciplines as automatic control, biology, chemical engineering, economics, electrical
engineering, mathematics, mechanical engineering, etc. Typical problems include engine control,
food processing, power systems, process control,
robotics, and ship steering.

6. Technical Features

6.1 Compiler

Before a model can be simulated, it has to
be translated by Simnon's integrated compiler.
The compiler not only checks for syntax errors
but also ensures that all quantities appearing to
the right of an assignment have defined values.
Thanks to the equation sorter, equations may be
entered in arbitrary order, and algebraic loops
will be detected. One kind of optimization is
made: time-invariant expressions are only evaluated once.

Numerical errors (e.g., zero divide) during simulation will be pinpointed in their source context.
Since the models are compiled into machine code,
the simulations run as fast as Fortran programs. In contrast to conventional programming
techniques, the turnaround times are negligible, allowing users to modify their models "on the
fly".

The MS-DOS version has dynamic memory allocation, which permits very large models.
6.2 Data Formats

Simnon is file oriented: System descriptions and
macros are normal text files that can be prepared
by any text editor. Time series are stored as binary files. These can be exported to printable
ASCII (a time series then forms a column) and
re-imported.
There exists a one-way path from PC-Matlab
(The MathWorks, Inc, Sherborn, Mass.) to Simnon at the system description level: Included with
Simnon is a preprocessor written as a Matlab
function that takes as arguments a matrix set
comprising a linear, time-invariant system and
produces a complete Simnon system description.
The command hcopy dumps the graphics part of
the screen to a plotting device or to a file for
further processing.
6.3 Documentation

Simnon comes with an English 180-page computer-set tutorial and reference manual with many examples. The on-line help utility has over 100 entries.


6.4 Macros

Simnon usually takes commands from the keyboard, but a sequence of commands can be defined as a macro (for historical reasons the term
'macro' is used; perhaps a more adequate term
is 'command procedure'). A macro can then be
invoked by typing its name and any associated
arguments. In this way the user may add extra
commands to the Simnon vocabulary. There is
provision for jumps and input/output just like
in a programming language. Macros can be used
to change Simnon's interaction mode from command driven to question and answer sequences,
which may be utilised for demonstrations. Macros
enable one person to develop and test a simulation model and someone else to use it. Macros
have the feel of genuine Simnon commands, or
they could act as "programs within the program".
Typically, such a macro could present the user
with a list of alternatives, then prompt the user
for a choice (input wave form, PID-control or
adaptive, etc.), or a numerical value.
6.5 System Requirements, MS-DOS version

• IBM PC, XT, AT, PS/2, 80386-based or compatible personal computer
• 8087, 80287 or 80387 maths coprocessor
• MS-DOS/PC-DOS version 2.00 (or later) or OS/2 with Compatibility Box
• 256 kB of RAM or more
• 3.5 or 5.25 inch diskette drive
• Hard disk (strongly recommended)
• One of these graphics systems (highest resolution used): CGA, EGA (enhanced or mono display), Ericsson PC, Hercules, Olivetti M24/AT&T, Toshiba PC or VGA/MCGA
• Recommended hard-copy devices:
  - Epson MX-80, IBM 5152 or compatible
  - HP LaserJet family
  - PostScript printers (e.g., Apple LaserWriter)
6.6 Prices, MS-DOS version

(July 1988, version 2.11) North American customers pay in US $. All other customers pay in
Swedish currency (SEK). Swedish customers will
be charged value added tax.

One copy of Simnon costs US $695 (SEK 5000).
Quantity discounts:

3-4 copies    10 %
5-9 copies    15 %
10+ copies    20 %

For universities and schools the following prices
apply:

1 copy        US $345 (SEK 2500)
5 copies      US $1250 (SEK 9000)
10 copies     US $1750 (SEK 12500)
20 copies     US $2500 (SEK 18000)

Universities and schools may buy the Classroom
Kit for US $500 (SEK 3500), provided that they
(have) order(ed) at least one copy of regular Simnon. This reduced problem-size version of Simnon, which is intended for education only, comes
with a license for 10 PCs.

7. References

For all of these works Simnon has been used extensively:

Åström, K. J., Bell, R. D. (1987): Dynamic Models for Boiler-Turbine-Alternator Units: Data
Logs and Parameter Estimation for a 160 MW
Unit, CODEN: LUTFD2/TFRT-3192, Department of Automatic Control, Lund Institute of
Technology, Lund, Sweden.

Olsson, G., Holmberg, U. and Wikstrom, A.
(1985): A Model Library for Dynamic Simulation of Activated Sludge Systems. Reprinted
from Instrumentation and Control of Water
and Wastewater Treatment and Transport Systems, Pergamon Press, Oxford and New York.

Åström, K. J. and Wittenmark, B. (1984): Computer Controlled Systems - Theory and Design,
Prentice Hall Inc, Englewood Cliffs, NJ.

Åström, K. J. and Wittenmark, B. (1988): Adaptive Control, Addison-Wesley, Reading, Mass.

Elmqvist, H., Åström, K. J. and Schonthal, T.
(1986): Simnon - User's Guide for MS-DOS
Computers, Department of Automatic Control, Lund Institute of Technology, Lund, Sweden.

Mattsson, S. E. (1984): Modelling and Control
of Large Horizontal Axis Wind Power Plants,
Ph.D. dissertation, CODEN: LUTFD2/TFRT-1026, Department
of Automatic Control, Lund Institute of Technology, Lund, Sweden.

Simnon is a product of SSPA Systems,
PO Box 24001, S-400 22 Goteborg, Sweden.
Fax: +46 31 63 96 24, Phone: +46 31 63 95 00.
In North America, Simnon is provided exclusively by
Engineering Software Concepts, Inc.,
PO Box 66, Palo Alto, California 94301.
Fax: 415 325 0531, Phone: 800 325 1789
(in California 415 325 4321).
Simnon is a USA registered trademark of the
Department of Automatic Control, Lund, Sweden,
who invented Simnon, created a larger user community for it,
and developed it into a commercial product, but no longer
supports it.


PART III

Implementation of Digital Controllers

Implementing Digital Controllers ................................................... 111
Hardware/Software-Environment for DSP-Based Multivariable Control. . . . . . . . . . . . . . . .. 141
(H. Hanselmann, H. Henrichfreise, H. Hostmann, and A. Schwarte)
Implementation of Digital Controllers - A Survey .................................... 145
(H. Hanselmann)
The Programming Language DSPL ................................................ 171
(Albert Schwarte and Herbert Hanselmann)
Application of Kalman Filtering in Motion Control Using TMS320C25 .................. 185
(Dr. S. Meshkat)
Implementation of a PID Controller on a DSP ....................................... 205
(Karl Astrom and Hermann Steingrimsson)
DSP Implementation of a Disk Drive Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. 239
(Hermann Steingrimsson and Karl Astrom)


Implementing Digital Controllers

A lot of work has been done recently in the area of modern control theory, and many quite elegant theories
have resulted. However, implementation has lagged substantially behind theory and idealized mathematical design. The outcome is that modern control theory is still limited somewhat to research labs, and most
of the servo control applications in industry utilize classical control techniques. This introduction discusses some of the issues in implementing digital controllers. It should be emphasized that there are no easy
solutions; digital controllers still lag in the body of knowledge that is available for implementation. The
introduction and the articles in this part may not provide canned solutions; however, they do highlight many
pitfalls and problems of implementation and provide suggestions to minimize them.

The major issues in implementing digital controllers are the effects of finite word length, optimal controller
structures, computational delays, and software development for microprocessors/DSPs. The most important issue in implementation is the effect of fixed-point arithmetic and finite word length. Some problems
can be minimized by using floating-point processors; however, this may not always be possible. Before
going into the effects of finite word length, the section Fixed-Point Versus Floating-Point will review fixed-point and floating-point arithmetic formats.

Fixed-Point Versus Floating-Point

Floating-point processors have a very large dynamic range. In floating-point, a number is represented with
a mantissa and an exponent. The mantissa represents the fraction, and the exponent represents the number
of digits to the left of the decimal point. For example, assuming that four-digit storage is available, 3740
can be written as 0.374 × 10^4. In floating-point, this can be represented as 4.374, where the exponent is 4 and
the mantissa is 374.

The largest floating-point number represented by four digits is 9.999, or 0.999 × 10^9 = 999,000,000. The
largest fixed-point number represented by four digits is 9999.

Floating-point numbers thus allow a much larger dynamic range than fixed-point numbers. However, floating-point does not necessarily eliminate all finite word-length effects. Storage length is still limited, but
with a larger dynamic range. There is also some loss of resolution. The number of significant digits in the
mantissa determines the accuracy of the numerical value. However, the mantissa does not use all the storage
capacity, as some of the storage is taken up by the exponent. In practice, to minimize this loss of resolution,
floating-point formats use 24 bits or more to represent the mantissa. The TMS320 floating-point generations, TMS320C3x and TMS320C4x, have 32-bit architectures. Three floating-point formats are available:
short format with a 12-bit mantissa and a 4-bit exponent, standard-precision format with a 24-bit mantissa
and an 8-bit exponent, and extended-precision format with a 32-bit mantissa and an 8-bit exponent.
Floating-point processors are generally more expensive than fixed-point processors, and the cost may not
be justified in some applications. Floating-point may be needed in applications where either gain coefficients are time-varying or signals and gain coefficients have a large dynamic range. Other cases where
floating-point can be justified are where development cost is more significant than component cost and very
low quantities are required. Floating-point processors usually allow code to be developed in high-level
languages and reduce the need to fully identify the system's dynamic range.

Fixed-point processors generally are less expensive because less hardware is required on chip. In addition,
they have a smaller word length (typically 16 bits), and system cost is lower. However, more effort is required
to develop appropriate scaling factors to eliminate the effects of truncation or overflow during the intermediate and final states. Even in applications requiring floating-point for dynamic range,
it may be possible to use fixed-point processors. If gain coefficients have a large dynamic range but
are constant, their dynamic range can usually be reduced by structure-optimization techniques. If gain coefficients are time-varying and require adaptive control, a hybrid scheme can be used. Calculations for system
identification typically have a slower update rate and can be performed in a pseudo-floating-point format.
The controller calculations, on the other hand, run at a much faster rate and can be implemented in fixed-point arithmetic. Fixed-point processors can thus be used in most applications. The next section, Binary
Arithmetic, deals with fixed-point numbers only.

Binary Arithmetic

In binary format, a number can be represented in signed magnitude, where the left-most bit represents the
sign and the remaining bits represent the magnitude:

+52 (decimal) = 34 (hex) is represented as 0011 0100 (binary)
-52 (decimal) = -34 (hex) is represented as 1011 0100 (binary)

Twos complement is an alternate form of representation used in most processors, including the TMS320.
The representation of a positive number is the same in twos complement and in signed magnitude:

+52 (decimal) = 34 (hex) is represented as 0011 0100 (binary)

However, the representation of a negative number is different; as its name implies, the magnitude of a negative number is given in twos complement.
-52 (decimal) = -34 (hex) is represented by taking the twos complement of +52; i.e.,

Convert +52 to binary                      0011 0100
Invert all bits to get ones complement     1100 1011
Add one to get twos complement           +         1
Twos complement is                         1100 1100

Therefore, -52 (decimal) = -34 (hex) is represented as 1100 1100.

Adding 52 and (-52) gives

    0011 0100
  + 1100 1100
    0000 0000

as expected. The main advantage of twos complement is that only one adder is required to handle both positive and negative numbers. An addition will always give the correct result for both addition and subtraction.
Also, if the final result is known to be within the processor's number range, an intermediate overflow can
be ignored, as the correct final result will still be produced. The largest positive number that can be represented with 8 bits is 7F (hex) or 127 (decimal), and the largest negative number represented with 8 bits is
80 (hex) or -128 (decimal).
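The invert-and-add-one recipe above can be checked mechanically with a small C sketch (an illustration only; a processor's adder does this in hardware):

```c
#include <stdint.h>

/* Twos complement of an 8-bit value: invert all bits, then add one.
   The result is the bit pattern representing the negated number. */
uint8_t twos_complement(uint8_t v)
{
    return (uint8_t)(~v + 1u);
}
```

For example, twos_complement(0x34) yields 0xCC, and adding the two patterns with the carry out of bit 7 discarded gives 0x00, matching the worked sum above.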
The fixed-point binary representation does not have a binary point and does not represent fractions. However, it is sometimes advantageous to use an implied binary point to represent fractions. In signal processing, it is common to represent numbers as fractions. For example, if 0.99 is the highest number that can
be represented, the result of multiplying any two numbers will always be less than one, so an overflow will
never occur.

The location of the implied binary point affects neither the arithmetic unit nor the multiplier. It affects only
the accuracy of the result and the location from which that value will be read. For fractional arithmetic, the result is read from the upper 16 bits. For integer arithmetic, the result is read from the lower 16 bits (assuming
no overflow). Fractional arithmetic loses accuracy but protects from overflows, while integer arithmetic
provides an exact result but offers no protection from overflow. In fractional arithmetic, an addition or a
subtraction could produce an overflow, but a multiplication never causes one; generally, a single carry bit
is sufficient to handle the overflow.
For TMS320 processors, numbers are typically represented in the Q15 format, where the number following
the letter Q represents the quantity of fractional bits. This implies that, in Q15, each number is represented
by 1 sign bit, 15 fractional bits, and no integer bits. Likewise, a number in the Q13 format has 1 sign bit,
13 fractional bits, and 2 integer bits. The following shows both Q formats of eight decimal fractions and
one integer:

decimal     Q15                     Q13

+0.5        0.100 0000 0000 0000    000.1 0000 0000 0000
+0.25       0.010 0000 0000 0000    000.0 1000 0000 0000
+0.125      0.001 0000 0000 0000    000.0 0100 0000 0000
+0.875      0.111 0000 0000 0000    000.1 1100 0000 0000
-0.5        1.100 0000 0000 0000    111.1 0000 0000 0000
-0.25       1.110 0000 0000 0000    111.1 1000 0000 0000
-0.125      1.111 0000 0000 0000    111.1 1100 0000 0000
-0.875      1.001 0000 0000 0000    111.0 0100 0000 0000
-1.000      1.000 0000 0000 0000    111.0 0000 0000 0000
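The table above maps directly onto 16-bit integers: a Qn value is simply the fraction scaled by 2^n. A C sketch of the conversions (the helper names are ours, not TI's):

```c
#include <stdint.h>

/* Q15: 1 sign bit, 15 fractional bits; one LSB = 2^-15.
   Valid input range is [-1.0, +1.0). */
int16_t float_to_q15(double x) { return (int16_t)(x * 32768.0); }
double  q15_to_float(int16_t q) { return q / 32768.0; }

/* General Qn on a 16-bit word: n fractional bits, 15-n integer bits. */
int16_t float_to_q(double x, int n) { return (int16_t)(x * (double)(1 << n)); }
```

For instance, +0.875 becomes 7168 in Q13 (000.1 1100 0000 0000) and -0.875 becomes -28672 in Q15 (1.001 0000 0000 0000), matching the table entries.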

When two Q15 numbers are multiplied, the result is in Q30 format and is also a fraction. The result has 30
fractional bits, 2 sign bits, and no integer bits.

   -0.5       1.100 0000 0000 0000
  × 0.5     × 0.100 0000 0000 0000
  -----     ---------------------------------------
  -0.25      11.11 0000 0000 0000 0000 0000 0000 0000

To store the result as a Q15 number, a left shift of one is performed to eliminate the extra sign bit, and the
most significant 16 bits are stored. The result is stored as 1.110 0000 0000 0000.
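On a 16 × 16 → 32-bit multiplier, the shift-and-store step just described looks like this in C (a sketch; on the TMS320 the product shift is a hardware mode). An arithmetic right shift by 15 is equivalent to shifting left by one and keeping the high 16 bits:

```c
#include <stdint.h>

/* Q15 x Q15 multiply: the raw 32-bit product is Q30 with two sign
   bits; keeping bits 30..15 yields a Q15 result. The one case that
   still overflows is (-1.0) * (-1.0), since +1.0 is not representable. */
int16_t q15_mul(int16_t a, int16_t b)
{
    int32_t p = (int32_t)a * (int32_t)b;  /* Q30 product */
    return (int16_t)(p >> 15);            /* drop the extra sign bit */
}
```

Checking the worked example: q15_mul of -0.5 (0xC000) and 0.5 (0x4000) returns 0xE000, which is -0.25 in Q15.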
Multiplication never gives an overflow in Q15 format, but successive additions may. If the final result is known to be within range, overflow in partial results will still give a correct final sum. However, the saturation mode on the TMS320 must be turned off. For example,

      +0.875     0.111 0000 0000 0000   (Q15 format)
    + +0.500   + 0.100 0000 0000 0000
      +1.375     1.011 0000 0000 0000   (overflow; wraps to a negative value)
    + -0.500   + 1.100 0000 0000 0000   (add twos complement)
      +0.875     0.111 0000 0000 0000   (correct final result)
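The example can be checked in C with two hypothetical helpers, one wrapping and one saturating. The saturating version models the TMS320 saturation mode that, as noted above, must be turned off for intermediate overflows to cancel.

```c
#include <stdint.h>

/* Q15 addition with wraparound (saturation mode off): the sum wraps
   modulo 2^16, so intermediate overflows cancel if the final result
   is in range. */
int16_t q15_add_wrap(int16_t a, int16_t b)
{
    return (int16_t)(uint16_t)((uint16_t)a + (uint16_t)b);
}

/* Q15 addition with saturation (models the TMS320 saturation mode). */
int16_t q15_add_sat(int16_t a, int16_t b)
{
    int32_t s = (int32_t)a + (int32_t)b;
    if (s >  32767) return  32767;
    if (s < -32768) return -32768;
    return (int16_t)s;
}
```

Running the worked example, 28672 (+0.875) plus 16384 (+0.5) wraps to a negative intermediate value, and adding -16384 (-0.5) restores 28672; with saturation enabled, the intermediate result clamps at 32767 and the final sum is wrong.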

Finite Word-Length Effects
Finite word-length effects are probably the most critical issue in implementing controllers. Most digital controllers use fixed-point processors. In a fixed-point processor, only a finite amount of storage (for example, 4, 8, or 16 bits) is available to represent the signals and coefficients. Signals and coefficients must be scaled to fit in the dynamic range and word length of the processor. This limited storage capacity is referred to as the finite word-length issue. Finite word-length effects show up as noise in the system and may cause limit cycles or instability. It should be noted, however, that finite word-length effects are somewhat forgiving in first- and second-order controllers. Finite word length affects the controller in two ways: coefficient quantization and signal quantization.


Coefficient Quantization: Finite word length affects the representation of coefficients. The coefficients may need to be truncated or rounded to fit in that word length. If truncation or round-off is necessary, the process is called coefficient quantization. Coefficient quantization alters the transfer function of the system and changes the pole-zero locations and the gain of the system. Coefficient quantization depends upon the sampling rate as well as the word length. As the sampling rate gets higher, the poles tend to move toward and cluster around z = 1, making the system very susceptible to coefficient quantization. Coefficient quantization can be minimized by using proper structures. Some of these structures make the system less susceptible to errors resulting from truncation or round-off. This is discussed in the section Controller Structures.
Signal Quantization: Finite word length can also cause signal quantization. This can be divided into three different categories.
A/D and D/A Quantization Effects: One type of signal quantization occurs upon the conversion of a continuous signal into discrete magnitudes by an A/D or a D/A converter. The A/D and D/A word lengths are usually limited to 8-12 bits. A/D and D/A conversion also affects the controller by contributing to computational delay. This is discussed in the section Computational Delay.
Most commercial A/D and D/A converters are available in the range of 8 to 16 bits, with a heavy premium on higher resolutions. An 8-bit A/D converter gives an accuracy of 1 in 256, or an error of 0.4%, while a 10-bit A/D converter gives a resolution of 1 in 1024, or an error of 0.1%. Unlike errors caused by the other quantization processes, errors due to A/D and D/A effects are not recursively fed back into the control system. In most cases, signal conversion requires a smaller word length than the processor word length. Sensor accuracy must also be taken into account. If the sensor has 5 mV of noise in a 5-V system, then there is no point in having an A/D with greater than 10-bit resolution. Once the A/D is selected, the D/A is chosen to have the same or slightly higher resolution. Selection of A/Ds and D/As is usually not a major problem when implementing the controller. Too often, errors from numerical calculations (truncation or round-off) are mistaken for low resolution in the input/output signal.
If the controller is used in the servo mode and forced to follow a reference signal, the reference signal must then be represented correctly. If it is represented with a higher precision than the A/D's resolution, the error will never go to zero, causing a limit cycle.

Truncation and Round-Off Effects: The second kind of signal quantization appears when the results of signal processing are truncated or rounded. Intermediate calculations require higher precision than the final result. For example, a 16 x 16 multiply requires a 32-bit register to store the result. If only 16 bits are available, the lower 16 bits are thrown away; this is known as truncation error. If the LSB is rounded before throwing away the lower 16 bits, this is known as round-off error. Since both of these errors are fed back recursively, they accumulate as successive calculations are performed.
Truncation and round-off introduce bias and noise into the system, which may produce limit cycles because of nonlinearities. If q denotes the quantization step, u the mean of the noise density, and s^2 the variance of the noise density, then

    u = q/2   and   s^2 = q^2/12   for truncation
    u = 0     and   s^2 = q^2/12   for rounding
These effects can be minimized by the proper selection of structures. For example, a fourth-order system
becomes less sensitive to truncation and round-off errors if it is broken into lower-order parallel structures.

Overflow Effects: A third effect of signal quantization is overflow. Successive calculations (i.e., additions) can cause registers to overflow even when fractional arithmetic is used. This, in turn, forces the contents of the affected registers to wrap around and change magnitude from the most positive to the most negative numbers. This is equivalent to changing the direction of the control. To prevent this, a check for overflows must be made continuously during the intermediate and final stages. When twos complement arithmetic is used, intermediate overflows can sometimes be ignored if the final result is known to be within bounds. In the TMS320 architecture, a saturation mode is provided to prevent the contents of registers from wrapping around and changing sign when an overflow occurs. Overflow effects can be minimized by the proper selection of scaling factors and by leaving extra guard bits.

Scaling
Selection of a proper scaling factor is critical in minimizing the effects of finite word length. The scale factor should support the full dynamic range of signals and coefficients. A large scale factor may cause an overflow condition. Although overflow protection is built into the TMS320 architecture, it is advisable to minimize the possibility of overflows. To solve that problem, it may sometimes be necessary to choose a smaller (12-13 bits) scale factor. A small scale factor could, on the other hand, increase quantization noise.
Usually, there is little choice in handling the dynamic range of signals. If the dynamic range is too big, it may dictate selection of a floating-point instead of a fixed-point processor. Simulations are required to determine the dynamic range. In some cases, it may be possible to switch modes and change scale factors.
For proper scaling, a two-step approach is required. The first step requires optimization of the structure. Once the structure has been transformed into one suitable for implementation, scaling can be carried out. If transfer functions are used, direct structures should be avoided and broken into smaller cascaded structures. If necessary, a different scale factor can be chosen for each substructure. The scale factor is found by first calculating the worst-case response, H(z), of the system under maximum input signal conditions. Different techniques, lp, l2, and l1 (described later in this section), may be used to find H(z). Next, H(z) must be scaled down in value to prevent an overflow during the intermediate and final stages. If fractional representation in Q15 format is assumed, the scaled response, H'(z), must be less than unity. The scale factor, Sn, is finally found by satisfying the following relationship:
    H'(z) = H(z)/Sn < 1

For state space structures, diagonal scaling can be used. Again, the first step before scaling is the transformation process. Techniques like the Schur transformation or the Modal transformation can be used to optimize structures. These transformation techniques not only reduce the dynamic range of the coefficients but also reduce the number of nonzero elements in the structure. This minimizes the calculations that the processor must carry out.
The next step is to find the appropriate scale factors. The scaling factor must take into account the translation of the proper input/output variables (i.e., the voltage range of the A/D and D/A converters). In addition, it must prevent overflow or saturation during the intermediate states. Extensive simulations are usually necessary to ascertain the maximum and minimum values of the states to provide the necessary scale factors. The scaling procedure can be broken into two different operations: input/output scaling and state vector scaling.
Input/output scaling transforms the internal fractional representation of numbers to external physical variables. Internal numbers within the range of +0.9999 to -1.0000 may have to be changed into external values of +/- volts for the A/D and D/A converters. For example, given a system
    x(n+1) = A x(n) + B u(n)
    y(n)   = C x(n) + D u(n)
then the B, C, and D matrices must be scaled by the following relationships:

    Bs = B (Su)^-1
    Cs = (Sy)^-1 C
    Ds = (Sy)^-1 D (Su)^-1

where (Sy)^-1 and (Su)^-1 are diagonal matrices.
For a system with an input/output physical range of +/-10 V and a processor number range of +/-1.0000, we have

    Bs = 10B
    Cs = 0.1C
    Ds = D
For state vector scaling, each state variable must be scaled to keep it within the number range of the processor. Each state vector is divided by the following diagonal scale factor matrix:

    xs = (Sx)^-1 x

The system can now be represented as

    xs(n+1) = As xs(n) + Bs u(n)
    y(n)    = Cs xs(n) + D u(n)

where

    As = (Sx)^-1 A Sx
    Bs = (Sx)^-1 B
    Cs = C Sx
There are three different ways to calculate the scale factor matrices.
The first way to choose Sx is to simulate the closed loop under worst-case conditions and to check for overflow at each node or summation. The worst case is defined as when the largest absolute value of a state variable is selected for the calculation. This is known as lp scaling. Given

    Sx,i = max[abs(xn,i)]

then

    Sx = diag(Sx,i)

The second approach is to statistically analyze the probability of overflow at each node instead of doing actual simulations. This is known as l2 scaling.
The third approach is to perform an analysis with certain bounded conditions on the input signals. This is known as l1 scaling. l1 scaling can be applied only to stable systems.

Controller Structures
Selection of the proper structure for digital controllers is a very critical issue, and its importance cannot be overemphasized. It is often the most overlooked aspect of implementation. Digital controllers can be described in terms of different, but equivalent, structures. These structures have the same infinite word-length behavior but different finite word-length behavior. The difference in finite word-length behavior results from the fact that some structures have coefficients that are less sensitive to coefficient truncation or that lie within a smaller numerical range, thus making them easier to scale. They may also produce lower-order equations.

Transfer Function Forms: Several different structures can represent a system when transfer functions are used. The simplest form is the direct structure shown in Figure 1.
Usually, in this structure, the coefficients have a wide range, depending upon the pole-zero locations. This makes the structure very susceptible to coefficient quantization, round-off error, and overflow. The structure can be represented in transfer function form as

    H(z) = (b0 + b1 z^-1 + b2 z^-2 + b3 z^-3 + b4 z^-4) / (1 + a1 z^-1 + a2 z^-2 + a3 z^-3 + a4 z^-4)

Figure 1. Direct Structure

[Signal-flow graph: the input e(n) and its delayed values e(n-1) through e(n-4), weighted by b0 through b4, are summed with the delayed outputs y(n-1) through y(n-4), weighted by the a coefficients, to form y(n).]

Another alternative structure is the cascaded structure, shown in Figure 2.
This can be represented by the transfer function

    H(z) = [(b10 + b11 z^-1 + b12 z^-2)(b20 + b21 z^-1 + b22 z^-2)] / [(1 + a11 z^-1 + a12 z^-2)(1 + a21 z^-1 + a22 z^-2)]

The cascaded structure is somewhat less susceptible to round-off error and overflow than the direct structure. One advantage of this method is that poles and zeroes close to each other can be matched together. This reduces the range of coefficients for each substructure, and different scale factors can then be chosen for them. A transfer function should be broken into first- or second-order cascaded functions to derive the greatest advantage from this structure.
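A cascade of second-order sections can be sketched in C as follows. This is an illustrative floating-point direct-form-I implementation; the type and function names are ours, not from the text.

```c
/* One second-order section (biquad); cascading such sections
   implements H(z) as a product of quadratic factors. */
typedef struct {
    double b0, b1, b2;   /* numerator coefficients   */
    double a1, a2;       /* denominator (a0 = 1)     */
    double x1, x2;       /* previous two inputs      */
    double y1, y2;       /* previous two outputs     */
} biquad_t;

double biquad_step(biquad_t *s, double x)
{
    double y = s->b0 * x + s->b1 * s->x1 + s->b2 * s->x2
             - s->a1 * s->y1 - s->a2 * s->y2;
    s->x2 = s->x1; s->x1 = x;       /* shift the delay lines */
    s->y2 = s->y1; s->y1 = y;
    return y;
}

/* Run one sample through a cascade of n sections. */
double cascade_step(biquad_t *s, int n, double x)
{
    for (int i = 0; i < n; i++)
        x = biquad_step(&s[i], x);
    return x;
}
```

Each section holds only its own small-range coefficients, so each can carry its own scale factor, which is exactly the advantage described above.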


Figure 2. Cascaded Structure

[Signal-flow graph: two second-order sections in series; the output y1(n) of the first section, with coefficients b1x and a1x, feeds the second section, with coefficients b2x and a2x, to produce y(n).]

A parallel structure can also be chosen to represent a system. This is shown in Figure 3.
It is the least susceptible to round-off errors and overflow problems. A parallel structure can be obtained by partial fraction expansion or division. This can be represented as

    H(z) = (b10 + b11 z^-1)/(1 + a11 z^-1) + (b20 + b21 z^-1)/(1 + a21 z^-1)
         + (b30 + b31 z^-1)/(1 + a31 z^-1) + (b40 + b41 z^-1)/(1 + a41 z^-1)

For example, if the transfer function is given by

    H(z) = (z^2 - az + b) / (z^2 - 1.9979z + 0.9979)

the poles of the system are at z1 = 0.9988 and z2 = 0.9991.

If this system is represented with coefficient round-off, it becomes

    H(z) = (z^2 - az + b) / (z^2 - 1.998z + 0.998)

The new pole locations are now z1 = 0.9980 and z2 = 1.0000.
If this system had been represented as a cascade of two first-order substructures, the structure after round-off would be

    H(z) = (z - a1)(z - a2) / [(z - 0.998)(z - 0.999)]

Thus, the cascaded structure shows less sensitivity to coefficient round-off.

Figure 3. Parallel Structure


State Space Form: If the state space form is used, the controller can again be represented in different, but equivalent, state space structures that can give better finite word-length behavior. Structure transformation techniques should also be employed to create structures that have less numerical sensitivity. Structural forms like the Modal or Schur form can reduce the number of nonzero elements in the structure.
The Modal form of a matrix is a diagonal matrix with all its eigenvalues as the diagonal elements. If the eigenvalues are complex, then the corresponding diagonal elements are 2-by-2 blocks. The Modal form requires that all eigenvectors be linearly independent. This is referred to as the diagonal canonical form. The Modal form is represented as follows:

    [ r1  0   0   0   0  ]
    [ 0   r2  0   0   0  ]
    [ 0   0   r3  0   0  ]
    [ 0   0   0   r4  0  ]
    [ 0   0   0   0   r5 ]

where r1 through r5 are the eigenvalues.

The Schur representation of a matrix is upper-right triangular with the eigenvalues on the diagonal. If the eigenvalues are complex, then they appear as 2-by-2 blocks on the diagonal. The Schur representation is given as follows:

    [ r1  x   x   x   x  ]
    [ 0   r2  x   x   x  ]
    [ 0   0   r3  x   x  ]
    [ 0   0   0   r4  x  ]
    [ 0   0   0   0   r5 ]

The following example shows the effects of structure transformation; complete implementation examples along with the TMS320C14 code are given in Appendix 1. The state controller and estimator that were developed in PART II's introduction are used here. The structure is transformed with the Schur method and the Impex® software. The A matrix now represents

    A - BK - LC

from the original matrices in order to satisfy the input requirements for the Impex® software. The original system is given by the following set of coefficients. The software uses an extended-precision/floating-point format to represent the original system. After structure optimization and scaling, the numbers are converted into 16-bit/fixed-point format for implementation and code generation. For illustration purposes, the system is also represented in 32-bit/fixed-point format to show the loss of resolution due to lack of structure optimization.


After the Schur transformation, the matrices are obtained as follows:

Note that the Schur transformation has tremendously reduced the dynamic range of the coefficients, thus making it easier to scale them. Matrix C is not treated, since it can be scaled independently.

Computational Delay
Computational delay is a critical disadvantage of digital controllers. It has prevented widespread use of microprocessors and microcomputers in digital controllers because the amount of computational delay produced by these processing elements is unacceptable. With the high performance of DSPs, computational delay becomes more manageable. Computational delay shows up as phase delay within the system and affects the phase margins of that system. The negative phase-shift contribution can be calculated as follows:

    phase delay = (computational delay)(bandwidth frequency)(360 degrees)

For a system with a 1-kHz bandwidth, a 100-us computational delay will produce a negative phase shift of 36 degrees.
Even when using DSPs, it is advisable to minimize the effect of computational delay. This may be done by adopting appropriate structures or signal flows. For example, a compensator is represented by the following difference equation:

    u(n) = K1 u(n-2) + K2 u(n-1) + K3 y(n-2) + K4 y(n-1) + K5 y(n)

Only the last element, K5 y(n), is dependent upon the latest measurement. The remaining elements can be precomputed and stored in memory. As soon as the measurement is made, the last element can be calculated and the control output u(n) sent to the actuator.
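This precomputation scheme can be sketched in C (hypothetical names, coefficients as in the difference equation above):

```c
/* Compensator u(n) = K1*u(n-2) + K2*u(n-1) + K3*y(n-2)
                    + K4*y(n-1) + K5*y(n).
   Everything except the K5*y(n) term depends only on past samples
   and can be computed before the new measurement arrives. */
typedef struct { double K1, K2, K3, K4, K5; } comp_t;

/* Run before the sample: history-dependent part of u(n). */
double comp_precompute(const comp_t *c, double u1, double u2,
                       double y1, double y2)
{
    return c->K1 * u2 + c->K2 * u1 + c->K3 * y2 + c->K4 * y1;
}

/* Run as soon as y(n) is measured: one multiply-add to finish. */
double comp_finish(const comp_t *c, double pre, double y)
{
    return pre + c->K5 * y;
}
```

Only one multiply-accumulate separates the measurement from the actuator output, so the effective computational delay shrinks to that final operation plus the conversion times.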
Similarly, a state estimator is expressed as

    x(n+1) = A x(n) + B u(n) + L [y - C x(n)]
    u = -K x(n)

where y is the measurement and C x(n) is the estimated output. These can be split up as follows:

    X(n+1) = A x(n) + B u(n)
    Y = C X(n+1)

As soon as the measurement y is made, the control can be calculated by the following:

    x(n+1) = X(n+1) + L (y - Y)
    u = -K x(n+1)

This structure is usually referred to as a current estimator.
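The split can be sketched for a two-state system in C. This is an illustrative floating-point version with names of our choosing; a TMS320 implementation would use fixed-point arithmetic.

```c
/* Current estimator split for minimum latency (2-state example):
   the prediction runs before the sample arrives; only the
   correction and the control multiply remain after y(n) arrives. */
typedef struct {
    double A[2][2], B[2], C[2], L[2], K[2];
    double Xbar[2];   /* predicted state X(n+1)  */
    double Ybar;      /* predicted output Y      */
} est_t;

/* Before the sample: X(n+1) = A x(n) + B u(n), Y = C X(n+1). */
void est_predict(est_t *e, const double x[2], double u)
{
    for (int i = 0; i < 2; i++)
        e->Xbar[i] = e->A[i][0] * x[0] + e->A[i][1] * x[1] + e->B[i] * u;
    e->Ybar = e->C[0] * e->Xbar[0] + e->C[1] * e->Xbar[1];
}

/* After the sample: x(n+1) = X(n+1) + L (y - Y), u = -K x(n+1). */
double est_correct(est_t *e, double y, double xnew[2])
{
    for (int i = 0; i < 2; i++)
        xnew[i] = e->Xbar[i] + e->L[i] * (y - e->Ybar);
    return -(e->K[0] * xnew[0] + e->K[1] * xnew[1]);
}
```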


Another aspect of computational delay is the contribution of the A/D and D/A converters, the A/D usually being the main factor. The A/D has some minimum conversion time, while the D/A requires settling time. The conversion delay of the A/D creates a negative phase shift and affects the phase margin and stability of the system. The zero-order-hold (ZOH) action of the D/A converter produces a delay of one sample time. This delay is comprehended in the design when the plant is discretized. The A/D conversion and D/A settling times, on the other hand, must be taken into account during the implementation.
Typical A/D converters available in the market today range in conversion time from 50 ns for video applications to 50 us for data acquisition. There is often a trade-off between conversion time and resolution: those A/Ds with fast conversion times usually have lower resolution. For most control systems, A/D converters are chosen with a conversion time of 15 us or less. However, the selection depends upon the bandwidth and phase margin of the system. The phase delay is given by

    phase delay = (conversion delay)(bandwidth frequency)(360 degrees)

For a system with a 1-kHz bandwidth and an A/D converter with a 10-us conversion time, the A/D converter will contribute a negative phase shift of 3.6 degrees.

Sampling Rate Selection
Another important consideration is the selection of the sampling rate. In signal processing, the sampling rate should be at least twice the bandwidth or twice the highest frequency component in the system. If lower sampling rates are selected, noise from the high-frequency components may be introduced into the system and would be indistinguishable from the signal. Antialiasing filters are installed before the controller so that high-frequency components can be attenuated. In control systems, the sampling rate is commonly chosen to be ten to twenty times the system's bandwidth. However, this refers to the closed-loop bandwidth of the controller. If the system has structural resonances that require notch filters to cancel them, a sampling rate of two times the bandwidth or higher is sufficient for the filters.
Theoretically, a digital system should be equivalent to an analog system if the sampling rate is very high. However, in practice, when the sampling frequency becomes too high, the poles will cluster around z = 1, making the system more susceptible to coefficient quantization. Modifying the structure may be necessary to minimize this effect.
Another factor that needs to be taken into account is stability. When a stability analysis is done by mapping pole locations or eigenvalues on the unit circle in the z plane, the result holds only for that sampling frequency. As the sampling frequency is changed, it creates a new mapping of eigenvalues on the unit circle.
Table 1 shows the pole locations for various sampling frequencies of a lead-notch controller that is transformed into the z domain using the bilinear transformation. The lead-notch controller is given by
    G0(s) = [(s + 0.35)(s^2 + 0.06s + 1.2)] / [s(s + 8)(s + 27)^2]

In addition to the controller's sampling rate, the sensor's bandwidth needs to be considered. Sensors like
encoders give digital outputs. At high sampling rates and low speeds, their outputs may be heavily quantized, causing large variations from sample to sample. Taking a moving average of the last few samples may
be necessary to eliminate those variations. This essentially implements a low-pass filter for the input signal.
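Such a moving average can be sketched in C. This is an illustrative four-sample version with integer arithmetic, as might run on a fixed-point DSP; the names are ours.

```c
#include <stdint.h>

/* Four-sample moving average to smooth a heavily quantized encoder
   reading; a running sum avoids re-adding the whole buffer. */
#define MA_LEN 4

typedef struct {
    int32_t buf[MA_LEN];   /* circular buffer of recent samples */
    int32_t sum;           /* running sum of the buffer         */
    int     idx;           /* next slot to overwrite            */
} mavg_t;

int32_t mavg_step(mavg_t *m, int32_t x)
{
    m->sum += x - m->buf[m->idx];       /* replace oldest sample */
    m->buf[m->idx] = x;
    m->idx = (m->idx + 1) % MA_LEN;
    return m->sum / MA_LEN;             /* averaged output       */
}
```

As the text notes, this acts as a simple low-pass filter on the input signal, so its phase contribution should be counted with the other delays.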


Table 1. Location of Poles for a Lead-Notch Controller

Antialiasing Filters
In a digital signal processing system, a minimum sampling rate must be maintained to allow reconstruction of the information in the digital domain. According to the Nyquist criterion, the sampling frequency must be at least twice the highest frequency component in the signal. If a lower sampling frequency is used or if high-frequency noise is present, some of the information will be lost. This is known as aliasing. If unwanted high-frequency components are present, they must be removed through circuits known as antialiasing filters.
In control systems, antialiasing filters must be used carefully; they cause phase delay, which adds to the computational delay of the controller. A negative phase shift affects the phase margin of the system. Because of the oversampling (10 to 20 times the bandwidth) common in control systems, it is usually possible to avoid the use of antialiasing filters. If antialiasing filters are used, they should be first-order filters with minimum phase delay. The negative phase-shift contribution of the filter should be taken into account along with the computational delay and the A/D conversion delay.

Controller Design Tools
Analog controller implementation requires only hardware design. A digital controller implementation requires not only a hardware design but also extensive software design. The hardware design of a digital controller is somewhat easier to accomplish, and standard forms of processor interface can be chosen independently of the type of controller structure selected. The burden of software design can be eased by the wide selection of CASE (Computer-Aided Software Engineering) and code-generation tools that are available today. These tools tremendously increase the productivity of the control designer.

Algorithm Development
In control systems, extensive simulation of control algorithms is necessary before the design can be carried out. Simulations may also be necessary under worst-case conditions so that appropriate scaling factors can be obtained. Numerous software packages are available that provide not only simulation capability but also design capability. As mentioned earlier, some of the more popular packages are PC-Matlab, Matrix-X, and Simnon. The Impex® software package also has extensive simulation capabilities. It supports simulation with A/D and D/A converters and the effects of the converters' resolution and conversion delay. It can also comprehend computational delays and different levels of quantization on all or some of the states.

Software Development
Software development is another major concern in implementing digital controllers. The programmable
approach to controllers allows easy upgrade and maintenance. It protects the development investment but, at the same time, requires more initial development effort. Also, programming with DSPs requires slightly different techniques than programming with ordinary processors.
Typically, in control systems, processors are used for supervisory functions, and analog circuits are used for signal processing functions. When DSPs are used, they may be required to implement not only the signal processing functions but also the supervisory functions. With ordinary processors, there is usually a large reliance on lookup tables for math and other functions. With DSPs, it is more common to calculate the actual math functions or algorithms. Functions like sine and cosine may be easily calculated using their series expansions. Due to the high speed of DSPs, it is very common to eliminate as much of the external hardware as possible and, instead, use on-chip processing for those functions. For example, low-cost sensors can be used; or some of the sensors can be eliminated entirely, and on-chip processing can compensate for their removal.
DSPs have been designed for realtime signal processing and have very fast interrupt response. On earlier processors, the facilities for concurrently running multiple tasks were limited by their small hardware stacks, although larger software stacks were possible. This reduced the number of nested interrupts or subroutines that the processors could handle. Therefore, it is normally advisable to use macros and straight-line code instead of repeated subroutine calls.
DSPs do not have a single-cycle divide instruction, so division should be avoided. If division is necessary, the first choice is multiplication by an inverse. Division can also be performed by repetitive execution of the SUBC (conditional subtract) instruction, or a limited division can be performed by right-shift operations.
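The conditional-subtract idiom can be sketched in C. This is our own emulation of a SUBC-style restoring-division loop (16 iterations for a 16-bit quotient), valid in this 32-bit sketch for divisors below 2^15.

```c
#include <stdint.h>

/* 16-bit unsigned division by repeated conditional subtraction,
   mirroring the TMS320 SUBC idiom: shift the accumulator left and,
   if the shifted value covers the divisor, subtract and set the
   quotient bit. After 16 passes, the low half holds the quotient
   and the high half holds the remainder. */
uint16_t div_subc(uint16_t num, uint16_t den)
{
    uint32_t acc = num;
    uint32_t d   = (uint32_t)den << 16;
    for (int i = 0; i < 16; i++) {
        acc <<= 1;
        if (acc >= d)
            acc = acc - d + 1;   /* subtract and set quotient LSB */
    }
    return (uint16_t)acc;        /* low 16 bits: the quotient */
}
```

For instance, div_subc(100, 7) returns 14, with the remainder 2 left in the high half of the accumulator. When the divisor is a constant, multiplication by a precomputed inverse is usually faster still.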
Four different approaches to software development can be taken: high-level languages, assembly language, signal processing languages, and code generation software.

High-Level Languages: Using a high-level language (HLL) like C, Pascal, or FORTRAN can substantially cut development effort. Such languages are widely familiar and easy to program in. Typically, high-level languages are used for initialization and nonrealtime code. They are not optimized for signal processing functions or for particular processor architectures. Compiled code is always larger than handwritten assembly code and may be 2 to 4 times its size; this is a high penalty for time-critical signal processing applications. In cases where a high-level language is necessary, it is beneficial to have a thorough knowledge of the processor architecture to make the most efficient use of the special signal processing features.
Due to the general trend toward more usage of HLLs in industry, new TMS320 architectures are also being optimized for HLLs. The floating-point generations (TMS320C3x and TMS320C4x) of the TMS320 family have architectures especially designed for greater support of high-level languages and produce highly efficient compiled code. The fixed-point generations, on the other hand, may require assembly coding for their time-critical routines.

Assembly Language: Assembly language produces the most efficient code. Even when using a high-level language, it may be necessary to use assembly language for the more time-critical operations. Assembly language programming requires an intimate knowledge of the processor architecture. At the same time, the performance requirements of some signal processing systems demand maximum code efficiency, leaving very little choice but to use assembly language. To give assembly language some resemblance to a high-level language, macro libraries are often developed for the more frequently used functions.
Signal Processing Languages: Signal processing languages provide a middle ground between high-level language coding and assembly language coding. They offer the development ease of standard high-level languages. At the same time, they offer code efficiency comparable to that of assembly


language because they are designed for specific signal processing applications. The digital signal processing language (DSPL) from dSPACE is one example. One disadvantage is that there is no standard for these languages, and none of them is widely known.

Code Generation Software: Code generation packages that automatically generate assembly code for particular processors are becoming available. For example, the Impex® software package from dSPACE will generate TMS320 assembly code from a mathematical description of the controller. The DFDP (Digital Filter Design Package) from ASPI will generate assembly code for TMS320 processors from a description of a filter. These packages are becoming increasingly popular because they allow the control designer to focus on design issues instead of developing assembly language software.
Device Simulators
Another useful tool in designing software is the device simulator. Simulators for the TMS320 family run on common platforms like the PC and VAX and provide full simulation of the instruction set along with instruction timing. Such simulation of the controller software can fully check the effects of math operations on internal registers and memory without the need for off-chip hardware. In some cases, software simulators have features that are not available on hardware development tools. These include full access to and tracing of internal processor memory and registers, and sometimes even internal pipeline operations. Also available are full breakpoint capabilities for inspection of the processor's state at the desired instants.

Hardware Design
A wide variety of tools is available for designing the hardware of a controller. These include target systems and EVMs that plug into a PC or operate stand-alone. In-circuit emulators can be used for complete system debugging. The XDS/22 emulators from TI support complete in-circuit emulation along with extensive breakpoint and tracing capabilities. Also available are device behavioral models that can simulate the timing and bus behavior of a complete target system without additional hardware. Logic Automation provides behavioral models for most members of the TMS320 family that run on popular workstations. Manufacturers like HP and Tektronix produce logic analyzers that can be used for extensive tracing. These logic analyzers can debug code by disassembling captured data.
Figure 4 shows the typical block diagram of a digital controller. A digital controller normally requires a processor, a memory interface to the processor, and A/D and D/A converter interfaces. Figure 5 shows a typical interface of a TMS320 DSP with memory and A/D and D/A devices. Further information is available in the appropriate TMS320 user's guides.

Figure 4. Digital Controller


Figure 5. Controller Interface
[Block diagram: the processor connected to memory, a timer, a host interface, a serial interface, PWM outputs, and an encoder interface.]

Summary
Implementation of digital controllers is a relatively new area, as the limited availability of information suggests. Most previous commercial implementations in industry were either first- or second-order systems. Typically, these are low-bandwidth systems, like process control, and do not take full advantage of the capabilities that modern control theory has to offer. Limitations of earlier processors prevented widespread use of digital controllers in many segments of industry. DSPs are the first class of processors that have the right combination of architecture, performance, and cost to make it possible to implement these advanced concepts in practical everyday systems. This combination now allows designers to implement advanced controllers in a wide variety of products and services and to solve the major problems in the implementation of digital controllers. PART IV's introduction, as well as its articles, describes many of these products and applications.
Digital controller implementation, however, is fundamentally different from analog controller implementation. Since natural analog processes are approximated, a fair amount of work must be done in preparing a controller design for implementation. This introduction highlighted some of the major problems that are usually encountered when implementing digital controllers. Undoubtedly, there are countless other problems that are unique to each application. However, minimizing the problems discussed here will provide a solid foundation for control system implementation. The use of CASE tools like Matrix-X, Impex, and DFDP is again recommended because they not only automate the design and implementation processes but also represent years of experience by experts.

References
1. Moroney, P., Issues in the Implementation of Digital Feedback Compensators, The MIT Press, 1983.

APPENDIX 1
This shows an example of a design and implementation using CASE tools. The
controller was designed in the previous section using PC-Matlab. The pole
locations were chosen to be z = 0.90 and z = 0.95. The following design
parameters were obtained.
A =
     1.00000000000000   0.00099444139773
     0                  0.98890343243454

B =
     0.00002685315106
     0.05360660659645

C =
     1   0

D =
     0

K =
     93.27208561511948   2.54443979371671

L =
  1.0e+02 *
     0.01063034324370
     2.78492351899385

A - BK - LC =
  1.0e+02 *
    -0.00066640808184   0.00000926115172
    -2.83492351899385   0.00852504649404
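These numbers can be cross-checked by recomputing A - BK - LC directly. A small Python sketch (the values are transcribed from the display above; the (1,1) entry is left out of the checks because the leading digits of L(1) are hard to read in the original scan):

```python
# Recompute A - B*K - L*C from the Appendix 1 design parameters
# (values transcribed from the PC-Matlab output above).
A = [[1.0, 0.00099444139773],
     [0.0, 0.98890343243454]]
B = [0.00002685315106, 0.05360660659645]      # input column
C = [1.0, 0.0]                                # output row
K = [93.27208561511948, 2.54443979371671]     # state feedback row
L = [1.06303432437, 278.492351899385]         # observer gain column

# closed[i][j] = A[i][j] - B[i]*K[j] - L[i]*C[j]
closed = [[A[i][j] - B[i]*K[j] - L[i]*C[j] for j in range(2)]
          for i in range(2)]
```

Three of the four entries reproduce the listing to high accuracy, which gives some confidence that the transcription of A, B, K, and L above is correct.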

The Impex software will be used to generate code suitable for
implementation on the TMS320E14. The following sections of Appendix 1 show the
different outputs of the software.

1a. This shows the original system derived from PC-Matlab and the input to
the Impex software. The matrices A, B, K, and L have to be combined as
shown above and will be referred to as the "a" matrix in the system. The
remaining matrices remain the same.

1b. This shows the effect of the Schur transformation on the system. The
dynamic range of the coefficients has been significantly reduced.

1c. This shows the system after scaling and Schur transformation. The C
matrix is not scaled, as this can be done via input/output scaling or even
with an external amplifier.

1d. This shows the realized system and the DSPL (Digital Signal Processing
Language) code for the state controller/estimator.

1e. This shows the assembly language code for this controller on the
TMS320E14 DSP. The code also shows the macros that will be used in the
expansion. The code interfaces to a DS1101 (a TMS320E14 board developed by
dSPACE). Initialization and peripheral addresses can be changed for other
systems.
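The dynamic-range claim for step 1b can be checked numerically: a similarity (Schur) transformation leaves the characteristic polynomial of the a-matrix unchanged while shrinking the spread between its largest and smallest coefficients. A small Python sketch, with matrix entries copied from Appendices 1a and 1b below:

```python
# Trace and determinant (hence eigenvalues) are invariant under the
# Schur similarity transformation; the coefficient spread is not.
a_phys  = [[-6.66408081842000e-02,  9.26115172000000e-04],
           [-2.83492351899385e+02,  8.52505649404000e-01]]   # Appendix 1a
a_schur = [[-6.66408081842000e-02,  4.74170968064000e-01],
           [-5.53695999803486e-01,  8.52505649404000e-01]]   # Appendix 1b

def trace(m):
    return m[0][0] + m[1][1]

def det(m):
    return m[0][0]*m[1][1] - m[0][1]*m[1][0]

def spread(m):
    """Ratio of largest to smallest coefficient magnitude."""
    mags = [abs(e) for row in m for e in row]
    return max(mags) / min(mags)
```

The spread drops from roughly 3e5 to about 13, which is what makes the scaled system implementable in 16-bit fixed point.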


Appendix 1a
- This is the original system obtained from PC-Matlab.
- Dynamic_matrix a represents A - BK - LC in the design.
basic_block is
  state_controller/estimator
  system_info_text is
    example for a second order state controller/estimator
    with one input and one output.
  end system_info_text;
  sampling_period := 0.001;
  system_inputs is
    name => pos_err, unit => v,
    lower_bound => -1.00000000000000E+01, upper_bound => 1.00000000000000E+01;
  end system_inputs;
  system_outputs is
    name => plant_con, unit => v,
    lower_bound => -1.00000000000000E+01, upper_bound => 1.00000000000000E+01;
  end system_outputs;
  system_equations ssd is
    system_representation := PHYSICAL;
    system_states is
      name => state_x1;
      name => state_x2;
    end system_states;
    dynamic_matrix is
      a( 1, 1) := -6.66408081842000E-02;
      a( 2, 1) := -2.83492351899385E+02;
      a( 1, 2) :=  9.26115172000000E-04;
      a( 2, 2) :=  8.52505649404000E-01;
    end dynamic_matrix;
    column_input_matrix pos_err is
      b( 1) := 2.68531510600000E-05;
      b( 2) := 5.36066065964500E-02;
    end column_input_matrix;
    row_output_matrix plant_con is
      c( 1) := 1.00000000000000E+00;
    end row_output_matrix;
    direct_link pos_err to plant_con is
      d := 0.00000000000000E+00;
    end direct_link;
  end system_equations;
end basic_block;
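As an aside, the A and B entries at the start of Appendix 1 are numerically consistent with a zero-order-hold discretization of a plant of the form Kp/(s(s+a)) at T = 1 ms. That plant model is an inference, not something stated in the text; the sketch below back-solves a and Kp from A(2,2) and B(2) and then predicts the other two entries:

```python
import math

# Hypothesis (an assumption, not stated in the text): the plant behind
# Appendix 1 is G(s) = Kp / (s*(s + a)), sampled with a zero-order hold
# at T = 1 ms.
T   = 0.001
A12 = 0.00099444139773      # listed A(1,2)
A22 = 0.98890343243454      # listed A(2,2)
B1  = 0.00002685315106      # listed B(1)
B2  = 0.05360660659645      # listed B(2)

a  = -math.log(A22) / T              # pole from A(2,2) = exp(-a*T)
A12_pred = (1.0 - A22) / a           # ZOH: A(1,2) = (1 - e^(-aT)) / a
Kp = B2 / A12_pred                   # gain from B(2) = Kp*(1 - e^(-aT))/a
B1_pred  = Kp * (T - A12_pred) / a   # ZOH: B(1) = Kp*(T - A(1,2)) / a
```

The predicted A(1,2) and B(1) agree with the listed values to many digits, supporting the assumed plant structure.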


Appendix 1b
- This shows the controller after performing the Schur
- transformation on it.
basic_block is
  state_controller/estimator
  system_info_text is
    example for a second order state controller/estimator
    with one input and one output.
  end system_info_text;
  sampling_period := 1.00000000000000E-03;
  system_inputs is
    name => pos_err, unit => v,
    lower_bound => -1.00000000000000E+01, upper_bound => 1.00000000000000E+01;
  end system_inputs;
  system_outputs is
    name => plant_con, unit => v,
    lower_bound => -1.00000000000000E+01, upper_bound => 1.00000000000000E+01;
  end system_outputs;
  system_equations ssd is
    system_representation := SCHUR;
    system_states is
      name => state_x1_schur;
      name => state_x2_schur;
    end system_states;
    dynamic_matrix is
      a( 1, 1) := -6.66408081842000E-02;
      a( 2, 1) := -5.53695999803486E-01;
      a( 1, 2) :=  4.74170968064000E-01;
      a( 2, 2) :=  8.52505649404000E-01;
    end dynamic_matrix;
    column_input_matrix pos_err is
      b( 1) := 1.37488133427200E-02;
      b( 2) := 5.36066065964500E-02;
    end column_input_matrix;
    row_output_matrix plant_con is
      c( 1) := 1.95312500000000E-03;
    end row_output_matrix;
  end system_equations;
end basic_block;


Appendix 1c
- This shows the controller after performing Schur transformation and
- scaling on it.
basic_block is
  state_controller/estimator
  system_info_text is
    example for a second order state controller/estimator
    with one input and one output.
  end system_info_text;
  sampling_period := 1.00000000000000E-03;
  system_inputs is
    name => pos_err_scaled,
    lower_bound => -1.00000000000000E+00, upper_bound => 1.00000000000000E+00;
  end system_inputs;
  system_outputs is
    name => plant_con_scaled,
    lower_bound => -1.00000000000000E+00, upper_bound => 1.00000000000000E+00;
  end system_outputs;
  system_equations ssd is
    system_representation := SCHUR;
    system_states is
      name => state_x1_schur_scaled;
      name => state_x2_schur_scaled;
    end system_states;
    dynamic_matrix is
      a( 1, 1) := -6.66408081842000E-02;
      a( 2, 1) := -3.05727555099513E-01;
      a( 1, 2) :=  8.58759911760409E-01;
      a( 2, 2) :=  8.52505649404000E-01;
    end dynamic_matrix;
    column_input_matrix pos_err_scaled is
      b( 1) := 2.04659364615941E-01;
      b( 2) := 4.40603476246127E-01;
    end column_input_matrix;
    row_output_matrix plant_con_scaled is
      c( 1) := 1.31209002384973E-04;
    end row_output_matrix;
  end system_equations;
end basic_block;
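Comparing 1c with 1d shows where the "realized" coefficients of the next listing come from: each scaled coefficient is simply rounded to the nearest multiple of 2^-15 (Q15). A short Python sketch with the values copied from the two listings:

```python
# The "realized" coefficients in Appendix 1d are the scaled coefficients
# of Appendix 1c rounded to Q15 (signed 16-bit, 15 fraction bits).
def q15(x):
    """Round a real coefficient to the nearest Q15 integer."""
    return round(x * 32768)

scaled = {                      # from Appendix 1c
    'a11': -6.66408081842000e-02,
    'a12':  8.58759911760409e-01,
    'a21': -3.05727555099513e-01,
    'a22':  8.52505649404000e-01,
    'b1':   2.04659364615941e-01,
    'b2':   4.40603476246127e-01,
    'c1':   1.31209002384973e-04,
}
realized = {k: q15(v) for k, v in scaled.items()}
```

Note how badly the output coefficient fares: 1.312e-4 quantizes to only 4 LSBs, which is why the listing leaves the C matrix to external scaling.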


Appendix 1d
- This shows the realized system and the DSPL code to implement it.
system_realization
linear_system is
  2nd order state controller/estimator
  type fractional is
    fix' (bits => 16,
          fraction => 15,
          representation => twoscomplement);
  scptype state1 is
    fix' (acculength => 32,
          round => on,
          scale => on,
          saturation => on);
  scptype out1 is
    fix' (acculength => 32,
          round => on,
          scale => common,
          saturation => on);
  a1 : scalable constant vector (2) of fractional
       := ( -6.665039062500E-002, 8.587646484375E-001 );
  a2 : scalable constant vector (2) of fractional
       := ( -3.057250976563E-001, 8.525085449219E-001 );
  b1 : scalable constant vector (1) of fractional
       := ( 2.046508789063E-001 );
  b2 : scalable constant vector (1) of fractional
       := ( 4.406127929688E-001 );
  c1 : scalable constant vector (2) of fractional
       := ( 1.220703125000E-004, 0.000000000000E+000 );
  xk  : vector (2) of fractional;
  xk1 : vector (2) of fractional;
  u   : vector (1) of fractional;
  input is u;
  y   : vector (1) of fractional;
  output is y;
begin
  every 1.000000000000E-003 do
    update (xk1, xk);
    input (u);
    output (y);
    accumulate scalpro (state1, 1.000000000000E+000)
      xk1(1) := a1 * xk + b1 * u;
    end accumulate ;
    accumulate scalpro (state1, 1.000000000000E+000)
      xk1(2) := a2 * xk + b2 * u;
    end accumulate ;
    accumulate scalpro (out1, 1.000000000000E+000)
      y(1) := c1 * xk1;
    end accumulate ;
  end every ;
end linear_system;


Appendix 1e
- TMS320E14 assembly code for the controller/estimator

        .title "linear_system"
        .list                      ; enable listing
        .global RESET              ; user program entry

; =============================================================
; code for DSPL's initialization
; standard version
; version for DS1101 TMS320C14 / E14 processor board
; WARNING: no interrupt besides TIMINT1 must be used !
; revision 2.01 / 09-Nov-1989
; (C) 1989 dSPACE GmbH
; =============================================================

init    $macro callno,blkno

timint1_bit?  .set  16
bsr?          .set  7
ddr?          .set  1
if?           .set  4
im?           .set  5
fclr?         .set  6
adc0?         .set  8
adc1?         .set  9
strb?         .set  0AH
comreg?       .set  0EH

; initialize RESET vector and INT vector
        .asect "RESET", 0
        b     RESET                ; vector to user program entry
        b     ISR                  ; vector to interrupt dispatcher
        .asect "DSPL"              ; return to DSPL compiler's code section

; define initial processor state
        dint                       ; disable interrupts
        rovm                       ; disable hardware overflow mode

; initialize constant because INIT is called before DSPL code
; transfers data to on-chip RAM
        lack  1
        sacl  one

; initialize interrupt system
        zac
        sacl  *
        out   *, bsr?              ; select BANK0
        out   *, ddr?              ; configure parallel port as input
        sub   one
        sacl  *
        out   *, im?               ; mask off all interrupts
        out   *, fclr?             ; clear all interrupt flags

; dummy read 12 bit ADCs to enable ADC operation
        lack  adc0?
        tblr  *
        lack  adc1?
        tblr  *

; dummy read communications port to reset rxfull flag
        lack  comreg?
        tblr  *

; clear incremental encoder counter registers
        lack  30H
        sacl  *
        lack  strb?
        tblw  *
        b     exit?                ; initialization complete

; interrupt service routine
ISR
        in    *, if?               ; read interrupt flag register
        lack  timint1_bit?
        and   *
        bz    no_timint1?
        sacl  *, 0
        out   *, fclr?
        call  timint1
no_timint1?
        eint
        ret
exit?
        .endm

; =============================================================

; code for DSPL's EVERY-statement (begin)
; standard version
; version for TMS320C14 / E14 on-chip timer 1
; formal parameter TIME passes the requested sampling period in µs
;   160 ns <= sampling_period <= 10.4 ms, resolution 160 ns
;   10.4 ms < sampling_period <= 41.9 ms, resolution 640 ns
;   41.9 ms < sampling_period <= 65.5 ms, resolution 2.56 µs
; revision 2.01 / 09-Nov-1989
; (C) 1989 dSPACE GmbH

evbeg   $macro callno, blkno, time

time?   .set  :time:
bsr?    .set  7
bank0?  .set  0
im?     .set  5
fclr?   .set  6
bank2?  .set  2
tcon?   .set  4
tpr1?   .set  1
t1int?  .set  0010H

; on-chip timer setup TMR1
        lack  bank2?
        sacl  *, 0
        out   *, bsr?              ; select BANK2
 $if time? < 03333H
        lack  006H                 ; prescale 0
 $else
 $if time? < 0CCCCH
        lack  002H                 ; prescale 4
 $else
        lack  004H                 ; prescale 16
 $endif
 $endif
        sacl  *, 0
        out   *, tcon?             ; update TCON

        lt    one
        mpyk  tpr?
        pac
        tblr  *                    ; load timer period value
        out   *, tpr1?             ; set TPR1

        lack  bank0?
        sacl  *, 0
        out   *, bsr?              ; select BANK0

        lt    one
        mpyk  imval?
        pac
        tblr  *
        out   *, im?               ; set IM register

        lack  t1int?
        sacl  *, 0
        out   *, fclr?             ; clear TMR1 interrupt flag bit
        eint                       ; enable interrupts
        b     $                    ; wait for interrupt

tpr?
 $if time? < 03333H
        .word time? * 5            ; if period < 13.107 ms
 $else
 $if time? < 0CCCCH
        .word time? * 5 / 4        ; if period < 52.428 ms
 $else
        .word time? * 10 / 32
 $endif
 $endif

imval?  .word ~t1int?

timint1
        .endm

; =============================================================
; code for DSPL's EVERY-statement (end)
; standard version
; version for TMS320C14 / E14 on-chip timer 1
; revision 2.01 / 09-Nov-1989
; (C) 1989 dSPACE GmbH
; =============================================================

evend   $macro callno,blkno,time
        ret
        .endm

; code for DSPL's INPUT-statement
; standard version
; version for DS1101 on-board 12 bit ADCs
; revision 2.01 / 09-Nov-1989
; (C) 1989 dSPACE GmbH

in12    $macro callno,blkno,data,channel

iop?    .set  0

        lack  1 << (:channel:-8)   ; setup busy test mask
wait?
        in    *, iop?              ; get busy bit
        and   *                    ; test busy bit
        bnz   wait?                ; wait until adc ready
        lack  :channel:
        tblr  :data:               ; read adc data
        .endm
; =============================================================

; code for DSPL's START macro
; version for DS1101 on-board 12 bit ADCs
; revision 2.01 / 31-Oct-1989
; (C) 1989 dSPACE GmbH
; =============================================================

start   $macro callno,blkno

strb    .set  0AH                  ; strobe for ADC0..1

        lack  003H
        sacl  *
        lack  strb
        tblw  *                    ; start both ADCs
        .endm

; =============================================================
; code for DSPL's OUTPUT-statement
; version for DS1101 on-board 14 bit DACs
; revision 2.01 / 31-Oct-1989
; (C) 1989 dSPACE GmbH
; =============================================================

out14   $macro callno,blkno,data,channel
        lack  :channel:
        tblw  :data:               ; write data to DAC
        .endm
        .asect "DSPL", 00010h      ; program memory base address

; status register save location (data page 1)
st      .set  000ffh
; predefined constants
c1      .set  00000h               ; predefined constant
        .word 1
c2      .set  00001h               ; predefined constant
        .word 32767
c3      .set  00002h               ; predefined constant
        .word -32768
c4      .set  00003h               ; predefined constant
        .word -1
; declarations for UPDATE variables
v1      .set  00004h               ; xk1(1)
        .word 0
v2      .set  00005h               ; xk(1)
        .word 0
v3      .set  00006h               ; xk1(2)
        .word 0
v4      .set  00007h               ; xk(2)
        .word 0
; declarations for variable vectors
v5      .set  00008h               ; u(1)
        .word 0
v6      .set  00009h               ; y(1)
        .word 0
; declarations for coefficients
c5      .set  0000ah               ; a1(2)
        .word 28140
c6      .set  0000bh               ; b1(1)
        .word 6706
c7      .set  0000ch               ; a2(1)
        .word -10018
c8      .set  0000dh               ; a2(2)
        .word 27935
c9      .set  0000eh               ; b2(1)
        .word 14438
; declarations for external procedures
one     .set  0000fh               ; constant for procedure init
        .word 1
zero    .set  00010h               ; constant for procedure in12
        .word 0
v7      .set  00011h               ; parameter for procedure in12
        .word 0
v8      .set  00012h               ; parameter for procedure out14
        .word 0

; start of program
RESET
        lark  ar1, 000e7h          ; initialize software stack pointer
        larp  ar1                  ; make stack accessible
        ldpk  000h                 ; select data page
        init  0,1                  ; call external procedure init
; perform data RAM initialization
        lark  ar1, 19              ; initialize counter
        lark  ar0, 00000h          ; initialize destination pointer
        lack  00010h               ; initialize source pointer
l1
        larp  ar0                  ; select destination pointer
        tblr  *+, ar1              ; transfer word, select counter
        add   c1                   ; increment source pointer
        banz  l1                   ; repeat until transfer complete
        lark  ar1, 000e7h          ; initialize software stack pointer
        larp  ar1                  ; make stack accessible
; line 42 - begin block statement
        evbeg 0,1,1000
; 16 cycles
; line 43
        ldpk  000h                 ; select data page
        dmov  v1                   ; xk1(1) --> xk(1)
        dmov  v3                   ; xk1(2) --> xk(2)
; 3 cycles
; line 44
        start 0,1                  ; initialize input
        in12  0,1,v5,00008h        ; input u(1)
; 56 cycles
; line 45
        out14 0,1,v6,00008h        ; output y(1)
; 4 cycles
; line 46
        zac
        lt    v2                   ; xk(1)
        mpyk  -2184                ; a1(1)
        lta   v4                   ; xk(2)
        mpy   c5                   ; a1(2)
        lta   v5                   ; u(1)
        mpy   c6                   ; b1(1)
        apac
        add   c1, 14               ; perform rounding
; overflow test and rescaling
        sach  *, 1                 ; save result
        blz   l2                   ; branch if result negative
        sub   c2, 15               ; positive limit
        blez  l3                   ; branch if no positive overflow
        lac   c2, 0                ; use positive saturation
        b     l4                   ; update result
l2
        sub   c3, 15               ; negative limit
        bgez  l3                   ; branch if no negative overflow
        lac   c3, 0                ; use negative saturation
        b     l4                   ; update result
l3
        lac   *, 0                 ; reload result
l4
        sacl  v1, 0                ; xk1(1)
; 19 cycles
; line 49
        zac
        lt    v2                   ; xk(1)
        mpy   c7                   ; a2(1)
        lta   v4                   ; xk(2)
        mpy   c8                   ; a2(2)
        lta   v5                   ; u(1)
        mpy   c9                   ; b2(1)
        apac
        add   c1, 14               ; perform rounding
; overflow test and rescaling
        sach  *, 1                 ; save result
        blz   l5                   ; branch if result negative
        sub   c2, 15               ; positive limit
        blez  l6                   ; branch if no positive overflow
        lac   c2, 0                ; use positive saturation
        b     l7                   ; update result
l5
        sub   c3, 15               ; negative limit
        bgez  l6                   ; branch if no negative overflow
        lac   c3, 0                ; use negative saturation
        b     l7                   ; update result
l6
        lac   *, 0                 ; reload result
l7
        sacl  v3, 0                ; xk1(2)
; 19 cycles
; line 52
        zac
        lt    v1                   ; xk1(1)
        mpyk  4                    ; c1(1)
        apac
        add   c1, 14               ; perform rounding
; overflow test and rescaling
        sach  *, 1                 ; save result
        blz   l8                   ; branch if result negative
        sub   c2, 15               ; positive limit
        blez  l9                   ; branch if no positive overflow
        lac   c2, 0                ; use positive saturation
        b     l10                  ; update result
l8
        sub   c3, 15               ; negative limit
        bgez  l9                   ; branch if no negative overflow
        lac   c3, 0                ; use negative saturation
        b     l10                  ; update result
l9
        lac   *, 0                 ; reload result
l10
        sacl  v6, 0                ; y(1)
; 15 cycles
; line 55
        evend 0,1,1000             ; end block statement
; 2 cycles
        b     $                    ; wait for interrupt
        .end

HARDWARE/SOFTWARE-ENVIRONMENT FOR DSP-BASED MULTIVARIABLE CONTROL

H. Hanselmann, H. Henrichfreise, H. Hostmann, and A. Schwarte
dSPACE digital signal processing and control engineering GmbH
An der Schönen Aussicht 2, D-4790 Paderborn, Fed. Rep. Germany

Abstract

Single-chip Digital Signal Processors (DSP) are powerful candidates for the implementation of multivariable control for fast systems. We report briefly on several applications of DSP in the control of mechanical systems. The success of these applications was to a large extent due to a set of software and hardware tools for controller implementation. Building upon our experiences of these applications we derive requirements and concepts for a novel development system (hardware/software-environment) for DSP in multivariable control.

Digital Signal Processors

The reason for considering DSP for control is their computing speed. In most other respects DSP are inferior to other kinds of processors. The speed of DSP comes mainly from the integrated hardware multiplier and accumulator, and from the multiple bus architecture. The latter is necessary in order to keep the fast arithmetic units busy, i.e., to allow the operand and result data transfers to keep up with the usually single-cycle arithmetic operations. A detailed description of DSP architectures is not given here. Some comparisons of current DSP chip architectures can be found in the literature. A few benchmark results related to control are mentioned there, and some more are reported below in the applications section.

The spectrum of DSP has grown rather broad now. It is divided into two blocks: one with fixed point and one with floating point arithmetic hardware.

The low end is represented by low cost devices such as the Texas Instruments TMS32010 with 16 bit fixed point arithmetic (32 bit in the accumulator) and rather limited data memory address range (144 words on-chip), which needs 400 ns for a multiply-and-accumulate operation (mac). In the medium range are devices which also support 16 bit fixed point arithmetic but are about twice as fast, have increased addressing space, and have increased functionality (such as on-chip serial interfaces). One example is the TMS320C25. High end fixed point arithmetic chips are the AT&T DSP16 with its speed (75 ns per mac), and the Motorola DSP56000 with its extended wordlength (24 bit operands and 56 bit in the accumulator). For high volume industrial use, versions with on-chip program EPROM (TMS320E15) or even EEPROM (General Instruments DSP320EE12) are particularly interesting.

A few floating point DSP have become available recently, most notably the NEC 77230 and the AT&T DSP32. Both chips offer 32 bit arithmetic with 150 ns (NEC, pipelined) to 250 ns (AT&T) for a mac. So there is only a small time penalty for floating point arithmetic if these chips are used. Even faster will be the chips which are scheduled to be sampled in 1988/1989, such as the AT&T DSP32C (up to 80 ns per mac) and the Texas Instruments TMS320C30 (60 ns per mac).

These chips will use 0.75 µm and 1 µm technology. The same technology will enable fixed point chips to be faster, but what is often more important for industrial use, the chip area saved by sticking to fixed point hardware can be used to increase the chip's functionality by integrating more timers, ports, interrupt control, etc. Microcontrollers like the Intel 8096 but with DSP core may be created that way. Using more conventional technology will on the other hand lower chip cost and thus open up high-volume applications. Fixed point DSP will have a place in industrial applications for years to come.

Applications

In this section we report briefly on some multivariable control applications using DSP. Unless otherwise stated these are applications we were involved in during our work at the Department of Automatic Control in Mechanical Engineering at the University of Paderborn.

Winchester Disc Drives

Modern high performance disc drives use fast voice coil actuators for the positioning of magnetic heads onto desired tracks and for keeping them on track against various disturbances by closed-loop control. Head positioning control comprises two tasks: (A) positioning on a target track (maybe across many tracks), and (B) track following during read and write operations. Modern control techniques can be expected to improve control speed and accuracy for both tasks.

For task (A), state estimator techniques help to solve the problem of estimating the state of the fast moving actuator from the track error, which is the only measurement variable usually available. For task (B), controllers can be designed which achieve high control bandwidth and good disturbance rejection despite the complicated nature of the mechanical plant.

Using a simple low order model (double integrator) for the actuator, an estimator-based controller was implemented on an Intel 8096 microcontroller by IBM. Owing to the medium performance embedded servo technique, the crossover frequency (around 300 Hz) and the sampling rate (around 4 kHz) were not very high, and the controller was relatively unambitious with respect to processor computing speed.

The computing power of a TMS32010 DSP was utilized in the track following control studies reported in the literature. A 9th order controller based on notch filter techniques (to compensate for structural resonance effects) was designed and implemented, running at about 30 kHz sampling rate. A crossover frequency around 900 Hz was achieved. The crossover frequency was limited mostly by model uncertainty, but the high sampling rate was not a luxury because of strong resonances in the plant even at 10 kHz. This controller was for an 8 inch drive with dedicated servo and a rotary voice coil actuator. A disturbance observer with disturbance feedforward was added to the 9th order controller for improved disturbance rejection.

A different controller with excellent disturbance rejection, based on lq (linear quadratic optimal) controller design for the same drive, was also implemented and ran at 34 kHz sampling rate.

Tailoring positioning controllers to modelled disturbance dynamics, incorporating adaptive techniques, or increasing the usable frequency range of the mechanical construction (smaller drives and better construction) will further increase processor power demand. So disc drives are an interesting field for DSP application.
Active Vehicle Suspension

Active vehicle suspension means total replacement of the conventional spring and shock absorber assemblies. Hydraulic cylinders driven by servovalves are used instead. The system relies fully on control.

The above-mentioned group at the University of Paderborn has been working on this subject for years under contract with several groups of Daimler Benz AG. Multivariable control techniques are applied. Multivariable controllers with more than 10 sensor inputs, 4 actuator outputs, several diagnostic outputs, and orders above 20 are common. These controllers are mostly linear with some added lumped nonlinearities for the compensation of nonlinear hydraulic
Reprinted, with permission, from Proceedings of the 12th IMACS Conference.

flow characteristics. Fast dynamics of the hydraulic systems require sampling rates above 1 kHz. After test-bed studies some years ago (already using DSP), an experimental off-road truck is currently being equipped to run in the field. A study for another type of vehicle is underway. TMS32010 systems were used until recently, and have now been replaced by TMS32020 systems.

In preparation for the off-road truck, the cylinder construction was tested at the university lab in a hardware-in-the-loop simulation. The real cylinder, which is to replace the spring/absorber assembly, was used. The road and the vehicle body were simulated in a TMS32010, together with the suspension controller and the controller for a second cylinder simulating the correct dynamic load such as the suspension cylinder would find in the real vehicle. The total system could have run at 7 kHz sampling rate, somewhat more than necessary.

A fully active system has also been designed and implemented for a race car at Lotus Co., UK, also using a TMS320 processor; 17 sensors are involved.

Semiactive vehicle suspension means replacement of the conventional shock absorber by an adjustable one. In contrast to existing slowly and/or discontinuously adjustable absorbers, the actuator mechanism has servovalve characteristics in order to come close to an active system in performance. Such a system is under development in an industrial company which is advised by the above-mentioned university. Again a TMS32020 system is used, which recently replaced a TMS32010 system.
Elastic Robot

With conventional control, the elastic movements in the drives and the flexibility of the arms of lightweight robots result in large vibrations of the hand, particularly during and after high acceleration intervals. A multivariable controller has been designed and implemented for a three-joint articulated robot driven by electrical servo-drives. This controller removed the vibrations virtually completely without a speed penalty.

Each motor was equipped with a position encoder and a tacho-generator, and the two arms carried two strain gages each for curvature measurements in both deflection directions. The total number of sensors was thus 10. The reference trajectory was fed into the controller as 3 position, 3 velocity, and 3 acceleration feedforward signals. The controller thus had 19 inputs and 3 outputs to the motors. The order of the controller was only 6, due to the special design technique and due to the fact that many sensors were used (many static gains). The controller was implemented on a TMS32010 and the sampling rate used was 10 kHz. The sampling rate could however have been more than twice that, so there was considerable spare computing power for additional tasks to be performed by the processor.

Hydraulic Robot

For tasks requiring medium speed but very high acceleration (such as water jet cutting), a 5 degrees-of-freedom (6 drives) gantry robot is under construction at an industrial company. Hydraulic drives have been chosen because of their good torque-to-weight ratio. The construction is novel in many respects and makes use of very lightweight materials.

Two particularly challenging requirements for control design and implementation have been: (a) to maintain tight trajectory control under maximum acceleration (i.e., max. error 0.2 mm at 30 m/s²), and (b) to use no other sensors than the position encoders of each hydromotor (absolute minimum).

Requirement (a) necessitated nonlinear compensation to cope
with the strong nonlinearities of hydraulic flow through the servovalve.

x̂(k+1) = Φ x̂(k) + Γ up(k) + K[yp(k) - H x̂(k)]                    (1)

(Franklin and Powell, 1980; Åström and Wittenmark, 1984), where up is the vector of control inputs to the plant and yp may contain plant measurement variables as well as reference inputs or measured external disturbances, in the case of reference and disturbance modelling. The observed state vector is then used in

up(k) = -L x̂(k)                                                    (2)

where L is a constant state feedback matrix, possibly including columns for feedforward of observed reference or disturbance model states. In (1) there could additionally be input terms separate from the control input term in the case of additional measurable external plant input signals. The term in brackets could be augmented by -D up(k) when the discrete plant state space description contains it as a direct feedthrough term. This occurs for instance when dealing with computational delay of the control processor using the approach given by Kwakernaak and Sivan (1972). The state observer/estimator may also come in another version, slightly different from (1):

x̂(k+1) = Φ x̂(k) + Γ up(k) + K[yp(k+1) - H x̂(k)]                  (3)

This version is called the "current" estimator by Franklin and Powell (1980). Åström and Wittenmark (1984) distinguish the predictor version given by (1) from the filter version given by (3). The presence of yp(k+1) has implications with respect to nonzero computation time (see Subsection 2.2).

Because up(k), which is computed via (2), also appears on the right-hand sides of (1) and (3), it is sometimes argued that (2) could just as easily be included in (1) or (3), yielding for instance

x̂(k+1) = (Φ - ΓL) x̂(k) + K[yp(k+1) - H x̂(k)]                     (4)

in the case of (3), along with (2) for computing the control input to the plant. This could however be dangerous when up as input to the plant saturates (Åström and Wittenmark, 1984). The versions (1) and (3) still work (Fig. 2a), but in the case of (4) the control system is broken up due to saturation into the plant and a system whose eigenvalues are those of Φ - ΓL - KH, which are not even guaranteed to be stable (Fig. 2b). The control system may never again regain stable operation after saturation has occurred. Astonishingly, this simple fact has frequently been ignored in the literature. Note that this problem ties in with the loop transfer recovery issue of continuous control (Doyle and Stein, 1981) as well as with antiwindup compensation (Åström and Wittenmark, 1984). From the author's own experience, designs are not unlikely to end up with an unstable system (4). In such cases at least, the control inputs to the plant should also be explicit inputs to the controller. If saturation occurs only at the DA-converter, an internal feedback of up under saturation in the control processor to the right-hand side of (1) or (3) may suffice; otherwise the inputs to the plant should be measured. Note that even when (4) is stable, the dynamics may be very unsatisfactory. If a continuous controller is designed and afterwards discretized, the discrete controller with feedback

Survey Paper

as in Fig. 2b may be unstable if actuator saturation occurs, even if the continuous controller remains stable.
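The danger can be seen in a few lines of simulation. A minimal scalar Python sketch (all numbers are invented for illustration): with Φ = 1, Γ = H = 1, L = 1.6, K = 0.5, both the regulator pole (Φ - ΓL = -0.6) and the estimator pole (Φ - KH = 0.5) are stable, yet the combined coefficient Φ - ΓL - KH = -1.1 of form (4) is unstable. Version (1), fed with the saturated control actually applied to the plant, keeps its estimation-error dynamics Φ - KH even while the actuator saturates:

```python
# Scalar predictor estimator (1) with state feedback (2), feeding back
# the *saturated* control actually applied to the plant.
phi, gam, H = 1.0, 1.0, 1.0       # plant: x(k+1) = x + u, y = x
L, K = 1.6, 0.5                   # feedback and estimator gains (invented)
sat = lambda u: max(-0.2, min(0.2, u))   # actuator limit

x, x_hat = 5.0, 0.0               # true state far from estimate
for _ in range(200):
    u = sat(-L * x_hat)                           # (2) plus saturation
    y = H * x
    x_hat = phi*x_hat + gam*u + K*(y - H*x_hat)   # (1) with applied input
    x = phi*x + gam*u                             # plant update

combined_pole = phi - gam*L - K*H   # dynamics of the combined form (4)
```

Because the estimator sees the same (saturated) input as the plant, the estimation error decays with factor Φ - KH = 0.5 throughout, and the loop recovers; the combined form (4) would instead run open loop with an unstable coefficient during saturation.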
In (1) and (3) there is an explicit computation of the observation/estimation error (the bracketed tenns). The term depending
on ::I. could however be omitted if  - KH in (1) or  - KH
in (3) are used for  instead. The controller (I), (2) can then be
reduced to a standard state space fonn

Xu. = ( -

KH)x, + (r, K)[U'.']

This occurs if certain methods are used to discretize an analog
controller. But the simple PlD controller given by
"P,k

=

rxe t

= UU-l + pet
UD,k = }'"D,It-l + 8(e t

(9)

"l,k

Uk

(5)

-

e,,-t)

= Up,lt + UI,k + "D,t

Yp,k

which is equivalent to (1), (2) with infinite arithmetic precision.
With short wordlength arithmetic there may however be cases
where the representation of (() - KH) and K in the processor
causes observation/estimation errors.
The reduction of (3) and (2) to standard state-space form is
prevented by the presence of y,.u. in (3). An input/output
equivalent standard state-space form could be found (see below)
but *- would not be preserved.

2.1.2 Standard state-space systems. If the controller design
method does not yield a specific algorithmic structure such as
(1) and (2), but just a discrete dynamic system with some inputs
and some outputs, or in cases where the structure is not required
to be preserved, the standard state-space description may be
adequate: .
Xk + 1

Yk

= AXk + BUk
=

eXk

+

(6)

Duk ,

Such a system may also appear as a subsystem in a complex
controller. Its input thus does not necessarily coincide with the
plant measurement. reference and measured disturbance vectors
as in (1), and its output is not necessarily the control input
vector to the plant. The usual convention of " being the input
and y being the output of this system has therefore been adopted,
and will be used in similar cases below. It is important to include
the direct feedthrough terms in (6) because controllers frequently
have such a term (think of simple P, PI, PD, PID type
controllers).
If (6) describes an unstable controller/compensator with
which does not contain the actuator control variables (as
opposed to (5)), the same problems in the case of actuator
saturation arise as discussed above. The closed-loop system of
course should be stable but breaking of the loop because of
actuator saturation is likely to have disastrous consequences (due
to possibly only "conditional stability" in Bode's terminology).
Astrom and Wittenmark (1984) suggest a neat way of circumventing such problems by implementing the system (instead of

"k

(6))

Xu. = (A - MC)x,
Yk = CXt

+ (B -

MD)u,

+ My"

(7)

+ DUb

which is equivalent to (6) as long as everything is linear. The
point is that (7) is a feedback system because y_k appears as an
input in the computation of x_{k+1}. Assume now y to be the control
input to the plant, and u to be the plant output. If y now
saturates, not only the controller/plant loop is broken, but
also the loop in (7) (see again Fig. 2 with (7) replacing the
observer/estimator/feedback system there). Thus one is left with
a system the dynamics of which are determined by A - MC
instead of A, and A - MC may have more desirable eigenvalues
because M can be chosen freely.
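To make the mechanism concrete, here is a first-order scalar sketch (my own illustration, not code from the survey) of form (7) with the saturated output fed back; when the output does not saturate it reproduces the original dynamics exactly, and when it does, the state dynamics are governed by a - m c:

```python
def aw_step(a, b, c, d, m, x, u, ulim):
    """One step of (7), scalar case:
    y_k = c x_k + d u_k, then limited to +/-ulim (modelling the actuator);
    x_{k+1} = (a - m c) x_k + (b - m d) u_k + m y_k with the saturated y."""
    y = c * x + d * u
    y_sat = max(-ulim, min(ulim, y))
    x_next = (a - m * c) * x + (b - m * d) * u + m * y_sat
    return x_next, y_sat
```

With y unsaturated the m-terms cancel and x_next = a x + b u, confirming the linear-case equivalence claimed in the text; under saturation the update is dominated by (a - m c), which can be placed freely via m.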
2.1.3. State-space systems with "current" term. Frequently the
system description contains a "current" term, which means that
x_k depends not only on u_{k-1} but also on the currently sampled
u_k:

x_{k+1} = A x_k + B_1 u_{k+1} + B_0 u_k
y_k     = C x_k + D u_k.                    (8)

A PID controller, for instance, also yields a description of the
form (8) if the integral part u_I and the differential part u_D are
chosen as state variables. If it is not necessary to preserve the
state variables, (8) can be translated into (6) using the
substitution (Hanselmann, 1984)
x*_k = x_k - B_1 u_k,                       (10)

resulting in

x*_{k+1} = A x*_k + (A B_1 + B_0) u_k
y_k      = C x*_k + (C B_1 + D) u_k.        (11)
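As a numerical check of this transformation (my own scalar sketch, not from the survey), both realizations produce the same output sequence when form (6) is started from x*_0 = x_0 - B_1 u_0:

```python
def eliminate_current_term(A, B1, B0, C, D):
    """Map (8) into standard form (6) via x*_k = x_k - B1 u_k (scalar case):
    the new system matrices are (A, A B1 + B0, C, C B1 + D) as in (11)."""
    return A, A * B1 + B0, C, C * B1 + D

def simulate_form8(A, B1, B0, C, D, u_seq, x0=0.0):
    """Direct simulation of (8); note u_{k+1} appears in the state update."""
    ys, x = [], x0
    for k, u in enumerate(u_seq):
        ys.append(C * x + D * u)
        u_next = u_seq[k + 1] if k + 1 < len(u_seq) else 0.0
        x = A * x + B1 * u_next + B0 * u
    return ys

def simulate_form6(A, B, C, D, u_seq, x0=0.0):
    """Simulation of the standard form (6)."""
    ys, x = [], x0
    for u in u_seq:
        ys.append(C * x + D * u)
        x = A * x + B * u
    return ys
```

The per-step identities used in (10)-(11) hold independently of u_{k+1}, so the outputs agree at every sampling instant.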

2.1.4. Transfer functions. Controllers or controller subsystems
are often given in transfer function form if they are SISO, MISO
or SIMO systems. In the case of MIMO systems the transfer
matrix description is not directly appropriate for implementation
purposes because of the underlying minimal realization problem.
For this reason, and because state-space models are more easily
amenable to numerical treatment, basing CACE tools on state-space
descriptions might be preferred, with some important
extensions as given in Subsection 5.4.
In the SISO case it is quite natural to derive an implementable
difference equation directly from the z-transfer function in
polynomial form:

G(z) = (b_0 + b_1 z^{-1} + ... + b_m z^{-m}) / (1 + a_1 z^{-1} + ... + a_n z^{-n}),   (12)

which could be implemented as

y_k = -a_1 y_{k-1} - ... - a_n y_{k-n} + b_0 u_k + b_1 u_{k-1} + ... + b_m u_{k-m}.   (13)

This is only the simplest equation, requiring more storage
elements than necessary. There are various other structures also
involving the polynomial coefficients of (12) more or less directly
(see for example Phillips and Nagle, 1984). The problem is that
such an implementation is very likely to fail with finite precision
arithmetic even in low order cases, so transfer functions are
usually realized in different, more appropriate forms (see Section
5).
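A minimal sketch (my own, not from the survey) of the difference equation (13) driven directly by the polynomial coefficients of (12); this is exactly the naive structure the text warns about, adequate for illustration but not for high-order use in finite-precision arithmetic:

```python
def make_direct_form(b, a):
    """Return a step function computing
    y_k = -a1 y_{k-1} - ... - an y_{k-n} + b0 u_k + ... + bm u_{k-m}.
    a[0] is assumed to be 1 (monic denominator)."""
    assert a[0] == 1.0
    u_hist = [0.0] * len(b)        # u_k ... u_{k-m}
    y_hist = [0.0] * (len(a) - 1)  # y_{k-1} ... y_{k-n}

    def step(u):
        u_hist.insert(0, u); u_hist.pop()
        y = sum(bi * ui for bi, ui in zip(b, u_hist)) \
            - sum(ai * yi for ai, yi in zip(a[1:], y_hist))
        if y_hist:
            y_hist.insert(0, y); y_hist.pop()
        return y
    return step
```

For instance, b = [1.0], a = [1.0, -0.5] gives the first-order recursion y_k = 0.5 y_{k-1} + u_k.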
If an observer/feedback controller is given in transfer function
form in the case of a SISO plant, it has at least two inputs and
one output. The two minimal inputs are the control input to
the plant as measured and the plant's output variable. Additional
inputs for the command or reference signals and measured
disturbances may be present. So such a controller is always
MISO. It may be tempting to eliminate the input of the plant's
actuating variable into the controller which computed that
variable. The problem associated with actuator saturation
discussed above in the state-space context then also arises. If,
originally, the controller is a compensator without this actuating
variable feedback, and it is unstable or exhibits unsatisfactory
dynamics, it is also possible to remedy this in transfer function
form (Astrom and Wittenmark, 1984), corresponding to the
modification shown in (7).

Survey Paper

2.1.5. Finite impulse response filters. Finite impulse response
(FIR) filters are known from digital filter theory (Oppenheim
and Schafer, 1975). They are commonly realized as non-recursive
systems, i.e. the difference equation has only input terms on the
right-hand side:

y_k = b_0 u_k + b_1 u_{k-1} + ... + b_m u_{k-m},   (14)

but note that recursive realization is also possible, an example
being the common recursive realization of a moving average
filter. In a control system context, FIR filters may appear as
subsystems for filtering purposes. They may also be used directly
as controllers in certain settings (Fromme and Haverland, 1983;
Widrow and Walach, 1983).
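To illustrate the moving-average remark (my own sketch, not from the survey), the same FIR filter can be realized non-recursively as in (14) or recursively with a single accumulator; both produce identical outputs:

```python
from collections import deque

def make_fir_average(n):
    """Non-recursive form (14) with all coefficients b_i = 1/n."""
    buf = deque([0.0] * n, maxlen=n)

    def step(u):
        buf.append(u)
        return sum(buf) / n
    return step

def make_recursive_average(n):
    """Recursive realization of the same filter:
    y_k = y_{k-1} + (u_k - u_{k-n})/n, one add and one subtract per sample."""
    buf = deque([0.0] * n, maxlen=n)
    acc = [0.0]

    def step(u):
        oldest = buf[0]      # u_{k-n}, dropped automatically by append below
        buf.append(u)
        acc[0] += (u - oldest) / n
        return acc[0]
    return step
```

The recursive form trades the n multiply-accumulates of (14) for O(1) work per sample, at the price of being a recursive system as noted in the text.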
2.1.6. Non-linearities. All the controllers or subsystems discussed
above only require simple scalar product operations involving
coefficient vectors (matrix rows) and data (signal) vectors.
This computation of sums of products, which requires only
multiplications and additions, is the type of operation predominant
in general digital signal processing, for instance in digital
filtering or correlation computations. Thus processor architectures
suited to the strong market of general digital signal
processing are usually also well suited to controller implementation
(see Section 3).
Practical control systems, however, frequently need extensions
of the simple linear time-invariant systems discussed. Examples
are: compensation of state-dependent non-viscous friction in
mechanical systems (Henrichfreise, 1985; Walrath, 1984), non-linear
command or reference generators (Broussard et al., 1985),
compensation of kinematic non-linearities in robot control, or
adaptive mechanisms (Astrom, 1983). Computations introducing
operations such as decision making, divisions, table lookup,
interpolation, polynomial evaluation, and computation of non-linear
functions may give rise to problems with processors which
are intended for linear digital filtering.
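As an illustration of the table lookup plus interpolation just mentioned (my own sketch; the table values are hypothetical), a non-linear function can be stored on a uniform grid so that the index is obtained with one multiply instead of a search, which suits multiply-accumulate-oriented hardware:

```python
def make_uniform_lookup(x0, dx, ys):
    """Linear interpolation in a table sampled at x0, x0 + dx, ...
    A uniform grid lets the table index be computed directly from x."""
    def f(x):
        t = (x - x0) / dx
        i = max(0, min(len(ys) - 2, int(t)))  # clamp index to table range
        frac = t - i                          # fractional position in cell
        return ys[i] + frac * (ys[i + 1] - ys[i])
    return f
```

For example, a table of squares at x = 0, 1, 2, 3 interpolates f(1.5) to 2.5, the chord value between 1 and 4.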
2.2. Implications of computational delay. In the difference
equations discussed above the subscript k of input or output
variables expresses time instants where sampling or output
occurs, respectively. Thus u_k in (6) means u(kT) and y_k means
y(kT). Sampling and output must therefore be performed exactly
simultaneously. Note that the state vector may have a meaning
with respect to time instants too, as in the case of (1) or (3),
where x̂ is the observed plant state, but this depends on the
design method which yielded the controller. It is in any case
irrelevant at what time instant the state vector is computed, as
long as it is computed before it has to be used for the
computation of the output.
If there is no direct feedthrough from input to output and
there is no current term in the state update equation (as in (1),
(2), (5) and (6) if D = 0) then the output can be readily computed
before the input is sampled, and sampling and output can be
simultaneous in reality. Otherwise, there is inevitable delay
because, to take (6) for instance, D u_k at least has to be computed
and added to C x_k, which might already have been computed
because x_k does not depend on u_k. If C x_k is precomputed, delay
is minimized. The control processor program can easily be
organized that way (Franklin and Powell, 1980; Hanselmann,
1982; Astrom and Wittenmark, 1984). Similar arguments apply
to the observer/estimator described by (3) with (2), where,
in order to compute u_{p,k}, y_{p,k} must be available and the
computational effort is at least the addition of K y_{p,k} to the
precomputable part of x̂_k, and finally the computation of u_{p,k}.
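The precomputation argument can be sketched as follows (my own illustration, scalar form of (6)); between the sampling instant and the output only the product d u_k remains to be computed:

```python
def minimal_delay_step(a, b, c, d, x, sample, output):
    """One controller cycle organized for minimal computational delay:
    c*x_k is precomputed before the sampling instant; at the instant only
    d*u_k is computed and added, and the output is written immediately.
    The state update runs afterwards, before the next sampling instant."""
    pre = c * x           # precomputable part C x_k (x_k does not depend on u_k)
    u = sample()          # sampling instant
    output(pre + d * u)   # delay reduced to roughly one multiply-add
    return a * x + b * u  # state update for the next cycle
```

Here `sample` and `output` stand in for the A/D read and D/A write routines of a real control processor.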

If the minimized delay is not negligible, it should be taken
into account in the controller design. How this is done depends
on the design method. With classical Bode diagram design, for
instance, the delay introduces additional negative phase which
could be assigned to the plant for this purpose. With direct
discrete design the delay may also be assigned to the plant, and
design is then based on a discrete description of the plant with
input delay. This description is computed either in the z-domain
using modified z-transforms (Franklin and Powell, 1980; Astrom
and Wittenmark, 1984; Phillips and Nagle, 1984) or in state
space (Franklin and Powell, 1980; Astrom and Wittenmark,
1984; Wittenmark, 1985). In all these cases the delay shows up
in the design of the controller.


[Fig. 3 shows two timing diagrams over the sampling interval (k-1)T to kT: (a) output of the control signal u_p at kT, with the latest instant for measuring the plant output y_p marked shortly before kT; (b) measurement of y_p at kT, with the output of u_p following after the computational delay.]

FIG. 3. Computational delay.

With observers/estimators there are more elegant possibilities
which compensate for the delay. In the approach given by
Kwakernaak and Sivan (1972), the time grid is fixed to the time
when output of the control signal u_{p,k} to the plant occurs, i.e.
u_{p,k} means u_p(kT) (Fig. 3a). With the requirement of simultaneous
sampling and output, the latest measurement usable to compute
u_p(kT) would be y_p((k - 1)T). If skewed (non-simultaneous)
sampling were used, the latest measurement could however
preferably be y_p((k - 1)T + δ), where δ = T - t_c and t_c means
the computational delay. Thus an observer/estimator design
based on a plant description with output y_p(kT + δ) instead of
y_p(kT) would compensate for the computational delay.
In the approach given by Meisinger and Lange (1976), the
time grid is fixed to the sampling of y_p(kT) but the computation
of u_{p,k} is based on a predicted plant state x̂(kT + t_c) (Fig. 3b).
The prediction is easily incorporated into the observer/estimator
equations with no additional computational overhead. Similar
ideas are used by Mita (1985). Meisinger and Lange's approach
appears different from that of Kwakernaak and Sivan, and no
reference to the latter is given. In fact, the equations describing
the estimator can be shown to be equivalent. The difference is
that Meisinger and Lange express the estimator gain matrix in
terms of the "no delay" gain matrix assumed to be computed
first.
The observation that direct feedthrough terms, or current
terms which map into direct feedthrough, cannot be
implemented exactly with finite-speed processors, has led to the
exclusion of such systems in the whole work of Moroney et al.
(1980, 1981, 1983) and Moroney (1983). However, it seems
reasonable not to exclude such systems as models, firstly for
cases where delay can indeed be neglected, secondly for cases
where delay is assigned to the plant during design, and finally
because such systems may be series-connected to others which
do not have direct feedthrough, so that the input and output
operations of the series connection visible from outside may
well occur at the correct time instants.
2.3. Discretization of continuous controllers.
2.3.1. Motivation. Although the common design methods are
available in discrete form, it is quite common to carry out
continuous design first, so that discretization can be assigned
to the implementation task. Discretization of continuous designs
is sometimes ruled out as being inefficient with respect to
necessary sampling rates, giving up some possibilities present
only in discrete design (such as deadbeat behaviour), and being
simply imprecise because discretized control never behaves like
the continuous design. Experience shows however that it is far
from uncommon for none of these arguments to be of significant
relevance in practice, and there may be several reasons why the
indirect way via continuous design may be the better choice.
One possible reason is that in order to exploit the exactness
of discrete design there must be early decisions on sampling

[Fig. 4 classifies the discretization methods into an "isolated" group (transform (s -> z) substitution; simulation; expansion; transition matrix; frequency response matching) and a closed-loop group (state matching).]

FIG. 4. Discretization methods.

The bilinear transform is widely in use, and tests on numerical
examples (Katz, 1981; Hanselmann, 1984) indicate that this is
not a bad choice. It is also quite simple to formulate this method
in state space for multivariable systems. Given the continuous
system

x' = A_c x + B_c u
y  = C x + D u,                             (15)

the discrete system is of the form of (8) (Haberland and Rao,
1973; Hanselmann, 1984), with

A = (I - (T/2)A_c)^{-1} (I + (T/2)A_c)
B_1 = B_0 = (T/2)(I - (T/2)A_c)^{-1} B_c    (16)
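For a first-order (scalar) system the state-space bilinear transform reduces to simple formulas; this sketch (my own, following the trapezoidal-integration view of Tustin's method) also demonstrates the stability-preservation property:

```python
def tustin_discretize(ac, bc, T):
    """Scalar bilinear (Tustin) discretization of x' = ac x + bc u into
    form (8): A = (1 + T ac/2)/(1 - T ac/2), B1 = B0 = (T/2) bc/(1 - T ac/2).
    Follows from trapezoidal integration of the continuous state equation."""
    den = 1.0 - 0.5 * T * ac
    A = (1.0 + 0.5 * T * ac) / den
    B = 0.5 * T * bc / den
    return A, B, B   # (A, B1, B0)
```

For any stable pole ac < 0 and any T > 0, |A| < 1, so no unstable z-pole can be generated; for small T, A is close to exp(ac T), consistent with the first-order Pade interpretation.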

frequency, and possible sampling skew (non-simultaneous sampling
of all inputs) or computational delay must be known in
advance. But all this depends on what shows up to be computed,
what the numerical data are, and which processor and which
data format will be used. If inadequate estimates have been used
initially, the control system has to be redesigned.
2.3.2. Methods. There are so many methods available for
translating a linear time-invariant controller into a discrete
"equivalent" system (which in fact can never be completely equivalent),
that this topic could be the subject of a survey in itself. In
the following, not much more than a classification and a
bibliography are given, plus a short discussion of two methods.
The discretization methods available can be classified as
indicated in Fig. 4. There are two main groups. The first
comprises methods which do not take into account the fact that
the controller will be connected to the plant and will operate
in closed loop. At most there are a few assumptions about the
input signals. In the second group, discretization is carried out
considering the closed-loop use of the controller.
Among the contributions to the second group are those
published by Kuo (1980), Kuo et al. (1973), Yackel et al. (1974),
Singh et al. (1974) and Miller (1985). They consider the redesign
of continuous-system state feedback and reference feedforward
matrices for the discrete case, with the objective of matching the
state or parts of the state of the discrete control system to those
of the continuous system in closed-loop operation.
Also connected with state feedback and reference feedforward
matrix redesign is another approach given by Kuo et al. (1973)
and Kuo and Peterson (1973) (also in Kuo, 1980), based on a
Taylor expansion of those matrices about T = 0. These methods
have been reviewed and further discussed by Kleinman and Rao
(1977), who also give a so-called average gain method with the
objective of approximating control signals instead of states.
Closed-loop redesign is also the objective of the methods
proposed by Rattan and Yeh (1978), Rattan (1981, 1982, 1984)
and Shieh et al. (1982), which are based on frequency response
curve fitting.
The group of methods for "isolated" discretization, where
only the system to be discretized is considered, without taking
its later connection to the other systems into account, is the
largest. The most widely described methods within this group
assume that the s-transfer function G(s) of the continuous system
is given. With the most prominent method, the so-called bilinear
transform (see for instance Oppenheim and Willsky, 1983), the
recipe is: substitute s by 2(z - 1)/[T(z + 1)]. A z-transfer function
G_d(z) is thus achieved. This transformation is also known as
Tustin's method and relates to discrete integration, i.e. to
simulation. It has the nice property of never generating unstable
z-poles as long as the s-poles are stable. Another property is
that the frequency response of G(s) is exactly replicated in the
frequency response of the discrete system (more precisely
G_d(e^{jωT}), i.e. without hold device), but unfortunately with a
warped frequency axis. The response of the continuous system
shrinks to the range 0 ... ω_s/2, where ω_s is the angular sampling
frequency.

where I means the identity matrix. Note that A is a first-order
Pade approximation of the transition matrix exp(A_c T).
The formulation in state space directly translates into a simple
computer program. The calculations based on the transfer
functions can however also be mechanized (Ahmed and Natarajan,
1983; Bose, 1983; Pei, 1985). Bilinear transformation is not
the only method from the transform or substitution class. More
can be found for instance in Katz (1981) and Franklin and
Powell (1980), along with some comparisons by examples, and
in Rosko (1972) and Smith (1977). A "small T" root and frequency
response error (continuous/discrete) analysis for the bilinear
transformation is given by Howe (1982).
Since determination of a discrete system equivalent to a
continuous one is related to simulation, methods from that field
may also be of interest here. In fact, the bilinear transformation
already corresponds to a simulation of an equivalent continuous
state-space system via implicit trapezoidal integration. Hanselmann
(1984) also derived discrete systems from Heun's simulation
method and one of the Runge-Kutta type and compared
them to other methods. Experience showed no general advantage
over, for instance, bilinear transformation and over the ramp-invariance
method described below. One method which seems
very interesting and also has some connection with simulation
has recently been published by Forsythe (1983, 1985). It is given
for SISO systems and is based on expressing the samples of the
input and output variables via Taylor series expansion of the
continuous functions. Results are shown which are clearly
superior to those of the bilinear transform in a large frequency
range, although at the expense of increased gain in the high
frequency region. This could be dangerous in a closed-loop
control system.
The last class of methods is based on assumptions on test
input signals applied both to the continuous system and to the
discrete one to be determined. The objective is to achieve
agreement of both outputs at sampling instants. Assumption of
a step input leads to a step-invariant, and of a ramp input to a
ramp-invariant discretization, occasionally called "zero-order"
and "first-order hold equivalence" methods, respectively. The
step-invariant discretization is just what has to be performed in
order to describe a continuous plant driven by a zero-order
hold (ZOH). A table of step-invariant transfer functions can be
found in Neuman and Baradello (1979). The ramp-invariant
discretization is also easy to achieve, either via transfer function
calculation, i.e.

G_d(z) = Y_d(z)/R(z) = ((z - 1)^2/(T z)) Z{G(s)/s^2},          (17)

or in state space. The assumption of a ramp input between
sampling instants leads to the state-space equation solution
(continuous system (15) assumed)

x_{k+1} = exp(A_c T) x_k + ∫_0^T exp[A_c(T - τ)] B_c [u_k + (u_{k+1} - u_k) τ/T] dτ
        = A x_k + H u_k + H_1 (u_{k+1} - u_k)/T
        = A x_k + B_1 u_{k+1} + B_0 u_k.                        (18)

3. Implementation hardware

The transition matrix and the input matrices H and H_1 can
be computed simultaneously via a single transition matrix
calculation (Hanselmann, 1984), but also by other means. The
power series expression of exp(A_c T), for instance, which is
sometimes used as a basis for computation of A and H, also
leads to algorithms for computing H_1. Schittke and Dettinger
(1975) used this (unfortunately there is an error in the series
given in their equation (15)). The approach given by Kallstrom
(1973) for computation of A and H based on one single series
calculation can also be extended*. The series to be summed is

ψ = Σ_{i=0}^∞ (A_c T)^i/(i + 2)!;                               (19)

then

A   = I + A_c T + T^2 A_c^2 ψ,                                  (20)
H   = (T I + A_c T^2 ψ) B_c,                                    (21)
H_1 = T^2 ψ B_c.                                                (22)
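In the scalar case the series (19)-(22) are easy to verify numerically (my own sketch; the truncation at 30 terms is an arbitrary choice that is far more than sufficient here):

```python
import math

def ramp_invariant_scalar(ac, bc, T, terms=30):
    """Scalar version of (19)-(22): psi = sum_{i>=0} (ac T)^i / (i+2)!,
    A = 1 + ac T + (ac T)^2 psi, H = (T + ac T^2 psi) bc, H1 = T^2 psi bc.
    Per (18), B1 = H1/T and B0 = H - H1/T then give the form-(8) matrices."""
    psi = sum((ac * T) ** i / math.factorial(i + 2) for i in range(terms))
    A = 1.0 + ac * T + (ac * T) ** 2 * psi
    H = (T + ac * T * T * psi) * bc
    H1 = T * T * psi * bc
    return A, H, H1
```

The checks below confirm that A reproduces exp(ac T) and that H equals the exact step-invariant input term (exp(ac T) - 1) bc / ac.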

Definition and derivation of the ramp-invariance method in
state space can already be found in a paper by Haberland
and Rao (1973). The expressions given there for B_1 and B_0 can
be derived from (20)-(22). A small-T study concerning scalar
transfer function zeros generated via impulse-, step-, or ramp-invariance
has been carried out by Bondarko (1984), whose
step-invariance results relate to those of Astrom et al. (1984).
The author's experiences with the ramp-invariance method
are very good, particularly in critical cases where there are
continuous system eigenfrequencies near ω_s/2. The step-invariant
results, however, showed bad frequency responses
compared to those of the continuous systems in practically
every application. Sampling frequency could have been lowered
by a factor of five using ramp invariance instead of step
invariance with a high-order controller for a hydraulic system
(Hanselmann, 1984). So the unsatisfactory experiences with
discretized continuous designs, compared to discrete designs,
which are sometimes reported may well be due to inappropriate
discretization.
2.3.3. Influence of zero-order hold. A general problem with
discretized controllers is that the ZOH at the outputs introduces
considerable phase lag. Thus discretized controller frequency
responses are likely either to show more negative phase compared
to the continuous controller, or to show increased gain
in the higher frequency region, which stems from the attempt
to lift phase. Stability and damping problems could occur. In
applications carried out by the author, sampling frequency had
to be from a factor of 3 to 10 higher than crossover frequency,
in order to preserve reasonably the behaviour of the continuous
system. From aliasing and roughness-of-control-signal considerations,
which often also dictate sampling frequencies in that
range, such a ratio does not seem to be excessive.
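This rule of thumb can be checked with the usual approximation that a ZOH behaves like a pure delay of T/2, contributing about -ωT/2 of phase (my own sketch, not a calculation from the survey):

```python
import math

def zoh_phase_lag_deg(w, T):
    """Approximate phase lag of a zero-order hold at angular frequency w:
    the ZOH acts roughly like a delay of T/2, i.e. phase = -w*T/2 rad."""
    return -math.degrees(w * T / 2.0)

def lag_at_crossover(ratio):
    """Phase lag at the crossover frequency when sampling `ratio` times
    faster than crossover, so that w_c * T = 2*pi/ratio."""
    return zoh_phase_lag_deg(2.0 * math.pi, 1.0 / ratio)
```

Sampling 10 times above crossover costs about 18 degrees of phase and 3 times about 60 degrees, which is consistent with the factor-of-3-to-10 range reported in the text.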
With some of the discretization methods the phase lag of the
ZOH can be taken into account directly. This applies naturally
to the closed-loop discretization method class. The "isolated
discretization" method of Forsythe (1985) is also able to do
this, and furthermore to compensate somewhat for possible
computational delay. The price of delay compensation, however,
is again an increased high-frequency gain. The same applies to
what might be called "post-filters", which are digital filters
connected between the controller difference equation output
and the ZOH. Such filters have been described by Yekutiel
(1980) and Beliczynski and Kozinski (1984). They lift phase but
must be handled with care unless a rapid gain rolloff beyond
the crossover frequency is guaranteed.

* Thanks to Prof. K.-J. Astrom who brought this to my
attention.

The author is well aware of the fact that any discussion of
hardware is doomed to be obsolete within a very short time. So
this survey gives only a snapshot of current implementation
hardware, but there are some points which might be relevant
for a few years.
3.1. Spectrum of current hardware. The range of possible
hardware for implementation of algorithms as discussed in
Section 2 is very broad. A rough overview is given in Table 1.
3.1.1. Special machines for rapid experimenting. At the upper end
in terms of cost as well as computational power there are
high-speed computers specifically designed for real-time data
acquisition and computation. The AD-10 from Applied Dynamics
International, Ann Arbor, Michigan, is capable of 30 million
arithmetic operations s⁻¹ and 10 kHz data acquisition on 32 A/D
channels simultaneously (Powers, 1985; Kerckhoffs et al., 1985),
but costs are in the US $200,000 range. Advanced versions
recently available are also capable of floating-point computation
(Fadden, 1984), but at even greater cost. Such systems are
attractive for experimental work in the early stages of a control
system design and implementation project, in order to obtain
feedback from real experiments as early as possible, and as easily
as possible, with the convenience of floating-point arithmetic,
flexible programming, and plenty of speed. Common minicomputers
backed up with array processors may also be used with
similar power but also at high cost (Jacklin et al., 1985). Without
array processors, the speed of minicomputers is usually rather
modest. A less costly system which is marketed specifically for
experimental linear control system evaluation is the PC 1000
from Systolic Systems Inc., San Jose, California, starting at US
$25,000. It is rated at 200 ns multiply as well as addition
time with 32-bit floating-point numbers, and 2 kHz maximum
sampling rate. Controllers of type (6) with up to 32 states and
16 inputs and outputs can be accommodated under the control
of a personal host computer with download facility.
3.1.2. Fast floating-point chips. Roughly the same computation
speed as described above will be possible with systems
based on so-called word-slice chip sets from Advanced Micro
Devices (Flaherty, 1985; Quong and Perlman, 1984) and Analog
Devices (Windsor, 1985; Taetow, 1984). They evolved from
the more traditional bit-slice concepts and now comprise all
necessary building blocks to develop microprogrammed high-speed
signal processing systems with just a few chips, among
which are special purpose arithmetic chips, i.e. separate chips
solely for accumulating or multiplying floating-point numbers.
Floating-point computation in the same speed range as with
word-slice devices is possible using arithmetic chips from Weitek
Corporation. Separate 32-bit floating-point adder and multiplier
chips along with 32-word register file devices form a powerful
numerical processor. Control of the devices must be derived
from microcode memory and control logic. About 2 MFLOPs
(mega floating-point operations per second) can be achieved in
low-latency flowthrough mode, which means that the result of
a single arithmetic operation is available as soon as possible. If
pipelining can be used 10 MFLOPs are achievable, but results
are then not immediately usable in subsequent operations.
Another two-chip set for floating-point arithmetic is available
from TRW (Eldon and Winter, 1983), which is, however,
restricted to a 16-bit mantissa, 6-bit exponent format. Note that
division is not as directly performed as accumulation (addition
and subtraction) or multiplication with these chips, nor is it
with the above-mentioned word-slice devices. Division must be
TABLE 1. IMPLEMENTATION HARDWARE

High speed:
  Experimental use in the laboratory, high cost: AD-10; minicomp. and array processors; Systolic Systems PC 1000
  Experimental use in the laboratory, medium cost: word-slice floating-point chips
  Dedicated, low volume: VLSI signal processors
  Dedicated, high volume: custom VLSI

Medium speed:
  Experimental use in the laboratory, medium cost: minicomputer
  Experimental use in the laboratory, low cost: microprocessor with numerical coprocessors
  Dedicated, low volume: microcontrollers

performed using table-lookup methods to yield rough estimates
which are then improved via additional operations, or it is
performed totally iteratively. This means that division, and any
other function computation involving division, is performed
much more slowly than the elementary scalar product operation
acc := acc + coefficient · variable. Within the Weitek register
file there is an integrated lookup table for computing 1/x and
√x.
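The seed-plus-refinement scheme for division can be sketched as follows (my own illustration; the 8-entry table size is an arbitrary choice). Each Newton-Raphson step y <- y (2 - x y) uses only multiplies and subtractions and roughly squares the relative error:

```python
import math

# Hypothetical 8-entry reciprocal seed table over the mantissa range [0.5, 1),
# holding 1/midpoint of each sub-interval.
_SEED = [1.0 / (0.5 + (i + 0.5) / 16.0) for i in range(8)]

def reciprocal(x, iters=3):
    """Approximate 1/x (x > 0) with a table-lookup seed refined by
    Newton-Raphson iterations, using no division at run time."""
    f, e = math.frexp(x)                 # x = f * 2**e with f in [0.5, 1)
    idx = min(7, int((f - 0.5) * 16.0))  # coarse table index from the mantissa
    y = _SEED[idx] * 2.0 ** (-e)         # seed approximation of 1/x
    for _ in range(iters):
        y = y * (2.0 - x * y)            # quadratically convergent refinement
    return y
```

With a seed error of a few percent, three refinement steps already bring the result close to full double precision, which illustrates why such iterative division is nevertheless much slower than one multiply-accumulate.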

3.1.3. Microprocessors. Easy implementation and testing of
controllers at much lower cost and effort is of course possible
using standard personal computers or microcomputer board-level
systems, equipped with process interfaces, and speeded up
by numerical coprocessors, such as the Intel 80286/80287 or the
National Semiconductor NS 32016/32081 combinations. Such
systems are easy to program in high-level languages and deliver
medium speed (see Subsection 3.3), sufficient for implementing
even complex process control in many cases, but frequently not
fast enough for control of fast systems such as mechanical ones.
Attaching fast hardware multipliers to general microprocessors
may also seem to be an alternative. They are available in
abundance from many companies, up to (24 x 24)-bit fixed-point
format at 200 ns multiply speed or (16 x 16)-bit in 35 ns.
But data transfer from and to such a chip via a microprocessor
is much too slow, so the multiplier would be idle most of the
time. Avoiding this would necessitate not only using a hardware
multiplier, but surrounding it with a lot of hardware to achieve
more independent operation on local data memory, under local
sequencing control.
3.1.4. Microcontrollers. The term microcontroller is used commonly
for single-chip microprocessors which are designed to be
used as dedicated processors. But control is meant here in a
much broader sense than considered in this paper, including
sequencing control, pulse-width or pulse-frequency modulation
control, and so on. Microcontrollers stand somewhere between
traditional single-chip microcomputers and general purpose
microprocessors. Three powerful 16-bit devices shall be named
here: the Motorola MK 68200, the Nippon Electric NEC µPD
78312, and the Intel 8096. Typically, the arithmetic computation
speed is not much higher than with general 16/32-bit microprocessors
for fixed-point arithmetic. But there are features like on-chip
AD-converters or timers and modulators which make such
processors attractive for developing products. It is interesting
to note that the 8096 evolved from a chip originally designed
according to requirements specifications made by Ford Motor
Company for control applications in an automobile (Powers,
1985; Breitzman, 1985; Simmers and Arnett, 1985).

3.1.5. Signal processors. Very attractive computation speed is
achieved with a number of VLSI signal processors at microprocessor-level
cost (Hanselmann and Loges, 1983, 1984; Hanselmann,
1986). Present devices of that kind that seem to be useful
for control implementation and are available to the public are
the Nippon Electric NEC 7720 (Nishitani et al., 1981), the Texas
Instruments TMS 32010 (McDonough et al., 1982), the Fujitsu
MB8764 (Gambe et al., 1983), the STC DSP128 (Pickvance,
1985), and the Texas Instruments TMS 32020 (Magar et al.,
1985; Essig et al., 1986). Some descriptions can also be found in
Quarmby (1984), Marrin (1985), and of some recently announced
processors in Marrin (1986).


The signal processors mentioned are off-the-shelf products.
The class of only mask-programmable signal processors has
been excluded; they are of course not useful for the average
control implementation task. There is great activity in the
development of signal processors. Several companies have
announced such devices.
For medium- to high-volume applications, custom chips may
be the choice. Custom design is advancing in supplying quite
complex building blocks such as multipliers, arithmetic units
and memory (Cole, 1985). Furthermore, there is considerable
effort towards fully automated chip design (Cappello, 1984).
Pope et al. (1984) and Rabaey et al. (1985) for instance describe
a silicon compiler which starts with some high-level descriptions
of what the signal processor chip is expected to perform. The
software then chooses optimal parameters of a parameterized
architecture and finally outputs a complete chip layout. Combining
building blocks into a freely designed architecture is another
approach (Olesner et al., 1986).
VLSI signal processors make implementation of non-trivial
controllers at high sampling rate feasible at reasonable cost,
and particularly the TMS 32010 has already been used in many
control applications, as described for example by Slivinski and
Bominski (1985), Kanade and Schmitz (1985), and Hanselmann
(1986). The power of signal processors is due to their architecture,
not to exotic silicon process technology. It may therefore be
interesting to have some general discussion of architectural
features in the next subsection.
3.2. Architectural issues. When a chip or chip set is to be
selected for controller implementation, there are many criteria
which might be relevant. Their priority depends mostly on the
type of application intended. Building a tool for flexible lab
experimentation sets priorities other than looking for a medium-volume
dedicated industrial instrumentation system.

3.2.1. General considerations. How general purpose 16/32-bit
microprocessors, a typical microcontroller, and the current
VLSI signal processors meet some of the relevant criteria is
shown in Table 2 (for a survey of microprocessors see Gupta
and Toong, 1983, 1984). The 8096 has been chosen as representative
of a trend in microcontrollers. Note the amount of input/output
support, right up to multi-channel on-chip AD-conversion.
Microcontrollers are particularly well suited to industrial applications,
where control of the type discussed in Section 2
is frequently only one task among many others, including
sequencing, complex timing, interrupt processing and communication.
Computing speed is however not as high as with signal
processors. Apart from the special input/output features, the
architecture of the 8096 is much like that of traditional general
microprocessors, with the exception of an increased number of
on-chip registers forming a so-called register file. There are 232
bytes free to the user to be referenced as byte, word or double-word
registers. This is an important feature, because such an
on-chip register file can be accessed more quickly than external
memory. It is large enough to carry out large portions of the
task locally and also helps speed up context switching during
interrupt processing.
When as many functions as a microcontroller has are integrated
on a single chip, something must be sacrificed in
comparison with general purpose 16/32-bit microprocessors.
One of the features of the latter, missing in a microcontroller,
151

Survey Paper
TABLE 2. PROCESSOR COMPARISON

Microprocesors

Microcontroller
8096

Signal
processors

slow,

slow

impossible or
slow

5-121"

71's

0.1-0.31"

ALU wordsize

16-32

Program address
space
Data address space

>IMB

16
64kB

1.5-128kB

same

same

Floating point

medium with

coprocessor
(,.,5-15I's
mull. or add)
Speed 16 x 16
fixed-point mull.

On-chip ADIDA

128-588 x 16 for
onchip RAM (external
ex:tension possible
with newest proe.)

4-8 AD chan!,els,
10 bit
pulse widtp. mod.,
timer, counter,
watchdog, ports

Special 1/0

On-chip ROM.

8kB
medh",m

all but one
25-150ns

7 sources internal,
J external

0-3

Memory speed
required

medium

Interrupts

flexible via
interrupt
control1ers

Multiprocessor
capability

ext. logic

Program language
support

best

some

Chip count

high

low

is the large address space, which is in fact not necessary for
control implementation. A small address space saves much room
on the chip, because the address space is reflected in all registers
and logic related to effective address computation, as well as in
the bus interface. Provisions for memory management can also
be dispensed with.
Other savings stem from reduced instruction decoding circuitry due to a simpler instruction set, excluding advanced highlevel language-like instructions as for instance incorporated in
the VAX-like instruction set of the NS 32016 general microprocessor. The reduction in instruction decoding and processing
logic due to a simpler instruction set is also a general line of
development with advanced supermicroprocessors for general
purposes. These processors are said to be of the RISe type
(RISe means reduced instruction set computer)(Wallich, 1985).
They are characterized by an instruction set which includes only
the most used instructions and by executing one instruction
every machine cycle. Operations are performed on operands in
large register files, not on memory, which is accessed only by
load and store operations. Among the digital VLSI signal
processors there are also some which are RISe-like. particularly
the TMS 32010 and the DSP 128 signal processors.

3.2.2. Specifics of signal processors. Whereas the 8096 microcontroller discussed above appears, from outside the chip, to be
of the traditional "von Neumann" computer type, internally
instructions go their own way separately from the data. It is a
well-known bottleneck of traditional processors (von Neumann
type) that instructions and data travel on the same bus.
This architecture must be abandoned if data transfer between
registers, data memory, and arithmetic units is t9 be fast for
maximum throughput. One step away is the so-calleq 'Harvard'

152

16-35

newest proc.
asm only in most
cases; high level
language support
for one, processor
medium to high

architecture. In this architecture the instruction bus is separated
from the data bus so that instruction fetch and data transfer do
not interfere with each other. Some signal processors exhibit
even more data paths. For illustration, a sketch of the core
architecture of a hypothetical but typical signal processor is
given in Fig. 5, showing the data manipulation part (instruction
bus and control unit are separate). There are two 16-bit data
buses, each connected to a block of data memory and to the
hardware multiplier inputs. Factors can thus be routed to the
multiplier without bus conflicts. The arithmetic/logic unit (ALU)
gets operands either from the accumulator, from memory, or
from the multiplier, converted to 32-bit where necessary. Typical
components are the shifters, particularly the barrel shifter. It
allows the shifting of an operand by multiple bits within a single
data transport operation.
Besides the multiple bus and data path structure, the most
significant difference between signal processors and general
microprocessors or microcontrollers is the integrated parallel
hardware multiplier. This multiplier produces a (16 x 16)-bit
product in every machine cycle (see discussion of speed in
Subsection 3.3), which is afterwards directly fed through the
ALU into the accumulator in the next cycle in order to perform
the basic operation

   acc := acc + coeff * variable.

With a hardware multiplier the multiplications no longer
dominate execution times as usual. They are as fast as additions
or logic operations. It is however important not only to have a
hardware multiplier, but also to have a powerful data path
structure. Otherwise the precious arithmetic units cannot be
kept busy all the time. Note that these components consume

large parts of the chip area (see photographs in Cushman, 1982).
With the TMS 32010 for instance, a scalar product computation
a = c^T x proceeds as follows:

   LTA x(i)
   MPY c(i)
   LTA x(i+1)
   MPY c(i+1)

where the LTA instruction loads one operand into one of the
multiplier's input registers, but at the same time performs
accumulation of the previously computed product. The MPY
instruction loads the second operand into the second multiplier
input register, and in the same cycle the multiplication is
performed, the result of which is accumulated during the next
LTA. The operands travel to the multiplier over a single data
bus, so loading takes two cycles. With processors having split
memory (as in Fig. 5) the coefficients (of c^T in the example) and
variables representing signals (x in the example) could be stored
separately and loaded simultaneously, so single-cycle operation
is possible. This can frequently be found in signal processor
architectures.

FIG. 5. Typical signal processor core.

The integrated hardware multiplier, along with an appropriate
data path structure connecting the arithmetic units (ALU,
multiplier, shifters) and memory, are the keys to the high speed
of VLSI signal processors. There are however quite a number
of miscellaneous features which also contribute to speed, mostly
by devoting hardware to tasks traditionally performed by
software. The VLSI signal processors are currently acknowledged
as attractive candidates for control implementation,
not only in the sense of Section 2. They are also well suited to
performing arithmetic subtasks as a slave to a general
microprocessor host within a control system (Schumacher and
Leonhard, 1983; Rojek and Wetzel, 1984; Leonhard, 1986). They
cannot however directly compete with microcontrollers in terms
of functionality.
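The LTA/MPY multiply-accumulate pattern for a scalar product
a = c^T x can be sketched in C; this is an illustrative sketch of
the arithmetic (Q15 fractional operands, full-precision 32-bit
accumulation, function name ours), not the processor's actual code:

```c
#include <stdint.h>

/* Fixed-point scalar product a = c^T x with 16-bit fractional (Q15)
 * operands: every (16 x 16)-bit product is kept at full 32-bit
 * precision in the accumulator, as the LTA/MPY pair does on-chip. */
int32_t scalar_product_q15(const int16_t *c, const int16_t *x, int n)
{
    int32_t acc = 0;                    /* 32-bit accumulator */
    for (int i = 0; i < n; i++)
        acc += (int32_t)c[i] * x[i];    /* one MPY plus one LTA per term */
    return acc;                         /* full-precision Q30 result */
}
```

Only when this Q30 sum is finally stored back as a 16-bit value is
precision given up (see Section 4).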

3.2.3. Arithmetic and data formats. A last important point of
discussion is the arithmetic data format supported by the
different processors. This point can be as crucial as speed. In
many cases floating-point arithmetic is desired, be it because
the dynamic range required is indeed large, or because the
implementer does not want to deal with the problems of fixed-point
arithmetic. With general microprocessors as well as
microcontrollers, floating-point arithmetic in a common format
(IEEE standard 754, 32-bit) is easy to achieve through subroutine
libraries or floating-point coprocessors, providing considerable
speed.
With present VLSI signal processors, however, floating-point
arithmetic is not easily achievable. There has been an effort to
perform floating-point arithmetic on a TMS 32010 (Blasco, 1983)
and on the TMS 32020 (Crowell, 1985), but speed results are
rather disappointing compared to general microprocessor/coprocessor
combinations. No effort to implement floating-point
arithmetic on the other fixed-point signal processors has been
reported. There is one VLSI signal processor, the Hitachi
HD61810 (Hagiwara et al., 1983), which is specifically designed
for a particular kind of floating-point arithmetic, but it is only
available with mask-programmed ROM, and floating-point
accuracy is limited by a (12 x 12)-bit multiplier. There are
some known developments of signal processors with full 32-bit
floating-point hardware on the chip (from Bell Labs, Nippon
Electric and Texas Instruments), but the first is not available to
the public, the second has just been announced, and the third
still seems to be in the design stage. Thus with present VLSI
signal processors one must deal with fixed-point arithmetic and
all the associated problems.
Within this group of fixed-point processors there are still
differences in the useful data formats, which stem from architecture
design decisions. The main differences are in the processing
of products from the multiplier, and in the format of the
accumulator. With the exception of the MB 8764, all processors
provide at least 32 bits for accumulation of (16 x 16)-bit
products, so that full precision is preserved until storage of a
final scalar product result (see Section 4). At this point rounding
or truncation is usually performed to obtain the most significant
16 bits of the result, although more precision is possible with
most processors, at the cost of more complicated code and
slower execution.

3.2.4. New architectures. In addition to the more conventional
architectures just discussed, there are other developments which
are already having an impact on signal processing and also
beginning to have one on control. The transputer concept
(Taylor, 1984), systolic architectures (Kung, 1984; Jover and
Kailath, 1986), and data flow processor concepts (Chong, 1984;
Hartimo et al., 1986) should be mentioned here.

3.3. Speed. Although there are usually many aspects of
processor selection other than speed, it is nevertheless often the
most pressing factor in controller implementation. This is
typical of the field of controlling mechanical devices via fast
electromechanical or servohydraulic actuators. Eigenfrequencies
from 100Hz up to 10kHz are not uncommon, and higher order
controllers are often necessary to cope with structural resonance
effects (see for example Slivinski and Dorninski, 1985; Kanade
and Schmitz, 1985; Hanselmann, 1986).
A speed comparison for general microprocessors for the task
of digital filter implementation, which is in many respects similar
to controller implementation, has been given by Nagle and
Nelson (1981), also published in Phillips and Nagle (1984) (note
that some of the programs originally published have been
corrected in the latter publication). Speed comparisons on
instruction and routine level using general data processing
benchmarks have been published by Gupta and Toong (1983)
and Toong and Gupta (1982).


TABLE 3. SAMPLING FREQUENCIES WITH AN EXAMPLE
CONTROLLER USING FIXED-POINT ARITHMETIC

Microprocessor               Clock (MHz)    fs (kHz)
8086                         8              <2
Z8000                        5              <2
68000                        10             <4
32016                        10             <5
TMS 32010 signal processor                  31

If floating-point arithmetic is required the current signal
processors can be excluded from the comparison. Their fixed-point
speed is about the same as the floating-point speed of the
fast word-slice and floating-point chips from Section 3.1.2. The
fastest chip set (using the AMD 29325) achieves computation
of a length n scalar product in about n * 200 ns, with full 32-bit
IEEE standard data format. This should be compared with
the often "thought to be fast" microprocessor/coprocessor
combinations such as the Intel 80286/80287 or the faster
National Semiconductor 32016/32081. The latter require about
n * 20 µs for the same thing (at 10 MHz clock, slave processor
protocol execution included, from measurements by the author).
Roughly the same speed as with microprocessor/coprocessor
combinations can be achieved with the microprocessors alone
if floating-point arithmetic is dispensed with. Compared to adds
and subtracts or miscellaneous operations, the fixed-point
multiplications are the most time-consuming ones. A typical
execution time is 6 µs for a 10 MHz 32016 processor (operands
in memory).
With VLSI signal processors the execution times of add/subtract
as well as multiply operations are in the range 100-300 ns.
Multiplication is no longer the most time-consuming operation.
Remember that in the example of a scalar product computation
with the TMS 32010 in Subsection 3.2 only two instructions
provide computation of a product (16 x 16 bit) and its accumulation
(32 bit). This takes just 400 ns.
In Table 3 a comparison is made between some microprocessors
and a signal processor (Hanselmann and Loges, 1984;
Hanselmann, 1986). The comparison is based on the implementation
of a 9th order controller with only one input and one
output. This controller arose in an industrial application with
a very fast electromechanical positioning system. Since with
general microprocessors the multiply operation mainly determines
the execution time, an upper bound for the achievable
sampling rate can be given based only on the total number of
multiplications. This upper bound is given in the rightmost
column. The controller had 33 non-zero and non-one
coefficients, i.e. 33 (16 x 16)-bit multiply operations had to be
performed per sampling interval. Since there are also additions
and data transfer operations to be performed, the sampling
frequency actually achievable would be somewhat lower. A
comparison of the estimate with actual experimental results was
carried out on a filter (from Phillips and Nagle, 1984) and on
the controller on which Table 3 is based. The target was a 68000
system running at 10 MHz, programmed in assembly language.
Actual sampling rates turned out to be about 50% of the upper
bound estimate in the filter case, where subroutines and loops
were used, and about 70% in the controller case with fast
subroutine- and loop-less code.
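The upper-bound estimate described above is a one-line calculation;
a sketch in C (function name ours):

```c
/* Upper bound on the achievable sampling frequency when the
 * multiply time dominates: fs_max = 1 / (n_mult * t_mult). */
double fs_upper_bound_hz(int n_mult, double t_mult_seconds)
{
    return 1.0 / (n_mult * t_mult_seconds);
}
```

With n_mult = 33 and the 6 µs multiply time quoted above for the
10 MHz 32016, this gives about 5.05 kHz, consistent with the
"<5" kHz entry in Table 3.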
The same controller was also implemented on a TMS 32010
signal processor and ran at 31 kHz sampling frequency. Thus
the signal processor is an order of magnitude faster. Roughly
the same applies to the other signal processors mentioned, and
this compares quite well with the 17 kHz achieved in what seems
to be a similar situation using an AD10 machine (Howe, 1982).

3.4. Processors with special architecture related to control. The
average control engineer still only has access to off-the-shelf
processors such as general purpose microprocessors or signal
processors. Custom processor design, however, is already beginning to play a part. In the general digital signal processing field
there is much going on in that direction (Cappello, 1984). Since
there are many relationships between general signal processing
and control these efforts also have an impact on this field.
Proposals for processor architectures directly related to control


were made years ago by Tabak and Lipovsky (1980) and recently
by Jaswa et al. (1985). Proposals for processors using non-standard
arithmetic such as that given by Lang (1984) or
Tan and McInnis (1982) should also be mentioned here; the
arithmetic issue will however be discussed in Section 4.
3.5. Interfacing to the plant. It is not the intention to go into
the details of analog and digital interfacing techniques here, but
there are some points which seem to be worth making.

A typical analog-to-digital interface consists of an analog
prefilter for each channel, a multiplexer if an analog-to-digital
converter (ADC) is to be shared among several inputs, a sample-hold
circuit, and the ADC. The purpose of prefilters is to avoid
aliasing due to spectral components of the input signal above
fs/2, where fs is the sampling frequency. Clearly such filters have
to be chosen carefully in control applications, because generally
the sharper the cutoff in the magnitude frequency response, the
more phase lag is introduced into the loop. For instance even a
simple second order low pass (damping 1/√2) designed to give
a mere 20 dB attenuation at fs/2 still introduces about 25 degrees
negative phase at 0.05 fs, where the crossover frequency might
be. Most often it will be necessary to include prefilter dynamics
in the control design (Åström and Wittenmark, 1984).
Measurement noise effects under variation of prefilter bandwidth
and sampling rate have been studied by Peled and Powell
(1978). The results are also given in Franklin and Powell (1980).
It is shown that good noise attenuation at quite low sampling
rates can be achieved with prefilter bandwidth only about twice
the control bandwidth, provided that appropriate digital lead
compensation is introduced to counteract the prefilter lag.
The purpose of a sample-hold (SH) circuit in front of an ADC
is to provide a constant input signal to the normal successive
approximation ADC during conversion (Davies, 1985; Jaeger,
1982). SH circuits in front of the multiplexer are necessary if
simultaneous sampling of several channels is desired, sharing
only a single ADC. It is always taken for granted that a
successive approximation ADC must be preceded by a SH.
Otherwise changes of the input signal during conversion may
be reflected in the binary conversion result. This is considered
to be erroneous since the value at the definite sampling time,
i.e. at start-of-conversion time, is expected to be converted. To
prevent such a change of the input signal, its amplitude and/or
frequency must be very low or a SH must be inserted (Jaeger,
1982; Shoreys, 1982). In the control application it may however
sometimes be reasonable to omit the SH, because in that case
changes in the input signal occurring during conversion influence
the conversion result so that it can be nearer to the input signal
value at the end of the conversion than in the SH case. Thus
reduced effective conversion delay can be expected. Experiments
by the author showed delay reduction of a factor of up to ~.
This factor is even higher if the acquisition time of the SH is
significant. The effective delay reduction is however dependent
on signal amplitude and spectrum, so some dynamic non-linearity
is introduced.
At the analog outputs of a controller there are commonly
digital-to-analog converters (DAC). Standard components are
fast enough for conversion time to be neglected. But spectrum
shaping may be of interest to smooth the staircase output
signal or, correspondingly, to remove the extra high frequency
components introduced by the zero-order hold device. Analog
reconstruction or low pass filters for that purpose are often used
in general digital signal processing or signal generation. With
control systems such output filters are introduced more reluctantly,
because of effects on system dynamics similar to those
of prefilters. Reducing actuator wear as well as preventing
excitation of high frequency structural modes in mechanical
systems might however require output filtering.
The last point to be discussed is the sequencing of inputs
and outputs. In the usual "near-theory" case there will be
simultaneous sampling and simultaneous output, possibly with
delay between the two, but non-simultaneous sampling may
be dictated by processor hardware, or may be deliberately
introduced to include the latest measurements in the computation.
For example, numerical processing of channel 1 input
may take considerable time before channel 2 is involved. It
might then be reasonable to delay sampling of the latter. The
same applies if ADCs with quite different speeds are used.

Non-simultaneous output may also occur for similar reasons.
Although such cases do not fit well to common control design
software, they do exist, and should be considered, at least in
simulation.
4. Arithmetics and their implications
Basically, there are several choices of arithmetic which could
be used to implement a controller. The most well known are
floating- and fixed-point binary arithmetic and they are the
ones supported by standard processors. Fixed-point arithmetic
is mainly used because of the high speed which can be achieved
with relatively simple arithmetic units. In speed-, space-, or cost-critical
applications fixed-point arithmetic will most likely be
chosen. In the following some main issues concerning fixed-point
arithmetic will be reviewed. Floating-point arithmetic
will be discussed only briefly, as well as some other possible
candidates. Unfortunately, the chapters on arithmetic found in
most texts on digital filters or digital control are quite rudimentary.
There are, however, some texts on computer arithmetic
covering the material needed to understand the principles and
problems of the mechanics of binary (and other types of)
arithmetic, such as Flores (1963), Hwang (1979), Waser and
Flynn (1982). Classical original papers on arithmetic as reprinted
in Swartzlander (1980) are also quite instructive.
4.1. Fixed-point arithmetic
4.1.1. Basics. The usual fixed-point data formats in digital signal
processing make use of two's complement representation. Here,
the decimal value of a number is

   r = -b_{l-1} 2^{l-1-B} + sum_{j=0}^{l-2} b_j 2^{j-B}     (23)

where the b_j, j = 0, ..., l-2, represent the binary digits, i.e. bits,
b_{l-1} carries the sign information, l is the total wordlength, and
B determines the location of the binary point. Two special cases
are B = 0, which means r is an integer, and B = l-1, which
means r is a fractional number. With floating-point number
representation B could be different for each number, whereas
with fixed-point numbers B is fixed throughout.
The reason why the representation (23) is called two's complement
becomes obvious in the important case of fractions,
where B = l-1 and thus

   r = -b_{l-1} + sum_{j=0}^{l-2} b_j 2^{j-(l-1)}.          (24)

If r < 0 but the binary representation bit pattern of |r| is known,
then the binary representation bit pattern of the positive two's
complement number 2 - |r| yields the b_j in (24) exactly, because
2 - |r| - 2 = -|r| = r and subtracting 2 has the same effect as
changing the weight of the b_{l-1} bit from +1 to -1, as is done
in (24). No bit is altered from the bit pattern representing 2 - |r|,
only the interpretation as decimal value is affected by changing
the weight of b_{l-1}. A 4-bit fractional two's complement representation
for example is
    0.875    0.111
    0.125    0.001
    0        0.000
   -0.125    1.111
   -1        1.000

and for instance the bit pattern for -0.125 is that of the binary
representation of 2 - 0.125 = 1.875. The example also illustrates
that the number range is unsymmetrical, i.e.

   -1.0 <= r <= 1.0 - 2^{-(l-1)}                            (25)

in the fraction case. An implication of this is that the product
(-1.0) * (-1.0) = +1.0 (all decimal) can never be represented. In
fact, processors usually yield the wrong result -1.0 in this case.
In consideration of the dynamic range of data in connection
with scaling (Section 6) the upper limit is simply approximated
by 1.0 to simplify discussion.
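These properties are easy to demonstrate in C for l = 16, the Q15
fraction format (function names are ours; the int16_t wrap-around
on overflow is the behaviour of common two's complement hardware,
strictly implementation-defined in C):

```c
#include <stdint.h>

/* 16-bit fractional two's complement (Q15): r = b / 32768,
 * with -1.0 <= r <= 1.0 - 2^-15, as in (25). */
int16_t to_q15(double r)    { return (int16_t)(r * 32768.0); }
double  from_q15(int16_t b) { return b / 32768.0; }

/* Q15 multiply without saturation; note that (-1.0) * (-1.0)
 * wraps to -1.0, the wrong result most fixed-point units deliver. */
int16_t q15_mul(int16_t a, int16_t b)
{
    return (int16_t)(((int32_t)a * b) >> 15);
}
```

For example, -0.125 maps to the bit pattern of 1.875 (here the
integer -4096), and q15_mul applied to two copies of -1.0 returns
-1.0 rather than the unrepresentable +1.0.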
The main advantage of two's complement representation
compared to other candidates lies in the simplicity of hardware
for adding or subtracting (Shaw, 1950). No distinctions need to
be made as to what the signs and magnitudes of operands are
and a single adder unit plus a simple complementer circuit is
sufficient to perform addition and subtraction.
Another advantage is that a sequence of two's complement
additions or subtractions, as encountered in the scalar product
computation, always produces the correct result as long as this
is in the number range. Intermediate overflows of partial sums
thus do not matter and can be ignored. This nice property
however is only useful if the result is indeed known to be in the
number range. Where it is not, it is even impossible to detect
this and to supply a maximum or minimum value. Sometimes
arithmetic units have an extended accumulator to accommodate
overflowing bits up to the moment where the result is going to
be stored away. Then a check can be made on whether the
result is valid or should be replaced by max or min values.
Although multiplication of two's complement numbers may
seem complicated at first due to the negative weight of b_{l-1}, it
can be carried out quite easily, for instance by performing
appropriate sign extensions on negative number representations,
or using Booth's algorithm or modifications of it (Booth, 1951;
MacSorley, 1961; Rubinfield, 1975; Cappellini). These algorithms
work for any combination of signs of the factors and at the
same time speed is gained as compared to the simple "shift and
add" technique. They are incorporated for instance within the
hardware multipliers of signal processors.
The basic idea behind such algorithms is based on the
observation that a string of ones in a binary number could be
replaced by only two non-zero digits, if negative weights (denoted
here by T, a digit of weight -1) are allowed, for example

   0111011110 → 01111000T0 → 1000T000T0.                    (26)

Thus if the leftmost binary pattern represents a factor in a
multiplication, the right-hand side of (26) shows that the product
can be computed with one addition and two subtractions, along
with appropriate shifts. This compares to seven additions with
shifts necessary originally. See for instance Peled and Liu (1976)
for a short but instructive discussion. Translation of a binary
number into this so-called canonical signed digit code (CSD)
can easily be mechanized in an iterative process.
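The iterative translation can be sketched in C (a sketch of the
standard non-adjacent-form recoding, not any particular chip's
circuit; the function name is ours):

```c
/* Convert a non-negative integer to canonical signed digit (CSD)
 * code: digit[i] in {-1, 0, +1}, x = sum digit[i] * 2^i, and no
 * two adjacent digits are non-zero. Returns the non-zero count. */
int to_csd(unsigned x, int digit[], int ndigits)
{
    int nonzero = 0;
    for (int i = 0; i < ndigits; i++) {
        if (x & 1u) {
            /* pick +1 or -1 so the remaining value becomes even:
             * d = 2 - (x mod 4), i.e. -1 inside a run of ones */
            int d = (x & 2u) ? -1 : 1;
            digit[i] = d;
            x -= (unsigned)d;   /* for d = -1 this adds 1 */
            nonzero++;
        } else {
            digit[i] = 0;
        }
        x >>= 1;
    }
    return nonzero;
}
```

For the pattern in (26), 0111011110 (decimal 478), this yields
non-zero digits +1 at position 9 and -1 at positions 5 and 1:
three shift-and-add/subtract steps instead of seven.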
Multiplication based on CSD code has also found a number
of applications in signal processors, which execute the shifts
and adds or subs under program control, saving a hardware
multiplier. A well-known chip of this kind was the now outdated
Intel 2920 signal processor (Hanselmann, 1982), but it is not the
only one. In the design of chip area-effective custom signal-processing
devices this kind of multiplication aroused interest (for
instance Schmidt, 1978) and still arouses interest (Gaszi and
Güllüoglu, 1983; Steinlechner et al., 1983; Pope et al., 1984).
The product of two l-bit numbers is a (2l-1)-bit number. This
is because there is a sign bit in each factor, but the product
needs only one. It is important to understand that a multiplier
device is not usually concerned with the binary point location.
It can multiply integers as well as fractions, because the interpretation
of the bit pattern of a number representation only takes
place when the (2l-1)-bit product is stored away; see Fig. 6 for
a 16-bit example. Here a 32-bit product register or accumulator
is assumed, and the product bit pattern is right justified. So if
the factor bit patterns were meant to represent integers, the
result (assuming it should be 16 bits long) would be found in
the lower (right) half of the register. If the factors were however
meant to represent fractional numbers, the result would be found
in bits 15 through 30. Note that with some processors the output
of the multiplier is aligned differently by hardware: to be specific,
a fraction result could be left justified so that the store operation
does not overlap into the lower half of the 2l-bit accumulator.
Note also that rounding could be performed before storing the
truncated 16-bit result away by adding, prior to storing, a 1

into the most significant of the bits which will be discarded, i.e.
into bit 14 in Fig. 6 in the fractional case.

FIG. 6. Fixed-point arithmetic product.

FIG. 7. Arithmetic overflow (with fractional numbers): (a) wrap-around
of the overflowing two's complement number (without saturation),
(b) saturation overflow.
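In C this rounding step looks as follows (Q15 factors, right-justified
Q30 product as in Fig. 6; function names are ours, and the sketch
ignores the corner case where the rounding addition itself could
overflow the 32-bit register):

```c
#include <stdint.h>

/* Reduce a right-justified Q30 product (bits 15..30 hold the Q15
 * result) to 16 bits: add a 1 into bit 14, the most significant
 * of the bits to be discarded, then take bits 15..30. */
int16_t round_q30_to_q15(int32_t product)
{
    return (int16_t)((product + (1L << 14)) >> 15);
}

/* Truncation instead simply discards bits 0..14. */
int16_t trunc_q30_to_q15(int32_t product)
{
    return (int16_t)(product >> 15);
}
```

For a product half a quantization step above zero, rounding moves
the stored result up by one unit where truncation would drop it.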

The reason for preferring fractions in digital signal processing
or control is that products, or accumulated products with scalar
product computation, can easily be cut down to the size of the
factors for storage and further processing by dropping the least
significant l-1 bits. Fractional fixed-point arithmetic thus trades
precision for number growth. Integer arithmetic on the other
hand would not allow for this. It is always exact, but at the price
of excessive risk of overflow. Overflow of course can also happen
with fractional arithmetic in add or subtract operations, but not
with multiplication. Sometimes implementors of digital filters
or controllers claim to use "integer arithmetic". A closer look
however shows that indeed processor instructions for integer
arithmetic are used, but there is "scaling", "shifting" and the
like. In fact, fraction arithmetic or something close to it is
actually performed.
4.1.2. Overflow. Because of the limited number range with usual
wordlength, say 16 bits, care must be taken that data, for
instance controller states, and coefficients fit well into this range.
Numbers should not exceed the range, but at the same time
should not be so small that the quantization has undesirable
effects. Controller scaling and realization structure selection are
the major means to achieve this. These are considered in Sections
5 and 6.
In the case of scalar product computation, which is the
basic operation with the controller equations, the partial sum
overflows can be ignored with two's complement arithmetic, as
mentioned above, provided the final result is guaranteed to be
in range, but there may be quantities to be computed during
evaluation of the controller equations which cannot be guaranteed
never to overflow, so there may be results not guaranteed
to be in range. This is very likely the case for controller outputs,
i.e. actuating signals, but may also apply to state variables.
Two's complement arithmetic then suffers from "wrap-around".
For instance adding binary 0.010 (0.25 decimal) and
0.110 (0.75 decimal) yields binary 1.000, which would erroneously
be interpreted as -1 decimal in two's complement fractional
arithmetic, whereas the saturated binary value 0.111 (4-bit
arithmetic assumed) would be preferable. This means that the
desired saturation (Fig. 7) must be provided by code (Loges,
1985).
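A saturating add in C (16-bit fractional words; the widening to
32 bits stands in for the overflow detection a processor does in
hardware, and the function name is ours):

```c
#include <stdint.h>

/* 16-bit two's complement addition with saturation: instead of
 * wrapping around, out-of-range sums clip to the extreme values,
 * e.g. 0.25 + 0.75 in Q15 gives 1.0 - 2^-15 rather than -1.0. */
int16_t sat_add16(int16_t a, int16_t b)
{
    int32_t s = (int32_t)a + b;      /* exact in 32 bits */
    if (s > INT16_MAX) return INT16_MAX;
    if (s < INT16_MIN) return INT16_MIN;
    return (int16_t)s;
}
```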
Signal processors sometimes incorporate optional saturation
hardware intended for such cases, but the problem is that
intermediate results, i.e. partial sums, are better not saturated,
because this would destroy an otherwise possibly non-overflowing
result. The decision about whether the final result is in
overflow, and with which sign, can only be made if there are
enough spare bits in the accumulator to the left of the leftmost
bit of the result to be stored away (Fig. 6). Perhaps the processor's
accumulator provides a few bits for this purpose, but they may
be too few for long scalar products, or the processor provides
none at all. Overflow processing then requires computation of
a downscaled scalar product which does not overflow, and a
rescaling operation preceded by overflow checking, i.e. first
a' := c'^T x is computed instead of a := c^T x, with c'^T = 2^-p c^T,
p >= 1. Then the content of the accumulator (a') is either left
shifted p positions under saturation, if the processor provides
for this at enough speed, or the result is read out of the
accumulator displaced by p bits; see Fig. 8 for an example. Both
operations are equivalent to multiplying a' by 2^p, correcting for
the downscaling of c^T.
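The overflow check amounts to testing whether the top p+1
accumulator bits are all equal before shifting left by p. A C
sketch (function name ours; it assumes arithmetic right shift of
negative values, as on common two's complement compilers, and
uses an unsigned cast to avoid the undefined left shift of a
negative operand):

```c
#include <stdint.h>

/* Undo the coefficient downscaling c'^T = 2^-p c^T by multiplying
 * the accumulator by 2^p with saturation: the shift is valid only
 * if the top p+1 bits of acc are all equal (all 0 or all 1). */
int32_t rescale_sat(int32_t acc, int p)
{
    int32_t top = acc >> (31 - p);        /* the bits that must agree */
    if (top != 0 && top != -1)
        return acc < 0 ? INT32_MIN : INT32_MAX;
    return (int32_t)((uint32_t)acc << p); /* safe left shift by p */
}
```

For p = 3 this is the "bits 27-31 must be equal, otherwise
saturate" condition of Fig. 8.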
4.1.3. Signal quantization. As discussed above. products are of
almost double length and thus must usually be cuI down to the
size ofthe factors. If the processor's accumulator is double length,
which is quite often the case, the products 8re accumulated in
futl length and the truncate or round operation is performed
only with the final result. In any case truncation or rounding
introduces a quantization error into the computations. Note
that additions and subtractions are exact as long as there are
no overflow problems.
Discussion of the influence of the quantization error was
always an issue in the digital filter field, and can be found in
most textbooks (for instance Oppenheim and Schafer, 1975),
but there were also early papers in the control field (Bertram,
1958; Slaughter, 1964; Johnson, 1965, 1966; Knowles and
Edwards, 1965a, 1965b, 1966; Lack, 1966; Curry, 1967), and the
issue is now also to some extent dealt with in digital control
textbooks, particularly in Katz (1981), Franklin and Powell
(1980), and Jacquot (1981). Quantization (of variables or signals;
for coefficients see Section 5) introduces three effects: bias, noise,
and limit cycles. Bias is introduced with truncation, because in
two's complement trunc(x) < x for x positive as well as negative.
It is better to use rounding, which is quite easily achieved, as
mentioned above.
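A small numeric sketch of this bias effect (illustrative only; step size and test grid are chosen arbitrarily):

```python
# Two's-complement truncation discards low bits, which is a floor operation
# on the underlying value, so trunc(x) <= x for positive *and* negative x --
# a systematic negative bias. Rounding adds half an LSB first, removing it.
import math

B = 4
q = 2.0**-B                           # quantization step, q = 2**-B

def trunc(x):
    return math.floor(x / q) * q      # drop the bits below the LSB

def rnd(x):
    return math.floor(x / q + 0.5) * q  # add q/2, then truncate

xs = [k / 1000 for k in range(-1000, 1001)]
bias_t = sum(trunc(x) - x for x in xs) / len(xs)
bias_r = sum(rnd(x) - x for x in xs) / len(xs)
print(bias_t, bias_r)   # bias_t is close to -q/2, bias_r close to 0
```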
4.1.3.1. Noise model. The noise model of quantization is
widely used and replaces the quantizer by a purely linear gain
block followed by an injection of an additive white noise
sequence, uncorrelated with the input. Two's complement arithmetic with truncation or rounding is assumed here, otherwise
there could be correlation (Claasen et al., 1975). If the quantization step is described by q, which is equal to 2^-B according to
(23), then the noise statistics are taken as follows:

variance    σ^2 = q^2/12

mean        μ = -q/2    for truncation                       (27)
            μ = 0       for rounding.

[Figure omitted: 32-bit accumulator bit layouts, annotated "result if not
downscaled and overflow-free" and "result if downscaled: bits 27...31 must
be equal, otherwise saturate".]
FIG. 8. Scalar product scaling example (fractional numbers,
p = 3, TMS 32010 processor).

Survey Paper
The expressions for σ^2 and μ follow from the assumption of
uniform quantization error distribution in the q interval. As has
been shown by Widrow (1956, 1961), Katzenelson (1962), Sripad
and Snyder (1977), and Boite (1983), this assumption is valid
under some conditions, particularly if the amplitudes of the
signal to be quantized are not too low. A Gaussian signal, for
instance, with variance a few times greater than q^2/12 already
renders the model very near to what has been evaluated
analytically and experimentally.
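A quick Monte-Carlo check of these statistics (an illustrative sketch; the signal model and sample count are arbitrary):

```python
# Rounding a signal whose amplitude is large relative to q yields errors
# with mean near 0 and variance near q**2/12, as stated in (27).
import math, random

rng = random.Random(1)
q = 2.0**-8
errs = [math.floor(x / q + 0.5) * q - x            # rounding error per sample
        for x in (rng.gauss(0.0, 0.1) for _ in range(50000))]
mean = sum(errs) / len(errs)
var = sum(e * e for e in errs) / len(errs)
print(mean, var, q * q / 12)                        # var is close to q**2/12
```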
This classical noise model is, however, based on the assumption of a continuous amplitude input to the quantizer. This is
the situation of AD-converter quantization, but within the
digital computations the quantizer input is not continuous. In
fact, with the rounding of the product of a coefficient and a variable
(state variable for instance) which is already quantized, the
model predicts noise variance less accurately (Halyo and McAlpine, 1971; Sjoding, 1973; Eckhardt, 1975; Boite, 1983). Then
there are peculiarities leading to coefficient-dependent noise
variance, and additionally correlation of error and signal may
become significant even with larger signal amplitudes (Barnes
et al., 1985).
The noise model of quantization can easily be exploited to
compute the total noise contribution to every variable of interest
within a control system using standard covariance computation
techniques for linear systems (Franklin and Powell, 1980;
Moroney et al., 1983). Transfer function-based variance computation is also possible, for instance via the simplified methods
given by Patney and Dutta Roy (1980) and Mitra et al. (1974).
4.1.3.2. Limit cycles. Of course, the noise model of quantization is only an approximation. If signal variations are small
compared to q, such as near the steady state of a control system,
the non-linear nature of quantization shows up. The result may
be limit cycles. Limit cycles observed in practical control systems
are often due to the quantization of AD- and DA-conversion,
but may as easily be caused by arithmetic. Since there are many
quantizers at the same time, analysis of limit cycles in a closed
loop control system is difficult. Much has been published on
limit cycles in digital filters operated open loop, but the results
are of little significance in a closed loop control system. This
has been clearly pointed out by Moroney (1983), who gives a
comprehensive discussion of the approaches in the digital filter
field and their relevance to control.
Some discussion of limit cycle existence for SISO systems and
some techniques for bounding their amplitudes (whether they
exist or not) are also given by Ahmed and Belanger (1984b).
The basic idea of such bounding techniques is to exploit the
boundedness of the quantization errors and to check which
signal amplitudes can be generated from those error sources.
Absolute (Long and Trick, 1983) as well as rms (Sandberg and
Kaiser, 1972) bounds, partly exploiting the periodicity of a limit
cycle, have been derived for filters, and have been used for
control by Ahmed and Belanger (1984b). They also demonstrate
that for low external input (reference or disturbance) signal
amplitudes limit cycles may be dominant in the output, but for
increasing amplitudes the noise model of quantization comes
into play and limit cycles may be quenched off, resulting in less
output noise than for low input signal amplitudes.
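The basic phenomenon is easy to reproduce in a toy experiment (illustrative only; the coefficient value is arbitrary):

```python
# A stable first-order recursion y[k] = -0.9*y[k-1], computed with rounding
# to integer multiples of the quantization step (state held in LSB units).
# The ideal response decays to zero; the quantized one locks into a
# period-2 limit cycle.
import math

def rnd(x):
    return math.floor(x + 0.5)    # round to the nearest LSB

y = 10                             # initial state, in LSBs
for _ in range(100):
    y = rnd(-0.9 * y)
print(y)                           # stuck oscillating at +/-4 LSB, not 0
```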
The value of the available techniques for limit cycle bounding
for higher order multivariable control systems seems however
to be limited. Since they have to be carried out numerically for
given parameters, it is probably more attractive to check the
effects directly via simulation in practice, taking into account
realistic input signals. Note that even slight measurement noise
may already quench off limit cycles in the critical "steady state"
situation. This is the same effect which sometimes leads to the
deliberate introduction of dither signals in non-linear systems.
On the same lines is the technique of random rounding known
from digital filters (Callahan, 1976; Buttner, 1977).
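A sketch of the random-rounding idea (illustrative only; not taken from the cited papers):

```python
# Round up with probability equal to the fractional remainder, so the
# quantizer is unbiased in expectation and the repetitive error patterns
# that sustain limit cycles are broken up.
import math, random

def random_round(x, q, rng):
    n = math.floor(x / q)
    frac = x / q - n                 # remainder in [0, 1)
    return (n + (1 if rng.random() < frac else 0)) * q

rng = random.Random(0)
x = 0.3 * 2**-4 + 0.4 * 2**-8        # a value between quantization levels
avg = sum(random_round(x, 2**-4, rng) for _ in range(20000)) / 20000
print(abs(avg - x) < 1e-3)           # unbiased on average
```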
4.1.3.3. Double precision arithmetic and error feedback. In Fig.
6 it has been assumed that accumulation in scalar products is
carried out with the full-length partial products. Quantization
occurs only when the result is stored away and (assuming
fractionals) the least significant bits are discarded. To compensate somewhat for the discarded residues thus produced they
could be stored too, and included in some simple way in
the next sample computation. This technique, called "error
feedback", plays some part in the digital filter field (a recent
paper is by Vaidyanathan, 1985), and has also recently been
proposed for Kalman filter implementation by Williamson
(1985). Such techniques are however not far away from performing double precision arithmetic (on signals, not coefficients),
as has been pointed out by Mullis and Roberts (1982).
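A minimal sketch of the error-feedback idea (names and data are invented for the example): keep the residue discarded at quantization and add it back into the next sample's computation.

```python
# Integrating a small constant increment: plain rounding loses an input
# that stays below half an LSB, while error feedback accumulates the
# discarded residues and recovers it.
import math

q = 2.0**-8

def quantize(x):
    return math.floor(x / q + 0.5) * q

def integrate(increments, error_feedback):
    y, residue = 0.0, 0.0
    for u in increments:
        full = y + u + (residue if error_feedback else 0.0)
        y_new = quantize(full)
        residue = full - y_new       # the discarded low-order bits
        y = y_new
    return y

inc = [0.3 * q] * 1000               # each increment is below q/2
print(integrate(inc, False), integrate(inc, True))
```

With plain rounding the output never moves (each increment rounds away); with error feedback it tracks the exact sum of about 300 q.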
A special technique for performing almost double precision
scalar product computation in an efficient way has been
described by Loges (1985) for a signal processor. Even if both
coefficients and signals are desired to have extended precision
this technique leads only to a four-fold increase in processing
time. This is quite good because doing anything other than
performing the arithmetic the processor is designed for (16-bit
in this case) is difficult and normally costs a lot of instructions.

4.2. Floating-point arithmetic. If standard wordlength floating-point arithmetic can be used, there is usually no reason to
worry about accuracy and dynamic range, provided that the
numerical values of data are in a reasonable range and computation of small differences of large numbers is taken care of. The
usual single precision format (standard IEEE 754) consists of
the mantissa's sign bit s, an 8-bit biased exponent e, and 23
mantissa bits for the fraction f. The decimal value is given by

(-1)^s · 2^(e-127) · (1 + f).                                (28)

The dynamic range spans 2^-126 ≈ 10^-38 up to 2^+128 ≈ 3·10^38,
and the accuracy according to 2^-23 as value of the least
significant bit in f corresponds to about seven decimal places.
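Formula (28) can be checked directly against the machine's own single-precision interpretation (a sketch for normalized numbers only):

```python
# Decode a 32-bit IEEE 754 pattern with (28): sign bit s, 8-bit biased
# exponent e, 23 fraction bits f.
import struct

def decode(bits):
    s = bits >> 31
    e = (bits >> 23) & 0xFF            # biased exponent
    f = (bits & 0x7FFFFF) / 2.0**23    # fraction in [0, 1)
    return (-1.0)**s * 2.0**(e - 127) * (1.0 + f)

bits = struct.unpack(">I", struct.pack(">f", -6.25))[0]
print(decode(bits))                     # -6.25
```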
If much shorter wordlength floating-point formats were used
it might however be necessary to introduce scaling to keep data
in the dynamic range, as discussed in Section 6, and quantization
effects might become significant. Note that a fundamental
difference from fixed-point quantization is that there the error
is an absolute one, i.e. the noise model may assume noise
injection to be independent of the signals, but with floating-point arithmetic the error is a relative one, dependent on the
signal amplitude.
Studies of quantization errors for floating-point arithmetic
operations and the resulting signal to noise ratio decrease effects
in digital filters go back to the end of the sixties (Sandberg,
1967; Weinstein and Oppenheim, 1969; Liu and Kaneko, 1969;
Kaneko and Liu, 1973; Fettweis, 1974). There are also studies
concerning digital control. Rink and Chong (1979a) derived an
upper bound for the variances of the plant state in a state
feedback plus observer regulator control system in a stochastic
setting. The bound can be quite loose, however. More accurate
analysis is possible by computing covariances directly (Rink
and Chong, 1979b). Van Wingerden and de Koning (1984)
studied the increase of the cost function due to roundoff noise
from mantissa rounding when an LQG state feedback is
implemented using floating-point arithmetic. Some examples
indicate good agreement between roundoff analysis and simulation. Emphasis is placed on derivation of approximate
expressions for means and variances of errors in floating-point addition and multiplication by improved modelling of
quantization. Phillips (1980) proposed a simulation scheme for
evaluating the variance of the error between a control system
output in the infinite and finite wordlength cases under the
assumption of a deterministic input (reference or disturbance).
This approach is however not far away from dispensing with
analysis and checking for wordlength directly with simulation.
Generally, the value of existing roundoff analysis for practical
purposes seems limited. Results can perhaps more easily and
more significantly be found by simulation, which is also more
easily adaptable to complicated situations, for example if different wordlengths are to be used at different points in a controller.

4.3. Non-standard arithmetic. Apart from the common fixed- and floating-point binary data formats and arithmetic, there are
at least two other candidates, logarithmic and residue arithmetic.

Logarithmic number representation might seem to be particularly well suited to control. Let the value to be represented be
v, and fractional number range be assumed, i.e. |v| < 1; then l
in

v' = v + Δv = sign(v) · D^l,   0 < D < 1                     (29)

could be stored as a conventional binary number in the
processor, representing v' which is the quantized version of v,
with Δv as quantization error. Practical values of D would be
close to 1.
The interesting property of this representation is that
the quantized values are unevenly spaced. With fixed-point
numbers, spacing is equal and quantization error is absolute.
With logarithmic number representation closest spacing is
achieved in the low magnitude range. If control system trajectories for large state transitions are not required to be very close
to the infinite precision ones, the higher quantization errors
resulting from large signal magnitudes may be tolerable. If in
steady state operation the signals (controller states, outputs,
partial sums) are of low magnitude, the increased resolution in
that range may be beneficial, leading to lower quantization
noise or less limit cycle amplitude. Interfaces to the plant should
however also be logarithmic. This is non-standard but possible
for AD- as well as DA-conversion, for instance via switched
attenuator networks. The arithmetic computations inside a
logarithmic number processor are obviously simple in the case
of multiplication. Addition and subtraction require logarithm
computation but this can be replaced by table lookup (Kingsbury and Rayner, 1971; Swartzlander and Alexopoulos, 1975;
Etzel, 1983; Frey and Taylor, 1985).
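A sketch of the representation in (29), assuming the reconstruction above (D and the test values are arbitrary):

```python
# Store the integer exponent l of v' = sign(v) * D**l with D close to 1.
# Quantization steps are then relatively uniform: the absolute error
# shrinks with |v| while the relative error stays roughly constant.
import math

D = 0.99

def log_quantize(v):
    l = round(math.log(abs(v)) / math.log(D))   # nearest integer exponent
    return math.copysign(D**l, v)

for v in (0.9, 0.09, 0.009):
    err = abs(log_quantize(v) - v)
    print(v, err, err / v)   # absolute error drops, relative error ~0.5% max
```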
The use of logarithmic number representation for digital
filtering has been proposed by Hall et al. (1970) and Kingsbury
and Rayner (1971), preceded by yet earlier proposals motivated
by construction of calculators, and has been discussed in several
papers since then. The digital control application has already
been mentioned in Lee and Edgar (1977) and Edgar and Lee
(1979). They proposed a number system with an integer and a
fractional part. The representation corresponding to (29) has
recently been proposed as a basis for a special-purpose control
processor by Lang (1984).
For control there seem to be two main problems. The first is
that the controller coefficients are not likely to be of low
magnitude, thus they are quantized relatively coarsely and
possibly this is detrimental to the control system's behaviour.
The second is that with practical control systems pure logarithmic signal representation will frequently be simply inadequate.
Imagine, for instance, a position control system involving high
resolution shaft encoders, where the position values are required
to be represented with equal absolute accuracy over the entire
range. The assumption that near steady-state operation leads
to near-zero signals will often be unjustified, for instance
when measurement signals are to be processed or preprocessed
separately from reference signals, instead of taking differences
first.
The last number system to be mentioned here is the residue
number system (Waser and Flynn, 1982). It was proposed long
ago for arithmetic unit construction and digital filtering. It also
showed up in control-related publications (Tan and McInnis,
1982; Pei and Ho, 1984). The main advantage is that very fast
computation is possible because operations are on digits instead
of whole numbers. There are no carries, possibly propagating
through all digits, thus slowing down the hardware. A high
degree of parallelism is possible in principle. It may well be that
residue arithmetic will gain ground in special purpose processor
designs.
5. Structures

5.1. Basic issues. Frequently, a state space description of a
controller or controller subsystem is derived in a manner
motivated by design theory. An example is the observer/state
feedback controller (1), (2), where the state has a physical
meaning (assuming that the plant state had one) and corresponding matrices are involved. If it is not necessary to preserve the
state meaning, but achieving the desired closed-loop control is
the only objective, then any system with equivalent i/o behaviour
from input to output will do the job. There may be i/o equivalent
[Figure omitted.]
FIG. 9. A direct structure.

systems which are preferable to the original one in the following
respects:
number of storage elements;
number of non-zero non-one coefficients;
computational delay;
multi-input/output capability;
state space description possible or not;
coefficient range;
coefficient sensitivity;
round/truncate noise.
If transfer functions are the starting point there may be
seemingly natural choices for obtaining programmable difference equations, such as (13) for (12), but other i/o equivalent
equations may be preferable. Traditionally, specific organizations of the difference equation computation are depicted in
block diagrams involving the z^-1 or delay element, as in Fig.
9, so that the structure becomes visible. The term "structure"
(or synonymous "form") is also used generally, for instance when
one state-space description (6) is transformed into another by a
similarity transformation

Ā = T^-1 A T
B̄ = T^-1 B                                                   (30)
C̄ = C T

yielding new matrices with different zero/non-zero entry patterns, or at least new numerical values.
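A small numeric check of (30): the transformed model has different coefficients but identical i/o behaviour (illustrative plain-list matrix code, invented data).

```python
# Simulate x[k+1] = A x[k] + B u[k], y[k] = C x[k] for a 2-state system,
# then repeat with the similarity-transformed matrices of (30).

def simulate(A, B, C, u_seq):
    x = [0.0, 0.0]
    ys = []
    for u in u_seq:
        ys.append(C[0] * x[0] + C[1] * x[1])
        x = [A[0][0] * x[0] + A[0][1] * x[1] + B[0] * u,
             A[1][0] * x[0] + A[1][1] * x[1] + B[1] * u]
    return ys

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0.5, 0.2], [-0.1, 0.3]]
B = [1.0, 0.0]
C = [1.0, 1.0]

T    = [[1.0, 1.0], [0.0, 1.0]]          # an arbitrary invertible T
Tinv = [[1.0, -1.0], [0.0, 1.0]]

At = matmul(matmul(Tinv, A), T)           # T^-1 A T
Bt = [Tinv[0][0] * B[0] + Tinv[0][1] * B[1],
      Tinv[1][0] * B[0] + Tinv[1][1] * B[1]]
Ct = [C[0] * T[0][0] + C[1] * T[1][0],    # row vector C times T
      C[0] * T[0][1] + C[1] * T[1][1]]

u = [1.0] + [0.0] * 9                     # impulse input
y1, y2 = simulate(A, B, C, u), simulate(At, Bt, Ct, u)
print(max(abs(a - b) for a, b in zip(y1, y2)))   # ~0 up to roundoff
```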
Determination of "good" structures has always been a main
issue in digital filtering. It seems to be quite reasonable to adopt
for control purposes structures which proved to be useful in
this field. However, some aspects are usually not addressed in
digital filtering, namely computational delay, MIMO capability,
and the influence of the closed-loop operation. In the following
some basic structures are discussed without taking the closed
control loop into account; work on this is reviewed in the
penultimate subsection. All discussions are on system (6).
5.1.1. Direct structures. The simplest case to consider is realization of a SISO transfer function G(z) from (12). In (13) a
corresponding structure is given in terms of its difference
equation. This structure belongs to the class of so-called direct
forms or structures because the polynomials appear directly as
coefficients in the difference equation or block diagram. As given
in (13), n + m delay or storage elements would be needed, but
this can be remedied. Various direct structures can be derived,
and a few of these at least can be found in any textbook on
digital filtering or control, for instance in Phillips and Nagle
(1984), Oppenheim and Schafer (1975). In Fig. 9 one of the direct
structures is shown, assuming m = n for convenience. This
structure can easily be extended to the MISO case.
It is well known that direct structures suffer from various
drawbacks. First, the coefficients can easily be spread over a
large number range, causing problems with number representation and arithmetic. This is because, according to Vieta's
theorem, sums, products, and sums of products of polynomial
roots form the coefficients, and roots can be anywhere from the
origin even to outside the unit circle in the z-plane with
controllers. This is somewhat in contrast to digital filters, where
poles and zeros are usually positioned well off the origin. Second,
the sensitivity of roots to coefficient errors can be up to infinity.
Such errors are introduced by the quantizing of coefficients to
represent them in the processor. If ;.v denotes a root of
p(z) = z"

+ P,,-lZ,,-1 + ... + Po

IUkf

(31)

and Pi is perturbed by APi, then Av is shifted by &).v and .1.iv is
given (to first order, denoted by "') by
A}·v == -"

2:

FIG. 10. Cascade structure example.

).~

APi'

(32)

[i., - i.;J

{~!

which clearly indicates high root sensitivity for clustered roots
(Kaiser, 1966; Oppenheim and Schafer, 1975).
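A numeric illustration of (32) for a quadratic (invented root locations): perturbing the constant coefficient by Δp shifts a root by roughly Δp / (λ_1 - λ_2), so clustered roots are far more sensitive than well-separated ones.

```python
# Compare the root shift caused by the same coefficient perturbation for
# separated roots (1.0, 0.5) and clustered roots (1.0, 0.99).
import cmath

def quad_roots(p1, p0):
    """Roots of z**2 + p1*z + p0 (complex-safe)."""
    d = cmath.sqrt(p1 * p1 - 4.0 * p0)
    return (-p1 + d) / 2.0, (-p1 - d) / 2.0

def root_shift(r1, r2, dp0):
    p1, p0 = -(r1 + r2), r1 * r2            # Vieta: coefficients from roots
    a, b = quad_roots(p1, p0 + dp0)         # perturb the constant term
    return min(abs(a - r1), abs(b - r1))    # shift of the root nearest r1

dp = 1e-4
print(root_shift(1.0, 0.5, dp))    # separated roots: shift ~ 2e-4
print(root_shift(1.0, 0.99, dp))   # clustered roots:  shift ~ 1e-2
```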
The situation becomes particularly bad when unclustered
"slow eigenvalues" in the s-plane generate clusters near 1 in the
z-plane. There is some remedy to this case by means of "delay
replacement" (Agarwal and Burrus, 1975; Nishimura et al., 1981;
Orlandi and Martinelli, 1984; Goodwin, 1985; Middleton and
Goodwin, 1985). One version of this is to replace the z^-1 blocks
by so-called δ^-1 blocks. A δ^-1 block realizes the z-transfer
function T/(z-1) and thus represents a discrete integrator.
Implementing a δ^-1 block requires the operation

ζ_k = ζ_{k-1} + T β_{k-1}                                    (33)

(ζ output variable, β input variable of the δ^-1 block), instead
of the z^-1 shift operation. A z-transfer function then transforms
into a δ-transfer function, which can be realized using any
suitable structure known for z-transfer functions, but now
involving δ^-1 blocks instead of z^-1 blocks. The advantage over
the z^-1-block based realization is that the corresponding z-poles can be orders of magnitude less sensitive to errors in δ-polynomial coefficients, just in the case of pole clusters near
z = 1 as introduced with relatively fast sampling.
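Assuming the reconstruction of (33) above, the δ^-1 block is a running sum of past inputs scaled by the sampling period T, i.e. a discrete integrator realizing T/(z-1). A unit step in produces a ramp out (illustrative sketch):

```python
# delta-inverse block: zeta[k] = zeta[k-1] + T * beta[k-1], per (33).
T = 0.01                           # sampling period (arbitrary)

def delta_inverse(beta_seq):
    zeta, out = 0.0, []
    for beta in beta_seq:
        out.append(zeta)           # output before the update: zeta[k-1] term
        zeta = zeta + T * beta     # the recursion (33)
    return out

print(delta_inverse([1.0] * 5))    # a ramp: 0, T, 2T, ... (up to roundoff)
```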
The first order root sensitivity of (32) is not always of great
importance, but sensitivities of impulse response (Knowles
and Olcayto, 1968) and frequency response (Crochiere and
Oppenheim, 1975) are high as well with direct structures.
Another related drawback is potentially high gain sensitivity.
Assuming a stable transfer function G(z) with notation of (12)
for simplicity, the final value of the output after a unit step
input is
y_k |_{k→∞} = (Σ_i b_i) / (1 + Σ_i a_i).                     (34)

A direct structure, directly involving the quantized versions of
b_i and a_i, is now likely to introduce inaccurate small differences
of large numbers in (34), because the coefficients are frequently
of large absolute value with alternating signs. Finally, direct
structures suffer from particularly high signal quantization noise,
which relates to high coefficient sensitivity (Fettweis, 1972, 1973;
Jackson, 1976). The conclusion drawn from all this is to
recommend direct structures only for low order systems or
subsections of higher order systems, and to use them with care.
5.1.2. Cascade structure. A more reliable SISO structure, which
is well accepted in digital filtering, is the cascade structure,
where G(z) is implemented in factorized form as a series
connection of low order blocks, usually of first or second order
(Oppenheim and Schafer, 1975), see Fig. 10 for an example. This
structure offers possibilities of optimal distribution of poles and
zeros among the blocks, and internal block structures can be
chosen optimally. However, there are drawbacks for control
application. First, the structure introduces increased computational delay in the common case of G(z) having direct
feedthrough, i.e. b_0 ≠ 0 in (12). This is because output appears
only after computation in every block is finished, unless direct
feedthrough is bypassed directly from input to output, which
means departing from pure cascade structure. The second
drawback is that this structure is limited to the SISO case, so
that it may be valuable for SISO subsystems in a complex
structured controller, but not for a complete MIMO controller.
5.1.3. Parallel structure. A structure which is widely regarded
to be as good a candidate as the cascade structure is the parallel
structure (Jackson, 1970b). It corresponds to implementing G(z)
in a partial fraction expansion form (Gold and Rader, 1969).
The partial fraction blocks are commonly chosen to be of first
and second order and can be implemented by suitable structures.
Special cases of the parallel form have received much attention
in digital filtering as being suboptimal in some respect to certain
optimal structures, as discussed below (Jackson et al., 1979;
Mullis and Roberts, 1976). An advantage of parallel structures
is that they can be used in MIMO cases.
5.1.4. Other structures. The above discussion does not cover all
types of structure. There are several additional structures of
practical importance known in digital filtering, such as wave
digital filters or ladder structures. For an overview and bibliography see the recent paper of Fettweis (1984). However, such
structures have not yet appeared in control applications.
5.1.5. Relevance of non-state-space structures. That a structure
can be described by standard state-space models might be taken
for granted by control engineers who are used to thinking in
state-space terms, but there are many structures which cannot
be represented by a single standard state-space model (Willsky,
1979; Moroney, 1983). This is because a state-space structure
places restrictions on what nodes may be present. Take the
cascade structure of Fig. 10 for example, where signal v occurs
at a node not accounted for in a state-space model. If the
cascade structure is restructured to map into a state-space
structure, such as in Katz (1981), other coefficients are involved
and intermediate signals are no longer represented. This is
only irrelevant under infinite precision arithmetic. A useful
description solving this representation problem is discussed in
the last subsection.
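The direct and cascade realizations discussed above can be compared numerically (illustrative sketch with invented coefficients): in exact arithmetic they produce identical outputs, and the structures differ only once coefficients and signals are quantized.

```python
# The same G(z) realized two ways: a cascade of two first-order sections
# and a direct second-order form with the multiplied-out polynomials.

def first_order(u_seq, b0, b1, a1):
    """Section (b0 + b1 z^-1) / (1 + a1 z^-1), direct form II."""
    w, ys = 0.0, []
    for u in u_seq:
        w_new = u - a1 * w
        ys.append(b0 * w_new + b1 * w)
        w = w_new
    return ys

def direct2(u_seq, b, a):
    """(b[0] + b[1] z^-1 + b[2] z^-2) / (1 + a[0] z^-1 + a[1] z^-2)."""
    w1 = w2 = 0.0
    ys = []
    for u in u_seq:
        w = u - a[0] * w1 - a[1] * w2
        ys.append(b[0] * w + b[1] * w1 + b[2] * w2)
        w1, w2 = w, w1
    return ys

u = [1.0] + [0.0] * 19                # impulse input
# cascade: (1 - 0.5 z^-1)/(1 - 0.9 z^-1) times (1 - 0.2 z^-1)/(1 - 0.8 z^-1)
y_casc = first_order(first_order(u, 1.0, -0.5, -0.9), 1.0, -0.2, -0.8)
# direct: numerator 1 - 0.7 z^-1 + 0.1 z^-2, denominator 1 - 1.7 z^-1 + 0.72 z^-2
y_dir = direct2(u, [1.0, -0.7, 0.10], [-1.7, 0.72])
print(max(abs(a - b) for a, b in zip(y_casc, y_dir)))   # ~0
```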

5.2. State-space structures. Given a state-space description (6)
(treatment of types other than (6) is obvious), an infinite number
of structures can be derived via similarity transformation (30).
The control engineer may be tempted to pick well-known
canonical forms first, such as a control canonical form. This and
related forms, however, involve transfer-function polynomial
coefficients more or less directly and thus suffer from the
problems discussed above. The only real advantage of such
structures is their minimum coefficient count. For lower order
systems or subsystems, they may be used in general without
problems, although the author has encountered practical applications where canonical forms of only third order caused
accuracy problems even with 32-bit floating-point arithmetic.
More promising for higher order controllers are the parallel
structures. A typical one has a block diagonal A

159

Survey Paper
with (2,2) silbmatrices Aj accomI11odating complex eigenvalues,
and C possibly non~zero everywhere. This or related. structures
play an important role in digital filtering. It is a special case of
that devised by Mullis and Roberts (1976) with special Aj as
suboptimal quantization noise structure. The latter leads to
dense A. requiririg much computational effort, and has thus
been considered unattractive. Several authors took the block·
optimal structure as a starting point and then focussed on the
second order structures, i.e. on the Aj and the associated parts
of Band C (Jackson et a1., 1979, Barnes, 1979, 1984; Mills er
a1., 1981; Bomar, (985). The second order substructure also
attracted authors because results on overflow stability and limit
cycles could be derived (Mills er a1., 1978; Jackson, 1979).
Various structures can be chosen for the second order
subsystems which accomodate complex eigenvalues (1 ± jw and
the selection may be guided by the sensitivity, qU811tization
noise, or limit cycling considerations discussed in the literature
quoted above. The coefficient number range and the number of
coefficients contributing to the computation time may also
influence the decision. If, for instance, the number of non~zero
non~unity coefficients is to be minimized, control canonical or
observer canonical forms may be of interest, e.g.

on the variance of the output noise generated by roundoff in
the state vector computation. A lower bound exists because the
scaling constraint has to be met. Attaining the lower bound is
possible. and a corresponding transformation matrix T can be
constructed.
The optimal realizations suffer however from the fact that A
generally has no specific structure. All coefficients can be
non-zero and non·unity. This has always been considered
unattractive. But with a digital signal processor as a target the
computation of long scalar products is not so time consuming
in relation to other operations such as overflow management.
If for example an optimal realization enabled single-word
arithmetic to be used, whereas a structure with sparse A
demanded multi-word arithmetic, the former might lead to
the faster solution. Considering optimal structures in control
applications thus seems worthwhile. Moroney (1983) adapted
the theory to closed-loop operation, but focussed on the block·
optimal case. From his numerical example, as well as from openloop filter examples by Jackson er .1. (1979), there is some
indication that non~optimal parallel structures with second
order blocks perform quite closely to corresponding blockoptimal ones.

Aj~[~ -U~;W2]

5.3. Closed~/oop considerations. It is quite useful to have a
collection of "known-to·be-good" structures and guidelines from
which to select under given conditions. In most c!lses such a
selection without closed·loop optimization will be sufficient.
Given a 16-bit target processor, for instance. it does not matter
much whether the minimum wordlength necessary to achieve
satisfactory closed-loop operation is 8~ or lO.. bit, because 16·bit
will be the increment. The situation changes, however, at the
boundary, and in cases where wordlength is not fixed. as in
custom VLSI processor design. Methods of structure selection
or optimization considering the closed~loop operation, which
optimize with respect to roundoff noise from signal quantization
as well as with respect to coefficient quantization effects, should
be useful in such cases. These issues have been studied by
Moroney (1983). Moroney et al. (1980, (983), and Sasahara er
al. (1984). All assume a stochastic setting in an LQG context.
As mentioned above, Moroney el al. adapted the theory of
Mullis, Roberts and Hwang to the closed-loop SISO case
and additionally devised an iterative structure optimization
technique for minimizing roundoff noise, which could be augmented to extend optimization to coefficient wordJength effects.
The objective is to minimize the increase of an LQG cost
function. The influence of coefficient wordlength is introduced
via a statistical wordlength technique. The idea of statistical
wordlength estimation'is already found in Knowles and 01cayto
(1968) and was later used by Avenhaus (1972) and Crochiere
(1975), who estimated filter frequency response errors by
assuming coefficient quantization errors to be independent
random variables, leading to a variance estimate on frequency
response. This is not as pessimistic as the equally possible worst·
case bound, which is based on the assumption that individual
coefficient errors are maximum in absolute value with signs
opposed to the corresponding sensitivity of the response to the
coefficient. But Crochiere's examples show that the statistical
estimate is likely to be still somewhat pessimistic.
The statistical wordlength concept has been applied by
Moroney( 1983) to estimation of LQG cost function degradation.
Second order sensitivities are involved because first order
sensitivities are zero owing to LQG design. The structure
optimization technique of Moroney allows for constraints in
the structure, so the matrices of the controller description can
be kept sparse, if so wished. Furthermore, the, class of structures
considered is wide, because everything is done for a generalized
state-space structure discussed in the next subsection.
Sasahara et al. (1984) also minimize cost function degradation
(for digital filters see also Kawamata and Higuchi, 1985). They
derive a transformation matrix Tfor a Kalman filter, plus state
feedback controller, which minimizes degradation due to signal
quantization noise. So far this is also an adaptation of the
Mullis, Roberts and Hwang theory to closed-loop control. From
statistical modeHing of coefficient quantization errors they
then conclude that this approximately minimizes coefficient
Quantization degradation too. This conclusion is in line with
results from digital filtering (Fettweis, 1973; Jackson, 1976;

n,

(36)

i'j~(O,I)

for a MISO controller. Another choice is

A~[ -w
u
J

w]

(37)

(J

with no special pattern in Bj , C!, The resulting matrix A is well
known in control~related algebra as a real valued version of the
diagonal form. Stable controllers always have lui < 1,Iwi < I, thus
Aj is wen suited to fractional arithmetic. Transformation of any
state~space model of the controUer into the real diagonal form
(35), (37) can easily be achieved using standard EISPACK
software, provided the eigenvectors of A are sufficiently linearly
independent in a numerical sense. A successful transformation
using CAD software does not however guarantee that the
resulting state·space model can be implemented with sufficient
accuracy with shorter wordJength arithmetic on the target
processor. Problems with large numbers in Band C can be
expected. They correspond to large residues in a partial fraction
expansion of the transfer functions, where contributions of terms
are likely to almost cancel, thus producing large errors. The
author encountered a case in a practical application where three
real eigenvalues spaced 5% from each other caused such
accuracy problems even with 24~bit mantissa ftoating~point
arithmetic.
The same problems may occur with any attempt to force a
model into any parallel structure in Cases where there are
clustered eigenvalues requiring a series connection represen·
tation instead of a parallel one. The obvious way to treat such
cases is to introduce parallel blocks of higher order with
appropriate internal structure. C1ustered eigenvalues could then
be accommodated within a lordan block or a companion form
block. But this should not simply be done after the observation
of eigenvalue clusters without checking the residues, because
clusters with inherent parallel block structure also occur.
Additionally, it is with clustered eigenvalues that companion
forms suffer from high eigenvalue sensitivity. The problems just
mentioned h~ve astonishing)y not been an issue in digital
filtering. The Jordan form played some part in Barnes and Fam
(1977) but not with respect to the residue problems mentioned.
Particular types of state-space structures which have received
a lot of attention are the minimum roundoff-noise structures
proposed by Mullis and Roberts (1976) and Hwang (1977). They
minimize signal quantization noise arising from the state update
computation in (6) while retaining scaling of the state vector.
Scaling is performed in such a way that the overflow probability
is made equal for every state variable assuming a white noise
input signal. The reasoning behind the optimal minimum
roundoff realization is based on the derivation of a lower bound


Survey Paper
Jackson et al., 1979; Antoniou et al., 1983) also showing close
relationships between minimal noise and minimal sensitivity.
An example given by Sasahara et al. shows large improvements
in cost function degradation using the optimized structure
compared to a direct form, and the improvement on an unfortunately
not specified canonical form is also considerable. Agreement
between analysis and simulation appears to be very good
for roundoff noise, but less so for coefficient quantization.
Since LQG cost function degradation is not always a suitable
objective in practical applications, other means of analysis and
optimization should also be developed. Quite effective tools
could probably be derived from closed-loop eigenvalue sensitivity
analysis. Closed-loop frequency response sensitivity might
also be interesting, possibly exploiting non-approximate large-change
sensitivity expressions as discussed for digital filters by
Jain et al. (1985).
5.4. Serialism. As mentioned in Subsection 5.1.4, the cascade
structure of Fig. 10 cannot be described as a standard state-space
model. Obviously, the variable v between the first-order blocks
cannot be represented because it is neither a state nor an output,
and these are the only variables, i.e. network nodes, available
in the state-space formulation.
From another viewpoint, the example possesses serialism,
whereas in a state-space structure all state vector components
could be updated in parallel from the "old" state and the input
vector. In order to describe more general structures (the cascade
is only one example), it is necessary to account for precedence.
Crochiere and Oppenheim (1975) distinguish node precedence
from multiplier precedence. In the structure of Fig. 10 there are
two node precedence levels: first the node signal v_k must be
computed, then y_k. There are also two multiplier precedence levels:
multiplications involving a_0, b_0, b_1, a_1, b_2 for instance could be
performed in parallel first, but multiplication with b_1 has to
await computation of v_k. However, the number of multiplier and
node precedence levels is not always the same. The motivation for
considering multiplier precedence lies in the dominance of
multiply execution time frequently encountered. The number of
multiplier precedence levels of a structure then determines the
minimum sampling period achievable assuming that as many
multiplies as possible are carried out in parallel using multiple
arithmetic units.
This issue may be of importance in special-purpose processor
design, but precedence also has important implications in the
usual single processing unit case. One implication is that the
minimum achievable computational delay in the case of direct
feedthrough is dependent on precedence; another is that structures
with precedence might be preferable with respect to finite
wordlength effects. In this case it is necessary to have a
description of the structure representing the original coefficients
and the original node signals. Such a description has been
introduced to the control field by Moroney (1983) and Moroney
et al. (1980, 1981, 1983). It had previously been used with digital
filters by Chan (1978), and recently by Mullis and Roberts (1984)
in a VLSI filter chip design context, labelled factored state
variable description (FSVD). Using this description, the structure
of Fig. 10 would be represented by

(38)

or

(39)

FIG. 11. Pipelining with structure from Fig. 10: (a) unpipelined;
(b) pipelined.

Serialism is now expressed by the first computed intermediate result r_k.
Each Ψ_i matrix necessary in a FSVD corresponds to one node
precedence level. The intermediate signals can be represented
and so can the coefficients. Note that revoking the factorization
by introducing Θ = Ψ_2 Ψ_1 immediately yields a standard state-space
description (by partitioning Θ into A, B, C, D appropriately),
but neither the intermediate signal nor all original
coefficients are then represented.
Thus FSVD could be useful for modelling general structures
within an implementation oriented CACE environment. Cascade
structure is only one example of such a more general (with
respect to standard state-space) structure, a delay-replacement
state-space structure based on (33) being another one.
In the work of Moroney et al. a slight modification has been
made. Owing to their restriction to LQG compensators without
direct feedthrough (see Subsection 2.2) they introduce the output
(SISO case) as a state and call the result "modified state-space
representation". All their work, which has frequently been
quoted in this paper, is based on this representation.
Another issue linked with precedence is pipelining. In the
example in Fig. 10, imagine that there is double hardware, so
that multiplies and adds for the left-hand block 1 and the right-hand
block 2 can be executed simultaneously, i.e. in parallel.
Then simply letting block 2 hardware wait for completion of
block 1 computation so that one hardware unit is always idle
would of course be unattractive. But if a delay (i.e. storage) is
inserted between the blocks for storing v_k, the multiplier as well
as node precedence levels are reduced to one, and both hardware
units could always be busy, running at double sampling frequency,
see Fig. 11. This is pipelining. It allows an increased
throughput rate but introduces delay. In a control feedback
loop this delay must then be accounted for in design (Moroney,
1983; Moroney et al., 1981), but despite this delay the control
system performance can possibly be improved compared to the
lower sampling frequency non-pipelined case.
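The delay-insertion idea can be checked numerically. The following sketch (the coefficients and input sequence are invented, and the two blocks are reduced to simple first-order accumulators) computes a two-block cascade once serially and once with a pipeline register between the blocks; the pipelined output is exactly the serial output delayed by one sample, which is the delay that must be accounted for in the loop design:

```python
# Minimal numerical sketch of pipelining a two-block cascade.
# Coefficients a1, a2 and the input sequence are invented.
def cascade(u, a1=0.5, a2=0.3, pipelined=False):
    v_state = y_state = 0.0   # internal states of block 1 and block 2
    v_delay = 0.0             # pipeline register between the blocks
    out = []
    for uk in u:
        v = a1 * v_state + uk             # block 1
        vin = v_delay if pipelined else v # pipelined: use stored v
        y = a2 * y_state + vin            # block 2
        v_state, y_state, v_delay = v, y, v
        out.append(y)
    return out

u = [1.0, 0.5, -0.25, 0.0, 0.75, 0.2]
y_serial = cascade(u)
y_pipe = cascade(u, pipelined=True)
print(y_pipe[1:])     # equals y_serial[:-1], i.e. one sample of delay
```

In real pipelined hardware both blocks would additionally run at the doubled rate; this sketch only demonstrates the extra transport delay introduced by the register.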

6. Scaling

At least when using fixed-point arithmetic it is usually
necessary to perform scaling on the controller to be implemented.
The primary objective is to fit data which are computed during
the course of a difference equation calculation into the limited
number range, so that overflows are avoided without provoking
excessive signal quantization effects. A second objective with
scaling is to alter coefficients in such a way that they fit into
the coefficient number range. This is not always achieved when

scaling is only oriented towards data overflow avoidance, and
scale factors then have to be altered appropriately.
The following discussion is on a controller formulated as a
state-space system, but the concepts apply equally well in other
formulations, such as (13), for example. The scaling task may
be partitioned into three subtasks, which might be called

input and output scaling;
state vector scaling;
scalar product scaling.

They are discussed below in this order, which also reflects the
chronological sequence within the implementation process. Note
however that scaling cannot always be handled separately after
structure selection. Any kind of structure optimization or
evaluation with respect to finite precision arithmetic should
have a scaling procedure as an underlying process, because
scaling affects the numerical values of the coefficients.

6.1. Input and output scaling. During controller design the
plant's outputs and control inputs are often conveniently
handled as physical variables without normalization, i.e. outputs
of a system may be in bar and m s^-1. Once the ranges of values
occurring in closed-loop control system operation are known,
the transducer gains can be determined. The output of a
transducer, say -10 V ... +10 V, must be represented in the
processor according to the data format used in the controller
implementation. Using fractional arithmetic, the bit pattern
output of the AD-converter representing -10 V ... +10 V may
be aligned to give -1 ... +1 in the processor, i.e. the most
significant bit (msb) of the ADC output is also the msb of the
data word then used for the input to the difference equations.
In the case of digital input, for instance from a position encoder,
the alignment could also be done in this way, and for the outputs
of the controller it is just the same.
Let the physical variable range of a plant's input, which has
been used in designing the controller, be given for example as
y_i in the range -20 A ... +20 A for an electromechanical
actuator. This variable is represented in the range -1 ... +1 in
the processor (fractional arithmetic assumed). Possible intermediate
variable transformations, for instance into -10 V ...
+10 V via a DAC and then into the y_i range via a power
amplifier, do not matter here. The gain between the value in the
processor and the physical value used in the controller derivation
must however be accounted for by scaling the controller
equations before supplying them to the further steps of the
implementation procedure. In the example the ith row of C and
D must be multiplied by 1/20 to obtain the correct numerical
values. As a whole, there must be input and output scaling to
change the B, C, and D matrices of (6), for example, to B S_u,
S_y^-1 C, and S_y^-1 D S_u respectively. The scaling matrices are
diagonal and their elements are given by

s_i = R_i^ph / R^pr    (40)

where R^pr means the number range span in the processor (the same
number range for inputs and outputs is assumed), i.e. 2 for
fractional arithmetic, and R_i^ph means the physical variable range
span used in the design of the controller which led to the original
A, B, C, D matrices, i.e. 40 A for R_i^ph in the example. In the case of an
unsymmetrical physical variable range, for example 0 ... 40 A,
an appropriate offset must be added, preferably at the transducer
or amplifier side.
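The worked example from the text (a -20 A ... +20 A actuator command under fractional arithmetic) can be sketched as follows; the function name and the controller coefficients are invented, and the formula follows (40):

```python
# Sketch of input/output scaling for the example in the text:
# fractional arithmetic, processor span R_pr = 2 (-1 ... +1), and a
# plant input designed for -20 A ... +20 A, i.e. R_ph = 40 A.
R_PR = 2.0

def scale_factor(r_ph):
    return r_ph / R_PR          # (40): s_i = R_ph,i / R_pr

s = scale_factor(40.0)          # 20.0 for the -20 A ... +20 A actuator

# The ith row of C (and D) is multiplied by 1/s so that the processor
# range -1 ... +1 corresponds to the physical -20 A ... +20 A command.
C_row = [4.0, -120.0, 36.0]     # invented controller coefficients
C_row_scaled = [c / s for c in C_row]
print(s, C_row_scaled)
```

The same division by 20 is exactly the factor stated in the text for the ith row of C and D.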
In the above discussion it has been assumed that the ranges
of the physical variables, and accordingly the measurement
ranges of the transducers, are known. This is usually the case. If
it is not, the techniques discussed below for determination of
maximum deflections of state variables could also be used
correspondingly to get that information.

6.2. State vector scaling. For a system (6), the state variables
are scaled via

x_scaled = diag (1/s_x,i) x = S_x^-1 x    (41)

thus the scaled system is given by

x_scaled,k+1 = S_x^-1 A S_x x_scaled,k + S_x^-1 B u_k    (42)

y_k = C S_x x_scaled,k + D u_k    (43)

leading to new matrices A_s, B_s, and C_s. The scale factors s_x,i can
always be chosen so that x_scaled stays within the number range
given by the data format used. But in order to minimize data
quantization effects x_scaled should not be permanently far off the
limits during operation of the closed-loop control system, i.e.
scaling should be such that the maximum absolute value of a
variable is just below the upper number range limit under worst-case
conditions.
What remains to be discussed is determination of the scale
factors. Basically, there are two approaches to determining the
s_x,i: analysis and simulation. The conceptually simplest is to
simulate the closed-loop control system under various conditions,
preferably under conditions which are anticipated to be
worst-case with respect to the values of x. The largest absolute
values of the components of x can be collected and scale factors
can easily be derived from the largest absolute values overall
per state variable. All this can be automated by appropriate
software. Although the effort in performing a number of simulations
might be considerable, the data collection mentioned can
often be a by-product of simulations carried out anyway in the
case of control design evaluation.
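The data collection step just described can be sketched as follows; the trajectories here are invented stand-ins for real simulation output, and the margin parameter is an assumed design choice:

```python
# Sketch of simulation-based scale-factor collection: run (or load)
# several worst-case simulations, record the controller state
# trajectories, and take the overall maximum absolute value per state.
def scale_factors(runs, margin=1.25):
    """runs: list of trajectories, each a list of state vectors x_k."""
    n = len(runs[0][0])
    peaks = [0.0] * n
    for traj in runs:
        for x in traj:
            for i in range(n):
                peaks[i] = max(peaks[i], abs(x[i]))
    # leave some headroom above the observed peak (margin > 1)
    return [margin * p for p in peaks]

runs = [
    [(0.1, -3.0), (0.4, -7.5), (0.2, -2.0)],   # simulated condition 1
    [(-0.6, 1.0), (-0.9, 4.0), (-0.3, 0.5)],   # simulated condition 2
]
s = scale_factors(runs)
print(s)   # per-state scale factors s_x,i
```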
In the digital filtering field, state vector scaling has been dealt
with analytically since the early seventies and is represented in
most textbooks in this field. Some prominent papers have been
published by Jackson (1970a, 1970b), Hwang (1975a, b), and Mullis
and Roberts (1976). Concepts developed there have been used
in control engineering by Moroney (1983), who gives an extensive
discussion and bibliography, by Moroney et al. (1980, 1983),
Sasahara et al. (1984), Scharf and Sigurdsson (1984), and Ahmed
and Belanger (1984a). In the digital filtering field, the digital
systems usually operate in open loop and are of the SISO type.
Analytic scaling is based there on certain assumptions about
the input signal to be expected. In the MISO or MIMO
controller case, such assumptions cannot be made as easily
because the plant measurement outputs, which are inputs to the
controller, are interdependent according to the plant dynamics
and structure. Furthermore, the closed-loop nature of controller
operation should be taken into account.
If scaling is allowed to be a bit pessimistic, experience shows
that quite often useful scale factors could have been found by
driving the digital open-loop controller alone with worst-case
input signals. For a SISO servo-control system, for example,
full-scale step reference inputs may be supposed to be worst-case.
Indeed, the largest deflections of the controller's state
variables frequently occur right after the step, and where the
plant reacts slowly, the feedback case is not much different from
the open-loop case in terms of maximum deflection of the
controller state. So, simple calculation of the controller state after
a step input in open loop yields reasonable scale factors in such
a case, called unit-step scaling with respect to filters in Phillips

and Nagle (1984) and also used by Mitchell and Demayer (1985).

Note however that such an approach only works with stable
controllers. An integral part in the controller would only be
allowed if it affected only one state variable, i.e. if it were
decoupled in the controller. It could then be scaled separately,
based on what is expected to be specified as its maximum
contribution to the actuating variables.
Generally, scaling of controllers would be most safe and least
pessimistic when based on closed-loop considerations. As in the
open-loop case of digital filters, there must be some assumptions
about input signals, but now these are not necessarily input
signals to the controller. They may be inputs to the plant,
for example disturbances. The closed-loop system must be
sufficiently linear, because the analytic scaling approaches rely
on linear models.
After a linear discrete model of the closed-loop system has
been set up, assumptions about input signals are in order. In
stochastic settings it is reasonable to assume worst-case stochastic
input signals and then to compute the variances σ²_x,i of the
controller state variables by standard techniques. In the case of
zero-mean Gaussian signals, for instance, the probability that
the amplitude exceeds 3.3σ is only 0.001, thus a scale factor s_x,i
in the range 3σ_x,i ... 10σ_x,i should be reasonable for fractional
arithmetic overflow limits -1 ... +1. The actual value selected
depends on the supposed quality of the Gaussian model of the
real signal.
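For a linear closed-loop model driven by white noise, the state variances follow from the steady-state covariance, which solves the discrete Lyapunov equation P = A P A' + σ_v² B B'. The sketch below (system matrices are invented) obtains P by simple fixed-point iteration, which converges for a stable A, and then forms scale factors at the optimistic end of the 3σ ... 10σ range:

```python
# Sketch of variance-based scaling; A, B, and sigma_v^2 are invented.
import math

A = [[0.9, 0.1], [0.0, 0.8]]     # stable closed-loop model
B = [[1.0], [0.5]]               # single white-noise input
sigma_v2 = 1.0

def lyap_iter(A, B, q, steps=2000):
    """Iterate P <- A P A' + q B B' to the steady-state covariance."""
    n = len(A)
    P = [[0.0] * n for _ in range(n)]
    for _ in range(steps):
        AP = [[sum(A[i][k] * P[k][j] for k in range(n)) for j in range(n)]
              for i in range(n)]
        P = [[sum(AP[i][k] * A[j][k] for k in range(n))
              + q * B[i][0] * B[j][0] for j in range(n)] for i in range(n)]
    return P

P = lyap_iter(A, B, sigma_v2)
sigma_x = [math.sqrt(P[i][i]) for i in range(len(A))]
scale = [3.0 * s for s in sigma_x]      # c = 3, the optimistic end
print(sigma_x, scale)
```

For the decoupled second state the result can be checked by hand: its variance is 0.5²/(1 - 0.8²) ≈ 0.694.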
The variance-oriented scaling approach has been used in
connection with control by Moroney et al. (1980), Moroney
(1983), Scharf and Sigurdsson (1984), Sasahara et al. (1984), and
Ahmed and Belanger (1984a). If there are constant (not step)
disturbances or reference inputs in addition to the stochastic
signals, the mean values x̄_i of the controller state variables must
also be computed for worst-case situations and scale factors
must be selected so that |x̄_i| + c σ_x,i ≤ 1, c = 3 ... 10, in the case
of fractional arithmetic. Moroney (1983) and Moroney et al.
(1983) propose some remedy for the non-zero setpoint situation
in order to obtain zero-mean controller state variables, but it
seems to be rather limited from a practical viewpoint.
Deterministic input signals are probably most often accounted
for in practice by simulating the closed-loop system for the
operating conditions expected in reality, as mentioned above.
Possibly a linear simulation is sufficient for collecting scale
factors. In digital filtering other means have been developed to
determine scale factors based on deterministic assumptions on
input signals. They can be adopted for closed-loop control as
discussed by Moroney (1983). The aim is to calculate upper
bounds for the controller state variables x_i under some boundedness
assumptions on the input signals. The basic idea of
bound-based scaling is illustrated by the following.
Given an input signal to the closed-loop system represented
by its samples v_k, a controller's state variable is given by

x_i,k = Σ_{j=0 ... k} h_i,k-j v_j    (44)

where {h_k} is the impulse response sequence of this transfer path.
The sum in (44) can now be bounded via the Hölder inequality
(Epstein, 1970; Hwang, 1975a, 1975b), thus

|x_i,k| ≤ (Σ_j |h_i,j|^p)^(1/p) (Σ_j |v_j|^q)^(1/q)    (45)

where 1/p + 1/q = 1. Since the factors (...)^(1/p) and (...)^(1/q) are l_p and
l_q norms of vectors, scaling based on such bounds is called l_p
scaling (Hwang, 1975a, 1975b). The name of the norm used for
the impulse response sequence is used by convention.
With digital filtering, l_2 scaling plays an important role in
connection with optimal realization structures (see Section 5).
In this case the Euclidean norm of the impulse response sequence
as well as of the input sequence is used, which in fact means that
an assumption on an energy bound on {v_k} must be made from
a deterministic viewpoint. This type of scaling corresponds to
the overflow-probability scaling discussed above in a
stochastic setting, with {v_k} a white noise sequence, in which
case the variance of x_i is given by

σ²_x,i = σ²_v Σ_j h²_i,j    (46)

If the only assumption on {v_j} were that any sample is
bounded by |v_j| ≤ M, then q = ∞ and p = 1 should be taken.
Note that this is the limiting case but (45) is still valid (Epstein,
1970). Equation (45) then yields

|x_i,k| ≤ M Σ_{j=0 ... k} |h_i,j|    (47)

It is obvious that this yields an absolutely worst-case, pessimistic
bound. Equality in (47) holds if v_j in (44) is always at its limits
+M or -M with the sign corresponding to that of h_i,k-j. Note
that the impulse response sequence must form an absolutely
convergent series for (47) to be useful, but this is guaranteed for
a stable linear closed-loop system (Strejc, 1981). The opposite
case to that last discussed is q = 1, p = ∞, which leads to an
assumption on Σ_j |v_j|. Only signals whose number sequence is
absolutely summable are allowed here.
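The limiting case q = ∞, p = 1 of (45) and (47) can be sketched numerically; the transfer path used below is an invented first-order example with impulse response h_j = c·a^j, and the input bound M is an assumption:

```python
# Sketch of l_1 bound-based scaling, the q = inf, p = 1 case of (45):
# if every input sample satisfies |v_j| <= M, then |x_i,k| is bounded
# by M * sum_j |h_i,j|.  The path here is an invented first-order
# example with impulse response h_j = c * a**j.
def l1_bound(M, a, c, terms=500):
    # absolutely convergent for |a| < 1, as (47) requires
    return M * sum(abs(c * a ** j) for j in range(terms))

M = 1.0           # assumed input bound |v_j| <= M
a, c = 0.9, 0.5   # pole and gain of the example path
bound = l1_bound(M, a, c)
print(bound)      # close to M * |c| / (1 - |a|) = 5.0
```

Choosing the scale factor s_x,i equal to this bound guarantees |x_scaled| ≤ 1 in fractional arithmetic, at the price of possible pessimism.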
The norm-based bounding techniques outlined above work
in the time domain. Similar techniques are available in the
frequency domain based on function space norms, in which case
frequency response and input signal spectrum assumptions are
involved (Hwang, 1975a, 1975b; Moroney, 1983; Ahmed and
Belanger, 1984a).
A strong point of bound-based scaling techniques is that
absolutely worst-case (although perhaps conservative) scaling is
possible, whereas with simulation the safety of scaling depends
on how well the worst-case situations have been anticipated. It
was interesting to the author of this survey to check the degree
of conservatism in the simple control system example given by
Ahmed and Belanger (1984a). They used l_1 norm-based scaling
assuming the reference signal to be absolutely bounded by M.
It turned out that the l_1 worst-case scaling was not very
conservative at all. Compared with a reference step input of
value M, over-scaling was only about 50%. Since the given data
wordlength of a processor may be sufficient to allow for worst-case
scaling, it is attractive to let an automatic scaling algorithm
perform this. The control engineer then needs only to supply
bounds on input signals. Experience with such scaling applied
to the controller alone (open loop) indicates that even this
simple automatic scaling method yields good results for stable
controllers or controller subsystems.
Note that in the discussion given above, as well as in the
literature, only single input signals have been considered, whereas
several signals might act on plant and controller simultaneously
in reality. With scaling this is accommodated by computing, for example,

|x_i,k| ≤ Σ_ν M_ν Σ_{j=0 ... k} |h_i,j,ν|    (48)

where M_ν are the bounds on the individual input signals, and
{h_i,j,ν} is the impulse response sequence for an impulse at the
νth input.
6.3. Scalar product scaling. From the discussion in Section 4,
it is not sufficient to scale only the state vector in the case of
other than two's complement arithmetic. Partial sum overflow
during scalar product evaluation also has to be avoided. With
two's complement arithmetic the same is necessary if it cannot
be ensured that x_scaled is overflow free, and the saturation value
should be taken if overflow occurs. The scalar product scaling
as discussed in Section 4 can be used for this purpose, leading
to computation of an intermediate downscaled state

(49)

(50)

which must be rescaled

(51)

to yield the new scaled state. This rescaling operation has to be
performed by the target processor, whereas downscaling in (49)
only modifies the coefficients of the matrices supplied before
target processor programming.
A similar situation arises with the output computation (43).
With practical control systems it is very likely that outputs of
the controller will saturate in certain states of operation. This
is a very common case, for instance, with drives and positioning
mechanisms. In order to be able to determine correct output
saturation it is necessary to perform scalar product scaling here,
i.e. first to compute a downscaled overflow-free version of the
output vector and then to rescale it using a saturation overflow
mechanism.
This downscaling procedure also conveniently scales down
the coefficients (matrix elements) of the output equation, which
are very often quite large. Values in the hundreds are not
uncommon here when fractional arithmetic is used, in which
case coefficients should not exceed the range -1 ... +1 (at least
not much; small integer parts may still be realized using multiple
adds and subtracts). The reason for large coefficients here is that,
in contrast to digital filters, controller transfer functions frequently
have gains far above unity. Since the state vector will be scaled
so that it fits into the number range, high gains consequently
show up in the output equation in the C_s matrix. The direct
feedthrough matrix might also have large coefficients, particularly
with controllers of PD type where a step input immediately
produces a large output. The scalar product scaling technique
has been implemented to be carried out automatically in an
automatic code generator for a certain signal processor by Loges
(1984), Hanselmann and Loges (1984), and Loges (1985).
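The downscale/rescale-with-saturation procedure for one output row can be sketched as follows (this is an illustrative sketch in floating point, not the generator just cited; the coefficients and the power-of-two downscaling policy are assumptions):

```python
# Sketch of scalar product scaling for one output row: the row is
# downscaled by a power of two so no partial sum can leave the
# fractional range, and the result is rescaled with saturation, which
# also realizes the usual output limiting.  Coefficients are invented.
def scaled_row_product(row, x, limit=1.0):
    # choose a power-of-two downscale so sum(|row|) <= 1 after scaling
    shift = 0
    while sum(abs(c) for c in row) / (1 << shift) > 1.0:
        shift += 1
    row_ds = [c / (1 << shift) for c in row]          # done off-line
    acc = sum(c * xi for c, xi in zip(row_ds, x))     # overflow-free
    y = acc * (1 << shift)                            # rescale on target
    return max(-limit, min(limit, y))                 # saturate

row = [120.0, -35.0, 8.0]      # typical large output-equation gains
x = [0.004, 0.01, -0.02]
print(scaled_row_product(row, x))
```

Here sum(|row|) = 163 forces a downscale by 256, so every partial sum of the scaled row stays inside -1 ... +1 regardless of the state values.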
7. Programming
In any case where computation speed is not crucial and a
common and well-supported general microprocessor is used as
target, programming of controllers as introduced in Section 2
should not cause problems. Common general high-level languages
(HLL) can then be used, along with convenient floating-point
arithmetic. It is, however, necessary to account for real-time
operation.

7.1. Multi-tasking and languages. Where the controller is the
only task for a dedicated processor, timing is sometimes achieved
through simply polling a status signal ("ADC ready" for example)
of a peripheral which is under timer control, leaving the
processor idle while waiting. If there is something more useful
for it to do instead of waiting, a foreground/background solution
would be better. In that case, the background job is interrupted
by a real-time clock whenever the controller has to be served.
This type of real-time operation is quite primitive, but may be
appropriate for simple systems and is widely used (for an
example see Clarke, 1982). It works with HLLs even if they were
not originally designed for real-time operation, provided the
machine code generated by the compiler is re-entrant. This
means that routines which are used in both foreground and
background (such as library routines) can be interrupted, reused,
and resumed without errors due to altered local variables.
In a multi-rate system, for example, composed of several
subsystems, the situation is a bit more complicated, since
modules executed at a slower rate have to be interrupted to
let high-rate modules be serviced. Foreground/background
operation then becomes clumsy. If additionally asynchronous
events occur, or if synchronization problems have to be solved,
a multi-tasking executive becomes more and more necessary.
There are ways of staying with HLLs, though, because some
languages have at least basic real-time operations support built
into them, such as Modula-2, some versions of Pascal, and
Forth environments, or real-time facilities are achievable


FIG. 12. Automatic code generation.

through widely available real-time operating system kernels,
available to interface with C or Pascal programs for example
(Evanczuk, 1983; Ready, 1984; Heider, 1982). Although real-time
executives (operating system kernels) are quite an effective
means of achieving multi-tasking, they usually require considerable
processor execution time for task management. Switching
from one task to another may easily take around 100 μs and
more, even with a modern 16-bit processor. So, if appropriate,
more primitive means might be the choice.
A problem with HLLs is that they most often only support
integer and floating-point arithmetic. If the latter is too slow,
fractional arithmetic would be an alternative, but one might be
forced to program the equation evaluation parts in assembly
language. Emulation of fractional arithmetic through integer
computations is possible but with a loss of speed. It is interesting
to note that there are Forth language environments which
include not only multi-tasking (Pountain, 1985) but also fractional
arithmetic. This backs the claim of Forth advocates that
this environment is well suited to real-time control, at least for
small systems.
The lowest level of programming is of course the use of
assembly language. In most cases it is chosen for reasons of
speed. With a modern microprocessor the assembly code for
implementing a controller can be quite concise owing to powerful
instruction sets. For examples of coded digital filters see Phillips
and Nagle (1984), where subroutines and loops have been used.
Maximum speed is obtained if straight code without loops and
subroutines is used, because then there is no associated overhead.
Straight code, however, contradicts what is considered good programming
style. A satisfying solution to this could come from automatic
program generators. Such a generator would generate tailored
code once the type, dimensions, and numerical values of a
controller from Section 2 were known (Fig. 12), and should be
fairly easy to write for a general microprocessor.
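The generator idea can be illustrated with a toy sketch (this is not the tool discussed in the text): given numerical matrices, it emits straight-line code for y = C x + D u with every multiply-accumulate written out, no loops and no subroutines; the emitted language here is Python itself so the result can be executed directly:

```python
# Toy illustration of a straight-code generator for y = C x + D u.
# Matrix values are invented; zero coefficients are simply dropped.
def generate(C, D):
    lines = []
    for i in range(len(C)):
        terms = [f"({c!r})*x[{j}]" for j, c in enumerate(C[i]) if c != 0.0]
        terms += [f"({d!r})*u[{j}]" for j, d in enumerate(D[i]) if d != 0.0]
        lines.append(f"y[{i}] = " + " + ".join(terms or ["0.0"]))
    return "\n".join(lines)

C = [[0.5, -0.25], [1.0, 0.0]]
D = [[0.125], [0.0]]
code = generate(C, D)
print(code)

# executing the generated straight code
x, u, y = [0.4, 0.8], [1.0], [0.0, 0.0]
exec(code)
print(y)   # same result as computing C x + D u directly
```

A real generator would of course emit target assembly and additionally handle scaling, overflow management, and coefficient storage, as described below for the TMS32010 tool.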
7.2. Code generation. The generator concept has repeatedly
been applied to signal processor programming for digital filtering
and related tasks (Schafer et al., 1984; Mintzer et al., 1983;
Skyttä et al., 1983; Herrmann and Smit, 1983), and also for
controller implementation (Hanselmann, 1982; Hanselmann and
Loges, 1983, 1984; Loges, 1984, 1985). An interesting project
aimed at automatic program generation for a microcontroller
(the 8096 from Section 3) has been described by Srodawa et al.
(1985). Since the starting point is a language description of the
computations to be performed, this tool is more a compiler than
a code generator. Compilers translating high-level descriptions
into signal processor code are also emerging commercially, both
for special signal processing languages and for suitably modified
general HLLs, such as Pascal or C (Marrin, 1985).
An early control-related generator for the Intel 2920 signal
processor developed by Hanselmann (1982) was aimed at MIMO
controllers in the form of (6). Good experiences with this tool
later led to application of the code generator concept to the
TMS32010. A brief description of this generator follows as an
example of what can be expected from implementation tools at
the programming end today. Details on internals can be found
in Loges (1985), and details on how to use it in Hanselmann
and Schwarte (1985).
The generator is aimed at implementation of MIMO controllers
and accepts the four matrices of (6) as its input. Output is a
mnemonic assembly language program, which can be assembled
and downloaded to the target. Because everything is automatic,
less attention needs to be paid to the readability and length of

the program (as long as there is sufficient program RAM, which
is usually the case), and straight code without unnecessary loops
and without subroutines is generated to increase speed. The
generator also copes with data RAM limitations. If it detects
lack of RAM it automatically trades program space against
data space by utilizing an immediate multiply instruction of the
target processor so that coefficient space is saved in the data
RAM. This instruction only accommodates 13-bit numbers, but
even in cases where there are too many more-than-13-bit
coefficients the generator finds a way out (Loges, 1983, 1985).
Another important option is extended precision arithmetic.
The generator provides this on two levels: extended coefficient
precision, and extended coefficient and signal variable precision.
With the special extended precision computation technique
realized by the generator, the controller of Section 3.3 and Table
3, for example, would run at 7 kHz with full precision (coefficients
and signal variables) instead of 31 kHz with single precision.
The generator also automatically provides overflow management
code along with rescaling of scalar products (see Section
4). Finally, the generator has a facility to include function code
automatically. This is particularly useful for extending linear
controllers (for which the generator does coding) by non-linear
functions

destination := f(destination, states, inputs, aux. variables),

where destination can be a predefined variable such as a state or
output, or a user-defined variable used in another function call.
A major type of function code performs table lookup with or
without interpolation, leading to very fast non-linear function
computation. At present, the generator concept is even being
applied to tailoring such function code. For instance there is a
program which generates square-root function code according
to the user's specifications, such as argument range, table length
allowed, or precision desired.
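The table-lookup approach to such function code can be sketched as follows for the square-root case; table size and the argument range 0 ... 1 are invented specifications, and the table is computed rather than generated as target code:

```python
# Sketch of table-lookup function code with linear interpolation,
# for sqrt on [0, 1]; table length N is an assumed specification.
import math

N = 64
TABLE = [math.sqrt(i / N) for i in range(N + 1)]   # built off-line

def sqrt_lookup(x):
    """Approximate sqrt on [0, 1] from the table; no sqrt at run time."""
    pos = x * N
    i = min(int(pos), N - 1)
    frac = pos - i
    return TABLE[i] + frac * (TABLE[i + 1] - TABLE[i])

errs = [abs(sqrt_lookup(k / 1000) - math.sqrt(k / 1000))
        for k in range(1001)]
print(max(errs))   # worst error sits near zero where sqrt is steepest
```

The worst-case error concentrates near the origin, which is why specifications such as argument range and table length matter: a generator can place table breakpoints, or switch to interpolation-free lookup, according to the precision actually required.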
Automatic code generation seems to be a viable means of
achieving application-specific code with about the same
efficiency as an expert programmer coding by hand would
attain. This is particularly valuable for target processors with
non-standard architectures and instruction sets, such as special
signal or custom-design processors.
Code generation as just discussed is aimed at production of
optimal assembly code, but generation of HLL code should also
be mentioned. It helps in translating application-oriented
descriptions of a controller, for example in the form of a block
diagram with transfer functions, into general programming
languages. A recent example is the RT BUILD facility of the
MATRIXx CACE package (Shah et al., 1985), which generates
ADA language source code for controller implementation.
8. Simulation of digital control systems
In any realistic control design and implementation project,
digital simulation is an invaluable tool. This applies even more
to digital control. As mentioned earlier, simulation is useful in
the determination of scale factors. It can also reveal effects
due to quantization, overflows, spectrum aliasing, and nonsimultaneous sampling and output. Such effects are only partIy
emanable to limited analysis. Very few publications addressing
the problems of digital simulation of digital control systems
have appeared up to now, although there are indeed several
problems, as discussed briefty below. They fall mainly into two
groups: efficient simulation/integration methods, and modelling.
If the plant is linear (a rare case), a simulation based on
transition matrix techniques could be augmented by the modelling of delays (computational, sampling, and output) and quantizers, including overflow simulation if necessary. The usual case,
however, will involve additional non-linearities in the continuous
part of the control system, so that general integration methods
for differential equations must be used. This results in certain
peculiarities:
(a) Integration is on the continuous system state only, but the
state derivatives depend on the discrete system's outputs,
which are held constant between update time instants.
(b) For the sake of accuracy, the integration step boundaries
should be made coincident with the controller's sampling
and output time instants. This may dictate a small step size
with variable step-size integration.
(c) The discrete system introduces discontinuities due to staircase functions. Discontinuities force multi-step integration
into restart, and since this occurs many times, such integration
methods may become inefficient. Fortunately, the time
instants at which discontinuities occur are known (as long
as there are no sources other than the staircase function
output), so if (b) is satisfied there is no need to perform the
discontinuity-finding operations (Hay, 1984, 1985) familiar
from problems where discontinuity occurrence is state dependent.
(d) Integration of slow subsystems with larger step size (so-called multi-rate simulation (Gear, 1984)) may seem to
provide a solution to the small step size problem. It requires
interpolation in order to provide the samples for the controller, and decimation at fast-to-slow system interfaces which
should prevent aliasing effects. Fidelity, for instance with
respect to limit cycles due to quantization, must be questioned. In fact, this field seems to be largely unexplored.
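Points (a) and (b) can be made concrete with a minimal Python sketch (the names are illustrative, not taken from any package mentioned here): the controller output is held constant over each sampling period, and integration step boundaries coincide with the sampling instants because an integer number of fixed RK4 substeps spans each period.

```python
def simulate(plant_deriv, controller, T_samp, t_end, x0, substeps=10):
    """Simulate a continuous plant under a sampled controller.
    The controller output u is held constant (zero-order hold) between
    sampling instants; step boundaries coincide with those instants
    because each period is covered by an integer number of RK4 substeps."""
    h = T_samp / substeps
    x, t = x0, 0.0
    history = []
    while t < t_end - 1e-12:
        u = controller(x)             # sample the state, update the output
        for _ in range(substeps):     # integrate one sampling period
            k1 = plant_deriv(x, u)
            k2 = plant_deriv(x + 0.5 * h * k1, u)
            k3 = plant_deriv(x + 0.5 * h * k2, u)
            k4 = plant_deriv(x + h * k3, u)
            x += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
            t += h
        history.append((t, x, u))
    return history

# First-order plant x' = -x + u under a sampled proportional law u = -2x.
traj = simulate(lambda x, u: -x + u,
                lambda x: -2.0 * x,
                T_samp=0.05, t_end=2.0, x0=1.0)
```

Because the derivative depends on the held value u, the discrete output enters the integration as a staircase input, exactly as described in (a) and (c).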
The points given are partly addressed by some known
simulation packages (for a survey of simulation software see
Cellier, 1983), such as MATRIXx (Shah et al., 1985) and SIMNON
(Astrom and Wittenmark, 1984). There are also commercial
packages to be mentioned such as ACSL by Mitchell and
Gauthier Inc. and CSSL-IV by Simulation Services. For (a) for
instance, there is a so-called DISCRETE section in ACSL which
combines with the DERIVATIVE section for the continuous
system, and (b) is satisfied because sampling and output instants
are placed in an event list supervised by the integration control
mechanism, which steers integration step boundaries to coincide
with any event. Point (b) can also be satisfied by choosing
appropriate integration routines.
The points discussed so far have to do with the event nature
of sampling and output and apply to discrete control. They are
also partly considered by Stirling (1983) and Zimmerman (1983).
Digital control additionally requires simulating AD- and DA-converters, quantizers in general, and possibly overflow behaviour, along with an interactive overflow detection mechanism.
Quantizers and limiters are usually available in the package
libraries, but not high-level constructs, although it seems possible
to build them up from primitives.
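Such a build-up from primitives can be sketched in a few lines; as an illustrative example (not any particular package's API), a two's-complement ADC model is just a uniform quantizer composed with a saturating limiter:

```python
def make_adc(wordlength, full_scale):
    """Model a two's-complement ADC: a uniform quantizer (round to the
    nearest code) followed by a limiter that saturates at the code range."""
    levels = 1 << (wordlength - 1)   # codes per polarity, e.g. 2048 for 12 bits
    q = full_scale / levels          # quantization step size
    def adc(v):
        code = round(v / q)                          # quantizer primitive
        code = max(-levels, min(levels - 1, code))   # limiter primitive
        return code * q
    return adc

adc12 = make_adc(12, 10.0)   # 12-bit converter with a +/-10 V input range
```

Replacing saturation by wraparound, or rounding by truncation, gives the other quantizer variants discussed earlier in the survey.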
A special purpose package where the user no longer deals
with quantizers and limiters, but "talks" to the program in
higher-level terms such as ADC-wordlength, two's complement
arithmetic, 32-bit accumulation, and the like (Hanselmann et
al., 1983) proved to be very useful. Simulation on the basis of
sufficiently detailed and realistic models abstracted from the
actual processor and its software should always be available. In
contrast to the processor simulators sometimes supplied by
processor vendors, mapping all registers, flags etc., and instructions of a specific device, the use of abstract models yields
processor independence. It also allows experiments (with arithmetic for example) which help determine what processor should
be used, regardless of availability of the processor or its
simulator. This will become particularly important for custom
control processor design.
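As one illustrative example of such an abstract model (the names below are hypothetical, not taken from the cited package), a few lines suffice to mimic a 16 x 16-bit two's-complement multiplier with 32-bit accumulation, independent of any specific processor:

```python
def wrap(v, bits):
    """Reduce an integer to two's complement of the given word length;
    overflow wraps around, as it does in typical accumulator hardware."""
    v &= (1 << bits) - 1
    return v - (1 << bits) if v & (1 << (bits - 1)) else v

def to_q15(x):
    """Quantize a real number in [-1, 1) to a 16-bit Q15 integer."""
    return wrap(int(round(x * 32768)), 16)

def mac(acc, a, b):
    """One multiply-accumulate step: a 16 x 16 -> 32-bit product added
    into a 32-bit accumulator (the result is Q30 for Q15 operands)."""
    return wrap(acc + a * b, 32)

# Accumulate 0.5 * 0.25 into an empty accumulator, read it back as Q30.
acc = mac(0, to_q15(0.5), to_q15(0.25))
result = acc / (1 << 30)   # 0.125
```

Experiments with wordlengths or saturation versus wraparound then require only changing this model, not any target-specific simulator.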
9. Conclusions

Controller implementation is a topic involving many disciplines at the same time, from processor technology and electronics through system theory aspects up to software engineering. Even in the rather restricted case of mostly linear control
there may be many problems when the idealizations of common
theory of algorithms and design methods no longer hold.
Some of the issues arising were already considered in the
old direct digital control days in the sixties. Stimulated by
microprocessor technology, these issues are once more arousing
interest. Some of the problems encountered still require further
work, and more experience should be gained to know which of
the methods prove to be practical.
Much could be gained by integration of all implementation-related tools into CACE software (and also hardware to some
extent) environments. All the steps necessary in the implementation process should be integrated into the CACE environment
and should be supported as much as possible by software tools
(Hanselmann and Loges, 1984; Hanselmann, 1986). Possibilities
range from the minimum of having consistent controller data
structures throughout the process up to the coding stage, to the
maximum of fully automatic structure selection, scaling, and
final code generation for the target processor, accommodating
complex controllers, composed of several subsystems, possibly
of the multi-rate type.
The advantages of extending CACE to control implementation are now well recognized by control engineers. This is
reflected in recent discussions of CACSD/CACE scopes given
by Spang (1985), Sutherland and Sonin (1985) and Powers
(1985). Designing such software is certainly not a trivial task
because of the many disciplines involved, the fast pace of
processor technology, and increasing control system complexity.
Acknowledgements-This work is much related to the work of
the control systems group at the author's institution, which is
headed by Prof. J. Lückel. The author is indebted to his
colleagues for fruitful discussions, particularly to W. Loges and
A. Schwarte for help with hardware and software. Thanks also
to Prof. K.-J. Astrom for support and advice in preparation of
the paper, and to the anonymous reviewers for their constructive
remarks.

References

Agarwal, R. C. and C. S. Burrus (1975). New recursive digital
filter structures having very low sensitivity and roundoff noise.
IEEE Trans. Ccts Syst., CAS-22, 921.
Ahmed, M. E. and P. R. Belanger (1984a). Scaling and roundoff
in fixed-point implementation of control algorithms. IEEE
Trans. Ind. Electron., IE-31, 228.
Ahmed, M. E. and P. R. Belanger (1984b). Limit cycles in fixed-point implementation of control algorithms. IEEE Trans. Ind.
Electron., IE-31, 235.
Ahmed, N. and T. Natarajan (1983). Discrete-time Signals and
Systems. Reston, Virginia.
Antoniou, A., C. Charalambous and Z. Motamedi (1983). Two
methods for the reduction of quantization effects in recursive
digital filters. IEEE Trans. Ccts. Syst., CAS-30, 160.
Astrom, K. J. (1983). Theory and applications of adaptive
control-a survey. Automatica, 19, 471.
Astrom, K. J., P. Hagander and J. Sternby (1984). Zeros of
sampled systems. Automatica, 20, 31.
Astrom, K. J. and B. Wittenmark (1984). Computer Controlled
Systems. Prentice-Hall, Englewood Cliffs, New Jersey.
Avenhaus, E. (1972). On the design of digital filters with
coefficients of limited word length. IEEE Trans. Audio Electroacoust., AU-20, 206.
Barnes, C., B. N. Tran and S. H. Leung (1985). On the statistics
of fixed-point roundoff error. IEEE Trans. Acoust. Speech Sig.
Process., ASSP-33, 595.
Barnes, C. W. (1979). Roundoff noise and overflow in normal
digital filters. IEEE Trans. Ccts Syst., CAS-26, 154.
Barnes, C. W. (1984). On the design of optimal state-space
realizations of second-order digital filters. IEEE Trans. Ccts
Syst., CAS-31, 602.
Barnes, C. W. and A. T. Fam (1977). Minimum norm recursive
digital filters that are free of overflow limit cycles. IEEE Trans.
Ccts Syst., CAS-24, 569.
Beliczynski, B. and W. Kozinski (1984). A reduced-delay sampled-data hold. IEEE Trans. Aut. Control, AC-29, 179.
Bertram, J. E. (1958). The effect of quantization in sampled-feedback systems. Trans. Am. Inst. Elec. Eng., 77-2, 177.
Blasco, R. W. (1983). Floating-point digital signal processing
using a fixed-point processor, presented at Southcon; also in
Signal Processing Products and Technology, Texas Instruments.
Boite, R. (1983). On the quantization of low-level signals: the
fixed-point case. Proc. Eur. Conf. Cct Theory and Design,
Stuttgart.
Bomar, B. W. (1985). New second-order state-space structures
for realizing low roundoff noise digital filters. IEEE Trans.
Acoust. Speech Sig. Process., ASSP-33, 106.
Bondarko, V. A. (1984). Discretization of continuous linear
dynamic systems. Analysis of the methods. Syst. Control Lett.,
5, 97.
Booth, A. D. (1951). A signed binary multiplication technique.
Q. J. Mech. Appl. Math., 4, 236. Also in Swartzlander, E. E.
(Ed.) (1980), Computer Arithmetic. Dowden, Hutchinson &
Ross, Stroudsburg, Pennsylvania.
Bose, N. K. (1983). Properties of the Qn-matrix in bilinear
transformation. Proc. IEEE, 71, 1110.
Breitzman, R. C. (1985). Development of a custom microprocessor for automotive control. IEEE Control Syst. Mag., May,
23.
Broussard, J. R., D. R. Downing and W. H. Bryant (1985).
Design and flight testing of a digital optimal control general
aviation autopilot. Automatica, 21, 23.
Büttner, M. (1977). Elimination of limit cycles in digital filters
with very low increase in the quantization noise. IEEE Trans.
Ccts Syst., CAS-24, 300.
Callahan, A. C. (1976). Random rounding: some principles
and applications. Proc. IEEE Int. Conf. Acoust. Speech Sig.
Process., Philadelphia.
Cappellini, V., A. G. Constantinides and P. Emiliani (1978).
Digital Filters and their Applications. Academic Press, London.
Cappello, P. R. (Ed.) (1984). VLSI Signal Processing. IEEE Press,
New York.
Cellier, F. E. (1983). Simulation software: today and tomorrow.
In Burger, J. and Y. Jarny (Eds), Simulation in Engineering
Sciences. Elsevier Science, Amsterdam.
Chan, D. S. K. (1978). Theory and implementation of multidimensional discrete systems for signal processing. Ph.D. Dissertation,
Mass. Inst. Technology.
Chong, Y. M. (1984). Data flow chip optimizes image processing.
Computer Design, 15 Oct., 97.
Claasen, T. A. C. M., W. F. G. Mecklenbrauker and J. B. H.
Peek (1975). Quantization noise analysis for fixed point digital
filters using magnitude truncation for quantization. IEEE
Trans. Ccts Syst., CAS-22, 887.
Clarke, D. W. (1982). A simple control language for microprocessors and its applications. Proc. IFAC Congr. Theory Applic.
Dig. Control, New Delhi.
Cole, B. C. (1985). Signal processing: a big switch to digital.
Electronics, 26 Aug., 42.
Crochiere, R. E. (1975). A new statistical approach to the
coefficient wordlength problem for digital filters. IEEE Trans.
Ccts Syst., CAS-22, 190.
Crochiere, R. E. and A. V. Oppenheim (1975). Analysis of digital
networks. Proc. IEEE, 63, 581.
Crowell, C. D. (1985). Floating-point arithmetic with the TMS
32020. Texas Instruments Application Report.
Curry, E. E. (1967). The analysis of round-off and truncation
errors in a hybrid control system. IEEE Trans. Aut. Control,
AC-12, 601.
Cushman, R. H. (1982). ICs and semiconductors. EDN, 16 July,
44.

Davies, E. (1985). Sample and hold-the key to fast A to D
conversion. Electronic Engng, Mar., 67.
Doyle, J. C. and G. Stein (1981). Multivariable feedback design:
concepts for a classical/modern synthesis. IEEE Trans. Aut.
Control, AC-26, 4.
Eckhardt, B. (1975). On the roundoff error of a multiplier. Arch.
Elektrische Uebertragungstechnik, 29, 162.
Edgar, A. D. and S. C. Lee (1979). FOCUS microcomputer
number system. Comm. ACM, 22, 166.
Eldon, J. and G. E. Winter (1983). Floating-point chips carve
out FFT systems. Electron. Des., Aug., 4.
Epstein, B. (1970). Linear Functional Analysis. Saunders, Philadelphia.
Essig, D., C. Erskine, E. Caudel and S. Magar (1986). A second-generation digital signal processor. IEEE Trans. Ccts Syst.,
CAS-33, 196.
Etzel, M. H. (1983). Logarithmic addition for digital signal
processing applications. Proc. IEEE Int. Symp. Ccts Syst.,
New York.
Evanczuk, S. (1983). Real-time OS. Electronics, 24 Mar., 105.
Fadden, E. J. (1984). The System 10 Plus: a major advance in
scientific computing. Proc. Conf. Peripheral Array Processors,
Boston.
Fettweis, A. (1972). On the connection between multiplier word
length limitation and roundoff noise in digital filters. IEEE
Trans. Cct Theory, CT·19, 486.
Fettweis, A. (1973). Roundoff noise and attenuation sensitivity
in digital filters with fixed.point arithmetic. IEEE Tra~s. Cet

Survey Paper
Theory, CT-20, 174.
Fettweis, A. (1974). On properties of floating-point roundoff
noise. IEEE Trans. Acoust. Speech Sig. Process., ASSP-22,
149.
Fettweis, A. (1984). Digital circuits and systems. IEEE Trans.
Ccts Syst., CAS-31, 31.
Flaherty, T. J. (1985). Building blocks stack up to high performance. Comput. Des., Feb., 161.
Flores, I. (1963). The Logic of Computer Arithmetic. Prentice-Hall, Englewood Cliffs, New Jersey.
Forsythe, W. (1983). Algorithms for digital control. Trans. Inst.
Meas. Control, 5, 123.
Forsythe, W. (1985). A new method for the computation of
digital filter coefficients. Simulation, 44, 23; 44, 75.
Franklin, G. F. and J. D. Powell (1980). Digital Control of
Dynamic Systems. Addison-Wesley, Reading, Massachusetts.
Frey, M. L. and F. J. Taylor (1985). A table reduction technique
for logarithmically architected digital filters. IEEE Trans.
Acoust. Speech Sig. Process., ASSP-33, 718.
Fromme, G. and M. Haverland (1983). Selbsteinstellende Digitalregler im Zeitbereich. Regelungstechnik, 31, 338.
Gambe, H., T. Ikezawa, N. Kobayashi, S. Sumi, T. Tsuda and
S. Fujii (1983). A general purpose digital signal processor.
Proc. Eur. Conf. Cct Theory Des., Stuttgart. VDE-Verlag,
F.R.G.
Gazsi, L. and Güllüoglu (1983). Discrete optimization in CSD
code. Proc. IEEE M ELECON, Athens.
Gear, C. W. (1984). The numerical solution of problems which
may have high frequency components. In Haug, E. J. (Ed.),
Computer Aided Analysis and Optimization of Mechanical
System Dynamics, Nato ASI Series, Vol. F9, Springer, Berlin.
Glesner, M., H. Joepen, J. Schuck and N. Wehn (1986). Silicon
compilation from HDL and similar sources. In Hartenstein
(Ed.), Advances in CAD for VLSI, Vol. 7. North-Holland,
Amsterdam.
Gold, B. and C. M. Rader (1969). Digital Processing of Signals.
McGraw-Hill, New York.
Goodwin, G. C. (1985). Some observations on robust estimation
and control. Proc. 7th IFAC Symp. Ident. Syst. Param. Est.,
York.
Gupta, A. and H. D. Toong (1983). Microprocessors-the first
twelve years. Proc. IEEE, 71, 1236.
Gupta, A. and H. D. Toong (1984). Microcomputers in industrial
control applications. IEEE Trans. Ind. Electron., IE-31, 2, 109.
Gupta, A. and H. D. Toong (1983). An architectural comparison
of 32-bit microprocessors. IEEE Micro, Feb., 9.
Haberland, B. L. and S. S. Rao (1973). Discrete-time models:
bilinear transform and ramp approximation equivalence.
IEEE Trans. Audio Electroacoust., AU-21, 382.
Hagiwara, Y., Y. Kita, T. Miyamoto, Y. Toba, H. Hara and T.
Akazawa (1983). A single chip digital signal processor and its
application to real-time speech analysis. IEEE Trans. Acoust.
Speech Sig. Process., ASSP-31, 339.
Hall, E. L., D. D. Lynch and S. J. Dwyer (1970). Generation of
products and quotients using approximate binary logarithms
for digital filtering applications. IEEE Trans. Comput., C-19,
97.
Halyo, N. and G. A. McAlpine (1971). A discrete model for
product quantization errors in digital filters. IEEE Trans.
Audio Electroacoust., AU-19, 255.
Hanselmann, H. (1982). Tischrechner programmiert Signalprozessor als digitalen Mehrgrossenregler. Elektronik, 31, 21, 134.
Hanselmann, H. (1984). Diskretisierung kontinuierlicher Regler.
Regelungstechnik, 32, 326.
Hanselmann, H. (1986). Einsatz Digitaler Ein-Chip-Signalprozessoren in der Mess- und Regelungstechnik. Bull. Schweizer
Elektrotechnischer Verein (to appear).
Hanselmann, H., R. Kasper and M. Lewe (1983). Simulation of
fast digital control systems. Proc. 1st Eur. Simulation Cong.,
Aachen, Informatik-Fachberichte 71, Springer, Berlin.
Hanselmann, H. and A. Schwarte (1985). Guide to the TMS
320 controller code generator, version 1.1. University of
Paderborn, Dept. Aut. Control in Mech. Eng.
Hanselmann, H. and W. Loges (1983). Realisierung schneller
digitaler Regler hoher Ordnung mit Signalprozessoren. Regelungstechnik, 31, 330.
Hanselmann, H. and W. Loges (1984). Implementation of very

fast state-space controllers using digital signal processors.
Proc. 9th IFAC WId Congr. Pergamon Press, New York.
Hartimo, I., K. Kronlof, O. Simula and J. Skytta (1986). DFSP:
A data flow signal processor. IEEE Trans. Comput., C-35, 23.
Hay, J. L. (1984). ESL-advanced simulation language
implementation. Proc. 84 UKSC Conf., Bath. Butterworths,
London.
Hay, J. L. (1985). Applications of ESL. Proc. 11th IMACS Wld
Congr., Oslo.
Heider, G. (1982). Let operating systems aid in component
design. Comput. Des., Sept.
Henrichfreise, H. (1985). Fast elastic robots: control of an elastic
robot axis accounting for nonlinear drive properties. Proc.
11th IMACS Wld Congr., Oslo.
Herrmann, O. E. and J. Smit (1983). A user-friendly environment
to implement algorithms on single-chip digital signal processors. Proc. EURASIP. Elsevier Science, Amsterdam.
Howe, R. M. (1982). Digital simulations of transfer functions.
Proc. Summer Simulat. Conf., La Jolla, California.
Hwang, K. (1979). Computer Arithmetic. Wiley, New York.
Hwang, S. Y. (1975a). Dynamic range constraint in state-space
digital filtering. IEEE Trans. Acoust. Speech Sig. Process.,
ASSP-23, 591.
Hwang, S. Y. (1975b). On monotonicity of Lp and lp norms.
IEEE Trans. Acoust. Speech Sig. Process., ASSP-23, 593.
Hwang, S. Y. (1977). Minimum uncorrelated unit noise in
state-space digital filtering. IEEE Trans. Acoust. Speech Sig.
Process., ASSP-25, 273.
Jacklin, S. A., J. A. Leyland and W. Warmbrodt (1985). Highspeed, automatic controller design considerations for integrating array processor, multi-microprocessor, and host computer
system architectures. Am. Control Conf., Boston, 1223.
Jackson, L. B. (1970a). On the interaction of roundoff noise and
dynamic range in digital filters. Bell Syst. Tech. J., 49, 159.
Jackson, L. B. (1970b). Roundoff-noise analysis for fixed-point
digital filters realized in cascade or parallel form. IEEE Trans.
Audio Electroacoust., AU-18, 107.
Jackson, L. B. (1976). Roundoff noise bounds derived from
coefficient sensitivities for digital filters. IEEE Trans. Ccts
Syst., CAS-23, 481.
Jackson, L. B. (1979). Limit cycles in state-space structures for
digital filters. IEEE Trans. Ccts Syst., CAS-26, 67.
Jackson, L. B., A. G. Lindgren and Y. Kim (1979). Optimal
synthesis of second-order state-space structures for digital
filters. IEEE Trans. Ccts Syst., CAS-26, 149.
Jacquot, R. G. (1981). Modern Digital Control Systems. Marcel
Dekker, New York.
Jaeger, R. C. (1982). Analog data acquisition technology. IEEE
Micro, Aug., 46.
Jain, R., J. Vandewalle and H. J. de Man (1985). Efficient and
accurate multiparameter analysis of linear digital filters using
a multivariable feedback representation. IEEE Trans. Ccts
Syst., CAS-32, 225.
Jaswa, V. C., C. E. Thomas and J. T. Pedicone (1985). CPAC: concurrent processor architecture for control. IEEE Trans.
Comput., C-34, 163.
Johnson, G. W. (1965). Upper bound on dynamic quantization
error in digital control systems via the direct method of
Liapunov. IEEE Trans. Aut. Control, AC-10, 439.
Johnson, G. W. (1966). Author's reply. IEEE Trans. Aut. Control,
AC-11, 333.
Jover, J. M. and T. Kailath (1986). A parallel architecture for
Kalman filter measurement update and parameter estimation.
Automatica, 22, 43.
Kaiser, J. F. (1966). Digital filters. In System Analysis by Digital
Computer. Wiley, New York.
Kallstrom, C. (1973). Computing exp (A) and integral exp (As)ds.
Report 7309, Lund Inst. Technol., Div. Aut. Control.
Kanade, T. and D. Schmitz (1985). Development of CMU direct-drive arm II. Proc. 1985 Am. Control Conf., Boston, p. 703.
Kaneko, T. and B. Liu (1973). On local roundoff errors in
floating-point arithmetic. Jl ACM, 20, 391.
Katz, P. (1981). Digital Control using Microprocessors. Prentice-Hall, Englewood Cliffs, New Jersey.
Katzenelson, J. (1962). On errors introduced by combined
sampling and quantization. IRE Trans. Aut. Control, AC-7,
58.
Kawamata, M. and T. Higuchi (1985). A unified approach to
the optimal synthesis of fixed-point state-space digital filters.
IEEE Trans. Acoust. Speech Sig. Process., ASSP-33, 911.
Kerckhoffs, E. J. H., B. Dobbelaere and G. C. Vansteenkiste
(1985). Some nonconventional digital computers in simulation. Proc. 11th IMACS Wld Congr., Oslo.
Kingsbury, N. G. and P. J. W. Rayner (1971). Digital filtering
using logarithmic arithmetic. Electron. Lett., 7, 56. Also in
Swartzlander (1980).
Kleinman, D. L. and P. K. Rao (1977). Continuous-discrete
gain transformation methods for linear feedback control.
Automatica, 13, 425.
Knowles, J. B. and E. M. Olcayto (1968). Coefficient accuracy
and digital filter response. IEEE Trans. Cct Theory, CT-15,
31.
Knowles, J. B. and R. Edwards (1965a). Effect of a finite-wordlength computer in a sampled-data feedback system. Proc.
IEE, 112, 1197.
Knowles, J. B. and R. Edwards (1965b). Finite word-length
effects in multirate direct digital control systems. Proc. IEE,
112, 2376.
Knowles, J. B. and R. Edwards (1966). Computational error
effects in a direct digital control system. Automatica, 4, 7.
Kung, S. Y. (1984). On supercomputing with systolic/wavefront
array processors. Proc. IEEE, 72, 867.
Kuo, B. C. (1980). Digital Control Systems. Holt, Rinehart and
Winston, Tokyo.
Kuo, B. C., G. Singh and R. Yackel (1973). Digital approximation
of continuous-data control systems by point-by-point state
comparison. Comput. Elect. Engng, 1, 155.
Kuo, B. C. and D. W. Peterson (1973). Optimal discretization
of continuous-data control systems. Automatica, 9, 125.
Kwakernaak, H. and R. Sivan (1972). Linear Optimal Control
Systems. Wiley, New York.
Lack, G. N. T. (1966). Comments on "Upper bound on dynamic
quantization error in digital control systems via the direct
method of Liapunov". IEEE Trans. Aut. Control, AC-11, 331.
Lang, J. H. (1984). On the design of a special-purpose digital
control processor. IEEE Trans. Aut. Control, AC-29, 195.
Lee, S. C. and A. D. Edgar (1977). The FOCUS number system.
IEEE Trans. Comput., C-26, 1167.
Leonhard, W. (1986). Microcomputer control of high dynamic
performance ac-drives-a survey. Automatica, 22, 1.
Liu, B. and T. Kaneko (1969). Error analysis of digital filters
realized with floating-point arithmetic. Proc. IEEE, 57, 1735.
Loges, W. (1983). Regelsysteme hoherer Ordnung mit dem
Signalprozessor TMS 320. Elektronik, 32, 25, 53.
Loges, W. (1984). Codegenerator erstellt Reglerprogramm für
den TMS 320. Elektronik, 33, 22, 154.
Loges, W. (1985). Realisierung schneller digitaler Regler hoher
Ordnung mit Signalprozessoren. Doctoral dissertation, University of Paderborn, Dept. Aut. Control in Mech. Eng.; also
VDI Verlag, Düsseldorf.
Long, J. L. and T. N. Trick (1973). An absolute bound on limit
cycles due to roundoff errors in digital filters. IEEE Trans.
Audio Electroacoust., AU-21, 27.
MacSorley, O. L. (1961). High-speed arithmetic in binary computers. IRE Proc., 49, 67. Also in Swartzlander, E. E. (Ed.)
(1980), Computer Arithmetic. Dowden, Hutchinson & Ross,
Stroudsburg, Pennsylvania.
Magar, S., E. Caudel, D. Essig and C. Erskine (1985). Digital
signal processor borrows from µP to step up performance.
Electron. Des., 21 Feb., 175.
Magar, S., S. J. Robertson and W. Gass (1985). Interface
arrangement suits digital processor to multiprocessing. Electron. Des., 7 March, 189.
Marrin, K. (1986). Six DSP processors tackle high-end signalprocessing applications. Comput. Des., 1 March, 21.
Marrin, K. E. (1985). VLSI and software move DSP techniques
into mainstream. Comput. Des., 15 Sept., 69.
McDonough, K., E. Caudel, S. Magar and A. Leigh (1982).
Microcomputer with 32-bit arithmetic does high-precision
number crunching. Electronics, Feb., 105.
Meisinger, R. and B. Lange (1976). Berücksichtigung der Rechnertotzeit beim Entwurf eines diskreten Regelungs- und
Beobachtungssystems. Regelungstechnik, 24, 232.
Middleton, R. H. and G. C. Goodwin (1985). Improved finite
word length characteristics in digital control using delta
operators. Dept. Electr. Comp. Eng. Report, Univ. of Newcastle,
Australia.
Miller, D. F. (1985). Multivariable linear digital control via
state-space output matching. Opt. Control Applic. Meth., 6,
13.
Mills, W. L., C. T. Mullis and R. A. Roberts (1978). Digital filter
realizations without overflow oscillations. Proc. IEEE Int.
Conf. Acoust. Speech Sig. Process., Tulsa, Oklahoma.
Mills, W. L., C. T. Mullis and R. A. Roberts (1981). Low roundoff
noise and normal realizations of fixed point IIR digital filters.
IEEE Trans. Acoust. Speech Sig. Process., ASSP-29, 893.
Mintzer, F., K. Davies, A. Peled and F. N. Ris (1983). The
real-time signal processor. IEEE Trans. Acoust. Speech Sig.
Process., ASSP-31, 83.
Mita, T. (1985). Optimal digital feedback control systems
counting computation time of control laws. IEEE Trans. Aut.
Control, AC-30, 542.
Mitchell, E. E. and R. Demoyer (1985). A versatile digital
controller algorithm incorporating a state observer and state
feedback. IEEE Trans. Ind. Electron., IE-32, 78.
Mitra, S. K., K. Hirano and H. Sakaguchi (1974). A simple
method of computing the input quantization and multiplication roundoff errors in a digital filter. IEEE Trans. Acoust.
Speech Sig. Process., ASSP-22, 326.
Moroney, P. (1983). Issues in the Implementation of Digital
Feedback Compensators. MIT Press, Cambridge, Massachusetts.
Moroney, P., A. S. Willsky and P. K. Houpt (1980). The digital
implementation of control compensators: the coefficient wordlength issue. IEEE Trans. Aut. Control, AC-25, 621.
Moroney, P., A. S. Willsky and P. K. Houpt (1981). Architectural
issues in the implementation of digital compensators. Proc.
8th IFAC Wld Congr., Kyoto.
Moroney, P., A. S. Willsky and P. K. Houpt (1983). Roundoff
noise and scaling in the digital implementation of control
compensators. IEEE Trans. Acoust. Speech Sig. Process.,
ASSP-31, 1464.
Mullis, C. T. and R. A. Roberts (1976). Synthesis of minimum
roundoff noise fixed point digital filters. IEEE Trans. Ccts
Syst., CAS-23, 551.
Mullis, C. T. and R. A. Roberts (1982). An interpretation of
error spectrum shaping in digital filters. IEEE Trans. Acoust.
Speech Sig. Process., ASSP-30, 1013.
Mullis, C. T. and R. A. Roberts (1984). Digital processing
structures for VLSI implementation. In Cappello, P. R. (Ed.),
VLSI Signal Processing. IEEE Press, New York.
Nagle, H. T. and V. P. Nelson (1981). Digital filter implementation on 16-bit microcomputers. IEEE Micro, Feb., 23.
Neuman, C. P. and C. S. Baradello (1979). Digital transfer
functions for microcomputer control. IEEE Trans. Syst. Man
Cybern., SMC-9, 856.
Nishimura, S., K. Hirano and R. N. Pal (1981). A new class of
very low sensitivity and low roundoff noise recursive digital
filter structures. IEEE Trans. Ccts Syst., CAS-28, 1152.
Nishitani, T., R. Maruta, Y. Kawakami and H. Goto (1981). A
single-chip digital signal processor for telecommunication
applications. IEEE Jl Solid State Ccts, SC-16, 372.
Orlandi, G. and G. Martinelli (1984). Low-sensitivity recursive
digital filters obtained via the delay replacement. IEEE Trans.
Ccts. Syst., CAS-31, 654.
Oppenheim, A. V. and A. S. Willsky (1983). Signals and Systems.
Prentice-Hall, Englewood Cliffs, New Jersey.
Oppenheim, A. V. and R. W. Schafer (1975). Digital Signal
Processing. Prentice-Hall, Englewood Cliffs, New Jersey.
Patney, R. K. and S. C. Dutta Roy (1980). A different look at
roundoff noise in digital filters. IEEE Trans. Ccts Syst., CAS-27, 59.
Pei, S. C. (1985). Comments on "Properties of the Qn-matrix in
bilinear transformation". Proc. IEEE, 73, 841.
Pei, S. C. and K. C. Ho (1984). Comments on "Adaptive digital
control implemented using residue number systems". IEEE
Trans. Aut. Control, AC-29, 863.
Peled, A. and B. Liu (1976). Digital Signal Processing. Wiley,
New York.
Peled, U. and J. D. Powell (1978). The effect of prefilter design
on sample rate selection in digital flight control systems. Proc.
AIAA Guid. Control Conf., Palo Alto, California.
Phillips, C. L. (1980). Using simulation to calculate floating-point quantization errors. Simulation, June, 207.
Phillips, C. L. and H. T. Nagle (1984). Digital Control Systems
Analysis and Design. Prentice-Hall, Englewood Cliffs, New
Jersey.
Pickvance, R. (1985). A single chip digital signal processor.
Electron. Engng, Feb., 53; March, 55; Apr., 87.
Pope, S., J. Rabaey and R. W. Brodersen (1984). Automated
design of signal processors using macrocells. In Cappello, P.
R. (Ed.), VLSI Signal Processing. IEEE Press, New York.
Pountain, D. (1985). Multitasking FORTH. Byte, March, 363.
Powers, W. F. (1985). Computer tools for modern control
systems design. IEEE Control Syst. Mag., Feb., 14.
Quarmby, D. J. (1984). Signal Processor Chips. Granada, London.
Quong, D. and R. Perlman (1984). Single-chip accelerators speed
floating-point and binary computations. Electron. Des., 15
Nov., 246.
Rabaey, J., S. Pope and R. W. Brodersen (1987). An integrated
automated layout generation system for DSP circuits. J.
Comput. Aided Des. (to appear).
Rattan, K. S. (1981). Digital redesign of existing multiloop
continuous control systems. Proc. Jt Aut. Control Conf.,
Charlottesville, Virginia.
Rattan, K. S. (1982). Digitalizing existing continuous-data control systems via "continuous frequency matching". Proc. IFAC
Symp. Theory Applic. Dig. Control, New Delhi.
Rattan, K. S. (1984). Digitalization of existing continuous control
systems. IEEE Trans. Aut. Control, AC-29, 282.
Rattan, K. S. and H. H. Yeh (1978). Discretizing continuous-data control systems. Comput.-Aided Des., 10, 299.
Ready, J. F. (1984). Operating systems conform to application
needs. Mini-Micro Systems, Dec., 137.
Rink, R. E. and H. Y. Chong (1979a). Performance of state
regulator systems with floating-point computation. IEEE
Trans. Aut. Control, AC-24, 411.
Rink, R. E. and H. Y. Chong (1979b). Covariance equation for
a floating-point regulator system. IEEE Trans. Aut. Control,
AC-24, 980.
Rojek, P. and W. Wetzel (1984). Mehrgrossenregelung mit
Signalprozessoren. Elektronik, 33, 16, 109.
Rubinfield, L. P. (1975). A proof of the modified Booth's
algorithm for multiplication. IEEE Trans. Comput., C-24,
1014.
Sandberg, I. W. (1967). Floating-point-roundoff accumulation
in digital-filter realizations. Bell Syst. Tech. J., 46, 1175.
Sandberg, I. W. and J. F. Kaiser (1972). A bound on limit cycles
in fixed-point implementations of digital filters. IEEE Trans.
Audio Electroacoust., AU-20, 110.
Sasahara, H., M. Kawamata and T. Higuchi (1984). Design of
microprocessor-based LQG control systems with minimum
quantization error. Proc. IECON '84, Tokyo.
Schafer, R., R. M. Mersereau and T. P. Barnwell (1984). Software
package brings filter design to PCs. Comput. Des., Nov., 119.
Scharf, L. L. and S. Sigurdsson (1984). Fixed point implementation of fast Kalman predictors. IEEE Trans. Aut. Control,
AC-29, 850.
Schittke, H. J. and R. Dettinger (1975). Simulation von linearen
zeitinvarianten Systemen bei stückweise linearem Verlauf des
Steuervektors. Regelungstechnik, 23, 422; 24, 27.
Schmidt, L. A. (1978). Designing programmable digital filters
for LSI implementation, Hewlett-Packard J., 29, 13, 15.
Schumacher, W. and W. Leonhard (1983). Transistor-fed AC servo drive with microprocessor control. Proc. Int. Power
Electron. Conf., Tokyo.
Shah, S. c., M. A. Floyd and L. L. Lehman (1985). MATRIX,:
control design and model building CAE capability. In Jamshidi, M. and C. J. Herget (Eds), Advances in Computer-Aided
Control Systems Engineering. North-Holland, Amsterdam.
Shaw, R. F. (1950). Arithmetic operations in a binary computer.
Rev. Sci. Instrum., 21, 687. Also in Swartzlander, E. E. (Ed.)
(1980), Computer Arithmetic. Dowden, Hutchinson & Ross,
Stroudsberg, Pennsylvania.
Shieh, L. S., Y. F. Chang and R. E. Yates (1982). Model
simplification and digital design of multi variable sampleddata control systems via a dominant-data matching method.

Proc. IFAC Symp. Theory Applic. Dig. Control, New Delhi.
Shoreys, F. (1982). New approach to high-speed high-resolution
analogue-to-digital conversion. lEE Electron. Power, Feb.,
175.
Simmers, C. and D. Arnett (1985). Specialized I/O and highspeed CPU yields efficient microcontroller for automotive
applications. IEEE Trans. Ind. ELectron., IE-32, 278.
Singh, G., B. C. Kuo and R. A. Yackel (1974). Digital approximation by point-by-point state matching with high-order
holds. Int. J. Control, 20, 81.
Sjoding, T. W. (1973). Noise variance for rounded two's complement product quantization. IEEE Trans. Audio Electroacoust., AU-21, 378.
Skytta, J., O. Hyvarinen, I. Hartimo and O. Simula (1983).
Experimental signal processing and development system.
Proc. Eur. Con! eCI Theory Des., Stuttgart.
Slaughter, J. B. (1964). Quantization errors in digital control
systems. IEEE Trans. Aut. Control, AC-9, 70.
Slivinski, Ch. and J. Borninski (1985). Control system compensation and implementation with the TMS3201 O. Texas Instru·
ments Application Report.
Smith, J. M. (1977). Mathematical Modelling and Digital Simulationfor Engineers and Scientist$. Wiley, New York.
Spang, H. A. (1985). Experience and future needs in computeraided control design. IEEE Control Syst. Mag., Feb., 18.
Sripad. A. B. and D. L. Snyder (1977). A necessary and sufficient
condition for quantization errors to be uniform and white.
I EEE Trans. Acoust., Speech Sig. Process., ASSP-2S, 442.
Srodawa, R. J., R. E. Gach and A. Glicker (1985). Preliminary
experience with the automatic generation of productionquality code for the Ford/Intel 8061 microprocessor. IEEE
Trans. Ind. Electron., IE-32, 318.
Steinlechner, S., E. Auer and E. Lueder (1983). A fast digital
signal processor without multipliers. Proc. Conf. Cct Theory
ECCTD, Stuttgart.
Stirling, R. (1983). Simulation of a digital aircraft flight control
system. Simulation, May, 171.
Strejc, V. (1981). State Space Theory of Discrete Linear Control.
Wiley, New York.
Sutherland, H. A. and K. L. Sanin (1985). Control engineers
workbench--a methodology for microcomputer implementation of controls. IEEE Control Syst. Mag., Feb., 22.
Swartzlander, E. E. (Ed.) (1980). Computer Arithmetic. Dowden,
Hutchinson & Ross, Stroudsburg, Pennsylvania.
Swartzlander, E. E. and A. G. Alexopoulos(1975). The signjlogarithm number system. IEEE Trans. Comput., C-24, 1238. Also
in Swartzlander, E. E. (Ed) (1980), Computer Arithmetic.
Dowden, Hutchinson & Ross, Stroudsberg, Pennsylvania.
Tabak, D. and G. J. Lipovski (1980). MOVE architecture in
digital controllers. IEEE Trans. Comput., C-29, 180.
Taetow, W. (1984). eM OS Bausteine fur mikroprogrammierbare
Signalprozesoren. Elektronik, 33, 22, 136, 23, 138.
Tan, C. and B. C. McInnis (1982). Adaptive digital control
implemented using residue number systems. IEEE Trans. Aut.
Control, AC-27, 449, 499.
Taylor, R. (1984). Signal processing with occam and the transputer. lEE Proc., 131,610.
Toong, H. D. and A. Gupta (1982). Evaluation kernels for
microprocessor performance analyses. Perform. Evaluat., 2, 1.
Vaidyanathan, P. P. (1985). On error-spectrum shaping in statespace digital filters. IEEE Tram•. ects Syst., CAS-32, 88.
Van Wingerden, A. J. M. and W. L de Koning (1984). The
influence of finite word length on digital optimal control.
IEEE Trans. Aut. Control, AC-29, 385.
Wal1ich, P. (1985). Toward simpler, faster computers. IEEE
Spectrum, Aug., 38.
Walrath, C. D. (1984). Adaptive bearing friction compensation
based on recent knowledge of dynamic friction. Autmatica,
20, 717.
Waser, Sh. and M. 1. Flynn (1982). Introduction to Arithmetic
for Digital Systems Designers. CBS College Publishing, New
York.
Weinstein, C. and A. V. Oppenheim (1969). A comparison of
roundoff noise in floating point and fixed point digital filter
realizations. Proc. IEEE,57, 1181.
Widrow, B. (1956). A study of rough amplitude quantization by
means of Nyquist sampling theory. IRE Trans. Gct Theory,

169

Survey Paper
PGCT-3, 266.

Widrow, B. (1961). Statistical analysis of amplitudtHIuantized
sampled-data systems. Tra .... AlEE, 79, 555.
Widrow, B. and E. Walach (1983). Adaptive signal processing
for adaptive control. Proc. IF AC Workshop Adapt. Syst.
Control Sig. Process., San Francisco.
Williamson, D. (1985). Finite wordlength design of digital
Kalman filters for state estimation. IEEE 7rans. Aut. Control,
AC.30,93O.

Willsky, A. S. (1979). Digital Signal Processing and Control and
Estimation Theory. MIT Press, Cambridge, Massachusetts.

170

Windsor, W. A. (1985~ IEEE floating point chips implement
DSP architectures. Comput. Des., Jan., 165.
Wittenmark, B. (1985). Sampling of a system with a time delay.
IEEE Trans. Aut. Control, AC..JO, 507.
Yackel, R. A., B. C. Kuo and G. Singh (1974). Digital redesign
'of continuous systems by matching of states at multiple
sampling periods. Automatica, 10, 105.
Yekutiel, O. (1980). A reduced-deiay sampled·data hold. IEEE
Trans. Auto. Control, AC-25, 847.
Zimmerman, B. G. (1983). MODEL S, a sainpled-data simul·
ation language. Simulation, May, 183.

The Programming Language DSPL
a problem oriented approach for
digital signal processing using DSP
Albert Schwarte and Herbert Hanselmann
dSPACE digital signal processing and control engineering GmbH
Paderborn, West Germany

Abstract
Digital signal processors (DSPs) are increasingly used in many application fields, such as motion
control systems and power conversion systems, due to their impressive computational performance. However, appropriate tools for programming such devices are still lacking, so
DSPs are mainly programmed in assembly language. The high-level language DSPL
introduced here has been developed with these typical application fields in mind, and the characteristic
hardware elements of DSPs have also been taken into account. This results in compilers capable of generating
extremely efficient code. Furthermore, DSPL's automatic scaling features simplify the programming of applications for DSPs with fixed-point arithmetic.

Introduction
For a few years now, digital signal processors
have been available as very powerful devices for
computationally intensive applications, possibly
demanding real-time performance. DSPs have
been developed primarily for signal processing
applications like filtering, speech analysis, data
communication and the like. Comparing the
mathematical algorithms used in these fields with
the algorithms used in modern multi-variable
control theory shows, however, that both application fields have to deal with many common
problems. Thus DSPs are increasingly used for
the implementation of complex control systems
and other industrial applications like motion
control systems, power conversion systems and
hardware-in-the-loop simulation systems.
DSPs are a very special class of microprocessors.
They typically contain hardware optimized to
carry out multiplications and accumulations.
Most DSPs are able to perform a multiplication
within a single machine cycle and perform the
accumulations of products in parallel. This leads
to extremely high throughput for the computation
of scalar products, a central element· of signal

processing algorithms. Another feature that distinguishes DSPs from conventional microprocessors is the Harvard architecture used by many
such devices. They usually have several separate
memory blocks connected to the CPU core with
multiple data and address busses. These data
paths can be used in parallel so that several
operands can be transferred at the same time.
Utilizing such specific DSP elements is nearly
impossible with conventional high-level programming languages like C or Pascal, because such
languages have no appropriate constructs which
allow a compiler writer to make use of these
elements. Another problem not addressed by
these languages is the lack of an appropriate data
type for DSPs using fixed-point arithmetic.
Fixed-point arithmetic is, however, still used by
most DSPs, especially the low-cost ones
embedded in products manufactured in large
quantities.

Reprinted, with permission, from PCIM, June 25-28, 1990.

Special features of DSPL
Nevertheless, most of the few high-level language
compilers available implement a more or less
comprehensive subset of the C programming
language. The Digital Signal Processing Language (DSPL) introduced here follows a more
problem-oriented approach. It has been developed
with the intention of being particularly useful for the
special application fields of digital signal processing that use DSPs for the implementation.
Especially for fixed-point DSPs, DSPL provides
extensive support by defining an appropriate data
type and automatic scaling features.

DSPL data formats
Standard DSPs like the first- and second-generation TMS 320 series use a 16-bit fixed-point data
format. Using this format for conventional
integer arithmetic leads to a quantization of 8 bits
for data and coefficients in order to avoid overflows when computing a product. Accumulation
of products, as required for a scalar product,
requires additional scaling so that the worst-case
sum of the partial products does not overflow the
integer value range. Using only 8 bits for the
representation of data and coefficients, however,
results in a very small number range with low
resolution. This is not acceptable for most industrial applications. As the same problem arises for
conventional microprocessors, system designers
have developed algorithms to perform floating-point arithmetic on fixed-point processors.
With the aid of floating-point arithmetic, an
arbitrary number range with arbitrary resolution
can be realized according to the number format
selected, but at the cost of greatly increased
execution time. This is also possible for DSPs, of
course, but using such a floating-point software
package reduces the DSP's performance so far
that conventional microprocessors combined with
hardware floating-point coprocessors seem more
attractive, at least for applications where the price
of the processors is not a primary issue.
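The operand restriction mentioned above can be checked with a short sketch (Python here purely for illustration; the targets discussed in this paper are fixed-point DSPs): the product of two n-bit signed integers needs up to 2n-1 bits, so keeping products inside a 16-bit integer range limits data and coefficients to roughly 8 bits each.

```python
# Sketch: why 16-bit integer arithmetic forces ~8-bit operands.
INT16_MIN, INT16_MAX = -2**15, 2**15 - 1

def product_fits_in_16_bits(a, b):
    """True if a*b is representable as a signed 16-bit integer."""
    return INT16_MIN <= a * b <= INT16_MAX

# 8-bit operands: worst-case product 127*127 = 16129 < 32768, so it fits.
# 9-bit operands can already overflow: 256*256 = 65536 > 32767.
```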
DSPL follows a third way, which can provide a
good compromise for most applications. It combines the speed of integer arithmetic with a
resolution of 16 bits for the above-mentioned
processors. To achieve this, DSPL provides the
fractional data format. Data are interpreted as
two's-complement numbers with the binary
point directly to the right of the sign bit (MSB), which
leads to a value range of -1.0 to +0.99996...

Obviously, the multiplication of two fractional numbers
can never overflow the fractional value
range, and it can be implemented easily since most
DSPs provide an accumulation register at least
twice as long as the data format used, e.g. 32 bits
for the TMS 320 series. The fractional format
allows all 16 bits to be used for the representation of
data, which results in a quantization good enough
for most applications. Only when accumulating
fractional numbers can the result overflow the
value range. This can be avoided on the one hand
by properly scaling data during preparation of the
implementation, and on the other hand by using
DSPL's automatic scaling features for the computation of scalar products. As the fractional data
format is just another interpretation of the binary
data, fractional arithmetic (except division) can be
implemented at the machine-instruction level,
which results in the same execution speed as
integer arithmetic. Fractional numbers are the
main vehicle for carrying out computations in
DSPL. They are supported by the compilers not
only for the computation of scalar products but
also for any other basic arithmetic expression,
including division.
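As a rough illustration of the fractional format described above (a Python sketch, not DSPL; the helper names are mine), a Q15 value is a 16-bit two's-complement integer interpreted as n / 2^15, and a fractional multiply takes the double-length product and shifts it back down:

```python
# Sketch of Q15 "fractional" arithmetic: 16-bit two's complement with the
# binary point directly to the right of the sign bit, range -1.0 .. +0.99997.
Q = 15

def to_q15(x):
    """Convert a float to Q15, saturating at the representable limits."""
    n = int(round(x * (1 << Q)))
    return max(-(1 << 15), min((1 << 15) - 1, n))

def q15_mul(a, b):
    """Fractional multiply: double-length product, rescaled back to Q15.
    The product of two fractions can never overflow the fractional range."""
    return (a * b) >> Q
```

For example, `q15_mul(to_q15(0.5), to_q15(0.5))` yields the Q15 representation of 0.25; only accumulation of many such products can leave the range, which is what the scalar product scaling below addresses.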
Besides the fractional format, a conventional
integer data type and boolean data are also
supported. In addition to the basic operations,
DSPL allows bitwise handling of integer variables with logical operators. This is especially
useful for manipulating hardware devices on the
bit level, particularly because variables can be
allocated at arbitrary physical addresses. Boolean
variables can be used in arbitrary expressions as
well. They are mainly useful for controlling
program flow in conjunction with if-statements.

Scalar product computation
Many digital signal processing algorithms consist
mainly of the computation of scalar products.
FIR filters and difference equations of controllers
or IIR filters provide good examples.

Implementing scalar products on processors with
fixed-point arithmetic is a cumbersome and error-prone task due to the scaling requirements. DSPL
supports the implementation of scalar products
by providing the necessary constructs on the
language level, including automatic scaling for
products of a coefficient vector and a variable
vector. Scalar product scaling guarantees that
- overflows can be detected and handled appropriately by saturation conditions simulated in software,
- coefficients outside the fractional value range can be realized,
- coefficient scaling can be performed automatically by the compiler.
Scaling of all coefficients is performed completely at
compile time. Only the necessary rescaling operations for the final result need to be done at
runtime. Rescaling is implemented by optimized
code constructs depending on the actual data.
Within scalar products, even coefficients outside
the fractional number range can be realized with
special code constructs. If scalar product scaling
is performed automatically by the DSPL compiler, a worst-case scaling is performed. Maximum
scaling values can optionally be specified by the
user in case they are already known from simulation or measurements, for example. Scalar product scaling can also be completely disabled. The
code necessary for rescaling can automatically
include instructions to test for overflows of the
scalar product result. Saturation conditions can
then be simulated by software upon request. A
special form of the scalar product statement
allows the implementation of a FIR filter with a
single DSPL statement. In this case the update of
the variable vector is performed in parallel to the
computation of the filter taps.
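The scaling scheme can be sketched in a few lines (Python for illustration only; the function names and the power-of-two scaling choice are my assumptions, not DSPL internals): coefficients outside the fractional range are divided by a power of two chosen ahead of time, the partial products are summed in a wide accumulator, and only the final result is rescaled, with optional saturation.

```python
# Illustrative sketch of scalar product scaling with simulated saturation.
import math

def scale_coefficients(coeffs):
    """Divide coefficients by a power of two so they fit the fractional
    range; this step corresponds to work done at compile time."""
    worst = max(abs(c) for c in coeffs)
    shift = max(0, math.ceil(math.log2(worst))) if worst >= 1.0 else 0
    return [c / (1 << shift) for c in coeffs], shift

def scalar_product(coeffs, xs, saturate=True):
    scaled, shift = scale_coefficients(coeffs)
    acc = sum(c * x for c, x in zip(scaled, xs))  # wide accumulator
    r = acc * (1 << shift)                        # rescale the final result
    if saturate:                                  # simulated saturation
        r = max(-1.0, min(0.99997, r))
    return r
```

A coefficient such as -14.141 (from the listing later in this paper) lies well outside the fractional range yet remains realizable this way, at the cost of a single rescaling of the result.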
High-level language compilers usually rely on
library routines for the computation of scalar
products, which simply execute a loop over all
elements of the vectors involved to compute the
sum of the partial products. However, this kind of
computation is very inefficient, particularly for
control algorithms, where sparse coefficient
matrices often have to be multiplied by variable vectors. This leads to the problem of loading the
processor with unnecessary code for multiplying
zeroes. Using appropriate transformations, the
number of non-zero coefficients can be minimized. DSPL never uses library routines but
generates the appropriate code in-line, depending
on the actual data. The code is extremely efficient
because all the information the compiler needs for
code generation is already known at compile
time. Not a single instruction is wasted on address computations or adjusting loop counters at runtime. Immediate instructions can often
be used to realize small coefficients, which leads
to very economical use of data memory, a very
scarce resource on some DSPs.

Block moves of data
Many DSPs contain special hardware provisions,
or at least efficient machine instructions, for moving a block of data in memory. Block
moves are required by many signal processing
algorithms to implement the z^-1 operation or to
move the data samples through a filter. Such
special elements can only be utilized by a compiler if an appropriate language construct is
defined. DSPL provides the update-statement for
this purpose. It allows a data vector to be copied to a
second one. Because the size of the vectors is
already known at compile time, code can be
generated without time-consuming instructions for address computations.
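What the update-statement does can be mimicked in a couple of lines (an illustrative Python sketch, not generated code): one vector is copied into another in place, realizing the delay of the state vector from one sampling step to the next.

```python
# Minimal sketch of the update-statement: dst := src, with the vector
# length fixed at compile time so no runtime address arithmetic is needed.
def update(dst, src):
    """Copy src into dst in place, like DSPL's update-statement."""
    dst[:] = src

xk1 = [0.1, 0.2, 0.3]   # states just computed
xk = [0.0, 0.0, 0.0]    # states of the previous step
update(xk, xk1)          # xk now holds the new states
```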

Realization of sampling systems
Digital signal processing systems often require
the algorithm to be carried out with a defined
sampling period. DSPL provides an appropriate
statement which allows the specification of the
required sampling period. The compiler generates
appropriate code to realize the sampling clock,
based on macros adaptable to the target hardware. Usually a timer capable of generating
interrupts will be used for this purpose. If a
hardware system contains several timers with

interrupt capabilities, even multi-rate systems can
easily be implemented.

DSPL language constructs
The following tables provide a summary of the
declarations and statements available in DSPL.
Short comments describe the meaning of each element.

Declaration
    Purpose

TYPE
    declaration of fractional data representation details

fractional, integer, boolean
    scalar data types; fractional data can also be declared as vectors; constants and variables are possible

SCPTYPE
    type declaration used for defining details of scalar product computations, such as automatic scaling and saturation handling

SCALABLE
    attribute of a fractional constant; allows the constant to be scaled by the compiler

ALTERABLE
    attribute of a fractional constant; allows a scalable constant to be included in scalar product computation; such a constant may be altered at runtime, as required by adaptive systems

AT
    address clause; allows specifying the physical address at which the declared object shall be allocated

INPUT / OUTPUT
    instructs the compiler to associate the declared variable with a physical input or output channel

EXTERNAL
    allows the declaration of formal procedure headers; external procedures must be implemented in assembly language

INTERRUPT
    instructs the compiler to associate this name with an interrupt source

RENAME
    declares an alias name for a component of a fractional vector

Table 1 : Declarations provided by DSPL

Statement
    Purpose

BEGIN
    start of the executable program body

ON ident DO
..
END INTERRUPT
    surrounds the interrupt service routine for an identifier declared as an interrupt source; any number of interrupt statements is possible

EVERY time DO
..
END EVERY
    surrounds the block of statements to be executed at regular time intervals; the time specified represents the sampling period of sampled-data systems

ACCUMULATE SCALPRO (ident)
..
END ACCUMULATE
    a complete scalar product with an arbitrary number of partial products is accumulated; the identifier references a scalar product type declaration

ACCUMULATE PRESCALPRO (ident)
..
END ACCUMULATE
    same as before, except that the accumulation register is pre-loaded with a full accumulator-length value

ACCUMULATE SCALPRO (ident) AND UPDATE ident
..
END ACCUMULATE
    special form of scalar product accumulation; allows efficient computation of FIR filters

INPUT
    a scalar or a vector of input variables is read from an I/O channel

OUTPUT
    a scalar or a vector of output variables is written to an I/O channel

UPDATE
    copies a variable vector to a second one

assignment
    the assignment statement allows the computation of arbitrarily complex arithmetic expressions

ABS * / + -
= /= < <= > >=
    operators defined for fractional operands

ABS NOT * / MOD + -
= /= < <= > >=
AND OR XOR
    operators defined for integer operands

NOT AND OR XOR
    operators defined for boolean operands

IF condition THEN
ELSIF condition THEN
ELSE
END IF
    the if-statement allows program flow to be controlled; any number of ELSIF parts is allowed; the ELSIF and ELSE parts are optional

LOOP
..
EXIT
..
END LOOP
    the loop-statement, in conjunction with the exit-statement, allows the implementation of any kind of program loop

FOR ident IN range LOOP
..
END LOOP
    the for-statement allows the implementation of loops with a determined number of repetitions; up-counting and down-counting loops are possible

procedure call
    allows the call of external procedures defined in the declarative section; actual parameters must be specified according to the formal procedure header declaration

in-line assembler
    assembly language statements may be inserted anywhere; access to DSPL variables by name is supported

Table 2 : Statements provided by DSPL

Hardware independence
A DSPL program is nearly independent of the
target hardware system. Each DSPL compiler can
support arbitrary hardware environments surrounding a particular target DSP. This great
flexibility is possible because every DSPL program is augmented by an environment description. This description instructs the compiler,
for example, which address ranges it may use for program and
data allocation. It also contains the
necessary connections between logical input and
output variables of the DSPL program and the
physical I/O channels. DSPL compilers are
open-ended with respect to all the language
constructs that depend on hardware characteristics.
They use macros for the implementation of input
and output and for the realization of the sampling
clock, for example. These macros can easily be
adapted to any target hardware system by the
user, which needs to be done only once.
The following table describes the information
contained in the environment description valid
for the DSPL compiler for TMS 320C25 DSPs.

PROCESSOR IS "TMS 320C25"
    declares the target processor; used by the compiler for consistency checks

PROGRAM SPACE OFF CHIP IS ...
    declaration of the memory section available for program code allocation

DATA SPACE ON CHIP IS ...
DATA SPACE OFF CHIP IS ...
    declaration of the memory sections available for data allocation

STACK SPACE IS ...
    declaration of the memory section available for stack allocation

CYCLE TIME IS ...
    declaration of the basic machine cycle; used for the computation of execution time statistics

PROGRAM MEMORY WAIT STATE IS ...
    number of wait states required by the target hardware when accessing external program memory; used for the computation of execution time statistics

DATA MEMORY WAIT STATE IS ...
    number of wait states required by the target hardware when accessing external data memory; used for the computation of execution time statistics

INTERRUPT ident IS VECTOR ...
    declares the connection between the DSPL name of an interrupt source and an actual hardware interrupt

INPUT SPECIFICATION IS
ident IS CHANNEL number USING macro
..
SEQUENTIAL
ident IS CHANNEL number USING macro
..
END INPUT
    declares the connection between the DSPL name of an input variable and a physical input channel; for each single input channel an appropriate macro can be used; optionally, sequential inputs can be used in case the target hardware prescribes a particular sequence for reading the input channels

OUTPUT SPECIFICATION IS
ident IS CHANNEL number USING macro
..
SEQUENTIAL
ident IS CHANNEL number USING macro
..
END OUTPUT
    declares the connection between the DSPL name of an output variable and a physical output channel; for each single output channel an appropriate macro can be used; optionally, sequential outputs can be used in case the target hardware prescribes a particular sequence for writing the output channels

Table 3 : Elements of the environment description

Compiler output
A DSPL compiler generates completely documented assembly language source files, which a
user might optionally try to optimize further. After
assembling the program, it can be downloaded to
the target hardware and is ready for execution.
Complete statistical information is also generated. This includes a detailed cross-reference listing showing allocation information for code and
data sections. More interesting, however, is that
the compiler also computes execution time statistics as far as possible. The cross-reference listing
contains information about the execution time
requirements of the block statements and computes the processor load based on the requested
sampling rates. The assembly language source
listing contains information about the machine cycles used by the code generated for each
single DSPL statement. These statistics even
take into account such issues as the influence of the wait states
required by the target hardware for access to
different memory sections. In the case of programming errors, the compilers generate a source
listing with interspersed error messages giving
detailed information about the errors detected.

Depending on the program compiled, the DSPL
compilers compile from several hundred to several thousand lines of code per minute on typical
PCs.
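The processor-load figure the compiler reports follows from simple arithmetic (my back-of-the-envelope sketch, not actual compiler output): the load is the execution time of the code inside a sampling block divided by the requested sampling period. For example, a 7.3 µs routine at a 100 µs (10 kHz) sampling period loads the DSP by about 7 percent.

```python
# Sketch: processor load as reported per block-statement.
def processor_load(exec_time_s, sampling_period_s):
    """Fraction of each sampling interval consumed by the block's code."""
    return exec_time_s / sampling_period_s

load = processor_load(7.3e-6, 1.0e-4)  # 7.3 us of code, 100 us period
```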

Development system
Currently, DSPL compilers for the TMS 320C25
DSP and the TMS 320C1x DSP family are
available. Although they can be used stand-alone
as powerful development tools, they can also be
used in conjunction with a complete development
system primarily designed for the realization of
control systems. This development system consists of additional software and hardware components using PC-AT class machines as the host. The
IMPEX software supports all the necessary steps
for the preparation of linear multi-variable control systems prior to the implementation. Starting
from differential or difference equations, IMPEX
supports discretization, scaling, structure transformation, simulation of closed-loop systems
(including the effects of DSP arithmetic and A/D and
D/A converters) and the generation of the appropriate DSPL program. On the DSPL level, any
non-linear extensions can be added to the program. This can be supported by the NMAC tool,
which can generate optimized table-lookup-based
external DSPL procedures for the implementation
of arbitrary one-dimensional non-linear functions. After assembling the assembly language
source file resulting from the DSPL compilation,
the object code can be downloaded to the target
hardware, where it can be examined with a
powerful real-time TRACE module. This module
works on the system level rather than on the
machine instruction level and is capable of
displaying the time response of arbitrary variables. Sophisticated hardware systems built
around the TMS 320 family DSPs, including the
new TMS 320C30 floating-point DSP (which is
programmed in C rather than DSPL), augmented
by powerful peripheral boards for analog and
digital I/O and incremental encoder interfaces,
support the automatic implementation of standard
applications, often within minutes, by providing
completely software-controlled board setups, for
example.

Examples and applications
A large number of applications have already
been realized using DSPL as the programming
language. Some examples are described below in
order to give an impression of the computational performance of DSPs and of the quality of the
code generated by the DSPL compilers. Impressive sampling rates can be achieved even for
very complex applications.
The first example concerns a 3rd-order PD controller with a notch filter, as described by the
following equations.
x_k = | 0.333333   0.0         0.0       |           | -0.666667 |
      | 0.0        0.383240   -0.518211  | x_{k-1} + |  0.473315 | u_{k-1}
      | 0.0        0.252007    0.383240  |           |  0.587474 |

y_k = ( -16.723549   -13.152899   0.0 ) x_k + 0.098620 u_k

Assuming that all state variables are properly
scaled for the fractional number range, so that no
overflow test and saturation handling are required
for the states, and that overflow test and saturation handling are included for the output by
using scalar product scaling, a TMS 320C25
DSP can execute the code generated by the
DSPL25 compiler within 7.3 µs. The same
program compiled with the DSPL1X compiler
can be executed within 10.4 µs by a TMS
320E14 DSP. This does not include the time required for I/O and timer interrupt processing. The
corresponding DSPL program and excerpts from
the compiler-generated assembly language
source are presented below. The statistical information computed by the compiler is also presented.
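For readers without a DSPL toolchain, one update step of the 3rd-order controller can be sketched in floating point (Python for illustration; the matrix entries are transcribed from a scan of the equations above, and the output-equation timing is my reading of them, so treat the numbers as illustrative; a real DSPL implementation would use scaled fractional arithmetic, not floats).

```python
# Sketch: one step of x_k = A x_{k-1} + b u_{k-1}, y_k = c x_k + d u_k.
A = [[0.333333, 0.0,       0.0],
     [0.0,      0.383240, -0.518211],
     [0.0,      0.252007,  0.383240]]
b = [-0.666667, 0.473315, 0.587474]
c = [-16.723549, -13.152899, 0.0]
d = 0.098620

def step(x, u):
    """Advance the controller by one sampling period."""
    x_new = [sum(A[i][j] * x[j] for j in range(3)) + b[i] * u
             for i in range(3)]
    y = sum(ci * xi for ci, xi in zip(c, x_new)) + d * u
    return x_new, y

x, y = step([0.0, 0.0, 0.0], 1.0)  # unit-input response from zero state
```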
The second example represents a 9th-order state
controller with a Kalman filter, having 2 inputs and
one output. The controller was designed for a
disk drive (computer peripheral). As this controller includes an integrator, the corresponding
state variable is computed with saturation, using
scalar product scaling. Otherwise, the same assumptions apply as given above. A TMS 320C25
DSP can execute the necessary code within 19
µs. The execution time for a TMS 320E14 DSP
is 27.5 µs.
Other applications implemented with DSPL include the following (sampling rates are given for
a TMS 320C25).
Compliant articulated robot with electrical
drives: linear vibration damping / tracking
controller with 10 sensors, 3 motors and 9 reference
inputs, running at 20 kHz.
High-acceleration gantry-type robot with hydraulic drives: vibration damping / tracking
controller of order 10 (including Kalman filter
and non-linear compensation for hydraulic effects) for each single axis, with 1 sensor (position encoder), 1 motor and 3 reference inputs,
running at 10 kHz. Several axes can be served by
a single DSP.
Kalman-filter-based track-following control (see
the second example above). A notch-filter-based controller of 11th order for
the same application runs at > 30 kHz.
Vehicle control: various active suspension controllers of up to 40th order, running with sampling rates in the kHz range.
Hardware-in-the-loop simulation: a hydraulic
cylinder for an active vehicle suspension under test
and an actuating cylinder simulating the stress and
motion, both given in hardware. The DSP hardware system does the rest, i.e. it controls the
suspension and actuating cylinders, simulates
wheel and car body dynamics, and performs the
noise filtering for road surface simulation, all at
14 kHz.
Anti-skid braking (ABS) hardware-in-the-loop
simulation: four-wheeled non-linear vehicle
model of 18th order (11 mechanical degrees of
freedom) running at 6 kHz on a TMS 320C25.
Used to test and optimize ABS in the lab.
Simplified proportional-differential control and
plant identification: just to show a mixture of
DSPL constructs in an application program. The
sampling rate is > 20 kHz for a TMS 320C25.
The listings below show the DSPL program, the
associated environment description, the statistical information and excerpts from the code
generated by the DSPL25 compiler.

system specification controller_gain_ident is
    type fractional is
        fix' (bits => 16, fraction => 15, representation => twoscomplement);
    scptype state1 is
        fix' (acculength => 32, round => on, scale => on, saturation => on);
    scptype del is
        fix' (acculength => 32, round => on, scale => off, saturation => off);
    scptype out1 is
        fix' (acculength => 32, round => on, scale => common, saturation => on);
    a1           : scalable constant vector (1) of fractional := (0.333);
    b1           : scalable constant vector (2) of fractional := (0.330, -0.330);
    c1           : scalable constant vector (1) of fractional := (-14.141);
    d1           : scalable constant vector (2) of fractional := (7.699, -7.699);
    xk           : vector (1) of fractional;
    xk1          : vector (1) of fractional;
    u            : vector (2) of fractional;
    input is u;
    y            : vector (1) of fractional;
    output is y;
    temp1        : rawaccumulator;
    r_coeff      : scalable constant vector (1) of fractional := (17.2405);
    rk_del_coeff : scalable constant vector (3) of fractional := (0, 0, 1);
    cnt          : integer;
    lk           : fractional;
    rk_del       : vector (3) of fractional;
    rk           : fractional;
    yfk          : fractional;
    ufk          : vector (1) of fractional;
    yfk1         : fractional;
    gain_old     : fractional := 0.2;
    gain         : fractional;
    ginc         : fractional;
    a1_flt       : scalable constant vector (2) of fractional := (0.950, 0.074);
    a2_flt       : scalable constant vector (2) of fractional := (-0.017, 0.950);
    b1_flt       : scalable constant vector (1) of fractional := (0.067);
    b2_flt       : scalable constant vector (1) of fractional := (-0.046);
    c1_flt       : scalable constant vector (2) of fractional := (-0.671, -1.049);
    d1_flt       : scalable constant vector (1) of fractional := (9.379E-04);
    xk_flt1      : vector (2) of fractional;
    xk1_flt1     : vector (2) of fractional;
    u_flt1       : vector (1) of fractional;
    y_flt1       : vector (1) of fractional;
    temp1_flt1   : rawaccumulator;
    xk_flt2      : vector (2) of fractional;
    xk1_flt2     : vector (2) of fractional;
    u_flt2       : vector (1) of fractional;
    y_flt2       : vector (1) of fractional;
    temp1_flt2   : rawaccumulator;
begin
    every 1.0E-04 do
        -- controller
        update (xk1, xk);
        -- sample inputs
        input (u);
        accumulate prescalpro (out1)
            y(1) := temp1 + d1 * u;
        end accumulate;
        -- output to plant
        output (y);
        accumulate scalpro (state1)
            xk1(1) := a1 * xk + b1 * u;
        end accumulate;
        accumulate scalpro (out1)
            temp1 := c1 * xk1;
        end accumulate;
        -- identification
        u_flt1(1) := y(1);
        u_flt2(1) := u(2);
        -- low-rate identification
        cnt := cnt + 1;
        if cnt > 10 then
            cnt := 0;
            ufk(1) := y_flt1(1);
            yfk1 := yfk;
            yfk := y_flt2(1);
            lk := yfk - yfk1;
            accumulate scalpro (state1)
                rk := r_coeff * ufk;
            end accumulate;
            rk_del(1) := rk;
            -- FIR delay-line
            accumulate scalpro (del) and update rk_del
                rk := rk_del_coeff * rk_del;
            end accumulate;
            gain_old := gain;
            ginc := (lk - rk * gain_old) * rk;
            gain := gain_old + ginc + ginc;
        end if;
        -- high-rate lowpass filtering for gain identification
        -- input filter
        update (xk1_flt1, xk_flt1);
        accumulate prescalpro (out1)
            y_flt1(1) := temp1_flt1 + d1_flt * u_flt1;
        end accumulate;
        accumulate scalpro (state1)
            xk1_flt1(1) := a1_flt * xk_flt1 + b1_flt * u_flt1;
        end accumulate;
        accumulate scalpro (state1)
            xk1_flt1(2) := a2_flt * xk_flt1 + b2_flt * u_flt1;
        end accumulate;
        accumulate scalpro (out1)
            temp1_flt1 := c1_flt * xk1_flt1;
        end accumulate;
        -- output filter
        update (xk1_flt2, xk_flt2);
        accumulate prescalpro (out1)
            y_flt2(1) := temp1_flt2 + d1_flt * u_flt2;
        end accumulate;
        accumulate scalpro (state1)
            xk1_flt2(1) := a1_flt * xk_flt2 + b1_flt * u_flt2;
        end accumulate;
        accumulate scalpro (state1)
            xk1_flt2(2) := a2_flt * xk_flt2 + b2_flt * u_flt2;
        end accumulate;
        accumulate scalpro (out1)
            temp1_flt2 := c1_flt * xk1_flt2;
        end accumulate;
    end every;
end controller_gain_ident;

Listing 1: DSPL example program

environment "DS1001" is
    processor is "TMS 320C25";
    program space off chip is from 20h to 3fffh;
    data space on chip is from 200h to 3ffh;
    data space off chip is from 400h to 3fffh;
    stack space is from 60h to 7fh;
    cycle time is 100;
    program memory wait state is 0;
    data memory wait state is 0;
    input specification is
        u(1) is channel 0ee01h using ds2001 with start;
        u(2) is channel 0ee03h using ds2001 with start;
    end input;
    output specification is
        y(1) is channel 0ef0bh using ds2101;
    end output;
end environment;

Listing 2: Environment description


DSPL - cross compiler, Vs 2.01, MS-DOS, target CPU TMS 320C25
Copyright (C) 1988, 1989 by dSPACE GmbH

source file      : pcim.dsp
environment file : pcim.env
assembler file   : pcim.asm
xref file        : pcim.xrf
error file       : pcim.err

Compilation completed. No errors detected.

execution time requirements

task | cycles | rate (kHz) | time (us) | rqst (us) | use (%)
  1  |   431  |   23.202   |   43.100  |  100.000  |  43.10

total processor load : 43.10 %

 498 words of code (off-chip)
  45 words of data (on-chip)
  32 words stack  (on-chip)
 134 lines compiled
2323 lines / minute

Listing 3: Statistical information generated by the DSPL25 compiler
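The figures in Listing 3 follow from simple arithmetic: at the processor's 100 ns cycle time, 431 cycles take 43.1 us; against the requested 100 us period that is a 43.1 % load, and the inverse of the execution time gives the maximum invocation rate of 23.202 kHz. A small sketch of that bookkeeping (the function name is ours, not part of the compiler output):

```python
def task_stats(cycles, cycle_time_ns, requested_us):
    """Reproduce the execution-time columns of Listing 3."""
    time_us = cycles * cycle_time_ns / 1000.0      # execution time per task
    max_rate_khz = 1000.0 / time_us                # fastest possible rate
    load_pct = 100.0 * time_us / requested_us      # processor load
    return time_us, max_rate_khz, load_pct

time_us, rate_khz, load = task_stats(431, 100, 100.0)
```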
; line 113
        zac
        lt      _v6             ; xk_flt1(1)
        mpyk    -564            ; a2_flt(1)
        lta     _v7             ; xk_flt1(2)
        mpy     _c8             ; a2_flt(2)
        lta     _v18            ; u_flt1(1)
        mpyk    -1533           ; b2_flt(1)
        apac
        adlk    1, 14 - 0       ; perform rounding
        sach    *, 1            ; save result
                                ; overflow test and rescaling 0 bit
        sfl                     ; sign bit into carry flag
        bc      _120            ; branch if result < 0
        bgez    _121            ; branch if no positive overflow
        lalk    07fffh, 0       ; use positive saturation
        b       _122            ; update result
_120    blz     _121            ; branch if no negative overflow
        lalk    08000h, 0       ; use negative saturation
        b       _122            ; update result
_121    lac     *, 0            ; reload result
_122    sacl    _v5, 0
                                ; 24 cycles

; line 116
        zac
        lt      _v4             ; xk1_flt1(1)
        mpyk    -688            ; c1_flt(1)
        lta     _v5             ; xk1_flt1(2)
        mpyk    -1074           ; c1_flt(2)
        apac
        sacl    _v31, 0         ; temp1_flt1
        sach    _v31 + 1, 0     ; raw format
                                ; 8 cycles

; line 119
        blkd    00207h, _v10    ; xk1_flt2(1) --> xk_flt2(1)
        blkd    0020bh, _v11    ; xk1_flt2(2) --> xk_flt2(2)
                                ; 6 cycles

Listing 4: Excerpts from assembly language source code generated by DSPL25 compiler
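The overflow handling in the line-113 excerpt (round, then clamp to 07FFFh or 08000h) is the standard fixed-point guard that the compiler emits automatically. A sketch of the same round-then-saturate step in Python for readability; the Q15 operand format and the helper name are illustrative, not part of the DSPL25 output:

```python
def q15_mul_sat(x, y):
    """Multiply two Q15 fixed-point numbers (integers scaled by 2**15),
    rounding the Q30 product back to Q15 and saturating to the 16-bit
    two's-complement range, as in the generated code above."""
    acc = x * y                  # 32-bit product, Q30
    acc += 1 << 14               # rounding bit (cf. ADLK 1, 14)
    result = acc >> 15           # rescale to Q15
    if result > 0x7FFF:
        result = 0x7FFF          # positive saturation (07FFFh)
    elif result < -0x8000:
        result = -0x8000         # negative saturation (08000h)
    return result
```

Note that the only Q15 x Q15 product that actually overflows is (-1) x (-1), which the clamp maps to the largest positive value.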

Conclusions
General-purpose programming languages are not very suitable for signal processing applications because they lack appropriate language constructs. Taking the special problems of digital signal processing and the special features of DSPs into account when designing a programming language allows the implementation of compilers capable of generating extremely compact and efficient code. It also makes it possible to provide the user with powerful support in the area of scaling, which is particularly important when working with fixed-point processors.

Literature

H. Hanselmann and W. Moritz, "High Bandwidth Control of the Head Positioning Mechanism in a Winchester Disk Drive", IEEE Control Systems Magazine, pp. 15-19, October 1987.

H. Hanselmann and A. Engelke, "LQG-Control of a Highly Resonant Disk Drive Head Positioning Actuator", IEEE Transactions on Industrial Electronics, pp. 100-104, February 1988.

H. Henrichfreise, "The Control of an Elastic Manipulation Device Using DSP", Proceedings American Control Conference, Atlanta, Georgia, Vol. 2, pp. 1029-1035, June 15-17, 1988.

H. Hanselmann, "Digital Signal Processors in Motion Control", Proceedings International Workshop on Microcomputer Control of Electric Drives, Trieste, Italy, July 3-4, 1989.

H. Hanselmann and A. Schwarte, "Generation of Fast Target Processor Code From High Level Controller Descriptions", Proceedings 10th IFAC World Congress, Munich, 1987.


Application of Kalman Filtering in Motion Control Using the TMS320C25
Dr. S. Meshkat
The Control Group

One common problem in many industrial drive/control applications is sensing: sensing variables such as position, velocity, or current for the purpose of control. The task of sensing signals that truly represent the system variables is difficult, whether because of cost, imperfect sensors, or environmentally induced random noise. The result is a control loop with less than optimum performance. To perform proper control, one has to "estimate" all or some of the missing system variables from a measurement that may be corrupted by noise (like a noisy encoder or current sensor), taken from a system that is excited by a random external force such as a torque disturbance. The output of an optimum observer can be used in a feedback control system for the purpose of tracking or regulation.

But let's first define an estimation process. Estimation refers to the process of extracting information, unavailable for measurement for any reason, from the available data. This data may contain measurement error and may also be influenced by external random disturbance. Imagine, for instance, a radar antenna positioning application where wind acts as a random torque disturbance upon the motor shaft, a shaft whose position measurement is corrupted by random noise. In this application the observer or estimator must estimate the pure values for position and velocity. A Kalman filter is an optimum observer for these problems when the state excitation noise (i.e., the torque disturbance) and the observation noise (i.e., the encoder noise) are uncorrelated; in other words, the encoder noise is totally unrelated to the torque disturbance.

To present the idea of designing a Kalman filter, let's start with the model of a dc motor (see Appendix A, "Model of a dc Motor"):

    θ(s)/u(s) = Km / [s(Tm·s + 1)]                                  (1)

Since the filter is implemented in a digital control environment, we transfer this equation to the z domain:

    G(z) = α (z + b) / [(z - 1)(z - e^(-aT))]                       (2)

where a = 1/Tm, α = Km·Tm·(aT - 1 + e^(-aT)), and

    b = (1 - e^(-aT) - aT·e^(-aT)) / (aT - 1 + e^(-aT))
In terms of state space representation:

    [ θ(n+1) ]       [ θ(n) ]
    [ ω(n+1) ]  = A  [ ω(n) ]  + B u(n)                             (3)

         [ 1    (1 - e^(-aT))/a ]          [ Km(T - (1 - e^(-aT))/a) ]
    A =  [ 0    e^(-aT)         ]     B =  [ Km(1 - e^(-aT))         ]

Reprinted with permission from author.
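As a sanity check on the discrete matrices of equation (3), they can be computed numerically and compared against a fine-grained integration of the continuous motor model. The values Km = 2.0, Tm = 0.05 s, and T = 0.001 s below are purely illustrative:

```python
import math

def discretize(Km, Tm, T):
    """Zero-order-hold discretization of Km/(s(Tm*s + 1)), equation (3)."""
    a = 1.0 / Tm
    e = math.exp(-a * T)
    A = [[1.0, (1.0 - e) / a],
         [0.0, e]]
    B = [Km * (T - (1.0 - e) / a),
         Km * (1.0 - e)]
    return A, B

Km, Tm, T = 2.0, 0.05, 0.001
A, B = discretize(Km, Tm, T)

# Cross-check: Euler-integrate theta' = w, w' = (-w + Km*u)/Tm over one
# period with u = 1 and zero initial state; the result should equal B.
theta = w = 0.0
dt = T / 100000.0
for _ in range(100000):
    theta += w * dt
    w += (-w + Km * 1.0) / Tm * dt
```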


    u(n) = -F [ θ(n) ]
              [ ω(n) ]

Figure 1: State space representation of a motion control system with torque disturbance W and measurement noise V

Optimum Observer
The problem can be stated as follows: design an observer that uses the measurement, z(n), as well as the statistical information about the measurement noise, V(n), and disturbance, W(n), to optimally estimate the actual position and velocity.

The reconstruction of data must be based on a structure that penalizes the deviation of the estimator's output from the actual system output to correct the estimation process:

    x̂(n) = A x̂(n-1) + K [z(n) - H x̂(n-1)]                       (4)

where x̂(n) is the estimated vector of position and velocity, and H is the output vector (e.g., for position, H = [1 0]).
This is presented in figure 2.
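A minimal numerical sketch of the recursion in equation (4): the observer propagates its previous estimate through A and corrects it with the innovation z(n) - H·x̂(n-1). All numbers below (A, H, K, the initial estimates) are hypothetical, chosen only so that A - K·H is stable:

```python
A = [[1.0, 0.01],
     [0.0, 0.99]]      # hypothetical plant matrix
H = [1.0, 0.0]         # position is the measured output
K = [0.5, 2.0]         # hypothetical observer gain (A - K*H is stable)

def observer_step(xhat, z):
    """xhat(n) = A*xhat(n-1) + K*(z(n) - H*xhat(n-1)), equation (4)."""
    innov = z - (H[0] * xhat[0] + H[1] * xhat[1])
    return [A[0][0] * xhat[0] + A[0][1] * xhat[1] + K[0] * innov,
            A[1][0] * xhat[0] + A[1][1] * xhat[1] + K[1] * innov]

# Noise-free demonstration: measure the current true state, correct the
# estimate, then propagate both; the estimation error decays as (A - K*H)^n.
x = [0.0, 1.0]         # true state
xhat = [1.0, 0.5]      # deliberately wrong initial estimate
for _ in range(400):
    z = H[0] * x[0] + H[1] * x[1]
    xhat = observer_step(xhat, z)
    x = [A[0][0] * x[0] + A[0][1] * x[1],
         A[1][0] * x[0] + A[1][1] * x[1]]
```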

Figure 2: State space representation of optimum observer

Therefore the design problem can be simplified to finding the filter K. Designing K requires statistical information about the random disturbance and the measurement noise. This must be intuitively clear, simply because one could not imagine that any "proper" reconstruction would be possible without this information. For the disturbance, this statistical information can be obtained from knowledge of the torque disturbance intensity and the frequency range over which it is active. For the measurement noise, we need to know the rms value of the noise and its frequency range. To be more precise, this information helps us compute the "state variance matrix of reconstruction error" from which K can be extracted.

Let's assume our motor is disturbed by an external torque with an intensity of 12.5 N²m²s over the frequency spectrum of 0 - 30 Hz, and the position measurement is corrupted by noise with an rms value of 0.2 degrees which has a flat spectral density over a 350 Hz range.

Figures 3 (a) and (b) show the actual position and velocity of our motor shaft when the motor is driven by the torque disturbance only. That is, if we had perfect position and velocity sensors we could take measurements like those illustrated in figures 3 a and b.

Figure 3: (a) the actual motor shaft position (b) actual shaft velocity
However, the real measurement is totally distorted by random noise, making it appear as shown in figure 4. Figure 4 shows a position measurement that looks absolutely hopeless. You must remember that the noise corrupting our position measurement is not a high-frequency noise that may be filtered by a conventional low-pass filter. This noise spans a wide frequency range! The Kalman filter, however, can estimate the actual position (see figure 5) from the measurement signal (see figure 4). You may observe the perfect performance of this filter for velocity estimation as well (see figure 6). Our assumption is that the mean values of the random disturbance and measurement noise are both zero.


Figure 4: The measured position corrupted by noise

Figure 5: Estimated position using the optimum observer

Figure 6: Estimated velocity using the optimum observer

Optimum Control
The estimation process provides the feedback data useful for the purpose of control. Using optimum control theory we may use the output of an optimum observer in a state feedback control configuration. The state feedback controller multiplies a designed control gain matrix, F, by the output of our estimator, x̂(n), in order to compute the control signal, u(n). Like any control design, F must be designed such that it satisfies certain performance criteria. The performance criteria are dictated by the application. For example, in a punch press application, where achieving a fast response time is of crucial importance, we need a time-optimal control design. In machine tool applications, where the instantaneous position/velocity error must be minimized, a linear quadratic controller may be an optimum choice. Although the performance criteria will influence the design procedure for matrix F, the implementation process in a state feedback control algorithm remains the same.

Combining Observer and Controller
Let's now look at the combination of our optimum observer and state feedback controller using linear quadratic criteria. Again, we start with the actual position and velocity values depicted in figures 7(a) and 7(b). Figure 8 shows the noisy measurement signal. The idea is to design an estimator combined with a regulator that uses the measured position, estimates the variables, and uses them in a state feedback control for the purpose of position and velocity regulation. Figures 9(a) and 9(b) show the contrast between what was available to the controller and the regulated results. The impressive contrast shows the power of optimum control in motion control applications.

Figure 7: (a) actual motor shaft position (b) actual shaft velocity
Figure 8: Measured position signal corrupted by noise


Figure 9: (a) measured position vs regulated position (b) measured vs regulated velocity
In a DSP environment an optimum observer combined with a linear quadratic regulator can be implemented and run with a sampling period of less than 30 microseconds. This can be done by cascading the various blocks of our control algorithm. Control algorithms implemented by DSPs allow systems with imperfect sensors to achieve an impressive level of performance, performance that is not achievable with classical control techniques.

Design and Implementation of the Kalman Filter
To design a Kalman filter you may follow the steps discussed in the section "Theoretical Background..."; then you may proceed with the selection of a simulation program. Three of the more popular control simulation programs that run on an IBM PC are Matlab, Control C, and Matrix X.

We used Matlab for the design and simulation of the Kalman filter. The Matlab program starts with data entry for your system matrices. You are also required to enter the statistical information about the plant disturbance and measurement noise. The program will simulate and plot the actual position and velocity on your EGA screen. It discretizes your system using the sampling period it was initially provided with. From this information, the discrete, stationary Kalman gain is computed and used in an optimum state observer. The estimated position is plotted and contrasted with the measurement signal. (Please see appendix B.)

Hardware Setup
Once the design proves successful, you may readily convert your Kalman filter to a form that is implementable on the TMS320C25 processor. For our implementation experiment we used the setup shown in Figure 1. We used an IBM PC with an 80386 processor and 80387 co-processor to emulate the motor in real time. The C program on the PC was also responsible for the generation of the uncorrelated, normally distributed random disturbance and measurement noise. We used the IBM Data Communication Card to D/A and A/D our data. Through the communication card and the Texas Instruments AIB board we could connect our emulated system to a TMS320C25 processor board. Needless to say, the TMS320C25 processor board was responsible for the implementation of the Kalman filter and the initiation of the two A/Ds and one D/A with a sampling period of 1 ms on the AIB board.

The filter we implemented embodies the generic form of equation 4 in the section entitled "Application of Kalman Filtering in Motion Control Using DSPs." The state equations which were finally implemented are:


    x̂1(n+1) = x̂1(n) + a1·x̂2(n) + a2·yp
    x̂2(n+1) = b0·u(n) + b1·x̂2(n) + b2·yp

[Figure: Hardware setup. The IBM PC and Data Communication Board (with its own ADC and DAC) emulate the plant, inject the disturbance ud and measurement noise wm, and display the optimally observed state on a CRT; through the AIB board (ADC1, ADC2, DAC1, DAC2) they connect to the TMS320C25 processor board running the Kalman filter.]
where:
    a1 = 0.0020
    a2 = 0.1737
    b0 = 16.000
    b1 = 0.9905
    b2 = 0.0008

x̂1 is the estimated position
x̂2 is the estimated velocity
un is the input signal, u, plus the disturbance ud
yp is the measured signal corrupted by noise wm

(Please see appendix B)
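Before committing these difference equations to fixed-point DSP code, the recursion can be cross-checked in ordinary floating point. The sketch below uses the coefficient values listed above; the initial conditions and inputs are hypothetical:

```python
# Floating-point sketch of the implemented filter equations:
#   x1(n+1) = x1(n) + a1*x2(n) + a2*yp(n)
#   x2(n+1) = b0*u(n) + b1*x2(n) + b2*yp(n)
A1, A2 = 0.0020, 0.1737
B0, B1, B2 = 16.000, 0.9905, 0.0008

def filter_step(x1, x2, u, yp):
    """One update of the estimated position x1 and velocity x2."""
    return (x1 + A1 * x2 + A2 * yp,
            B0 * u + B1 * x2 + B2 * yp)

# With zero input and measurement, the velocity estimate decays
# geometrically (pole at b1 = 0.9905) while the position estimate
# settles at a1/(1 - b1) times the initial velocity estimate.
x1, x2 = 0.0, 1.0
for _ in range(2000):
    x1, x2 = filter_step(x1, x2, u=0.0, yp=0.0)
```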

Theoretical Background for Designing Kalman Filter
Let's present a continuous-time system by the following state equations:

    x'(t) = a(t)x(t) + b(t)u(t) + w1(t)
    y(t)  = c(t)x(t) + w2(t)                                        (1)

where w1(t) and w2(t) are the state excitation noise and the measurement noise, respectively.

The joint process of the two noise signals (i.e., col[w1 w2]) can be expressed, as white noise, by the intensity matrix V(t):

    E{ col[w1(t1) w2(t1)] [w1^T(t2) w2^T(t2)] } = V(t1)·δ(t1 - t2)  (2)

When the two noise signals are uncorrelated, v12 = v21 = 0, and the intensity matrix becomes:

    V(t) = [ v1(t)     0    ]
           [   0     v2(t)  ]                                       (3)

We can form the full-order observer as:

    x̂'(t) = a(t)x̂(t) + b(t)u(t) + K(t)[y(t) - c(t)x̂(t)]          (4)
The reconstruction error can be defined as:

    e(t) = x(t) - x̂(t)                                             (5)

Further, we define the mean square reconstruction error as:

    E{ e^T(t) W(t) e(t) }                                           (6)

where W(t) is a positive-definite symmetric matrix.

The mean square reconstruction error value is a criterion to measure the observer's reconstruction capability. So, the design problem can be stated as: design K(t) such that the mean square reconstruction error is minimized. It can be proven that the solution to the optimum observer problem can be obtained from:

    K(t) = Q(t) c^T(t) v2^-1(t)                                     (7)

where Q(t) is the solution of the matrix Riccati equation:

    dQ(t)/dt = a(t)Q(t) + Q(t)a^T(t) + v1(t) - Q(t)c^T(t)v2^-1(t)c(t)Q(t)   (8)

Therefore, the design process starts with obtaining all information regarding the process and the initial conditions for the estimated states. In addition, you need to obtain the values for the disturbance covariance matrix, v1, and the measurement covariance matrix, v2. This information helps you solve the matrix Riccati equation (8).

In the time-invariant case, where all the matrices of equation 8 are constant, the steady-state solution of the observer's Riccati equation (8) can be obtained from:

    0 = aQ + Qa^T + v1 - Q c^T v2^-1 c Q                            (9)

Accordingly, "the steady-state optimum observer gain matrix" can be calculated as:

    K = Q c^T v2^-1                                                 (10)

Notice that in the time-invariant case there is always a trade-off between the observer's speed and its immunity to the observation noise. In terms of design practice, one may experiment with the two factors of observer speed and noise immunity. To do this, keep v1 constant and choose a positive-definite symmetric matrix for v2 with a positive scalar multiplier m. Clearly, increasing m will increase the state reconstruction speed. The value for m may be increased to a point where the observer attains a fast speed while noise immunity is not compromised.
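For a scalar (one-state) system, equations (9) and (10) can be solved in closed form, which makes the speed-versus-noise-immunity trade-off easy to see numerically. The plant values below are purely illustrative:

```python
import math

def stationary_gain(a, c, v1, v2):
    """Scalar steady-state observer gain from equations (9)-(10):
    solve 0 = 2*a*Q + v1 - Q**2 * c**2 / v2 for the positive root Q,
    then K = Q*c/v2."""
    q = (a + math.sqrt(a * a + c * c * v1 / v2)) * v2 / (c * c)
    return q * c / v2

# Illustrative numbers: plant pole a = -1, output c = 1, disturbance v1 = 1.
k_noisy = stationary_gain(-1.0, 1.0, 1.0, v2=1.0)    # poor measurement
k_clean = stationary_gain(-1.0, 1.0, 1.0, v2=0.01)   # good measurement
# A smaller measurement-noise intensity v2 yields a larger gain and hence
# a faster observer pole a - K*c, at the cost of passing more noise through.
```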


Example:
Let's assume that our plant is a motor, disturbed by a zero-mean white-noise external torque, Td, and our shaft position measurement is corrupted by a zero-mean white noise, Mn, uncorrelated with the disturbance noise. This plant can be modeled as:

    θ(s)/u(s) = Km / [s(Tm·s + 1)]

where Km = 1/KT (KT is the motor torque constant) and Tm = RJ/KT² (R is the armature resistance and J is the total inertial load).

In terms of state equations:

    x'(t) = [ 0      1    ] x(t) + [   0   ] u(t) + [  0  ] Td(t)
            [ 0   -1/Tm   ]        [ Km/Tm ]        [ 1/J ]

The state disturbance noise intensity, vd, may be obtained from the variance of the torque disturbance and the frequency range over which it is active:

    vd = (torque disturbance variance) / (2 · (active frequency range))

The same holds for the measurement noise intensity, vm:

    vm = (measurement noise variance) / (2 · (active frequency range))

From this information v1 and v2 can be obtained as follows:

    v1 = [ 0       0    ]        and        v2 = vm
         [ 0    vd/J²   ]

The above information enables us to solve equation 9 for Q and plug the result into equation 10. The solution obtained for the optimum observer gain, K, can be used in our reconstruction equation (i.e., equation 4).
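The intensity formulas can be checked against the example figures used earlier: a 12.5 N²m²s disturbance intensity over 0-30 Hz (which the formula equates to a variance of 750 N²m²) and a 0.2-degree-rms measurement noise over 350 Hz. A small sketch, treating the "active frequency range" in Hz exactly as the formulas state:

```python
import math

def white_noise_intensity(variance, f_range_hz):
    """intensity = variance / (2 * active frequency range), per the text."""
    return variance / (2.0 * f_range_hz)

# Torque disturbance: a variance of 750 N^2 m^2 over 0-30 Hz gives back
# the 12.5 N^2 m^2 s intensity quoted in the example.
vd = white_noise_intensity(750.0, 30.0)

# Position noise: 0.2 degrees rms (variance = rms^2, converted to rad^2)
# with a flat spectrum over 350 Hz.
rms_rad = 0.2 * math.pi / 180.0
vm = white_noise_intensity(rms_rad ** 2, 350.0)
```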


APPENDIX A

Model for a DC Motor
The model equations are obtained using the physical relation between the variables in each functional block. We use the Laplace operator, s, to simplify the solution method; but remember that "s" is an appropriate operator only for linear systems. The mathematical model of a dc motor will allow us to simulate the system dynamic response on a computer before an actual design.

[Figure 1: Electromechanical (physical) block diagram of a DC motor, relating the armature voltage u to the velocity ω]

Figure 1 shows an electromechanical block diagram of a DC motor. This model describes the relationship between the voltage applied across the armature winding, u, and the velocity, ω.

Where
    R    armature resistance
    L    armature inductance
    J    motor inertia
    B    viscous damping coefficient
    KT   torque constant
    Ke   back-emf voltage constant

Using figure 1, the relationship between ω(s) and u(s) can be written as:

    ω(s)/u(s) = (KT/LJ) / [s² + (R/L + B/J)s + (RB + KT·Ke)/LJ]     (1)

Equation 1 describes a second-order model for a DC motor. In the MKS system, Ke = KT, so:

    ω(s)/u(s) = (KT/LJ) / [s² + (R/L + B/J)s + (RB + KT²)/LJ]

In a practical motor the roots of the denominator, the "poles", are in general real and negative. These roots are:

    λ1, λ2 = -(1/2)(R/L + B/J) ± (1/2)·sqrt[(R/L + B/J)² - 4(RB + KT²)/LJ]

For a step-wise input voltage to a motor, u(s) = 1/s, the output velocity is:

    ω(s) = (1/s) · (KT/LJ) / [s² + (R/L + B/J)s + (RB + KT²)/LJ]

The model presented by the above equation can be simplified to a first-order model using the following assumptions. The first assumption is that the electrical time constant, Te, in most conventional DC motors is much shorter than the mechanical time constant, Tm. This will let us ignore the term s·L in equation 1:

    ω(s)/u(s) = KT / (RJ·s + RB + KT²)

The second assumption is KT² >> RB:

    ω(s)/u(s) = (1/KT) / (1 + (RJ/KT²)·s)

where Tm = RJ/KT² is the mechanical time constant. So, for a stepwise input voltage applied to the armature winding, the shaft speed ω(s) is given by:

    ω(s) = (1/KT) · 1 / [s(Tm·s + 1)]

Extending this relation to the angular position will result in:

    θ(s)/u(s) = Km / [s(Tm·s + 1)]
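The first-order approximation above implies the familiar exponential step response ω(t) = (1/KT)(1 - e^(-t/Tm)). A quick numerical check of that claim, using illustrative values KT = 0.5 and Tm = 0.02 s:

```python
import math

KT, TM = 0.5, 0.02          # illustrative torque constant and time constant

# Euler-integrate Tm*dw/dt + w = (1/KT)*u for a unit voltage step.
w, dt = 0.0, 1e-6
for _ in range(100000):     # simulate 0.1 s, i.e. five time constants
    w += (-w + 1.0 / KT) / TM * dt

# Analytic first-order step response at t = 0.1 s.
analytic = (1.0 / KT) * (1.0 - math.exp(-0.1 / TM))
```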

Appendix B

% Discrete Time, Stationary Kalman Filter
%
% In this segment of the program you will enter the
% continuous time system matrices a, b and c.
% We assume d = 0.
%
subplot(211)
input('input the continuous time system matrix a: ')
a = ans;
input('input the continuous time input vector b: ')
b = ans;
input('input the continuous time output vector c: ')
c = ans;
%
% At this point you will enter the sampling period, T.
% This value enables your program to discretize the
% entered system equations.
%
input('input the sampling period T: ')
T = ans;
%
% At this point you will be asked to enter the
% statistical information about the disturbance
% noise and the measurement noise. For more
% information please refer to the document entitled
% "theoretical background."
%
input('input the system disturbance vector g: ')
g = ans;
input('input the disturbance covariance matrix q: ')
q = ans;
input('input the variance value for the disturbance vard: ')
vard = ans;
input('input the variance value for the measurement varm: ')
varm = ans;
r = varm/1000;
%
% In this part you will enter any known input signal from
% which the program will generate the total input.
%
input('Enter the input u')
u2 = ans;
[A,B] = c2d(a,b,T)
pause
u = rand('normal');
u1 = rand('normal');
u = vard*rand(300,1);
u = u + u2;
u1 = varm*rand(300,1);
%
% At this point the program can simulate the
% actual position and velocity signals as well as the
% optimum discrete observer gains.
%
yp = dlsim(A,B,c,0,u);
yv = dlsim(A,B,[0 1],0,u);
q1 = vard/1000;
q = q1*q;
[L,M,P] = dlqe(A,T*g,c,q*T,r/T)
pause
t = 1:1:300;
plot(t,yp)
title('Real Position vs. Time')
grid
ylabel('Pos in Rad')
plot(t,yv)
title('Real Velocity vs. Time')
grid
xlabel('Time in # of Sampling Periods')
ylabel('Vel in Rad/s')
pause
yp = yp + u1;
plot(t,yp)
title('Measured Pos vs. Time')
grid
ylabel('Pos in Rad')
x = [0;0];
%
% Using the Kalman gain, the program will structure a
% recursive equation for the optimum estimation process.
%
kgain = A - L*c;
for i = 1:1:300;
  x = kgain*x + B*u(i,1) + L*yp(i,1);
  pos(i,1) = x(1,1);
  vel(i,1) = x(2,1);
end
%
% At this point the program will plot the estimated
% position and velocity, and contrast them against
% measured ones.
%
plot(t,pos)
title('Estimated Pos vs Time')
grid
ylabel('Pos in Rad')
xlabel('Time in # of Sampling Periods')
pause
plot(t,yp,'.',t,pos)
title('Measured & Estimated Pos vs. Time')
grid
ylabel('Pos in Rad')
plot(t,yv,'.',t,vel)
title('Actual & Estimated Vel vs. Time')
grid
ylabel('Vel in Rad/s')
xlabel('Time in # of Sampling Periods')
meta mm
subplot
end


Appendix C

*****************************************************
*                                                   *
*         Kalman Filtering using TMS320C25          *
*                                                   *
*****************************************************

        .text
TEMP    .equ    0h          ;For temporary storage
YP      .equ    1h          ;BLOCK B0 FOR STATE VARIABLES
VN      .equ    2h          ;(DATA MEMORY)
XNL     .equ    3h
XNH     .equ    4h
UN      .equ    5h
A2      .equ    6h          ;THEY STORE THE COEFFICIENTS
A1      .equ    7h
B2      .equ    8h
B1      .equ    9h
B0      .equ    0Ah

        .asect  "AORG00", 0h
        B       STRT

        .data
        .asect  "AORG01", 10h
RATE    .word   4999        ;sampling period 1 msec [= 5 MHz/(RATE+1)]
MODE    .word   0FAh        ;For AIB initialization

        .text
        .asect  "AORG02", 20h
STRT    .equ    $
        RSXM                ;TURN OFF THE SIGN EXTENSION MODE
        LDPK    0
        LARP    AR1
        LARK    AR1,0
        ZAC
        RPTK    7
        SACL    *+          ;ZERO THE DATA MEMORY (0h TO 7h)
*
* Initialize the coefficients
*
        LALK    712,0       ;STORING COEFFS IN DATA MEM (A2 = 0.1737 IN Q12)
        SACL    A2,0        ;AND XFER THEM TO PROG MEM
        LALK    8,0         ;(A1 = 0.002 STORED IN Q12)
        SACL    A1,0
        LALK    3,0         ;(B2 = 0.0007324 STORED IN Q12)
        SACL    B2,0
        LALK    4057,0      ;(B1 = 0.9905 STORED IN Q12)
        SACL    B1,0
        LALK    4096,0      ;(B0 = 16 STORED IN Q8)
        SACL    B0,0
*
* Initialize the AIB board
*
LOOP    LACK    RATE        ;AIB BOARD SET FOR 1 MS SAMPLING RATE
        TBLR    TEMP        ;AND FOR 2 ANALOG-TO-DIGITAL CONVERTERS
        OUT     TEMP,1      ;WRITE THE SAMPLING PERIOD TO AIB BOARD PORT 1
        LACK    MODE        ;INITIALIZE THE AIB BOARD
        TBLR    TEMP
        OUT     TEMP,0      ;WRITE THE MODE WORD TO AIB BOARD PORT 0
        SPM     0           ;reset the P register output shift mode
WAIT    BIOZ    TAKE        ;WAIT FOR THE A/D INTERRUPT
        B       WAIT
*====================================
*
* Start doing each sampling calculations
*
*====================================
TAKE    .equ    $
        IN      YP,2        ;TAKE SAMPLE OF FIRST ADC - STORE IN YP
        IN      UN,3        ;TAKE SAMPLE OF SECOND ADC - STORE IN UN
        ZAC                 ;CLEAR ACCUMULATOR
        LT      YP
        MPY     A2
        SPM     2           ;P REG = A2*YP SHIFTED 4 PLACES LEFT
        LTA     VN
        MPY     A1
        SPM     3           ;RIGHT SHIFT 6 PLACES
        APAC                ;ACC = A1*VN + A2*YP
        ADDS    XNL         ;ADD XNH,XNL TO ACC
        ADDH    XNH
        SACL    XNL,0       ;SAVE THE NEW STATE VALUE
        SACH    XNH,0
        ZAC                 ;CLEAR ACCUMULATOR
        MPY     B1          ;P = B1*VN (T STILL HOLDS VN)
        SPM     2
        LTA     UN          ;ACC = B1*VN
        MPY     B0          ;P = B0*UN
        SPM     3
        LTA     YP          ;ACC = B1*VN + B0*UN
        MPY     B2          ;P = YP*B2
        APAC                ;ACC = B2*YP + B1*VN + B0*UN (RESULT IS
                            ;SHIFTED LEFT 4 PLACES TO MAKE B0 Q12
                            ;AND THUS B0*UN Q18)
        SACH    VN          ;STORE THE NEW VN
        OUT     UN,2        ;CHECK TO SEE IF OPERATION ENDED
        B       WAIT        ;BACK FOR MORE SAMPLES
        END

Implementation of a PID Controller on a DSP*
Hermann Steingrimsson
Graduate School of Business
University of Wisconsin
Madison, Wisconsin, USA

Karl Johan Astrom
Department of Automatic Control
Lund Institute of Technology
Lund, Sweden

1. Introduction

The PID controller is by far the most commonly used control algorithm [Deshpande 1981]. Although it is of limited complexity, it can be used to solve a large number of industrial control problems. The textbook version of the PID controller can be described by the equation

    u(t) = Kc ( e(t) + (1/Ti) ∫0^t e(s) ds + Td·de(t)/dt )          (1)

where u is the control variable and e is the control error, defined as e = ysp - y, where ysp is the set point and y is the process output. The parameters of the controller are: gain Kc, integral time Ti, and derivative time Td.

The purpose of the integral action is to increase the low-frequency gain and thus reduce steady-state errors. Derivative action adds phase lead, which improves stability and increases system bandwidth.
Implementation of a PID controller using a DSP will be discussed in this paper. A great deal of experience has accumulated over many years of use of the algorithm. This has led to significant modifications of the algorithm (1). These modifications will be discussed in Section 2, where the discretization issues are also dealt with. The result is a nonlinear digital algorithm that is suitable for implementation on a general-purpose digital computer.

The algorithm can be implemented in a straightforward way on a DSP with floating-point hardware. Implementation using an ordinary DSP does, however, require special considerations, because all calculations have to be made in integer arithmetic. These issues are discussed in Section 3.
† Part of this work was done when the first author was visiting professor and the second author a graduate student at the University of Texas at Austin.

Reprinted with permission from author.

Some special problems related to quantization in A/D- and D/A-converters are discussed in Section 4. An overview of the DSP code for a PID controller is given in Section 5. The complete code is given in the Appendix. Section 6 describes how the code can be tested. The tests given cover both linear and nonlinear behavior.

2. Modification and Discretization

The algorithm (1) has several drawbacks. Significant modifications of linear and nonlinear behavior
are necessary in order to obtain a practically useful algorithm. See [Astrom and Hagglund 1988].
To obtain equations that can be implemented using computer control it is also necessary to replace
continuous time operations like derivation and integration by discrete time operations. See [Astrom
and Wittenmark 1990]. These modifications will be
described in this section.

Proportional Term
The proportional term Kc e(t) is implemented simply by replacing the continuous-time variables with their sampled equivalents. One additional modification, set point weighting [Astrom and Hagglund 1988], has been found useful. This means that the proportional term acts only on a fraction b of the command signal. The proportional term then becomes

    P(t_k) = Kc (b ysp(t_k) − y(t_k))          (2)

where {t_k} denotes the sampling instants. The parameter b admits independent adjustment of set point and load disturbance responses. It may also be viewed as "zero placement".


Integral Term

When a controller operates over a wide range of operating conditions, the control variable may reach the actuator limits. The feedback loop is then broken and the system effectively runs open loop. When this happens in a controller with integral action, the error will continue to be integrated and the integral term may become very large: the integrator "winds up". The error must then change sign for a long period of time to "unwind" the integrator and bring the system back to normal. Windup can also cause problems when the controller is implemented on a microprocessor having finite word length. Since the processor can only store numbers of limited magnitude, windup may cause overflow oscillations in the control variable unless saturation arithmetic is used.

There are several ways to avoid windup. One possibility is to introduce an extra feedback loop by measuring the output from the actuator and forming an error signal as the difference between the controller output v and the actuator output u. If the output of the actuator is not available, the signal may be computed by using a mathematical model of the actuator. The error signal is fed to the input of the integrator through the gain 1/Tt, where the constant Tt is called the tracking time constant. The extra feedback ensures that the integral obtains a value such that the controller output tracks the saturated output. Tracking is accomplished with the time constant Tt. Using this method of avoiding windup, the integral term becomes

    I(t) = (Kc/Ti) ∫ e(s) ds + (1/Tt) ∫ (u(s) − v(s)) ds          (3)

To obtain an algorithm that can be implemented on a computer, the integral term I(t) is differentiated:

    dI(t)/dt = (Kc/Ti) e(t) + (1/Tt) es(t)

where es(t) = u(t) − v(t). Approximating the derivative by a forward difference gives

    (I(t_{k+1}) − I(t_k))/h = (Kc/Ti) e(t_k) + (1/Tt) es(t_k)

where h is the sampling period. Finally, by rearranging terms, we get the following equation to compute the integral term:

    I(t_{k+1}) = I(t_k) + (Kc h/Ti) e(t_k) + (h/Tt) es(t_k)          (4)

Derivative Term

A pure derivative should not be implemented, because the controller gain becomes very large at high frequencies. This leads to amplification of high-frequency noise. The derivative term sTd is therefore approximated by

    sTd ≈ sTd / (1 + sTd/N)          (5)

Notice that the approximation is good for signals whose frequency contents are significantly below N/Td. Also notice that the approximating transfer function has a maximum gain of N. The parameter N is therefore called the maximum derivative gain. In analog controllers N is given a fixed value, typically in the range of 5-20.

It is also advantageous not to let the derivative act on the set point signal. The set point is constant most of the time and its derivative is therefore zero. A step change in the set point may, however, cause an undesirable jump in the control variable if the derivative acts on the set point. With these modifications the derivative term can be written as

    (Td/N) dD(t)/dt + D(t) = −Kc Td dy(t)/dt          (6)

There are several methods to approximate the derivative. Common methods are the forward difference approximation, the backward difference approximation, Tustin's approximation and ramp equivalence. See [Astrom and Wittenmark 1990]. These approximations all have the same form

    D(t_k) = a D(t_{k−1}) − b (y(t_k) − y(t_{k−1}))          (7)

and are stable only if |a| < 1. The forward difference approximation is stable only if Td > Nh/2; it thus becomes unstable for small values of Td. Tustin's approximation has the disadvantage that a goes to −1 as Td goes to zero. This gives a ringing response for small Td. The ramp equivalence approximation gives exact outputs at the sampling instants if the signal is continuous and piecewise linear between the sampling instants, but it requires computation of an exponential. The backward difference approximation gives good results for all values of Td; the parameter a goes to zero as Td goes to zero. Here the backward difference approximation is chosen.

The following is obtained when Equation (6) is approximated by a backward difference:

    (Td/N) (D(t_k) − D(t_{k−1}))/h + D(t_k) = −Kc Td (y(t_k) − y(t_{k−1}))/h

Rearranging terms gives (7) with

    a = Td/(Td + Nh)    and    b = Kc Td N/(Td + Nh)

which is the formula that will be used to compute the derivative term.

The PID Algorithm

Summarizing, we find that a practical version of the PID algorithm can be described by the following equations:

    P(t_k)     = Kc (b ysp − y(t_k))
    D(t_k)     = ad D(t_{k−1}) + bd (y(t_{k−1}) − y(t_k))
    v(t_k)     = P(t_k) + I(t_k) + D(t_k)                            (8)
    u(t_k)     = f(v(t_k))
    I(t_{k+1}) = I(t_k) + bi (ysp − y(t_k)) + bt (u(t_k) − v(t_k))

This algorithm has anti-windup reset, limitation of derivative gain (N) and set point weighting (b). The function f describes the nonlinear characteristic of the actuator. For a linear actuator with saturation at umin and umax we have

    f(v(t_k)) = umax     if v(t_k) > umax
                umin     if v(t_k) < umin                            (9)
                v(t_k)   otherwise

For actuators with other limitations the function f should be modified. The parameters ad, bd, bi and bt are related to the primary parameters Kc, Ti, Td, Tt and N of the PID controller as follows:

    ad = Td/(Td + Nh)
    bd = Kc N Td/(Td + Nh)                                           (10)
    bi = Kc h/Ti
    bt = h/Tt

Since Equations (10) have to be updated only when the controller parameters are changed, the code should be organized so that the parameters ad, bd, bi and bt are computed initially and whenever the PID parameters are changed. This reduces the computational load during the execution of the PID algorithm. The structure of the PID algorithm given by Equation (8) is shown in Figure 1. Notice that the algorithm is in parallel form.

The PI Algorithm

In many cases the derivative action is not necessary. The algorithm then reduces to

    P(t_k)     = Kc (b ysp − y(t_k))
    v(t_k)     = P(t_k) + I(t_k)
    u(t_k)     = f(v(t_k))                                           (11)
    I(t_{k+1}) = I(t_k) + bi (ysp − y(t_k)) + bt (u(t_k) − v(t_k))

which is a PI controller with anti-windup reset and set point weighting (b). The function f is the same as in Equation (9) and the parameters bi and bt are related to the parameters Kc, Ti and Tt as follows:

    bi = Kc h/Ti
    bt = h/Tt                                                        (12)

which is the same as in Equation (10). The reason for considering this special case is that PI controllers are in fact more common than controllers with derivative action.

Figure 1. Structure of the PID controller with anti-windup.
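As a cross-check of the discrete equations, the algorithm of Equations (8)-(10) can be sketched in floating-point C. This is an illustrative translation only, not the fixed-point DSP code from the Appendix; all names are ours.

```c
/* Controller parameters, derived coefficients (Eq. (10)) and state. */
typedef struct {
    double Kc, Ti, Td, Tt, N, b, h;   /* primary parameters            */
    double ad, bd, bi, bt;            /* derived, updated on change    */
    double I, D, y_old;               /* controller state              */
    double umin, umax;                /* actuator limits for f, Eq. (9)*/
} Pid;

/* Recompute ad, bd, bi, bt from the primary parameters (Eq. (10)). */
void pid_update_coeffs(Pid *p) {
    p->ad = p->Td / (p->Td + p->N * p->h);
    p->bd = p->Kc * p->N * p->Td / (p->Td + p->N * p->h);
    p->bi = p->Kc * p->h / p->Ti;
    p->bt = p->h / p->Tt;
}

/* One sample of the algorithm in Eq. (8): returns the limited output u. */
double pid_step(Pid *p, double ysp, double y) {
    double P = p->Kc * (p->b * ysp - y);            /* proportional term  */
    p->D = p->ad * p->D + p->bd * (p->y_old - y);   /* filtered derivative*/
    double v = P + p->I + p->D;                     /* unsaturated output */
    double u = v > p->umax ? p->umax
             : (v < p->umin ? p->umin : v);         /* actuator f, Eq. (9)*/
    p->I += p->bi * (ysp - y) + p->bt * (u - v);    /* anti-windup update */
    p->y_old = y;
    return u;
}
```

When the output does not saturate, u = v and the tracking term bt(u − v) vanishes, so the integral update reduces to the plain forward-difference rule of Equation (4).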

Table 1. Number of arithmetic operations for PI and PID control.

                PI              PID
            M       A       M       A
    P       2       1       2       1
    D       0       0       2       2
    v       0       1       0       2
    f       X       X       X       X
    I       2       4       2       4
    Tot     4       6       6       9

Figure 2. Step response of the system.
Operations Count
It is a common practice to estimate c()mputation
times by a simple operation cOWlt. This can be
strongly misleading when using fixed point calculation, because much of the ~omputation time may
be spent on overflow handling and scaling. Table 1
shows the minimum number of multiplications and
additions required for the PID and PI algorithms.
The PID algorithm requires 15 arithmetic operations, while the PI algorithm requires 10 operations.

3. Implementation Issues

Implementation of a PID controller using a DSP with fixed-point arithmetic will now be discussed. General practices for implementing algorithms on DSPs are given in [Texas Instruments 1986], [Texas Instruments 1989a], [Texas Instruments 1989b], [Texas Instruments 1990a] and [Texas Instruments 1990b].

To perform fixed-point calculations it is necessary to know the orders of magnitude of all variables. Simulations were performed to get this information. In the simulations the process model

    G(s) = 1/(s + 1)^4

was used. Figure 2 shows the step response of the system with parameters Kc = 0.6, Td = 0.5, Ti = 2.2, Tt = 0.5, N = 8, and a sampling period of 0.1 s. At the time t = 20 s a load disturbance of 0.3 V is introduced.

Two C programs were written to test the effects of scaling and roundoff. One program implements the PID controller in double-precision arithmetic with no attempt to simulate the effect of finite word length. The other program simulates the Texas Instruments DSP by using a 32-bit accumulator and a 16-bit word length. The effect of using different resolutions of the A/D- and D/A-converters can also be simulated.

Selection of Sampling Period
There are several rules of thumb for choosing the sampling period for digital controllers. For a PI controller the sampling period is related to the integration time. A rule of thumb [Astrom and Wittenmark 1990] is

    h/Ti ≈ 0.1 - 0.3

A PID controller requires a much shorter sampling period. The sampling period should be short enough so that the pole s = −N/Td, introduced to limit the high-frequency gain of the derivative, can be approximated appropriately. This leads to the following rule of thumb:

    hN/Td ≈ 0.2 - 0.6

See [Astrom and Wittenmark 1990].
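The two rules of thumb can be captured in a small helper; the midpoint choices (0.2 and 0.4) below are our own illustrative picks from the stated ranges, not values prescribed by the text.

```c
/* Rule-of-thumb sampling periods. For a PI controller take h/Ti in the
 * range 0.1-0.3; for a PID controller take h*N/Td in the range 0.2-0.6.
 * Midpoints of the ranges are used here as a representative choice.   */
double h_pi(double Ti)            { return 0.2 * Ti; }
double h_pid(double Td, double N) { return 0.4 * Td / N; }
```

With Ti = 2.2 s this suggests h around 0.44 s for a PI controller, while Td = 0.5 s and N = 8 give h around 0.025 s for PID control, illustrating how much shorter the PID sampling period must be.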
Integral Offset
Roundoff may give an offset when the integral term is implemented on a computer with a short word length. This can be understood as follows. Consider the equation for the integral term in Equation (8). The correction term bi e(t_k) = (Kc h/Ti) e(t_k) is usually small in comparison to I(t_k) and may therefore be rounded off. With fractional arithmetic, the largest magnitude of the correction term is Kc h/Ti. To avoid roundoff it is therefore necessary to have a word length of at least

    number of bits = − log(Kc h/Ti) / log(2)

More bits are of course required to obtain meaningful values. For example, with h = 0.02 s, Ti = 10 s and Kc = 0.1, obtaining less than 5% error in the integral requires a word length of at least

    number of bits = − log(0.0002 · 0.05) / log(2) ≈ 17

Longer sampling periods for computing the integral may be used to avoid the offset. This can be done simply by adding up the error over each sampling period and updating the integral term at regular intervals. Another way to avoid offset due to roundoff is to store the integral with higher precision. In most DSPs (like the TMS320xx) values can be stored in double precision with little overhead.
Scaling
The PID controller given by Equations (8) is already in parallel form, with modules of zero and first order. Figure 1 illustrates the realization of the controller. Because of the parallel form, the P, I and D terms can be scaled and computed separately and then combined to form v.

Coefficient Scaling

Because of the wide number range of the parameters, some restrictions must be imposed on the magnitude of the coefficients. It follows from Equation (10) that bd is the largest parameter. A limit should therefore be set on the gain Kc and on the high-frequency derivative gain N. If Kc and N are limited to 16, we have bd < Kc N = 256 and Kc ≤ 16. These parameters must therefore be divided by 256 and 16, respectively, before they are stored. To restore the magnitude of the signal, the derivative term must be shifted left by 8 bits and the proportional term shifted left by 4 bits.

The other parameters, ad, bi and bt, are within the number range, but because bi and bt may become very small, it is advantageous also to set a lower limit on h/Ti and h/Tt.

Signal Scaling and Saturation Arithmetic
It must be ensured that overflow does not occur when computing the states of the controller. With the structure of the PID controller shown in Figure 1 the states are D(t_k) and I(t_{k+1}). Care must also be taken so that overflow does not occur when the P, I and D terms are added to obtain v.
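The saturation arithmetic referred to here can be mimicked in C. This is a generic sketch of the TMS320C25's overflow (saturation) mode, written by us for illustration, not code from the paper.

```c
#include <stdint.h>

/* Saturating 32-bit add, mimicking the TMS320C25 accumulator in
 * overflow (saturation) mode: on overflow the result sticks at the
 * largest positive or negative value instead of wrapping around.   */
int32_t sat_add32(int32_t acc, int32_t x) {
    int64_t s = (int64_t)acc + x;
    if (s > INT32_MAX) return INT32_MAX;
    if (s < INT32_MIN) return INT32_MIN;
    return (int32_t)s;
}

/* Saturate the accumulator to a 16-bit result before storing,
 * as is done when the P, I and D terms are combined to form v.     */
int16_t sat_store16(int32_t acc) {
    if (acc > INT16_MAX) return INT16_MAX;
    if (acc < INT16_MIN) return INT16_MIN;
    return (int16_t)acc;
}
```

The wide intermediate (64-bit here, the 32-bit accumulator on the DSP) is what makes the overflow test possible before the result is narrowed.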

Figure 3. The terms of the PID controller.

The proportional term will always be within the number range, since the multiplication of a fraction by a fraction gives a fraction. Overflow can occur if Kc is larger than 1 when the magnitude of the signal is restored. It is therefore necessary to use saturation arithmetic when computing the proportional term.

One additional advantage of using the anti-windup reset when computing the integral term is that the integral stays within the number range; saturation arithmetic is then not strictly necessary. Integration can, however, result in overflow if anti-windup is not used or if Tt is chosen poorly. Saturation arithmetic should therefore be used before the integral is stored.

Since the derivative depends only on the process output, it is difficult to use analytic scaling methods effectively. It is easy to predict the worst possible input, but for most processes that would be too pessimistic. A good engineering approach is therefore to simulate the closed-loop system and store the output of the derivative for a few representative examples. The derivative should normally not account for more than 20% of the control signal. Since bd can take large values, saturation arithmetic should be used before storing the derivative.

A number of simulations were made in order to obtain typical orders of magnitude of the proportional, integral and derivative terms. It turns out that under normal operating conditions the variables are within the number range. Since we are allowing a gain larger than one, it is very likely that an overflow will occur under some operating condition, for example during start-up. Saturation arithmetic is therefore used on both states and on the control signal v. Figure 3 shows Simnon plots of the P, I and D terms for a step response and load disturbances, for the process and the controller previously used.


Gain, Input and Output Scaling
To implement a high gain (Kc > 1) one can either include the gain in the digital algorithm or move the gain "outside" of the DSP by using a linear amplifier. The advantage of the latter approach is that the control algorithm can be scaled to eliminate the danger of overflow, thereby avoiding the large overhead associated with saturation arithmetic. This gives shorter code and a faster controller. But there is also a disadvantage. Under normal steady-state operation the error is small and any change in the control signal will be a relatively small part of the whole dynamic range. A change in the control signal of one quantization step will be amplified, resulting in a large jump. It may also give rise to limit cycles. When a high gain is incorporated in the DSP code, saturation arithmetic must be used on internal calculations.

4. Quantization Effects

Issues related to the interfacing of the DSP to the plant will now be considered. The key questions are related to the quantization of the A/D- and D/A-converters.

Quantization of the Set-Point Value
When implementing the controller, the set point should be quantized in the same way as the controller input. That is, the set-point value should either be read through the same, or a similar, A/D-converter as is used for the input signal (if an A/D-converter is being used) or quantized internally using the same resolution as that of the A/D-converter. If this is not done there may be an offset or a limit cycle due to the quantization. Figure 4 shows the result of a simulation where a 6-bit A/D-converter is used for the input signal but the set-point value of 0.455 V is represented with 16-bit accuracy. The system goes into a limit cycle with a period of 6.77 seconds and an amplitude of 3.8 mV. The reason for this is that the set-point value of 0.455 V cannot be represented by the 6-bit A/D-converter. In steady state the error between the process output and the set-point value will be either 17.5 mV or −13.8 mV. This error will be summed up by the integrator, resulting in a limit cycle.


Figure 4. Limit cycles due to high resolution of the set point.

Because the limit cycle is very close to a sinusoid, it is reasonable to assume that the period and the amplitude of the limit cycle can be predicted by using describing function analysis. Since the system is in steady state and the oscillation corresponds to one quantization step of the A/D-converter, we can assume a zero set-point value and model the A/D-converter by a relay nonlinearity centered around zero with the quantization limits +0.0157 and −0.0157. The describing function for this nonlinearity is

    N(a) = 2q/(π a) = 0.0199/a

where a is the amplitude of the input signal and q/2 is half the quantization step. The calculations are simplified if the digital PID controller is approximated by a continuous-time PI controller with the transfer function

    Gc(s) = K + K/(T s)

where K = 0.6 and T = 2.2. A possible limit cycle is given by the equation

    1 + N(a) L(jω) = 0

which is equivalent to

    L(jω) = −1/N(a)          (13)

where L is the loop transfer function of the controller and the process in cascade, i.e.

    L(s) = (K + K T s)/(T s (s + 1)^4)          (14)

Since the describing function is real-valued, one simply has to find the intersection of L(jω) with the negative real axis. When jω is substituted for s in Equation (14) we get, after separating the real and the imaginary parts,

    L(jω) = K (A(ω) + iB(ω)) / ( T [ (4ω^4 − 4ω^2)^2 + (ω^5 − 6ω^3 + ω)^2 ] )          (15)

where A = T(ω^6 − 6ω^4 + ω^2) + 4ω^4 − 4ω^2 and B = T(4ω^5 − 4ω^3) − ω^5 + 6ω^3 − ω. The problem is therefore reduced to finding the frequency where the imaginary part is zero, i.e.

    7.8ω^4 − 2.8ω^2 − 1 = 0          (16)

The equation has one positive real root, ω = 0.7616, which corresponds to a limit-cycle period of 8.25 s. This is longer than the period T = 6.77 s obtained in the simulation. The amplitude of the limit cycle is then determined by solving Equation (13) for ω = 0.7616, which gives a = 5.6 mV. The value a = 3.8 mV was obtained in the simulation.

A/D- and D/A-Conversion
If the controller is interfaced to the plant by A/D- and D/A-converters, the effect of the resolution of the converters has to be determined. Figure 5 shows the result of one of several simulations where the A/D-converter has a higher resolution than the D/A-converter. A limit cycle was observed in those simulations. Because of the higher resolution of the A/D-converter, the controller produces control signals which are not representable by the D/A-converter. This results in an oscillation over one quantization step of the D/A-converter. This phenomenon can also be predicted by using describing function analysis, where we assume a zero set-point value and the D/A-converter is approximated by a relay. The problem can be avoided by replacing the function f given by Equation (9) by a function that also models the roundoff in the D/A-converter.

Figure 5. Response with a 10-bit A/D and an 8-bit D/A.

Figure 6 shows a good result when an 8-bit A/D-converter and a 10-bit D/A-converter are used and a step input of 0.45 V is applied. These observations indicate that using a D/A-converter with a lower resolution than the A/D-converter may give rise to a limit cycle. It should be emphasized that there are of course many other factors which may be responsible for limit cycles. There are also many other factors that influence the selection of the resolution of the A/D- and D/A-converters, e.g. the required accuracy of the system.

Figure 6. Response with an 8-bit A/D and a 10-bit D/A.

Simulations also showed that a very low resolution (down to 4 bits) of the converters did not have much effect on the step response of the system. The accuracy of the system is, of course, less with low-resolution converters. Figure 7 shows the response of the same system when a load disturbance of 0.3 V is introduced at t = 20 s.

Figure 7. Same as Figure 6 but with a load disturbance.


5. The DSP Code

To develop and test the assembly code of the PID controller on the Texas Instruments family of DSPs, the Texas Instruments Software Development System (SWDS) was used. This system consists of a PC board with a TMS320C25 signal processor and a PC development environment with many features. It is possible to set breakpoints and single-step through the program. One useful feature is the possibility to specify an input file (or files) to the DSP and to direct the output (or outputs) of the DSP to an output file. This makes it easy to test an algorithm, since a predefined input signal can be fed to the controller to test its open-loop response.

Programs for PI and PID controllers were written for the signal processors TMS32010 and TMS320C25. The complete code is given in Appendices A, B, C and D. The code for the PID controller is organized in the following way:

INITIALIZE
    load constants from program memory
        to data memory
    clear variables
    load y(n-1) and ysp
    reset external devices (e.g. analog board)
PID
    wait for input y(n)
    compute derivative (D)
    round off, check for overflow and store D
    compute proportional part (P)
    add D, P and I
    round off, check for overflow and store in v(n)
    compute u(n) from saturation function
    output u(n)
    compute I
    check for overflow and store I
        in double precision
GO TO PID

The code for the PI controller is obtained by deleting the computations of the D term.

Initialization
After reset the program jumps to the initialization routine. This part disables interrupts, sets overflow mode and loads the coefficients from program memory (where they are stored permanently) into data memory. Then the states of the controller are cleared, the set-point value (ysp) is read from PA3 and the process output (y(n−1)) from PA0. By filling up the y-vector before entering the PID loop, a jump due to the derivative is avoided. The program then goes into an infinite loop to compute the control signal.

PID Calculations
The magnitude of the coefficient bd of the derivative term is less than 256. To represent it in the DSP it must be scaled by dividing by 256. This can be done by shifts. Before the derivative is stored it is therefore shifted left by 9 bits (8 bits plus one left shift to account for the extra sign bit which is generated in the multiplication).

The largest proportional gain is 16. The proportional term is therefore divided by 16. It was advantageous also to divide the D and I terms by 16 and restore the signal after the control signal v has been calculated. The same saturation, rounding and shifting can then be applied to both the derivative term and the control signal. Since the derivative must be divided by 16 before it is added to the proportional part, it is advantageous to store ad divided by 16. A little trick was used to calculate the correct derivative. After ad D(t_{k−1}) has been calculated and stored in the accumulator, the term bd(y(t_{k−1}) − y(t_k)) is calculated and the result is stored in the P register. The value of the P register is then added 16 times to the accumulator to form the correct derivative divided by 16. By doing this in overflow mode, an overflow results in saturation of the accumulator. This would not be the case if the value in the accumulator were simply shifted left. With the TMS320C25 the repeated addition is easily done using the repeat instruction. After these calculations the derivative is in the accumulator. The proportional term is then added to the accumulator to obtain (P+D)/16. In this way the proportional term does not have to be stored separately.

To obtain the output v, the integral computed previously is divided by 16 by shifting the value right 4 bits. It is then added to P+D in the accumulator. The output then goes through the saturation arithmetic. It is rounded and shifted before it is stored as a 16-bit number. The saturation function f is called to form the final output u.

Since the control signal u depends on the integral from the previous sample, it can be converted to analog form before the integral is updated. This shortens the computational delay between the A/D

and D/A-conversions. To avoid integral offset, the integral is computed and stored in double precision. Saturation arithmetic is performed before it is stored, although it is actually not necessary if proper anti-windup is used.

With the chosen method of organizing the calculations, the P, D and I terms are added, to form v, with a precision of 27 bits. The terms D and v are then stored with a precision of 16 bits and the integral is calculated and stored with a precision of 31 bits.

Saturation Arithmetic. Before the derivative or the control signal v is stored in memory as a 16-bit value, it must be shifted left by 5 bits, because the signal is divided by 16 in internal calculations and an additional left shift must be performed to account for the extra sign bit generated in the multiplication. The value is rounded and checked for overflow before shifting it. If overflow is detected, the value is replaced by the largest positive or negative number.

Set-Point Value. The set point is read via interrupt. This interrupt is disabled while the control value is computed, but is enabled for a short period before the next process output is read.

Computation Time
By using the timer on the TMS320C25 it was possible to count the cycles required for one execution of the PID (or PI) loop. To find the number of cycles required for one execution of the TMS32010 (TMS320C14) code, a simple cycle count was done. In all instances it is assumed that the internal memory of the DSPs is used.

Table 2. Cycle count and maximum sampling frequency for PI- and PID-controllers.

                    PI                PID
    DEVICE      cycles   kHz      cycles   kHz
    TMS32010      94      53        145     34
    TMS320C14     94      66        145     43
    TMS320C25     89     112        141     70

Table 2 shows the number of cycles for each controller and the maximum sampling frequency which can be used. From this table we see that the calculation of the derivative consumes a large portion of the total cycles, approximately 50%. The reason for this is that the shifting and saturation arithmetic on the derivative is complicated, because the coefficients of the controller are scaled differently. If the coefficients all had the same upper limit, the same scaling constant could be used and the shifting and saturation arithmetic would be simpler and faster. Table 3 shows how the cycles are divided between the different functions of the algorithm. Notice that the division is somewhat arbitrary, because it is not obvious where one operation begins and another ends. The saturation-, rounding- and shifting-function (srss) used on the derivative and the output v uses 19 cycles, the saturation arithmetic on the integral uses 10 cycles and the anti-windup function uses 12 cycles.

Table 3. Cycle count for different parts of the PID-controller.

    OPERATION           TMS32010 cycles   TMS320C25 cycles
    Proportional               9                  9
    Derivative                43                 45
    srss on D                  7                  7
    Integral shifting         12                  8
    srss on v                 23                 23
    anti-windup               12                 12
    Integral                  15                 13
    Integral s.a.             10                 10
    I/O and other             14                 14
    Total                    145                141

    srss = saturation, round, shift, store
    s.a. = saturation arithmetic

Notice that the code must be modified if Kc and N are to be larger than 16. Also notice that the code can be improved if the parameters of the controller can be limited to smaller ranges. For specific applications, where tighter bounds on parameters and controller states are available, the code can be shortened drastically by removing saturation arithmetic and by simplifying scaling.

It is interesting to note that a crude time estimate, based on the operation count in Table 1, underestimates the computation time by an order of magnitude.
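The repeated-accumulation trick used for the derivative (adding the P register 16 times in overflow mode, rather than shifting left by 4) can be sketched in C as follows. This is our illustration of the idea, not the actual TMS320 code.

```c
#include <stdint.h>

/* Saturating add, standing in for the C25 accumulator in overflow mode. */
int32_t sat_add(int32_t acc, int32_t x) {
    int64_t s = (int64_t)acc + x;
    if (s > INT32_MAX) return INT32_MAX;
    if (s < INT32_MIN) return INT32_MIN;
    return (int32_t)s;
}

/* Adding the product 16 times (RPTK + APAC on the C25) instead of
 * shifting it left by 4: an overflow saturates the accumulator, which
 * a plain left shift would not do.                                    */
int32_t times16_saturating(int32_t acc, int32_t product) {
    for (int i = 0; i < 16; i++)
        acc = sat_add(acc, product);
    return acc;
}
```

A value that would overflow when shifted left by 4 instead sticks at the largest representable number, exactly the behavior the paper relies on.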

6. Testing

To obtain high-quality code it is necessary to develop good testing procedures. The DSP code for the PI and PID controllers was tested by simple laboratory experiments to verify that the controller worked as a proper PID controller. To ensure that the code gives the correct numerical results, the following procedure was introduced. Since a PID controller is a dynamical system, its behavior can be tested by computing its response to given input data with known responses. The test can easily be automated by storing the data in files. This was easily done using the facilities in the Texas Instruments Software Development System. This section describes how the testing was done. The parameters used were Kc = 0.6, Td = 0.5, Ti = 2.2, Tt = 0.5 and N = 8. The parameters ad, bd, bi and bt were calculated by assuming a sampling period of 0.1 s.

To test the proportional and integral action, a symmetrical square wave with a period of 40 s and an amplitude of 0.1 V was used as the input sequence. To get a simple case, the parameters of the derivative term were set to zero (which is really not necessary, since the derivative dies out very quickly). This sequence can therefore also be used to test a PI controller. Figure 8 shows the input and the resulting output. For a constant input the output of the controller at the time t should be

    u(t) = (Kc t/Ti) e + I(0) + Kc e

With I(0) = 0 the output should be equal to −0.6055 V after 20 seconds. The line y = −0.6055 is also drawn in Figure 8, indicating that the proportional and the integral terms work properly.

Figure 8. Test of the proportional and integral actions.

To test the derivative action, two impulses lasting one sampling period, of magnitude −0.1 V and +0.1 V, were applied to the input at the times t = 1 s and t = 3 s. Figure 9 shows the result. The formula for the derivative term is

    D(t_k) = ad D(t_{k−1}) + bd (y(t_{k−1}) − y(t_k))

If an impulse of magnitude 0.1 V is applied to the derivative we get the sequence: −0.2446, 0.1136, 0.0437, 0.0168, 0.0065, ... The first numbers of this sequence are also plotted in Figure 9, showing that the derivative action works properly. The small error at the beginning of the second response is due to the integral of the first impulse. This integral is canceled out by the second impulse, resulting in a final output equal to zero.

Figure 9. Test of the derivative action.

To test the saturation arithmetic, the amplitude of the input square wave was increased to 0.7 V. Figure 10 shows good results: when the output reaches the limit it is saturated without causing overflow oscillations.

Figure 10. Test of the saturation arithmetic.

Finally, Figure 11 shows the result when the anti-windup reset function is used to limit the output to ±0.3 V.

Figure 11. Test of the anti-windup.

All versions of the PI and PID controllers were tested by using these input

sequences. Once a correct set of output files has
been obtained, one can test modified algorithms
simply by comparing the output files, either by
plotting the outputs or by using a file-compare
program.
Other testing procedures were also developed
using ideas similar to the ones described above.
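The expected PI response used above can be cross-checked in a few lines of Python. This is an illustrative sketch, not part of the original test suite:

```python
# Cross-check of the PI test case: with K = 0.6, Ti = 2.2 and a constant
# error e = -0.1 V, u(t) = K*e + (t*K/Ti)*e + I(0) should reach -0.6055 V
# after 20 s, matching the line drawn in Figure 8.
K, Ti, e, I0 = 0.6, 2.2, -0.1, 0.0

def u(t):
    """PI response to a constant error e, starting from integral state I0."""
    return K*e + (t*K/Ti)*e + I0

assert abs(u(20) - (-0.6055)) < 1e-3
```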

7. Conclusions

This paper has given algorithms for high quality
PI and PID controllers with features like set-point
weighting, limitation of the derivative gain and
anti-windup. It has also been demonstrated how the
code can be implemented on a DSP using fixed-point
calculations. Such an implementation necessarily
requires some a priori knowledge of signal and
parameter ranges. This means that the code given
here only works well in cases that fit the
assumptions made.

We have attempted to describe our reasoning
in sufficient detail so that the code can be easily
adapted to other situations. Some test procedures
that we have found useful are also presented. The
performance estimates show that the PI controller
can be executed at 53 kHz on a TMS32010 and at
112 kHz on a TMS320C25.
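For readers adapting the code, the controller structure is easy to prototype in floating point first. The Python sketch below implements the standard PID form the paper builds on (set-point weighting b, derivative gain limited by N, anti-windup with tracking time Tt). It is an illustration of the algorithm, not a transcription of the assembly listings:

```python
# Floating-point reference PID, one sample per call.  Coefficients ad, bd
# follow the limited-derivative discretization used in the paper.
class PID:
    def __init__(self, K, Ti, Td, Tt, N, b, h, umin, umax):
        self.K, self.Ti, self.Tt = K, Ti, Tt
        self.b, self.h = b, h
        self.umin, self.umax = umin, umax
        self.I = 0.0              # integral state
        self.D = 0.0              # filtered derivative state
        self.y_old = 0.0
        self.ad = Td/(Td + N*h)   # derivative filter pole
        self.bd = K*Td*N/(Td + N*h)

    def step(self, ysp, y):
        P = self.K*(self.b*ysp - y)                          # proportional part
        self.D = self.ad*self.D - self.bd*(y - self.y_old)   # derivative part
        v = P + self.I + self.D                              # unsaturated output
        u = min(self.umax, max(self.umin, v))                # actuator limits
        # integral update with anti-windup tracking of the saturation error
        self.I += self.K*self.h/self.Ti*(ysp - y) + self.h/self.Tt*(u - v)
        self.y_old = y
        return u

pid = PID(K=0.6, Ti=2.2, Td=0.5, Tt=0.5, N=8, b=1.0, h=0.1,
          umin=-0.3, umax=0.3)
u = pid.step(ysp=0.1, y=0.0)
assert -0.3 <= u <= 0.3
```

The parameter values in the usage line are the ones from the Testing section; the class interface itself is a hypothetical convenience, not an API from the paper.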

8. References

Åström, K. J., and T. Hägglund (1988): Automatic Tuning of PID Controllers, ISA, Research Triangle Park, NC.
Åström, K. J., and B. Wittenmark (1990): Computer Controlled Systems - Theory and Design, Second edition, Prentice-Hall, Englewood Cliffs, NJ.
Deshpande, P. B., and R. H. Ash (1981): Computer Process Control, ISA, Research Triangle Park, NC.
Texas Instruments (1986): Digital Signal Processing Applications with the TMS320 Family - Theory, Algorithms, and Implementations, Digital Signal Processing, Semiconductor Group.
Texas Instruments (1989a): TMS320C1x / TMS320C2x - User's Guide, Digital Signal Processor Products.
Texas Instruments (1989b): TMS320 Family Development Support - Reference Guide, Digital Signal Processor Products.
Texas Instruments (1990a): Digital Signal Processing - Applications with the TMS320 Family, Application book volume 3, Digital Signal Processor Products.
Texas Instruments (1990b): TMS320C3x - User's Guide, Digital Signal Processor Products.

215

Appendix A: PI-Controller for TMS32010
PI Controller for TMS32010 Version 1.0
Author: Hermann Steingrimsson
Date: 3-26-1990
RESERVE SPACE IN DATA MEMORY FOR CONSTANTS AND VARIABLES
.bss
ITE1,1
i Temporary storages
.bss
LTE1,1
.bss
ITE2,1
LTE2,1
.bss
.bss
11,1
ilntegral high
.bss
IL,l
ilntegral low
KC,l
iCoeff for P
.bss
.bss
KCB,l
jCoeff for I
.bss
BI,l
BT,l
.bss
.bss
UMU,l
jMaximum output
UMIN,l
.bss
iMinimum output
.bss
MODE,l
iExtra constant
.bss
CLOCK,l
i Sampling rate
.bss
ONE,l
iOne
.bss
MUNUM,l
i Maximum number
.bss
MINNUM, 1
i Minimum number
DTend
.bss
MINUS ,1
iFFFF
iEnd of parameters in data memory
.bss
.bss
.bss
.bss
.bss
.bss

YN,l
YNM1,1
YSP,l
UN,l
VN,l
STAO,l

jy(n)
iy(n-1)
iY set point
iOutPUt
iOutput before f
iSpace to store status register

iBegin program memory
.sect
B
B

"IRUPTS"
START
ISR

iBranch to start of program
iInterupt service routine

;Store parameters in program memory
        .data
Ptable  .set  $
        .word 1229,1229,894,6554,9830,-9830,1,1,1,32767,-32768
        .word -1
Ptend   .set  $-1
SCALE   .set  15
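The entries of the parameter table can be reproduced from the controller settings of the Testing section (K = 0.6, Ti = 2.2, Tt = 0.5, h = 0.1 s, set-point weighting b = 1), assuming the coefficient definitions bi = K*h/Ti and bt = h/Tt and the divide-by-16 pre-scaling of the proportional coefficients noted in the listings. A Python sketch of the Q15 conversion:

```python
# Q15 conversion of the PI coefficient table: each value is a fraction of
# full scale stored as a rounded 16-bit integer (x * 2^15).
def q15(x):
    """Convert a fraction in [-1, 1) to a rounded Q15 integer."""
    return round(x * 32768)

K, Ti, Tt, h, b = 0.6, 2.2, 0.5, 0.1, 1.0

assert q15(K/16) == 1229        # KC,  K pre-divided by 16
assert q15(b*K/16) == 1229      # KCB, set-point weighting b = 1
assert q15(K*h/Ti) == 894       # BI
assert q15(h/Tt) == 6554        # BT
assert q15(0.3) == 9830         # UMAX = +0.3
assert q15(-0.3) == -9830       # UMIN = -0.3
```

The identification of table entries with named coefficients follows the order of the .bss declarations; the remaining entries (1, 1, 1, 32767, -32768, -1) are MODE, CLOCK, ONE, MAXNUM, MINNUM and MINUS.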

iInitialize
START

•text
DINT
NOP
SOVM

iDisable interupts
iSet overflow mode

iLoad coeff from prog. mem to data memo use TBLR (not BLKP) for 1. generation
idevices

LOAD

LARK ARO,DTend
LARK AR1,Ptend-Ptable
LACK Ptend
LARP ARO
TBLR *-,ARl
SUB
ONE
BANZ LOAD

iARO points to end of data block
iCounter
iBeginning addre,ss :j.n program memory
iPoint to ARO
i Move , decr. ARO and point to ARl
iSubtract one from accumulator
iARl not 0 then decr. ARl and branch
i=> Coeff loaded into data memory

iInitialize variables

WAITl
GETl
WAIT2
GET2

LDPK IH
ZAC
SACL IH
SACL IL

iPoint to correct data page
iClear variables

OUT MODE,PA4
OUT CLOCK ,PAS

iInit analog board

BIOZ
B
IN

iLoad ysp

GETl
WAITl
YSP,PA3

BIOZ GET2
B
WAIT2
TNM1,PAO
IN

iLoad yen-l)

iBegin PI
WAIT
GET

BIOZ GET
B
WAIT
IN
TN,PAO

iWait for input

ZAC

iClear accumulator

iP-section
LT
MPJ

YSP
KCB

iy(n) * KCB
LTl Yli
MPY XC
SPAC

iacc • y(n)*XCB - ysp*XC

SACH HTEl
SACL LTEl

iStore P temporarily

ZALH
ADDS
SACH
SACL
LAC
SACH
LAC
lOR
AND
ADD

m

iShift integral right 4

HTE2
LTE2
LTE2.12
LTE2
MINUS. 12
MINUS
LTE2
HTE2.12

iI in acc rigth shifted 4

n

ADDS LTEl
ADDH HTEl

iAdd P to acc to for.m P + I

LARK ARO.VB
LARP ARO
CALL ROUOF4

iPoint ARO to VN

CALL FUNCT
OUT UN.PAl

iActuator saturation function
iOutput control signal

iRound off and overflow check

iI-section
ZAC
LT
MPY

YSP
BI

LTl IN
MPY BI
SPAC
LT
MPY

UN
BT

LTl VB
MPY BT
SPAC
ADDS
ADDH


n

m

iAdd old I with double precision


        SACH IH              ;Store integral
        SACL IL
;Overflow check (10 instr. cycles)
;Subtract maximum pos. number
;If acc <= 0 then no overflow
;else store maximum number

SACH
SACL
B

INEG
MAlNUH,SCALE
OUT4
MAlNUH,SCALE
IH
IL
OUT5

INEG

SUB
BGEZ
LAC
SACH
SACL
B

MINNUH,SCALE
OUT4
MINNUH , SCALE
IH
IL
OUT5

;Subtract maximum neg number
;If acc >= 0 then no overflow
;else store minimum number

OUT4

NOP
NOP
NOP
NOP
NOP
EINT
NOP
NOP
DINT

BLZ
SUB
BLEZ
LAC

OUT5

B

;Enable interupt

WAIT

;Disable interupt
;Loop again

;Rounding and overflow function (11 cycles)
ROUOF4 BLZ
ADD
SACH
SACL
SUB
BLEZ
ZALS
SACL

RNEG
ONE,SCALE-5
HTEl
LTEl
MAlNUM,SCALE-4
RNO
MAlNUM

;Check if number negative
; Round
;Store value
;Subtract scaled max pos number
;If acc <= 0 then no overflow
;else store max num

*

RET

RNEG

ADD
SACH
SACL
SUB
BGEZ
ZALS
SACL

ONE,SCALE-5
HTEl
LTEl
MINNUM,SCALE-4
RNO
MINNUM

*

; Round
;Store value
;Subtiact scaled min neg number
;If acc >= 0 then no overflow
;else store min neg number

RET

RNO

ZALH
ADDS
SACH
SACL
ZALH
SACH
ZALH
ADDS
SACH
RET

HTE1
LIE1
HTE1,4
LIE1
LIE1
LIE1,4
HTE1
LIE1
.,16-SC.lLE

jShift number left 4 before store

jSaturation function (14 instr. cycles)
FUNCT   ZALH VN              ;Load VN
        SUBH UMIN
        BLZ  LOWER1          ;Branch if v < umin
        ZALH VN
        SUBH UMAX
        BLZ  SAME            ;Branch if v < umax
        B    HIGHER          ;v >= umax
LOWER1  ZALH UMIN
        SACH UN              ;u = umin
        NOP                  ;Always same time
        NOP
        NOP
        NOP
        NOP
        NOP
        RET
SAME    ZALH VN
        SACH UN              ;u = v
        NOP
        NOP
        RET
HIGHER  ZALH UMAX
        SACH UN              ;u = umax
        RET

;Interrupt service routine to read the set point value
ISR     SST  STA0            ;Save status
        IN   YSP,PA3         ;Load ysp
        LST  STA0            ;Restore status
        RET                  ;Return
        .end
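In high-level terms, the saturation function FUNCT above is just a clamp of the computed control signal v to the actuator limits. A minimal Python model for illustration (the ±0.3 defaults correspond to the UMAX/UMIN table entries, 9830/32768 ≈ 0.3; they are not part of the routine itself):

```python
# Behavioral model of FUNCT: u = umin if v < umin, umax if v >= umax,
# otherwise u = v.  The DSP version additionally pads the branches with
# NOPs so every path takes the same number of cycles.
def funct(v, umin=-0.3, umax=0.3):
    """Actuator saturation applied to the controller output."""
    if v < umin:
        return umin
    if v >= umax:
        return umax
    return v

assert funct(0.1) == 0.1
assert funct(0.5) == 0.3
assert funct(-0.5) == -0.3
```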

Appendix B: PI-Controller for TMS320C25
PI Controller for TMS320C25 Version 1.0
Author: Hermann Steingrimsson
Date: 3-26-1990

RESERVE SPACE IN DATA MEMORY FOR CONSTANTS AND VARIABLES
.bss
HTE1, 1
iTemporary storages
.bss
LTE1,l
HTE2, 1
.bss
LTE2,l
.bss
IH,l
.bss
iIntegral high
IL,l
.bss
ilntegral low
KC,l
.bss
iCoeff for P
.bss
KCB,l
BI,l
.bss
iCoeff for I
.bss
BT,l
.bss
UMAI,l
i Maximum output
.bss
UMIN,l
i Minimum output
MODE,l
.bss
iExtra constant
CLOCK,l
.bss
i Sampling rate
ONE,l
.bss
iOne
.bss
MAINUM,l
i Maximum number
.bss
MINNUM, 1
i Minimum number
.bss
MINUS ,1
DTend
iFFFF
iEnd of parameters in data memory
.bss
.bss
.bss
.bss
.bss
.bss
.bss

YN,l
YNM1,l
YSP,l
UN,l
VN,l
STAO,l
ST11,l

iy(n)
iy(n-l)
iY set point
iOutput
iOutput before f
iSpace to store status register

iBegin program memory
. sect
B
B

"IRUPTS"
START
ISR

iBranch to start of program
ilnterupt service routine

;Store parameters in program memory
        .data
Ptable  .set  $
        .word 1229,1229,894,6554,9830,-9830,1,1,1,32767,-32768
        .word -1
Ptend   .set  $-1
SCALE   .set  15



jlnitialize
START

.text
DINT
NOP
SOVM
SSIM
SPM 0

jDisable interupts
jSet overflow mode
jSet sign-extension mode
jNo shifting from P register

jLoad coeff from prog. mem to data memo

LOAD

LRLK
LARK
LALK
LARP
TBLR
SUBK
BANZ

ARO,DTend
AR1,Ptend-Ptable
Ptend
ARO
*-,AR1
1
LOAD

jARO points to end of data block
jCounter
jBeginning address in program memory
jPoint to ARO
jMove. decr. ARO and point to AR1
jSubtract one from accumulator
jAR1 not 0 then decr. AR1 and branch
j=> Coeff loaded into data memory

jlnitialize variables
LDPK
ZAC
SACL
SACL

jWAITi
GET1

IH

jPoint to correct data page
j Clear variables

IH
IL

OUT MODE,P14
OUT CLOCK,PA5

jlnit analog board

BIOZ GET1
B
WAITi
IN
YSP,PA3

jLoad ysp

jBegin PID
JWAIT
WAIT

BIOZ GET
B
WAIT
IN
Yll,PAO

jWait for input
jChange WAIT to GET when

jP-section
LT
MPY

YSP
KCB

LTP
Yll
MPY KC
SPAC


jy(n) * KCB
jacc = y(n)*KCB - ysp*KC

are removed


SACB
SACL

BTE1
LTE1

iStore P

ZALB
ADDS
SPR
SPR
SPR
SPR

IH

iShift integral right 4
ibecause coeff of P where divided by 16

ADDS
ADDB

LTE1
BTE1

IL

!

iAdd,P to acc to form P + I

LRLK .&.RO,VN
LARP .&.RO
CALL ROUOP4

iPoint ARO to VN
iRound off and overflow check

CALL FUllCT
OUT UN,PA1

iActuator saturation function
iOutPUt control signal

iI-section
IT
MPY

YSP
BI

lTP
MPY

BI

lTS
MPY

UN
BT

LTA
MPY
SPAC

VN
BT

ADDS
ADDH

Il
IH

iAdd old I with double precision

SACH
SACL

IH
IL

iStore integral

TN

BlZ
INEG
SUB
MAlNUM, SCALE
BlEZ OUT4
LAC
MAlNUM ,SCALE
SACH m
SACl Il
B
OUT5

iOverflov check (10 instr. cycles)
iSubtract maximum pos. number
iIf acc <= 0 then no overflow
ielse store maximum number


INEG

SUB MINNUM,SCALE
BGEZ OUT4
LAC
MINNUM,SCALE
SACH IH
SACL IL
OUT5
B

OUT4

NOP
NOP
NOP
NOP
NOP
ElNT
NOP
NOP
DINT
B
WAIT

OUT5

;Subtract maximum neg number
;If acc )- 0 then no overflow
;else store minimum number

;Enable interupt
;Disable interupt
;Loop again

Rounding, overflow and shifting function (13 cycles)
ROUOF4 BLZ
RNEG
ADD
ONE,SCALE-5
SACH BrEl
SACL LTEl
SUB
MAlNOM ,SCALE-4
BLEZ RNO
ZJ.LS MAlNOK
SACL
*
NOP
RET
RNEG

RNO

ONE,SCJ.LE-5
ADD
SACH BTEl
SACL LTEl
SUB
KINNUM,SCALE-4
BGEZ RNO
ZALS KINNOK
SACL *
NOP
RET
ZALH BTEl
ADDS LTEl
SACH *,5
RET

;Check if number negative
; Round
; Store value .
;Subtract scaled max pos number
;If acc <= 0 then no overflow
;else store max num

; Round
;Store value
;Subtract scaled min neg number
;If acc )= 0 then no overflow
;else store min neg number

;Shift number left 4+1 before store

;Saturation function (12 instr. cycles)

FUNCT   ZALH VN              ;Load VN
        SUBH UMIN
        BLZ  LOWER1          ;Branch if v < umin
        ZALH VN
        SUBH UMAX
        BLZ  SAME            ;Branch if v < umax
        ZALH UMAX            ;v >= umax
        SACH UN              ;u = umax
        RET
LOWER1  ZALH UMIN
        SACH UN              ;u = umin
        NOP                  ;Always same time
        NOP
        NOP
        NOP
        RET
SAME    ZALH VN
        SACH UN              ;u = v
        RET

;Interrupt service routine to read the set point value
ISR     SST  STA0            ;Save status
        SST1 ST11
        IN   YSP,PA3         ;Load ysp
        LST  STA0            ;Restore status
        LST1 ST11
        RET                  ;Return
        .end


Appendix C: PID-Controller for TMS32010
PID Controller for TMS32010 Version 1.0
Roundoff Corrected
Author: Hermann Steingrimsson
Date: 3-26-1990

ad and Kc must be divided by 16 before stored
bd must be divided by 256 before storage
RESERVE SPACE IN DATA MEMORY FOR CONSTANTS,AND VARIABLES
.bss
HTE1,1
;Temporary storages
.bss
LTE1,1
.bss
BTE2,1
LTE2, 1
.bss
.bss
IB,1
;Integral high
IL,1
.bss
;Integral lOll
DB,1
.bss
; Derivative high
.bss
KC,1
;Coeff for P
.bss
KCB,1
.bss
BI,1
;Coeff for I
.bss
BT,1
.bss
BD,1
;Coeff for D
.bss
AD,1
UMAI,1
.bss
; Maximum output
.bss
UMIN,1
;Minimum output
MODE,1
.bss
;Extra constant
CLOCK,1
.bss
; Sampling rate
ONE,1
.bss
; One
.bss
MAIWM,1
; Maximum number
.bss
MINNUM,1
; Minimum number
MINUS,1
;FFFF
DTend
.bss
;End of parameters in data memory
.bss
.bss
.bss
.bss
.bss
.bss

YN,1
YBM1,1
YSP,1
UN,1
VN,1
STAO,1

;y(n)
;y(n-1)
;y set point
; Output
;Output before f
;Space to store status register

;Begin program memory
.sect
B

B

"IRUPTS"
START
ISR

;Branch to start of program
;Interupt service routine

;Store parameters in program memory



        .data
Ptable  .set  $
        .word 1229,1229,894,6554,236,788,9830,-9830,1,1,1,32767,-32768
        .word -1
Ptend   .set  $-1
SCALE   .set  15

jInitialize
START

. text
DINT
NOP
SOVM

jDisable interupts
jSet overflow mode

jload coeff from prog. mem to data memo use TBlR (not BlKP) for 1. generation
jdevices
ARO,DTend
JARO points to end of data block
AR1,Ptend-Ptable jCounter
LACK
Ptend
jBeginning address in program memory
jPoint to ARO
LARP no
jMove, decr. ARO and point to ARl
TBlR *- ,ARl
SUB
ONE
jSubtract one from accumulator
BANZ lOAD
jARl not 0 then decr. ARl and branch
j=> Coeff loaded into data memory

lARK
LARK

lOAD

jInitialize variables

WAITl
GETl
WAIT2
GET2

lDPK II
ZAC
SACl II
SACl IL
SACl DI

jPoint to correct data page
jClear variables

OUT MODE,PA4
OUT ClOCK,PA5

jInit analog board

BIOZ
B
IN

GETl
WAITl
YSP,PA3

jload ysp

BIOZ
B
IN

GET2
WAIT2
YNM1,PAO

jload y(n-l)

GET
WAIT

jWait for input

jBegin PID
WAIT

BIOZ
B


GET

II

Ylf,PI0

;Change WAIT to GET when

are removed

Z1LK
SUBK
SACK
DMOV

YlfMl
Ylf
!TEl
Ylf

iy(n-l) - yen)

LT
HPY
PIC

DB

;ad*D (ad was divided by 16)

LT
MPY

!TEl
BD

;D-section

iStore difference
iCOPY YI into YIMl

AD

IPIC
IPIC
!PAC
APAC
IPIC
!PAC
APAC
APAC
!PAC
APAC
APAC
IPIC
APAC
IPIC
!PIC
!PAC

il
i2
i3

;4
;5
;6

;difference * bd
;Since bd was divided by 256, bd*diff is
iadded 16 times to the accumulator to
iform D divided by 16. By doing this the
ioverflow mode will take care of overflow

i7
;8

i9

;10
ill
i12

;13
i14
i15
i16

SACB !TE2
SICL LTE2
LARK ARO,DB
L1RP ARO
CILL ROUOF4
ZALB HTE2
ADDS LTE2

iStore derivative
iPoint to DB
;Check for overfl. shift and store
iRestore the derivative

iP-section


LT
HPY

YSP
KCB

LTl
HPY

Ylf
KC

;y(n) * KCB
iacc = y(n)*KCB - ysp*KC


SPAC
SACH HTEl
SACL LTEl
Z!LH
ADDS
SACH
SACL
LAC
SACH
LAC
lOR
AND

ADD

;Store P + D

m

jShift integral right 4

IL
HTE2
LTE2
LTE2,12
LTE2
MINUS, 12
MINUS
LTE2
HTE2,12

jI in acc right shifted 4

ADDS LTEl
ADDH HTEl

jAdd P + I to acc to form P + I + D

LARK !RO,VN
LARP !RO
CALL ROUOF4

jPoint ARO to VN
jRound off and overflow check

CALL PUNCT
OUT UN ,PAl

jActuator saturation function
jOutput control signal

jI-section
ZAC
LT
MPY

YSP
BI

LTA TN
MPY BI
SPAC
LT
MPY

UN
BT

LT!
MPY
SPAC

BT

VN

ADDS IL
ADDH m

jAdd old I with double precision

SACH m
SACL IL

;Store integral



BLZ
!NEG
SUB
MlINUM,SCALE
BLEZ OUT4
LAC MAINUM,SCALE
SACH IH
SACL IL
B
OUTS

;Overflow check (10 instr. cycles)
;Subtract maximum pos. number
;If acc <= 0 then no overflow
;else store maximum number

!NEG

SUB
MINNUM,SCALE
BGEZ OUT4
LAC
MINNUM , SCALE
SACH IH
SACL IL
B
OUTS

;Subtract maximum neg number
;If acc >= 0 then no overflow
;else store minimum number

OUT4

NOP
NOP
NOP
NOP
NOP
EINT
NOP
NOP
DINT

OUTS

B

;Enable interupt

WAIT

;Disable interupt
;Loop again

Rounding, overflow and shifting function (19 cycles)
ROUOF4 BLZ
ADD
SACH
SACL
SUB
BLEZ
ZALS
SACL
NOP
NOP
NOP
NOP
NOP
NOP
NOP

RNEG
ONE,SCALE-5
HTEl
LTEl
MllNUM,SCALE-4
RNO
MAINUM
•

;Check if number negative
; Round
;Store value
;Subtract scaled max pos number
;If acc <= 0 then no overflow
;else store max num

RET

RNEG


ADD
ONE,SCALE-5
SACH HTEl
SACL LTEl
SUB
MINNUM,SCALE-4

; Round
;Store value
;Subtract scaled min neg number


RNO

BGEZ
ZALS
SACL
NOP
NOP
NOP
NOP
NOP
NOP
NOP
RET

RNO
MINNUM

ZALH
ADDS
SACH
SACL
ZALH
SACH
ZALH
ADDS
SACH
RET

HTEl
LTEl
HTE1.4
LTEl
LUl
LTE1.4
HTEl
LTEl
.... 16-SCALE


iIf acc >= o then no overflow
ie1se store min neg number

...

iShift number left 4 before store

iSaturation function (12 instr. cycles)
FUNCT   ZALH VN              ;Load VN
        SUBH UMIN
        BLZ  LOWER1          ;Branch if v < umin
        ZALH VN
        SUBH UMAX
        BLZ  SAME            ;Branch if v < umax
        ZALH UMAX            ;v >= umax
        SACH UN              ;u = umax
        RET
LOWER1  ZALH UMIN
        SACH UN              ;u = umin
        NOP                  ;Always same time
        NOP
        NOP
        NOP
        RET
SAME    ZALH VN
        SACH UN              ;u = v
        RET

;Interrupt service routine to read the set point value
ISR     SST  STA0            ;Save status
        IN   YSP,PA3         ;Load ysp
        LST  STA0            ;Restore status
        RET                  ;Return
        .end

Appendix D: PID-Controller for TMS320C25
PID Controller for TMS320C25 Version 1.0
Roundoff Corrected
Author: Hermann Steingrimsson
Date: 3-26-1990

ad and Kc must be divided by 16 before stored
bd must be divided by 256 before storage
RESERVE SPACE IN DATA MEMORY FOR CONSTANTS AND VARIABLES
.bss
HTE1,1
iTemporary storages
LTE1,1
.bss
.bss
HTE2,1
.bss
LTE2,1
.bss
IH,1
i Integral high
.bss
IL,1
i Integral low
.bss
DH,1
iDerivative high
.bss
KC,1
DTbeg
iCoeff for P
KCB,1
.bss
.bss
BI,1
iCoeff for I
.bss
BT,1
.bss
BD,1
iCoeff for D
.bss
AD,1
UMAl,1
.bss
i Maximum output
.bss
UMIN,1
i Minimum output
MoDE,1
.bss
i Extra constant
CLoCK,1
.bss
i Sampling rate
oNE,1
.bss
ione
.bss
MAlNUM,1
i Maximum number
.bss
MINNUM,1
i Minimum number
DTend
.bss
MINUS, 1
iFFFF
iEnd of parameters in data memory
.bss
.bss
.bss
.bss
.bss
.bss
.bss

YN,1
YNM1,1
YSP,1
UN,1
VN,1
STAO,1
ST11,1

iy(n)
iy(n-1)
iY set point
ioutput
ioutput before f
iSpace to store status register

iBegin program memory
.sect
B
B

"IRUPTS"
START
ISR

iBranch to start of program
iInterupt service routine

iStore parameters in program memory


        .data
Ptable  .set  $
        .word 1229,1229,894,6554,236,788,9830,-9830,1,1,1,32767,-32768
        .word -1
Ptend   .set  $-1
SCALE   .set  15

; Initialize
START

.text
DINT
NOP
SOVM
SSIM
SPM
0

iDisable interupts
iSet overflow mode
;Set Sign-extension mode
iNo shifting from P register

iLoad coeff from prog. mem to data memo use BLKP
LRLK

ARO ,DTbeg

LARP

ARO

RPTK
BLKP

Ptend-Ptable
Ptable,*+

iARO points to end of data block
iSet up counter
;Move data
i=> Coeff loaded into data memory

;Initialize variables

WAITl
GETl
WAIT2
GET2

LDPK IR
ZAC
SACL IR
SACL IL
SACL DR

iPoint to correct data page
;Clear variables

OUT
OUT

MODE,PH
CLOCK,PA5

,i Ini t analog board

BIOZ
B
IN

GET 1
WAITl
YSP,PA3

iLoad ysp

BIOZ
B
IN

GET2
WAIT2
TNM1,PAO

iLoad y(n-1)

;Begin PID
WAIT
GET


BIOZ GET
B
WAIT
IN
TN,PAO

;Wait for input
;Change WAIT to GET when

are removed


;D-section
ZALB INHl
SUBB IN
SACB HTEl
DHOV IN

;y(n-l) - yen)

LT
HPY

DB

;ad*D (ad was divided by 16)

LTP
HPY

HTEl
BD

;difference * bd, and store previous product

RPTK
APAC

15

;Since bd was divided by 256, bd*diff is
;added 16 times to the accumulator to
;form D divided by 16. By doing this the
;overflow mode will take care of overflow

SACB
SACL
LRLK
LARP
CALL
ZALB
ADDS

HTE2
LTE2
ARO,DB
ARO
ROUOF4
HTE2
LTE2

;Store derivative

;Store difference
;Copy YN into YNHl

AD

;Point to DB
;Check for overfl. shift and store
;Restore the derivative

;P-section
LT
HPY

YSP
KCB

L11
HPY
SPAC

YN
KC

;acc

SACB
SACL

HTEl
LTEl

;Store P + D

;y(n) * KCB

= y(n)*KCB

- ysp*KC

;P + D are now divided by 16 => shift integral right 4 bits before adding
P + D

ito

ZALB
ADDS
SFR
SFR
SFR
SFR

lB

;Shift integral right 4

n



ADDS
ADDH

lTEl
UTEl

;Add P + I to acc to form P + I + D

lRLK
LARP
CAll

ARO,VN
ARO
ROUOF4

;Point ARO to VN
;Round off and overflow check

CAll FUMCT
OUT UN,PAl

;Actuator saturation function
;Output control signal

; I-section
IT
MPY

YSP
BI

lTP
MPY

TN

lTS
MPY

UN
BT

lTA
MPY
SPAC

BT

ADDS
ADDH

Il
IH

;Add old I with double precision

SACH
SACl

IH

;Store integral

n

BI

VN

BlZ
INEG
MAlNUM , SCALE
SUB
BlEZ OUT4
LAC
MAlNUM,SCAlE
SACH IH
SACl n
B
OUT5

;Overflow check (10 instr. cycles)
; Subtract maximum pos. number
;If acc <= 0 then no overflow
; else store maximum number

INEG

SUB
MINNUM,SCAlE
BGEZ OUT4
LAC
MINNUM, SCALE
SACH IH
SACl n
B
OUT5

;Subtract maximum neg number
;If acc >= 0 then no overflow
;else store minimum number

OUT4

NOP
NOP
NOP



NOP
NOP
OUT5

;Enable interupt

EDIT

NOP
NOP
DINT
B

WAIT

;Disable interupt
;Loop again

Rounding, overflow and shifting function (19 cycles)
ROUOF4 BLZ
ADD
SACH
SACL
SUB
BLEZ
ZALS
SACL
NOP
RET

RNEG
ONE,SCALE-5
HTE1
LTE1
MAlNUM,SCALE-4
RNO
MAlNUM
*

ADD
SACH
SACL
SUB
BGEZ
ZALS
SACL
NOP
RET

ONE,SCALE-5
HTE1
LTE1
HINNUM,SCALE-4
RNO
MINNUM
*

RNEG

RNO

ZALH HTE1
ADDS LTE1
SACH *,5
RET

;Check if number negative
; Round
;Store value
;Subtract scaled max pos number
;If acc <= 0 then no overflow
;else store max num

; Round
;Store value
;Subtract scaled min neg number
;If acc>= 0 then no overflow
;else store min neg number

;Shift number left 4 before store
;+1 shift because of sign

;Saturation function (12 instr. cycles)

FUNCT   ZALH VN              ;Load VN
        SUBH UMIN
        BLZ  LOWER1          ;Branch if v < umin
        ZALH VN
        SUBH UMAX
        BLZ  SAME            ;Branch if v < umax
        ZALH UMAX            ;v >= umax
        SACH UN              ;u = umax
        RET
LOWER1  ZALH UMIN
        SACH UN              ;u = umin
        NOP                  ;Always same time
        NOP
        NOP
        NOP
        RET
SAME    ZALH VN
        SACH UN              ;u = v
        RET

;Interrupt service routine to read the set point value
ISR     SST  STA0            ;Save status
        SST1 ST11
        IN   YSP,PA3         ;Load ysp
        LST  STA0            ;Restore status
        LST1 ST11
        RET                  ;Return
        .end


DSP Implementation of a Disk Drive Controller†
Hermann Steingrimsson
Graduate School of Business
University of Wisconsin
Madison, Wisconsin, USA

1. Introduction

The purpose of this paper is to study implementation of a controller based on state estimation and
feedback from estimated states on a digital signal
processor. Design of a control system for a disk
drive is chosen as an example. The controller is
implemented on a DSP that does not have floating point hardware. The control problem is described in Section 2, which also describes mathematical models of different complexity. Design of
a controller is discussed in Section 3. This section
contains a derivation of a continuous time controller and a discrete time controller. The continuous controller is used to choose design parameters
and to estimate orders of magnitude. The discrete
time controller is the algorithm implemented on
the DSP. The section on control design also contains a discussion of design trade-offs. Implementation of the controller on a DSP is discussed in
Section 4. Scaling of parameters and states is a
major issue. An outline of the code is given. The
complete code is listed in the appendix. Testing of
the code is described in Section 5 and the paper
ends with conclusions and references.

2. Disk Drive Control

Modern disk drives use fast voice coil actuators to
position the magnetic heads on a track and to
keep them on track under closed loop control. The
task of the position control system is twofold: to
position the heads over a desired track and to keep
them there. The first task is a servo problem, whereas
the second task is a regulation problem. This paper
treats the regulation problem.
† Part of this work was done when the first author was a
visiting professor and the second author a graduate student
at the University of Texas at Austin.

Reprinted, with permission from author.

Karl Johan Astrom
Department of Automatic Control
Lund Institute of Technology
Lund, Sweden

Two methods are currently used for feedback
measurements. In a dedicated servo an entire surface,
which could otherwise have been used for data, is
used for position information. In an embedded servo the
position information is embedded into the data
track at the beginning of each sector, instead of
using a separate surface. It is also possible to have
dual layers so that the servo information is on a
layer below the data layer.

The advantage of the dedicated servo is that
position information is available continuously. With
a dedicated servo it is therefore possible to use a
controller with a high bandwidth. In an embedded
servo, position information is only obtained at a
sector boundary. This limits the track-following
bandwidth and results in longer seek times and
more sluggish track following. A dedicated servo,
on the other hand, uses an extra surface for the
position information. Thermal differences between
the position surface and the data surfaces also give
rise to errors.

Linear or rotary actuators with a permanent
magnet and a voice coil are used to move the head
across the tracks. The arm is ideally a rigid body
which can be modeled as a double integrator. Large
accelerations will, however, excite resonant
modes. This makes it difficult to achieve a high
bandwidth for positioning and track following.

Analog controllers have been used for servos.
They contain amplifiers, compensation networks,
notch filters, switches and passive components. The
parameters of the analog components change with
temperature, and component aging can result in
deteriorated performance of the servo.

There are several advantages in using a digital
servo. Components having drift and aging are
avoided, the number of components can be reduced
and servo performance can be increased. A digital
servo will, however, require high sampling rates.
This makes a microcontroller less suitable. Inexpensive
DSPs offer computational power an order of
magnitude greater than microcontrollers, and some,
like the TMS320C14, also have input/output hardware
similar to a microcontroller. Such components are
ideally suited for implementation of fast servos of
the type used in disk drives.

Position Detector

The head/track misalignment is the only information
available to the controller. Control thus has
to be based on error feedback. The position detector
generates a voltage which is proportional to
the misalignment of the head and track. The
operating range is 23 µm and the output voltage is
in the range 0-5 V. After A/D conversion one unit
in the processor corresponds to a track/head
misalignment of 11.5 nm. The useful track width is
approximately 4.3 µm.

Control Signal

The D/A converter generates a voltage in the range
±5 V. This voltage is amplified by an amplifier
which generates a current. The current passes
through the voice coil and generates a torque to
move the arm.

Physical Constants of the Drive

The drive system has the following parameters:

    Pivot to head radius              R:    0.08 m
    Power amplifier gain              Kpo:  0.5 A/V
    Torque constant of the actuator   Kc:   0.09 Nm/A
    Total moment of inertia           J:    50e-8 kg m²

Mathematical Model

A mathematical model describing the position of
the arm as a function of the current through the
coil is a double integrator

    J d²θ/dt² = Kpo Kc u                        (1)

where θ is the angle of the arm. The transfer
function from voltage u to arm position y is

    Gp(s) = Y(s)/U(s) = Kp/s²                   (2)

where

    Kp = Kpo Kc R / J ≈ 72 m/s²V

The model given by Equation (1) neglects the fact
that the arm has compliance. If this is considered,
the plant transfer function becomes Gp1 = Gp G1,
where G1 is given by Equations (3) and (4).
Typical values of ω and ω1 are 2 kHz and 3 kHz.
The model given by Equation (1) is a good
approximation at low frequencies. Because of the
resonances this model does, however, not describe the
system well at frequencies approaching one kHz.
For those frequencies it is necessary to use models
like (3) and (4) or even more complicated models.

Disturbances

The major disturbances acting on the system are
a low frequency load disturbance and a periodic
tracking error. Load disturbances are due to the
torque from the wires connected to the arm. This
torque is almost constant at a given track, but it
changes with the track. It may also change with
temperature. The second disturbance is due to
the eccentricity of the disk, which translates into
a periodic tracking error. Since the amplitude
of this error is small, the disturbance can be
approximated by a sinusoid with the rotational
frequency of the disk. By introducing the state x3,
the load disturbance can be added to equation (1),
giving

    (5)

3. Controller Design

Control algorithms for the disk drive will be derived
in this section. A continuous time controller
for the simple rigid body model is first derived.
This derivation gives insight into the control problem
and guidelines for choosing the design parameters.
The controller is obtained using a straightforward
pole-placement method. See [Åström and
Wittenmark, 1990]. A discrete time algorithm is
then derived. This algorithm is the basis for the
DSP implementation. The section on control design
also contains a discussion of design trade-offs.
A state-space model of (5) is

    dx/dt = A x(t) + B u(t)
    y(t) = C x(t)                               (6)

where

    A = | 0  1  0 |     B = | 0  |     C = ( 1  0  0 )
        | 0  0  1 |         | Kp |
        | 0  0  0 |         | 0  |

    x1: position [m]
    x2: velocity [m/s]
    x3: torque [Nm]
    Kp: gain [m/s²V]
    u:  control signal [V]

Continuous-Time Controller

It is easily verified that the states x1 and x2 of the
model (6) are controllable. The disturbance state
x3 is naturally not controllable. All the states of
the system are observable. A controller based on
state feedback and an observer can therefore be
designed.

State Feedback.  The controller will now be derived
in the straightforward manner. See [Åström
and Wittenmark, 1990]. It is first assumed that all
states are measurable. The state feedback

    u(t) = -L x(t)                              (7)

gives the closed-loop system

    dx/dt = (A - BL) x(t)                       (8)

The gains l1 and l2 are selected such that the
characteristic polynomial of the closed loop system
becomes

    s(s² + 2 ζp ωp s + ωp²)                     (9)

Notice that the zero at the origin is due to the
uncontrollable disturbance mode. The characteristic
polynomial of (8) is

    s(s² + Kp l2 s + Kp l1)

To obtain (9) the feedback gains l1 and l2 should
thus be chosen as

    l1 = ωp²/Kp
    l2 = 2 ζp ωp/Kp                             (10)

The gain l3 is chosen to give perfect disturbance
cancellation, i.e.

    l3 = 1/Kp                                   (11)

The control law (7) can be interpreted as a feedback
from the process states x1 and x2 and a feedforward
from the disturbance state x3.

State Observer.  A state observer is given by

    dx̂/dt = A x̂(t) + B u(t) + K(y(t) - C x̂(t))    (12)

where x̂ is the estimate of the state vector x. The
reconstruction error x̃ = x - x̂ is given by

    dx̃/dt = (A - KC) x̃(t)                      (13)

The characteristic polynomial of this system is

    s³ + k1 s² + k2 s + k3

The observer gains k1, k2 and k3 are chosen so that
the observer has the characteristic polynomial

    (s + a0)(s² + 2 ζ0 ω0 s + ω0²)

The following observer gains are then obtained

    k1 = 2 ζ0 ω0 + a0
    k2 = ω0² + 2 ζ0 ω0 a0
    k3 = ω0² a0                                 (15)

Discrete-Time Controller

To derive a discrete time controller the system (6)
is sampled. This gives

    x(k+1) = Φ x(k) + Γ u(k)
    y(k) = C x(k)                               (16)

where

    Φ = | 1  h  h²/2 |     Γ = | Kp h²/2 |     C = ( 1  0  0 )     (17)
        | 0  1  h    |         | Kp h    |
        | 0  0  1    |         | 0       |

and h is the sampling interval. The states :1:1 and
:1:2 of the discrete time system (17) are controllable
but disturbance state :1:3 is of course uncontrollable.
All states are observable.
First consider the case when all states are measured. With
state feedback the closed-loop system has the
characteristic polynomial

    (z - 1) (z² - (2 - Kp l1 h²/2 - Kp l2 h) z
             + (1 + Kp l1 h²/2 - Kp l2 h))                     (18)

Notice that the pole z = 1 is due to the uncontrollable
disturbance mode. The desired closed-loop characteristic
polynomial is obtained by sampling (9). This gives

    (z - 1) (z² + ap1 z + ap2)

where

    ap1 = -2 e^(-ζp ωp h) cos( ωp h √(1 - ζp²) )
    ap2 = e^(-2 ζp ωp h)                                       (19)

Choosing the feedback gains l1 and l2 so that (19) and
(18) are the same gives

    l1 = (ap1 + ap2 + 1) / (Kp h²)
    l2 = (ap1 - ap2 + 3) / (2 Kp h)                            (20)

State Observer. A state observer of the form

    x̂(k|k) = x̂(k|k-1) + K (y(k) - ŷ(k|k-1))
    x̂(k+1|k) = Φ x̂(k|k) + Γ u(k)
    ŷ(k+1|k) = C x̂(k+1|k)                                     (21)

is chosen. The reconstruction error is then given by

    x̃(k+1|k) = Φ (I - KC) x̃(k|k-1)

This system has the characteristic polynomial

    z³ + (α - 3) z² + (3 - 2α + hβ + k3 h²/2) z
       + (α - hβ + k3 h²/2 - 1)                                (22)

where α = k1 + h k2 + h² k3/2 and β = k2 + h k3.
Requiring that this polynomial be equal to

    (z - ao3) (z² + ao1 z + ao2)

where ao1 and ao2 are given by expressions of the form
(19), with ω0 and ζ0 replacing ωp and ζp, and

    ao3 = e^(-a0 h)                                            (23)

gives

    k1 = 1 - ao2 ao3
    k2 = (ao1 - ao2 - ao3 + ao1 ao3 + 3 ao2 ao3 + 3) / (2h)
    k3 = (ao1 + ao2 - ao3 - ao1 ao3 - ao2 ao3 + 1) / h²        (24)

The Control Algorithm

Reorganizing the calculations to minimize the delay
between the A/D and D/A conversions gives the following
algorithm.

ALGORITHM 1

    1. Read    y(k)

    2. Compute
               e(k) = y(k) - ŷ(k|k-1)
               x̂(k|k) = x̂(k|k-1) + K e(k)
               v(k) = -L x̂(k|k)
               u(k) = f(v(k))

    3. Output  u(k)

    4. Update
               x̂(k+1|k) = Φ x̂(k|k) + Γ u(k)
               ŷ(k+1|k) = C x̂(k+1|k)

    5. Wait

where the function f is a model of the actuator
nonlinearity.

Notice that the algorithm has been organized so that the
computational delay between the A/D and D/A converters is
minimized. Notice also that Step 2 of this algorithm can
be expressed as

    x̂(k+1|k) = Φn x̂(k|k-1) + Γn y(k)
    v(k) = Cn x̂(k|k-1) + Dn y(k)                               (25)

where

    Φn = Φ - Φ K C - Γ L + Γ L K C
    Γn = Φ K - Γ L K
    Cn = -L + L K C
    Dn = -L K
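Algorithm 1 and the equivalence with the precomputed form (25) can be sketched in plain Python. The matrix values below are arbitrary illustrative numbers, not the disk-drive gains, and the actuator model f is taken as the identity (no saturation), which is the case in which (25) holds exactly.

```python
# Sketch of Algorithm 1 (Steps 2 and 4) and of the precomputed
# form (25). Small dense-matrix helpers keep it dependency-free.

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matsub(A, B):
    return [[A[i][j] - B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

# Arbitrary illustrative third-order data (NOT the disk-drive values).
Phi = [[1.0, 0.1, 0.005], [0.0, 1.0, 0.1], [0.0, 0.0, 1.0]]
Gam = [0.005, 0.1, 0.0]
C = [1.0, 0.0, 0.0]
K = [0.3, 0.5, 0.1]
L = [2.0, 1.5, 1.0]

def algorithm1_step(xpred, y):
    """Steps 2 and 4 of Algorithm 1 with f = identity."""
    e = y - sum(C[i] * xpred[i] for i in range(3))      # e(k)
    xfilt = [xpred[i] + K[i] * e for i in range(3)]     # x(k|k)
    v = -sum(L[i] * xfilt[i] for i in range(3))         # v(k); u(k) = v(k)
    xnext = [sum(Phi[i][j] * xfilt[j] for j in range(3)) + Gam[i] * v
             for i in range(3)]                          # x(k+1|k)
    return xnext, v

# Precomputed matrices of (25).
KC = [[K[i] * C[j] for j in range(3)] for i in range(3)]
GL = [[Gam[i] * L[j] for j in range(3)] for i in range(3)]
Phin = matsub(matsub(Phi, matmul(Phi, KC)), matsub(GL, matmul(GL, KC)))
PhiK = matvec(Phi, K)
LK = sum(L[i] * K[i] for i in range(3))
Gamn = [PhiK[i] - Gam[i] * LK for i in range(3)]
Cn = [-L[j] + LK * C[j] for j in range(3)]
Dn = -LK

# Both formulations give the same state update and control signal.
xpred, y = [0.2, -0.1, 0.05], 0.7
xnext, v = algorithm1_step(xpred, y)
xnext25 = [sum(Phin[i][j] * xpred[j] for j in range(3)) + Gamn[i] * y
           for i in range(3)]
v25 = sum(Cn[j] * xpred[j] for j in range(3)) + Dn * y
```

With saturation active, u(k) = f(v(k)) differs from v(k) and the precomputed form no longer applies, which is why the DSP code keeps the two-step structure.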

Sampling Frequency and Anti-Aliasing Filter

The following rule of thumb for the selection of the
sampling frequency for a digital controller with a
zero-order hold is given by [Astrom and Wittenmark, 1990]:

    0.2 ≤ ωc h ≤ 0.6                                           (26)

where ωc is the crossover frequency. With a sampling
frequency of 20 kHz the crossover frequency can be at
least 1 kHz. This was judged adequate for the application.

A prefilter in the form of a second-order Bessel filter
with a bandwidth of 7500 Hz was chosen to avoid aliasing.
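As a quick check of (26) with the 20 kHz sampling rate used here:

```python
import math

# Rule of thumb (26): 0.2 <= wc*h <= 0.6 for a zero-order-hold design.
h = 1.0 / 20000.0          # 20 kHz sampling
wc_max = 0.6 / h           # largest admissible crossover, rad/s
fc_max = wc_max / (2.0 * math.pi)
# fc_max is roughly 1.9 kHz, so a 1 kHz crossover sits
# comfortably inside the recommended range.
```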

Design Parameters

The controller has the design parameters ωp, ζp, ω0, ζ0,
a0, and h that must be chosen. The choice of sampling
interval has already been discussed. Parameters ζp and ζ0,
which represent relative damping, can easily be chosen.
Then there remain three parameters ωp, ω0, and a0.
Requirements on desired settling time and disturbance
rejection have to be matched against constraints due to
model uncertainty. Recall that the rigid body model used
for the design was not valid for frequencies approaching
1 kHz. After some experimentation the following design
parameters were chosen for the nominal case:

    ωp = 1000π    ζp = 0.80
    ω0 = 1500π    ζ0 = 0.80
    a0 = 200π
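The nominal parameters above, together with formulas (19), (20), (23), and (24), reproduce the numerical gains quoted in Section 4, equation (31). The check below assumes Kp = 74, a value inferred from the Γ vector in (31) (Γ2 = Kp h = 3.7·10^-3 with h = 50 μs); Kp is not stated explicitly in the text.

```python
import math

# Nominal design parameters.
wp, zp = 1000 * math.pi, 0.80
wo, zo = 1500 * math.pi, 0.80
a0 = 200 * math.pi
h = 50e-6
Kp = 74.0            # assumption: inferred from Gamma in (31)

# Sampled closed-loop polynomial coefficients, equation (19).
ap1 = -2 * math.exp(-zp * wp * h) * math.cos(wp * h * math.sqrt(1 - zp**2))
ap2 = math.exp(-2 * zp * wp * h)

# Feedback gains, equation (20).
l1 = (ap1 + ap2 + 1) / (Kp * h**2)
l2 = (ap1 - ap2 + 3) / (2 * Kp * h)

# Observer polynomial coefficients and gains, equations (23)-(24).
ao1 = -2 * math.exp(-zo * wo * h) * math.cos(wo * h * math.sqrt(1 - zo**2))
ao2 = math.exp(-2 * zo * wo * h)
ao3 = math.exp(-a0 * h)
k1 = 1 - ao2 * ao3
k2 = (ao1 - ao2 - ao3 + ao1 * ao3 + 3 * ao2 * ao3 + 3) / (2 * h)
k3 = (ao1 + ao2 - ao3 - ao1 * ao3 - ao2 * ao3 + 1) / h**2
```

These evaluate to l1 ≈ 1.1769·10^5, l2 ≈ 63.0, k1 ≈ 0.3353, k2 ≈ 1.10·10^3, and k3 ≈ 5.70·10^5, matching (31). The third element of L in (31), 1.3514·10^-2, equals 1/Kp, consistent with feedforward cancellation of the estimated disturbance state.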

Figure 1 shows a simulation of the response of the system
with the nominal design parameters. In the simulation a
step command of 11.5 μm is first applied. After 3 ms a
torque disturbance in the form of a step of 0.013 Nm is
applied.

The simulation was performed assuming that the plant model
is given by Equation (3), which has a resonance at 2 kHz.
The settling time is about 1.5 ms and the resonant modes
are not much excited by the command signal. With the rigid
body process model given by Equation (2) the system has an
amplitude margin of 3.2 and a phase margin of 31°. The
gain crossover frequency is 460 Hz and the phase crossover
is 1036 Hz. This indicates that the design based on the
rigid body model has acceptable margins.
Figure 1. Step response of the closed loop system.
(Head position (m) and control signal (V) versus time.)

The effects of the neglected dynamics on the margins can
be estimated as follows. Assuming that the system dynamics
is described by the model having one resonant mode,
Equation (3), the additional dynamics is given by

    G1(s) = ω1² / (s² + 2 ζ ω1 s + ω1²)

where ω1 is the undamped natural frequency (2 kHz) and ζ
is the relative damping (0.1). The magnitude M of the
transfer function G1 at ω is

    M = 1 / √( (1 - ω²/ω1²)² + (2 ζ ω/ω1)² )                   (28)

Introducing ω = ωpc = 1036 Hz, this equation gives
M = 1.36. The gain margin is thus decreased to 1.77. The
argument of the transfer function at ω is

    a = -arctan( (2 ζ ω/ω1) / (1 - ω²/ω1²) )                   (29)

With ωgc = 460 Hz this gives 2.8°. Figure 2 shows the
Nyquist curves with the nominal process transfer function
(2) and the transfer function with one resonant mode (3).
These curves show that the essential effect of the
resonant mode is to decrease the amplitude margin.

Figure 2. Nyquist curves for the loop transfer functions
with the rigid body dynamics, Equation (2), and the
dynamics with one resonant mode, Equations (2) and (3).

An additional illustration of the sensitivity to gain
variations is given in Figure 3, which shows the time
response of the closed-loop system when the loop gain
changes by ±20%. Compare with the nominal case in
Figure 1.

Figure 3. Responses of the closed loop system to a step
command and a step change in the torque when the process
gain changes by ±20%.

Tracking Error

Misalignment errors are a common source of tracking
errors. Such disturbances can be approximated by a
sinusoid. The sensitivity of the closed-loop system to
such errors can be modeled by the pulse transfer function

    Htrack(z) = 1 / (1 - H(z) G(z))                            (30)

With the chosen controller we find that disturbances of
60 Hz are attenuated by a factor of 32. This agrees well
with the simulation results, which showed a reduction from
5 μm to 0.2 μm.
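Equations (28) and (29) are easy to evaluate numerically. A small check, using ω1 = 2 kHz, ζ = 0.1, and the two crossover frequencies quoted above (only the ratio ω/ω1 enters, so frequencies can be kept in Hz):

```python
import math

# Extra magnitude and phase contributed by the resonant mode G1,
# equations (28) and (29).
w1 = 2000.0      # undamped natural frequency, Hz
zeta = 0.1       # relative damping

def mag(w):
    r = w / w1
    return 1.0 / math.sqrt((1 - r * r) ** 2 + (2 * zeta * r) ** 2)

def arg_deg(w):
    r = w / w1
    return -math.degrees(math.atan2(2 * zeta * r, 1 - r * r))

M = mag(1036.0)          # at the phase crossover: about 1.35
phase = arg_deg(460.0)   # at the gain crossover: about -2.8 degrees
```

These match the values M = 1.36 and 2.8° used in the margin estimate above, to the precision quoted in the text.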

4. DSP Implementation

Implementation of the controller using a DSP with
fixed-point calculations will now be discussed. The key
issues are scaling of coefficients and states. See
[Roberts and Mullis, 1987], [Hanselmann, 1987], [Texas
Instruments, 1986], [Texas Instruments, 1988a], [Texas
Instruments, 1988b], [Texas Instruments, 1989a], [Texas
Instruments, 1989b], [Texas Instruments, 1990a], and
[Texas Instruments, 1990b].

The controller derived can be described by the matrices

        [ 1   5·10^-5   1.25·10^-9 ]
    Φ = [ 0   1         5·10^-5    ]
        [ 0   0         1          ]

    Γ = ( 9.25·10^-8   3.7·10^-3   0 )^T

    C = ( 1   0   0 )                                          (31)

    K = ( 3.352917424019266·10^-1   1.100808656418762·10^3
          5.695461161564441·10^5 )^T

    L = ( 1.176909751519137·10^5   6.300506379182784·10^1
          1.351351351351351·10^-2 )

The elements of these matrices have numbers that are
widely spread. To accommodate this on a DSP with
fixed-point arithmetic it is necessary to scale the
numbers appropriately.

I/O-Scaling

The range of the output signal in tracking mode
corresponds to ±11.5 μm. The scaling is chosen so that
this corresponds to ±1 units in the DSP. The input scaling
factor s_y is therefore

    s_y = 11.5 μm

Since the dimensions of l1, l2, and l3 are [V/m], [Vs/m],
and [Vs²/m] respectively, it is advantageous to multiply L
by s_y rather than dividing C by s_y. The output must also
be scaled, since the D/A converter converts ±1 into ±5 V.
The matrix L is thus multiplied by the output scaling
factor

    s_u = 2 / (10 V) = 0.2 V^-1

The vector Γ is scaled in the same way as L. Hence the
scaled vectors Γ and L become

    Γ' = ( 4.021739130434783·10^-2
           1.608695652173913·10^3   0 )^T                      (32)

    L' = ( 2.706892428494014·10^-1   1.449116467212040·10^-4
           3.108108108108108·10^-8 )                           (33)

Coefficient Scaling

The coefficients of the system (31), (32), and (33) can
not be represented directly in the DSP. A similarity
transform x ⇔ Tc x is used to scale the coefficients. This
gives

    (Φ  Γ  C  K  L) ⇔
    (Tc Φ Tc^-1   Tc Γ   C Tc^-1   Tc K   L Tc^-1)             (34)

The elements of the matrices Φ, K, Γ, and L are
proportional to powers of h. It is therefore natural to
use a scaling matrix whose diagonal entries are
proportional to powers of h. The following scaling matrix
was obtained after some trial and error:

    Tc = diag( 1   …   … )                                     (35)

State-Vector Scaling

With the chosen scaling, all controller coefficients have
magnitudes less than one. It now remains to scale the
state vector. Simulations showed that overflow could occur
when the head is positioned at the edge of the track and
the disk controller is switched to track-following. The
scale factors s1, s2, and s3 were chosen from a simulation
of this case. It was found that x1 had to be scaled down
and that x2 could be scaled up. Scaling of x3 depends on
the maximum possible load disturbance. For a load
disturbance of 0.3 Nm it was not necessary to scale x3.
The following transformation was therefore chosen to scale
the state vector:

    Ts = diag( 1/2.8   1/0.3   1 )                             (36)

The following controller matrices were obtained after
scaling:

        [ 1   Φ12   Φ13 ]
    Φ = [ 0   1     Φ23 ]
        [ 0   0     1   ]

    Γ = ( 4.079845420409798·10^-2
          9.742508490121855·10^-1   0 )^T

    C = ( 9.857577226616642·10^-1   0   0 )                    (37)

    K = ( 3.401360544217687·10^-1   6.666666666666667·10^-1
          2.5·10^-2 )^T

    L = ( 2.668340115802361·10^-1   2.392799926898983·10^-1
          7.080843606269305·10^-1 )

where

    Φ12 = 8.375348965918672·10^-2
    Φ13 = 2.888874735967583·10^-2
    Φ23 = 6.898517895130376·10^-1

The system matrices are finally transformed to integers to
fit the 16-bit fractional (Q15) format of the DSP. The
transformation is done by multiplying the coefficients by
2^15 and rounding each coefficient to the nearest integer.
The matrices then become

        [ 1   2744   947   ]
    Φ = [ 0   1      22605 ]
        [ 0   0      1     ]

    Γ = ( 1337   31924   0 )^T

    C = ( 32301   0   0 )                                      (38)

    K = ( 11146   21845   819 )^T

    L = ( 8744   7841   23203 )

The largest roundoff error, 0.04%, occurs in Φ13. To find
how the poles of the controller are affected by the
coefficient rounding, the characteristic equation of the
controller was calculated. The largest pole deviation is
0.0013% from the design value.

5. The DSP Code

The control algorithm was implemented on the TMS320C25 by
using the Texas Instruments Software Development System.
The complete code is listed in Appendix A. The
organization of the code is straightforward. It is
composed of the following steps:

1. Perform A/D conversion.
2. Compute the state estimate.
3. Compute the new control signal.
4. Saturate the control signal.
5. Perform D/A conversion.
6. Update the equations for the state estimate.

Compare with Algorithm 1. Approximately 32% of the
computational power of the TMS320C25 was used when the
controller was running.

It can be estimated how the processor loading increases
with the order of the controller. Neglecting saturation
arithmetic and anti-windup calculations, the number of
multiply/accumulate instructions is proportional to
n² + 5n, where n is the order of the controller. A
6th-order controller would therefore exhaust the
computational power of the C25. The saturation arithmetic
routine must be called approximately 2n times. In our
case, the saturation arithmetic consumes almost 50% of the
total execution time. Therefore, if saturation arithmetic
can be avoided by using more careful scaling, one can
estimate that a 10th-order controller can be implemented
on the C25.

Figure 4. Impulse response of the controller (a) and the
error compared to an ideal implementation (b).
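The conversion from the scaled coefficients of (37) to the 16-bit fractional (Q15) integers of (38) can be reproduced directly:

```python
# Q15 conversion: multiply each scaled coefficient by 2^15 and
# round to the nearest integer, as used to go from (37) to (38).

def q15(x):
    return int(round(x * 32768))

coeffs = {
    "Phi12": 8.375348965918672e-2,   # -> 2744
    "Phi13": 2.888874735967583e-2,   # -> 947
    "Phi23": 6.898517895130376e-1,   # -> 22605
    "Gam1":  4.079845420409798e-2,   # -> 1337
    "Gam2":  9.742508490121855e-1,   # -> 31924
    "C1":    9.857577226616642e-1,   # -> 32301
}
q = {name: q15(val) for name, val in coeffs.items()}

# Relative rounding error of each coefficient; the largest
# (about 0.04%) is in Phi13, as stated in the text.
err = {name: abs(q[name] / 32768.0 - val) / val
       for name, val in coeffs.items()}
```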

6. Testing

The open-loop behavior of the controller was tested using
the development system. The impulse response of the
controller was generated and compared to the ideal impulse
response. Figure 4(a) shows the responses of the
controller to two impulses of magnitude 0.9 and -0.9.
Figure 4(b) shows the error between the ideal and actual
impulse response of the controller. The small error is due
to the roundoff in the controller. Notice that the
quantization step is approximately 3·10^-5.

The observer was tested separately. A control signal was
generated and the corresponding ideal response of the arm
was calculated. The input signal was piecewise constant
with jumps at t = 0, 0.001, 0.0018, and 0.0021. A load
disturbance that was unknown to the observer was added at
time t = 0.0025. All signals were scaled appropriately and
fed to the observer, whose response was recorded. Figure 5
shows the velocity estimate and its error. Figure 6 shows
the position estimate and its error. The error is very
small before time t = 0.0025, when the load disturbance
was introduced. The load disturbance does, however,
introduce significant errors in both the velocity and
position estimates. This is natural, because the observer
does not have information about this load disturbance. The
error will, however, decrease when the observer improves
its estimate of the disturbance, as is indicated in
Figure 6.

Although open-loop testing can never replace actual
closed-loop testing of the whole system, these results
indicate that the controller works properly.

Figure 5. Actual and estimated velocity (a) and estimation
error (b).

Figure 6. Actual and estimated position (a) and estimation
error (b).

Remarks on a Roundoff Algorithm

The first tests of the algorithm used a roundoff scheme
found in a programming example in [Texas Instruments,
1986]. This resulted in a large estimation error; see
Figure 7. The problem was investigated, since the error
was larger than estimates based on analysis of roundoff
errors. The reason is an error in the roundoff algorithm.
To reduce quantization errors the numbers are rounded,
rather than truncated, before they are stored as 16-bit
numbers. This rounding is done in software. To round a
positive number, a bit is added to the MSB of the lower
half of the 32-bit number before it is stored away. At
first sight it appears natural to subtract the bit from
the number to round a negative number. This was done in
the coding example [Texas Instruments, 1986]. This is not
correct


Figure 7. Position error with an incorrect roundoff
algorithm.

with the chosen number representation. The roundoff
algorithm gives -2 when applied to the number -1 because
of the computational scheme used in the DSP. If the upper
half of the number is complemented without considering the
lower half, the result will not be the same as if the
whole number is complemented. The correct code for the
roundoff is given in Appendix A.
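The effect can be demonstrated with plain integer arithmetic. The sketch below models the rounding of an accumulator value down to 16 bits as an add of half an LSB (2^14) followed by an arithmetic right shift by 15; this is a simplification of the SACH-based code in Appendix A, not a cycle-accurate model of the C25.

```python
# Correct rounding: ALWAYS add half an LSB (2^14) before the shift,
# for positive and negative numbers alike.
def round_correct(acc):
    return (acc + (1 << 14)) >> 15

# Incorrect variant from the old coding example: subtract the half
# LSB when the number is negative.
def round_incorrect(acc):
    half = 1 << 14
    return (acc + half) >> 15 if acc >= 0 else (acc - half) >> 15

minus_one = -1 << 15                       # the number -1 at this scale
assert round_correct(minus_one) == -1      # -1 rounds to -1
assert round_incorrect(minus_one) == -2    # the error described above
```

In two's complement, truncation by an arithmetic shift already rounds toward minus infinity, so adding the half LSB is the right correction for both signs; subtracting it for negative numbers rounds exact values such as -1 down to -2.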

7. Conclusions

This paper shows that it is straightforward to implement a
controller based on an observer and feedback from the
observed states using a DSP with fixed-point calculations.
Some effort is required to obtain proper scaling. The
coefficient scaling is quite straightforward and can be
automated. Scaling of the states is more difficult. It
requires that the ranges of the states are known. This can
be determined from simulation. Great care has to be
exercised to find the worst cases. The code for the disk
controller is much simpler than the code for the
PID controller discussed in [Astrom and Steingrimsson,
1990]. The reason is that the disk controller is designed
for a specific process while the PID controller is
designed as a general-purpose controller. The coefficient
ranges for the PID controller are therefore much wider.
This requires more complex scaling and saturation
arithmetic, which is a large part of the code [Astrom and
Steingrimsson, 1990].

References

Astrom, K. J., and H. Steingrimsson (1990): "Implementation of a PID controller on a DSP," Texas Instruments.
Astrom, K. J., and B. Wittenmark (1990): Computer Controlled Systems - Theory and Design, Second edition, Prentice-Hall, Englewood Cliffs, NJ.
Dote, Y. (1990): Servo Motor Control Using Digital Signal Processor, Prentice Hall, Texas Instruments.
Hanselmann, H. (1987): "Implementation of digital controllers - A survey," Automatica, 23.
Roberts, R. A., and C. T. Mullis (1987): Digital Signal Processing, Addison-Wesley.
Texas Instruments (1986): Digital Signal Processing Applications with the TMS320 Family - Theory, Algorithms, and Implementations, Digital Signal Processing, Semiconductor Group.
Texas Instruments (1988a): First-Generation TMS320 - User's Guide, TI Digital Signal Processing, Prentice Hall.
Texas Instruments (1988b): Second-Generation TMS320 - User's Guide, TI Digital Signal Processing, Prentice Hall.
Texas Instruments (1989a): TMS320C1x / TMS320C2x - User's Guide, Digital Signal Processor Products.
Texas Instruments (1989b): TMS320 Family Development Support - Reference Guide, Digital Signal Processor Products.
Texas Instruments (1990a): Digital Signal Processing - Applications with the TMS320 Family, Application book volume 3, Digital Signal Processor Products.
Texas Instruments (1990b): TMS320C3x - User's Guide, Digital Signal Processor Products.

Appendix A: Disk Controller for TMS320C25

; Disk Controller for the TMS320C25
; Based on a Rigid Body Model of the Arm
; Version 1.0
; Author: Hermann Steingrimsson
; Date: 3-31-1990
;
; RESERVE SPACE IN DATA MEMORY FOR CONSTANTS AND VARIABLES
DTbeg  .bss A12,1      ;The matrix A (or Phi)
       .bss A13,1
       .bss A23,1
       .bss B1,1       ;The vector B (or Gamma)
       .bss B2,1
       .bss C1,1       ;C1
       .bss K1,1       ;The vector K (in this case CK/2)
       .bss K2,1
       .bss K3,1
       .bss L1,1       ;The vector L
       .bss L2,1
       .bss L3,1
       .bss MAXNUM,1   ;Maximum number
       .bss MINNUM,1   ;Minimum number
       .bss UMAX,1     ;Saturation limits
       .bss UMIN,1
       .bss ONE,1      ;ONE=1
       .bss MODE,1
DTend  .bss CLOCK,1    ;End of parameters in data memory
       .bss XE1,1      ;State vector x(k+1|k)
       .bss XE2,1
       .bss XK1,1      ;Vector x(k|k)
       .bss XK2,1
       .bss XK3,1
       .bss YE,1       ;Estimate of ye
       .bss Y,1        ;Input
       .bss ERR,1
       .bss V,1        ;Control signal before saturation
       .bss U,1        ;Control signal after saturation U=SAT(V)
;Begin program memory
       .sect "IRUPTS"
       B    START      ;Branch to start of program
;Store parameters in program memory
       .data
Ptable .set $
       .word 2744,947,22605,1337,31924,32301,11146,21845,819,8744,7841,23203


       .word 32767,-32768,32766,-32766,1,1,1
Ptend  .set $-1
SCALE  .set 15
;Initialize
       .text
START  DINT            ;Disable interrupts
       NOP
       SOVM            ;Set overflow mode
       SSXM            ;Set sign-extension mode
       SPM  0          ;No shifting from P register
;Load coeff from prog. mem to data mem. Use BLKP
       LRLK ARO,DTbeg  ;ARO points to beginning of data block
       LARP ARO
       RPTK Ptend-Ptable ;Set up counter
       BLKP Ptable,*+  ;Move data
;=> Coeff loaded into data memory
;Initialize variables
       LDPK A12        ;Point to correct data page
       ZAC             ;Clear variables
       SACL XE1
       SACL XE2
       SACL XK3
       SACL YE
       SACL U

       OUT  MODE,PA4   ;Init analog board
       OUT  CLOCK,PA5
       LARP 0          ;Point to ARO
;Begin loop
WAIT   BIOZ GET        ;Wait for input
       B    WAIT
GET    IN   Y,PA0      ;Change WAIT to GET when ... are removed

       ZALH Y          ;Form ERR = y(k) - ye(k|k-1)
       SUBH YE
       SACH ERR
;Compute x(k|k) = x(k|k-1) + K*err


are removed

       LAC  XE1,SCALE  ;Calculate x1(k|k)
       LT   ERR
       MPY  K1
       APAC
       LRLK ARO,XK1
       CALL ROUOF

       LAC  XE2,SCALE  ;Calculate x2(k|k)
       LT   ERR
       MPY  K2
       APAC
       LRLK ARO,XK2
       CALL ROUOF

       LAC  XK3,SCALE  ;Calculate x3(k|k) (Estimate xe3 not needed)
       LT   ERR
       MPY  K3
       APAC
       LRLK ARO,XK3
       CALL ROUOF
;Calculate control signal u(k) = -Lx(k|k)
       ZAC
       LT   XK1
       MPY  L1
       LTS  XK2
       MPY  L2
       LTS  XK3
       MPY  L3
       SPAC
       LRLK ARO,V
       CALL ROUOF
;Saturation function (12 instr. cycles)
       ZALH V
       SUBH UMIN
       BLZ  LOWER1     ;Branch if v < umin
       ZALH V
       SUBH UMAX
       BLZ  SAME       ;Branch if v < umax
       ZALH UMAX       ;v >= umax
       SACH U          ;u = umax
       B    FIN

LOWER1 ZALH UMIN
       SACH U          ;u = umin
       NOP             ;Always same time
       NOP
       NOP
       NOP
       B    FIN
SAME   ZALH V
       SACH U          ;u = v
       NOP
       NOP
FIN    OUT  U,PA2      ;Output control signal

;Update the estimate xe(k+1|k) = Ax(k|k) + Bu(k)
;                    ye(k+1|k) = Cxe(k+1|k)
       LAC  XK1,SCALE  ;Calculate xe1
       LT   XK2
       MPY  A12
       LTA  XK3
       MPY  A13
       LTA  U
       MPY  B1
       APAC
       LRLK ARO,XE1
       CALL ROUOF

       LAC  XK2,SCALE  ;Calculate xe2
       LT   XK3
       MPY  A23
       LTA  U
       MPY  B2
       APAC
       LRLK ARO,XE2
       CALL ROUOF
;No need to update xe3 (xe3 = xk3)
       LT   XE1        ;Calculate ye
       MPY  C1

       PAC
       LRLK ARO,YE
       CALL ROUOF

       B    WAIT       ;Loop

;Rounding, overflow and shifting function (11 cycles)
ROUOF  BLZ  NEG        ;Check if number negative
       ADD  ONE,SCALE-1 ;Round
       SACH *,16-SCALE ;Store value
       SUB  MAXNUM,SCALE ;Subtract scaled max pos number
       BLEZ NOOV       ;If acc <= 0 then no overflow
       ZALS MAXNUM     ;else store max num
       SACL *
       RET

NEG    ADD  ONE,SCALE-1 ;Round
       SACH *,16-SCALE ;Store value
       SUB  MINNUM,SCALE ;Subtract scaled min neg number
       BGEZ NOOV       ;If acc >= 0 then no overflow
       ZALS MINNUM     ;else store min neg number
       SACL *
       RET
NOOV   NOP
       NOP
       RET

       .end



PART IV
Applications of Digital Controllers with the TMS320


Digital Control Applications with the TMS320 . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . .. . . . .. 257

Computer Peripherals
DSP Helps Keep Disk Drives on Track. . . . . . . . . . . . . . . . .. . . . . . . . . . . . . .. . . . . . . . . . . . . .. 259
(James Corliss and Richard Neubert)
LQG - Control of a Highly Resonant Disk Drive Head Positioning Actuator .............. 265
(Herbert Hanselmann and Andreas Engelke)
High Bandwidth Control of the Head Positioning Mechanism in a Winchester Disc Drive ... 271
(Herbert Hanselmann and Wolfgang Moritz)
Fast Access Control of the Head Positioning Using a Digital Signal Processor ............. 277
(S. Hasegawa, Y. Mizoshita, T. Ueno, and K. Takaishi)

Motion Control and Robotics
Implementation of a MRAC for a Two Axis Direct Drive Robot Manipulator
Using a Digital Signal Processor ............................................... 287
(G. Anwar, R. Horowitz, and M. Tomizuka)
Implementation of a Self-Tuning Controller Using Digital Signal Processor Chips ......... 291
(K.H. Gurubasavaraj)
Motion Controller Employs DSP Technology ........................................ 297
(Robert van der Kruk and John Scannell)

Power Electronics
Using DSPs in AC Induction Motor Drives .......................................... 303
(Dr. S. Meshkat and Mr. I. Ahmed)
Microprocessor-Controlled AC-Servo Drives with Synchronous or Induction Motors:
Which is Preferable? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. 307
(R. Lessmeier, W. Schumacher, and W. Leonhard)
A Microcomputer-Based Control and Simulation of an Advanced IPM Synchronous
Machine Drive System for Electric Vehicle Propulsion . . . . . . . . . . . . . . . . . . . . . . . . . . . .. 315
(Bimal K. Bose and Paul M. Szczesny)
DSP-Based Adaptive Control of a Brushless Motor ................................... 329
(Nobuyuki Matsui and Hironori Ohashi)

High Precision Torque Control of Reluctance Motors ................................. 335
(Nobuyuki Matsui, Norihiko Akao, and Tomoo Wakino)
High Resolution Position Control Under 1 Sec. of an Induction Motor with
Full Digitized Methods ....................................................... 341
(Isao Takahashi and Makoto Iwata)
A TMS32010 Based Near Optimized Pulse Width Modulated Waveform Generator. . . . . . .. 349
(R.J. Chance and J.A. Taufiq)
Design and Implementation of an Extended Kalman Filter for the State Estimation
of a Permanent Magnet Synchronous Motor ................ . . . . . . . . . . . . . . . . . . . .. 355
(Rached Dhaouadi, Ned Mohan, and Lars Norum)

Automotive
Trends of Digital Signal Processing in Automotive .................................... 363
(Kun-Shan Lin)
Application of the Digital Signal Processor to an Automotive Control System ............. 375
(D. Williams and S. Oxley)
Dual-Processor Controller with Vehicle Suspension Applications. . . . . . . . . . . . . . . . . . . . . . .. 383
(Kamal N. Majeed)
An Advanced Racing Ignition System .............................................. 389
(T. Mears and S. Oxley)
Active Reduction of Low-Frequency Tire Impact Noise Using Digital Feedback Control ...... 395
(Mark H. Costin and Donald R. Elzinga)

Specialized Applications
Implementation of a Tracking Kalman Filter on a Digital Signal Processor ................ 399
(Jimfron Tan and Nicholas Kyriakopoulos)

A Stand-Alone Digital Protective Relay for Power Transformers . . . . . . . . . . . . . . . . . . . . . . .. 409
(Ivi Hermanto, Y.V.V.S. Murty, and M.A. Rahman)
A Real-Time Digital Simulation of Synchronous Machines: Stability Considerations and
Implementation .. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. 421
(Jonathan Pratt and Sheldon Gruber)
Real-Time Dynamic Control of an Industrial Manipulator Using a Neural-Network-Based
Learning Controller ......................................................... 433
(W. Thomas Miller, III, Robert P. Hewes, Filson H. Glanz, and L. Gordon Kraft, III)

Digital Control Applications with the TMS320
More designers are using DSPs to solve problems that commonly occur in control applications. DSPs now
make practical some applications that were previously difficult to implement or were not cost-effective.
As the cost of DSPs decreases, these processors are rapidly replacing microcontrollers and analog components in many control applications.
Some applications in which DSPs are already cost-effective are servo control for computer peripherals,
power control in uninterruptible power supply (UPS) and DC power supply systems, motion control for
numerical control (CNC) systems and robotics, suspension/engine/brake control for automotive systems,
and vector control for AC and other brushless motors. Other applications are missile guidance and "smart"
weapon control for military systems.
This introduction presents a few areas of DSP-controlled applications. Following it, papers discuss topics
pertaining to those and other areas. Most of these documented applications have evolved into very successful commercial products.
Computer Peripherals
Many computer peripherals use DSPs for applications such as read/write head control in Winchester disk
drives, tape control in tape drives, pen control in plotters, and optical beam positioning and focusing in optical disks.
Disk Drives: Disk drives were early to adopt DSPs. DSPs are used for servo control of the actuator driving
the read/write head. Disk drives employ a voice coil motor with high bandwidth. Data is read from
the disk at a very high rate; sampling rates of up to 50 kHz are sometimes used. In addition to implementing
the compensator, DSPs can implement notch filters to attenuate undesirable frequencies that cause mechanical resonances or vibrations.
Tape Drives: In tape drives, DSPs are used to control the tape mechanism. A tape drive has two servo loops:
one controls the tape speed, and the other controls the tension placed on the tape. Position feedback is obtained from an optical encoder, and tension information is fed from a tension sensor. DSPs are also used
to filter undesirable frequencies that cause mechanical resonances.
Power Electronics
DSPs can be used in multiple applications in power electronics. These applications include AC servo drives,
inverter control, robotics, and motion control.
AC Servo Drives: In AC servo drives, DSPs are used for vector control of AC motors. AC drives are less
expensive and easier to maintain than DC drives. However, AC drives have complex control structures as
a result of the cross-coupling of three-phase currents. Vector rotation techniques are used to transform three-phase axes into rotating two-phase "d-q" axes. This two-phase rotation technique greatly simplifies the
analysis, making it equivalent to analyzing field-wound DC motors.
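As an illustration of the d-q idea (a generic sketch, not code from any of the papers that follow), a minimal amplitude-invariant Clarke transform followed by a Park rotation shows how a balanced three-phase set becomes a constant two-phase vector in the rotating frame:

```python
import math

def abc_to_dq(ia, ib, ic, theta):
    """Amplitude-invariant Clarke transform followed by a Park rotation."""
    # Clarke: three-phase -> stationary alpha/beta two-phase frame.
    alpha = (2.0 / 3.0) * (ia - 0.5 * ib - 0.5 * ic)
    beta = (2.0 / 3.0) * (math.sqrt(3) / 2) * (ib - ic)
    # Park: rotate by the electrical angle theta into the d-q frame.
    d = alpha * math.cos(theta) + beta * math.sin(theta)
    q = -alpha * math.sin(theta) + beta * math.cos(theta)
    return d, q

# A balanced sinusoidal three-phase current maps to a constant d-q
# vector (d = amplitude, q = 0) for every electrical angle.
I = 10.0
for th in (0.0, 0.7, 2.1):
    ia = I * math.cos(th)
    ib = I * math.cos(th - 2 * math.pi / 3)
    ic = I * math.cos(th + 2 * math.pi / 3)
    d, q = abc_to_dq(ia, ib, ic, th)
```

The constant d-q representation is what lets the AC machine be regulated like a field-wound DC machine, as described above.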
UPSs and Power Converters: In uninterruptible power supplies (UPSs) and power converters, DSPs are
used for PWM generation along with power factor correction and harmonic elimination. Advanced mathematical techniques can be used to control the firing angles of the inverter, creating low-harmonic PWM with
unity power factor.
Robotics and Motion Control: DSPs are used in large-scale applications in robotics and other axis control
applications. DSPs support high-precision control along with implementation of advanced techniques like
state estimators and adaptive control. A single controller can handle speed/position control as well as current control. Time-varying loads can be handled with adaptive control techniques. Adaptive control techniques can also be used to create universal controllers that work with different motors. In addition
to implementing controllers, DSPs implement notch filters to attenuate undesirable frequencies that cause
resonances or vibrations.
Automotive
DSPs can be used for many automotive applications such as active suspension, anti-skid braking, engine
and transmission control, and noise cancellation.
Active Suspension: Active suspension systems use hydraulic actuators. DSPs can take into consideration
body dynamics, such as pitch, heave, and roll, and then use this information to control four actuators independently and dynamically to counteract external forces and the car's attitude changes.
Anti-Skid Braking: In anti-skid braking systems, DSPs can read the wheel speed from sensors, calculate
the skid distance, and control the pressure in the wheel's brake cylinder. Traction-regulating systems can
be added to control the vehicle in adverse driving conditions, to prevent wheels from locking or spinning,
and to increase general vehicular stability, steerability, and drivability.
Engine Control: In engine control applications, DSPs can be used with in-cylinder pressure sensors to perform engine pressure waveform analysis. This information can be applied to determine the best spark timing, most effective firing angles, and optimal air/fuel ratios. The closed-loop engine control scheme can
tolerate external disturbances, aging, and wear, while maintaining optimum engine performance and fuel
efficiency.


DSP helps keep
disk drives on track
Using a sophisticated DSP chip to implement adaptive embedded
servo control avoids the head-positioning errors that can plague
high-density Winchester disk drives.
Conventional design approaches are inadequate to
meet the demand for ever-higher track densities on Winchester disk drives. When densities exceed 1,200 tracks/in., drives relying on dedicated
servo feedback for positioning accuracy become
unpredictable parts of computer systems.
unpredictable parts of computer systems. Implementing embedded servo control with adaptive positioning features, however, allows the design and
manufacture of adequately margined disk drives
that provide a solid platform for higher densities.
Since designers can't predict exact performance,
a disk drive with adequate margin requires reserve
capability in all areas. Materials or components,
for example, may not be within specifications, and
environmental conditions may also exceed specifications or combine in unpredictable ways. For instance, electrical noise may combine with temperature changes in a peculiar way that even an exhaustive testing schedule could miss. In addition,
materials and components change with time.
The search for ample safety margins led Vermont
Research to use the 32020 digital signal processing
(DSP) chip, from Texas Instruments (Dallas, TX),
to incorporate adaptive embedded servo control
into its Model 7030 hard disk drive. Digital signal
processing of feedback signals offers immense flexibility for designers of many products, from disk
drives to numerically controlled machine tools to aircraft control systems. Exploited to its fullest, the power of DSP can be used to expand reliability margins in numerous motion control applications.

James M. Corliss and Richard Neubert
Corliss is principal engineer and Neubert is a design engineer at Vermont Research (North Springfield, VT).
The dedicated servo approach
The most common method of locating a track on a
Winchester disk drive has been the dedicated servo
approach. The designer reserves one surface in the
stack of platters where servo control information is
written. If the head on that surface is correctly located, it's assumed that all other heads on the carriage are also on their tracks.
Sometimes dedicated servo drives work well, but
higher track densities can make them hypersensitive
to temperature changes, especially when combined
with shock or vibration. The drives develop high
error rates and may not retrieve data at all if conditions have changed since the recording. The problem is that positioning errors that are manageable at lower track densities cause significant difficulty at higher densities because the errors represent a larger percentage of the narrower tracks. If the heads aren't properly
positioned, the analog signal-to-noise ratio plummets on readback, causing skyrocketing error rates
and, sometimes, an unusable drive.
Embedded servo control provides feedback in
the form of bursts of prerecorded positioning information embedded in data on the track that's being
read. Adaptive positioning actively compensates
for both external disturbances such as shock and
vibration and internal changes such as the aging of
shock mounts and creep of materials. Of course,
the effectiveness of embedded servo control is

Reprinted with permission from the June 15, 1988 issue of COMPUTER DESIGN Magazine, copyright 1988, PennWell Publishing Company, Advanced Technology Group.

The shock sensitivity of a drive with embedded servo is a function of the amount of time between sampling. At a 9.6-kHz sampling rate, a 2-G shock induces only a 4-μin. off-track error; at 1.2 kHz, it's 256 μin., enough to accidentally destroy data on an adjacent track.
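The caption's figures follow from constant-acceleration kinematics: between servo samples the loop is effectively blind, so a sustained shock of acceleration a lets the head drift roughly x = at²/2 in one sample period. A quick sketch of that relationship (our own illustration, not the article's exact worst-case model):

```python
# Off-track drift accumulated in one servo sample period under a
# constant-G shock, assuming the head drifts freely between samples:
# x = (1/2) * a * t^2.

G_IN_PER_S2 = 386.09  # one gravity, in inches/s^2

def offtrack_error_uin(shock_g, sample_rate_hz):
    """Worst-case drift (micro-inches) accumulated in one sample period."""
    t = 1.0 / sample_rate_hz
    x_in = 0.5 * shock_g * G_IN_PER_S2 * t * t
    return x_in * 1e6  # inches -> micro-inches

print(round(offtrack_error_uin(2.0, 9600), 1))  # 2-G shock at 9.6 kHz: ~4 uin.
print(round(offtrack_error_uin(2.0, 1200), 1))  # same shock at 1.2 kHz: ~268 uin.
```

Halving the sample period quarters the drift, which is why the 8:1 rate difference in the caption changes the error by a factor of 64.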

limited by the frequency at which positioning feedback is provided. If the sampling frequency is too low, the track-following errors that accumulate between samples will be larger than the errors a dedicated servo approach would have allowed. Inadequate feedback also makes positioning performance suffer.
Adaptive embedded servo positioning, as implemented in the Model 7030 disk drive, wasn't
practical before the advent of sophisticated DSP
chips, which can analyze rapid-fire bursts of servo
information and make quick position corrections.
Implementing adaptive positioning without sacrificing access time or user flexibility required a new
level of servo information analysis that relies heavily on digital signal processing. Pre-DSP electronics
wouldn't have been practical for the adaptive embedded servo approach at a satisfactory sampling
rate. The cost and real estate requirements of discrete logic would have been prohibitive.
That's not to say that using an advanced DSP
chip like the TI 32020 for multiple signal processing
functions is completely straightforward. Since the
functions can't be truly simultaneous, priorities
must be carefully established. Also, there are some disadvantages to using adaptive embedded servo control. One is a recording overhead of 15 percent of a drive's capacity, compared to 10 percent for dedicated servo and 7 or 8 percent for embedded servo with lower sampling rates. Fortunately, though, this overhead cost is more than offset by the ability to reliably use higher track densities.


Living within a budget
Like every physical device, a disk drive has tolerances. Absolute perfection in head positioning isn't required for reliable drive performance, but there's a set limit on how much deviation is acceptable in each case. Disk drive designers commonly use a "tracking error budget" when analyzing all possible sources of track-following deviations. If the drive can't achieve an acceptable bit error rate unless the heads are, say, within 60 μin. of perfect positioning, then 60 μin. is the tracking error budget.
Suppose, for example, that differential thermal expansion may cause as much as 35 μin. of track-following error, despite the servo system's best compensation efforts. If shock and vibration contribute no more than 10 μin. and all other sources of error combined will be no more than 10 μin., the total possible error is 55 μin., and the 60-μin. error budget won't be exceeded.
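The budget bookkeeping above reduces to a simple worst-case sum; a sketch using the article's example figures:

```python
# Tracking error budget check: the worst-case contributions must not
# exceed the budget.  Values are the article's example figures (micro-inches).
contributions_uin = {
    "differential thermal expansion": 35,
    "shock and vibration": 10,
    "all other sources": 10,
}

BUDGET_UIN = 60

total = sum(contributions_uin.values())
margin = BUDGET_UIN - total
print(f"total worst-case error: {total} uin., margin: {margin} uin.")
# total worst-case error: 55 uin., margin: 5 uin.
```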
As track widths diminish, however, the error budget shrinks disproportionately. At 1,200 tracks/in., for example, a 60-μin. error budget is 10 percent of the track width. At a 1,500-track/in. density, though, with the accompanying decrease in absolute signal strength from the head, the error budget may have to shrink to 8 percent of the track width. In this case, the error budget becomes a mere 38 μin.
A matter of degrees
Temperature changes caused by operating or environmental conditions are a common source of trouble for reliable positioning. Differential thermal expansion among the various materials in head support arms, disks, carriages, spindles, bearings and housings in the 5-in.-long chain of parts between the head and the disk is typically 5 μin./in./°C. At 1,200 tracks/in. on an 8-in. drive, that can mean that a track written when a drive is cold can shift half a track or more when the drive is warm. A mere change of 2.5°C can consume an entire 60-μin. error budget.
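That claim checks out numerically (a sketch using the article's figures of 5 μin./in./°C over the 5-in. chain of parts):

```python
# Differential thermal growth across the head-to-disk chain of parts.
COEFF_UIN_PER_IN_PER_C = 5.0   # differential expansion, uin./in./degC
CHAIN_LENGTH_IN = 5.0          # head-to-disk chain of parts, inches
BUDGET_UIN = 60.0

def thermal_shift_uin(delta_t_c):
    """Head-to-track shift (micro-inches) for a temperature change."""
    return COEFF_UIN_PER_IN_PER_C * CHAIN_LENGTH_IN * delta_t_c

shift = thermal_shift_uin(2.5)
print(shift)  # 62.5 uin.: a 2.5 degC change alone exceeds the 60-uin. budget
```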
Careful attention to air circulation in the drive
can minimize temperature differences within it but
can't eliminate them. While the temperature is
changing, parts within the drive will expand or contract differently, even if they subsequently stabilize
at a new temperature. A drive depending on dedicated servo information is blind to head shift during temperature changes. Thus, even though the
servo head is positioned properly, the data head
may not be. The drive will compensate for the
dimensional changes affecting the reference head,
but it can't compensate for the fact that the parts
that locate the data heads are changing in a different way. In practice, temperature sensitivity
means that a disk drive may need a warm-up period
before it works reliably. It also may mean that information recorded last week is unavailable this
week because of changes in the room temperature.
With adaptive embedded servo control, however,

temperature sensitivity ceases to be an issue. The
drive needs no warmup, data written cold can be
read hot and vice versa, and changes in room temperature won't affect performance. System builders can ship software on the disk and be sure the
disk will boot up. They'll see dramatic reductions in
the number of dead-on-arrival drives and systems
and won't have to carry a large inventory to compensate for failures.
Other sources of head-track misalignment that
can cause positioning difficulty include creep and
stress relief in materials, bearing runout, shock and
vibration, head stack tilt, disk slip, and bending or
twisting of the main frame chassis. All of these phenomena may affect one disk in the stack differently
than the others, creating track-following errors for
data heads even if the servo head is on track.

Coping with internal variables
One significant internal variable is head width,
which typically varies ± 10 percent and, thus, affects positioning accuracy. In the Model 7030 drive,
head widths are measured by the DSP hardware
during the factory configuration and test process
and stored in nonvolatile RAM for use during operation. Other component characteristics, such as
the actuator motor's magnet profile, are also programmed in firmware when the disk is built. One of

,TraCking Error Components
-1

Cause

Range
("in.)

-2
-3

50
w

10

5
5
5
5

~

a:
0
a:
a:
w
t::
CD

-,
-5

-6
-7
NORMAL TRACKING ERROR (9):
• TEMPERATURE CHANGES
• NOISE
• SHOCK AND VIBRIITION

-8

......

-9
-10

5

5

-150

TRACKING ERROR
(b)
(a)

(1)

(2)
(3)
(4)
(5)

When a typical disk drive's tracking error budget from all sources (a)
exceeds 100 !'in., bit error rates begin to rise dramatically (b). A 60 "in.
budget provides a greater margin for safety.

(6)

(7)

(8)

261


Embedded servo scheme guarantees accurate positioning

In Vermont Research's recently introduced 648-Mbyte, 8-in. Winchester disk drive (Model 7030), factory-recorded servo information bursts are embedded in data every 128 bytes, providing a position-sampling rate of 9.6 kHz. This rate is, in fact, the practical equivalent of continuous feedback for track following and ensures quick, accurate positioning. Even a 1-G shock can move the heads only 3 μin. between samples, which isn't enough to perceptibly affect the data signal integrity.

The servo zone is divided into multiple parts. Before it enters a data zone, a head encounters a short preamble and a sector index mark, then a Gray-coded track address mark, and finally a pair of servo marks, offset in time, with one lying inside the track (toward the disk spindle) and one outside. The servo bursts are displaced in time so they can be distinguished. If the head is centered on the data track, the servo signals have the same amplitude; if not, one is larger than the other. Small variations in the timing of the servo bursts have no effect on the measurement as long as they occur within an allotted time window.

Each time the drive is started up, a self-test program is loaded into a Texas Instruments 32020 digital signal processor. The program verifies its own operation, the external data bus and the interrupt structure, and calibrates the analog-to-digital and digital-to-analog conversion circuitry and the signal path for positioning signals. It reads signal A into both channel A and B, and does the same with signal B, to measure the dc offsets and the gain ratio between the two channels. This guards against the possibility that the gains of the two position-indicator signal converters may have drifted apart.

To determine signal amplitude, the peak-to-peak value of each set of dipulses is averaged, which removes some high-frequency noise. The amplitude value of each position signal is digitized, and the DSP applies the measured values of dc offset and gain ratio to make any necessary corrections. The DSP then computes the difference of the two signals and divides the result by their sum to obtain a raw amplitude-compensated ratio that indicates the degree by which the head is off-track. This ratio is numerically filtered in a finite impulse response filter to remove further high-frequency noise.

The processed signal is now a position indicator. The measured rate of positional change also provides velocity and acceleration information, which the DSP uses to compute the amount of current to supply to the linear motor powering the head actuator.

[Figure: track format. Servo zone (8 bytes) with Gray-coded track address (8 bytes), data zone (approx. 128 bytes), write-to-read recovery zone, preamble (1 byte) and sector index (1 byte), shown relative to the track centers.]
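The sidebar's position computation can be sketched in a few lines. This is a hypothetical illustration, not the Model 7030 firmware: the calibration constants and FIR taps below are invented for the example.

```python
# Sketch of the embedded-servo position computation: correct each
# servo-burst amplitude for calibrated dc offset and channel-gain ratio,
# form the amplitude-compensated ratio (A - B)/(A + B), then smooth it
# with a short FIR filter.

class PositionDemodulator:
    def __init__(self, offset_a, offset_b, gain_b_over_a, fir_taps):
        self.offset_a = offset_a            # measured dc offset, channel A
        self.offset_b = offset_b            # measured dc offset, channel B
        self.gain_b_over_a = gain_b_over_a  # measured channel gain ratio
        self.taps = fir_taps
        self.history = [0.0] * len(fir_taps)

    def raw_ratio(self, amp_a, amp_b):
        """Off-track indicator in [-1, 1]; 0 means on track centre."""
        a = amp_a - self.offset_a
        b = (amp_b - self.offset_b) / self.gain_b_over_a
        return (a - b) / (a + b)

    def update(self, amp_a, amp_b):
        """One servo sample: return the FIR-smoothed position indicator."""
        self.history = [self.raw_ratio(amp_a, amp_b)] + self.history[:-1]
        return sum(t * x for t, x in zip(self.taps, self.history))

demod = PositionDemodulator(offset_a=0.01, offset_b=-0.02,
                            gain_b_over_a=1.05,
                            fir_taps=[0.25, 0.25, 0.25, 0.25])
for _ in range(4):                 # head on track centre: corrected A == B
    pos = demod.update(1.01, 1.03)
print(pos)                         # ~0.0 (on track)
```

Dividing the difference by the sum makes the indicator independent of absolute signal amplitude, which is why head-width and media variations don't bias it.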

the major strengths of adaptive positioning, however, is its ability to dynamically compensate for changing variables.
The force of the flex circuits connecting the heads to the drive electronics tends to move heads off-track. Mechanically, the flex circuit is a spring, so its force varies with the track address; during operation, the DSP computes the offsetting actuator current for each track address. As the flex circuit ages, its spring constant changes, but the drive adapts to the change with each startup. The DSP chip measures the force constant of the linear head actuator motor on startup by applying a pulse of current and measuring the resulting motion. If necessary, the drive can be rezeroed during operation to adjust for a temperature-changed force constant, or any other parameter.
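The startup force-constant measurement can be illustrated with the pulse-and-measure identity (a hypothetical sketch: the article gives no numbers, so the mass, pulse amplitude and timing below are invented):

```python
# Estimate the actuator's force constant Kf (newtons per ampere) from a
# current pulse and the motion it produces, as the DSP does at startup.
# A pulse of amplitude i for duration t on a carriage of mass m gives
# displacement x = (1/2)*(Kf*i/m)*t^2, so Kf = 2*m*x / (i*t^2).
# All numbers below are invented for illustration.

def force_constant(mass_kg, current_a, pulse_s, displacement_m):
    """Recover Kf from the measured displacement of a test pulse."""
    return 2.0 * mass_kg * displacement_m / (current_a * pulse_s ** 2)

m, i, t = 0.1, 0.5, 0.005           # 100-g carriage, 0.5-A, 5-ms pulse
true_kf = 8.0                       # N/A, pretend ground truth
x = 0.5 * (true_kf * i / m) * t**2  # motion such a pulse would produce
print(force_constant(m, i, t, x))   # ~8.0 N/A, recovering the constant
```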
A more complex adaptive feature is compensation for movement of the head-disk assembly on its shock mounts. The high forces needed to accelerate the head carriage displace the entire head-disk assembly on its shock mounts, typically by 0.020 in., the equivalent of 20 tracks or more. The displacement becomes a damped oscillation in the 20- to 40-Hz range after the seek is completed.
Without compensation for the shock-mount oscillation, the drive's servo loop could follow about 99.7 percent of the displacement induced by the mounts, but that still leaves a 0.3 percent uncompensated displacement on the first cycle of the damped oscillation, which represents 10 percent of a track width, or 60 μin., the maximum that can be tolerated from all sources of error combined. Even after the first cycle, uncompensated oscillation would consume at least half of the drive's error allowance. That error represents margin that could hinder quick actuator settling time and, thus, adversely affect overall seek performance.
In the Model 7030, a mathematical model of the
response of the shock-mounted assembly, which includes factors for frequency, amplitude and damping, is stored in the firmware of the 32020 DSP,
which uses it to predict the damped oscillation and
apply the inverse actuator current to cancel it. The
DSP continually updates the parameters of the
model, automatically adjusting for changes in the
elastomer of the shock mounts and in other materials as they age or change temperature. Once per
minute, the updated model is pulled into nonvolatile RAM, along with the updated flex circuit
spring constant, as an accurate starting point in
subsequent startups. A similar method is used to
compensate for resonance of the actuator assembly
after a seek, a problem that introduces some degree
of track-following error in all disk drives.
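That predict-and-cancel scheme can be sketched as a damped sinusoid whose inverse is fed to the actuator. The parameters below are invented (the article states only the 20- to 40-Hz range); in the real drive they are continually re-estimated and saved to nonvolatile RAM.

```python
import math

# Feedforward cancellation of shock-mount oscillation: after a seek the
# assembly rings as a damped sinusoid; the servo predicts the ringing and
# applies an opposing actuator current.  Frequency, damping and amplitude
# here are fixed, illustrative values.

def mount_displacement(t, amp=0.0001, freq_hz=30.0, zeta=0.05):
    """Predicted damped oscillation (inches) t seconds after seek end."""
    wn = 2.0 * math.pi * freq_hz
    wd = wn * math.sqrt(1.0 - zeta ** 2)       # damped natural frequency
    return amp * math.exp(-zeta * wn * t) * math.cos(wd * t)

def feedforward_current(t, amps_per_inch=50.0):
    """Actuator current that opposes the predicted displacement."""
    return -amps_per_inch * mount_displacement(t)

# If the stored model is, say, 3 percent off in amplitude, only that
# residual fraction of the ringing reaches the track-following loop.
t = 0.01
residual = mount_displacement(t) * 0.03
print(f"predicted: {mount_displacement(t):.2e} in., residual: {residual:.2e} in.")
```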

In addition to improving performance, the inherent flexibility of adaptive embedded servo control lets a disk drive design take advantage of ongoing improvements in heads and disk coatings
without requiring radical redesign. An additional
benefit of insensitivity to component variations is
that no adjustments are needed when boards are
changed in the field, which saves time and requires
less expertise from field service technicians.
The advanced DSP technology used for adaptive
positioning also provides for sophisticated self-diagnosis and monitoring of environmental and
power supply conditions. This capability can save
hours of field service time by making it possible to
pinpoint temperatures or voltages outside of specs
as the source of difficulties that otherwise would
cause a wild goose chase.

LQG-Control of a Highly Resonant Disk Drive Head Positioning Actuator

HERBERT HANSELMANN, MEMBER, IEEE, AND ANDREAS ENGELKE

Abstract-A fast fine-positioning controller has been designed for a rotary-actuator-type magnetic storage disk drive. The controller was designed using the lqg (linear quadratic gaussian) methodology and has been implemented on a digital signal processor. It is shown that lqg design is a viable approach, and that various problems associated with the structural resonances of the actuator can be solved.
Keywords-Magnetic disk storage, position control, microprocessor control.

I. INTRODUCTION

MODERN DISK DRIVES use fast voice coil actuators for
positioning magnetic heads on desired tracks and
keeping them on track against various disturbances using
closed-loop control. Two types of actuators are predominant
in state-of-the-art drives: rotary and linear actuators, both
driven by a current passing through a coil in a strong magnetic
field.
In high-performance drives the head position is measured
from a dedicated servo platter. Measurement electronics
supply a head/track misalignment error voltage which is
proportional to this error within track width. Current flowing
through the coil generates torque or force so there must be
closed-loop control.
We investigated fine-positioning control in an industrial
prototype 8-in drive using a rotary actuator, as shown in Fig.
1. As described in some detail in [I], the studies were carried
out on an experimental version of the drive with fixed diskspindle. With an operating drive the investigations would have
been hindered because of the required clean-room conditions.
The position error measurement was achiev(,d through an
optical sensing device capable of measuring in the real position
range of an operating drive (useful track width + - 9 /tm) with
excellent resolution.
In [I] results of modal structural analysis as well as of
control using a classical approach have been presented. The
controller was of double PD-type with 3 notch-filters. It had
finally been extended by a synthetic disturbance feedforward
system.
It is the purpose of this paper to report on our effort to design and implement an appropriate controller using the lqg/ltr methodology. The plant is of the SISO (single-input single-output) type because only position error measurement is
Manuscript received March 27, 1987.
H. Hanselmann was with the Department of Automatic Control in Mechanical Engineering, University of Paderborn, Paderborn, W. Germany. He is now with dSPACE Digital Signal Processing and Control Engineering GmbH, Paderborn, W. Germany.
A. Engelke is with the Department of Automatic Control in Mechanical Engineering, University of Paderborn, Paderborn, W. Germany.
IEEE Log Number 8718148.

Fig. 1. Prototype disk drive.

available. Thus designing a controller might seem to be fairly
simple at first glance, but the strong structural resonance
effects posed some problems, and the experience with the
approaches taken might be of interest for others working on
the control of mechanical systems.
II. PLANT MODEL
A simple mathematical model would be a double integrator
(torque to position). But the control bandwidth desired is so
high that structural mechanics effects can by no means be
neglected. Figure 2 shows the measured frequency response from input current to position in the 1- to 10-kHz frequency
range. There are many resonances and notches, and zoomed
analysis would show even more.
The second curve in Fig. 2 shows the frequency response as
computed from a 30th order input-output (black-box) model,
which has been formulated in state-space form. This model has basically been formed from resonance frequency, damping,
and residue data obtained with the curve fitting facility of the
structural dynamics analyzer we used. Additional tedious
manipulations were however necessary in order to improve the
model in both phase and amplitude response. The model is
fairly good below 2 kHz and above 4 kHz, but less so between
these frequencies. In particular the two deep notches between
3 and 4 kHz are not well represented in the model, and
correspondingly there is considerable phase mismatch. The
classical (notch-filter based) controller from [1], which was designed with a view to amplitude stabilization, was not sensitive to this model mismatch, in contrast to the lqg controller, as shown below. Certainly we should do some work on improving the model by better computerized model matching methods.

© 1988 IEEE. Reprinted, with permission, from IEEE Transactions on Industrial Electronics, Vol. 35, No. 1, Feb. 1988.


[Fig. 2. Frequency response: truth model (2)/measurement (1).]

III. CONTROL DESIGN

Fine-positioning control may seem to be very easy because the plant is SISO and is virtually linear. Linearity is only lost when the current saturates. It turns out that the current limit is not reached with fine-positioning regulating control because the current range is designed for fast large-distance positioning at high torque. The plant is SISO because for economic reasons the only measurement presently available for control is the track position error itself.

From a design viewpoint, the difficulties of control arise mainly from the structural resonances ranging up to 10 kHz. A classical approach to coping with resonances is the well-known introduction of properly designed notch-filters. They compensate for resonance peaks to such an extent that these peaks drop significantly below 0 dB. In this case insensitivity to phase behavior of the plant is gained (amplitude stabilization). The controller from [1], which had been designed that way, yielded high bandwidth and performed well in the experiment. The design procedure was however not satisfactorily systematic. Fine-tuning of the filter parameters took a lot of time because we always had to look carefully at the total phase introduced by the filters around the projected crossover frequency. Tuning the controller was easier when we used a numerical parameter optimization program as reported in [1], but amplitude stabilization was lost, and design was still time-consuming.

This experience provided the motivation for trying the lqg approach, from which we hoped to get useful controllers in a systematic way with little effort. Some obstacles had however been anticipated; namely

1) a reduced-order design model was necessary, in contrast to the classical design which always worked with the full-order 'truth model',
2) insensitivity to phase behavior is not guaranteed, so trouble with the phase mismatch of the model, as well as with phase deviations in the real plant (which have been observed and are due for instance to temperature-dependent stress), could be expected,
3) due to the 20 dB/decade slope of the open loop when true lqg state feedback is employed, problems with the high-frequency resonance peaks of the 'truth model' and the real plant were thought to be likely.

A. Design Model

266

The order of the design model determines directly the order
of the final dynamic compensator/controller which consists of
an observer or a Kalman filter with state feedback. In order to
be comparable to the classical controller and to keep the
control processor's workload reasonably low, a design model
of 8th order was derived. Figure 3 shows the frequency
responses of both the 'truth model' and the design model.
There is good matching only up to 3 kHz. Even to achieve this,
it was necessary to include the 7-kHz resonance in the design
model, because it had much 'stray influence' into the lower
frequency range.
Since the range of good matching is fairly small with respect
to the projected crossover frequency range of 600-900 Hz, we had to be prepared to face robustness problems when applying the lqg controller to the 'truth model' or to the real plant.
This design model has been augmented by an integrator, the
output of which adds to the control input. This integrator
models a constant torque disturbance. This disturbance can be
observed by the Kalman filter and the estimate fed forward to
the control input. The final design model thus was of 9th order
both for the feedback and the Kalman filter design.

B. Open-Loop Shaping
As noted above we expected to run into problems with the
high-frequency behavior of the true plant, which is not well
represented in the design model. We therefore aimed first at
forcing rapid rolloff of the open-loop frequency response
beyond the projected crossover-frequency into the controller
design.
A viable method of doing this is
1) put a low-pass filter into the loop at the plant's input,
2) design state feedback and Kalman filter for the augmented plant,
3) implement the controller structure from Fig. 4.
This approach corresponds to the 'frequency-shaped cost
functionals' technique given in [2] (particularly example 4).
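Step 1) of that procedure, series-augmenting the plant with the input low-pass, can be sketched with state-space matrices. This is a generic illustration with numpy: the plant here is a toy double integrator, not the paper's 9th-order design model, and the corner frequency is merely placed near the paper's 600-900-Hz crossover range.

```python
import numpy as np

# Series-augment a plant (A, B, C) with a 2nd-order low-pass filter at
# its input, so that lq-optimal state feedback designed for the augmented
# system forces extra high-frequency rolloff.

def lowpass2(wc, zeta=0.7):
    """2nd-order low-pass wc^2/(s^2 + 2*zeta*wc*s + wc^2) in state space."""
    A = np.array([[0.0, 1.0], [-wc**2, -2.0 * zeta * wc]])
    B = np.array([[0.0], [wc**2]])
    C = np.array([[1.0, 0.0]])
    return A, B, C

def series_augment(Ap, Bp, Cp, Af, Bf, Cf):
    """Filter output drives the plant input: u -> filter -> plant -> y."""
    n, m = Ap.shape[0], Af.shape[0]
    A = np.block([[Ap, Bp @ Cf], [np.zeros((m, n)), Af]])
    B = np.vstack([np.zeros((n, 1)), Bf])
    C = np.hstack([Cp, np.zeros((1, m))])
    return A, B, C

Ap = np.array([[0.0, 1.0], [0.0, 0.0]])   # toy double-integrator plant
Bp = np.array([[0.0], [1.0]])
Cp = np.array([[1.0, 0.0]])

A, B, C = series_augment(Ap, Bp, Cp, *lowpass2(wc=2 * np.pi * 700))
print(A.shape, B.shape, C.shape)   # (4, 4) (4, 1) (1, 4)
```

State feedback designed for (A, B) then feeds back the filter states too, which is the effect discussed in the next paragraph.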
It might seem to be a problem that the filter states
themselves are also fed back and thus the filter characteristic
changes. We chose the corner frequency of the 2nd-order filter
near to the desired crossover frequency (usually a bit lower)
and there was only weak feedback of the filter states. The
filter's poles were only slightly shifted and the desired rolloff
was achieved without significant loss of control bandwidth, as
shown in Fig. 5. The small effect of filter feedback can be
explained by the cost which any larger changes in the filter dynamics would introduce, which are therefore avoided by the lq-optimal feedback design as long as the control bandwidth is not forced to be much higher than the filter corner frequency. It turned out however that open-loop shaping was not really necessary for our drive actuator control. The controller described below had moderate but sufficient roll-off without this technique, yet open-loop shaping might be necessary when manufacturing tolerances or the like require a more safe design. The oscillation visible in Fig. 5(b), which results from the near 0-dB resonance peak at 2 kHz, could also be avoided by weighting a suitably defined variable in the lq cost functional (see below).

[Fig. 3. Frequency response: truth model (1)/design model (2).]

[Fig. 4. Control structure with open-loop shaping.]

[Fig. 5. Results of open-loop shaping: (a) open-loop frequency response magnitude, (b) step response; (1) with open-loop shaping, (2) without open-loop shaping.]

C. lqg/ltr Design
The ltr method of designing a state-feedback plus Kalman filter (observer) based controller is now well established and already appears in textbooks, for instance [3]. In effect it means introducing fictitiously high process noise at the control inputs (one in our case). Increasing this noise forces the Kalman filter to rely more and more on the measurements, thus using the control input information less and less. In the limit case this information is no longer used at all. The loop transfer functions of the original full state feedback without Kalman filter are then 'recovered'. The purpose of this strategy is to make the control loop less sensitive to certain mismatches between the plant model used in the Kalman filter design and the real plant.
The limit case is however not practically useful, because it yields an ultrafast filter with too noisy estimates. A compromise has to be found. Our strategy generally is to observe the filter poles when increasing the control input noise and to locate the poles somewhere in the region which corresponds to the desired dynamics of the control system. The pole corresponding to the disturbance model is also located in an appropriate region through a suitable disturbance noise intensity.
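The ltr recipe can be illustrated numerically: inflate the process noise at the control input by a factor q and watch the Kalman gain, and hence the filter bandwidth, grow. This sketch uses a toy 2nd-order plant and invented noise intensities, not the paper's 9th-order model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Loop-transfer-recovery sketch: add fictitious process noise q*B*B^T at
# the control input.  As q grows, the filter trusts the measurement more
# and the control-input information less.

A = np.array([[0.0, 1.0], [-1.0, -0.2]])  # toy lightly damped plant
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
W0 = 0.01 * np.eye(2)                     # nominal process noise intensity
V = np.array([[1.0]])                     # measurement noise intensity

def kalman_gain(q):
    W = W0 + q * (B @ B.T)                # ltr: inflate input-channel noise
    # Filter Riccati equation via duality: solve for the error covariance.
    P = solve_continuous_are(A.T, C.T, W, V)
    return P @ C.T @ np.linalg.inv(V)

for q in (0.0, 1.0, 100.0, 10000.0):
    print(q, np.linalg.norm(kalman_gain(q)))
# the gain (and the filter pole speed) grows with q
```

In practice one stops increasing q once the filter poles sit in the region matching the desired closed-loop dynamics, exactly as the paragraph above describes.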
Figure 6 shows the control system structure. Due to the ltr procedure no problems arose with using the equivalent SISO compensator instead of the original 2-input controller with the control signal being explicitly fed into the Kalman filter. Note that without the fictitious ltr noise the Kalman filter would have relied far more on the control signal than on the measurement. In such cases the compensator may turn out to be unstable, so that the control system becomes only 'conditionally stable', which is very undesirable.
The result of the first attempt to design the state feedback
and the Kalman filter is shown by the step response in Fig.
7(a). This response was simulated with the design model as
plant, i.e., without any model mismatches. The state feedback
had been designed with cost function weight on the control
signal and the head position error.
The problem with this design was the 2-kHz oscillation visible in the step response. It is typical of lqg designs that lightly damped plant modes do not always come out well damped in the closed loop. Even if the damping achieved here were considered to be sufficient, it was unfortunately not retained when the controller was applied to the truth model of the plant. In fact, the oscillation built up, and the closed loop was unstable. To remedy this it was necessary to force the lqg design to yield more damping of the critical mode.

D. Modal Weighting

In resonant mechanical systems it is often necessary to achieve higher damping than given by the lqg design when weight is put only on the controlled or related variables [4]. Frequently the mechanical motion dominantly associated with a critical mode can be identified, such as the relative motion of


(x being the state vector) yields the required auxiliary output variable.
It is crucial to take the constants in such a way that the 's-term' is in the numerator of (1). It can be shown [4] that this ensures that damping is achieved by weighting the auxiliary variable without affecting the eigenfrequency too much. Strictly speaking, it is possible to move the critical eigenvalues exactly along the 'constant eigenfrequency/more damping' path by this weighting, provided that not the eigenvectors of the uncontrolled plant are used, but those of the already controlled (but insufficiently damped) plant. If the critical eigenvalues have not been affected significantly in the previous control design, it may be assumed that the eigenvectors are almost unchanged too. Then it is more convenient, and sufficient, to use the plant's eigenvectors, and this we did.
With this 'modal-weighting' technique we achieved sufficient damping very easily (Fig. 7(b)), and this then carried over to the truth-model-based simulation, and later on to the real implementation (see below).

[Fig. 6. Control structure.]

IV. IMPLEMENTATION RESULTS

[Fig. 7. Step responses simulated with design model.]

[Fig. 8. Optimal state trajectories (position, estimated velocity, feedforward drive, current drive vs. normalized time).]

[Fig. 9. Seek operation.]

4.2. Comparison with conventional trajectory
Figure 10 shows the difference between the conventional trajectory and the newly developed trajectory. These trajectories are output from the reference DAC every sampling period.
In conventional seek control, the actuator is accelerated with full power-amplifier output until its speed reaches the desired velocity trajectory given in the table. In this case, therefore, the amplifier saturates at the first stage of acceleration and at the transient stage from acceleration to deceleration in a specific track seek.
In our new seek control, the desired trajectory is calculated in real time, and through the full stage a smooth transient of the current drive can be observed.
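A trajectory that minimizes the square of the differentiated acceleration (jerk) over a rest-to-rest move has a well-known closed form, derived in reference 3 (Flash and Hogan). The sketch below illustrates that profile with an invented seek length and time; it is not the paper's actual table or real-time algorithm.

```python
# Minimum-jerk position profile for a rest-to-rest move of length d in
# time T: x(t) = d * (10*s^3 - 15*s^4 + 6*s^5), with s = t/T.
# Velocity and acceleration are zero at both ends, so the current drive
# starts and ends smoothly instead of slamming the amplifier into
# saturation as bang-bang control does.

def min_jerk(t, d, T):
    """Position along a minimum-jerk move at time t."""
    s = t / T
    return d * (10 * s**3 - 15 * s**4 + 6 * s**5)

def min_jerk_velocity(t, d, T):
    """Velocity along the same move (derivative of min_jerk)."""
    s = t / T
    return d / T * (30 * s**2 - 60 * s**3 + 30 * s**4)

d_tracks, T_ms = 100.0, 10.0   # invented: 100-track seek in 10 ms
for t in (0.0, 5.0, 10.0):
    print(t, min_jerk(t, d_tracks, T_ms), min_jerk_velocity(t, d_tracks, T_ms))
# position runs 0 -> 50 -> 100; velocity is 0 at both endpoints
```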

[Fig. 10. New and conventional trajectories (velocity trajectory, estimated velocity, current).]


4.3. Vibration reduction
Figure 11 shows simulated acceleration and its power spectrum against current drive for the three kinds of control: minimum-time control (bang-bang control), conventional trajectory control, and our new trajectory control. The ideal minimum time cannot be achieved because of the motor coil inductance.
With our trajectory control, the vibration is reduced effectively. At 2 kHz, the power spectrum gain is 20 to 30 dB lower than with the others.
Figure 12 shows the vibration reduction in the time domain for these three controls. The residual vibration after access is most effectively reduced by our trajectory (Fig. 13).

Fig. 11. Comparison with conventional trajectories (power spectrum, for bang-bang control, conventional trajectory, and new trajectory)

Fig. 12. Comparison with conventional trajectories (time domain: resonance = 1.8 kHz)

Fig. 13. Comparison of measured vibration after access (repeat seek: 100 tracks)


5. CONCLUSION
We have developed a new digital servo controller for a 5" hard disk drive. The state control with estimator was implemented on a digital signal processor. Stable tracking control and seek operation result in an average access time of 10 ms. To avoid the high frequency mechanical resonance, which depends on current driving during fast access, we proposed a new velocity trajectory calculated to minimize the square of differentiated acceleration.

6. REFERENCES
1. I. Yamada and M. Nakagawa, "Positioning Control of a Servomotor Mechanism with an Oscillatory Load," J. of the Society of Instrument and Control Engineers, vol. 18, no. 1, 1982.
2. K. Aruga, et al., "Acceleration Feedforward Control for Head Positioning in Magnetic Disc Drives," Trans. of ICAM 89, pp. 19-24, 1989.
3. T. Flash and N. Hogan, "The Coordination of Arm Movements: An Experimentally Confirmed Mathematical Model," J. Neurosci., vol. 5, pp. 1688-1703, 1985.
4. M. Kawato, Mathematical Sciences, no. 289, pp. 76-83, July 1987.
5. M. C. Stich, "Digital Servo Algorithm for Disk Actuator Control," Conference on Applied Motion Control, 1987.
6. S. Hasegawa, et al., Trans. of no. 944 Symposium, pp. 16-18, JSME, 1987.
7. I. Ahmed and S. Lindquist, "Digital Signal Processors Simplifying High-Performance Control," Machine Design, Sep. 10, 1987.
8. H. Hanselman, "Using Digital Signal Processors for Control," IECON'86, IEEE, 1986.
9. G. Franklin and J. Powell, Digital Control of Dynamic Systems, Addison-Wesley, 1980.
10. Y. Mizoshita and A. Futamata, "Head-Positioning Servo Design for Disk Drives," Fujitsu Sci. & Tech. J., vol. 18, no. 1, pp. 101-115, 1982.


IMPLEMENTATION OF A MRAC FOR A TWO AXIS DIRECT
DRIVE ROBOT MANIPULATOR USING A DIGITAL SIGNAL
PROCESSOR
R. Horowitz

N$n'QIII "0/"101
Department of Mechanical Engineering
University of California, Berkeley, CA 94720
ABSTRACT

This paper is concerned with the digital implementation of a Model Reference Adaptive Control (MRAC) algorithm on a Texas Instruments TMS32010 Digital Signal Processor (DSP). The MRAC was designed to control a two axis direct drive SCARA type robot manipulator. The primary purpose of the adaptive controller is to compensate for the inertial variations due to changes in arm configuration and payload. Experimental results presented clearly illustrate the need for adaptive control over conventional PID controllers for the type of structure used in the experiments. Discussion on the use of DSPs in controls is presented in terms of their capabilities and the influence their architecture will have on the sampling time of digital control systems.

1. INTRODUCTION

With the advent of direct drive robot manipulators, there has been a rising interest in the implementation of adaptive control to this particular class of robot arms [1], [2], [3], and [4]. Direct drive robots, unlike indirect drive robots, are much more sensitive to configuration and payload changes, making them ideal candidates for adaptive control. Due to real-time computational speed limitations, much of the studies in adaptive control have been limited to mathematical analysis and computer simulation. However, it is now possible to implement adaptive control on direct drive manipulators with the availability of affordable high speed digital signal processors (DSPs). It is the aim of this paper to present a digital implementation of a Model Reference Adaptive Control (MRAC) for a two axis SCARA-type robot manipulator using the TMS32010 from Texas Instruments. The paper will concentrate on the details of implementation and actual experimentation rather than the derivation of the adaptive controller. The details of the adaptive control design are referenced from our previous works [4].

The remaining sections of this paper are organized as follows: Section 2 will briefly describe the adaptive control algorithm used, followed by a detailed discussion of the implementation of the algorithm on the TMS32010 in Section 3. Section 4 will discuss the experimental results, and the paper will conclude with Section 5 discussing some of the advantages of using the TMS32010 DSP for real-time control applications.

2. ADAPTIVE CONTROL SCHEME

The two axis direct drive robot arm used for the experiments is shown in Fig. 1.0. The dynamic equations for such a two axis manipulator can be expressed as [1], [2], and [3],

    M(x1) dx2/dt + v(x1, x2) + d = u                          (1)
    dx1/dt = x2                                               (2)

where M(x1) is the 2x2 inertia matrix, which is symmetric and positive definite; x1, x2, and u are respectively the two dimensional joint displacement, joint velocity, and torque vectors. The vector v represents the nonlinear terms due to Coriolis and centripetal accelerations. The Coulomb friction torque vector is represented by d.

Assuming that the torque vector u is preceded by a zero-order hold, the dynamic equations (1) and (2) are discretized to

    x1(k+1) = x1(k) + T x2(k)                                 (3)
    x2(k+1) = x2(k) + T M(k)^-1 (u(k) - v(k) - d(k))          (4)

where T is the sampling period.

Based on Eqs. (3) and (4), a Series-Parallel Model [4] can be defined as

    xm(k+1) = x2(k) + T ub(k)                                 (5)

and the torque input vector is described by

    u(k) = M^(k) ub(k) + v^(k) + d^(k)                        (6)

where M^(k), v^(k), and d^(k) are the estimates of M, v, and d, respectively. Defining the adaptation error as

    e(k) = xm(k) - x2(k),                                     (7)

the parameter adaptation algorithm for M^(k), v^(k), and d^(k) is given by

    M^(k) = M^(k-1) + T KM e(k) ub^T(k-1)                     (8)
    v^(k) = [v1^(k) v2^(k) ... vn^(k)]^T                      (9)
    d^(k) = dm^(k) s(x2(k), u(k))                             (10)
    vi^(k) = x2^T(k) N^(i)(k) x2(k)                           (11)
    N^(i)(k) = N^(i)(k-1) + T KN(i) ei(k) x2(k-1) x2^T(k-1)   (12)
    dm^(k) = dm^(k-1) + T Kd s(x2(k), u(k)) e(k)              (13)

The Coulomb friction function s(x2(k), u(k)) is given by

    s(x2i, ui) = sign[x2i]   if |x2i| > ev
                 sign[ui]    if |x2i| <= ev                   (14)

where sign[x2i] = 0 if |x2i| = 0, and ev is a velocity resolution deadband. KM, KN(i), and Kd are constant positive adaptation gain matrices, and ei(k) is the i-th element of the vector e(k). The block diagram of the model reference adaptive control scheme is shown in Fig. 2.0. Interested readers should refer to Horowitz et al. [4] for the details of derivation and stability analysis of this algorithm.
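One discrete step of the adaptation laws (8), (10), and (13) can be sketched in a few lines. The gains, sampling period, deadband, and variable names below are illustrative assumptions of ours, not values or code from the paper.

```python
# Minimal sketch of one step of the parameter-adaptation laws (8), (10),
# (13), and the friction sign function (14), for 2-joint vectors.
# Gains K_M, K_d, period T, and deadband eps are assumed, not the paper's.
import numpy as np

T = 0.001                 # sampling period [s] (assumed)
K_M = 0.5 * np.eye(2)     # inertia adaptation gain (assumed)
K_d = 0.2 * np.eye(2)     # friction adaptation gain (assumed)
eps = 1e-3                # velocity resolution deadband (assumed)

def friction_sign(x2, u, eps):
    """Eq. (14): sign of velocity, or of torque inside the deadband."""
    return np.where(np.abs(x2) > eps, np.sign(x2), np.sign(u))

def adapt_step(M_hat, d_mag, e, ub_prev, x2, u):
    """Update M_hat (Eq. 8) and the friction magnitude/estimate (13, 10)."""
    M_hat = M_hat + T * K_M @ np.outer(e, ub_prev)   # gradient-type update
    s = friction_sign(x2, u, eps)
    d_mag = d_mag + T * K_d @ (s * e)                # Eq. (13)
    d_hat = d_mag * s                                # Eq. (10)
    return M_hat, d_mag, d_hat
```

Note that when the adaptation error e(k) is zero the estimates stay fixed, which is the expected behavior of these integral-type update laws.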
3. TMS32010 IMPLEMENTATION
The NSK-UCB robot, illustrated in Fig. 1.0, is a SCARA-type arm driven by two NSK direct drive motors. Axis 1 is driven by a model 1410 motor with a maximum torque capability of 245 Nm. Axis 2 is driven by a smaller motor, model 608, capable of delivering up to 39.2 Nm torque. Both motors are powered by switching amplifiers from NSK, Series 1.5 and Series 1.0 for the model 1410 and model 608, respectively. A block diagram of the real-time control system is illustrated in Fig. 3.0.

© 1988 IEEE. Reprinted, with permission, from Proceedings of American Control Conference, June 1988.


Fig. 1.0 NSK-UCB Two Axis Direct Drive Manipulator

Fig. 4.0 Non-Adaptive Control (system tuned without payload)

Fig. 2.0 Adaptive Control System

Fig. 5.0 Non-Adaptive Control (system tuned with payload)

Fig. 3.0 Experimental Set-up

Fig. 6.0 Adaptive Control


Two IBM-ATs are used to implement the algorithm described in the previous section. The first IBM-AT is used to close the proportional position loop for both axes in 7 ms. The NSK amplifiers provide a two phase quadrature signal for position feedback. Both motors provide a resolution of 153,600 pulses per revolution. The quadrature signals are decoded to a 16 bit integer which is sampled by the IBM-AT and internally converted to a 32 bit integer by software. The IBM-AT calculates the appropriate velocity command signal for each axis and delivers the command to the second IBM-AT through two digital to analog converters (D/A).
The second IBM-AT, which houses the TMS32010 DSP board from Atlanta Signal Processors, Inc., samples the velocity command from the first IBM-AT via two analog to digital converters (A/D). The minor adaptive velocity loop for each axis resides on the TMS32010 board. The second IBM-AT serves only as a data acquisition computer for the TMS. The IBM-AT is responsible for sampling four A/D's and controlling two D/A's during real-time execution. The four A/D's are two for the velocity command and two for the velocity feedback signal. The velocity feedback signals for both axes are provided by the NSK amplifiers as analog signals ranging from +10 volts to -10 volts, which corresponds to 1.0 RPS to -1.0 RPS. The two D/A's are used to deliver the computed torque commands from the TMS to the NSK amplifiers. For our system the NSK amplifiers have a gain of 47 Nm/V for axis 1 and 25 Nm/V for axis 2. The system is configured such that the TMS is a high speed numeric processor for the IBM. The real-time program is interrupt driven through the system timer of the IBM. The IBM controls the sample, and when data is ready delivers it to the TMS through a common shared memory space between the IBM and the TMS. The IBM in turn signals the TMS to begin execution. Upon completion the TMS delivers the computed torque command to the IBM through the same shared memory space and signals the IBM that the computation for that time slice is complete. The IBM then delivers the torque command to the appropriate NSK amplifiers via the two D/A's.
The adaptive velocity loops for both axes were implemented in TMS assembly. The resulting code was 755 bytes, with a minimum possible loop time of 151 μs. However, the overall algorithm was limited to about 700 μs, due to the limiting speed of the IBM-AT and the IBM Data Acquisition Board used. A similar version of the algorithm was written in assembly for the 80286 on the IBM-AT, which ran at a minimum rate of 2 ms. Note that if it were not for the limitation of the I/O drivers, by virtue of pure software execution time, the use of the DSP over conventional general purpose CPUs can decrease the sampling time by almost one order of magnitude.

The entire algorithm was coded with fixed-point arithmetic in mind. For this particular adaptive control algorithm, a fixed-point format of 7 bit integer and 8 bit fraction, which corresponds to a numerical range of ±2^7 with a resolution of 2^-8, was sufficient. The task of scaling was simplified by the TMS, since it provides a 0 to 15 bit barrel shifter which can shift the data as it is being loaded from the memory to the arithmetic logic unit (ALU). Another feature of the TMS which makes its performance superior to most general purpose processors is the parallel hardware multiplier, which allows the TMS to perform a 16x16 bit multiply in 200 ns. An important feature of the TMS which is beneficial to most control applications and is not available in general purpose processors is the overflow mode, which when set prevents numeric overflows and underflows. Another point which should be mentioned is that the macro capability of the TMS assembly language has made the programming task bearable and actually rather simple.
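The 7-bit-integer / 8-bit-fraction format described above can be emulated in a few lines; this sketch only illustrates the arithmetic of the format (quantize, multiply, shift back, saturate), not the TMS32010's actual barrel-shifter or overflow-mode hardware.

```python
# Sketch of Q7.8 fixed-point arithmetic: 1 sign bit, 7 integer bits,
# 8 fraction bits, stored in a 16-bit word. Illustrative emulation only.

FRAC_BITS = 8

def to_q78(x):
    """Quantize a real number into the Q7.8 range, saturating at 16 bits."""
    raw = int(round(x * (1 << FRAC_BITS)))
    return max(-32768, min(32767, raw))   # saturation, like overflow mode

def q78_mul(a, b):
    """Multiply two Q7.8 values: full 16x16 product, then shift back by 8."""
    return (a * b) >> FRAC_BITS           # arithmetic shift (floors)

def from_q78(a):
    """Convert a Q7.8 word back to a float."""
    return a / (1 << FRAC_BITS)

p = q78_mul(to_q78(1.5), to_q78(2.25))    # 1.5 * 2.25 in Q7.8
```

On the TMS32010 the shift-back step is free: the barrel shifter applies it as the product is moved into the accumulator.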

4. EXPERIMENTAL RESULTS
The adaptive control algorithm was implemented on the NSK-UCB robot, and the results are illustrated in Figs. 4.0 to 6.0. The robot was subjected to a payload change of approximately 7.5 Kg. Both axes were tracking a third order trajectory which required each axis to traverse over 180° and return. The plots shown are closeups of the second axis position response as it reaches its 180° destination.

Fig. 4.0 illustrates the non-adaptive case in which the system was tuned without the payload. The performance is fair without payload; however, the system experiences tremendous overshoot when the payload is added. Fig. 5.0 illustrates the opposing case, where the system is tuned with the payload and becomes highly oscillatory when the payload is lost. Fig. 6.0 shows the adaptive case, which is nearly indistinguishable for either payload configuration. The results are for a sampling rate of 7 ms for both the position and adaptive velocity loops.

5. CONCLUSIONS
The digital implementation of a MRAC to a two axis direct drive robot using the TMS32010 Digital Signal Processor in conjunction with two IBM-ATs was presented. From the actual experience gained through implementation, a few features of the TMS were found to be extremely beneficial to controls. These are:

• Macro capability of TMS assembly language
• Small and simple instruction set
• 0 to 15 bit barrel shifter for scaling
• 200 ns 16x16 bit hardware multiplier
• Single cycle instructions for simple timing analysis
• 32 bit accumulator
• Overflow mode for automatic numerical wrap-around prevention

A key to the success of this implementation was the careful scaling and unscaling of intermediate values throughout the calculations. It should be noted that this may no longer be a concern with today's availability of floating point digital signal processors, such as AT&T's DSP32.

ACKNOWLEDGMENT
This work was supported by the National Science Foundation under grant MSM-8511955, the IBM Corporation under the U.C. Berkeley grant for distributed academic computing environment, and the NSK Corporation.

REFERENCES
[1] Horowitz, R. and Tomizuka, M. (1980), "An Adaptive Control Scheme for Mechanical Manipulators -- Compensation of Nonlinearity and Decoupling Control," ASME Paper #80-WA/DSC-6; also in ASME Journal of Dynamic Systems, Measurement and Control, Vol. 108, No. 2, June 1986, pp. 127-135.
[2] Tomizuka, M., Horowitz, R. and Anwar, G. (1986), "Adaptive Techniques for Motion Controls of Robotic Manipulators," Japan-U.S.A. Symposium on Flexible Automation, July 1986, pp. 117-224.
[3] Sadegh, N. and Horowitz, R. (1987), "Stability Analysis of an Adaptive Controller for Robotic Manipulators," Proceedings of the 1987 IEEE Int. Conf. on Robotics and Automation, pp. 1223-1229, April 1987.
[4] Horowitz, R., Tsai, M.C., Anwar, G., Tomizuka, M. (1987), "Model Reference Control of a Two Axis Direct Drive Manipulator Arm," Proceedings of the 1987 IEEE Int. Conf. on Robotics and Automation, pp. 1216-1222, April 1987.
[5] Texas Instruments, "TMS32010 User's Guide," Texas Instruments Incorporated, 1983.


Implementation of a Self-Tuning
Controller Using Digital Signal
Processor Chips
K. H. Gurubasavaraj
ABSTRACT: This paper describes implementation aspects of a self-tuning motion
controller, which uses the Texas Instruments
TMS32010 digital signal processor (DSP)
chip. The potential advantages in using a
DSP chip include reduced operation time,
reduced development time, and reduced cost.
The self-tuning controller can track variations in system parameters as well as system
disturbances. Algorithms are described, experimental results are presented, and implementation strategies to overcome limitations
of such systems are discussed.

Introduction
In many applications of electromechanical
systems, parameters such as inertia and load
torque may vary over time. Variation of load
torque, manufacturing variations, and aging
can degrade system performance. The design framework of self-tuning control is suitable for adjusting control parameters as well
as compensating for disturbances [1], [2].
However, cost considerations have tended to
limit the implementation of adaptive control
to process control applications, where the
control costs can be justified. The advances
in microprocessor technology with reduced
cost have made it possible to apply adaptive
control to electromechanical systems because digital signal processing (DSP) chips
reduce cost and development time. In particular, the implementation of adaptive control presented in this paper is currently being
considered for use in a commercial product.
Digital control normally is implemented
with a microcontroller, and microcontroller
architecture is well suited for handling inputs
and outputs for motion control systems.
However, the arithmetic logic unit of such
devices is slow, due to the general-purpose
microprocessor architecture. For example,
16-bit multiplications normally require 5-20
μsec. These slow times preclude using these
devices for simultaneous identification and
control of an electromechanical system. On
K. H. Gurubasavaraj is with the Xerox Corporation. Business Products and Systems Group, 1350
Jefferson Road, Rochester, NY 14623.

the other hand, the architecture of a DSP
chip is quite suitable for intensive computation. Multiplication times for 16-bit DSP
chips are in the range of 60-200 nsec. This
time improvement makes possible real-time
on-line adaptive control.
This paper discusses the implementation
of self-tuning adaptive control using a DSP.
Many vendors, such as Texas Instruments,
Fujitsu, AT&T, Motorola, National Semiconductor, and NEC, produce a wide range
of DSPs. The AT&T DSP32, Texas Instruments TMS320C30, and NEC μPD77230 have
32-bit floating-point hardware architecture.
They are capable of producing multiply and
accumulate floating-point operations within
60-150 nsec. Other DSPs have fixed-point
data architecture. The cycle time of these
first-generation devices is in the range of
100-200 nsec. As a result of hardware multipliers, multiply and accumulate fixed-point
operations are performed in 160 nsec compared to 12-16 μsec using Intel 8086 or 8096
devices. The cost of these devices is comparable to other microcontrollers, costing less
than $10. The Texas Instruments TMS32010
device is used here for self-tuning controller
development because of cost considerations
and availability of development systems.
Furthermore, all control functions of a microcontroller are integrated with the
TMS32010 central processing unit (CPU) in
the new device, referred to as digital signal
controller DSC32014. This chip can be considered as a true single-chip controller capable of performing identification, control,
and input-output signal processing in real
time.
The TMS320 family of processors has a Harvard-type architecture with separate data and address lines. The instructions are suited for implementation of digital filters. For example, the combination of the LTD and MPY instructions loads a coefficient in a register, multiplies and accumulates with previous products, and moves the data memory to the next higher memory address space. Hence, implementation of each additional pole and zero can be performed with two instructions. More information about these devices can be found in Refs. [3]-[5].
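What the LTD/MPY pairing accomplishes per tap can be pictured as: multiply a coefficient by a delayed sample, accumulate with the previous product, and shift the sample down the delay line. The plain-Python sketch below is an illustrative equivalent of that loop, not TMS code; the names and the 3-tap example are ours.

```python
# Sketch of a direct-form FIR step, the operation the LTD/MPY pair
# performs one tap at a time on the TMS320: multiply-accumulate plus
# a data move down the delay line. Illustrative Python, not TMS code.

def filter_step(coeffs, delay, x_new):
    """One output sample of y(n) = sum(coeffs[i] * x(n - i))."""
    delay.insert(0, x_new)      # newest sample enters the delay line
    delay.pop()                 # oldest sample falls off (the data move)
    acc = 0.0
    for c, x in zip(coeffs, delay):
        acc += c * x            # the multiply/accumulate of each tap
    return acc

# 3-tap moving average driven with a constant input:
d = [0.0, 0.0, 0.0]
ys = [filter_step([1/3, 1/3, 1/3], d, x) for x in (3.0, 3.0, 3.0)]
```

On the DSP the three statements inside the loop body collapse into a single-cycle instruction per tap, which is why each additional pole or zero costs only two instructions.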

There are implementation constraints with
these chips because of the fixed size of random access memory (RAM) space and hardware architecture suited for fixed-point operations. Our objective is to design the
adaptive control with the capability to estimate the maximum number of parameters
and control the system. Simultaneously, the
implemented controller can track the velocity or position of a given electromechanical
system. Experiments have been conducted to
measure the effect of mismatch between the
assumed model and the actual system.

System Model
The dynamics of many electromechanical
systems can be represented using well-known
models. For illustrative purposes, the permanent magnet DC motor driving a load with
total inertia J can be represented by the following equations, where R, L, Kt, Ke, B, v,
i, and Td indicate resistance, inductance,
torque constant, back electromotive-force
constant, damping coefficient, voltage applied, current through the armature, and disturbance torque, respectively.
    L di(t)/dt + R i(t) + Ke w(t) = v(t)            (1)

    J dw(t)/dt + B w(t) = Kt i(t) - Td              (2)

The operation of the Laplace transform yields the following transfer-function relationship, where K*, a*, b*, and c* are determined from Eqs. (1) and (2).

    w(s) = [K1* V(s) - K2* (s + c*) Td(s)] / (s^2 + a1* s + a2*)      (3)

The equivalent Z-domain transfer function using zero-order hold gives the following relationship among velocity, voltage, and torque disturbance.

    w(z) = [K1 (z + b1) V(z) - K2 (z + c1) Td(z)] / (z^2 + a1 z + a2)      (4)
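A transfer function of this second-order form can be stepped directly as a time-domain recursion, which is how the controller later uses it. The coefficient values in this sketch are invented for illustration, not identified from a real motor.

```python
# Sketch of Eq. (4) run as a recursion:
# w(k+1) = -a1*w(k) - a2*w(k-1) + K1*(v(k) + b1*v(k-1))
#          - K2*(Td(k) + c1*Td(k-1)).
# All coefficient values below are illustrative assumptions.

a1, a2 = -1.6, 0.64      # stable double pole at z = 0.8 (assumed)
K1, b1 = 0.05, 0.9       # voltage path gain and zero (assumed)
K2, c1 = 0.02, -0.5      # disturbance path (assumed)

def simulate(v_seq, td_seq):
    w = [0.0, 0.0]                    # w(k-1), w(k) initial rest
    v_prev = td_prev = 0.0
    for v, td in zip(v_seq, td_seq):
        w_next = (-a1 * w[-1] - a2 * w[-2]
                  + K1 * (v + b1 * v_prev)
                  - K2 * (td + c1 * td_prev))
        w.append(w_next)
        v_prev, td_prev = v, td
    return w[2:]

w = simulate([1.0] * 50, [0.0] * 50)  # step-voltage response, no load
```

With these assumed values the DC gain is K1(1 + b1)/(1 + a1 + a2) = 2.375, which the step response approaches.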

The adaptive control methodology presented here develops the control parameters
as if the system parameters were known. A
suitable identification procedure is used to

© 1989 IEEE. Reprinted, with permission, from IEEE Control Systems Magazine, June 1989.


tune the initial control parameters. The following section describes the identification
procedure.

System Identification
The discrete dynamic equations for the system in Eq. (4) can be written in the time domain using the following recursive equation, where the system parameters are represented by θi.

    w(k+1) = θ1 w(k) + θ2 w(k-1) + θ3 v(k) + θ4 v(k-1) + θ5 Td(k) + θ6 Td(k-1)      (5)

If the torque disturbance is constant, then Td(k) equals Td(k-1), and the last two terms in Eq. (5) can be combined to give a single bias term. This unknown bias can be included in the system parameters θi. A straightforward recursive least-squares (RLS) estimation procedure can be used to identify the system parameter vector Θ. However, when the torque disturbances vary over time and the torque disturbance sequence is not known, the estimation process becomes nonlinear due to the unknown torque disturbance terms Td(k) and Td(k-1) in the Φ signal vector, which multiply the unknown parameter vector Θ.

The system equations can be written in vector notation, where the vector Θ represents system parameters, the Φ vector constitutes all known signals, and superscript T denotes transpose of the vector.

    w(k+1) = Φ^T(k) Θ(k)      (6)

For the preceding class of problems, the elements of the parameter vector Θ as well as the unknown elements of the signal vector Φ need to be estimated. The estimation problem can be solved by using either the extended least-squares (ELS) method or the approximate maximum likelihood (AML) estimation method [6]. If the properties of the disturbance noise distribution are known, the AML estimate has superior convergence properties compared to the ELS method. In the absence of such knowledge, both schemes exhibit similar convergence properties. The simplicity of the ELS algorithm and the absence of knowledge about the disturbance prompted us to use ELS estimation. The recursive estimation scheme is given by the following equation, which is similar to the Kalman filter equations for linearized estimation, where the superscript ^ indicates the estimate and the vector B represents the gain.

    Θ̂(k+1) = Θ̂(k) + B(k+1)[w(k+1) - Φ^T(k) Θ̂(k)]      (7)

Since not all elements of the Φ vector are known in this equation, the unknown elements Td(k) and Td(k-1) are replaced by their residual sequence. The residual sequence of Td(k) is obtained from the following version of Eq. (5), where the parameters are replaced by estimates obtained from Eq. (7).

    T̂d(k-1) = (1/θ̂5)[w(k) - θ̂1 w(k-1) - θ̂2 w(k-2) - θ̂3 v(k-1) - θ̂4 v(k-2) - θ̂6 T̂d(k-2)]      (8)

The T̂d(k) estimates in the Φ vector of Eq. (7) are replaced by T̂d(k-1) from Eq. (8). The preceding substitution assumes that the disturbances are continuous in nature and the bandwidth of such disturbances is much lower than the sampling rate. The recursive equations to determine the vector gain B are similar to the Kalman filter equations.

    B(k+1) = P(k) Φ(k) / [1 + Φ^T(k) P(k) Φ(k)]      (9)

    P(k+1) = [I - B(k+1) Φ^T(k)] P(k)                (10)

Here P(k) is the covariance matrix, which is initialized by setting P(0) equal to a diagonal matrix. Elements of the parameter vector Θ are initialized by some initial guess.

The design of the controller is carried out by using estimates of the system parameters instead of actual values. The single-step-ahead prediction is used to generate control signals. The desired reference velocity wr during the next sample time is equated to the single-step-ahead velocity prediction by using the following version of Eq. (5):

    v(k) = (1/θ̂3)[wr(k+1) - θ̂1 w(k) - θ̂2 w(k-1) - θ̂4 v(k-1) - θ̂5 T̂d(k) - θ̂6 T̂d(k-1)]      (11)
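The RLS core of the scheme, Eqs. (7), (9), and (10), can be sketched for the constant-disturbance case, where the torque bias folds into the parameter vector as described above. The simulated plant, gains, and names below are illustrative assumptions; the full ELS residual substitution of Eq. (8) is omitted for brevity.

```python
# Sketch of the recursive least-squares core of Eqs. (7), (9), (10) for
# the constant-disturbance case (bias absorbed into the parameters).
# The simulated plant w(k+1) = 0.9 w(k) + 0.2 v(k) + 0.5 is assumed.
import numpy as np

def rls_step(theta, P, phi, w_next):
    """One update: gain (9), parameters (7), covariance (10)."""
    Pphi = P @ phi
    B = Pphi / (1.0 + phi @ Pphi)                        # Eq. (9)
    theta = theta + B * (w_next - phi @ theta)           # Eq. (7)
    P = (np.eye(len(theta)) - np.outer(B, phi)) @ P      # Eq. (10)
    return theta, P

rng = np.random.default_rng(0)
theta = np.zeros(3)            # initial guess [θ1, θ3, bias]
P = 1000.0 * np.eye(3)         # large initial covariance
w = 0.0
for _ in range(200):
    v = rng.uniform(-1, 1)                 # persistently exciting input
    phi = np.array([w, v, 1.0])            # regressor with bias term
    w_next = 0.9 * w + 0.2 * v + 0.5       # noise-free simulated plant
    theta, P = rls_step(theta, P, phi, w_next)
    w = w_next
```

A forgetting factor, as discussed later for time-varying parameters, could be added by dividing P by a constant less than 1 after each update.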

Implementation Considerations
The preceding real-time identification and control laws have been implemented using an Intel 8051 family controller with a Texas Instruments TMS32010 DSP as a coprocessor. The block diagram of the hardware schematic is shown in Fig. 1. The DSP is used to generate velocity profiles and perform estimation and control calculations to generate the controlled input. The timers and counters on the 8051 are used to perform bookkeeping functions. All interface logic, input and output processing, and the TMS32010 CPU are integrated in the device DSC32014. This device provides the needed input and output processing capabilities as well as the fast computation capabilities of a DSP. Hardware design based on the DSC32014 is in progress.
The implementation of the preceding equations should consider the internal hardware architecture of the device. The architecture of the TMS320 devices is optimized to implement digital filters. For example, the TMS32010 can implement loading the register, adding the value to the accumulator, and moving the signal value into the next memory location. These capabilities are well suited for any classical filter implementation. However, the estimation routines need matrix or vector manipulations. Subroutines can be written for doing these manipulations. The time needed for these calls can be saved if the operations are performed in scalar form. Scalar manipulations decrease the RAM size requirement, while increasing the read-only memory (ROM) requirement due to additional coding. At the present juncture, this trade-off is advantageous due to limited RAM space (144 words) compared to ROM space (1536 words) available on these chips.
Estimation Routine and Control Design
The estimation routine used in Eqs. (7)(10) can be directly 'implemented for estimating a small number of parameters. Estimating a larger number of parameters requires larger memory space. The covariance
matrix P in Eqs. (9) and (10) should be positive definite for assuring convergence. The
matrix P can lose positive definiteness due
to subtraction operations in Eq. (10), leading
to divergence. To provide numerical stability, the update of the covariance matrix can
be accomplished with the square-root version of the P matrix instead of the P matrix
itself, which is known as square-root filtering in the literature. However, square-root
filtering is computationally expensive. Bierman's UDU T method [7) provides the advantage of less memory space and does not
need square-root calculations, while accomplishing the same. objective. Bierman's
method requires n(n - 1)12 locations for
covariance matrix manipulation instead of n'
locations in a regular filter implementation.

Hence, Bierman's UDU^T method is employed to provide numerical robustness and for its applicability to estimation of other higher-order systems. Details about this algorithm can be found in Ref. [7].

The norm of the covariance matrix (and, hence, the filter gain) decreases as time increases and eventually goes to zero. This is desirable if the system is indeed time invariant. If the parameters are time varying, the decrease causes the loss of adaptive capability. To keep the filter active, the covariance matrix elements are divided by a constant less than 1, which is known as a forgetting factor. At the same time, Eq. (9) is also modified slightly. The TMS320 architecture needs a subroutine to perform divisions, so the reciprocal of the forgetting factor is used to multiply the covariance matrix elements. Experiments were conducted to observe the effects of different forgetting factors. In some cases, the mismatch of the forgetting factor can increase the covariance matrix values to cause numerical instabilities or decrease them to small numbers. (Forgetting factors of less than 0.9 lead to increases in the elements of the covariance matrix. Forgetting factors of 0.98 and 0.99 provided better results for our specific applications.) To provide some protection against these situations, bounds on the minimum and maximum norms were employed. Resetting of the covariance matrix was performed at these boundaries, and this strategy worked fairly well in practice.

The convergence of RLS estimation can be assured while tracking only system parameter variations. The convergence is independent of the initial parameter estimates. When significant disturbances are present, the equation error due to the disturbance sequences leads to biased estimates. The consistency of RLS relies on the uncorrelated residual sequence, which requires a special noise structure. A correlated residual sequence leads to biased estimates. The ELS estimation is used to estimate disturbance torques and associated parameters. In this case, convergence is dependent on the initial parameter estimates. For many electromechanical systems, the parameter bounds are known. The estimator converges to the true values when the initial estimates of the parameters are in the proximity of the actual values. The estimates of the product θ5 and Td(k) are known to a greater degree of certainty than the individual components.

In actual implementation, the gain term θ5 can be normalized. For the case presented, the total number of parameters that need to be adapted is five. Overflow and underflow situations may occur due to fixed-point representation of numbers. Appropriate scaling becomes important to avoid these problems. Scaling is a continuous conflict between the dynamic range and resolution of signals or coefficients. Sign-plus two's complement arithmetic is used to represent numbers. Coefficient scaling is accomplished by estimating the maximum value of the coefficient estimates. Then, all coefficients can be normalized within the available word length. Similarly, signals are scaled. Setting of the overflow mode saturates the coefficient value at the maximum. This feature recovers the estimator from soft saturation without leading to damaging consequences. Appropriate safeguards need to be provided for eventual saturation problems. Many different ad hoc strategies can be used, depending on the type of saturation. The ELS algorithm using fixed-point arithmetic requires approximately 200 μsec for computation.

For a set-point regulator problem, the absence of persistent excitation may cause the filter to diverge. For the cases studied, the frequency components of torque disturbances appear to provide the needed frequency components and prevent divergence. Some divergence-related problems are noticed in linear systems without disturbances. Many investigations are being carried out to determine the cause of such divergence. At present, it is ascribed to insufficient excitation of input signals. If these disturbances are absent, then some perturbation may have to be provided in the input signal to prevent filter divergence [8]. Providing the needed excitation for the estimator and good regulatory performance seem to be a challenge.

Fig. 1. Block diagram of implementation hardware.

Experimental Results
An Electrocraft E543 motor is used in the laboratory experiments. Using a magnotrol brake, torque disturbances of varying magnitudes are induced. The amplitude of such torque disturbance variation is limited to 31 oz.-in. Different torque magnitudes are used in the experiments. A sample period of 400 μsec is used.

Identification and Predictive Control for Constant Disturbance Torques
The predictive control using RLS estimation is shown in Fig. 2. The servo is tracking a series of trapezoidal profiles. As can be seen, the adaptation is complete during the ramp-up period. Ten or twelve samples of data are needed for convergence. Similar results confirm adaptation to different sets of system parameters by using different motors, inertias, and frictional loads. However, this estimation scheme leads to biased results when significant variations of torque disturbances are present. Figure 3 shows the performance for triangular torque disturbances at a frequency of 5 Hz. The velocity variation is limited to ±5 counts/sample. The nonadaptive compensator performance for torque variation is similar to the RLS method.

Performance Under Varying Torque Loads

The penonnance of the controller based
on ELS identification is compared with a
well-tuned proportional-integral-derivative

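The overflow-mode saturation described above can be illustrated with a minimal Q15 (sign plus 15 fractional bits, two's complement) sketch. This is Python pseudocode for the idea, not the actual TMS320 code.

```python
Q15_MAX = 2**15 - 1          # 0x7FFF, about +0.99997 in Q15
Q15_MIN = -2**15             # 0x8000, exactly -1.0 in Q15

def to_q15(x):
    """Scale a float in [-1, 1) to a 16-bit Q15 integer, clamping at the rails."""
    return max(Q15_MIN, min(Q15_MAX, int(round(x * 2**15))))

def sat_add(a, b):
    """Add two Q15 numbers with overflow-mode saturation (clamp instead of wrap)."""
    return max(Q15_MIN, min(Q15_MAX, a + b))

def to_float(a):
    return a / 2**15

# 0.75 + 0.5 overflows the Q15 range; saturation clamps it at ~0.99997
s = sat_add(to_q15(0.75), to_q15(0.5))
print(to_float(s))
```

Clamping at the rails, rather than wrapping around, is what lets the estimator recover from soft saturation without damaging consequences.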
The MCV60
The control strategies described in the preceding sections are implemented on a motion control card, the MCV60. This card communicates with a host computer via the VMEbus. Software on the host provides a user-friendly environment for commissioning the system and adjusting the parameters during operation. The card is designed to control two brush-type DC motors or one brushless DC motor. Up to sixteen cards may be used together on the same bus. A functional block diagram of the MCV60 is shown in Figure 11.

[Figure 10 (graphics not reproduced): position error trace over 0.0 to 3.5 s.]

Figure 10. Position Error During Cubic Set Point Movement [DC motor: velocity = 29.3 rev/sec, acceleration = 39 rev/sec², jerk = 78 rev/sec³].
The MCV60 hardware is based around the TMS320C25 signal processor running at 40 MHz. This high clock frequency, together with the arithmetic capabilities of …

[Figure 11 (graphics not reproduced): block diagram showing the 68000 host issuing point-to-point, jog, quick-stop, and homing commands through a cubic-spline interpolator; feedforward gains; servo status reporting (overshoot, settling time, maximum position error); hardware status and diagnostics; digital I/O; a power amplifier; the servomechanism; and an encoder/analog interface.]

Figure 11. Block Diagram of the MCV60 Connected to the Servo System.

Figures 12 and 13 demonstrate the speed and accuracy of the MCV60 motion controller. In Figure 12, the response of a brushless linear motor to a set point displacement of 10 mm (10,000 increments) is shown. Figure 13 shows the motor velocity profile and the position error of the same servo when running at maximum velocity (data speed = 1.5 MHz). The maximum position error is only 12 increments (12 μm).


… commutation software is available [11]. Extensive hardware and servo diagnostics are performed at power-up, and critical hardware checks are performed each sample period. System monitoring is performed each sample period, generating the two monitor DAC variables. Other performance indicators are also recorded, such as the maximum error during a movement, the overshoot, the settling time, etc.
At the host level, drivers are provided for communicating with the MCV60. A menu-driven, user-friendly test environment initializes and tunes system parameters. Self-tuning facilities initialize the controller parameters to suitable values based on the system characteristics. More refined parameter tuning may then be simply
carried out by a series of well- …

PAUL M. SZCZESNY

Manuscript received June 25, 1987; revised June 15, 1988. This work was supported by the Department of Energy under a cost-shared contract (Contract DE-AC07-85NV10418) with Ford Motor Company.
B. K. Bose was with the General Electric Research and Development Center, Schenectady. He is now with the Department of Electrical Engineering, The University of Tennessee, Knoxville, TN 37996-2100, and also with the Power Electronics Applications Center (PEAC), Knoxville, TN.
IEEE Log Number 8823552.
1. Simnon was developed by Lund Institute of Technology, Sweden.

… Xds) in this type of machine permits economical machine design because torque is contributed by the magnet field as well as by the reluctance effect. In the past, ferrite and Cobalt-Samarium magnets have generally been used in PM machines. Recently, a Neodymium-Iron-Boron (NeFeB) magnet has been introduced, which shows considerable promise. The NeFeB magnet has much higher energy
density at reasonably "low cost" and therefore permits
economical machine design. However, one characteristic of
the magnet is that its field strength weakens as the temperature
increases. Considering the recent research and development
trends, it is expected that the price of the NeFeB magnet will
fall considerably and its characteristics will improve, thus
promoting extensive applications in the future.
In the past, induction motors have generally been considered as viable ac machines for electric vehicle drive applications. Although induction machines are simple, economical,
and satisfy all the performance needs of electric vehicle drives,
this type of machine has some additional loss penalties
compared to the PM synchronous machine because of rotor
copper loss. Besides conservation of energy, which is of
paramount importance in the EV drive system, extensive
analysis indicates that the life cycle cost of the IPM machine
drive system is generally lower than that with an induction
motor, not only for EV but for general industrial applications
also. An IPM machine can be operated near unity power factor
(unlike an induction machine), except at high speed and low
torque where the power factor becomes low (leading) because
of excessive counter emf.
The high performance requirements of the IPM machine drive system for EV application demand a considerable amount of control complexity, and this is the subject of discussion in this paper. Fortunately, microcomputer technology and computer-aided control system design techniques have advanced tremendously in recent years, and advanced control laws are being implemented easily in real time that could not be done before. The paper will first review the control principles of the IPM machine, which include the constant-torque and constant-power regions. Then, after reviewing the salient features of the simulation language SIMNON, the drive system simulation is described. The hardware and software design features of the distributed microcomputer-based control are then described. Finally, laboratory tests that verify the simulation results are discussed.

II. DESCRIPTION OF CONTROL SYSTEM

The complete drive control system of the IPM synchronous machine is described in [1]. It will be briefly reviewed here for completeness of the paper. Fig. 1 shows the simplified schematic of the drive system power circuit. The traction battery (204-V nominal) shown on the left is the lead-acid type

© 1988 IEEE. Reprinted, with permission, from IEEE Transactions on Industrial Electronics, Vol. 35, No. 4, Nov. 1988.

[Fig. 1 (graphics not reproduced): dc battery source and dc link capacitor feeding a PWM transistor inverter, which drives the IPM machine; an absolute shaft position encoder on the machine shaft; inverter/motor controls operating on the torque command, phase current feedback, and rotor angle feedback.]

Fig. 1. Simplified schematic of the drive system power circuit.

and supplies power to the PWM transistor inverter. The inverter generates a variable-frequency, variable-voltage (or current) power supply for the IPM machine. The machine shaft power is transmitted to the drive axle through a two-speed transmission. The drive system operates in all four quadrants, and regenerative braking energy is easily absorbed by the battery. The machine shaft has an analog resolver type absolute position encoder that permits the drive system to be controlled in the "brushless dc machine" mode. The drive system has essentially two different modes of operation. In the constant-torque region, the inverter is current-controlled in the PWM mode so that the desired flux-torque relationship can be maintained. In an IPM machine, the flux can be controlled by stator injected reactive current, which can be lagging (magnetizing) or leading (demagnetizing). As the inverter saturates at higher speed, the current control is lost, and then the drive system enters into the field-weakening constant-power region. With this condition, the inverter generates a six-step square-wave voltage, which is phase-shifted to control the developed torque. As the machine speed increases in the constant-power mode, the induced voltage increases proportionally with speed, thus demanding more leading reactive current to balance with the constant stator voltage. The inverter/motor controls are shown as a block in Fig. 1, where transistor base drive signals are generated from the operator torque command and feedback signals.

A simplified control block diagram of the drive system in the constant-torque region is shown in Fig. 2. The core drive system in this region is current-controlled by using the hysteresis-band (bang-bang) PWM principle. The vector or field-oriented control principle is used to enhance the system transient performance. The IPM synchronous machine can be considered as somewhat analogous to a wound-field synchronous machine where the "field current" is controlled from the stator side. Therefore, in vector control, the direct axis has been aligned to the stator flux [2], [3], [16] instead of the magnet flux.


In such a control mode, the in-phase or active component of the stator current can be controlled to control the developed torque, whereas the quadrature or reactive component of the current can be controlled to control the stator flux. In Fig. 2, the operator commanded torque is controlled by the closed loop, and the torque component of current (I*T) is generated by the torque loop. The drive system incorporates a flux control loop to prevent flux drift due to parameter variation. The command flux (ψ*s) is programmed with torque (T*e) to optimize the core loss so that the overall drive efficiency is improved. The flux is essentially controlled in the feed-forward manner with the help of the current program as shown, except that the incremental ΔI*M from the flux loop supplements the current program output. The current signals I*T and I*M are processed through the overlay current control loops (Fig. 3), and the output current signals in the synchronously rotating reference frame are then vector rotated to transform into stationary frame phase current commands for the inverter current-controller.

All the essential feedback signals for the control system, as shown in the feedback signal processing block, are estimated with precision. These signals include torque (Te), stator flux (ψs), torque angle (cos δ, sin δ), rotor position (cos θe, sin θe), and rotor temperature for magnet flux compensation. The detailed description of feedback signal processing can be found in [1]. Basically, the d- and q-axis components of stator flux are described as functions of the magnet flux and the stator d-q axis currents in the rotating frame. The relations are derived by extensive modeling and laboratory calibration, where parameter saturation and cross-coupling effects have been taken into consideration. The torque and torque angle are estimated by the equations

Te = Kt(ψds iqs - ψqs ids)    (1)

sin δ = ψqs/ψs    (2)

[Fig. 2 (graphics not reproduced): the feedback signal processing block uses the stator temperature and machine signals to synthesize cos(θe + δ), sin(θe + δ) and cos θe, sin θe, and performs torque (Te) estimation, torque angle (δ) estimation, stator flux (ψs) estimation, temperature compensation of the magnet flux, and vector rotation.]

Fig. 2. Simplified control block diagram of the drive system in constant torque region (PWM mode).

[Fig. 3 (graphics not reproduced): overlay current control loops with PI compensators, a current coordinate shifter driven by cos δ, sin δ, a phase shifter, and a vector rotator driven by cos(θe + δ), sin(θe + δ); the switch SW selects square-wave operation with B = 0.]

Fig. 3. Overlay active and reactive current control loops with forward vector rotation.

ψs = √(ψds² + ψqs²)    (3)

where ψds, ψqs are the stator d-q axis flux components (rotating frame) and ids, iqs are the stator d-q axis current components (rotating frame).
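Equations (1)-(3) can be exercised numerically as below; Kt and the flux and current values are arbitrary illustrative numbers, not machine data from the paper.

```python
import math

def torque_and_angle(psi_ds, psi_qs, i_qs, i_ds, Kt):
    """Estimate torque and torque angle from rotating-frame flux and current,
    following the form of equations (1)-(3)."""
    Te = Kt * (psi_ds * i_qs - psi_qs * i_ds)    # eq. (1): developed torque
    psi_s = math.sqrt(psi_ds**2 + psi_qs**2)     # eq. (3): stator flux magnitude
    sin_delta = psi_qs / psi_s                   # eq. (2): torque angle
    return Te, psi_s, sin_delta

# Hypothetical operating point
Te, psi_s, sin_delta = torque_and_angle(0.9, 0.3, 50.0, -10.0, Kt=1.5)
print(Te, psi_s, sin_delta)
```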
It was mentioned before that the NeFeB magnet flux has some negative temperature sensitivity that should be corrected with the help of magnet temperature information. A simplified single-time-constant dynamic thermal model is solved to compute the magnet temperature approximately from the stator temperature.
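The single-time-constant thermal model mentioned above amounts to a first-order lag from stator temperature to magnet temperature; a discrete sketch with made-up temperatures and time constants:

```python
def magnet_temp_step(t_mag, t_stator, ts, tau):
    """One Euler step of the first-order thermal model
    dTmag/dt = (Tstator - Tmag)/tau, with sample time ts."""
    return t_mag + (ts / tau) * (t_stator - t_mag)

t_mag, t_stator = 20.0, 80.0       # deg C, illustrative values only
for _ in range(100):
    t_mag = magnet_temp_step(t_mag, t_stator, ts=1.0, tau=10.0)
print(t_mag)  # approaches the stator temperature
```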
Fig. 3 shows the overlay active and reactive current control loops with forward vector rotation. These loops permit vector control to be effective in partial saturation of the current-controller (quasi-PWM) and help smooth the transition between the PWM and square-wave modes. The operation of the loops can be considered as redundant in normal PWM operation. A small amount of coupling is introduced [5] in vector control by the loops that tends to slow down the response, but this can be ignored because of the high loop gains. The current coordinate shifter converts the d-q current components to IM and IT by the relations

IM = iqs cos δ - ids sin δ    (4)

IT = iqs sin δ + ids cos δ.    (5)
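The coordinate shift of (4) and (5) is a plane rotation, so it preserves the magnitude of the current vector; a small check with arbitrary sample values:

```python
import math

def coordinate_shift(i_qs, i_ds, delta):
    """Rotate rotating-frame currents by the torque angle delta, per (4) and (5)."""
    IM = i_qs * math.cos(delta) - i_ds * math.sin(delta)   # eq. (4)
    IT = i_qs * math.sin(delta) + i_ds * math.cos(delta)   # eq. (5)
    return IM, IT

i_qs, i_ds, delta = 40.0, -15.0, math.radians(25.0)
IM, IT = coordinate_shift(i_qs, i_ds, delta)
# rotation preserves magnitude: IM^2 + IT^2 equals i_qs^2 + i_ds^2
print(IM, IT)
```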

The loops compare the respective command and feedback currents and generate outputs through the PI compensators. The PI control assures matching of command and feedback currents as long as current control remains effective. The loop outputs I′T and I′M are vector rotated by the unit vector signals cos(θe + δ) and sin(θe + δ) such that I′T and I′M are aligned to the phase voltage Vs and the stator flux ψs, respectively. In normal PWM mode, the loop outputs remain identical to the respective command signals. But, as speed increases in the constant-torque region, the current-controller enters into the quasi-PWM mode due to increasing counter emf. With this condition, the loop outputs become higher than the respective command inputs while assuring matching between the command and feedback signals. As speed increases, the number of chops in the current-controller decreases, and eventually, at square-wave output voltage, the loop outputs I′T and I′M saturate to the clamped values A and B, respectively. Then, the control of the overlay loops is completely lost, and the switch is thrown to the "SW" position, as shown. The drive system then enters into the constant-power region with square-wave impressed voltage, and the control block diagram shown in Fig. 4 becomes valid. It should be mentioned here that efficiency considerations dictated that the drive system should operate in square-wave mode in the constant-power region; otherwise, vector control, which gives better transient response (but demands the PWM operation mode), could have been implemented. The structure change in Fig. 4 for the PWM control mode is shown by the two switches. The torque loop error generates the sin δ* command through a PI compensator, which is then converted into the torque angle command δ* through a look-up table. The δ* angle is then added with the rotor position angle θe to generate the unit vector signals cos(θe + δ*) and sin(θe + δ*). These signals permit phase shift angle (δ) control of the machine input voltage by the same vector rotator and current-controller as described before. In fact, the control principle is essentially the same as shown in Fig. 3 with the switch in the "SW" position and considering cos δ and sin δ as the command signals. Since the magnitude of A is very high and B = 0, the vector rotated signals can be expressed as

i*a = v*ao = A cos(θe + δ*)    (6)

i*b = v*bo = A cos(θe + δ* - 120°)    (7)

i*c = v*co = A cos(θe + δ* + 120°)    (8)

where v*ao, v*bo, and v*co are the respective phase voltage commands with respect to the hypothetical battery center point.
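Equations (6)-(8) can be sketched as a three-phase command generator; A, θe, and δ* below are arbitrary sample values. A balanced set of this form always sums to zero.

```python
import math

def phase_commands(A, theta_e, delta_star):
    """Three-phase command signals per (6)-(8), displaced by 120 degrees."""
    ia = A * math.cos(theta_e + delta_star)                   # eq. (6)
    ib = A * math.cos(theta_e + delta_star - 2*math.pi/3)     # eq. (7)
    ic = A * math.cos(theta_e + delta_star + 2*math.pi/3)     # eq. (8)
    return ia, ib, ic

ia, ib, ic = phase_commands(A=100.0, theta_e=1.2, delta_star=0.4)
print(ia + ib + ic)  # a balanced three-phase set sums to ~0
```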
Note that the steep sides of the current commands will force the current-controller to switch only at the edges of the half-cycle, thus generating square voltage waves. The above equations indicate that the applied phase voltage will be aligned at an angle δ with the respective induced voltage. Fig. 4 indicates an alternate δ* angle control loop where the three-phase square command current waves are fabricated directly from the θe + δ* angle. This control is implemented at higher speed (forward direction only), where a small computation sampling time is needed for the desired torque resolution. Although the flux control loop is inactive in the square-wave mode, the loop error, as shown, helps in the transition to the PWM mode, which is explained later.

The transition between PWM and square-wave modes is required to be fast and smooth under all conditions of operation of the drive system. The transition performance is especially demanding if it overlaps with gear shifting. An UP or DOWN shifting request placed independently by the higher level vehicle control computer will cause a fast speed change in the machine, and therefore the control response should be fast compared to the rate of speed change. The transition is designed such that if gear shifting is requested during transition, it will be inhibited until transition is completed successfully. However, transition should be successful if initiated during gear shifting. Fig. 5 shows the sequence diagram for the transition, which also indicates the criteria for transitions and the corresponding actions. The transition from the PWM to square-wave mode is initiated when the current-controller is near saturation, which is indicated by the transistor base drive pulse transition counts in two successive fundamental frequency cycles. As this condition is detected, sin δ* control is activated with the initial value updated by computation, and then the switch in Fig. 3 is transferred to the "SW" position. For successful operation, the control requires that the polarity of A be sensitive to the direction of machine rotation (+A for forward rotation). Once the system is transitioned to the square-wave mode, a delay time is added to settle the transients, and then the look-up table control method is activated (in forward direction only). The criterion for the square-wave to PWM mode transition is determined by the flux loop error as indicated in Fig. 4. As the error decreases and eventually becomes negative, the PWM mode is activated by enabling the overlay current and flux control loops. Note that a transition may occur at constant torque due to speed variation, at constant speed due to torque variation, or due to battery voltage variation at the same operating point on the torque-speed plane.

III. DRIVE SYSTEM SIMULATION

Computer-aided control system design tools are playing increasingly important roles in the design of power electronic and drive systems. These tools are becoming simple, economical to use, and more user-friendly day by day. A complex, newly developed control system can be conveniently designed and simulated on a computer to verify the feasibility of the control laws. The control system design parameters can be

[Fig. 4 (graphics not reproduced): torque command input, a resolver with R/D converter supplying θe, and a feedback signal processing block performing torque (Te) estimation, stator flux (ψs) estimation, and temperature compensation of the magnet flux using the stator temperature.]

Fig. 4. Simplified control block diagram of the drive system in constant power region (square-wave mode).

[Fig. 5 sequence diagram (partially recoverable): PWM to square-wave transition, on base drive pulse transitions N < 12 in 2 cycles: load I′T = +A for +ωr, = -A for -ωr; load I′M = 0. Square-wave to PWM transition, on the flux loop error: enable overlay loops with initial IT and IM; enable table mode if ωr is positive (else stay in VR mode); enable PWM mode.]

Fig. 5. Sequence diagram for PWM-square wave transitions.
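The transition criteria of Fig. 5 can be paraphrased as a small mode selector; the function below is a simplified sketch of the decision logic only (the delay time, initial-value loading, and gear-shift interlock of the real controller are omitted).

```python
def next_mode(mode, pulse_transitions, flux_loop_error):
    """Select PWM or square-wave ("SQUARE") mode.

    PWM -> SQUARE when the current-controller nears saturation, indicated by
    fewer than 12 base drive pulse transitions in two fundamental cycles.
    SQUARE -> PWM when the flux loop error becomes negative.
    """
    if mode == "PWM" and pulse_transitions < 12:
        return "SQUARE"
    if mode == "SQUARE" and flux_loop_error < 0.0:
        return "PWM"
    return mode

print(next_mode("PWM", 8, 0.5))   # near saturation, so switch to square-wave
```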

iterated on simulation until the static and dynamic performances become optimal. Besides, the harmonics and the fault performance of the system can be studied in considerable detail. The simulation approach is often time-saving and economical and has less risk of damage than the trial-and-error method of breadboard design. However, it should be noted that the simulation performance of a system can be only as good as its model description, and therefore, this approach should be considered for preliminary study of a system. An approximate model simulation with a breadboard test is usually the desirable approach because an accurate model description of a physical system is often very involved.

A. Review of the Simulation Language SIMNON

SIMNON is a popular simulation language among a number of computer-aided design tools that have become available recently [17], [19]. This language has been used in the present drive system simulation, and therefore its salient features will be briefly reviewed. SIMNON is a command-driven, interactive program for simulation of dynamical systems that can be described by linear/nonlinear ordinary differential and difference equations. The commands, for example, can change parameters of the model, perform simulation, graphically plot results on a terminal, and modify the model. With the macro


facility, the user can construct a command string. The compiler is included in the program and works in parallel with an editor. This enables the user to correct the erroneous lines of the program immediately.
For SIMNON simulation, a large system is normally resolved into a number of subprograms. These subprograms are then interconnected by input and output signals through a connecting routine. The SIMNON programs can be connected with specially formatted FORTRAN files. SIMNON offers a special advantage for a microcomputer-controlled system. Here, the physical process, which is normally a continuous system, can be modeled by differential equations, whereas the controller, which is a discrete time system, can be described by difference equations. All the system descriptions are in state space form. Table I shows the general structure of a SIMNON program for a continuous system. The structure of a discrete time system follows a similar pattern and is illustrated later. The program starts with a heading that defines the type of system and gives a filename. The body of the program consists of three sections: declarations, initial section, and assignments, and then terminates with an END statement. The sequence of program statements is arbitrary, and SIMNON automatically sorts them into proper order. The INPUT and OUTPUT statements indicate the signals that link with other programs. The TIME declaration is necessary if a time-related statement appears in the program. The STATE and DER statements relate to state variables and their derivatives, respectively, of the state space equations, and must be declared in the same order. The SORT statement is required only if an INITIAL statement has been included, and it acts as the terminator for that section. The assignment statements are FORTRAN-like, and these include parameter and state initial values. This section may include standard functions, such as SIN(X), SQRT(X), ABS(X), etc. When multiple programs are interconnected by INPUT and OUTPUT signals, a connecting system of the following structure should be used:

CONNECTING SYSTEM <identifier>
Declarations
Connect section
END
For integration of state space equations in a continuous
system, one of the following algorithms can be selected:
HAMPC   Hamming predictor-corrector (default)
RK      Runge-Kutta, variable step size
RKFIX   Runge-Kutta, fixed step size
DAS     Integration routine for stiff systems
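A fixed-step Runge-Kutta routine of the RKFIX kind can be sketched in a few lines; the test system dx/dt = -x is only an illustration, not one of the drive system models.

```python
import math

def rk4_fixed(f, x, t, h, steps):
    """Classical 4th-order Runge-Kutta integration with a fixed step size h."""
    for _ in range(steps):
        k1 = f(t, x)
        k2 = f(t + h/2, x + h/2 * k1)
        k3 = f(t + h/2, x + h/2 * k2)
        k4 = f(t + h, x + h * k3)
        x += (h/6) * (k1 + 2*k2 + 2*k3 + k4)
        t += h
    return x

# dx/dt = -x, x(0) = 1: the exact solution gives x(1) = exp(-1)
x1 = rk4_fixed(lambda t, x: -x, 1.0, 0.0, 0.01, 100)
print(x1)
```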

Once the SIMNON program for the entire system is written, a typical string of commands as follows can be exercised:

> SYST X Y Z          Compiles the system containing files X, Y, Z
> EDIT X              Changes the program
> STORE A B C         Stores the variables
> ALGOR <algorithm>   Selects the algorithm
> SIMU 0 T            Simulates the system for interval T
> ASHOW A             Plots the stored variable A with automatic scaling

TABLE I
GENERAL STRUCTURE OF A SIMNON PROGRAM

CONTINUOUS SYSTEM <system identifier>
INPUT <simple variable>
OUTPUT <simple variable>
STATE <simple variable>
DER <simple variable>
TIME <simple variable>
INITIAL
  Computation of initial values for state variables
  Computation of parameters
SORT
  [Computation of auxiliary variables]
  [Computation of output variables]
  [Computation of derivatives]
  [Parameter assignments]
  [Initial value assignments]
END

B. Drive System Simulation in SIMNON

The complete drive system, including the inverter and the machine, was simulated on the computer using the VAX-based SIMNON program. The purpose of the simulation is to verify the complex control algorithms, design the controller parameters, and study the static and dynamic performances of the system before building the laboratory breadboard. In fact, once the initial simulation phase was completed, the iteration of simulation and laboratory tests went hand-in-hand whenever the test results were not up to expectation. It may be of interest to mention here that the simulation also included the study of dc link harmonics and the fault performance of the inverter-machine system, but these aspects will not be described here.

Fig. 6 shows the simulation block diagram of the drive system, where each functional block can be identified from Figs. 2 and 4. A SIMNON program is written for each functional block with the program name as indicated, and then all the blocks are interconnected with the I/O signals using the connecting system CON. The nature of the system (continuous or discrete time) is indicated in each block. The discrete time systems use the actual sampling times that are used for microcomputer implementation. Thus, the design of sampling times in multitasking microcomputer control could be verified by simulation. The PWM and square-wave control modes were simulated independently using the common program modules as indicated, i.e., the simulation does not incorporate the sequence diagram of Fig. 5. SIMNON has some limitations in looping and sequencing operations, and therefore further study is needed to simulate the sequencing control. In Fig. 6, the basic simulation functions are

1) Controller transfer functions-converted to difference equations in state space form
2) Flux and current programs-described by segmented straight lines
3) Algebraic relations
4) Standard functions
5) Inverter-described by ideal on-off switches
6) Machine-described by differential equations in state space form
Table II illustrates the simulation program for the machine (the machine rotor has negligible damping, and therefore the rotor equivalent circuits are considered open). It is developed in the format described in Table I. The comments in each statement make the program self-explanatory. The program inputs the voltages (from the program CC; see Table III) to the synchronously rotating frame equivalent circuits and solves the stator currents using the following sets of equations:

Machine equations

vqs = Rs iqs + (Xqs/ωb)(diqs/dt) + (ωe/ωb)(Xds ids + ψf)    (11)

vds = Rs ids + (Xds/ωb)(dids/dt) - (ωe/ωb)Xqs iqs    (12)

dθe/dt = ωe    (13)

Vector rotation equations

iqs^s = iqs cos θe + ids sin θe    (14)

ids^s = -iqs sin θe + ids cos θe    (15)

ia = iqs^s    (16)

ib = -(iqs^s + √3 ids^s)/2    (17)

ic = -ia - ib    (18)

where the superscript s denotes the stationary frame, ψf is the machine induced voltage at base speed ωb (in radians per second), and P is the number of poles. All other quantities are given in standard notation [3]. The machine parameters are given in the lower part of the table.

[Fig. 6 (graphics not reproduced): simulation block diagram with blocks for torque control, flux control, overlay control loops, vector rotation with the inverter and phase-shift current control, the IPM machine, and feedback signal processing.]

Fig. 6. Simulation block diagram of the drive system.

TABLE II
SIMULATION PROGRAM FOR THE MACHINE

CONTINUOUS SYSTEM IPMM
"IPM MACHINE MODEL IN SYNC. REF. FRAME (INERTIA LOAD)
INPUT VQSE VDSE                          "MODULE INPUT SIGNALS
OUTPUT IA IB IC TE TEM WE X3 X4 IQS1 IDS1
TIME T                                   "IN SECONDS
STATE IQS IDS W TH
DER DIQS DIDS DW DTH                     "DERIVATIVE OF STATES
DIQS = (WB/XQS)*(VQSE-RS*IQS-WE*XDS*IDS/WB-WE*EFF/WB)
DIDS = (WB/XDS)*(VDSE+WE*XQS*IQS/WB-RS*IDS)
FDSEP = IDS*XDS                          "D-AXIS ARMATURE REACTION
FQSE = IQS*XQS
TE = (3/WB)*((FDSEP+EFF)*IQS-FQSE*IDS)   "IN N.M
DW = (2/J)*(TE-TL)                       "SPEED EQUATION
DTH = W
WE = W
THE = MOD(TH,6.2831)                     "ROTOR ANGLE, 0-360 DEG.
X3 = COS(THE)
X4 = SIN(THE)
IQSS = IQS*X3+IDS*X4                     "STA. FRAME Q-CURRENT
IDSS = -IQS*X4+IDS*X3
IA = IQSS                                "PHASE A CURRENT
IB = -(IDSS*SQRT(3)+IQSS)/2
IC = -IA-IB
TEM = TE*.738                            "TORQUE IN LB. FT.
IQS1 = IQS                               "ROTATING FRAME Q-CURRENT
IDS1 = IDS
WB:710.48                                "BASE FREQUENCY (RAD./S.)
XQS:0.16                                 "MACHINE PARAMETERS AT WB
RS:0.00443
XDS:0.103
EFF:57.4                                 "MAG. FLUX AT WB (VOLTS)
J:1.2                                    "INERTIA
W:100                                    "INITIAL SPEED
TH:0                                     "INITIAL ANGLE
END

Table III illustrates the simulation program for the current controller (CC), which is described as a discrete time system with a sampling time of 0.1 ms. In the laboratory breadboard, the hysteresis-band current-controller has been designed by using dedicated hardware. The format of a discrete time system is similar to that of a continuous system except that the statements STATE, NEW, TSAMP, and TS characterize the description of difference equations. In the program, the command currents IAC, IBC, and ICC are compared with the feedback currents IA, IB, and IC, respectively, to generate the current loop errors as shown. The state of the inverter switches is generated by comparing the current error with the hysteresis band HB. The inverter output voltages in the rotating frame are then generated by the following equations [3]:

vas = VB NA    (19)

vbs = VB NB    (20)

vcs = VB NC    (21)

vqs^s = (2/3)vas - (1/3)vbs - (1/3)vcs    (22)

vds^s = -(1/√3)vbs + (1/√3)vcs    (23)

vqs = vqs^s cos θe - vds^s sin θe    (24)

vds = vqs^s sin θe + vds^s cos θe    (25)

where VB is the battery voltage and NA, NB, NC are the new states of the inverter phase legs; all other variables are in standard symbols. The inverter starts with the initial state shown in the table.

The simulation program of the whole drive system was built

TABLE III
SIMULATION PROGRAM FOR THE CURRENT CONTROLLER

DISCRETE SYSTEM CC
"HYSTERESIS-BAND CURRENT-CONTROLLED PWM INVERTER
INPUT VB IAC IBC ICC IA IB IC X3 X4
OUTPUT VQSE VDSE
TIME T                                "IN SECONDS
STATE A B C                           "STATE OF A PHASE LEG: 1 OR 0
NEW NA NB NC                          "NEW STATE
TSAMP TS                              "SAMPLING INSTANT
TS = T+0.1E-3
IAE = IAC-IA                          "CURRENT LOOP ERROR
IBE = IBC-IB
ICE = ICC-IC
NA = IF IAE>HB THEN 1 ELSE IF IAE<-HB THEN 0 ELSE A
NB = IF IBE>HB THEN 1 ELSE IF IBE<-HB THEN 0 ELSE B
NC = IF ICE>HB THEN 1 ELSE IF ICE<-HB THEN 0 ELSE C
VQSS = VB*(NA*2-NB-NC)/3              "STA. FRAME Q-VOLTAGE
VDSS = VB*(NC-NB)/SQRT(3)
VQSE = VQSS*X3-VDSS*X4                "ROTATING FRAME Q-VOLTAGE
VDSE = VQSS*X4+VDSS*X3
HB: 30                                "HYSTERESIS BAND
A: 1                                  "INITIAL STATE DEFINITION
B: 1
C: 0
END
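The phase-leg decision and stationary-frame voltage reconstruction in the CC program can be mirrored in a short sketch; VB and the hysteresis band below are sample values.

```python
import math

def phase_leg(error, previous, hb):
    """Hysteresis-band comparator for one inverter phase leg (1 = upper device on)."""
    if error > hb:
        return 1
    if error < -hb:
        return 0
    return previous        # inside the band: hold the previous switch state

def inverter_voltages(vb, na, nb, nc):
    """Stationary-frame q and d axis voltages from the three phase-leg states."""
    vqss = vb * (2*na - nb - nc) / 3
    vdss = vb * (nc - nb) / math.sqrt(3)
    return vqss, vdss

na = phase_leg(40.0, 0, hb=30.0)      # error above the band, so switch high
vqss, vdss = inverter_voltages(204.0, na, 0, 0)
print(na, vqss, vdss)
```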

up step by step starting with the inner core drive elements.
Fig. 7 shows the typical simulation command and feedback
current waves in the PWM mode. The large sampling time of
the simulation often causes the current to exceed the 20-A band
of the command wave as evident in the figure. pig. 8 shows
the typical close loop torque response in PWM mode with a 25
lb ft step command. The ripple in the estimated torque was
found to be higher than that of the shaft output.
IV. MICROCOMPUTER CONTROL
Since the advent of microcomputers in the early 1970's, the
technology has gone through a dynamic evolution in the last
one and a half decades. Microcomputers are available today
with large word-size, high computation speed, and large
functional integration, and this trend will continue in the
future. Super microcomputers, based on the same principle as
the super computer (such as CRAY 2) where parallel
processors add to the processing speed, look very promising
and will add tremendous capability for real time control of
systems in the future. The control system under consideration
uses state-of-the-art microcomputers and their hardware and
software design features are described as follows:

Fig. 8. Closed-loop torque response in PWM mode (command torque and estimated feedback torque versus time).

A. Hardware Design
The microcomputer-based control hardware uses two Texas
Instruments TMS32010 digital signal processors (DSP) and
one Intel-8097 (generic name 8096) microcontroller. The key
features of these devices are given in Tables IV and V,
respectively. Both are 16-bit high-performance microcomputers and are ideally suited for real-time control applications.
The 16 × 16-bit dedicated parallel multiplier on the DSP chip,
which multiplies in 200 ns, permits very time-critical I/O signal
processing (including vector rotation) in the drive control
system. The TMS32010 DSP chip was selected over the
alternate DSP chips based on performance benchmarks,
military-spec availability, and excellent hardware/software
development support. Although the DSP chips are extremely
fast and allow software implementation of high-speed control
functions, they do not provide general-purpose hardware
interfaces that allow simple connections to standard I/O

TABLE IV
KEY FEATURES OF THE DIGITAL SIGNAL PROCESSOR TMS32010

• 160-ns instruction cycle
• 144-word on-chip data RAM
• ROMless version - TMS32010
• 1.5K-word on-chip program ROM - TMS320M10
• External memory expansion to a total of 4K words at full speed
• 16-bit instruction/data word
• 32-bit ALU/accumulator
• 16 × 16-bit multiply in 160 ns
• 0 to 15-bit barrel shifter
• Eight input and eight output channels
• 16-bit bidirectional data bus with 50-megabits-per-second transfer rate
• Interrupt with full context save
• Signed two's-complement fixed-point arithmetic
• NMOS technology
• Single 5-V supply
• Two versions available:
  TMS32010-20 ... 20.5-MHz clock
  TMS32010-25 ... 25.0-MHz clock

TABLE V
KEY FEATURES OF THE INTEL 8096 MICROCOMPUTER

• 8K-byte on-chip ROM
• 232-byte register space (RAM)
• 10-bit, eight-channel A/D converter
• Five 8-bit I/O ports
• Full-duplex serial port
• High-speed pulse I/O
• Pulse-width-modulated output
• Eight interrupt sources
• Four 16-bit software timers and two 16-bit hardware timers
• Watchdog timer
• Hardware signed and unsigned multiply/divide

Fig. 9. Simplified block diagram of controller hardware.

devices. Furthermore, implementation of functions that do not
require high-speed processing becomes cumbersome because
of small stack size and limited program and data memory
spaces. The Intel-8097 microcontroller, which incorporates
the bulk of control functions, overcomes the above problems.
Besides an expansive instruction set, it has a high level of
functional integration.
Fig. 9 shows the simplified controller hardware architecture. The two signal processors have the same core hardware
design and each is tailored to its specific tasks via the
respective I/O devices. The input signal processor (ISP) is
interfaced to A/D converters for acquiring the machine
current signals, whereas the output signal processor (OSP) has
D/A converters to supply reference current waves to the
current-controller. The resolver-to-digital (R/D) converter
provides 10-bit (0.352° resolution) shaft angle (θe) information up to the maximum tracking rate of 20 400 rpm. All
interprocessor communications are accomplished with 16-bit
wide, 16-location FIFO (first-in, first-out) registers. A key goal
in the DSP-based I/O hardware design is to use the full
potential of the processors by minimizing the software
overhead required to perform I/O.
The Intel-8097 consists of a powerful CPU tightly coupled
with program and data memory, along with several I/O features,
all integrated into a single chip. The 8097 incorporates a
10-bit unipolar (0-5 V) A/D converter and an 8-channel
analog multiplexer on the same chip. This converter is used
for acquisition of signals required for drive system sequencing
and in-line monitoring functions.
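For such monitoring functions, each 10-bit conversion result has to be scaled back to engineering units. A one-line sketch of that scaling (the resolution follows from the stated 0-5 V range and 2^10 codes; the helper name and range check are ours):

```python
def adc_counts_to_volts(counts, vref=5.0, bits=10):
    """Convert a unipolar A/D result to volts (1 LSB = vref / 2**bits)."""
    if not 0 <= counts < 2 ** bits:
        raise ValueError("counts out of range for a %d-bit converter" % bits)
    return counts * vref / (2 ** bits)
```

One LSB is 5/1024 ≈ 4.88 mV, which sets the floor on how finely the sequencing and monitoring signals can be resolved.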

B. Software Design
The distribution of control functions among the three
microcomputers and their processing rates were determined by
system analysis. The processing rate, i.e., the sampling time
interval of each task, was verified by SIMNON simulation.
Obviously, the control functions that require high sampling
rates (5-30 kHz) are executed by the signal processors,
whereas the less time-critical functions are executed in the


Fig. 10. Simplified structure of TMS32010 software (output DSP). A task handler, entered from reset or the 30-µs interrupt, dispatches: phase shifting (PWM), cos θe and sin θe synthesis (square-wave mode), forward vector rotation, current clamp, 8096 FIFO communication, overlay current loops (PWM), current coordinate shifting (PWM), current command generation, communication with the ISP, and diagnostics and test modes.

8097 microcomputer. The high throughput capability of the
DSP's is utilized for performing transformations from the rotating
to the stationary reference frame and vice versa, regulation of the
active and reactive currents in the PWM mode, and generation
of the transistor switching commands in the square-wave
mode. The table look-up capability of the TMS320 permits
easy synthesis of the sin θe and cos θe functions from the input θe
signal. The ability to make decisions in software to reconfigure the control schemes in real time provides a great
advantage over a dedicated hardware approach. Additionally,
diagnostic functions are incorporated to ease the development
process.
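The table look-up synthesis mentioned above can be sketched in a few lines. The fragment below builds a sine table in Q15 fixed point (the scaled-integer format typical of TMS320 code) and indexes it with a 10-bit angle word such as the R/D converter output; the table length, names, and quarter-table cosine trick are our assumptions, not details taken from the paper.

```python
import math

TABLE_BITS = 10                       # matches a 10-bit angle word
TABLE_LEN = 1 << TABLE_BITS
Q15 = 32767                           # Q15 full scale

# Precomputed sine table, one entry per angle code (program-ROM style)
SIN_TABLE = [round(Q15 * math.sin(2 * math.pi * k / TABLE_LEN))
             for k in range(TABLE_LEN)]

def sin_q15(angle_code):
    """Q15 sine of a 10-bit angle code (0..1023 maps to 0..360 degrees)."""
    return SIN_TABLE[angle_code & (TABLE_LEN - 1)]

def cos_q15(angle_code):
    """Cosine via a 90-degree (quarter-table) index offset."""
    return SIN_TABLE[(angle_code + TABLE_LEN // 4) & (TABLE_LEN - 1)]
```

The masking with `TABLE_LEN - 1` makes the angle wrap modulo 360°, which is exactly the behavior a free-running shaft angle needs.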
Software for all three microcomputers is written in
assembly language (ASM-96, XASM) using scaled integer
arithmetic. An Intel Series IV development system and an Intel
SBE-96 board were used for the development of the 8096
software system. The software for the TMS32010 signal
processing systems was developed with the VAX-based cross-assembler and bench tested with the TI TMS32010 simulator.
Real-time testing of the DSP software was performed on the TI
EVM-32010 evaluation module boards.
A simplified structure chart of the output DSP is shown in
Fig. 10, which also indicates the key functions under each task
and the task processing intervals. The interrupt input to the
signal processor is connected to a 30-µs pulse train that serves
to set the basic sampling rate (33 kHz). Upon receipt of the
interrupt, the return address of the interrupted program is
saved on the processor stack, interrupts are disabled, and
control is passed to the task handler. Since the TMS32010
stack is only four words deep, another logical stack in data RAM
is utilized to save the status of key registers. The TASK 1 (30
µs) functions are executed, and a counter used to detect whether TASK 3 is
ready is decremented. The state of the BIO input is polled to
determine if the 8096 processor is requesting an interaction. If
so, the interrupt is enabled and TASK 2 is started. TASK 2
either loads (ISP) or unloads (OSP) the FIFO registers that are
interfaced to the 8096 computer system and serves to
synchronize the inter-processor communications. If no inter-



action is requested, or when TASK 2 is complete, the TASK 3
counter is tested to determine if TASK 3 should be executed.
Finally, whether or not TASK 3 is run, the status of the
registers is restored and execution of the interrupted program
is resumed. Task priorities are established by the sequence in
which the task handler schedules them (TASK 1 highest,
TASK 3 lowest). A fourth, lowest-priority task called the BACKGROUND task, which never finishes (it loops), serves to occupy the
CPU when all the essential tasks are completed.
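The scheduling discipline just described (TASK 1 on every interrupt, TASK 2 on demand from the 8096, TASK 3 on a down-counter, background otherwise) can be mimicked in a few lines of Python. This is a behavioural sketch of the priority scheme only, with invented names and an illustrative divide-by-8 for TASK 3; it is not the TMS32010 code.

```python
class TaskHandler:
    """Behavioural model of the interrupt-driven task dispatcher."""

    def __init__(self, task3_divisor=8):
        self.task3_divisor = task3_divisor
        self.task3_counter = task3_divisor
        self.counts = {"task1": 0, "task2": 0, "task3": 0}

    def on_interrupt(self, bio_request=False):
        """One 30-us tick: TASK 1 always, TASK 2 if the 8096 asks,
        TASK 3 when its down-counter expires."""
        self.counts["task1"] += 1           # highest priority, every tick
        self.task3_counter -= 1             # count down toward TASK 3
        if bio_request:                     # BIO pin polled: 8096 wants service
            self.counts["task2"] += 1
        if self.task3_counter == 0:         # lowest priority, 1-in-N ticks
            self.counts["task3"] += 1
            self.task3_counter = self.task3_divisor
```

Driving the model for 80 ticks with no BIO requests gives 80 TASK 1 executions and 10 TASK 3 executions, showing how a single periodic interrupt fans out into several task rates.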
The ISP software structure is similar to Fig. 10, and both
signal processors share the same operating system design,
except that the OSP is structured differently depending on the
mode of control. In other words, the ISP executes the same set
of software routines from power-up to power-down, whereas
the OSP software is configured by the 8096 to allow operation
in, and transitions between, the PWM and square-wave modes
of control. The two DSP's also share a common diagnostic I/O
routine and data RAM initialization scheme.
The 8096 computer system is primarily responsible for
estimating and regulating the torque and flux of the IPM
machine. Inputs to the estimators are obtained from the input
signal processor, and outputs from the regulators are transformed into three-phase current references by the output signal
processor. Additional 8096 microcomputer system functions
include: vehicle control microcomputer interface, start-up/shut-down sequencing, in-line monitoring functions, PWM <-->
square-wave mode transitions, and diagnostics.
An operating system similar to the one used for the DSP
systems serves to schedule the tasks at fixed sampling rates.
The 8096 software timer interrupt is programmed to generate
the basic 2-ms clock ticks, and software counters are maintained in RAM to generate the additional sampling intervals.
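The counter technique is easy to illustrate: from a single 2-ms timer interrupt, down-counters in RAM derive the slower sampling intervals. A sketch follows; the 10-ms and 100-ms rates are illustrative choices of ours, not figures from the paper.

```python
BASE_TICK_MS = 2                 # basic software-timer interrupt period

class RateDivider:
    """Derive slower task rates from the 2-ms base tick with RAM counters."""

    def __init__(self, periods_ms):
        # one down-counter per derived task period
        self.counters = {p: p // BASE_TICK_MS for p in periods_ms}
        self.runs = {p: 0 for p in periods_ms}

    def tick(self):
        """Called from the 2-ms timer interrupt."""
        for period in self.counters:
            self.counters[period] -= 1
            if self.counters[period] == 0:    # interval elapsed
                self.runs[period] += 1        # run the task at this rate
                self.counters[period] = period // BASE_TICK_MS
```

Over 200 ms of ticks, a 10-ms task runs 20 times and a 100-ms task twice, with no extra hardware timers needed.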
The 8096 architecture differs from most computers in that
there are no general-purpose registers; any internal RAM
location can serve as an operand in instructions. In order to
maintain compatibility with the PLM-96 language and
provide a more conventional environment, four 16-bit RAM
locations are defined as working registers. The statuses of

Fig. 11. Simplified structure of Intel-8096 software. The tasks include: vehicle controls input, signal processor input, component flux estimation, flux temperature correction, flux calculation, torque calculation, torque loop, flux loop, signal processor output, D/A converter output, fault checks, drive sequencing, current program, flux program, vehicle controls output, report routine, and square-root and arc-sine utility routines.

these registers are preserved and restored every time a task is
performed. Once again, task busy flags are used to prevent
stack overrun.
Fig. 11 shows the simplified structure chart of the 8097
software, which also indicates the functions under the various
tasks and the task sampling intervals. Every 2 ms a new set of
inputs is obtained (input functions) and used for the
computation of the machine fluxes and torque. These estimates are regulated to match the commanded values, and the
data are output (output functions). Sequencing logic serves to
select the appropriate control algorithm for either the PWM or
square-wave mode, and open-loop modes are incorporated for
debugging. The input routines, feedback calculations, and
output functions remain the same in all modes of control. The
software also permits various feedback loop configurations so
that the system can be debugged systematically, starting with
the core drive elements. The configurations can be summarized as follows:
Mode 0   Open all the loops with a = 0
Mode 1   Open all the loops and release a
Mode 2   Close overlay current loops and initialize torque loop integrator
Mode 3   Close torque loop and use current program
Mode 4   Close torque loop and get loop gains from A/D channels
Mode 5   Initialize flux loop integrator
Mode 6   Close all PWM loops
Mode 7   Open vector rotator square-wave mode loop
Mode 8   Open table look-up square-wave mode loop
Mode 9   Close vector rotator square-wave mode loop
Mode 10  Close table look-up square-wave mode loop
Mode 11  Close all PWM loops and evaluate transition to square-wave
Mode 12  PWM --> square-wave mode transition
Mode 13  Square-wave --> PWM transition

V. DRIVE SYSTEM TESTS
The complete drive system with the microcomputer controller was thoroughly tested in the laboratory on a dynamometer, and
its performance was found to be excellent. The tests also showed
general correlation with the simulation results. The 70-hp, four-pole, star-connected IPM machine under test was custom
designed using an NdFeB (Crumax 30A) magnet in segments.
The key machine parameters are included in the simulation
program shown in Table II. The machine has a base or corner-point speed of 3394 rpm, a crossover speed (the speed at which
the machine counter-emf balances the fundamental-frequency
square-wave voltage) of 5044 rpm, and a maximum speed of
13 750 rpm. The battery voltage varied from 135 to 265 V,
corresponding to worst-case motoring and regeneration,
respectively. The inverter consists of three phase-leg modules,
where each Darlington transistor was rated for 500 A, 500 V.
Each Darlington transistor in turn consists of three matched
component transistors in parallel, each of 200-A rating. The dynamometer used for the tests could be operated in constant (but
programmable) speed or inertia mode. The test set-up includes
a computer-based data acquisition and analysis system [18],
where steady-state waveforms can be captured and drive
performances, such as efficiencies, power factor, various
losses, etc., can be calculated and displayed on a video
terminal.
Once the drive system was simulated successfully and the
controller hardware and software were debugged, the system
was ready for extensive laboratory tests on the dynamometer.
A careful test procedure was formulated so that the task
would be smooth and time efficient. The microcomputer controller permitted various test modes where, starting with
the inner core drive system, the outer loops could be added in
steps and thoroughly tested. Initially, all the tests were
performed on the dynamometer in constant speed mode, then


Fig. 12. Waves in PWM forward motoring mode (Te = 25 lb·ft, 1000
rpm). Top: Command and feedback currents (50 A/div). Bottom: Rotor
position (180°/div).

the inertia mode was exercised, and finally the transitions as
shown in Fig. 5 were tested.
Fig. 12 shows the typical command and feedback phase
current waves (top) in the PWM mode when the dynamometer
was operating at constant speed. The rotor position (θe)
obtained from the R/D converter is also shown at the bottom.
The angle θe = 0 corresponds to alignment of the magnet north pole
with the stator phase a-axis. The figure indicates that the
current phasor leads the magnet flux by an obtuse angle. Fig.
13 shows the typical phase voltage (with respect to the battery
center-tap) and phase current waves in square-wave mode.
The current slightly leads the voltage wave, and the inverter
switching at each edge of the square-wave is evident. As the
speed increases at constant torque, the phase lead increases
because of increasing machine counter-emf. Fig. 14 shows the
four-quadrant operation of the drive system with the dynamometer in inertia mode. The system starts at zero speed in
the forward direction with a constant motoring torque as
shown. As the speed increases beyond a critical value,
transition occurs smoothly from PWM to square-wave mode.
As the torque command is reversed, the drive system enters
into regeneration with immediate transition to PWM mode
because of increase of the battery voltage. The system then
goes through zero speed and eventually speed builds up in the
reverse direction. The performance in the reverse direction is
essentially symmetrical to that in the forward direction.
VI. CONCLUSION
An advanced digital control of a drive system that uses an
interior-magnet synchronous machine with a neodymium-iron-boron permanent magnet has been described. The drive
system operated with full performance in the constant-torque
region as well as in the field-weakening constant-power
region. The drive system has been designed with closed-loop
torque control for electric vehicle application, but the control
can easily be extended to other industrial applications as well.
The drive system has been extensively simulated using the
VAX-based simulation language SIMNON. The salient features of SIMNON have been reviewed, and then the drive
simulation has been described. A simulation study of the
complete drive system was found to be extremely valuable to


Fig. 13. Phase voltage and current waves in square-wave motoring
mode (Te = 25 lb·ft, 5004 rpm, 50 A/div, V = 166 V).

Fig. 14. Four-quadrant operation of the drive system (torque: 2 lb·ft/div;
speed: 200 rpm/div).

verify feasibility of the control laws and to design the
controller parameters.
The drive system uses a distributed, microcomputer-based
control system in which a state-of-the-art Intel-8096 microcontroller and Texas Instruments TMS32010 digital signal processors
are used. The 8096 is essentially responsible for feedback
control and signal estimation functions, whereas the 32010
processors perform the time-critical I/O signal processing
functions. The hardware and software design features of the
controller have been discussed.
A 70-hp inverter-fed drive system has been extensively
tested in the laboratory with the help of a dynamometer, and
experimental results show good correlation with the simulation
results. The test results, including four-quadrant operation on a
dynamometer that was programmed in the inertia mode, have
been discussed. The performance in transition between the
PWM and square-wave modes, with and without gear shifting,
was found to be excellent. The results of this study will help to
promote IPM synchronous machine drives for various industrial applications in the future.
ACKNOWLEDGMENT

The authors are grateful to R. D. King for his help during
the experimentation phase of the project.
REFERENCES

[1] B. K. Bose, "A high performance inverter-fed drive system of an
interior permanent magnet synchronous machine," in Conf. Rec.
IEEE/IAS Annu. Meeting, pp. 269-276, 1987.
[2] T. Nakano, H. Ohsawa, and K. Endoh, "A high performance
cycloconverter fed synchronous machine drive system," in Conf. Rec.
IEEE/IAS Int. Sem. Power Converter Conf., pp. 334-341, 1982.
[3] B. K. Bose, Power Electronics and AC Drives. Englewood Cliffs,
NJ: Prentice-Hall, 1986.
[4] B. K. Bose, Ed., Microcomputer Control of Power Electronics and
Drives. New York: IEEE Press, 1987.
[5] K. Hasse, "Control of cycloconverters for feeding asynchronous
machines," in Proc. IFAC Symp. Control in Power Electronics and
Electrical Drives, pp. 537-545, 1977.
[6] K. J. Astrom, A SIMNON Tutorial, Dept. of Automatic Control, Lund
Institute of Technology, 1982.
[7] K. J. Astrom and B. Wittenmark, Computer Controlled Systems.
Englewood Cliffs, NJ: Prentice-Hall, 1984.
[8] H. A. Spang, "The federated computer-aided control design system,"
Proc. IEEE, vol. 72, pp. 1724-1731, Dec. 1984.
[9] H. Elmqvist, "SIMNON: an interactive simulation program for
nonlinear systems," Proc. Simulation, 1977.
[10] D. K. Frederick, SIMNON Reference Manual, General Electric Co.,
Schenectady, NY, 1982.
[11] B. K. Bose, "Sliding mode control of induction motors," in IEEE/IAS
Annu. Meeting Conf. Rec., pp. 479-486, 1985.
[12] P. Katz, Digital Control Using Microprocessors. Englewood Cliffs,
NJ: Prentice-Hall, 1981.
[13] Intel Microcontroller Handbook, 1984.
[14] TMS32010 User's Guide, Texas Instruments, 1983.
[15] B. K. Bose, "Technology trends in microcomputer control of electrical
machines," IEEE Trans. Ind. Electron., vol. 35, pp. 160-177, Feb.
1988.
[16] K. H. Bayer, H. Waldmann, and M. Weibelzahl, "Field-oriented
close-loop control of a synchronous machine with the new transvector
control system," Siemens Review, vol. 39, pp. 220-223, 1972.
[17] D. K. Frederick, "Computer packages for the simulation and design of
control systems," Arab School on Science and Technology, 4th
Summer Session, Bloudan, Syria, Sept. 1981.
[18] A. B. Plunkett, G. B. Kliman, and M. J. Boyle, "Digital techniques in
the evaluation of high-efficiency induction motors for inverter drives,"
IEEE Trans. Ind. Appl., vol. IA-21, pp. 456-463, Mar./Apr. 1985.
[19] D. K. Frederick and C. J. Herget, "The extended list of control
software," ELCS, U.S. Edition, no. 2, June 1986.



DSP-BASED ADAPTIVE CONTROL OF A BRUSHLESS MOTOR
Nobuyuki Matsui and Hironori Ohashi
Department of Electrical and Computer Engineering
Nagoya Institute of Technology
Gokiso, Showa, Nagoya 466, JAPAN.

ABSTRACT
The paper presents the software control of a brushless DC
motor with parameter identification. Not only the speed and
current controls but also a real-time identification of the
motor parameters can be implemented in software on the
digital signal processor TMS320C25. A unique current control
is performed according to an instantaneous voltage equation
of the d-q model of the motor. In this system, the control
accuracy depends on the motor parameters, so parameter
identification with regard to the armature inductance and emf
constant is necessary. The identification algorithm has been
verified by both simulations and experiments. The control
program, including the parameter identification, is 2.5K words,
and the processing time is 99 µs.
INTRODUCTION
The inverter drive of AC motors has many advantages over
the conventional DC motor drive, and high performance drives
have increased in popularity as AC servo motors. The required
control characteristics are becoming more demanding, so the
introduction of modern control theories and high performance
processors is actively pursued to meet the requirements. In
particular, by using a high performance processor, it is
possible not only to implement feedback or feedforward
control but also to realize various compensating capabilities.
It is well known that precise current control is the key
technology in realizing high performance AC drives such as
brushless motors and vector controlled induction motors.
Consequently, the problem of obtaining precise current control
has received much attention. It is required that the motor
current always coincide with the sinusoidal current command
under steady state and transient conditions. In the existing
current controls using the voltage-fed inverter, the current
hysteresis controlled PWM and the sub-harmonic PWM shown
in Fig.1 have been widely used(2)(3). In the current hysteresis
controlled PWM, the sinusoidal current is maintained within
the hysteresis band, but the voltage waveform is not
necessarily desirable and the switching frequency of the
inverter changes according to the operating condition of the
motor. On the other hand, the sub-harmonic PWM has no
problem associated with the voltage waveform and the
switching frequency, but the steady state phase lag is a
problem for high frequency operation.
In this paper, a new current control for brushless motors
using the digital signal processor TMS320C25 is presented. In
the system, the DSP performs not only the current control but
also the necessary control processing such as the rotor
position sensing, the speed calculation, and the calculation of
the torque command through the speed control loop. The
current control is performed by selecting PWM patterns for
the inverter according to the on-line calculation of the ideal
voltage. The calculation is based on the d-q axis voltage
equation of the brushless motor. Two PWM strategies are
explained and compared. This control leads to good
coincidence of the motor current with the current command
under steady state and transient conditions.
As the control presented in this paper is based on the
voltage equation of the motor, the control error depends on the
parameters used in the controller. The dependency of the
current control error on the parameters is investigated, and the
identification using the reference model is explained. The
simulations and experiments show the effectiveness of the
proposed identification.

CURRENT CONTROL ALGORITHM(1)
Fig.2 shows the well known equivalent d-q axis model of
the brushless motor. The voltage equation is obtained as
follows;

    v = ( R + pL ) i + e    --------------(1)

In equation (1), v, i and e represent, respectively, the applied
voltage, current and induced emf vectors, which are defined by
the following relations.

    v = [ vd, vq ]T,  i = [ id, iq ]T    ----(2)
    e = ω [ L iq, Ke - L id ]T    ----------(3)

In equation (1), R and L are the armature resistance and
inductance, and Ke (= M If) is the emf constant, respectively.

Fig.1. Conventional current controls.
(a) current hysteresis controlled PWM, (b) sub-harmonic PWM.

© 1988 IEEE. Reprinted, with permission, from Conference Record of the
1988 IEEE Industry Applications Society. These parameters


Fig.2. Equivalent d-q model of the brushless motor.

are reduced to the equivalent d-q axis model.
To perform the software control, the difference equation
corresponding to equation (1) is necessary. The instantaneous
applied voltage, current and induced emf are approximated by
the corresponding average values during the sampling interval,
that is;

    v = V(n),  i = I(n),  e = E(n)    -----(4)

and the derivative term in equation (1) is approximated by the
term given below;

    pi = [ I(n+1) - I(n) ] / T    -------(5)

where T is the sampling period. Using these approximations,
the difference equations below are obtained.

    V(n) = R I(n) + (L/T) [ I(n+1) - I(n) ] + E(n)    -------------(6)
    E(n) = ω(n) [ L Iq(n), Ke - L Id(n) ]T    -------------(7)

In Fig.2, it is noted that the torque of the brushless motor is
proportional to the q-axis current and the d-axis current does
not contribute to the production of torque. Therefore, the
d-axis current is controlled to be zero. Now, the ideal voltage
V*(n) is defined. This voltage is applied between the n-th and
(n+1)-th samplings to make the motor current I(n) equal to the
current command I*(n+1) at the next sampling. Replacing
I(n+1) in equation (6) with the current command I*(n+1), the
equation for the ideal voltage is obtained below.

    V*(n) = R I(n) + (L/T) [ I*(n+1) - I(n) ] + E(n)    ----------(8)
    E(n) = ω(n) [ L iq(n), Ke - L id(n) ]T    ---------(9)

The current at the n-th sampling in equation (8) can be
predicted by the voltage and current prior to the n-th sampling
using equations (6) and (7). Inserting this prediction into
equations (8) and (9), together with the approximate relations
below;

    R I(n) ≈ R I(n-1)    -------(10)
    (1/2) [ E(n) + E(n-1) ] ≈ E(n)    ----(11)

the ideal applied voltage is, therefore, given by the following
equations.

    V*(n) = 2 R I(n-1) - V(n-1) + 2 E(n)
            + (L/T) [ I*(n+1) - I(n-1) ]    --(12)
    E(n) ≈ ω(n-1) [ L iq(n-1)
            + T { vq(n-1) - Ke ω(n-1) - R iq(n-1) }, Ke ]T    --(13)

These equations show that the ideal applied voltage between
the n-th and (n+1)-th samplings can be calculated using the
voltage and current prior to the n-th sampling. This means
that the ideal applied voltage can be obtained by on-line
calculations before the n-th sampling and, therefore, it can be
applied by the PWM control of the inverter without sampling
delay.

PWM STRATEGY
The ideal voltage given by equations (12) and (13) is a
space vector in the d-q model. Therefore, it should be
transformed to the three-phase voltage to drive the inverter.
The transformed voltage vector V3*(n) and the eight possible
voltage vectors of the three-phase voltage-fed inverter are
shown in Fig.3. It is noted that there are six voltage vectors
with amplitude (2/3)Vdc and two without amplitude (called
zero vectors hereafter). Two PWM methods, vector selection
PWM and average voltage PWM, are proposed in order to
realize V3*(n) with the inverter.

Fig.3. Two PWM strategies.
(a) vector selection PWM, (b) average voltage PWM.

(1) Vector selection PWM: In the vector selection PWM, one
of the eight vectors is selected during the sampling period. For
selecting the vectors, the space is divided into regions [0]-[6]
as shown in Fig.3(a), and the vector is selected depending on
the position of V3*(n); for example, V(110) may be chosen
when V3*(n) exists in region [2], and the zero vector may be
chosen for V3*(n) in region [0]. As a result, the selected
voltage vector differs from the calculated ideal voltage vector
both in amplitude and phase.

(2) Average voltage PWM: In this PWM method, V3*(n) is
synthesized by two adjacent voltage vectors and the zero
vectors(2). For example, V3*(n) in Fig.3(b) can be synthesized
by the combination of V(100), V(110) and the zero vector as
shown in the figure. The time interval for each vector is easily
calculated and is controlled by the interrupt from the internal
timer of the DSP. The method is similar to the conventional
sub-harmonic PWM, but the switching frequency is reduced to
2/3 when the ideal voltage is within the hexagon shown in
Fig.3(b).
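The deadbeat character of equation (8) is easy to check numerically. The sketch below, in our own notation (scalar per-axis form, no PWM quantization or voltage saturation), computes the ideal voltage and then steps the discrete motor model of equation (6) to confirm that the current meets its command at the next sampling.

```python
def ideal_voltage(r, l, t, i_now, i_cmd, emf):
    """Per-axis ideal applied voltage of equation (8):
    V*(n) = R*I(n) + (L/T)*(I*(n+1) - I(n)) + E(n)."""
    return r * i_now + (l / t) * (i_cmd - i_now) + emf

def step_motor(r, l, t, i_now, v, emf):
    """Discrete motor model from equation (6), solved for I(n+1):
    I(n+1) = I(n) + (T/L)*(V(n) - R*I(n) - E(n))."""
    return i_now + (t / l) * (v - r * i_now - emf)
```

Substituting the first expression into the second cancels everything except the command, so under this idealized model the current settles in exactly one sampling; in practice the inverter voltage saturates for large command steps, which is why settling can stretch over several samplings.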

CURRENT CONTROL CHARACTERISTICS
Here, the experimental comparison of the two PWM
strategies is briefly explained. Fig.4 shows the steady state
voltage and current waveforms for the 1.5 kW, 4-pole
brushless motor under the rated current load. Fig.4(a) was
obtained when the inverter was operated by the vector
selection PWM, where the sampling period of the current
control loop was 100 µs. On the other hand, the waveforms in
(b) were obtained for the average voltage PWM, where the
sampling period was 100 µs. It is apparent from the figures
that the current waveform for the average voltage PWM is
better than that of the vector selection PWM. However, it may
be concluded from the experiments that the vector selection
PWM can reduce the acoustic noise compared with the
average voltage PWM.

Fig.4. Steady state waveforms.
(a) vector selection PWM, (b) average voltage PWM.

Fig.5 shows the comparison of the harmonic analysis
between the two PWM methods under the 25 Hz rated current
load when the sampling period is 100 µs. Compared with the
vector selection PWM, the average voltage PWM has a
reduced harmonic content below 5 kHz, thus improving the
current waveforms.

Fig.5. Harmonic analysis of the current.

Fig.6 shows the characteristics of the switching frequency
versus motor speed. As is apparent from the figure, the
switching frequency of the vector selection PWM varies
according to the motor speed and has its maximum near
1000 rpm. When the operating frequency is low in the lower
speed range, the required applied voltage is also low and the
zero vectors are frequently selected in sequence. On the other
hand, the required voltage is high in the high speed range, and
the voltage vectors with an amplitude are frequently selected
in sequence to produce the applied voltage. It is noted that no
switching occurs when the same vector is selected in
succession at every sampling. However, in the intermediate
speed range, the zero vectors and the voltage vectors with an
amplitude are often selected alternately to produce an
intermediate applied voltage near 1000 rpm. This is the reason
why the switching frequency has its maximum near 1000 rpm.
In contrast, the switching frequency for the average voltage
PWM is substantially constant and nearly equal to
(2/3)*(1/2T), where T is the sampling period.

Fig.6. Switching frequency characteristics.

Fig.7 shows the step response of the q-axis current for the
two types of PWM methods when a stepwise change in the
current command is applied. These figures show that there is
no appreciable difference between the two PWM methods in
regard to the transient response. For a small change in the
current command, the motor current settles in one sampling
because the output voltage of the inverter does not saturate.
However, for a large change in the current command, for
example from 2 A to 10 A as shown in the figure, the current
cannot settle in one sampling and 3-4 samplings (corresponding
to 300-400 µs) are required since the inverter voltage
saturates.

Fig.7. Step response of the q-axis current.


Even when the output voltage of the inverter saturates, the settling time is as short as 300 or 400 μs, thus realizing the high speed transient response of the motor current.

EFFECTS OF PARAMETER VARIATION

As stated, the proposed current control is implemented by calculating the ideal voltage based on equation (12), and the control error would increase when the parameters used in the calculation differ from those of the motor. Experiments were done to investigate the variation of the motor parameters due to magnetic saturation and temperature rise. The results showed that there was no appreciable variation in armature inductance even when the motor current was increased up to five times the rated current, whereas the armature resistance increased by about 50 percent. On the other hand, the emf constant increased by about 16 percent due to the temperature rise.

The variation coefficient k is defined for each parameter as follows:

    k = ( R, L, or Ke of the motor ) / ( R, L, or Ke of the controller )        -----(14)

and the control error e is defined as follows:

    e = ( Iqx - Iqo ) / Iqo                                                     -----(15)

where Iqo means the average q-axis current when the motor parameters are coincident with the controller parameters, whereas Iqx is the average q-axis current when there is a parameter disagreement between motor and controller.
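Equations (14) and (15) amount to two one-line ratios; a sketch with made-up numbers for illustration:

```python
def variation_coefficient(motor_param, controller_param):
    """k of equation (14): motor value over controller value."""
    return motor_param / controller_param

def control_error(iqx, iqo):
    """e of equation (15): relative error of the average q-axis current."""
    return (iqx - iqo) / iqo

k = variation_coefficient(7.5e-3, 5e-3)   # motor L is 1.5x the controller L
e = control_error(9.5, 10.0)              # illustrative average currents [A]
print(k, e)                               # 1.5 and -0.05
```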
The effect of parameter variations on the accuracy of the current control characteristics has been investigated by simulation. In simulating the system, the inverter has been treated to perform the PWM strategy explained in the preceding chapter, but the dead time has not been considered. The results obtained for the two kinds of PWM have shown no appreciable difference and, therefore, the results for the average voltage PWM with a sampling time of 100 μs are shown hereafter.

Steady State Characteristics

Fig.8 shows the current control error for the same current command when the parameters R, L and Ke of the motor vary while the parameters used in the DSP-based controller are constant. Fig.8(a) gives the result for low speed operation and (b) for the rated speed. The variation coefficient k was varied up to 2.0, as shown in the figure. The conclusion is summarized as follows.

(a) Armature Resistance: The control error is hardly affected by the variation of the armature resistance, regardless of the motor speed.

(b) Armature Inductance: The effect of the armature inductance variation on the control error is somewhat different depending on the speed, but not serious in the range of k larger than 0.5, as shown in the figure. Below k=0.5, the fluctuation of the motor current increases because the inductance of the motor is small compared with that used in the controller, so the approximations used in the development of the current control algorithm are assumed to be ineffective.

(c) EMF Constant (= torque constant): The effect of the variation of the emf constant is also somewhat different depending on the motor speed, and the error increases with an increase of the motor speed. Due to the limitation of the inverter voltage, the error increases with the variation coefficient in the range of k larger than 1.5 when the motor speed is high.
Fig.8. Control error for parameter variation. (a) 500 rpm, (b) 2000 rpm.

Fig.9. Performance index. (a) Saturated voltage command, (b) unsaturated voltage command.

Transient Characteristics

The q-axis current response is used to estimate the effects of parameter variation on the transient characteristics. The performance index given by equation (16) is introduced for a stepwise change of the current command:

    J = Σ [ iq*(i) - iq(i) ]²                                                   -----(16)

For calculating equation (16), the summation was made from i=0 to i=99, because an oscillatory response might be obtained in some cases where there were disagreements in parameters; even in that case, the oscillation would settle within 100 samplings. Fig.9 shows the performance index, where in (a) the inverter voltage is saturated for the large change of the current command, and in (b) the change of the current command is small so that the inverter voltage does not saturate.
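Equation (16), truncated at 100 samplings as described, might be computed as follows; the simulated response here is a generic geometric settling curve for illustration, not the paper's motor model:

```python
# J = sum over i = 0..99 of (iq*(i) - iq(i))**2, equation (16).
def performance_index(iq_ref, iq):
    return sum((r - y) ** 2 for r, y in zip(iq_ref[:100], iq[:100]))

# Illustrative response: geometric approach to a 10 A step command.
iq_ref = [10.0] * 100
iq, y = [], 0.0
for _ in range(100):
    y += 0.5 * (10.0 - y)      # halves the remaining error each sampling
    iq.append(y)

J = performance_index(iq_ref, iq)
print(J)   # about 33.33
```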

It is concluded from these estimations that the variation of the armature resistance has no appreciable effect on the transient characteristics, whereas variations of the armature inductance and emf constant have significant effects on the transient characteristics.

IDENTIFICATION ALGORITHM

Parameter identification in regard to the armature inductance and the emf constant is performed by using the reference model of the brushless motor. The mathematical reference model is obtained by replacing n with n-2 in equations (6) and (7) and solving for X(n-1):

    X(n-1) = (T/L) [ V(n-2) - E(n-2) - R X(n-2) ] + X(n-2)                      -----(17)

    E(n-2) = ω(n-2) [ L iq(n-2) ; Ke - L id(n-2) ]                              -----(18)

where the approximate relation E(n-1) ≈ E(n-2) was used. Taking X(n-1) in the equation above as the reference output X̂(n-1) and replacing L and Ke with the corresponding parameters to be identified, L̂(n-2) and K̂e(n-2), the reference model is finally given by the following relation:

    X̂(n-1) = [ T/L̂(n-2) ] [ V(n-2) - R X(n-2) ]
             + ω(n-2) T [ -iq(n-2) ; id(n-2) - K̂e(n-2)/L̂(n-2) ] + X(n-2)       -----(19)

Then, the difference ΔX(n-1) is obtained as follows by using equations (17) and (19):

    ΔX(n-1) = [ Ke/L - K̂e(n-2)/L̂(n-2) ] ω(n-2) T
              + T [ 1/L̂(n-2) - 1/L ] [ V(n-2) - R X(n-2) ]                     -----(20)

Equation (20) means that ΔX(n-1) would be zero when the identified parameters coincide with the motor parameters. When the parameters are not identified correctly, ΔX(n-1) ≠ 0 results.

According to the previous explanation, the parameters can be identified by processing the current error ΔX given by equation (20) through the PI controller, taking the d-axis component for L̂(n-1) and the q-axis component for K̂e(n-1). Here, the d-axis component of equation (20) is

    Δid(n-1) = Ad(n-2) [ 1/L̂(n-2) - 1/L ]                                      -----(21)

    Ad(n-2) = T [ Vd(n-2) - R id(n-2) ]                                         -----(22)

It is apparent from the equations above that L̂ would converge to the motor inductance L when the terms [ 1/L̂(n-2) - 1/L ] and Δid(n-1) have the same sign. For the emf constant, on the other hand, the relation for identification is obtained from the q-axis component of equation (20) under the assumption that the armature inductance has already been identified as L = L̂(n-1) using equations (21) and (22), and is given below:

    Δiq(n-1) = Aq(n-2) [ Ke - K̂e(n-2) ]                                        -----(23)

    Aq(n-2) = ω(n-2) [ T/L̂(n-1) ]                                              -----(24)

K̂e would converge when [ Ke - K̂e(n-2) ] and Δiq(n-1) have the same sign. From the discussion above, the identification algorithm is, therefore, given as in equation (25):

    [ L̂(n-1) ; K̂e(n-1) ] = IKp Sgn[ ΔA(n-2) ] ΔX(n-1)
                            + IKi Σ_k^{n-1} Sgn[ ΔA(k-1) ] ΔX(k)                -----(25)

where

    Sgn(x) = +1 (x > 0), 0 (x = 0), -1 (x < 0)                                  -----(26)

    ΔA(k) = [ Ad(k), 0 ; 0, Aq(k) ]                                             -----(27)

and IKp and IKi are the gain matrices of the proportional-integral type compensator, given as follows:

    IKp = [ Kpd, 0 ; 0, Kpq ],   IKi = [ Kid, 0 ; 0, Kiq ]                      -----(28)
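The structure of equations (21), (25) and (26) — push the estimate with the sign-weighted model error — can be mimicked in a scalar sketch. This is not the paper's full vector algorithm: it adapts a single inductance estimate L̂ from the error of equation (21), using only the proportional part of the PI law, and all numerical values are illustrative.

```python
# Scalar sketch of the sign-driven identification:
# d_id = Ad*(1/L_hat - 1/L) carries the sign of the inductance error,
# so the update Kp*Sgn(Ad)*d_id drives L_hat toward L (integral term
# of the PI law omitted for brevity; numbers are illustrative).
def sgn(x):
    return (x > 0) - (x < 0)

L_true = 5e-3                 # motor inductance [H]
L_hat = 10e-3                 # initial estimate, 2x too large
Kp = 1e-6                     # illustrative proportional gain
Ad = 1.0                      # Ad(n-2) = T*(Vd - R*id), taken positive here

for _ in range(500):
    d_id = Ad * (1.0 / L_hat - 1.0 / L_true)   # error of equation (21)
    L_hat += Kp * sgn(Ad) * d_id               # proportional part of (25)

print(L_hat)   # converges toward L_true
```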

Fig.10. Adaptive current control system.

Fig.10 shows the block diagram of the adaptive current control system including the parameter identification. In the figure, the current control algorithm, the reference model and the identification algorithm correspond, respectively, to equations (12) and (13), equation (19), and equations (25) through (28). To verify the identification algorithm, simulations were made. The results of the simulations have proved that the current control error can be greatly reduced by adding the ability of parameter identification to the preceding current control algorithm.

Fig.13. Current control characteristics. (□: without identification, true parameters; △: without identification, false parameters.)

EXPERIMENTAL RESULTS

Fig.11 is the control system configuration of the prototype of the 1.5 kW brushless motor with a 10X resolver. The configuration of the controller is quite simple because the TMS320C25 can perform all the necessary controls, such as the position and speed calculations, identification and current control, by software. The external electronics are necessary only for the resolver, the current detection and the base amplifier. The position information is received from the 10X resolver every 800 μs and is interpolated every 100 μs by using the speed information to obtain the intermediate position information for the current control. The 16-bit speed information is obtained using the difference of position divided by the sampling period. The motor current is detected every 100 μs by a Hall-CT and transformed through a 12-bit A/D converter. The control program was of 2.5K words, and the required processing time was as short as 99 μs.

Fig.11. Control system configuration.

Fig.12 is the convergence characteristics of the parameter identification, where in (a) the armature inductance and emf constant in the DSP-based controller were set 1.5 times as large as those of the motor, while in (b) the parameters used in the controller were set 0.5 times the motor parameters.

Fig.12. Convergence characteristics.

The current control characteristics with and without identification are shown in Fig.13. In the figure, □ corresponds to the case where the parameters in the controller are coincident with those of the motor itself, and △ corresponds to the case where the inductance of the controller is 70 percent of that of the motor and the emf constant is 30 percent of that of the motor.
CONCLUSIONS

In this paper, a new current control scheme for brushless DC motors using a high performance digital signal processor has been described. The system has the feature that the motor parameters used in the controller to determine the voltage command are always identified at every sampling and, therefore, the current control can be attained with high accuracy, regardless of the operating conditions such as the temperature rise. The algorithm has been verified by simulations and experiments.

REFERENCES

(1)T. Takeshita, K. Kameda, H. Ohashi and N. Matsui, "Digital Signal Processor-Based High Speed Current Control of Brushless Motor," Trans. Inst. Elect. Eng. Japan, vol. 106-B, No. 9, Sep. 1986.
(2)A. J. Pollmann, "Software Pulsewidth Modulation for μP Control of AC Drives," IEEE Trans. on Ind. App., vol. IA-22, No. 4, July/August 1986.
(3)P. Enjeti, P. D. Ziogas, J. F. Lindsay and M. H. Rashid, "A New Current Control Scheme for AC Motor Drives," IEEE 1987 IAS Annual Meeting Conf. Record, pp. 202-208.

HIGH PRECISION TORQUE CONTROL OF RELUCTANCE MOTORS

Nobuyuki Matsui, Norihiko Akao and Tomoo Wakino
Department of Electrical and Computer Engineering

Nagoya Institute of Technology
Gokiso, Showa, Nagoya 466, Japan.

ABSTRACT

This paper presents the high precision torque control of the reluctance motor for servo applications. The prototype is a 3-phase, 8-pole reluctance motor driven by a MOSFET inverter. The current control as well as the speed control is performed by software of the digital signal processor, TMS32010. The motor is supplied with sinusoidal current, and two current control methods are proposed. One is based on the vector control principle to achieve linearity between current and torque, and the other is developed to obtain the maximum torque/current ratio. Due to the saliency, the instantaneous torque contains a large amount of ripple component. In the case of the test motor, the ripple torque was as much as 26% of the rated torque under the sinusoidal current drive. The experiment showed the ripple component could be reduced to 6% by superimposing the compensation current component on the current reference.

INTRODUCTION

Recently, research on variable speed control of the reluctance motor has been done as "the switched reluctance motor" all over the world. The reason for this tendency is that the motor is simple in construction and economical compared to the synchronous motor and the induction motor. In addition to that, the unipolar drive of the reluctance motor is possible and, therefore, the converter to drive the motor requires fewer switching devices compared to the inverter. For these reasons, the drive system can be more simple, economical and reliable.

Many papers have been reported on the switched reluctance motor in the past, but their main interests have been focused on the analysis and design of the motor or the drive circuit configurations. There are few papers which discuss the control aspects of the reluctance motor. In most of the controls discussed in the literature, the winding current has a constant amplitude and is supplied to the motor in accordance with the rotor position.

This paper presents the digital signal processor-based control of the reluctance motor which is capable of operation as a servo motor. The controller functions include the computations of the rotor position and the feedback speed, the current control, and the compensation of the torque ripple. The current control, wholly implemented by software of the digital signal processor (DSP), is performed to obtain the linear relationship between current and torque, similar to the concept of the vector controlled induction motor. In addition to that, another current control is proposed to obtain the maximum torque for the given winding current. In either case, the winding current is sinusoidal. However, due to the saliency, the motor produces torque ripple under the sinusoidal current excitation. The current control can also perform the compensation of the torque ripple by superimposing the compensation component on the current command. The amplitude and frequency of the compensation component can be determined from the information of the winding inductance. The complete control system has been constructed and tested, and the test results have been found excellent for a servo motor.

Fig.1. Configuration of test motor. (All dimensions are in mm.)

© 1989 IEEE. Reprinted, with permission, from Conference Record of the 1989 IEEE Industry Applications Society.

BASIC ANALYTICAL MODEL

Fig.1 is the configuration of the test motor, whose construction is the same as the 3-phase variable reluctance type stepping motor. As the first approximation, the flux distribution along the air gap is assumed to be sinusoidal; then, the analytical model for one pole pair of the motor is obtained as in Fig.2. The inductance varies with the rotor position and, therefore, the self inductance is assumed as follows:

Fig.2. Analytical model for one pole pair of test motor.

Fig.3. Equivalent d-q axis model of test motor.

    Lu = Lg0 + Lg2 cos 2θ
    Lv = Lg0 + Lg2 cos ( 2θ + 2π/3 )
    Lw = Lg0 + Lg2 cos ( 2θ - 2π/3 )                                            -----(1)

where

    Lg0 = ( Lmax + Lmin )/2
    Lg2 = ( Lmax - Lmin )/2                                                     -----(2)

The mutual inductance between the stator windings is also assumed as:

    Muv = Mg0 + Mg2 cos ( 2θ - 2π/3 )
    Mvw = Mg0 + Mg2 cos 2θ
    Mwu = Mg0 + Mg2 cos ( 2θ + 2π/3 )                                           -----(3)

where

    Mg0 = ( Mmax + Mmin )/2 = -Lg0/2
    Mg2 = ( Mmax - Mmin )/2 = Lg2

Using these definitions, the voltage equation of the motor is obtained as follows:

    v = R i + p [ L(θ) i ]                                                      -----(4)

where v = [vu, vv, vw]', i = [iu, iv, iw]', and L(θ) is the inductance matrix composed of the self and mutual inductances of equations (1) and (3). Using the well-known d-q axes defined in Fig.2, the voltage equation (4) can be transformed into:

    [ vd ; vq ] = [ R + pLd, -ω Lq ; ω Ld, R + pLq ] [ id ; iq ]                -----(5)

where

    Ld = 3( Lg0 + Lg2 )/2
    Lq = 3( Lg0 - Lg2 )/2                                                       -----(6)

and, from this equation, the analytical model of the reluctance motor is obtained as shown in Fig.3, assuming that the flux distribution is sinusoidal. The torque equation can be obtained from Fig.3 as:

    T = ( Ld - Lq ) id iq                                                       -----(7)

As a result, two torque control methods can be proposed as follows.

(1) Vector Control: Based on the model, the winding current can be controlled in the same way as that of the vector controlled induction motor; that is, the d-axis current is controlled as the exciting component and the q-axis current as the torque component. In this case, the q-axis inductance is generally smaller than the d-axis inductance and, therefore, the q-axis current is chosen as the torque component to achieve the fast response of the torque. As a result, the motor torque can be controlled to be proportional to the q-axis current as follows:

    T = K iq,   K = ( Ld - Lq ) id                                              -----(8)

(2) Maximum Torque Control: For the given winding current iw, the ratio id/iq can be controlled. In this case, the linearity between current and torque cannot be achieved, and the torque is given by the following relation:

    T = (3/2) iw² ( Ld - Lq ) sin 2D                                            -----(9)

where

    tan D = iq/id                                                               -----(10)

Neglecting the magnetic saturation of the motor, the maximum torque is obtained for D = 45 (deg.).
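The two control laws can be compared numerically. The Ld, Lq values below are illustrative, not the test motor's measured inductances; the sketch checks the linearity of equation (8) and that equation (9) peaks at D = 45 degrees:

```python
import math

Ld, Lq = 60e-3, 20e-3                  # illustrative d/q inductances [H]

def torque_vector(id_, iq):
    """Equations (7)/(8): T = (Ld - Lq) * id * iq = K * iq."""
    return (Ld - Lq) * id_ * iq

def torque_angle(iw, D):
    """Equation (9): T = (3/2) * iw**2 * (Ld - Lq) * sin(2D)."""
    return 1.5 * iw ** 2 * (Ld - Lq) * math.sin(2.0 * D)

# Equation (9) is maximized at D = 45 degrees (saturation neglected)
best_D = max(range(91), key=lambda d: torque_angle(1.0, math.radians(d)))
print(best_D)   # 45
```

For fixed id, the vector-control torque is linear in iq with slope K = (Ld - Lq) id, which is the property the steady state curves of Fig.6 verify experimentally.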

CONTROL SYSTEM

Fig.4 shows the control system configuration of the reluctance motor. Unlike many drives of the reluctance motor in the literature, the FET inverter supplies sinusoidal current to the motor. (A simple unipolar drive circuit for the sinusoidal current drive is now under consideration.) The current hysteresis controlled PWM implemented by software was used to supply the sinusoidal current; the current control program was of 1.4K words and the processing time was as short as 34 μs. The rotor position is obtained by an incremental type encoder (1000 ppr, Nikon RX1000-22-1). The output of the encoder is multiplied by four and is transformed into a 12-bit digital quantity in the position detecting circuit. The winding current is detected by a Hall CT (NANA Electronics, 20CA-W) and is also transformed into a 12-bit digital quantity.

Fig.4. Control system configuration. (Test motor: 3-phase, 8-pole, rated voltage 11.6 V.)

The estimation of the motor torque was performed by using the measuring system shown in Fig.5. This system was used to estimate both the steady state torque characteristics and the instantaneous torque characteristics. The steady state torque-speed characteristics can be obtained when the load DC generator is coupled with the axis. On the other hand, the torque ripple can be measured by connecting the stepping motor and the harmonic gear (1:100) to the shaft in place of the load generator. As the step angle of the stepping motor is 0.36 deg/step, the resultant resolution is 0.0036 deg/step and, therefore, it is possible to measure the instantaneous torque with respect to every rotor position by rotating the reluctance motor at very low speed (1.9 rpm).

Fig.5. Torque measuring system. (Stepping motor: UPHS66H-8, Oriental Motor Co.)

TORQUE CONTROL CHARACTERISTICS

The steady state torque-current curves are shown in Fig.6 when the vector control is performed. In the figure, the d-axis current (exciting component) is the parameter, and the dashed line corresponds to the rated current. Within the rated current region, the torque can be controlled to be proportional to the torque current.

Fig.6. Torque-current characteristics for vector control.

Fig.7 shows the relation between torque and the current ratio angle D for the rated winding current. Two calculated curves obtained from equation (9) are shown in the figure, one based on the motor inductances measured by the impedance method and the other by the torque method. The details of the measurement will be explained later. Fig.8 shows the torque-winding current characteristics when the maximum torque control is performed.

Fig.7. Torque and current ratio angle for the rated winding current. (a) Impedance method, (b) torque method, (c) experimental.

In Fig.4, a speed control loop can be added by modifying the control software. Fig.9 shows the step response of the motor speed when the speed command is changed from -900 rpm to 900 rpm. In the figure, the current limit of the winding current is 1 A; (a) was obtained according to the vector control for id=1 A, and (b) according to the maximum torque control.
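The software current-hysteresis PWM mentioned above can be sketched as a bang-bang comparator on a single-phase R-L model; the motor constants, supply voltage and hysteresis band below are illustrative assumptions:

```python
# Software current-hysteresis PWM on a one-phase R-L model:
# switch to -Vdc when the current exceeds (ref + band), back to
# +Vdc when it falls below (ref - band); hold otherwise.
R, L, Vdc = 0.5, 2e-3, 24.0      # illustrative phase model and DC link
dt, band = 1e-6, 0.05            # integration step [s], hysteresis band [A]

def run(i_ref, steps=20000):
    i, v, hist = 0.0, Vdc, []
    for _ in range(steps):
        if i > i_ref + band:
            v = -Vdc
        elif i < i_ref - band:
            v = Vdc
        i += dt * (v - R * i) / L
        hist.append(i)
    return hist

hist = run(1.0)
tail = hist[-5000:]              # after start-up the current hugs the band
print(min(tail), max(tail))
```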

Fig.8. Torque-winding current characteristics for the maximum torque control.

Fig.9. Step response of the motor speed (for speed command change from -900 to 900 rpm). (a) Vector control for id=1 A, (b) maximum torque control.

Fig.10 is the instantaneous output torque characteristics versus rotor position when the vector control is performed for id=1 A. As expected, the torque ripple is notable. It is observed from this figure that the shape of the instantaneous torque curves differs for different torque currents and that the torque ripple exists even when the torque current is zero (this corresponds to the detent torque of the conventional stepping motor).

Fig.10. Instantaneous torque versus rotor position. (a) Rotor position elec. deg. [deg], (b) torque for iq=1[A], (c) torque for iq=0[A].

MEASUREMENT OF INDUCTANCE

(1) Impedance Method: In this method, the winding impedance is measured at every rotor position using the voltmeter, ammeter and wattmeter. The measurement was done at every five mechanical degrees under the 60 Hz commercial supply. It should be noted that the current is distorted at a certain rotor position, which may produce a measurement error. The result is given in Fig.11.

Fig.11. Winding inductance versus rotor position.

(2) Torque Method: As is well known, the developed torque is given by equation (11) when, for example, only the U phase winding is excited by the dc current Iu:

    Tu = ( Iu²/2 ) ∂Lu/∂θ                                                       -----(11)

When the inductance Lu can be expressed as a sinusoidal function of θ, the developed torque is also a sinusoidal function of θ. However, when the inductance Lu contains harmonic components, equation (12) should be used for Lu in (11):

    Lu(θ) = Lg0 + Lg2 Σ_n hn cos 2nθ                                            -----(12)

Table 1. Results of harmonic analysis of inductance.

    (a) Harmonic order               2    3    4    5    6    7    8    9    10
    (b) Iu=0.5[A] (torque method)  3.99 5.34 2.96 0.11 1.64 0.15 0.60 0.04 0.20
    (c) Iu=1.0[A] (torque method)  3.35 5.03 2.78 0.12 1.51 0.23 0.64 0.06 0.21
    (d) Iu=0.5[A] (impedance method) 3.93 5.57 2.11 0.15 0.65  -    -    -    -

    Fundamental amplitude is 100%.

ESTIMATION AND COMPENSATION OF TORQUE RIPPLE

The mutual inductance of the reluctance motor is relatively small compared to the self inductance (in the test motor, it was 1-2% of the self inductance) and, therefore, it can be neglected for the estimation of the developed torque. As a result, the torque equation is given as follows:

    T = Σ_{k=u,v,w} ( ik²/2 ) ∂Lk/∂θ                                            -----(14)

Considering the harmonic components of the inductance, equation (14) can be arranged as follows by substituting equation (12) into equation (14):

    T = -Lg2 Σ_n hn n { iu² sin 2nθ + iv² sin n(2θ + 2π/3)
                        + iw² sin n(2θ - 2π/3) }                                -----(15)

Here, the winding current is approximately related to the d-q axis current as follows:

    [ iu ; iv ; iw ] = [ cos θ, -sin θ ;
                         cos(θ - 2π/3), -sin(θ - 2π/3) ;
                         cos(θ + 2π/3), -sin(θ + 2π/3) ] [ id ; iq ]            -----(16)

Fig.12 shows the results of calculation for iq=0[A] and iq=1[A] under the same excitation id=1[A]. It is noted that the higher harmonic components of torque obtained by the impedance method are omitted due to the inaccuracy of the inductance. Fig.10 is the corresponding experimental result, which shows that the calculated and the experimental values are well in accord.

Fig.12. Calculated instantaneous torque.

The amplitude of the torque ripple was measured for the constant excitation (id=1 A) and the result is shown in Fig.13. From this result, it is confirmed that the amplitude of the torque ripple is nearly proportional to the torque current. Therefore, the torque ripple can be expressed by equation (17), where the first term represents the detent torque and the second term is associated with the torque current:

    ΔT = K0 F0(θ) + K1 iq F1(θ)                                                 -----(17)

In equation (17), K0 and K1 are constants, and F0(θ) and F1(θ) are the torque ripple functions, which can be determined by equation (15) or from Fig.13.

Fig.13. Amplitude of torque ripple versus torque current before and after compensation.

There are two stages to compensate the torque ripple, compensation A and compensation B.

(Compensation A) To compensate the detent torque, the compensation current iq0 defined by the following relation should be supplied to the motor:

    T = K iq0 + K0 F0(θ) = 0                                                    -----(18)

(Compensation B) Once the detent torque has been compensated, the torque is given by equation (19):

    T = K iq0 + K1 F1(θ) iq0 = K iq0 [ 1 + K1 F1(θ)/K ]                         -----(19)

and, therefore, the compensation current iq1 in equation (20) should be supplied in place of iq0 for developing a constant torque independently of the rotor position:

    iq1 = iq0 / [ 1 + K1 F1(θ)/K ] ≈ iq0 [ 1 - K1 F1(θ)/K ]                     -----(20)

Fig.14 shows the result of compensation A. From this figure, it is observed that the detent torque is compensated but the torque ripple due to the torque current still remains. The amplitude of the torque ripple is also shown in Fig.13. Fig.15 is the result of compensation B; the amplitude of the torque ripple is measured and plotted in Fig.13.

Fig.14. Result of torque ripple compensation (Compensation A). (a) Rotor position elec. deg. [deg], (b) torque for iq=1[A], (c) torque for iq=0[A].

The effect of the compensation is largely affected by the estimation of the torque. As explained, the torque is calculated by equation (15), and the accuracy of the inductance is important. The amplitude of the torque ripple has been reduced from 26% to 6% of the rated torque when the inductance obtained by the torque method has been used, whereas it has been reduced only to 10% when the inductance obtained by the impedance method has been used.
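The two compensation stages can be written out directly from equations (17)-(20). K, K0, K1 and the ripple shapes F0, F1 below are illustrative stand-ins for the values the paper obtains from equation (15) or Fig.13:

```python
import math

K = 1.0                            # torque constant of equation (8), illustrative
K0, K1 = 0.05, 0.2                 # ripple constants of equation (17), illustrative
F0 = lambda th: math.sin(6 * th)   # detent-torque shape (assumed)
F1 = lambda th: math.cos(6 * th)   # current-dependent ripple shape (assumed)

def iq_compensated(iq_cmd, th):
    # Compensation A: add -K0*F0/K so the detent term of (18) cancels
    iq0 = iq_cmd - K0 * F0(th) / K
    # Compensation B: divide by (1 + K1*F1/K), the exact form of (20)
    return iq0 / (1.0 + K1 * F1(th) / K)

def torque(iq, th):
    """Developed torque with ripple, equations (8) and (17) combined."""
    return K * iq + K0 * F0(th) + K1 * iq * F1(th)
```

By construction, the compensated command cancels both the detent term and the current-dependent term, so the developed torque equals K times the original command at every rotor position.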

Fig.15. Result of torque ripple compensation (Compensation B). (a) Rotor position elec. deg. [deg], (b) torque for iq=1[A], (c) torque for iq=0[A].

CONCLUSIONS

This paper describes the digital signal processor-based high precision torque control of the reluctance motor with sinusoidal current excitation. Based on the analytical model, two types of torque control are proposed: one is the vector control and the other is the maximum torque control. In the vector control, the developed torque is proportional to the torque current, as in the conventional vector controlled induction motor. In the maximum torque control, the linearity between torque and current is not achieved, but the maximum torque is obtained for the given winding current.

It is well known that the reluctance motor produces a large amount of torque ripple. In the test motor, the amplitude of the torque ripple was as much as 26% of the rated torque. For the estimation of the torque ripple, the accuracy of the winding inductance measurement is very important and, therefore, the measurement is discussed in the paper. Using the results, the torque ripple is estimated and compared to the experimental values. In addition, the compensation of the torque ripple by the current control is proposed. The prototype was tested and the performance was found to be excellent.

REFERENCES

(1)P. J. Lawrenson: Variable-Speed Switched Reluctance Motors; IEE Proc., vol. 127, Pt. B, No. 4, July 1980.
(2)J. W. French: Switched Reluctance Motor Drives for Rail Traction: Relative Assessment; IEE Proc., vol. 131, Pt. B, No. 5, Sep. 1984.
(3)A. Chiba, T. Fukao: A Control Method of Super High Speed Reluctance Motor for Quick Torque Response (in Japanese); Trans. IEE Japan, vol. 107-D, No. 10, Oct. 1987.
(4)B. K. Bose, T. J. Miller, P. M. Szczesny, W. H. Bicknell: Microcomputer Control of Switched Reluctance Motor; IEEE Trans. on Ind. App., vol. IA-22, No. 4, July/August 1986.

High resolution position control under 1 sec of
an induction motor with fully digitized methods

Isao Takahashi

Makoto Iwata

Department of Electrical and Electronic
System Engineering
Nagaoka University of Technology
1603-1 Kamitomioka Nagaoka Japan 940-21

Abstract

The paper proposes a method of high resolution position control using an induction motor drive system. To get high resolution position control, two control methods are combined.

One is ultra-low speed control based on the principles of impulsive torque drives, using a high frequency dither signal which can compensate standstill friction at low speed.

The other is linear control along an optimum sliding line which is decided by the free-run characteristics of the mechanical load. The sliding line enables the improvement of the response and robustness of the system, and the linear control area situated along this line improves the accuracy and stability.

The control circuit is composed of a high resolution position sensor (1296 kpulses/rev.), a controller using a Digital Signal Processor (DSP), and a PWM inverter having optimized PWM switching patterns. The PWM pattern memorized in a ROM is made to generate the impulsive torque. The DSP makes possible simple circuit configurations, short calculation times and a speed-sensorless system. Moreover, it is made to have flexibility and intelligent ability, such as auto-tuning control for parameter variations of the load.

The accuracy of the position control obtained in the experiment is 1/1296000 (rev.), which corresponds to one second of the mechanical angle.

Power Supply Division
Sanken Electric Co., Ltd.
677 Ohnohara, Shimoakasaka, Kawagoe, Japan 356

not applied in these systems because of difficulty to
get the control accuracy.
In the position control
system, the more high resolution is needed the more
affects stand-still frictions at low speed on the
resolution.
This paper proposes a method of high resolution
position control strategies using the induction motor
for improving the above problem.
To reduce effects of
the stand-still friction at low speed control, the
pulsation torque generated by the PWM inverter is
employed for the torque dither signaL
By using
principles of its impulsive torque drive, ultra-low
speed control of the induction motor under 1 rpd(day)
has been experimentally realized. (11
It combines the above ultra-low speed control with
an optimum sliding line which has linear control
regions near along the sliding line.
The control can
make not only robust systems but also stabilized and
high resolution systems.
With the recent advancement of high speed and low
cost microprocessors, it has become possible to replace
a conventional analog control circuit with a digital
control one. The use of microprocessors makes the
circuit simpler and also provides more sophisticated
functions, such as intelligent control with auto tuning
adaptive control. Auto tuning control of the optimum
sliding line for parameter variations of the load is
also proposed in this paper.
In the experiments, a position control accuracy of
1/1,296,000 (rev.), i.e., one second of angle, is
achieved, which has never been realized by the usual
induction motor drive technique.
1. Introduction

Recently, factory automation systems such as
industrial robots and numerically controlled machines
have become highly advanced. Owing to its
maintenance-free operation, the use of an ac servo in
such systems would be most desirable in today's
industrial servo applications. But the complexity and
cost of its control circuit have hindered its
popularization, so the dc servo is still widely applied
to mechanical actuators.
Because of its stronger structure and better
overload endurance, an induction motor is more suitable
for heavily loaded servo drive systems than dc or ac
motors using permanent magnets.
The requirements of high accuracy, quick response,
and high stiffness are indispensable to a highly
advanced servo mechanism. In higher resolution position
control systems, direct-drive servo systems are being
applied in place of servo systems with reduction gears.
But most of these motors are reluctance machines with a
large number of poles to obtain high position
resolution. Therefore, a small size and light weight of
the motor cannot be expected, and a smaller air-gap
construction of the machine is also necessary to obtain
the larger torque.
In spite of the above merits, the induction motor has
not been applied in these systems, for the reason
discussed above.

2. Principles of the high resolution position control

Because of its robustness, sliding mode control has
become increasingly popular. But from the point of view
of high resolution control, the sliding mode control
method is not always suitable because of its large
torque ripples and acoustic noise.
The control presented in this paper differs from the
usual switching-type sliding mode control as follows:
(1) An optimum sliding line
Figure 1 shows an optimum sliding line on a phase
plane. To achieve the mentioned characteristics, the
line is chosen to coincide as closely as possible with
the free-run deceleration characteristic curve at low
speed. In the other speed regions, to minimize the
settling time, the line is set to the maximum
deceleration curve that the drive system can generate.
Accordingly, because of the small torque ripple on
the optimum sliding line near the target position, the
method is not only suitable for high resolution
position control but also keeps torque ripples and
acoustic noise to a minimum.
(2) Impulsive torque drive
In the linear control region, the impulsive torque
is generated by using a PWM torque modulator. In this
region, the torque is proportional to the status error
S determined from the speed ω and the position error θe, as

© 1989 IEEE. Reprinted, with permission, from Conference Record of the
1989 IEEE Industry Applications Society.


Figure 1: Optimum sliding control on a phase plane
Figure 3: Schematic diagram of an ultra-low speed control

gives the impulsive torque to the induction motor. The
absolute value of the output of the PI circuit is pulse
width modulated with an impulsive triangular carrier
wave and generates the run/stop (R/S) signal of the
motor. The carrier frequency used in the experiment is
2.0 kHz.
The output of comparator C1, which detects the
polarity of the PI output, gives the forward/backward
(F/B) signal of the motor, and the output of C2 is the
R/S signal, which specifies the time ratio of the
non-zero and zero voltage vectors of the inverter. The
R/S signal gates a 30 kHz clock signal which drives a
9-bit up/down counter and adjusts the inverter
frequency. The F/B signal is connected to the up/down
control terminal of the counter and controls the
direction of the phase rotation of the inverter. The
output of the counter is connected to the address lines
A0-A8 of a ROM.
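The comparator and counter logic just described can be sketched as follows. This is a minimal model, not the original circuit; the function names and the carrier representation are illustrative assumptions.

```python
# Sketch of the F/B and R/S generation described above (comparators C1
# and C2, carrier PWM, 9-bit up/down counter).  Names and the carrier
# model are illustrative assumptions, not the original hardware.

def fb_rs(pi_output, carrier_value):
    """C1 detects the polarity of the PI output (forward/backward);
    C2 pulse-width modulates |PI output| against the instantaneous
    triangular carrier value (run/stop)."""
    forward = pi_output >= 0.0
    run = abs(pi_output) > carrier_value
    return forward, run

def step_counter(counter, forward, run):
    """R/S gates the 30 kHz clock into the 9-bit up/down counter;
    F/B selects the count direction (phase rotation)."""
    if run:
        counter = (counter + (1 if forward else -1)) % 512  # 9 bits
    return counter
```

With this arrangement, the counter output drives the ROM address lines A0-A8, as in the circuit of figure 4.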
Figure 5 shows the relation of the waveforms of the
impulsive carrier, the PI output, and the motor control
signals, F/B and R/S.
The ROM is programmed to obtain constant V/f control
and least torque ripple. The impulsive torque frequency

Figure 2: Control methods of the linear control region —
(a) relation of status error and torque (for large θe);
(b) relation of position error and linear region width

shown in figure 2(a). The status error S will be
discussed fully in the next section. The saturation
level of the motor torque Ts varies with θe as shown in
figure 2(b). At low speed, the system can have a fairly
large gain while remaining stable. Therefore, to obtain
good accuracy, it is better to use the high gain at low
speed and small position error states.
It is known that a high frequency dither signal can
compensate for a nonlinearity of the control system
such as static friction. [2] This can be realized by
superimposing the high frequency torque generated by
the inverter switching onto the mechanical load. As the
stand-still friction must be canceled in a high
resolution position servo mechanism, the high frequency
impulsive torque drive would be superior to a linearly
controlled one.
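As a toy illustration of this dither effect (an assumption for illustration, not the paper's experiment), consider a load that moves only when the applied torque exceeds the stiction level:

```python
# Toy stick-slip model: a mean torque below the stiction level never
# moves the load, but adding a small high-frequency dither lets it
# move on the half-cycles where the dither adds to the mean torque.
# All numbers here are illustrative placeholders.

def moves(applied_torque, stiction=1.0):
    return abs(applied_torque) > stiction

mean_torque = 0.8          # below stiction on its own
dither = 0.3               # small impulsive dither amplitude

without_dither = [moves(mean_torque) for _ in range(4)]
with_dither = [moves(mean_torque + s * dither) for s in (1, -1, 1, -1)]
```

Averaged over many dither cycles, the load creeps instead of sticking, which is what permits the ultra-low speed control described next.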
Figure 3 shows the schematic diagram of the ultra-low
speed control system using the impulsive torque drive.
By applying a high frequency, small amplitude impulsive
torque slightly larger than the static friction,
ultra-low and smooth speed control can be achieved. As
shown in this figure, the motor speed ωm is measured by
a dc tacho-generator and fed back directly. Comparing
the reference ωm* with the speed ωm, the speed error
ωm* − ωm is minimized by a high-gain PI circuit. By
superimposing the impulsive triangular carrier, the
nonlinear load is linearized and the system becomes
more stable.
Figure 4 shows the inverter control circuit which
Figure 4 shows the inverter control circuit which


Figure 4: System configuration of an inverter controller
(pattern ROM, 2 Kbyte; F/B and R/S inputs; address from counter)
Figure 5: Relation of the PI output, impulsive carriers,
and control signals (forward / stop / backward)

3. Optimum sliding control line

The principles of ultra-low speed control can be
applied to the linear region control in the optimum
sliding control. But it is very difficult to design the
PI circuit in figure 3 to obtain high stiffness and
stability over the whole phase plane.
The simple PI circuit in figure 3 is composed only
of an integrator with constant gain Ki and a
proportional component with constant gain Kp. Since the
output of the integrator corresponds to a position
error θe and the output of the proportional component
is the speed of the motor, the trajectory can be
expressed by a straight line on the phase plane.
If the control is performed perfectly, the trajectory
moves along the switching line:

    Kp·ω + Ki·θe = 0        (1)

Figure 6: Schematic arrangement of PWM switching patterns of the ROM

controlled by the R/S signal specifies the amplitude of
the torque ripple. If the frequency is too low, the
large torque ripple causes a position error; if it is
too high, the system approaches linear control and
becomes unstable.
Figure 6 shows a schematic diagram of the contents
of the ROM, composed of four kinds of optimum switching
patterns. The patterns are set to obtain the minimum
harmonic current at steady state. There are four
switching patterns: run and stop modes for the forward
and backward directions, respectively. The run mode
patterns generate the vectors that follow the circular
locus of the primary flux linkage Ψ1 as closely as
possible with the smallest number of switchings. These
patterns can be read only by driving address line A9 to
the high level.
The zero voltage vector patterns are used to
decrease the voltage and frequency of the output. When
these patterns are accessed, the rotation of the flux
is stopped and the motor torque decreases. By driving
A9 to the low level, the pattern corresponding to the
address of the above switching pattern can be read, and
simultaneously the counter is stopped by closing the
gate.
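The A9 gating can be sketched as follows. The address layout is inferred from the description above; the ROM contents themselves are placeholders.

```python
# Address selection for the 2 Kbyte pattern ROM described above: the
# 9-bit up/down counter supplies A0-A8, and A9 selects between the run
# patterns (A9 high, rotating voltage vectors) and the zero-voltage
# "stop" patterns (A9 low).  The layout is inferred from the text.

def rom_address(counter9, run):
    a9 = 1 if run else 0            # A9 high -> run, A9 low -> stop
    return (a9 << 9) | (counter9 & 0x1FF)
```

In the run half of the ROM the inverter follows the flux locus; in the stop half the zero voltage vectors hold the flux still while the gated counter is also frozen.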
Figure 7 shows experimental results of the ultra-low
speed control characteristics of a conventional 0.75 kW
induction motor. Speed control from 1 rpd to 1500 rpm
at the no-load condition is experimentally obtained.
The speed ripple is under ±0.2 rpd. For forward,
locked, and backward control states, speed drift and
unstable states are not observed even in the loaded
state.

If the trajectory moves along the line, no output
voltage appears at the inverter terminal because there
is no PI output voltage. But in the free-run condition,
the trajectory does not always follow a straight line
such as equation (1).
Assuming that the load torque is composed of the
constant stationary torque TL0 and the damper component
Dω, with moment of inertia J, the state equation of the
motion is

    [θ̇e]   [0    1  ][θe]   [  0  ]
    [ω̇ ] = [0  −D/J ][ω ] + [−1/J ](TL0 + Tm)        (2)

where Tm is the motor torque.
In the case of low speed operation, the value Dω is
small in comparison with TL0, so that equation (2) can
be rewritten as follows:

    θe + (1/2)(TLO/J)ω² = θe0        (ω > 0)        (3)
    θe − (1/2)(TLO/J)ω² = θe0        (ω < 0)        (4)

The status error S is defined along these curves as

    S = C1·θe + C2·ωⁿ        (ω > 0)
    S = C1·θe − C2·ωⁿ        (ω < 0)        (5)

where
    n = 2, C1 = 1, C2 = (1/2)(TLO/J)    at low speed
    n = 1, C1 = 1, C2 = J/D             at high speed

Figure 7: Experimental result of ultra-low speed motor control
(speed from −1 rpd to 1 rpd; t in seconds)
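The status error of equation (5), with its speed-dependent gain schedule, can be sketched as follows. The plant constants and the low/high-speed boundary used below are illustrative assumptions.

```python
# Status error S of equation (5).  T_L0 (stationary load torque),
# J (inertia) and D (damping) are plant constants; w_low, the
# low/high-speed boundary, is an illustrative assumption.

def status_error(theta_e, w, T_L0, J, D, w_low=1.0):
    if abs(w) < w_low:
        n, C1, C2 = 2, 1.0, 0.5 * T_L0 / J   # low speed schedule
    else:
        n, C1, C2 = 1, 1.0, J / D            # high speed schedule
    if w >= 0:
        return C1 * theta_e + C2 * w ** n
    return C1 * theta_e - C2 * w ** n
```

On the optimum sliding line S stays at zero; in the linear region |S| is what gets pulse-width modulated against the triangular carrier.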

The value of S is the status error. On the optimum
sliding line there is no switching, and torque-ripple-free
operation is obtained. In the high speed region,
however, the motor must be operated at the maximum
speed and must generate the maximum braking torque if a
minimum settling time is desired. In the linear region,
according to the value of S in equation (5), the torque
is pulse-width modulated with the impulsive triangular
carrier. This provides not


Figure 8: Schematic diagram of the DSP controller
(TMS32010; position sensor, 1,296,000 pulses/rev.; θa* reference;
F/B and R/S outputs to the switching pattern circuit of figure 4)

only the impulsive torque but also the linear control
along the optimum sliding line. The carrier frequency
used in the experiment is 2.0 kHz.
To compensate for the error caused by variation of
the stand-still load torque, another PI control must
also be applied. The reference of the modulator
corresponding to the PI output is switched to the value
expressed by the following equation:
    U = S + K′(θe) ∫S dt        (6)

where
    K′(θe) : variable gain, a function of θe
    S      : status error in equation (5)

The R/S time ratio data is transformed into the R/S
signal by a presettable timer. The amplitude of the
carrier is modulated by ΔT shown in figure 2(b), and
the PI gain is changed according to the optimum sliding
line. The F/B and R/S signals are applied to the
switching pattern control circuit shown in figure 4,
which drives the PWM inverter with the optimum
switching pattern.
Figure 9 shows the flowchart of the proposed linear
sliding control algorithm of the DSP controller. The
motor speed ω at (k+1)T is estimated under high speed
conditions in the following way.

The second term can be used to compensate a small
disturbance torque, and it acts only within the small
θe region |θe| < 32 (sec.). In the other region, the
integration is stopped and saturated to reduce extra
transient phenomena. The time constant K′(θe) of the
integrator is a function of θe, and its value becomes
larger near θe = 0. For a disturbance torque, the
stronger the integrator works with high gain, the
smaller the maximum position error becomes, and the
higher the response. The first term in equation (5) is
a proportional component to improve stability. [3]
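A minimal sketch of this variable-gain integral action in equation (6); the gain profile, window width, and saturation limit below are assumptions for illustration.

```python
# Variable time-constant PI term of equation (6): the integrator gain
# K'(theta_e) grows toward theta_e = 0, and outside |theta_e| < 32
# arc-seconds the integration is stopped and saturated.  The linear
# gain profile and the limits are illustrative assumptions.

class DisturbanceCompensator:
    def __init__(self, k_max=8.0, window=32.0, sat=100.0):
        self.k_max = k_max        # integrator gain at theta_e = 0
        self.window = window      # active region, arc-seconds
        self.sat = sat            # saturation of the integral term
        self.integral = 0.0

    def update(self, S, theta_e, dt):
        if abs(theta_e) < self.window:
            k = self.k_max * (1.0 - abs(theta_e) / self.window)
            self.integral += k * S * dt
            self.integral = max(-self.sat, min(self.sat, self.integral))
        # outside the window the integral is simply held (stopped)
        return S + self.integral   # U = S + K'(theta_e) * integral of S
```

Freezing and clamping the integral outside the window is what limits the extra transient phenomena mentioned above.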
4. System configuration and software

Figure 8 shows the configuration of the proposed DSP
controller. As shown in this figure, the position θa of
the induction motor is measured by an optical position
sensor (81,000 pulses per revolution), and each pulse
is electrically divided into 16 to obtain a pulse train
of 1,296,000 pulses per revolution, which corresponds
to one second of the mechanical angle.
Comparing θa with the digital reference θa* by a
24-bit up/down counter, the 24-bit position error θe is
applied to the DSP (TMS32010) controller. Inside the
controller, the data is calculated using the upper 16
bits but limited to 20 bits to simplify the
calculation.
In this controller, after calculating the status
error U in equation (6), the F/B signal and the R/S
time ratio data are determined in just the same way as
shown in figure 4.


Figure 9: Flowchart of the DSP software
(computes S = C1·θe + C2·ωⁿ for ω > 0 and S = C1·θe − C2·ωⁿ
for ω < 0, then outputs the F/B and R/S signals)

    ω((k+1)T) ≈ [θe((k+1)T) − θe(kT)] / T        (7)

where
    θe : position error
    T  : sampling period

It is calculated simply from the number of position
sensor pulses during the sampling period T. The
sampling period T in the experiment is 500 μs
(2.0 kHz), which is equal to the period of the
impulsive triangular carrier wave. Assuming that the
output pulse rate of the position sensor is 2.0 kHz,
the minimum detectable speed of the motor is 0.0926 rpm.
When the DSP estimates the speed ω as zero, another
measuring scheme must be applied under 0.0926 rpm. It
is realized by measuring the pulse duration of the
position sensor as shown in figure 8. When the value
ω((k+1)T) becomes zero, the low speed detector works by
switching SW to the ωLOW side. And when the counter
data of the low speed detector is saturated, the speed
ω is regarded as zero. θe is limited to 20 bits
(+524,288 to −524,288 pulses) in this controller.
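The two-range speed measurement just described can be sketched as follows; the constants follow the text, while the function interface is an illustrative assumption.

```python
# Two-range speed estimate: above 0.0926 rpm the speed comes from the
# pulse count per sampling period T; with no pulses in T, the low-speed
# detector measures the pulse duration instead, and a saturated
# duration counter means the speed is regarded as zero.

PULSES_PER_REV = 1_296_000
T = 500e-6                     # sampling period, 500 us (2.0 kHz)

def speed_rpm(pulse_count, pulse_duration_s=None):
    """pulse_duration_s is used only when pulse_count == 0; pass None
    once the low-speed detector's counter has saturated."""
    if pulse_count != 0:
        return pulse_count / (PULSES_PER_REV * T) * 60.0
    if pulse_duration_s is not None:
        return 60.0 / (PULSES_PER_REV * pulse_duration_s)
    return 0.0
```

One pulse per period gives 60 / (1,296,000 × 500e-6) ≈ 0.0926 rpm, matching the minimum detectable speed quoted above.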
To get the status error S, θe and ω are substituted
into equation (5). When |θe| < 60 (32 sec.), the
additional PI control expressed in equation (6) is
applied. The linear region width ΔT is decided by the
position error; it corresponds to the amplitude of the
impulsive triangular carrier wave.
The status error S calculated from equation (5) is
compared with zero to give the F/B signal. The absolute
value of S is pulse width modulated with the impulsive
triangular carrier to obtain the R/S time ratio, which
gives the R/S time ratio data.
The optimum sliding line with auto tuning control
for parameter variations will be discussed in the next
section. The calculation is accomplished within 60 μs.
This is small compared to T, but from the standpoint of
stability it is better to make this value as small as
possible.


5. Auto tuning control

The recent development of microprocessors enables
digital controllers with highly intelligent abilities.
As the demands of complex servo mechanisms increase, it
becomes very difficult to adjust the gains of
controllers.
Accordingly, auto tuning control is now a very
promising method for motion control systems. [5] In
this system, a simple auto tuning control is tried by
changing the slope of the optimum sliding line with
parameter variations. The optimum sliding line is
varied instantly by observing the relation of the phase
plane trajectory and the sliding line.
The phase plane trajectory usually follows the
specified sliding line as shown in figure 10(a). But as
shown in trajectory (b), when the slope of the sliding
line is larger than that of the trajectory, the
trajectory has some ripples. On the other hand, as in
trajectory (c), when the slope of the sliding line is
too small, overshoot and a limit cycle are observed.
Accordingly, the optimum sliding line can be specified
by observing the variation of the motor and load
characteristics.
Figure 11 shows a control result using the tuned
optimum sliding line. As shown in this figure, two auto
tuning lines SL = 0 and SH = 0 are considered on both
sides of the optimum sliding line. When the trajectory
collides with the lower auto tuning line SL = 0, it is
better to use a larger slope for the optimum sliding
line to get a more stable response. On the other hand,
when the trajectory does not reach the higher auto
tuning line SH = 0, the slope of the optimum sliding
line must be decreased.
The auto tuning lines SL and SH are specified in
this paper as follows:

    SL = C1·θe ± C2·(KL·ω)ⁿ
    SH = C1·θe ± C2·(KH·ω)ⁿ        (8)

(with the sign taken as in equation (5): + for ω > 0, − for ω < 0)

where
    KL = K − ΔK,  KH = K + ΔK
    K : the gain of equation (5)
In this scheme, only the gain K of the optimum sliding
line is adjusted, according to whether the trajectory
collides with the auto tuning lines or not.
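A sketch of this gain adjustment follows; the step size and the 3-state comparator encoding are illustrative assumptions, while the direction of adjustment follows the tuning logic described above.

```python
# Auto tuning of the sliding-line gain K from the 3-state comparison
# of the status error S against the auto tuning lines SL = 0 and
# SH = 0.  The step size and comparator encoding are assumptions.

def tune_gain(comparator, K, step=0.01):
    """comparator: -1 if the trajectory collided with the lower line
    SL = 0 (use a larger slope), +1 if it never reached the upper line
    SH = 0 (use a smaller slope), 0 if it stayed between the lines."""
    if comparator < 0:
        return K + step
    if comparator > 0:
        return K - step
    return K
```

Repeating this adjustment each sampling period lets the sliding line track slow parameter variations of the load.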
Figure 11(a) shows the trajectory with no auto tuning,
where ripple is observed in the phase plane trajectory.
Figure 11(b) shows the auto-tuned case, where the
ripple is compensated. This real-time control is easily
executed by the DSP controller.
Figure 12 shows a schematic diagram of the DSP
software of the proposed auto tuning control. S in
equation (5) is calculated from θe and ω by the sliding
line controller, and SL and SH by the auto tuning line
controller. Comparing the value of S with the values of
SL and SH, the output of the 3-state comparator is
Figure 11: Phase plane trajectories (θe: 0.023 rad/div.,
ω: 0.185 rps/div.) — (a) no auto tuning control; (b) auto tuning control
Figure 15: Response to the disturbance torque input
(1/3 rated torque; maximum position error: 40 pulses; 0.1 sec/div.)
Figure 16: Distribution of position errors
Figure 17: Transient response near the target (10 ms/div.)
Figure 18: Auto tuning control on a phase plane —
(a) without auto tuning; (b) with auto tuning

7. Conclusion

In this paper, skillful control techniques are applied
to obtain a high resolution position control method for
an induction motor, and the following results are
obtained:
1) The impulsive torque drive in the linear region
gives good stability and precision in the low speed
area.
2) Optimum sliding mode control with variable gain,
which differs from the conventional method, improves
accuracy, response, and robustness.
3) For compensation of a disturbance torque at the
stand-still condition, a PI controller with a variable
time constant is also employed.
4) The use of the DSP yields simple circuit
configurations and a speed-sensorless system.
5) The system is made to have flexibility and
intelligent abilities, such as auto tuning control of
the sliding mode switching line for parameter
variations in the motion control system.
As a result, the proposed motion control method
should be applicable to high resolution servos with
resolution under 1 second of angle. The experimental
results show the proposed control to be a very
promising and skillful technique for high resolution
position control systems.

Acknowledgments

The authors would like to express their appreciation
to Mr. S. Asakawa and Mr. S. Tanaka of Sanken Electric
Co., Ltd. and the Power Electronics Laboratory members
of Nagaoka University of Technology. Part of the work
was supported by a Grant-in-Aid for Scientific Research
of the Ministry of Education and by the foundation of
Highly Advanced Mechatronics Technology.

References

(1) I. Takahashi and S. Asakawa, "Ultra-wide speed control of an
    induction motor covering a 10^6 range," IEEE-IAS (1987)
(2) Olle I. Elgerd, "Control Systems Theory," International student
    edition, McGraw-Hill Inc., 1967
(3) I. Takahashi and M. Iwata, "High resolution servo system of an
    induction motor using linear mode sliding control," PCIM '88
    (Intelligent Motion), Japan (1988), p. 254
(4) T. Iwakane and T. Kume, "High performance vector controlled ac
    motor drives (applications and new technologies)," IEEE-IAS (1985)
(5) R. Lorenz, "Tuning of Field-Oriented Induction Motor Controllers
    for High-Performance Applications," IEEE-IAS (1986)


A TMS32010 Based Near Optimized Pulse Width Modulated Waveform Generator

R.J. Chance and J.A. Taufiq, Dept. Electronic & Electrical Eng., University of Birmingham, Birmingham B15 2TT, UK

switching frequency, it is not possible to eliminate
inverter generated harmonics in this band. However,
with typical input filter values, it can be shown that
the signalling frequency components in the rails will
be much less than the typical threshold levels [1,2].

This paper describes a system for dynamically
calculating optimized pulse width modulated (PWM)
waveforms for use with voltage source inverter (VSI)
fed induction motor drives in railway traction
applications. A TMS32010 signal processing
microprocessor, capable of fast arithmetic, is
interfaced to a novel random access memory based
waveform generating hardware. This provides the
capability to control waveform detail impossible with
more conventional microprocessor based systems.
Although the paper concentrates on the implementation
of a particular algorithm, the design can implement
variable pulse widths in multiphase systems and in real
time. An important aspect of this work is the role
played by microprocessor simulation in testing the
design.
Nomenclature

m    number of switching angles per quarter cycle of PWM waveform
αk   kth switching angle
NP1  modulation depth of PWM waveform
Vdc  inverter dc link voltage
VSI  voltage source inverter
f    inverter output frequency
Introduction

With the increasing availability of high power gate
turn off (GTO) thyristors, there has been a renewed
interest in inverter drives for electric multiple unit,
metro and light rail applications. Comparing the GTO
inverter and GTO chopper from an economic viewpoint, it
is widely accepted that the GTO voltage source inverter
(VSI) is the most favourable of all the inverter
configurations. The GTO VSI does not require a
preconditioning chopper, and input voltage fluctuation
is compensated by the VSI controller.

To date, the implementation of this type of optimized
PWM scheme has been limited to a look-up table of the
exact switching angle data, which is precomputed for a
given inverter input voltage and ratio changing scheme.
A high incremental resolution of the angles may be
needed, which could require substantial memory. Also,
fluctuations in the DC input voltage can only be
compensated by performing an interpolation of exact
switching angle data. Therefore the preferred solution
would be to generate these switching angles on-line. It
has been shown [2] that it is possible to approximate
the exact switching angle trajectories by an
algorithmic approach, which results in relatively
simple equations. This algorithm has also been shown to
generate near optimal switching angles for any number
of angles per quarter cycle, m. The equations to be
computed are derived in [1] and can be summarized as
follows:
For odd k,

    Δαk = 0.4 sin[(k − 0.5) × 59.2°/m] + 3(NP1 − 0.8) × 0.09m

with the (NP1 − 0.8) term set to zero for NP1 ≤ 0.8, and

    αk = (k × 60°)/(m + 1) + [120° × Δαk × NP1]/[0.8(m + 1)] − Δαk        (1)

With these drives, the problem of signalling
interference is more pronounced with power frequency
type track circuits. As in this case the signalling
frequencies are relatively low, typically below 400 Hz,
any components at these frequencies generated by the
GTO VSI will not be significantly attenuated by the
input filter of the traction equipment. Therefore it is
essential to ensure that the GTO VSI does not generate
any components at these signalling frequencies. In the
case of audio frequency type track circuits, the range
of signalling frequencies used is usually around
2-10 kHz, and with the typical constraint on the maximum

For even k,

    Δαk = 0.4 sin[k × (58.6° − 12.5°/(m − 2))] + 4(NP1 − 0.8) × 0.09m

with the (NP1 − 0.8) term set to zero for NP1 < 0.8, and

    αk = ((k + 1) × 60°)/(m + 1) + [120° × Δαk × NP1]/[0.8(m + 1)] − Δαk        (2)

From a signalling viewpoint, the main difference
between the VSI and the fixed frequency chopper drives
is that the former can potentially generate components
over a wide range of frequencies, as the VSI operates
from minimum to maximum frequency. Previous experience
with chopper generated interference suggests that
methods of control which can theoretically eliminate
components at the signalling frequencies will be
required by most metro authorities when new equipment
like the GTO VSI is considered. In this respect, it has
been shown [1,2] that a harmonic elimination optimized
PWM based ratio changing scheme, tailored to suit the
type of signalling system used, is the best solution.
Although other types of PWM scheme such as regular
sampled and distortion minimised are more commonly used
in industrial AC drives, these are not really ideal in
this particular application.


Choice of microprocessor

This type of microprocessor based PWM waveform
generator design often uses a general purpose
microprocessor interrupted by a counter-timer, as
described in [3]. Three different types of
microprocessor, the Z80, 8086 and TMS32010, were
benchmarked
TABLE 1: PERFORMANCE OF 3 PROCESSORS FOR OPTIMISED PWM SCHEME
(all times are in microseconds)

                                Z80 (4 MHz)   8086 (5 MHz)   TMS32010 (20 MHz)
multiply time (16x16 bit)           300            30               0.2
divide time (16/8 bit)              400            40               4
memory/register transfer
  time (16 bit word)                 --            --               0.2
estimated time to
  compute (2) (NP1 < 0.8)          3000           310              20
on chip RAM (bytes)                   0             0             288
timer peripheral availability      good          good            poor
multi-level interrupts             good          good            poor

© 1988 IEEE. Reprinted, with permission, from Third International Conference on Power Electronics
and Variable Speed Drives, Conference Publication Number 291, July 1988.


for this waveform generator, as shown in Table 1.

requires an inconveniently high frequency source of
70 MHz to give the 1% resolution required at 150 Hz.
The second method requires the RAM contents to be
changed several times in one cycle, and therefore to be
updated frequently even when a constant output
frequency is generated. The potential accuracy of this
generator is high, and it is the one used here. This
method of using RAM external to the processor address
space for generating the PWM waveforms for a variable
frequency VSI is apparently novel. Existing waveform
generators usually use some form of counter-timer.
Only the TMS32010 can compute all the switching angles
with the algorithm in the desired time of 2 ms. It also
has enough random access memory (RAM) on chip for
executing the algorithm. The main difficulty with the
TMS32010 is the lack of suitable counter/timer
peripherals. Therefore a totally different approach has
been used for waveform generation.

One method of generating a waveform is to store it as
a binary pattern in RAM. Thus a square wave, for
example, can be created by storing N binary ones
followed by N zeros and reading these locations at
regular intervals. The RAM address becomes in effect
the waveform angle or time, which can be generated by a
binary counter. This method is not efficient for
generating a changing waveform because a large number
of RAM locations must be continually updated. However,
the memory based waveform generation hardware used here
is based on the storage of identifying codes only at
the addresses (switching edges) where a waveform state
change occurs. This reduces to a minimum the memory
locations used to define the desired waveform. 4096
words of 8 bits are used, with two bits of each word
per phase, so that six bits can produce a three phase
waveform. The waveform is stored as a 'map' of
switching angles or times, as illustrated in Fig. 1.


The implementation of this scheme is shown in Fig. 2.
The TMS32010 'CLKOUT' signal is divided from 5 MHz by a
prescaler to drive the 12 bit binary counter. This rate
is currently 1.25 MHz. Thus external RAM address
updates by the counter are synchronised to processor
operations. During every machine cycle, the TMS32010
produces one of the following mutually exclusive
signals:

(1) MEN — instruction fetch.
(2) WE  — port write (output).
(3) DEN — port read (input).

In a TMS32010 running at 20 MHz, MEN occurs at a 5 MHz
rate except during the input and output operations.
This makes it possible for the hardware counter to
'steal' the MEN cycles for updating the waveform from
the output RAM, while allowing the TMS32010 to access
the RAM without constraint. In Fig. 2 the following
operations are carried out:

Fig.1 Codes to generate a three phase waveform from RAM
Each address location contains the two bits per phase,
coded as follows to load an output latch as the address
generator (binary counter) is incremented:

    11   no change in latch state.
    01   set output latch to '1'.
    10   reset output latch to '0'.
    00   not used.

The outputs of three such latches thus generate the
waveforms for a three phase supply. Before using the
RAM, all locations are initialised to the 11 (no
change) condition. A waveform is created by writing
either a 01 or a 10 into RAM locations which correspond
to the desired switching angles. To change this
waveform, the contents of these addresses must be reset
to the no change (11) state before new values are
written.
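The map coding just described can be sketched as follows; the helper names are illustrative, but the 11/01/10 codes and the 4096-word size follow the text.

```python
# Output-RAM coding scheme: 4096 words, two bits per phase;
# 11 = no change, 01 = set latch, 10 = reset latch.  "Playing" the map
# through the three output latches reconstructs the waveform.

NO_CHANGE, SET, RESET = 0b11, 0b01, 0b10

# all locations initialised to "no change" on all three phases
ram = [NO_CHANGE | (NO_CHANGE << 2) | (NO_CHANGE << 4)] * 4096

def write_edge(ram, address, phase, code):
    """Store a switching edge (SET or RESET) for one phase (0..2)."""
    shift = 2 * phase
    ram[address] = (ram[address] & ~(0b11 << shift)) | (code << shift)

def play(ram):
    """Run the counter through all addresses, updating three latches."""
    latches = [0, 0, 0]
    out = []
    for word in ram:
        for phase in range(3):
            code = (word >> (2 * phase)) & 0b11
            if code == SET:
                latches[phase] = 1
            elif code == RESET:
                latches[phase] = 0
        out.append(tuple(latches))
    return out

write_edge(ram, 0, 0, SET)        # phase 0 rises at address 0
write_edge(ram, 2048, 0, RESET)   # and falls half a cycle later
wave = play(ram)
```

Only the two edge locations were written; every other address is a no-change code, which is what keeps the number of RAM updates per waveform change to a minimum.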
Two methods may be used to generate a waveform of
variable frequency: drive the counter that supplies the
RAM address from a variable frequency source while
storing a complete waveform cycle, or alternatively
drive it from a fixed frequency source so that the RAM
addresses become equivalent to time delays. The first
method


Fig.2 Data pathways in the waveform generation hardware

* Output a new divider value (A).
* Input the current address (B,G).
* Output a new RAM address (C) to the buffer/latch.
* Input RAM contents (I,D) at the previously latched address (H).
* Output new RAM contents (E,J) from the TMS32010 at the previously
  latched address (H).

The above TMS32010 operations all coincide with either
a DEN or WE TMS32010 machine cycle, and are interleaved
with output latch updates performed in hardware during
the MEN (data path K; address path F) time slots.
Software Overview

The TMS32010 software may be conveniently split into
four sections: obtaining m and NP1, calculation of
switching angles, conversion of angles to time delays,
and output RAM update.

Obtaining m and NP1

The inverter output frequency and dc link voltage
values are obtained from two analog to digital
converters. From these two inputs, the two variables in
(1) and (2), namely m and NP1, need to be deduced. The
required value of m for a given value of inverter
output frequency is obtained from a look up table
equivalent to Fig. 3.

TABLE II: OUTPUT RAM ACTIVITY WITH THE PASSAGE OF TIME

RAM ADDRESS/    WAVEFORM        DELETE OLD      INSERT NEW
COUNTER VALUE   GENERATED       CODES           CODES
0 - 2047        from RAM        from RAM        for RAM
                cycle N         cycle N-1       cycle N
                0-2047          2048-4095       2048-4095
2048 - 4095     from RAM        from RAM        for RAM
                cycle N         cycle N         cycle N+1
                2048-4095       0-2047          0-2047
0 - 2047        from RAM        from RAM        for RAM
                cycle N+1       cycle N         cycle N+1
                0-2047          2048-4095       2048-4095
2048 - 4095     from RAM        from RAM        for RAM
                cycle N+1       cycle N+1       cycle N+2
                2048-4095       0-2047          0-2047

Fig.3 Ratio changing scheme for deriving m from inverter output
frequency (switching frequency in Hz versus inverter output
frequency, 10-100 Hz)
This particular ratio changing pattern has been devised
to ensure that the VSI drive does not produce any
components at typically used signalling frequencies
over the entire inverter output frequency range [2].
The required variation of the VSI fundamental line
voltage with inverter output frequency is also stored
as a table. In railway traction systems, Vdc is subject
to a +20% / -30% variation. In order to keep the motor
line voltage constant, irrespective of this variation,
NP1 is altered to compensate.
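The derivation of m and NP1 from the two A/D inputs can be sketched as follows. The band edges, nominal link voltage, and V/f law below are illustrative placeholders, not the published ratio changing scheme of Fig. 3.

```python
# Deriving m and NP1 from the measured output frequency and Vdc.
# M_BANDS, vdc_nominal and the V/f law are placeholder assumptions.

M_BANDS = [(60.0, 3), (40.0, 5), (25.0, 7), (0.0, 9)]  # (f >= .., m)

def select_m(f_out):
    """Look up the number of switching angles per quarter cycle."""
    for f_min, m in M_BANDS:
        if f_out >= f_min:
            return m
    return M_BANDS[-1][1]

def select_np1(f_out, vdc, vdc_nominal=750.0):
    """Scale the tabulated modulation depth so the motor line voltage
    stays constant over the +20% / -30% variation of Vdc."""
    np1_table = min(1.0, f_out / 50.0)     # placeholder V/f law
    return np1_table * vdc_nominal / vdc
```

A sagging link voltage raises NP1 and a high link voltage lowers it, which is the compensation described above.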
Switching angle calculation
The angles are calculated from (1) and (2) using
integer arithmetic. The output of this stage is a list
of switching angles between zero and π/2 radians. Sines
and cosines are derived from a table.

Angle to time conversion
The switching angles must be divided by the inverter
output frequency f to give corresponding time delays. A
16-bit word does not give the required 1/300,000 time
resolution. This problem has been solved by a form of
block floating point [4] which may be efficiently used
on the TMS32010. For frequencies above 12 Hz, times are
divided by 4, but below 12 Hz they are divided by 64.
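A minimal C illustration of this block floating point idea follows. Only the 1/300,000 s resolution and the divide-by-4/divide-by-64 block exponents come from the text; the angle scaling, the 32-bit intermediate and the behaviour at exactly 12 Hz are assumptions.

```c
#include <stdint.h>

#define TICK_HZ    300000UL   /* required 1/300,000 s time resolution */
#define ANGLE_FULL 4096UL     /* one output cycle in angle counts (assumed) */

/* A time delay stored as a 16-bit mantissa plus a block exponent. */
typedef struct { uint16_t mantissa; uint8_t shift; } bfp_time_t;

bfp_time_t angle_to_time(uint16_t angle, uint16_t f_hz)
{
    /* block exponent from the paper: /4 above 12 Hz, /64 below */
    uint8_t shift = (f_hz >= 12) ? 2 : 6;

    /* ticks = (angle / ANGLE_FULL) * (TICK_HZ / f_hz), kept in 32 bits */
    uint32_t ticks = ((uint32_t)angle * TICK_HZ / ANGLE_FULL) / f_hz;

    bfp_time_t t = { (uint16_t)(ticks >> shift), shift };
    return t;
}
```

At 1 Hz a quarter cycle is 75,000 ticks, which overflows 16 bits; dividing by 64 brings it back into range, which is the point of the block exponent.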


internal data memory for deletion during the next
cycle. As the end of the waveform will hardly ever
occur at RAM address zero, the address of this point
must be added to the switching times for the next cycle
of the waveform.
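The double-buffered update described by Table II can be sketched in C as follows; the array, the code values and the function signature are illustrative stand-ins for the real external RAM and the TMS32010 output routine.

```c
#include <stdint.h>

#define RAM_SIZE  4096
#define HALF      (RAM_SIZE / 2)
#define IDLE_CODE 0x3            /* 11: output latches remain unchanged */

uint16_t wave_ram[RAM_SIZE];     /* stands in for the external RAM */

/* While the hardware counter scans one 2048-word half, rewrite the
 * other half: clear the previous cycle's switching codes back to the
 * idle code, then insert the next cycle's codes (cf. Table II). */
void update_idle_half(uint16_t counter_addr,
                      const uint16_t *old_addrs, int n_old,
                      const uint16_t *new_addrs,
                      const uint16_t *new_codes, int n_new)
{
    uint16_t base = (counter_addr < HALF) ? HALF : 0;   /* idle half */

    for (int i = 0; i < n_old; i++)                /* delete old codes */
        wave_ram[base + (old_addrs[i] % HALF)] = IDLE_CODE;

    for (int i = 0; i < n_new; i++)                /* insert new codes */
        wave_ram[base + (new_addrs[i] % HALF)] = new_codes[i];
}
```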
Testing the System
Practically all the development of this project was
performed on simulated rather than real hardware.
The TMS32010 simulator
A simulated TMS32010 [5] was used to test the software
to the point where it could be used with confidence in
the target hardware (see appendix A). This simulator
has particularly versatile means of interfacing the
TMS32010 to streams of test data held on the host
computer. Additionally, real or even non-existent
peripheral devices can be simulated in the 'C'
language. In this case, the waveform generator hardware
of Fig.2 was completely simulated in software.
TMS32010 equation calculations
The scheme shown in Fig.4 was used to check the
TMS32010 equation calculations.
Simulating the waveform generation hardware
The TMS32010 simulator allows the "connection" of a
simulated peripheral device usually written in 'C'
(appendix A). In order to expedite testing of the
complete TMS32010 software, a simulation of the
waveform generation hardware was created and interfaced
to the simulator. It allows testing which would be
nearly impossible on the real hardware.
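A simulated peripheral of this kind might look like the following C sketch. The handler names and port assignments are invented; the real simulator's linking interface is described in [5,8].

```c
#include <stdint.h>

/* Software-simulated waveform generation hardware: the simulator
 * calls user-supplied C handlers on each IN/OUT instruction. */
#define PORT_ADDR_LATCH 0
#define PORT_RAM_DATA   1

static uint16_t sim_ram[4096];   /* the waveform generation RAM */
static uint16_t latched_addr;    /* the address buffer/latch */

/* called by the simulator when the program executes OUT port,... */
void sim_out(int port, uint16_t value)
{
    if (port == PORT_ADDR_LATCH)
        latched_addr = value & 0x0FFF;     /* 12-bit RAM address */
    else if (port == PORT_RAM_DATA)
        sim_ram[latched_addr] = value;     /* write at latched address */
}

/* called by the simulator on IN port,... */
uint16_t sim_in(int port)
{
    return (port == PORT_RAM_DATA) ? sim_ram[latched_addr] : latched_addr;
}
```

Because the "RAM" is just an array in the host's memory, its entire contents can be dumped to file after any instruction, which is the kind of visibility the real hardware cannot give.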
The waveform generation RAM is simulated by an array of
integers updated entirely by the TMS32010 program, which
can be examined by the user or written to file. A count
is kept of the number of times that the simulated RAM
address passes through zero, allowing output pulse
widths to be computed and saved on file for test
purposes. Note that digital values of long times
(equivalent to more than 2048 RAM addresses) are not
obtainable from the hardware. They could include errors
introduced outside the angle computations, e.g. due to
the block floating point representation or failure to
delete angles from the waveform generation RAM.
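The pulse-width bookkeeping could be sketched as below; the counter width matches the 4096-word RAM, but the function names and the sampling scheme are illustrative assumptions.

```c
#include <stdint.h>

/* Every simulated clock the RAM address advances; a wrap count lets
 * pulse widths longer than one 4096-address cycle be recovered,
 * which the real hardware cannot report. */
static uint32_t wrap_count;
static uint16_t ram_addr;

/* advance the simulated address counter by one clock */
void sim_tick(void)
{
    ram_addr = (ram_addr + 1) & 0x0FFF;
    if (ram_addr == 0)
        wrap_count++;            /* address passed through zero */
}

/* width in clocks between two (address, wrap-count) edge samples */
uint32_t pulse_width(uint16_t a0, uint32_t w0, uint16_t a1, uint32_t w1)
{
    return (w1 - w0) * 4096u + a1 - a0;
}
```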

The speed of the TMS32010 program in performing the
hardware updates is of course very dependent upon the
type of waveform and the part of the cycle involved.
However, as an indication, at 17 Hz simulation shows
that 127 microseconds is necessary to update the
hardware during the worst case RAM half cycle. This
compares with 1.6 milliseconds which is available while
the RAM cycles through 2048 addresses.
Performance in hardware
The functional integrity of the hardware was tested
with small programs to exercise the various sections.
Thus, by the time the software described above was
transferred to the target system, the hardware was
known to be capable of executing a TMS32010 program,
generating a waveform from the waveform generation
hardware and correctly reading the analog-to-digital
converters used to input the dc link voltage and the
inverter output frequency. Waveforms produced by this
hardware on the first trial agreed with expectations
predicted by simulation.

given application, it is possible to predict the
maximum value of tq, and td is set to a constant value
which is slightly greater than this. Having derived the
ideal complementary gate drive waveforms from the three
PWM waveforms, the effect of td is to delay the turn-on
edge of these gate drive waveforms and leave the
turn-off edge unaltered as shown in Fig.5.
This delay can be incorporated in hardware but, with
this RAM based method of generating the waveforms, the
delay can be incorporated in software. This reduces the
hardware requirement, especially if the delay must be
variable. All four binary codes are used with two
output latches per phase in the switching angle map as
follows:
11    output latches remain unchanged
10    set output latch 1, reset latch 2
01    reset output latch 1, set latch 2
00    reset both latches

With this system, a typical switching sequence might be
as follows:

10    device 1 on, device 2 off
11
00    both devices off
11
01    device 2 on, device 1 off

Of course, twice as many RAM accesses need to be made,
but this is no problem with the TMS32010. A PAL, e.g.
the PAL16R6, is ideal for implementing the Boolean
expressions to directly generate the 6 GTO gate drive
signals from the RAM contents.
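A C model of this per-phase decoding (the PAL implements the same truth table in registered logic) might read as follows; the struct and function names are illustrative.

```c
#include <stdint.h>

/* Two gate-drive latches for one inverter phase. */
typedef struct { uint8_t q1, q2; } phase_latches_t;

/* Apply one 2-bit code from the switching angle map, per the table
 * in the text: 11 leaves the latches alone, 10 and 01 select one
 * device, 00 turns both off (giving the dead time td in software). */
void apply_code(phase_latches_t *p, uint8_t code)
{
    switch (code & 0x3) {
    case 0x3: /* 11: output latches remain unchanged */        break;
    case 0x2: /* 10: set latch 1, reset latch 2 */
        p->q1 = 1; p->q2 = 0;                                   break;
    case 0x1: /* 01: reset latch 1, set latch 2 */
        p->q1 = 0; p->q2 = 1;                                   break;
    case 0x0: /* 00: reset both latches (dead time) */
        p->q1 = 0; p->q2 = 0;                                   break;
    }
}
```

Running the typical sequence 10, 11, 00, 11, 01 through this model reproduces the device-1-on, both-off, device-2-on progression described above.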

[Fig.5: ideal PWM waveforms and the two gate drive signals derived from them for one phase, with each turn-on edge delayed by td.]
Fig.5 Generation of GTO gate drive signals

Non-complementary waveforms

Due to the turn-off time tq, the GTO inverter pole
switching waveform will be slightly different from the
generated PWM waveform. This small error in the power
electronics reproduction of these waveforms will result
in a slight change in the harmonic spectrum measured in
the power circuit compared with the ideal case. It is
possible to compensate for this effect if the variation
of tq with anode current is known [7]. The spare
computing capacity on the TMS32010 means that it will
be possible to incorporate a closed loop controller to
compensate for the varying GTO turn-off time.

The system described so far can generate the three
phase pole switching waveforms to drive a three phase
inverter. In each phase, the complementary device can
only be switched on at a finite time after the other
device in the same phase has been switched off, thus
avoiding a dc link short circuit. Therefore, for a
definite time interval, both gate drive signals for a
phase are off. This delay time td must be greater than
the turn-off time tq of the GTO used. tq varies with
the type of GTO and the anode current being commutated.
Therefore, in this application, tq will vary depending
on the point in the inverter current waveform at which
the inverter current is turned off. However, for a

After extensive testing and debugging of the software
using the simulation facility already described, the
software was evaluated in the target hardware. Prior to
interfacing the TMS32010 based waveform generator to a
GTO inverter, the harmonic spectra of the near optimal
PWM waveforms produced were analysed. Fig.6 shows this
ideal pole switching waveform spectrum for m = 5 and,
as expected, the 5th, 7th, 11th and 13th harmonics
(i.e. m-1 harmonics) are almost zero. This confirms the
theoretical work carried out on evaluating the accuracy
of the algorithm [1]. As explained in [1], the

Design Adaptability

The high speed of the TMS32010 has resulted in spare
computing capacity which can be used in several ways.


Practical Results

[Fig.6: spectrum analyser display; the 5th, 7th, 11th and 13th harmonics are almost zero.]
Fig.6 Ideal pole switching waveform spectrum for m = 5

algorithm is least accurate when m = 3, especially for
high NP1 values. This is clearly demonstrated by Fig.7,
which shows the effect of the increase in the value of
NP1 as Vdc decreases to its minimum value.

[Fig.7: ideal pole switching waveform spectra for m = 3: (a) nominal Vdc; (b) minimum Vdc.]
Fig.7 Ideal pole switching waveform spectra for m = 3
showing the effect of a decrease in Vdc

[Fig.8: measured spectra for m = 5: (a) line current spectrum; (b) dc link current spectrum.]
Fig.8 Measured spectrum for m = 5

Using the algorithm, if the 7th harmonic in the m = 3
mode is unacceptably high in amplitude, then as
explained in [1] the situation can be easily improved
by using the exact switching angles instead. The
TMS32010 based waveform generator was finally
interfaced to a 2kVA GTO VSI driving a small induction
motor. The measured line current spectrum for m = 5 is
shown in Fig.8a and the first significant harmonic
present is the 17th as expected. The corresponding
inverter DC link current waveform spectrum is shown in
Fig.8b. As anticipated, the 6th and 12th DC side
harmonics are almost zero and the first main harmonic
is the 18th due to the 17th and 19th AC side harmonics.
Similarly accurate results were also obtained for
higher values of m.
Further tests were carried out to investigate the
transition during gear changes. Fig.9a & b show the
inverter line current waveform during the transition
from m = 5 to m = 3 and m = 1 to m = 0 (quasi-square)
respectively. As can be seen, the transitions occur
smoothly and there is no observable transient in the
inverter line current waveform.
Conclusion

The use of a harmonic elimination optimized PWM ratio
changing scheme is essential if railway traction VSI
drives are to be compatible with signalling systems. In
particular, it is shown that it would be advantageous


if the switching angles could be computed on-line by a
generalized algorithm which gives near optimal

no. 2, Pt B, pp 71-84, Mar. 1986.
[3] S.R. Bowes & M.J. Mount, "Microprocessor control of
PWM inverters", Proc. IEE, vol 128, Pt B, no. 6, pp
293-305, Nov. 1981.
[4] A.V. Oppenheim, "Realisation of digital filters
using block floating point arithmetic", IEEE Trans.
Audio Electroacoust., vol. AU-18, pp 130-136, June
1970.
[5] R.J. Chance, "Simulation experiences in the
development of software for digital signal
processors", Microproc. and Microsys., vol 10, no 8,
pp 419-426, 1986.
[7] S. Sane, H. Irinatsu, "Very precise turn-off timing
control of gate turn-off thyristors", IEE PEVD
Conf., London, pp 23-26, 1-4 May 1984.
[8] R.J. Chance, "TMS320 digital signal processor
development system", Microprocessors and
Microsystems, vol. 9, no. 2, pp 50-56, Mar 1985.
[9] R.J. Chance & B.S. Jones, "A Combined Software/
Hardware Development Tool for the TMS32020 Digital
Signal Processor", J. Micro. App., vol 10, pp
179-197, 1987.
Appendix A  The TMS32010 Simulator

The simulator used in this work [5,8] is part of a
TMS32010/20/C25 development system written by one of
the authors (RJC). It includes a TMS32010 assembler and
simulator, the normal host being an IBM Personal
Computer. The simulator accepts machine code created by
the assembler and allows simulated execution of
TMS32010 programs. The usual facilities such as break
point setting, access to user symbols, instruction
timing etc. are provided. This simulator is
particularly intended to be used for linking digital
data streams to TMS32010 i/o ports or memory for test
purposes. One advantage is that values are precise
digital values rather than analog signals and are thus
repeatable and accurate. In addition, powerful software
tools on the host may be easily used to generate or
analyse i/o data.
[Fig.9: oscillograms of inverter line current: (a) transition from m = 5 to 3; (b) transition from m = 1 to 0.]
Fig.9 Inverter line current during ratio changes
switching angles. This paper shows that a high speed
signal processing microprocessor can be used
efficiently in implementing such an algorithm. The
calculation time is extremely small when compared with
conventional processors. The fast cycle time makes it
desirable to use novel methods for waveform generation.
The one described gives exceptional waveform control
and a low chip count.

The use of an unusual TMS32010 simulator has not only
allowed the testing of TMS32010 algorithms using
integer arithmetic but also enabled the exact pulse
width output values to be analysed. Due to the unusual
output stage, this would have been difficult in real
hardware. It has allowed complete debugging of the
software before the final implementation.
In this implementation, the computation of each even or
odd switching angle takes 24.8 μs or 32.4 μs
respectively. When the required fundamental pole
switching amplitude is greater than 0.8Vdc·π/2, the
correction factor equations add a further 23.4 μs.
References
[1] J.A. Taufiq, B. Mellitt & C.J. Goodman, "A novel
algorithm for generating near optimal PWM waveforms
for ac traction drives", Proc. IEE, vol 133, Pt B,
no. 2, pp 85-94, Mar. 1986.
[2] J.A. Taufiq, C.J. Goodman & B. Mellitt, "Railway
signalling compatibility of inverter fed induction
motor drives for rapid transit", Proc. IEE, vol 133,


The use of such a simulator would be limited without
the ability to simulate essential peripheral hardware,
e.g. the waveform generation RAM. This simulator is
supplied in object library format. The user may create
a software simulated peripheral 'device', to be linked
at the object code level to the TMS320 simulator. Such
a simulated peripheral is usually written in 'C'. This
enormously extends the use of the simulator and allows
debugging methods impractical in hardware, such as the
trapping of complex i/o data. Simulated peripherals
have been used to simulate not only real hardware but
also imaginary hardware used only for testing.
The