Proceedings of the

WESTERN JOINT COMPUTER CONFERENCE

March 3-5, 1959

San Francisco, Calif.

Sponsors:
THE INSTITUTE OF RADIO ENGINEERS
Professional Group on Electronic Computers

THE AMERICAN INSTITUTE OF ELECTRICAL ENGINEERS
Committee on Computing Devices

THE ASSOCIATION FOR COMPUTING MACHINERY

Printed in the United States of America

Price $6.00

PROCEEDINGS OF THE
WESTERN JOINT COMPUTER CONFERENCE

PAPERS PRESENTED AT
THE JOINT IRE-AIEE-ACM COMPUTER CONFERENCE
SAN FRANCISCO, CALIF., MARCH 3-5, 1959

Sponsors

THE INSTITUTE OF RADIO ENGINEERS
Professional Group on Electronic Computers

THE AMERICAN INSTITUTE OF ELECTRICAL ENGINEERS
Committee on Computing Devices

THE ASSOCIATION FOR COMPUTING MACHINERY

Published by
The Institute of Radio Engineers
1 East 79th Street, New York 21, N. Y.
for the
Joint Computer Committee

ADDITIONAL COPIES
Additional copies may be purchased from the following sponsoring societies at $6.00 per copy. Checks should be made payable to any one of the following societies:
INSTITUTE OF RADIO ENGINEERS
1 East 79th Street, New York 21, N. Y.
AMERICAN INSTITUTE OF ELECTRICAL ENGINEERS
33 West 39th Street, New York 18, N. Y.
ASSOCIATION FOR COMPUTING MACHINERY
2 East 63rd Street, New York 21, N. Y.

Copyright © 1959 THE INSTITUTE OF RADIO ENGINEERS

LIST OF EXHIBITORS
AERONUTRONIC SYSTEMS, INC. .... Santa Ana, Calif.
AMPEX CORP. .... Redwood City, Calif.
AMP INC. .... Harrisburg, Pa.
BECKMAN/BERKELEY DIVISION OF BECKMAN INSTRUMENTS, INC. .... Richmond, Calif.
BENDIX COMPUTER DIVISION OF BENDIX AVIATION CORP. .... Los Angeles, Calif.
E. L. BERMAN CO. .... San Francisco, Calif.
BRYANT COMPUTER PRODUCTS DIVISION .... Springfield, Vt.
C. P. CLARE & CO. .... Los Angeles, Calif.
COLLINS RADIO CO. .... Burbank, Calif.
COMPUTER CONTROL CO., INC. .... Los Angeles, Calif.
DATA INSTRUMENTS DIVISION OF TELECOMPUTING CORP. .... North Hollywood, Calif.
DIGITAL EQUIPMENT CORP. .... Maynard, Mass.
ELECTRODATA DIVISION OF BURROUGHS CORP. .... Pasadena, Calif.
ELECTRONIC ASSOCIATES, INC. .... Long Branch, N. J.
ELECTRONIC ENGINEERING CO. OF CALIFORNIA .... Santa Ana, Calif.
FERRANTI ELECTRIC, INC. .... Hempstead, N. Y.
FRIDEN, INC. .... San Leandro, Calif.
GENERAL ELECTRIC CO., LIGHT MILITARY ELECTRONICS DEPT. .... Johnson City, N. Y.
HUGHES AIRCRAFT CO. .... Culver City, Calif.
INTERNATIONAL BUSINESS MACHINES CORP. .... New York, N. Y.
LABORATORY FOR ELECTRONICS, INC. .... Boston, Mass.
LIBRASCOPE, INC. .... Glendale, Calif.
F. L. MOSELEY CO. .... Pasadena, Calif.
G. E. MOXON SALES .... Beverly Hills, Calif.
PACIFIC TELEPHONE & TELEGRAPH CO. .... San Francisco, Calif.
PHILCO CORP. .... Philadelphia, Pa.
RADIO CORP. OF AMERICA .... Camden, N. J.
REMINGTON RAND DIVISION OF SPERRY RAND CORP. .... New York, N. Y.
RESEARCH & ENGINEERING, THE MAGAZINE OF DATAMATION .... Los Angeles, Calif.
RESE ENGINEERING, INC. .... Philadelphia, Pa.
ROYAL McBEE CORP. .... Port Chester, N. Y.
SOROBAN ENGINEERING, INC. .... Melbourne, Fla.
SPRAGUE ELECTRIC CO. .... North Adams, Mass.
STROMBERG-CARLSON-SAN DIEGO .... San Diego, Calif.
TALLY REGISTER CORP. .... Seattle, Wash.
TELEMETER MAGNETICS, INC. .... Los Angeles, Calif.
THE THOMPSON-RAMO-WOOLDRIDGE PRODUCTS CO. .... Los Angeles, Calif.
JOHN WILEY & SONS, INC. .... New York, N. Y.

NATIONAL JOINT COMPUTER COMMITTEE

Chairman
P. Armer
RAND Corporation
Santa Monica, Calif.

Vice-Chairman
H. H. Goode
Bendix Systems Division
Ann Arbor, Mich.

Secretary-Treasurer
Miss M. R. Fox
National Bureau of Standards
Department of Commerce
Washington 25, D.C.

IRE Representatives
R. D. Elbourn, National Bureau of Standards, Department of Commerce, Washington 25, D.C.
W. Buchholz, IBM Product Dev. Lab., P.O. Box 390, Poughkeepsie, N. Y.
G. Glinski, Dept. of Electrical Engineering, University of Ottawa, Ottawa, Ont., Can.
L. Nofrey, General Electric Computer Dept., Phoenix, Ariz.

AIEE Representatives
R. A. Imm, IBM Corporation, Rochester, Minn.
H. H. Goode, Bendix Systems Division, Ann Arbor, Mich.
C. A. R. Kagan, Engineering Research Center, Western Electric Company, Inc., Princeton, N. J.
S. Rogers, c/o Convair, Mail Zone 6-156, San Diego 12, Calif.

ACM Representatives
P. Armer, RAND Corporation, Santa Monica, Calif.
F. M. Verzuh, Massachusetts Institute of Technology, Cambridge, Mass.
H. R. J. Grosch, IBM Corporation, New York, N. Y.
J. Moshman, Council for Economic and Industry Research, Inc., Arlington, Va.
J. D. Madden, System Development Corp., Santa Monica, Calif.

Ex-Officio Representatives
W. H. Ware (IRE), RAND Corporation, Santa Monica, Calif.
M. Rubinoff (AIEE), Philco Corporation, Government and Industrial Division, Philadelphia, Pa.
R. W. Hamming (ACM), Bell Telephone Laboratories, Murray Hill, N. J.

Headquarters Representatives
L. G. Cumming, The Institute of Radio Engineers, New York, N. Y.
R. S. Gardner, American Institute of Electrical Engineers, 33 West 39th Street, New York, N. Y.

WESTERN JOINT COMPUTER CONFERENCE COMMITTEE

General Chairman: R. R. Johnson, General Electric Co., Palo Alto, Calif.

Vice-Chairman: R. W. Melville, Stanford Research Inst., Menlo Park, Calif.

Secretary-Treasurer: H. G. Asmus, General Electric Co., Palo Alto, Calif.

Technical Program: R. W. Melville, Chairman
J. Paivinen, General Electric Co., Palo Alto, Calif.
A. S. Zukin, Lockheed Aircraft Corp., Palo Alto, Calif.
D. Teichroew, Stanford University, Stanford, Calif.

Publications: B. J. Bennett, Chairman, IBM Corp., San Jose, Calif.
E. Lowe, IBM Corp., San Jose, Calif.
D. Willard, IBM Corp., San Jose, Calif.
A. Bagley, Hewlett-Packard Co., Palo Alto, Calif.

Publicity: G. A. Barnard, III, Chairman, AMPEX, Redwood City, Calif.
C. Elkind, Vice-Chairman, Stanford Research Inst., Menlo Park, Calif.
W. C. Estler, Consultant, Palo Alto, Calif.

Exhibits: H. K. Farrar, Chairman, Pacific Telephone & Telegraph Co., San Francisco, Calif.
G. H. Warfel, Bank of America, San Francisco, Calif.

Field Trips: K. F. Tiede, Chairman, Livermore Radiation Lab., Livermore, Calif.

Registration: R. M. Bennett, Jr., Chairman, IBM Corp., San Jose, Calif.
F. Chiang, IBM Corp., San Jose, Calif.

Printing: L. D. Krider, Chairman, Livermore Radiation Lab., Livermore, Calif.
R. Abbott, Livermore Radiation Lab., Livermore, Calif.

Women's Activities: Mrs. J. Teasdale, Chairman, General Electric Co., San Jose, Calif.
Mrs. R. Waters, General Electric Co., San Jose, Calif.
Mrs. E. Majors, General Electric Co., San Jose, Calif.

Local Arrangements: R. C. Douthitt, Chairman, Remington Rand, El Cerrito, Calif.
W. Gerkin, Remington Rand, San Francisco, Calif.

Mailing: E. T. Lincoln, Chairman, Stanford Research Inst., Menlo Park, Calif.


TABLE OF CONTENTS
Foreword .... 7
New Horizons in Systems .... Darwin E. Ellett .... 8
A Multiload Transfluxor Memory .... D. G. Hammel, W. L. Morgan, and R. D. Sidnam .... 14
Design and Analysis of MAD Transfer Circuitry .... D. R. Bennion and H. D. Crane .... 21
A Twistor Matrix Memory for Semipermanent Information .... Duncan H. Looney .... 36
A Card Changeable Nondestructive Readout Twistor Store .... J. J. DeBuske, J. Janik, Jr., and B. H. Simons .... 41
Square-Loop Magnetic Logic Circuits .... Edward P. Stabler .... 47
Relative Merits of General and Special Purpose Computers for Information Retrieval .... A. Opler and N. Baird .... 54
A Specialized Library Index Search Computer .... B. Kessel and A. DeLucia .... 57
Programmed Interpretation of Text as a Basis for Information-Retrieval Systems .... L. Doyle .... 60
A Theory of Information Retrieval .... Clinton M. Walker .... 63
The Role of USAF Research and Development in Information Retrieval and Machine Translation .... Robert F. Samson .... 66
Computing Educated Guesses .... E. S. Spiegelthal .... 70
A Memory of 3¼ Million Bits Capacity with Fast and Direct Access-Its Systems and Economic Considerations .... N. Bishop and A. I. Dumey .... 74
Information Retrieval on a High-Speed Computer .... A. R. Barton, V. L. Schatz, and L. N. Caplan .... 77
The Next Twenty Years in Information Retrieval: Some Goals and Predictions .... Calvin N. Mooers .... 81
Simulation of an Information Channel on the IBM 704 Computer .... E. G. Newman and L. O. Nippe .... 87
A Compiler with an Analog-Oriented Input Language .... M. L. Stein, J. Rose, and D. B. Parker .... 92
Automatic Design of Logical Networks .... T. C. Bartee .... 103
The Role of Digital Computers in the Dynamic Optimization of Chemical Reactions .... R. E. Kalman and R. W. Koepcke .... 107
Simulation of Human Problem-Solving .... W. G. Bouricius and J. M. Keller .... 116
The Role of the University in Computers, Data Processing, and Related Fields .... Louis Fein .... 119
The RCA 501 Assembly System .... H. Bromberg, T. M. Hurewitz, and K. Kozarsky .... 127
A Program to Draw Multilevel Flow Charts .... Lois M. Haibt .... 131
A Compiler Capable of Learning .... Richard F. Arnold .... 137
Special-Purpose Electronic Data Systems-The Solution to Industrial and Commercial Automation .... William V. Crowley .... 143
The Residue Number System .... Harvey L. Garner .... 146
System Evaluation and Instrumentation for Military Special-Purpose Digital Computer Systems .... A. J. Strassman and L. H. Kurkjian .... 153
Automatic Failure Recovery in a Digital Data-Processing System .... R. H. Doyle, R. A. Meyer, and R. P. Pedowitz .... 159
A High-Speed Data Translator for Computer Simulation of Speech and Television Devices .... E. E. David, Jr., M. V. Mathews, and H. S. McDonald .... 169
Some Experiments in Machine Learning .... Howard Campaigne .... 173
Some Communication Aspects of Character-Sensing Systems .... Clyde C. Heasly, Jr. .... 176
An Approach to Computers That Perceive, Learn, and Reason .... Peter H. Greene .... 181
Automatic Data Processing in the Tactical Field Army .... A. B. Crawford .... 187
Data Transmission Equipment Concepts for FIELDATA .... W. F. Luebbert .... 189
A High-Accuracy, Real-Time Digital Computer for Use in Continuous Control Systems .... W. J. Milan-Kamski .... 197
The Man-Computer Team in a Space Ecology .... J. Stroud and J. McLeod .... 202
The RCA 501 High-Speed Printers-The Story of a Product Design .... C. Eckel and D. Flechtner .... 204
A Digital Computer for Industrial Process Analysis and Control .... Edward L. Braun .... 207
The Burroughs 220 High-Speed Printer System .... F. W. Bauer and P. D. King .... 212
The ACRE Computer-A Digital Computer for a Missile Checkout System .... Richard I. Tanaka .... 217
IBM 7070 Data-Processing System .... J. Svigals .... 222
An Organizational Approach to the Development of an Integrated Data-Processing Plan .... George J. Fleming .... 231
Developing a Long-Range Plan for Corporate Methods and the Dependence on Electronic Data Processing .... Norman J. Ream .... 234
A General Approach to Planning for Management Use of EDPM Equipment .... Gomer H. Redmond .... 240
Dynamic Production Scheduling of Job-Shop Operations on the IBM 704 Data-Processing Equipment .... L. N. Caplan and V. L. Schatz .... 244
Numerical Methods for High-Speed Computers-A Survey .... George E. Forsythe .... 249
More Accurate Linear Least Squares .... Richard E. von Holdt .... 255
The CORDIC Computing Technique .... Jack Volder .... 257
Monte Carlo Calculations in Statistical Mechanics .... W. W. Wood and J. D. Jacobson .... 261
Real-Time Digital Analysis and Error-Compensating Techniques .... Wally Ito .... 269
Automatic Digital Matric Structural Analysis .... M. Chirico, B. Klein, and A. Owens .... 272
A New Approach to High-Speed Logic .... W. D. Rowe .... 277
Information Retrieval Study .... Robert Cochran .... 283
Communication Across Language Barriers .... W. F. Whitmore .... 286
Symbolic Language Translation .... Eugene C. Gluesing .... 288
A Generalized Scanner for Pattern- and Character-Recognition Studies .... W. H. Highleyman and L. A. Kamentsky .... 291
File Searching Using Variable Length Keys .... Rene De La Briandais .... 295
Program Design to Achieve Maximum Utilization in a Real-Time Computing System .... A. Frederick Rosene .... 299
Pattern and Character Recognition Systems-Picture Processing by Nets of Neuron-Like Elements .... L. A. Kamentsky .... 304
The Social Responsibility of Engineers and Scientists .... F. B. Wood .... 310
Emergency Simulation of the Duties of the President of the United States .... Louis L. Sutro .... 314
Can Computers Help Solve Society's Problems? .... Jerome Rothstein .... 323
The Measurement of Social Change .... Richard L. Meier .... 327
Simulation of Sampled-Data Systems Using Analog-to-Digital Converters .... Michael S. Shumate .... 331
FOXY 2: A Transistorized Analog Memory for Functions of Two Variables .... L. J. Kamm, P. C. Sherertz, and L. E. Steffen .... 338
A Time-Sharing Analog Computer .... John V. Reihing, Jr. .... 341
Computers-The Answer to Real-Time Flight Analysis .... Guenther Hintze .... 350
Industry's Role in Supporting High-School Science Programs .... J. O. Paivinen, Chairman .... 358


Foreword
The Western Joint Computer Conference returned to San Francisco, Calif., for the second
time in 1959. The dynamic growth of our industry, and of the computer industry in the
San Francisco area, is evidenced by the doubling of attendance since 1956.
The 1959 Western Joint Computer Conference presented a view of "New Horizons with
Computer Technology." These new horizons are appearing with the new circuits and devices
that are arising from the maturing research efforts of our growing businesses. Similarly, new
vistas are opening as a result of the knowledge gained through the application and utilization
of computers throughout our entire economy.
These annual conferences are intended to provide an opportunity for the professional
people in the computer industry to discuss their technical and business interests and accomplishments. The papers herein present the record of these discussions.

R. R. JOHNSON
Conference Chairman


New Horizons in Systems
DARWIN E. ELLETT†
IT HAS been our experience in the Air Materiel Command, and I imagine this is true of everyone engaged in data systems modernization and automation, that the further we advance, the greater potential
we find for achievement. A prime characteristic of our
objectives would appear to be mobility, for as we move
closer to each established milestone, we inevitably find
a new horizon beyond. So, in keeping with my subject,
I propose to review some suggested issues and opportunities for future exploration and development.
The field of data systems design and automation is at
once broad and complex, with its many interlocking
facets of concept, principle, methodology, equipping,
and technique, and its deep penetration into every
phase of management and operational activities.
Further, it is a very dynamic field. There are interacting
advancements afoot in all of these facets and all of these
phases, and there is, therefore, a wide range of new horizons available for examination.
In the light of this situation, my comments necessarily follow from a process of selection and limitation.
I have chosen those areas which appear to us to be of the
broadest application and the most basic concern.
As a first order of limitation, I shall confine my remarks to management systems from the viewpoint of
the Air Materiel Command. I shall further limit the
subject to a discussion of four steps in the management
process:

1) The expression of the mission of the Command.
2) The establishment of the work processes for the
accomplishment of this mission.
3) The establishment and organization of the management rules associated with these work processes.
4) The translation of these rules, where appropriate,
into the procedural detail required for machine
application.
Although I am basing my remarks on our experiences
in the Air Materiel Command, I think it is safe to say
that the fundamental management decisions are substantially the same in all enterprises. Therefore, the
issues upon which I shall build my case are offered as being of possible general application. In this connection,
since I am basically describing a possible course for our
own future efforts, my intent is simply to add whatever
stimulus I may to your thinking, rather than to propound ready-made answers.
The immediate reaction to my selection of specific
topics may be that I am outside the lawful hunting

† Colonel, USAF; Chief, Data Systems Plans Div., Air Materiel
Command, Wright-Patterson AFB, Ohio.

ground of the data systems designer. Traditionally, this
is true. I think most of us work under written or implicitly accepted legislation which decrees that the establishment and ordering of work processes is a separate
task, and that management retains the prerogative of
deciding and specifying how it shall manage.
Let it be understood at once that I am not about to
make a case for major adjustment in the organizational
chart, with the data systems planner to come into a new
ascendancy. Rather, I am making the proposition that
we cannot do all that we should, if in our data systems
work we go no deeper than a reappraisal and updating of
accounting and computational methodology, for this
level is to a large degree no more than a reflection of the
basic work flow itself.
In developing a case for positive action, I should like
to begin with a brief review of our own data systems
modernization program. Since early 1954, we have been
vigorously pursuing a command-wide program for modernizing data systems through the use of electronic data
processing and modern communications. Our primary
aim has been data systems integration. I shall not attempt to define integration to the satisfaction of everyone, but I should like to devote a moment or two to
pointing up what such an objective has meant to our
program. Simply stated, it has meant that our program
could be no less in scope than the command itself. While
we do pursue a policy of extensive decentralization, all
of our activities and operating facilities are themselves
completely interdependent; they are as indivisible as
airpower itself.
Let me further enlarge on this point by a brief description of our specific management responsibilities and
relationships.
The Air Materiel Command is the agency responsible
for materiel support of all Air Force organizations and
activities in accordance with established Air Force plans
and programs. This involves:
1) Managing the production and delivery into active
service of the air weapons themselves.
2) Determination of requirements and control of
world-wide distribution of the million and a half
line items of spare assemblies, parts, and supporting equipment and supplies in today's inventories.
3) Operation of a formidable array of industrial facilities which must physically accomplish the procurement, storage, transportation, and repair of
these items.
In the execution of these responsibilities, we do not
have a situation, as have many industrial concerns,
where each operating division, or plant, can be permanently identified with a separate product, or range of products, and can therefore engage in separate forecasting, workload planning, budgeting, production, and distribution. To the contrary, it is in the nature of the Air
Force mission, and the highly dynamic state of research
and development, that air weapons will constantly enter
and leave the active inventory, and during their life
cycles, will enjoy different relative precedences at different points in time. All command resources, in funds,
materiel of common usage, and facilities, must be applied in the manner most effective for appropriate support of all weapons, in accordance with the current
combat force structure, and the precedences of units
and weapons within this structure.
This condition brings to bear two severe criteria for
the data systems planner:
1) In the light of the very dynamic operational and
technological environment, there must be a very high
capability for rapid and coordinated reaction and adjustment down through all of the levels of materiel support programming and program execution, from weapon
production, through scheduling of spares acquisition
and allocation, to workloading of the industrial facilities.
2) The managers of all components of the command,
however organized and wherever located within our
fourteen major management centers across the country,
are jointly engaged in managing a common enterprise
in support of a common workload. This means that
there is complete interdependency of management decision and action, and therefore interdependency for
management information. Thus, we have, in fact, a
single management process and the need for a single,
all-inclusive data system.
With an awareness of this need, our data systems development program has moved out on this general
basis:
1) We have defined and published this objective of
systems integration, and the essential detail of planning
criteria it engenders.
2) We have initiated and are well along on a systems
design and application program which provides for the
identification and separate modernization of data subsystems.
There are about 30 of these subsystems within our
total data system. Examples range from the computation of requirements for aircraft spare parts, to inventory control of ground vehicles, to cost accounting and
labor distribution in the maintenance shops. The objective of command-wide integration implies the need for
command-wide procedural standardization, and our development program is geared to this end. A formal development project has been established for each subsystem. Each such project is guided by a Headquarters
steering group made up of representatives of all offices
of management or systems interest and monitored by a
representative of the Data Systems Plans Division.
Since our materiel operation is decentralized, most of
the projects cover depot-level applications. Therefore, much of the procedures design work, and the machine
programming, test, and initial computer application are
assigned to a "pilot" Air Force Depot in each case.
When this work is completed, all other Air Force Depots
concerned implement the standard product as developed. (Air Materiel Areas and Air Force Depots are
equivalents for our purposes here. Each of these activities holds world-wide materiel management responsibility for an assigned segment of the total inventory. In
addition, each is the site of a part of the industrial facility complex, and as such engages in materiel processing, that being primarily major repair and wholesale
storage, for the various materiel management elements.)
As one can see, under this approach we are taking the
data system apart, updating its components, and putting it back together again i,n a new composite. We are
doing this through central control against an end of
integration, and through decentralized but standard
modernization of components.
We can report that this approach works, and it works
rather well if the objective is to make rapid improvements in data handling. However, it will not quickly
achieve integrated systems, as will become evident in
this discussion. We have been attacking the subsystems
on a "pay-off-as-we-go" basis, in an order of priority
which affords the greatest rate of improvement in our
combat support capability and in resources management. Many of our priority efforts have reached the operational stage, with acceptable results; others are
nearly there.
About a dozen large-scale and 30-odd small- and
medium-scale computers are in active use at Headquarters and at our Air Materiel Areas and Air Force
Depots. We have in being a world-wide punch-card
transceiver net which links these centers and most of the
major Air Force Bases, and we are well along toward
significant further improvement in our communication
systems.
Beyond these immediate data systems improvements,
there has been increasing awareness of the need and opportunity for greater management integration. In recent
months, the command has undergone considerable adjustment in management organization, moving into an
aggregation of responsibilities which parallels closely
the present array of decision points and resulting data
flow.
While our data system work has been by no means
the sole (and quite possibly not even the primary) cause
of this reorganization, it is safe to say that it has contributed an important and positive influence. At the
same time, and notwithstanding these significant
achievements, if we are to attain and retain the ultimate
in an integrated system, there is much yet to be done,
and this brings us back to the further efforts I defined at
the beginning of this discussion.
Before I move into a brief review of these issues, I should like to point out that much of what I shall say is based primarily on personal thoughts and observations, and should not be regarded as a chronicle of what the Air Force now officially proposes to do. I am simply proposing to share with you, as fellow workers in the systems field, some of the ideas that have evolved in attempting to project my thinking beyond today's state-of-the-art and potential achievement in the near term.

Let us begin with my first step, "The expression of the mission of the Command." As presently stated, the Air Materiel Command does have an abstract single ultimate objective. In a very simplified version, it is: To produce and deliver air weapons and thereafter to support them through their life cycles, in accordance with Air Force plans and programs and priorities, and as indicated earlier, we have a single integrated enterprise to accomplish this mission. Although it has a multiplicity of functions, and a complex far-flung organization, it is nevertheless of single purpose and unified by nature. Now, in the modernization of data systems over the past few years, it has become increasingly evident that if the many decision points in this total system are to be equated, then the mission itself, to which they should all relate, must be stated quantitatively. There have, of course, been many advancements over the years toward a more precise definition of the end product of the work of the Command. For example, we have used and continue to use a desired in-commission rate for each of the various types of aircraft and missiles in the inventory. And to some extent, we have used a desired percentage of positive supply action as an objective of the enterprise. We use and have used an array of other objectives, such as a desired percentage of the aircraft and missiles in commission that are equipped to perform their wartime mission, or the sortie capability desired at a given point in time by various units, with an expression of the quantitative materiel requirements at a particular site to achieve this. None of these objectives or definitions of the products of the Command are entirely satisfactory when viewed from the standpoint of the data system designer, who is attempting to relate appropriately all decision points to each other and to the mission of the organization. This is so because the objective has never been defined as a single all-inclusive quantified expression. I realize that to achieve this is a tremendous undertaking, but without it, no final evaluation of all of the work processes as to their essentiality and quality can be achieved nor can the best supporting automated data system be attained. Our systems people are not now actively working in this particular area, but we realize that to do so will become increasingly more important with the passage of time.

Next, the establishment and description of the physical work processes necessary to the accomplishment of this mission. By referring again to our single ultimate purpose, and the unity of our enterprise, we come to this conclusion: The entire operation can be considered in total as a world-wide production line, composed of an unbroken series of work steps, from original input of Air Force programs to final delivery of the last item of materiel to the using forces. These are the action steps which march through all phases of planning, programming, and budgeting, and the direction, execution, and evaluation of results of these programs. And although there is a positive interrelationship between the work steps and the rules associated with them, I believe the general pattern of work must be planned before setting down the rules by which it will be directed and reported.

I do not know how many such steps would be involved in the typical situation, nor do I know how many there are in our own system, but obviously there are a considerable number, possibly several thousand. In whatever event, a description of these steps, in their logical sequence and without regard to current organization or division of responsibility, is a delineation of the live system itself, and is therefore the foundation for the most effective data system. Since the advent of the electronic computer, and of course even before that, we have all given much attention to the data needs of management, and have been much concerned with meeting these requirements. I am now suggesting that there is a more basic task at hand, if we are to explore the full potential of today's equipment and techniques. We must equip ourselves with a systematic view of the total work process in which the data system is grounded, which is at once the source of all its inputs and the destination of all outputs. For the relatively small and simple enterprise, operating at one location, this may be no great problem; all of the elements may literally be already in sight. However, for the large-scale activity, which has expanded with the industrial revolution and has tended to grow up in segments, through the necessary multiplication of management, this is no small task.

I can perhaps elaborate a bit on the true extent of such an effort, and further establish our need for this systematic view, by expanding the discussion of a work step. We find that in the operating system, the work required is a mixture of data processing and materiel processing, interwoven and interlocked. In the early stages of the cycle, particularly those involved in planning, programming, and budgeting, only data processing is involved. However, beyond the first point at which materiel is committed to manufacture, there is an interdependent blend of materiel processing and data processing in terms of work flow. From this observation, we have drawn the conclusion that data processing is so closely related to the physical work process that it is inseparable of design if the ultimate in integration is to be achieved. This alone is sufficient reason for a view which includes all work steps of whatever specific nature.

And this is by no means the whole story. If we follow this logic a bit further, we come to a rather startling revelation. While we have said that data processing is an integral part of the total work process, we are still confronted with the fact that the basic job is to manufacture, deliver, store, repair, use, and dispose of materiel. To go back to one of our earliest axioms: data processing is not an end in itself, paper can never be the final product. This means that data processing serves only to provide a control and a report; a control to direct a physical action with reference to the materiel itself, and a report of action accomplished which in turn leads to direction of the next succeeding materiel work step, another action, another report, and so on throughout the entire system. Of course, there is a very definite layering effect in data processing. The execution of one data step may lead only to another, down through any number, but the whole series must eventually emerge into direction of a materiel processing action in our business or it has no purpose.

If all of this is so, then it must surely follow that much of the complexity and volume of data processing must be charged to the number of separate control directions to be given and reports to be rendered, and the extent of their repetition.

Therefore, can we not postulate from this that the road to data systems simplification must finally lead through simplification of the materiel work process? The simpler and better organized the ultimate job, the simpler the control and reporting system.

Beyond this lies factory automation, which I think most of us have historically regarded as a not-too-familiar cousin. As the physical work itself is automated in longer and longer runs, there must be a corresponding decrease in the number of points at which individual energizing instructions must be entered from the central management data system. It should be possible in the foreseeable future to introduce original instructions at point of entry of materiel into the automated production run at hand, and thereupon, to pass control to the local automatic governing mechanism, with no need for report or new instruction until the materiel is delivered at the other end of the line. The longer and, therefore, the fewer number of separate production runs, the lesser demands upon the central management data system.

Here again a case for reviewing the steps involved in both data processing and materiel processing as a single entity, with improvement in one basic to performance in the other.

Our case can be extended through examination of one final issue in this cycle of events. By automation of the materiel production process in long sequences, we do, as I have indicated, reduce the need for step-by-step control from the central management data system. But the fact remains that this control must go on. We have simply transferred it to the local automatic control device for the production run in question. Now, two facts become self-evident. First, this local control device is itself by nature a data processor, moving through a series of orders and counts, which are quantitative of expression, and second, it is energized by the input of production schedules which can only come from the central management data system. Again, the ordering and processing of the work itself is basic to a correct design of the data system, and automation of materiel production is by definition to a large degree automation of management data processing.

In summary on this point, we come to the inevitable conclusion that we must eventually cease to view the data system as something apart; we must step up to a plane from which we can attack the entire operating system as an inseparable entity.

I should like to turn now to my third point: the establishment and organization of management rules associated with the work processes. This subject reflects to a considerable degree the same issues as those we found in reviewing the subject of work steps, for a management rule as I am using the term is basically an extension of the statement of a work step to include the conditions, actions, and exceptions which will apply under each set of possible circumstances. For each work step there must be governing management rules, and in composite these steps and their rules are a procedural extension of management objectives, plans, and policies. The incidence of management rules will, of course, depend entirely upon the incidence of variable conditions. In some situations, a standard and unvarying condition will exist and the only management rule needed is an implied, "Do this work step." Such work steps are most frequently found in straight production-line situations where there is no choice and therefore no local decision.

On the other hand, there are of course many situations where multiple alternatives present themselves. Since each decision will have impact on all of the steps that follow, and, therefore, on the functioning of the entire system, it is absolutely essential that all possible selections must be carefully established and described within a frame of reference which can be no less than the total operation.

I think I can develop this point and perhaps point up an interesting area for exploration by categorizing these management rules in two groups.

The first group would include all rules which are made necessary by the nature of the enterprise itself. It would hardly be realistic to assume that any large-scale activity could be straight-lined from original plan and policy to final accomplishment, with absolutely no need for intermediate decision and selection of alternatives along the way. There are too many variables and unpredictables in the surrounding environment alone to permit such a condition. I believe I can demonstrate this rather obvious fact by a simple example, in terms of our own Air Force materiel support, and in terms of work steps and management rules which are a part of data processing.

In the course of our accomplishment of materiel support, one of the major sequential work steps might be, "Distribute this commodity from storage point 'A' to customer 'B'." Obviously, such a statement immediately begets subordinate steps. One might be, "Process the customer's requisition." Now, as you can readily see, we have stepped off into the field of data systems, for a good part of processing a requisition is data processing.

We now come therefore to work steps and management
rules which are of basic importance to the data system
designer. One such work step might be, "Compare quantity requested with inventory balance." Here the management rules set in. For example, if the balance is adequate, take one action. If there is stock, but this order
brings the balance below minimum levels for retention,
take another. If there is no stock, there is a third type
of action involved. No matter how precisely future systems may permit the planning of requirements, unpredictable events in manufacture, delivery, or usage may
operate to cause this condition. Therefore, if automation
is to occur, management must make rules in advance to
cover all situations that may arise due to the relationships of the enterprise to its surrounding environment,
and management must assume responsibility for failure
in this area.
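
The three-way rule above is exactly the kind of statement that must eventually be reduced to procedural detail for machine application. As a rough modern illustration (a minimal sketch only; the action names, quantities, and retention threshold are invented, not drawn from any AMC procedure):

    # Illustrative sketch of the three-way stock rule described above.
    # Action names and all quantities are hypothetical.

    def requisition_action(quantity_requested, balance, retention_level):
        """Return the management action for one requisition line."""
        if balance >= quantity_requested + retention_level:
            return "ISSUE"                 # balance adequate after the issue
        if balance > 0:
            return "ISSUE_AND_REPLENISH"   # stock falls below the retention level
        return "BACKORDER"                 # no stock: the third class of action

    print(requisition_action(40, 100, 25))  # ISSUE
    print(requisition_action(40, 50, 25))   # ISSUE_AND_REPLENISH
    print(requisition_action(40, 0, 25))    # BACKORDER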
The second group would include what we might call
deficit rules; those made necessary by faults in design of
the system itself.
In a rather obvious example, one element of the
data system might prescribe a work flow which terminates in the rendition of reports from a number of similar activities. Another element of the system must receive and act upon these reports. If these reports are in
standard format and content from all reporting agencies, and as needed by the receiving activity, then it is
possible to proceed immediately to the next work step
of operational value. But, if reports are not consistent,
there is a need to provide management rules and further
work steps which cover actions to be taken to discover
and eliminate the various discrepancies. This might involve returning incorrect or incomplete reports to the
sender, and suspending the remainder. It might involve
making assumptions which would permit the process to
continue. In any event, it causes the establishment and
building into the system of a set of management rules
which serve no more than a deficiency purpose, and for
which responsibility must rest with the systems designer
himself.
If we design and create the system in segments, there
is every reason to believe that with due care we shall
eliminate the internal need for such deficit rules within
each segment. However, with anything less than perfect coordination of detail and rate of progress among
those designing the various segments, such deficiencies
will inevitably occur. The direct result: inefficiencies in
the segments, cumulative inefficiencies in the entire system, and finally inefficiency and loss of effectiveness and
economy in meeting the basic mission. I believe that at
this point we can add this review of management rules
to our earlier observations on work steps, and draw a
rather obvious conclusion: to secure the most effective
results, we must design and bring into being a system
which extends from original input programs to final
product delivery in an unbroken series of work steps and
governing management rules, with physical materiel
processing and data processing steps and rules, and

their automation, regarded as inseparable and completely interdependent. The composition of such a
system must be limited entirely to rules and steps
which actually advance the work at hand; and as a
first order of business, the system deficiencies must be
sought out and eliminated. This latter statement raises
a further issue: the efficient organization of these work
steps and rules. We have just observed that steps and
rules entered by the systems designer as a deficiency action must be eliminated. This leaves the province of
work step establishment and type one rule formulation
entirely to management, and this is as it should be.
However, once this has been accomplished, the system
designer must step forward and assume responsibility
for their most effective organization. This division of
effort has always been true of manual methods; it is
even more pertinent as we move deeper into process
automation. This organization comes to final fruition in
our detailed procedural flow and our computer routines,
and the outer limits of effective automation will depend
upon our ability to absorb and consolidate these routines in larger and larger composites.
Thus, we find ourselves in a situation where we must
pursue an objective of attaining and maintaining complete systems integration, accompanied by maximum effective automation. We must ask all of the many elements of management to establish work steps and formulate rules in the light of all others, and we must ask our
systems designers to organize all of these steps for the
most effective execution.
Given this statement of need, we must now cast about
for a control vehicle through which we can approach this
task, and here we come upon a familiar subject. I refer
to the various directive and procedural manuals and
similar publications which are an integral part of any
management system, in which all of these work steps
and rules are traditionally expressed, and which must
continue to serve in one form or another as our documents of reference.
At this point, we are confronted with very real considerations which are at once friend and foe: the English language, and the human mind.
You are all well aware of the limitations of the English language as a device for definitive and precise communication in procedural publications. If this is true in
a limited area, consider the problems involved in attempting to formulate, correlate, and record with precision, all work steps and management rules for the
whole of a large-scale system. And consider further the
very limited ability of the human mind to comprehend,
analyze, correlate, and evaluate all of these management
rules under such conditions.
Fortunately, we believe that there is the very crude
beginning of a solution at hand, one which requires a
great deal of imaginative and painstaking effort, but
which offers great potential. This is central and standard
control of the body of publication knowledge. I am
sure that this approach has occurred to others in the field of data system design, probably including at least
some of you here today.
One of our consulting firms has just completed considerable initial research in this area, utilizing some of
our existing publications for analysis. The results indicate that while this is certainly no simple task, it is entirely possible, and we are now preparing to extend our
efforts into further research and probably later into a
major attack on some of our more critical areas, specifically those which are identified with our priority
data subsystem development projects.
This attack is guided by four basic principles:
Principle No. 1-Work steps and management rules
can be removed from the usual narrative form and expressed as independent entities, often in tabular form,
and without consideration of data systems problem organization or the type of processing equipment to be
employed. This principle in itself will work to bring
greater precision to the formulation and application of
management rules. Further, and most important, such
an expression will serve to close partially the language
gap between management and data systems designers.
There is some relief from the need for joint logical flowcharting; management, as the creator of steps and rules,
need not concern itself directly with the most efficient
organization of rules for machine processing.
Principle No. 2-Techniques are required which will
permit the efficient processing, correlation, and control
of the total body of publication knowledge, as a basis
for coordinated formulation of all work steps and management rules. This can be accomplished through the
numerical coding of each English language information
element, set, and system as they are used in procedural
publications to describe these steps and rules. This permits the correlation of English words and phrases which
are dissimilar of expression but have common meaning,
and their reduction to specific and standard terms,
which are in turn susceptible to consolidated filing and
processing.
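
As a rough illustration of this numerical coding (a minimal sketch; the phrases, standard terms, and code numbers below are invented):

    # Illustrative sketch: dissimilar English phrasings reduced to a single
    # standard term with a numeric code. All entries are hypothetical.

    STANDARD_TERMS = {
        "requisition": 101,
        "inventory balance": 102,
        "retention level": 103,
    }

    SYNONYMS = {  # free narrative phrasings mapped onto standard terms
        "customer order": "requisition",
        "demand document": "requisition",
        "stock on hand": "inventory balance",
        "quantity in storage": "inventory balance",
        "minimum level for retention": "retention level",
    }

    def code_phrase(phrase):
        """Reduce a narrative phrase to its (standard term, numeric code) pair."""
        term = SYNONYMS.get(phrase.lower(), phrase.lower())
        return term, STANDARD_TERMS.get(term)

    print(code_phrase("Stock on hand"))    # ('inventory balance', 102)
    print(code_phrase("Demand document"))  # ('requisition', 101)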
Principle No. 3-There can be established a central
information repository containing all steps and management rules, expressed in terms of standard elements and
sets of information, and indexed to all applicable components of the system, and to related publications. This
library would serve as a control filter between management policy and the data system, by providing evaluation of the total impact of a proposed change to any rule
or information element, a must in the interests of sustained integration.
Principle No. 4-Such a file is itself a very likely candidate for automation. This assumes a rather extensive
file, and the need for frequent reference, and it has been
our experience that our own system would certainly
meet these criteria.
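
In the same illustrative spirit, the central repository of Principles No. 3 and No. 4 might be sketched as an index from each coded element to the subsystems and publications that use it, so the total impact of a proposed change can be read out before the change is made (all names and entries below are invented):

    # Illustrative sketch of a change-impact query against a central rule file.
    # Subsystem and publication names are hypothetical.

    RULE_INDEX = {
        102: {  # "inventory balance"
            "subsystems": ["spares requirements computation", "depot stock control"],
            "publications": ["supply manual (hypothetical)", "depot SOP (hypothetical)"],
        },
        103: {  # "retention level"
            "subsystems": ["depot stock control"],
            "publications": ["depot SOP (hypothetical)"],
        },
    }

    def impact_of_change(code):
        """List everything touched by a change to one coded information element."""
        entry = RULE_INDEX.get(code, {"subsystems": [], "publications": []})
        return entry["subsystems"], entry["publications"]

    subsystems, publications = impact_of_change(102)
    print("Subsystems affected:", subsystems)
    print("Publications to revise:", publications)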
In summary to this point, we have proposed a quantitative expression of the objective, the restating of all
contributing work steps and management rules, in the
most logical and effective sequence across the entire system, their conversion to more precise and standard
terms, and a central file and process which will permit
management and systems designers to review a change
to any sequential step or rule in relation to all others,
an impossible feat in today's large enterprise, but possibly not so in the future.
Finally, we come to our last new horizon for today's
consideration: automatic machine programming. This
subject may seem a bit apart from those I have been
discussing, but I believe I can establish a close liaison.
As you all know, programming is a very difficult task requiring a very scarce skill, and it is one of the major
limiting factors of our rate of progress in computer application. In the light of this situation, we, like other
computer users, have moved actively into the field of
automatic, or general, programming. We define this as
a technique whereby the computer is instructed in basic
English verbs, and in turn the machine codes instructions for itself when fed appropriately designed data
to be processed. The criticality of this area has been
such that we are applying some of our best talent in a
major effort at improvement. While we are still far from
final achievement, we have made considerable progress,
and are quite optimistic about future potential. In addition to practical relief from the programming task,
achievement here has been considered essential to insuring the necessary degree of data systems standardization
and integration among our computer centers.
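
The flavor of such a technique can be suggested with a toy sketch (the verb set and target mnemonics are invented; this does not describe any actual AMC compiler):

    # Illustrative sketch: a tiny translator from basic English verbs to
    # pseudo machine orders. Verbs and mnemonics are hypothetical.

    VERB_TABLE = {
        "READ":    "LDA",  # load the named field into the accumulator
        "ADD":     "ADD",
        "COMPARE": "CMP",
        "WRITE":   "STO",  # store the accumulator into the named field
    }

    def translate(statement):
        """Compile one 'VERB operand' statement into a pseudo machine order."""
        verb, _, operand = statement.partition(" ")
        opcode = VERB_TABLE[verb.upper()]  # unknown verbs raise KeyError
        return f"{opcode} {operand.upper()}"

    for line in ["READ balance", "COMPARE quantity", "WRITE action"]:
        print(translate(line))
    # LDA BALANCE
    # CMP QUANTITY
    # STO ACTION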
Beyond these direct systems benefits, let me now
point up the close relationship between this field and
those I discussed earlier. The specific situation is this:
if we can achieve a high degree of direct correlation between the input to automatic programming and the
standard expression of work steps and management
rules, we will have forged the final link in the chain, and
will have a highly controlled and responsive means for
progressing from original management policy and decision to procedural implementation. We can then close
the circuit by providing test decks and other analytical
means for insuring that the information system has in
fact produced the results desired by management in
original decision and directive.
This, then, is our view of where we must go in the
future if we are to reap the full benefits inherent in the
use of advanced equipments and techniques, and are to
finally achieve the best in management control through
systems integration and automation.
In summary and analysis of the issues I have been
discussing, it occurs to me that the basic substance has
been this: The future of our systems work holds almost
unlimited potential for gains in industrial efficiency, and
that resulting benefits will continue to justify the expenditure of the required resources to achieve them.
However, I cannot leave the subject at this. I suggest
for your sincere consideration that if we as systems
people proceed solely within this goal of efficiency, we
are guilty of introspection; we may miss completely the
basic point and true objective of all our efforts to improve the welfare of human beings. If there is a real reason for promoting this efficiency, it can be none other
than the betterment of human living, whether through
better defense of the nation, improvement of the
standard of living, relief from manual drudgery, or
whatever the specific and valid aim.
In pursuing our goals of greater integration and
automation by attempting to predict all eventualities
and to prearrange solutions, we as systems workers

must never lose sight of this purpose, lest in our enthusiasm we do harm to our real intent. We must assume
major responsibility for insuring that every step forward meets the criterion of even greater human contribution all up and down the line, and therefore even
greater human dignity. Progress in systems design and
automation must always be measured within this larger
perspective, and must forever be conditioned, and perhaps even limited, by this governing need.

A Multiload Transfluxor Memory
D. G. HAMMELt,

w.

L. MORGANt,

INTRODUCTION

N the field of computing machinery there is an everincreasing demand for the develop men t of randomaccess digital memory-storage units that operate at
higher speeds and provide greater storage capacities. In
1951 a digital memory-storage unit that had a capacity
of 1000 words and performed the basic memory cycle in
about 200 p,sec was quite sufficient to satisfy the needs
of a large-scale data-processing system. Today memory
units of large-scale data-processing systems are being
designed to provide as much as 90,000 words of storage
capacity and to perform the basic memory cycle in
about 4 p,sec. The trend is obvious, but the means of
achieving desired results are often cumbrous.
The development of a superior memory-storage device is presently a major consideration of many prominent research activities. An important feature of the
most promising schemes is the ability of the storage device to perform a nondestructive readout. This means
that the state of the storage device is not destroyed
whenever a readout is performed. This is not the case
with present-day magnetic core memories which destroy
the stored information when the core is read, and consequently require that the information be written back
into the memory if retention is desired. This in effect
gives the nondestructive storage medium a 2:1 speed advantage over the destructive storage device.
The read/write speeds of the memories being developed with present-day techniques are approaching
their maximum, and any further increases can be
achieved only with decreased reliability and increased
costs. These memories are capable of executing only one
access at a time and therefore restrict the digital computers to functioning sequentially. The ability of a
memory to perform more than one access simultaneously would be a major advancement in the computer
field; this would be equivalent to increasing the read/
write speed of the memory cycle. But a much more significant aspect of this mode of operation is that it
would make possible the practical realization of truly
parallel computers, computers capable of simultaneously and independently sharing the same memory or
memories and hence able to communicate at the computer speed. A storage system which holds promise of
fulfilling all these desirable features is a multiload transfluxor memory.
The multiload transfluxor is a multiapertured magnetic memory element employing the same type of high-remanence ferrite used in the ordinary memory cores.
Thus the transfluxor is built upon a strong foundation
of practical and theoretical ferrite core knowledge. The
wealth of experience that has been accumulated during
the development and use of coincident current magnetic
core memories is applicable. In addition the transfluxor
offers many properties heretofore unobtainable.
THE TWO-HOLE TRANSFLUXOR MEMORY

The original Rajchman and Lo transfluxor is a two-hole ferrite core.¹ The following is a brief explanation of the device. (See Fig. 1.)

¹ A complete explanation of the device may be found in J. A. Rajchman and A. W. Lo, "The transfluxor," PROC. IRE, vol. 44, pp. 321-332; March, 1956.
The large hole is used for the writing operation and
the small hole for reading. There are two parts to the
writing cycle: a block pulse and a set pulse. The block
pulse is a large pulse, either positive or negative, which
saturates the entire core in one direction. When the core
is blocked, the flux direction on both sides of the small
hole is the same [Fig. 1(b)]. The set pulse is a restricted smaller pulse of opposite polarity which reverses a portion of the flux in leg 1 and all the flux in leg 2. This is the set condition of the core [Fig. 1(c)]. The transfluxor is read by applying sine waves or pulses of alternating polarity to the small hole to sense the presence or absence of a set condition. If the core is set, the following occurs. First, a prime pulse, which produces a clockwise flux path around the small hole [see Fig. 1(d)], is applied to leg 3. A drive pulse is then applied to leg 3 to produce a counterclockwise flux path. The reversal of the direction of the flux path around the small hole by the prime and drive pulses generates an emf which is sensed by a winding on leg 3 [see Fig. 1(e) and 1(h)]. The set condition of the core and subsequent readout of a sense voltage is the equivalent of writing and reading a "1."

HALF PRIME
DRIVE

a

HALF BLOCK
SET

a

I~+-~--o

@
I

1 (g)-1(i) ]. Thus the generation or nongeneration of a
sense voltage is the equivalent of reading a "1" or a "0"
from the core.
The applicable addressing techniques for transfluxor
memories are similar to those used for magnetic core
memories. Each of the two transfluxor holes may have
its own set of selection wires as shown in Fig. 2. Because there are separate addressing systems for both
reading and writing, and because there is negligible interaction between aperture signals, it is possible to
write in one location of the array while simultaneously
reading in another location.

[Fig. 1 - Transfluxor operation: numbering of legs, blocked and set flux patterns, and priming/driving of set and blocked cores.]

[Fig. 2 - Coincident current transfluxor array.]


An important feature of the transfluxor is its ability
to perform a nondestructive readout. This ability of the
transfluxor memory system offers significant advantages to the digital computer. Because of the nondestructive nature of the transfluxor, there is no need to restore
the transfluxor to its original state after interrogation.
This results in the elimination of a major portion of the
read/write cycle normally associated with memory
systems, and the timing requirements are appreciably
simplified. In effect the speed of the memory cycle is increased by approximately 2:1.
This nondestructive read characteristic is also significant in that there are no ruinous effects when a transient
error occurs during a read operation. True, the information in a memory location may be read incorrectly due to transient noise, but this does not affect the contents
of that memory location since the data do not have to
be rewritten from the output. A computer that has a
nondestructive memory can easily cope with transient
noise by immediately repeating the read operation upon
detection of a read error. The ability to read from the
memory without destroying its contents is a desirable
feature especially in real-time computer applications
where the execution of the stored program must be reiterated indefinitely.
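A sketch of the retry discipline this permits follows. The helper names (MEMORY, read_word, parity_ok) are hypothetical, and the parity check merely stands in for whatever error detection the computer actually provides:

```python
# Transient-error recovery made possible by nondestructive readout:
# on a detected read error, simply reread the same location.
import random

MEMORY = {0o17: 0b101101}                      # address -> stored word (even parity)

def read_word(addr):
    word = MEMORY[addr]                        # contents are not disturbed by reading
    if random.random() < 0.01:                 # occasional transient noise on the sense line
        word ^= 1                              # corrupts the *output* only, not the store
    return word

def parity_ok(word):
    return bin(word).count("1") % 2 == 0       # even-parity words assumed for the example

def read_with_retry(addr, tries=3):
    for _ in range(tries):
        word = read_word(addr)
        if parity_ok(word):
            return word                        # a reread costs one cycle; no rewrite needed
    raise IOError("persistent read error")
```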


THE MULTILOAD TRANSFLUXOR

One of the attributes of the transfluxor is signal isolation between the windings of the small hole and the
large hole. Through the proper physical placement of
additional small holes in the ferrite body and a separate
set of addressing wires and sense windings in each small
hole, it is possible to obtain more than one independent
output from the transfluxor without seriously affecting
the signal isolation between holes. Thus it is possible
to consider three, four, five, or more holes in the transfluxor, offering the designer an extremely versatile memory storage component. This new component is called
the multiload transfluxor.
For the purpose of discussion, consider a multiload
transfluxor with a large hole and two small holes. If only one of the small holes is used for readout, the performance of the multiload transfluxor is identical to the two-hole transfluxor. A possible configuration of a dual-load transfluxor and its flux patterns with only one of the small holes being pulsed for readout is shown in Fig. 3(a)-3(d).
The cores shown in Fig. 3 are pulsed by the coincident-current address method. This means that two
pulses, one on the X wire and one on the Y wire, are
needed to supply the total current required to generate
the proper flux condition in the core. These flux patterns
are nearly the same as those for the more conventional
transfluxor. If the prime-drive cycle is simultaneously
initiated in both small holes, the resulting flux distributions are shown in Fig. 4(a) and 4(b). Notice that there
is minimum interaction between the holes. This permits
the timing of the priming and driving pulses for each
hole to be independent of the other.
The multiload transfluxor as used in a memory system has a separate set of addressing wires and sense
windings in each small hole. Each set of wires is tied to
independent address registers and loads. All address
registers are capable of addressing randomly any core
in the array and any core may be addressed by all the
registers. Therefore the information stored in one core
may be read out simultaneously to any or all loads, and
any number of cores may be read independently into
different loads.
With only one large aperture in each multiload transfluxor, there can be only one write/addressing source for
each word. This limitation is imposed on the large aperture because each inhibit winding is common to every
bit in its memory plane; so it can only represent one addressing source. Of course, multiple writing could be
made possible by eliminating the inhibit winding, but
this scheme involves many more control circuits. A
simple and quite satisfactory method of accomplishing
multiple writes is to use a split-memory system. The
split-memory system is discussed later in this paper.
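Functionally, then, such a memory behaves as a single-write, multiple-read store. The following behavioral sketch (our own abstraction, with invented names) illustrates the access pattern rather than the circuitry:

```python
# Behavioral sketch of a multiload memory: one write port (the single large
# aperture per word) and several independent read ports (one small aperture
# each). Reads are nondestructive and mutually independent.

class MultiloadMemory:
    def __init__(self, n_words, n_read_ports):
        self.words = [0] * n_words
        self.n_read_ports = n_read_ports

    def write(self, address, value):           # the single write/addressing source
        self.words[address] = value

    def read(self, port, address):             # each port has its own address register
        assert 0 <= port < self.n_read_ports
        return self.words[address]             # contents left undisturbed

mem = MultiloadMemory(n_words=16, n_read_ports=2)
mem.write(3, 0b1010)
# Two independent computers read the same (or different) locations at once:
assert (mem.read(0, 3), mem.read(1, 3)) == (0b1010, 0b1010)
```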
There are two techniques for selecting cores or core
registers mentioned in this discussion: coincident-current selection and external word-selection (end firing).

[Fig. 3 - Flux patterns of a dual-load transfluxor with only one small aperture being pulsed; single-load readout.]

[Fig. 4 - Flux patterns of a dual-load transfluxor with both small apertures being pulsed; double-load readout.]

The coincident-current technique is illustrated in Fig.
5. It is used here to address three cores simultaneously
from three independent sources.
The wire leads in the array are designated by a code
which describes the addressing source, the addressing
axis, and the row or column number of that axis. For
instance, LY3 indicates that the origin of the lead is
the third column in the Y axis of the L addressing
source.
In this figure the U addressing source transmits half-current priming signals on the UX4 and UY3 leads. Accordingly, the upper apertures of cores 41, 42, 44, 45,
half current is not sufficient to significantly disturb the
flux pattern around the small aperture. This is a basic
requirement of any coincident-current magnetic-core
memory. However, core 43 receives half currents on
both the UX and UY axis leads and the coincident
summing of these currents causes a flux reversal in the
vicinity of the small aperture. Thus, core 43, and only
core 43, is primed for reading by the U addressing
source. Similarly, the L addressing source selects core 13
for reading also, while the M addressing source selects core 22 for writing.
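The selection arithmetic can be sketched as follows, assuming an idealized sharp switching threshold (the physical requirement is only that a half-current not significantly disturb the flux); the threshold and current values are arbitrary illustrative choices:

```python
# Numeric sketch of coincident-current selection in one memory plane.
# Each lead carries half the current needed to disturb a core; only the
# core at the intersection of an energized X lead and Y lead receives a
# full-select drive.

THRESHOLD = 1.0            # mmf needed to switch flux (arbitrary units)
HALF = 0.5 * THRESHOLD     # each selection wire carries a half-current

def selected_cores(x_leads, y_leads, n=5):
    """Return the cores (row, col) whose total drive reaches threshold."""
    picked = []
    for row in range(1, n + 1):
        for col in range(1, n + 1):
            drive = HALF * ((row in x_leads) + (col in y_leads))
            if drive >= THRESHOLD:
                picked.append((row, col))
    return picked

# The U source pulses leads UX4 and UY3: core 43 alone is primed.
assert selected_cores(x_leads={4}, y_leads={3}) == [(4, 3)]
# Independent sources address other cores in the same plane at the same time,
# e.g., L selects core 13 for reading while M selects core 22 for writing.
assert selected_cores(x_leads={1}, y_leads={3}) == [(1, 3)]
assert selected_cores(x_leads={2}, y_leads={2}) == [(2, 2)]
```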

[Fig. 5 - The effects of simultaneously pulsing three cores; not shown: inhibit wires, sense wires to loads 1 and 2. This is only one plane of the memory.]

[Fig. 6 - Switch-driven memory employing transfluxors.]

Design and Analysis of MAD Transfer Circuitry

BENNION AND CRANE

The case NT > NR is important as a means for obtaining flux gain when bi-directional coupling loops are not required. Operation with unity-turns-ratio, i.e., NT = NR, is discussed in Sections V and VII.
In order to simplify the mathematical relations, all circuit descriptions will be for symmetrical unity-turns-ratio coupling loops. The extension to the more general case NT > NR is relatively straightforward.

[Fig. 5 - Switching model for basic coupling loop.]
Advance Current Range
It is well known that magnetic circuits that operate in the region of threshold are inherently slower and less tolerant to clock current variations than would be similar circuits not so limited. It is important therefore to determine and to take advantage, insofar as possible, of all techniques for improving these allowable operating ranges.
Relations for advance current range are derived below for the coupling loop circuit of Fig. 3, using a very simple model in which we assume 1) that the one, zero φT-FT curves have vertical steps at thresholds F1 and F2 (Fig. 5); 2) that if a transmitter element is in the zero state, then the circuit must be limited so that as a result of the Advance pulse, the receiver is not brought over its Clear state threshold F2; and 3) that if the transmitter is in the one state, then as a result of the Advance pulse, the receiver element and transmitter element are completely switched. For this latter condition, the receiver must receive a net drive of at least F2 and the transmitter a net drive of at least F1. Although this model is extremely inadequate (see Sections VII and VIII), it is very useful for comparative estimating purposes.
For the circuit of Fig. 5(a), the maximum value of Advance current IAmax is determined by the zero transfer condition. In this case, IAmax = 2(F2/N), where F2/N is the current required in each branch to just bring its corresponding element to its Clear state mmf threshold F2. The minimum value of Advance current, IAmin, is determined by the one transfer condition. The equivalent circuit of Fig. 5(d) can best be used for visualizing the relations, so that

$$I_{A\min} = \frac{F_1 + F_2}{N}.$$

Therefore, the percentage range R for the Advance current, defined as

$$R = \frac{I_{A\max} - I_{A\min}}{I_{A,\mathrm{av}}} \times 100,$$

is equal to

$$R = 2\left[\frac{F_2 - F_1}{3F_2 + F_1}\right] \times 100. \qquad (1)$$

For comparative purposes, it is interesting to consider the limiting case when F1 = 0. Under these conditions, the limiting value of range, R0, is

$$R^0 = 67 \text{ per cent.} \qquad (1a)$$
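These relations are easily checked numerically. The following sketch assumes the same vertical-step threshold model, with F1 and F2 in arbitrary consistent units; it also evaluates the threshold count n of (2) below:

```python
# A small check of (1), (1a), (2), and (2a) under the vertical-step model.

def advance_range(F1, F2, N=1):
    i_max = 2 * F2 / N                     # zero transfer: each branch just at F2
    i_min = (F1 + F2) / N                  # one transfer: receiver F2, transmitter F1
    i_av = (i_max + i_min) / 2
    R = (i_max - i_min) / i_av * 100       # per cent range, as defined in the text
    n = (3 * F2 + F1) / (2 * (F1 + F2))    # switching thresholds at I_A,av, eq. (2)
    return R, n

R0, n0 = advance_range(F1=0.0, F2=1.0)
print(round(R0), n0)                       # -> 67 per cent and 1.5 thresholds
```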

In the sections to follow, circuits that exhibit significantly greater range will be introduced and the expressions for range determined for these may be compared with the relation derived here.
It may be noted that in these circuits only the Advance current range is of concern, since the Clear current range is essentially unlimited, as long as it is above the minimum value required for adequate clearing of the elements.

Switching Speed
It is well known that the rate of switching of a "square loop" magnetic material is approximately proportional to the amount of (excess) drive, over and above the threshold value. Thus, two thin rings (of the same material) of radii r1 and r2 would switch at the same rate if driven with mmf's in the ratio r1/r2.
For the switching problem at hand, we will estimate the switching speed with the Advance current set in the
middle of its calculated range, i.e., with the Advance current set at IA,av = (IAmax + IAmin)/2.

With the transmitter and receiver windings directly in parallel, then at every instant during the switching process,

$$N_T\frac{d\phi_T}{dt} = N_R\frac{d\phi_R}{dt},$$

where φT and φR are the switched fluxes in the T and R elements, respectively. With equal turns, NT = NR, the branch currents will divide so as to cause the same number of thresholds, n, in the transmitter and receiver. This will result in equal switching rates, i.e., dφT/dt = dφR/dt. Furthermore, according to the φ-NI model of Fig. 5, IT and IR will be constant during the switching process (see Fig. 23). The number of thresholds in the transmitter is equal to nt = N·IT,av/F1 and in the receiver is equal to nr = N·IR,av/F2, where IT,av + IR,av = IA,av. By setting nt = nr = n, we find the relation

$$n = \frac{3F_2 + F_1}{2(F_1 + F_2)}. \qquad (2)$$

In the limiting case, F1 = 0, then

$$n^0 = 1.5 \text{ thresholds.} \qquad (2a)$$

This value of n0 could easily have been predicted by noting that, for the ideal case assumed here, during one transfer the entire Advance current flows into the receiver. Thus, n0 = N·IA,av/F2.
The relations for range, for n, and for switching speed proportional to n − 1 derived here are listed in Table I, along with corresponding relations for the circuits derived below. (It should be kept in mind that speed, 1/T, where T is switching time, is proportional to the excess number of thresholds of drive and hence to n − 1, since n was defined as the total number of thresholds.)

[Table I - Summary of range and speed relations derived in text. For each of the circuits of Figs. 5, 9, 10, 11, 12, and 13, the table lists the range R (in fraction of 100 per cent), the applicable condition, the number of switching thresholds n measured at IA,av, and the limiting values R0 and n0 − 1 for F1 = 0. The entries for Fig. 12 are the same as those for Fig. 10 with N replaced by N′ + 2NB′ and NB replaced by NB′; the entries for Fig. 13 are the same as those for Fig. 10 with N replaced by 2(N1 + N2) and NB replaced by N1.]

III. SCHEMES UTILIZING ONLY A SINGLE WINDING IN THE INPUT AND OUTPUT APERTURES

Motivation

It is clear from the above relations for R and n that one way to improve the speed and advance current range is to make F2 large relative to F1. This implies the use of a large diameter element, compared to the diameter of the small aperture. However, a large element is undesirable for many obvious reasons. It is fortunate, though, that the equivalent of a large element can be obtained by appropriate biasing arrangements. But even further, the biasing arrangements can improve the operation even beyond what would be expected merely of a "larger" element.
Another way to improve the speed, and in general the
range as well, is to provide as much drive as possible
about the output aperture of the transmitter. Under the
ideal condition of threshold F1 = 0, it requires no current to switch the transmitter, and IR = IA during one transmission. This is equivalent to saying that all
of the mmf appearing about the transmitter output
aperture in the zero state is transferred to the receiver
via the coupling loop during one transmission.
Consider again the elementary coupling loop of Fig. 6
with IAmax applied, Fig. 6(a) and 6(b). During zero transfer, both the transmitter and receiver are stressed with a
threshold mmf F2. During one transfer, at least ideally, the receiver becomes stressed by 2F2. If appropriate
circuitry can provide higher stresses in the transmitter
(around the output aperture) during zero transfer, then
the receiver stress during one transmission is correspondingly higher. Generally, this can be achieved in two
ways, Fig. 6(c) and 6(d). In Fig. 6(c), the coupling loop applies an mmf of 2F2, which ordinarily would tend to set a zero transmitter. However, this is prevented by a bias equal to F2, so that the net setting mmf about the central aperture in the zero state is still only F2. Furthermore, the bias F2 is not strong enough to clear a set transmitter during one transfer. Thus, with this arrangement, a stress of 2F2 can be added to the receiver during one transfer, resulting in a total receiver stress of 3F2.
In the arrangement of Fig. 6(d), a zero state stress
of 2F2 is also achieved in the transmitter, but this is
obtained by applying an extra clear direction drive of
magnitude F2 on the inner leg.
Alternate schemes combining these approaches may
be used as well, as indicated in Fig. 6(e), where k, which
may have any value, but practically will lie in the
range 1~kS2, is an arbitrary constant. In this case, the
net stress about the output aperture is 2F2 , independent
of k. Note that the circuit of Fig. 6(c) results from k = 2,
and the circuit of Fig. 6(d) for k = 1.
The initial circuit arrangements that follow are
motivated from these concepts. However, as these circuits develop other concepts arise which lead to still
other circuits.

Transmitter Bias

With a current IBT in the clear direction through a winding of NBT turns linking the central aperture of the transmitter (Fig. 7), the Advance mmf NTIT must first overcome the mmf FBT = NBTIBT before it can switch flux about the central aperture of the transmitter. This "bias," therefore, effectively increases the magnitude of the threshold F2 by the value FBT. The equivalent threshold is, therefore, F2′ = F2 + FBT. The threshold F1 is not affected, however, since the bias current does not link the flux paths that are local about the output aperture. Thus, to the electrical circuit the element
appears to be larger than it actually is. The magnitude of bias is limited to a value NBTIBT = F2, or the bias mmf itself would tend to clear a Set transmitter. Thus, with maximum bias, FBT = F2, the element appears to be twice as large in diameter, and the effective threshold F2′ is essentially twice F2.


Receiver Bias
With the transmitter in the zero state, the manner in which IA initially tends to divide between transmitter and receiver branches depends upon the branch inductances LT and LR, which are attributable mainly to the saturation permeability of the ferrite. However, the final division of current depends only on branch wire resistances RT and RR. For identical elements and with turns ratio NT/NR, the ratio of branch inductances is (NT/NR)². It is desirable to make the resistance ratio the same in order to eliminate transient overshoots in the branch currents. At the same time, it is desirable for RT/RR to be in the ratio NT/NR (in the case of no transmitter bias) in order for the final mmf's applied to transmitter and receiver to be equal. The above two conditions appear to be incompatible for NT/NR ≠ 1, or even for NT/NR = 1 if transmitter bias is being used. However, by application of bias NBRIBR to the receiver (Fig. 8), an additional term is added to the receiver mmf and the above conditions can both be satisfied. In this sense, then, receiver bias serves as a free parameter for simultaneous satisfaction of one additional loop condition.
For the case NT = NR = N, it is clear that receiver bias should be equal to transmitter bias for proper balance. At the same time, this leads to a symmetrical loop capable of bidirectional transmission.
It is important to note that whereas the transmitter bias must be limited to, at most, the value F2, the receiver bias can be of arbitrary value. This is so because the receiver current can only cause flux switching about the central aperture of the receiver, and the only concern, therefore, is with the net mmf (NRIR − NBRIBR) about the central aperture. During one transmission any flux switching about the central aperture of the transmitter in the Clear direction is detrimental.
[Fig. 6 - Basic biasing arrangements.]

[Fig. 7 - Transmitter bias.]

[Fig. 8 - Receiver bias.]

[Fig. 9 - Circuit using separate transmitter and receiver bias: (a) zero transfer; (b) one transfer.]

[Fig. 10 - Circuit using compatible transmitter and receiver bias: (a) zero transfer; (b) one transfer.]

Simultaneous Transmitter and Receiver Bias

With bias FB = FBT = FBR applied, the maximum value of IAmax is [Fig. 9(a)]

$$I_{A\max} = \frac{2(F_2 + F_B)}{N} = \frac{2F_{2B}}{N}.$$

The minimum value, IAmin, is again determined by the one transfer condition [Fig. 9(b)]. Therefore, the range R is

$$R_{SB} = 2\left[\frac{F_{2B} - F_1}{3F_{2B} + F_1}\right] \times 100 \qquad (3)$$

and, for the limiting case F1 = 0,

$$R_{SB}{}^{0} = 67 \text{ per cent,} \qquad (3a)$$

where the subscript SB implies that this range is for a circuit with simple bias. The switching factor n can be found in the same way as for (2); thus,

$$n_{SB} = \frac{NI_{T,\mathrm{av}}}{F_1} = \frac{NI_{R,\mathrm{av}} - F_B}{F_2} = \frac{3F_2 + F_B + F_1}{2(F_2 + F_1)} \qquad (4)$$

and

$$n_{SB}{}^{0} = \frac{3F_2 + F_B}{2F_2}. \qquad (4a)$$

It is interesting to note that with bias mmf FB applied, the range is increased exactly as though F2 were replaced by a larger element of dimension F2B = F2 + FB. However, the switching speed is improved beyond just the simple substitution of F2B for F2 in (2).
Thus, by the use of bias in this manner, the switching properties are significantly improved. This bias could be provided from a dc source, or from a pulse source. In the next section, it is indicated that still further advantage is obtained by having the bias mmf provided by the Advance current itself.

Compatible Bias
In order to keep the number of current sources to a
minimum and also to insure that the receiver bias operates only during the Advance current (if, by design, it
should be greater than F2), it is desirable to have the
Advance current itself provide the bias as indicated in
Fig. 10.
In this circuit arrangement, there are two conditions for IAmax. For N ≥ 4NB, IAmax is limited by the zero transfer, in which case

$$I_{A\max} = \left[\frac{F_2 + N_B I_{A\max}}{N}\right]_T + \left[\frac{F_2 + N_B I_{A\max}}{N}\right]_R$$

or

$$I_{A\max} = \frac{2F_2}{N - 2N_B}.$$

If IAmax were larger than this value, then the transmitter and receiver would overrun their threshold during zero transfer. For N ≤ 4NB, IAmax is limited by the one transfer; for values larger than this, the transmitter would tend to be cleared during transfer. In either case, IAmin is determined from Fig. 10(b), where

$$I_{A\min} = \frac{F_1 + F_2 + N_B I_{A\min}}{N}$$

or

$$I_{A\min} = \frac{F_2 + F_1}{N - N_B}.$$

The resulting relations for RCB, nCB, RCB0, and nCB0 are given in Table I, where the subscripts CB imply the compatible bias use. Note that RCB and nCB are functions of N and NB. Of course, with NB = 0 these relations reduce to the same relations as given in (1) and (2). Maximum RCB0 and nCB0 occur for N = 4NB, in which case

$$R_{CB}{}^{0} = 100 \text{ per cent} \qquad (5)$$

$$n_{CB}{}^{0} = 2. \qquad (6)$$

The increased range obtained with compatible bias can be explained in terms of "moving thresholds." That is, with compatible bias, the effective transmitter threshold (F2 + NB·IA) is itself a function of IA. Hence, as IA increases from the center of its range and, therefore, tends to approach the threshold F2, the effective threshold value itself tends to increase, reducing considerably the overrunning effect. The same stabilization results in the case of Advance current reduction as well, as far as the receiver is concerned. Thus, as IA decreases from the center of its range, the receiver moves further from threshold, but the effective threshold itself is decreasing. Note that with N = 4NB, and with IAmax applied, the circuit of Fig. 10 exactly matches the conditions of Fig. 6(c).
Thus, significant improvement in R and n is obtained by the use of compatible bias. However, in practical circuits, N is greater than NB, and since the minimum value of NB is unity, single-turn coupling loops cannot effectively be used.
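The compatible-bias limits may be verified numerically; the closed forms below follow from solving the two self-consistent equations above, and the 100 per cent figure of (5) appears at N = 4NB (the zero-transfer-limited case is assumed):

```python
# Numeric check of the compatible-bias relations: the Advance current itself
# supplies the bias NB*IA, giving a "moving threshold" F2 + NB*IA.

def compatible_bias_range(F1, F2, N, NB):
    assert N >= 4 * NB, "zero-transfer-limited case treated here"
    i_max = 2 * F2 / (N - 2 * NB)        # from I_Amax = 2(F2 + NB*I_Amax)/N
    i_min = (F1 + F2) / (N - NB)         # from I_Amin = (F1 + F2 + NB*I_Amin)/N
    i_av = (i_max + i_min) / 2
    return (i_max - i_min) / i_av * 100

print(round(compatible_bias_range(F1=0.0, F2=1.0, N=4, NB=1)))   # -> 100, eq. (5)
print(round(compatible_bias_range(F1=0.0, F2=1.0, N=8, NB=1)))   # -> 80, smaller for N > 4*NB
```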

Counter Bias

In order to improve the stabilization and, therefore, increase the Advance current range still further, it is necessary to increase the feedback effect; i.e., it is necessary to make the bias "move" even faster as a function of Advance current. This may be achieved by increasing the bias turns NB relative to the coupling loop turns N. However, for a given value of IA and N, as the bias turns are increased the transmitter bias increases, and soon overruns 100 per cent. This effect may be compensated for by use of a dc bias of opposite sign (i.e., counter bias), as indicated in Fig. 11.
As in the previous case there are two conditions for IAmax. For the case where Fdc ≥ ((4NB/N) − 1)F2, IAmax is determined from the zero transfer case.

$$I_{A\max} = \left[\frac{F_2 + N_B I_{A\max} - F_{dc}}{N}\right]_T + \left[\frac{F_2 + N_B I_{A\max} - F_{dc}}{N}\right]_R$$

or

$$I_{A\max} = \frac{2(F_2 - F_{dc})}{N - 2N_B}.$$

[Fig. 11 - Circuit using dc counter bias: (a) zero transfer; (b) one transfer.]

[Fig. 12 - Circuit using inner-leg drive.]

[Fig. 13 - Circuit using floating coupling loop.]

The minimum current IAmin is determined for one transfer and is

$$I_{A\min} = \frac{F_1 + F_2 + N_B I_{A\min} - F_{dc}}{N}$$

or

$$I_{A\min} = \frac{F_1 + F_2 - F_{dc}}{N - N_B}.$$

These combine into the relations for Rdc and ndc given in the table.
For the alternate condition Fdc ≤ ((4NB/N) − 1)F2, IAmax is given simply from the one transfer case as

$$I_{A\max} = \frac{F_2 + F_{dc}}{N_B}.$$

The relation for IAmin is the same as before.
The range is a maximum where Fdc = ((4NB/N) − 1)F2. Nominally, the dc bias is limited in magnitude to the value of F2. For this value of dc bias, maximum range occurs for N/NB = 2, and is equal to 200 per cent. The corresponding value of switching factor is n0 = 2. Thus, ultimately, a 2 to 1 improvement in operating range is achieved.
This improvement in operating range is obtained at the expense of an extra dc bias. However, no extra windings are required, since the dc current may be simply carried on the existing Clear windings. Furthermore, although range improvement is obtained at the expense of a new current source, dc currents are very simply regulated compared with pulse currents. Finally, the magnitude of the dc counter-bias current may be used as a fine control to aid in achieving the optimum operating point.
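A corresponding check for counter bias, using the one-transfer limit (F2 + Fdc)/NB and the expression for IAmin reconstructed above, reproduces the 200 per cent figure at N = 2NB with Fdc = F2:

```python
# With dc counter bias the same computation gives a still wider range.
# At the nominal limit Fdc = F2 with N = 2*NB, IAmin falls to zero
# (for F1 = 0) while IAmax stays finite, so the range reaches 200 per cent.

def counter_bias_range(F1, F2, N, NB, Fdc):
    i_max = (F2 + Fdc) / NB              # one-transfer (transmitter-clearing) limit
    i_min = (F1 + F2 - Fdc) / (N - NB)   # one-transfer minimum, as reconstructed above
    i_av = (i_max + i_min) / 2
    return (i_max - i_min) / i_av * 100

print(round(counter_bias_range(F1=0.0, F2=1.0, N=2, NB=1, Fdc=1.0)))  # -> 200
```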

IV. MULTIPLE WINDINGS IN INPUT AND OUTPUT APERTURES

The circuits discussed thus far have all employed only a single winding in the input and output apertures. By relaxing this restriction, significant advantages can be obtained.

Drive on Inner Output Leg

With low-turn windings in the coupling loops, the Advance currents can become relatively large. By using the drive scheme indicated in Fig. 12, then for the same number of coupling loop turns, the advantages of compatible bias can be obtained but with significantly lower Advance currents. In this circuit the Advance current is made to link the inner leg about the output aperture of the transmitter, as well as the coupling loop itself.
The equations for this case are identical to the equations for compatible bias if in the latter equations N is replaced by (N′ + 2NB′) and NB by NB′. Thus, for example, the case N′ = 3, NB′ = 1 with the present scheme is identical to N = 5, NB = 1 in the earlier scheme. Thus, N′ = 2, NB′ = 1 yields maximum range, corresponding to the case N = 4, NB = 1, which was shown to be optimum for the earlier scheme.
Although, for a given number of coupling loop turns, there are more total turns in the small aperture, the additional drive turns may be of relatively small wire, since the main concern is only to make the coupling loop turns as large as possible to reduce coupling loop resistive losses.

Floating Coupling Loop

The circuits discussed thus far have the following two
disadvantages: 1) with the coupling loop directly
driven, care has to be exercised in physically connecting
the two branch windings together so that proper ratios
of the (parasitic) resistance and inductance are maintained and 2) since the same advance current flows
through the coupling loops and bias windings, there are
restrictions on the combinations of turns that may be
used.
By allowing "floating" coupling loops, both of the
above disadvantages are overcome. In the circuit of
Fig. 13, the turns N, N1, and N2 are completely independent. The case N1 = N2 is equivalent to the maximum range case N′ = 2, NB′ = 1 in the inner-leg drive case, which in turn is equivalent to the maximum range case N = 4, NB = 1 for compatible bias. The equations for this case are identical to the equations for compatible bias if N is replaced by 2(N1 + N2) and NB is replaced by N1.
Note that with N1 = N2 and with IAmax applied, the
circuit of Fig. 13 exactly matches the conditions of Fig.
6(d), except that in this case the stress on the outer leg
of the transmitter (and receiver) is applied directly
from a drive winding instead of from coupling loop current. Current flows in the coupling loop only during one
transfer.

V. COUPLING LOOP FLUX GAIN

The output flux φR at each transfer and the input flux φR′ at the previous transfer must be related as indicated in Fig. 14, where the gain G = φR/φR′ is > 1 in the interval φr < φR′ < φU and is < 1 in the interval φL < φR′ < φr. Thus, the transfer operation will tend to increase a "low" one level of flux toward φU, and to decrease a "high" zero level of flux toward φL. This is equivalent to saying that with such a gain relation, the operation stably protects against "zero build-up" and "one build-down."
With turns ratio greater than unity, i.e., NT > NR, it is a straightforward matter to arrange a coupling loop to obtain the necessary relation between φR′ and φR. Suppose that there are no losses in the coupling loop of Fig. 15(a). Then by integrating the relation NT(dφT/dt) = NR(dφR/dt), which must hold at every instant of the switching period, we find the relation NTφT = NRφR, where φT and φR are the net flux changes in the transmitter and receiver. Then, provided φT = φR′,

$$\phi_R = \frac{N_T}{N_R}\,\phi_{R'}.$$

This relation between φR and φT is illustrated in Fig. 15(b). Notice that the linear relation holds only until φR saturates. In order finally to obtain the necessary relation of Fig. 14(b), consider the coupling loop of Fig. 15(c), in which a "clipper" core is added. This core is arranged to have a total flux linkage capacity of φa, which is a relatively small fraction of the saturation flux of the MAD's. However, the switching threshold is much
[Fig. 15 - Coupling loop arrangement for achieving proper gain relations.]

lower than that of the receiver, and therefore, its full capacity of flux will be substantially switched before the receiver begins switching at all. In this way, a constant amount of flux is subtracted from the quantity (NT/NR)φT when (NT/NR)φT > φa; if (NT/NR)φT < φa, then φR = 0. The resulting relation between φR and φT = φR′ is shown in Fig. 15(d). Notice that this curve has the proper form for bistable operation (with φL = 0 in this case). If the flux clipper is cleared at the same time as the transmitter, none of the basic clock cycle operations of Fig. 4 is altered, and very good transfer loop operation is achieved.
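The clipper characteristic and the resulting bistability can be illustrated numerically; the saturation level, turns ratio, and clipper capacity below are arbitrary values chosen only to exhibit the behavior described:

```python
# Sketch of the clipper-loop transfer characteristic of Fig. 15(d):
# phi_R = (NT/NR)*phi_T - phi_a when positive, else 0, capped at saturation.
# Iterating it over a chain of transfers shows the bistability: low levels
# decay to 0 while high levels grow to the saturation value.

PHI_SAT = 1.0          # receiver saturation flux (phi_U here)
RATIO = 6 / 5          # NT/NR, the modest turns ratio cited in the text
PHI_A = 0.1            # clipper core capacity, small vs. the MAD flux

def transfer(phi):
    return min(PHI_SAT, max(0.0, RATIO * phi - PHI_A))

for phi0 in (0.3, 0.7):               # a "high zero" and a "low one"
    phi = phi0
    for _ in range(20):               # twenty stages of a shift register
        phi = transfer(phi)
    print(phi0, "->", round(phi, 3))  # 0.3 -> 0.0 ; 0.7 -> 1.0
```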
The use of a flux clipper along with the condition NT/NR > 1 makes the gain properties of the transfer loop very explicit. However, a flux clipper is not actually required. In fact, it is further demonstrated in the following sections that the relation NT/NR > 1 is also not required and that successful operation can be achieved with NT/NR ≤ 1. The case NT = NR is interesting because the transfer loop is symmetrical, and bi-directional shifting is possible by merely reversing the sequence of Clear pulses or sequence of Advance pulses in Fig. 4. The case NT = NR = 1 is particularly interesting because of the simple assembly schemes that are made possible. The case NT > NR is useful where bi-directional properties are not necessary, and extra flux gain is required. No particular advantages can be seen for the case NT < NR.
Even without a clipper core, the loop resistance and inductance act in a similar fashion: a sufficiently small φT will be absorbed entirely into ∫ILRLdt + L·IL before IL brings the receiver up to threshold, and therefore φT will be 100 per cent lost. Hence the plot of φR vs φT will start from zero at some value of φT > 0, just as in Fig. 15(d).
Unlike Fig. 15(d), the curve will now not be linear, but as long as the turns ratio is just great enough to bring the curve above the φR = φR′ line at some higher value of φR′, bistable operation will be achieved. The turns ratio required is not high, 6/5 being a typical example. In fact, a high turns ratio is quite undesirable, since the excess drive available for switching is reduced. For example, if NT/NR = 2, then for a simple coupling loop with no bias, IT0 = F2/NT is an upper bound on the current available for steering to the receiver during one transfer. The receiver mmf provided by this current would be (NR/NT)F2 = (1/2)F2, whereas the corresponding figure for unity turns ratio would be F2.
Before discussing the properties of unity-turns-ratio operation (NT = NR), let us consider some basic switching properties of magnetic cores that will be useful in later discussions.

VI. SOME BASIC SWITCHING PROPERTIES OF CONVENTIONAL CORES

Consider the flux-current relations for the conventional toroid of Fig. 16(a). Assume that the B-H curve for the material is ideally square, Fig. 16(b).
With very long setting pulses Is, the φs-Fs curve is as indicated in Fig. 16(c), where φs is the amount of switched flux in response to a setting mmf Fs = NsIs applied to a well-cleared core. The ratio Fb to Fa is the same as the ratio ro to ri (outer to inner radius). This curve may be automatically plotted by setting up a continuous pattern of alternate Clear and Set currents, in which the Set current is made to vary in amplitude from cycle to cycle. By deflection of an oscilloscope beam in the x direction in response to current Is and in the y direction in response to switched flux (= ∫e dt), the φs-Fs curve is automatically traced. In all of the φ-F curves to be considered here, φ represents remanent flux (i.e., does not include the elastic or reversible component of flux). To plot remanent flux curves, it is only necessary to energize the oscilloscope beam just after

[Fig. 16 - Core switching experiment.]

the setting current is over, by which time the elastic
flux has been removed. To maintain the x deflection unchanged until such a time as the beam is energized, it
is merely necessary to connect the Advance current
pulse to the x deflection plates via a delay line, Fig.
16(d).
An interesting property to consider in relation to the φs-Fs curve is its dependence on the pulse width of the setting current. Consider a switching model in which it is assumed that the switching rate dB/dt in any portion of the material is proportional to the instantaneous excess drive (H − H0), where H0 represents the threshold field. Although idealized, this model does result in the usual inverse relationship between switching time and excess field for the case of a thin ring of material. By its use, calculated φs-Fs curves for different pulse widths are shown in Fig. 17(a). For very long pulse widths (i.e., T → ∞), the curve reduces to that shown in Fig. 16(c). However, for a given drive F, as T decreases, a smaller and smaller amount of flux is switched; hence the φs-Fs curves are monotonically lowered as T decreases. With this model, for pulse widths greater than some critical value Tc, each curve has a linear region marked on the lower end by the mmf required to just saturate the inner radius in time T, and on the upper end by the mmf required to just start switching the material at the outer radius. The nonlinear connecting regions are approximately parabolic for relatively thin-walled cores having a ratio of outer to inner radii of about 1.3 or less.2

2 It may also be noted that these curves have the identical form as for the case in which a very long switching pulse is used on a core having the same dimensions as here but for which the slope of the "rising" portion of the B-H curve of the material is a variable. With truly vertical sides, the curve T → ∞ applies. With the sides less steep, as shown by dotted lines in Fig. 16(b), the family of φs-Is curves has the identical form of Fig. 17(a).
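A numerical rendering of this idealized model for a thin-walled ring, integrated shell by shell, reproduces the qualitative family of Fig. 17(a); the material constant, dimensions, and drive values here are arbitrary assumptions:

```python
# Numeric sketch of the idealized switching model dB/dt proportional to
# (H - H0), applied shell-by-shell to a ring of inner radius r1 and outer
# radius r2. Units and the rate constant are arbitrary; only the qualitative
# shape of the phi_s - F_s family is of interest.
import math

def switched_flux(F, T, r1=1.0, r2=1.3, H0=1.0, k=1.0, shells=200):
    """Fraction of total flux switched by an mmf F applied for time T."""
    total = 0.0
    for i in range(shells):
        r = r1 + (i + 0.5) * (r2 - r1) / shells
        H = F / (2 * math.pi * r)              # field falls off as 1/r
        if H > H0:
            dB = min(1.0, k * (H - H0) * T)    # each shell saturates at B = 1
            total += dB
    return total / shells

for T in (0.5, 2.0, 1e6):                      # short, medium, and "infinite" pulses
    print([round(switched_flux(F, T), 2) for F in (6.5, 7.5, 8.5, 9.5)])
```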
In Fig. 17(b) is shown an actual family of φs-Fs curves. For later comparison, these and all later curves, unless otherwise stated, are taken on an experimentally molded MAD element (having the nominal dimensions indicated in Fig. 19) treated as a conventional core. By

[Fig. 17 - Calculated and measured φs-Fs curves.]

[Fig. 18 - Calculated and measured dφs/dt-time curves.]

shaping about the apertures as indicated in Fig. 19, the results are substantially identical with results obtained on actual toroids of the same material. Notice that, compared with the curves of Fig. 17(a), these curves do not radiate in so pronounced a fashion from the value Fa, but rather are mainly translated horizontally to higher values of mmf as the pulse length decreases. This property is very important in MAD elements for reasons that will become clearer below.

[Fig. 19 - Shaping of MAD elements. Nominal dimensions: l1 = 0.020 inch, l2 = 0.020 inch, l3 = 0.040 inch, d = 0.020 inch, D1 = 0.275 inch.]

If one looks at the corresponding switching voltage curves for this element, it becomes apparent what is causing this translation of the φs-Fs curves. This family of voltage curves vs time is indicated in Fig. 18, where the parameter is the magnitude of Is (where Is is a very long pulse). The curves of Fig. 18(a) are calculated using the model dB/dt ∝ (H − H0). Notice that this simple model is very inadequate for predicting the front end of the voltage curves. This fact is understandable, since this model relates more to the rate of movement of existing domain walls. However, if we start with a well-cleared core in which there is a minimum of reverse domains (walls), then after the pulse Is is turned on, it takes a time for domain walls to be established.3 This is reflected in the initial slope of the voltage curves. In any case, notice the difference in behavior of the peaks of the voltage curves as a function of Is. In the family of actual curves, there is a large "peaking delay" at low levels of switching. With materials that exhibit significant peaking delay properties, the voltage is almost exactly zero before the time of peaking. It is straightforward to convert the voltage curves of Fig. 18(b) into the φs-Fs curves of Fig. 17(b) and see the reason for the increase in threshold of the narrow-pulse curves with peaking delay.

3 N. Menyuk and J. B. Goodenough, "Magnetic materials for digital computers, I. A theory of flux reversal in polycrystalline ferromagnetics," J. Appl. Phys., vol. 26, pp. 8-18; January, 1955.

It may be noted that peaking delay is a property of some materials and not others. For example, there are materials for which the peaks in the switching curves of Fig. 18(b) lie almost directly over each other. For these materials, the φs-Fs curves are more like those of Fig. 17(a).
Given a core of appropriate material, it is further necessary, in order for the core to exhibit peaking delay, that the setting current Is be applied to a well-cleared core. Generally speaking, Clear strengths of at

least two to three times threshold are required for good
peaking delay.
If a MAD is made by cutting apertures in the wall of a conventional toroid, Fig. 19(a), then regardless of the Clear magnitude, it is impossible to get all material on a major loop, since the cross-sectional area l1 + l2 is less than l3 for a core of unit height. Thus, even for material that is potentially capable of exhibiting peaking delay, this type of construction nullifies the effect. However, by appropriate shaping of the element, e.g., so that l1 + l2 = l3, the element treated as a simple toroid will exhibit significant peaking delay.
Another very important switching property related to peaking delay is shown by the families of φs-Fs curves taken for the condition in which the core is preset before Is is applied. The families of φs-Fs curves shown in Fig. 20 contain the magnitude of preset flux φp as the parameter. The difference between the various families of curves is that they are taken for different combinations of long and short duration preset and set pulses.
For zero preset (φp = 0), the total switched flux φs is φs max = φM, where φM represents the total flux capacity of the core from saturation in one direction to saturation in the other. When a core has been preset, the current Is has correspondingly less flux available to switch. Thus in Fig. 20(b), as the amount of preset flux φp increases, the maximum switchable flux φs max correspondingly decreases.

[Fig. 20 - φs-Fs and dφs/dt-time curves for various pulse lengths of Ip and Is.]

For later convenience in dealing with families of curves for MAD elements, the curves of Fig. 20(b) are redrawn in Fig. 20(c) with the final rather than initial values of flux superimposed. Actually, this is equivalent to raising the zero flux level for each curve by the magnitude φp.
When a well-cleared core is preset to a certain level of flux by a long current pulse, one can visualize the flux condition in the core to be represented by a circumferential domain wall outside of which the flux is in a clockwise (Clear) direction and inside in a counterclockwise direction. Except in the transition region, substantially all material is well saturated. Since Is stresses the core in the same direction as Ip, it is reasonable for gentle preset (long pulse) that, regardless of preset level, the current Is should continue the switching where Ip left off, and with essentially the same threshold, as demonstrated in Fig. 20(c). Let us now consider the effect of shortening the pulse durations. In order for a certain magnitude of flux, φp, to be preset as the preset pulse is decreased in length, the magnitude of the preset current must be increased. For significant decrease in duration, the magnitude of current must be likewise considerably increased. For a reasonably thin-walled core, this increased magnitude of current is capable of switching flux simultaneously throughout the entire core, so that the current pulse must be shut off when the proper level of preset flux is reached. In this case, it is certain that reverse domains are distributed throughout the body of the core in some random fashion. Thus it is hardly surprising that after such a preset pulse, the set current, which tends to continue the switching in the same direction, finds a much lower and less abrupt threshold. This effect is clearly seen when the curves of Fig. 20(c) and 20(d) are compared.

Let us next consider the effect of short set and preset pulses, Fig. 20(e). For φp = 0, the φs-Fs curve is just the appropriate curve of the family of Fig. 17(b), for the given pulse duration. If we compare the curves of Fig. 20(d) and Fig. 20(e), we notice that in the latter case, the threshold moves a considerable distance to the right for zero preset, but not quite so far for nonzero preset. This is reasonable in terms of the previous discussion of the effects of a good Clear state on peaking delay. Good peaking delay occurs only when all of the material is in a well saturated condition. However, due to presetting with a short pulse, a random distribution of reverse domains is left throughout the core, resulting in very poor peaking delay after preset and hence very little increase in threshold for a short-pulse set. This point is demonstrated by the voltage-time curves (similar to those of Fig. 18) taken after a preset condition of φp = (1/2)φM using a short preset pulse, Fig. 20(f), and a long preset pulse, Fig. 20(g). Notice the significant difference in switching times for these two families. The effect of preset is also demonstrated in the φs-Fs curves, Fig. 20(h), taken for the same preset level (1/2)φM. The group of curves radiating from the lower threshold value of Fs is for the short preset pulse; the other group is for the long preset pulse. The lowering of threshold for short preset pulses is clearly seen. Within each group, the parameter is the duration of Set pulse.
VII. SWITCHING PROPERTIES OF MAD's

Ideal Family of Output Curves

In Fig. 1, output φT-FT curves were shown for the two cases of a Set and Clear MAD. Actually, there exists a whole family of such curves with the amount of preset (or input) flux as the parameter (Fig. 21). In Fig. 21(a), the input current is shown linking leg l1 about the input aperture, and the output current is shown linking leg l4 about the output aperture. Assume all legs are of equal dimension. Let φM represent the total flux capacity in any leg from saturation in one direction to saturation in the other direction. With leg l4 saturated in the Clear (clockwise) direction, application of IT of sufficient magnitude will switch an amount of flux φM in leg l4 independent of the amount of preset flux. A portion of it equal to φp (= φin) will switch locally about the output aperture and the remainder will switch around the main aperture. If all material is operating on an ideal rectangular hysteresis loop, the family of curves will have the form indicated in Fig. 21(b).
Actual Family of Output Curves

Actual families of output curves, for the same MAD used for the above tests of core switching properties, are shown in Fig. 22. These curves are automatically plotted by the method previously described and are taken for the same combinations of long and short current pulses indicated in Fig. 20. Notice that the effects are substantially the same as observed in the case of presetting a toroid. Actually, the short-short current pulse combination is the one of interest, because this more nearly approximates actual operation. In Sections III and IV, it was indicated that in biased circuits, the transmitter and receiver are switched with two or more thresholds of current. This operation corresponds to high-drive, short-pulse conditions of measurement. However, it is shown below that for one transfer, the transmitter current IT and receiver current IR are not constant in time. To this extent, the above experimental families of curves, which are plotted for rectangular input and output current pulses, do not apply. Nevertheless, they are extremely indicative of the nature of the operation.
The main significance of these curves is the lowering of the main aperture threshold for partial set levels relative to the Clear state threshold. It is demonstrated below that this property provides a mechanism for obtaining proper gain relations for unity-turns-ratio operation.

[Fig. 21 - Idealized family of output curves φT-FT for a MAD.]

[Fig. 22 - Actual families of φT-FT curves for various input and output current pulse durations: (a) Ip = 35 μsec, IT = 20 μsec; (b) Ip = 1 μsec, IT = 20 μsec; (c) Ip = 1 μsec, IT = 1 μsec.]

[Fig. 23 - Transmitter and receiver currents during one transfer.]

Transmitter and Receiver Currents

In the (ideal) zero transfer case, assuming unity turns ratio, the Advance current IA divides into equal branch currents IT0 = IR0 = (1/2)IA, where the subscript "0" stands for the zero case.
During a one transfer, the Advance current divides into unequal branch currents IT, IR such that at all instants of time dφT/dt = dφR/dt. Because of the lower threshold in the transmitter, IR > IT (neglecting the voltage drop in the loop resistance and inductance), but always IR + IT = IA. This situation can be characterized by a loop current IL superimposed on the zero transfer currents, so that IR = IR0 + IL and IT = IT0 − IL, where IR0 = IT0 = (1/2)IA, or IL = (IR − IT)/2.
During one transfer, the transmitter may be characterized as a relatively small, thick-walled core [see, e.g., Figs. 9(b) and 10(b)] compared with the receiver. Thus, for given magnitudes of IT and IR [assuming the switching model dB/dt ∝ (H − H0)], the transmitter would tend to switch at the greater rate; the loop current IL therefore adjusts itself to maintain dφT/dt = dφR/dt, and the branch currents vary with time during the transfer as indicated in Fig. 23.
Consider now the gain G = φR/φR′, where φR′ is the flux received in the transmitter during the previous Advance pulse. If during the present Advance pulse, all available flux about the output aperture is switched but none is switched about the main aperture of the transmitter, then φT = φR′. Now φR = φT − φloss, for a single-turn coupling loop (a special case of unity-turns-ratio operation), where φloss, the flux loss in the loop resistance RL, is ∫ILRLdt volt-sec. Hence, if φT = φR′,

$$G = 1 - \frac{\phi_{\mathrm{loss}}}{\phi_{R'}}.$$

Note that G > 1 is impossible here because of the subtractive loss term. This equation is characteristic of the operation of a conventional core-diode-type shift register, which requires NT > NR in order to obtain G > 1.
In a MAD arrangement, φT can be more than φR′ because of the possibility of flux switching not only locally about the output aperture, but also around the main aperture in a direction to increase the setting of the element. For illustration, assume that the Advance current is set in the center of its range, or IA = IA,av, so that NIA/2 is below threshold F2 as indicated in Fig. 24. Also


assume a level of input flux φR′ as indicated in the figure. Then as IT increases from its initial low level (see previous section) towards its steady-state value IA/2, it soon enters a region in which flux may be switched about the main aperture. This flux is defined as φ*. Thus φT = φR′ + φ* and the gain equation is modified to read

$$G = 1 + \frac{\phi^*}{\phi_{R'}} - \frac{\phi_{\mathrm{loss}}}{\phi_{R'}}.$$

It is clear from Fig. 24 that φ* is a function of φR′ having the form indicated in Fig. 25. φ* is very small at low levels of received flux because the maximum value of NIT is below threshold for switching around the main aperture. φ* is also very small for large φR′ because of saturation. It is important to note that, because of the finite slope of the φT-FT curve below threshold (even for the Clear state curve), φ* is everywhere > 0, and hence that φ*/φR′ → ∞ as φR′ → 0. Thus the contribution of the term φ*/φR′ to the gain equation has the form indicated in Fig. 25(b). The steeply rising portion of this curve for low values of φR′ contributes to the lower unity-gain crossing of the Gain-φR′ curve at φR′ > 0, resulting in φL > 0, as would be expected. Thus, if over some interval, the φloss term is smaller in magnitude than the φ* term (for φR′ > φL), then a gain curve of the form indicated in Fig. 14 can be obtained, and the interval in question is just the interval φr < φR′ < φU where G > 1.
[Fig. 24 - Demonstration of φ* flux gain.]

[Fig. 25 - Components of the gain equation as functions of φR′.]

Resistive Flux Loss
The resistive flux loss, JI LRLdt, is just proportional to
the cross-hatched area in Fig. 23. An accurate analysis
of cf>loss is extremely difficult, since no accurate switching model is available for predicting the current shape
I L(t) as a function of flux level, cf>R'.
In Sections II to IV, for purposes of comparing
switching speeds for various circuit arrangements, it
was assumed that IA divides into IT and IR in such a
way as to apply equal multiples of threshold mmf to
transmitter and receiver. This model does result in the
correct ordering of the different circuits in terms of
speed, but it is very inadequate for predicting cf>losslcf>R'
as a function of cf>R' for a given circuit. This fact should
be quite obvious just from the complexity of measured
switching characteristics as represented in Figs. 20 and
22.
True, one can definitely say that cf>loss increases with
cf>R', but not even this much can be said about the ratio
cf>losslcf>R', except for very low values of cf>R" In this latter
case, as was indicated earlier in the discussion of the
partial dipping action of loop inductance and resistance,
cf>losslcf>R' will be essentially unity for sufficiently low flux
levels.
In effect, then, all that can be said at present is that the loss term will decrease from near unity for very low φ_R' to a value A at some low φ_R' (= φ_1) and remain below A at least up to some high φ_R' (= φ_2), as indicated in Fig. 25(c).

Fig. 26-Addition of gain equation components to form the gain curve.
In Fig. 26, it is shown how the three terms of the gain equation

    G = 1 + φ*/φ_R' - φ_loss/φ_R'

add to form the gain curve. Whatever the actual variation of the loss term in this interval, then, the qualitative nature of the gain curve will not be changed, rather only the locations of the unity-gain points φ_L and φ_U.
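Though the paper gives no closed forms for the individual terms, the way the two unity-gain points arise can be illustrated numerically. The following sketch (Python; all constants and term shapes are invented for illustration, not measured values from this paper, and the low-flux asymptotics are not modeled) adds the three terms and locates the crossings:

    import numpy as np

    # Invented shapes, qualitatively following Figs. 25(a) and 25(c).
    phi_r = np.linspace(0.01, 1.0, 400)              # received flux, normalized
    phi_star = 0.12 * np.exp(-((phi_r - 0.5) / 0.25) ** 2)  # bell-shaped phi* term
    loss_ratio = 0.9 * np.exp(-phi_r / 0.08) + 0.08  # near unity at low flux

    gain = 1.0 + phi_star / phi_r - loss_ratio       # the gain equation above

    crossings = phi_r[np.nonzero(np.diff(np.sign(gain - 1.0)))[0]]
    print(crossings)   # two values, playing the roles of phi_L and phi_U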

Flux Boost

In order to get a reasonable Advance current range, the short-short φ_T-F_T curves [Fig. 22(c)] should have the Clear state curve moved as far to the right as possible, and the higher level flux input curves moved as far to the left as possible. Experimental results have indicated that these conditions are generally met best by materials exhibiting good peaking delay properties [Fig. 18(b)].
The range analysis of Sections II to IV does not apply well to the unity-turns-ratio case, since the advance current is more limited by the requirement of getting φ* to make up for flux losses than by the assumptions of Section II.
In any case, the situation can be significantly improved by the circuit described here.
Between the time that the flux φ_R is received and flux φ_T is transmitted to the next stage, the previous element is cleared. By having this pulse do double duty, as indicated in Fig. 27, the flux boost may be obtained (at Clear time) before transfer out of the element occurs. With N turns on the receiver, the Clear current is adjusted so that the receiver is brought up to the vicinity of threshold F_2. Thus, if φ_R is low (as for a zero), then the boost pulse has essentially no effect. If φ_R is high (as for a one) but less than φ_M because of φ_loss during transmission, then the boost pulse will increase the set level of the element.

Fig. 27-Circuit for achieving flux boost.

The family of curves of Fig. 24 is taken with a winding about the output aperture. However, the reduced thresholds for partially set levels are characteristic of the state of the entire element (as previously described) and would also be observed in φ-F curves taken on the flux boost winding. Thus the effect of the flux boost current can be predicted from the φ_T-F_T curves.
The Clear winding has nN turns, which results in the transmitter being cleared with n thresholds of mmf. For good clearing, n should be greater than 2. By consulting Fig. 4, we see that with this new arrangement, two processes are going on simultaneously in the receiver. Clearing the transmitter causes a negative set current to switch flux locally about the input aperture of the receiver, and the mmf applied by the flux boost winding causes additional set flux to be switched about the main aperture.
In all previous circuits, the Clear pulse had unlimited range as long as it was above some minimum. Such is not the case in the present circuit, since the Clear current works against a threshold. However, although the Clear current range is decreased, the Advance range is significantly increased.
The flux boost method of making up flux losses has three main advantages compared to the φ* method. First, there are no coupling-loop losses associated with flux boost, whereas part of the available φ* is always lost during transfer. In fact, with flux boost, it is possible for the receiver to be set fully before flux is transferred out of it; this condition is impossible to obtain for unity-turns-ratio operation without flux boost, because of losses. Second, the boost current has a fixed magnitude and duration independent of flux level, whereas the current I_T switching φ* is greater (at least in integrated value) for low flux levels than for high levels. Third, the boost current can be adjusted in width and amplitude independently of the advance current, while the mmf that switches φ* is tied to the advance current.
It is pertinent to note that with flux boost taking care of the flux-gain requirement, Advance range is more closely predicted by the relations of Sections III and IV. Furthermore, flux boost may be used advantageously in connection with all of the circuits derived in those sections. This is so in particular for unity-turns-ratio operation, but advantage can be obtained even for circuits in which N_T/N_R > 1.

VIII. RESULTS AND CONCLUSION

The range and speed relations derived in Sections III and IV, although based on a very simple model, do properly predict comparative results for the various circuit arrangements. As pointed out in the discussion on flux boost, these relations do not apply well for unity-turns-ratio circuits if flux boost is not used. In this case, the Advance currents must be adjusted more to satisfy the flux gain requirements than the simple switching model used to derive these range and speed relations. Listed below is an example of the types of comparative results obtained with a coupling loop using N_T = 6, N_R = 5, for the circuits indicated.
    Circuit of Fig.    Bias turns (N_BT = N_BR)    Range
          5                     none               15 per cent
         10                      1                 30 per cent
         11                      2                 50 per cent, using 50 per cent
                                                   dc counter bias

Flux boost helps even where N_T > N_R. For example, when flux boost is used, the 15 per cent range obtained using no bias will increase to about 30 per cent. In flux boost circuits in general, the Clear and Advance currents may be independently adjusted to give either one a high range at the expense of the other. The range value given here, however, implies that the Clear and Advance currents are adjusted so that they both have the same range, namely 30 per cent.
Unity-turns-ratio coupling loops, i.e., with N_T = N_R, have operated with the following typical results. With the bias circuit of Fig. 10, N = 4, N_B = 1, and flux boost, Clear and Advance ranges of greater than 40 per cent each are achieved. These results are obtained with 1-µsec


Clear and Advance pulses driving a register of experimentally molded elements having the dimensions indicated in Fig. 19 and made with material having a long-pulse threshold of 0.7 oersted. Single-turn coupling-loop circuits with the same effective bias operate equally well.
The detailed analysis of these circuits is difficult not only because of the usual difficulties of dealing with the dynamic properties of highly nonlinear elements, but also because of the relatively complex geometries involved. It is clear that a good deal of the design of these circuits is necessarily based on intuition and empirical results. The circuits described here can be made to operate quite well, however, and the lack of analytical tools is felt more in trying to decide how or when a particular arrangement is optimum. It is hoped that future efforts will result in the development of satisfactory switching models that will make the circuit design procedure routine.
The techniques presented here provide the potential for developing extremely reliable digital circuitry, at least for the intermediate computer speed ranges of 0.1-mc to 1-mc clock (or bit) rates.
IX. ACKNOWLEDGMENT

The authors wish to acknowledge the very helpful
suggestions and contributions of their colleague, Dr.
Douglas Engelbart, to the material presented here.

A Twistor Matrix Memory for Semipermanent
Information *
DUNCAN H. LOONEY†

INTRODUCTION

A NEW magnetic matrix memory has been developed for the storage of semipermanent digital information. The memory is designed for computers
which require random access to stored information that
is changed very infrequently. The information is stored
in a pattern of permanent magnets arranged on a plastic
board. The presence or absence of a permanent magnet
is sensed nondestructively by a wire wrapped with a
magnetic tape placed close to the permanent magnet.
A stored word is read by a linear selection system using
a biased core access switch. 1
The memory is fabricated in modules. A typical module is shown in Fig. 1. The photograph shows a 512-word memory consisting of 32 × 16 word locations. Each
word location stores 26 bits of information. Any word
location in the memory may be selected at random and
the information read in a period of a few microseconds.
The temperature range of operation of the memory is
extremely wide.
The concept of storing information in an array of
permanent magnets was advanced by the late S. Shackell. Mr. Shackell's work was interrupted by his untimely death and has not been previously reported in the

* The work reported in this paper was done for the U. S. Dept. of Defense under Contract DA-30-069-ORD-1955.
† Bell Telephone Labs., Murray Hill, N. J.
1 J. A. Rajchman, "A myriabit magnetic core matrix memory,"
PROC. IRE, vol. 41, pp. 1407-1421; October, 1953.

literature. With the development of the twistor,2 John
Janik, who was familiar with the Shackell scheme, suggested its use in such a system to reduce the size of the
permanent magnets.
The operation of a store using the 512-word memory
module is described in a companion paper.3 The store,
which utilizes all solid-state circuitry, is compared to
other systems using photographic or magnetic techniques which can be used for the storage of semipermanent information.
OPERATING PRINCIPLE

The information is stored in an array of small permanent magnets. The presence of the magnet is sensed by
a wire wrapped with magnetic tape placed close to the
magnets. A group of 26 wrapped wires are encapsulated
in a plastic tape. The encapsulated wires are then enclosed in a set of copper solenoids as illustrated in
Fig. 2. A particular solenoid corresponding to a stored
word may be selected by activating one core of the
biased core access switch. The bar magnets are arranged in a pattern on the surface of a thin plastic card.
Each magnet is located at the intersection of a wrapped
wire or twistor and a solenoid. The purpose of the
permanent magnet is to inhibit locally the drive field of
2 A. H. Bobeck, "A new storage element suitable for large sized memory arrays-the twistor," Bell Sys. Tech. J., vol. 36, pp. 1319-1340; November, 1957.
3 J. J. DeBuske, J. Janik, and B. H. Simons, "A card changeable
nondestructive readout twistor store," this issue, pp. 41-46.


Fig. 1-A 512-word memory module. The unit is composed of 16
frames of 32 words. The cores of the access switch and the multiturn windings are shown. The encapsulated twistor tape threads
through the module continuously.


Fig. 3-A memory frame of 32 words and its magnet board. Every
other copper strip is used as a solenoid and is attached to an access
core. The guide pins of the frame and the corresponding holes of
the magnet board are necessary to register the bar magnets at the
intersections of the solenoids and the wrapped wires.

Fig. 2-A section of a memory frame. Three cores of the biased core switch and their word solenoids are shown. The absence of a bar magnet is sensed by the flux reversal of the wrapped wire when a drive current flows in the solenoid.

the solenoid. Thus, the permanent magnet prevents the switching of the wrapped wire located beneath the permanent magnet. The magnitude of the field in the solenoid can be high to achieve fast switching. The field of the bar magnet must, however, be sufficient to inhibit the switching of the wrapped wire. A plane of the module with a complete magnet board is shown in Fig. 3.
In the present design, the solenoids are 1/16 inch wide and spaced 3/16 inch apart. For a current drive of 1.8 amp in the solenoid a switching speed of about 1 µsec

is obtained. Current pulses of 600 ma each are applied to four-turn X and Y windings of the biased core switch. The bias on each core is 2.4 amp-turns. The X pulse is applied first, since the X winding is parallel to the twistor wires and results in a larger inductive signal. The sequence of operations is shown in Fig. 4. Time is measured from left to right. The two current pulses are shown in Fig. 4(a). When the applied pulses are removed, the bias current resets the core. The back voltages on the X winding through 32 cores and the Y winding through 16 cores are shown in Fig. 4(b). The core requires about 0.6 amp-turns to generate an output voltage which will drive 1.8 amp through the solenoid. The resultant output signals are shown in Fig. 4(c). Both one and zero output signals are shown by inserting and removing a magnet board. The one signals average 8 mv into 180 Ω while the zero signals are less than 2 mv. The signal-to-noise ratio is 5 to 1. Considering the time necessary to turn on the access switch, a 5-µsec cycle time is easily achieved. Since there are no fundamental limitations on the magnitude of the drive or the amount of flux which must be reversed, shorter cycle periods are possible.
The general performance of the module is shown in Fig. 5. The two current pulses into the biased core switch are shown in Fig. 5(a). The one signals observed on one sensing wire from 32 word locations are shown superimposed in Fig. 5(b). The one signals observed on 26 sensing wires at one word location are shown in Fig. 5(c). The open circuit and matched load output signals are shown in Fig. 5(d) for the 512-word module. The resistive load is 22 Ω and is equal to the sensing wire resistance.



Fig. 4-Electrical characteristics of the module. Time is measured from left to right and each division is 0.5 µsec. (a) Drive currents. The upper trace represents the X selection current, the lower trace the Y selection current. Each division is 500 ma. (b) Observed back voltages. The upper trace is the back EMF on the 4-turn winding through the 16 cores of the Y winding, the lower trace that of the 32 cores on the X winding. Each division is 5.0 volts. (c) One and zero output signals observed with the magnet board present and removed. The first low peaks are due to inductive pickup and shuttle. Each major division is 5 mv. The output load is 180 Ω.
DESIGN OF THE PERMANENT MAGNET ARRAY

A number of factors must be considered in the design
of the permanent magnet array. First, the array of
magnets must be simple to manufacture. Since the magnets are the primary storage medium of the memory,
they must be capable of retaining their magnetization
under very severe conditions. Methods must be devised
to register the permanent magnets accurately over the
bit locations defined by the wrapped w"ires and the word
solenoids. Finally, the spacing between the individual
magnets must be chosen carefully, since one magnet
should not disturb either the neighboring magnet or its
sensing wire.
For illustrative purposes, the bar magnet will be considered as two magnetic charges ±m spaced a distance d apart. The magnetic field H at any distance r perpendicular to the center of the magnet is:

    H = md/[(d/2)^2 + r^2]^(3/2)    (1)

and is directed parallel to the axis of the magnet. There are two limits to be considered. In the limit of r → 0,

Fig. 5-Output signals of the module. Time is measured from left to right and each division is 0.5 µsec. (a) Drive currents. The upper trace is the X selection current, the lower trace the Y current. Each major division is 500 ma. (b) 32 "one" signals observed on one sensing wire from 32 cores on one frame. Each major division is 5.0 mv. The load is 180 Ω. (c) 26 "one" signals observed on the 26 sensing wires in one word location. Each major division is 5.0 mv. The output resistance is 180 Ω. (d) The output signals for open circuit and matched output loads. The output resistance of the lower trace is 22 Ω and is equal to the resistance of the sense wire. Each major division is 5.0 mv.

(1) reduces to:

    H_0 = 8m/d^2.    (2)

The field which acts on a sensing wire placed near the magnet is, then, inversely proportional to the square of the length of the magnet. For values of r → ∞, (1) reduces to:

    H_∞ = md/r^3.    (3)

Thus, for neighboring positions, the magnetic field is proportional to the length of the magnet.
Ideally, a permanent magnet should have a large effect on the sensing element just underneath it and no effect on any of the adjacent elements. The ratio of the field for r small to the field for r large should be very high. Using the two previous approximations, the ratio is:

    H_0/H_∞ = 8r^3/d^3.    (4)

In order to reduce the interaction, the magnets should be made as small as possible. A number of other factors, however, prevent the magnet from being made extremely small. The first is that the length of wrapped wire underneath the bar magnet must be large enough to produce a detectable output signal. In addition, the demagnetizing factors associated both with the wrapped wire and with the permanent magnet must be considered. The demagnetizing field of the permanent magnet is inversely proportional to the square of the length d of the magnet. If d is reduced until the demagnetizing field is greater than the coercive force, the effective pole strength m will be reduced. In the case of the wire used as a sensing element, the demagnetizing field of the flux reversed must be less than the applied driving field.
Boards containing the permanent magnet arrays are prepared by etching sheets of Vicalloy I which have been bonded to plastic boards. Vicalloy tape can be obtained in strips about 4 inches wide and 2 mils thick. The Vicalloy is heat treated to produce a saturation magnetization of 5000 gauss and a coercive field of 200 oersteds. The individual magnets are etched using the standard photoresist etched-wire technique. Master negatives are prepared such that the individual magnets may be removed by masking out their positions on the negative. Consequently, all information patterns may be prepared from one master negative. The flat form of the magnet simplifies its positioning over the bit location. The bar magnet used is 20 × 60 × 2 mils. The direction of magnetization is parallel to the long dimension.
The card containing the magnets is placed in registration by guide pins and is pressed firmly against the
solenoids by springs. Consequently, the separation of
the permanent magnet from the sensing wire is only a
few mils. The two-pole approximation cannot be used to
determine the true magnetic field for distances comparable to the bar magnet length. Experimentally, the
magnetic field on the wire beneath the permanent magnet is about 20 oersteds. The field on the nearest
neighbor is about 1 oersted.
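As a rough numerical check (ours, not the author's), (1)-(4) can be evaluated for the magnet actually used, d = 60 mils, at the nearest-neighbor word spacing of 3/16 inch; the pole strength m enters only as a scale factor:

    def two_pole_field(r, d=60.0, m=1.0):
        """Axial field of poles +/-m spaced d apart, at perpendicular
        distance r from the magnet center -- eq. (1); mils, arbitrary m."""
        return m * d / ((d / 2.0) ** 2 + r ** 2) ** 1.5

    r_neighbor = 187.5   # 3/16 inch in mils
    print(two_pole_field(0.0) / two_pole_field(r_neighbor))
    # ~250; the asymptotic ratio (4), 8*r**3/d**3, gives ~244

The measured ratio, about 20 oersteds under the magnet to about 1 oersted at the neighbor, is far smaller, which is consistent with the failure of the two-pole model at distances comparable to the magnet length.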
The magnets were spaced unequally in the three dimensions of the present design. In order to reduce the inductance of the solenoid strip encompassing the sensing wires, the wires are spaced 100 mils apart, which was considered the minimum distance needed to avoid lateral interactions. To prevent the permanent magnets of one word


from acting too strongly upon the sensing wires in the next word, the individual solenoids are separated 3/16 inch. This distance should be kept small to minimize the length of line over which the output signal must travel before reaching the detecting amplifier. Finally, the individual frames are spaced about 1/2 inch center to center. One quarter of an inch of this spacing is used for the solenoids, the sensing wires, and the supporting board. The remaining 1/4 inch is taken up by the permanent magnet card and its spring assembly. Thus it is quite easy to slip the card in and out, as is shown in Fig. 6.

Fig. 6-Rear view of the 512-word memory module. The magnet board is being inserted. It is held against the memory frame by a spring.
THE WRAPPED WIRE USED FOR SENSING

The twistor wire is shown in Fig. 7. A three-mil copper wire has a magnetic tape wrapped around it at an angle of about 45°. The particular material used is 4-79 permalloy. The tape has a coercive force of about 3 oersteds and a cross section of 5 × 0.3 mils. The length of the wire per bit is determined by the width of the solenoid employed and is made the same width as the bar magnet for simplicity, i.e., 60 mils.
The cross section of the tape must be adjusted to satisfy a number of conditions. First, the ratio of the bit length to tape thickness determines the demagnetizing field. It is desirable to keep the demagnetizing field small since it decreases the effective driving field. Also, the thickness of the tape should be kept below 1/2 mil to insure that the eddy current losses are not excessive. The amount of material determines the size of the access core, and no more material should be included than is necessary to provide a detectable signal. Finally, the wrapped wire consists of a copper conductor used for the transmission of information, wrapped with a magnetic tape used for the detection of information at particular locations. Since the magnetic material acts as a loading on the transmission line, it is desirable to keep the


amount of magnetic material to a minimum. The sense wire is a small copper wire and has an appreciable resistance per unit length. It is desirable to use a large diameter wire to minimize the attenuation. Unfortunately, the output signal is determined by the number of times per bit length that the flattened tape wraps around the center conductor. For a given wrapping pitch and bit length, the number of wraps decreases as the diameter increases. Thus, the output signal would be reduced. A compromise must be made between the attenuation, which produces a variation of output signal from near bit locations to far bit locations, and the amplitude of the output signals.
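The wrap-count trade-off can be made concrete with a little geometry (our reading, not a formula from the paper): at a 45° wrap the tape advances roughly π times the wire diameter per turn, so the wraps per 60-mil bit fall as the conductor is made larger:

    import math

    for diameter in (3.0, 5.0, 10.0):            # copper wire diameters, mils
        wraps_per_bit = 60.0 / (math.pi * diameter)
        print(diameter, round(wraps_per_bit, 1))  # ~6.4, 3.8, 1.9 wraps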

Fig. 7-Helical magnetization by wrapping. The permalloy tape, 0.005 inch wide by 0.00033 inch thick, is wrapped continuously around the 3-mil copper conductor.

The uniformity of the output signals is improved if the wire is magnetized in one direction. Consequently, the wrapped wire is magnetized, before the memory is placed in operation, by the passage of a dc current down the wire. The permanent magnet field is directed to maintain the continuous magnetization of the wire. The drive field must switch the wrapped wire into an opposite state of magnetization. As a result, a demagnetizing field is created which opposes the drive field. When the core switch resets the bit, the demagnetizing field is in the direction to aid the resetting of the bit. Thus, the wire will remain in a uniformly magnetized state.
THE BIASED CORE SWITCH

An equivalent circuit for the biased core switch with a constant current input is shown in Fig. 8. The net current drive is the number of ampere-turns of the bias, since each of the X and Y currents provides the same number of ampere-turns as the bias. The core can be represented by two circuit elements. The first is a current sink of NI_0, which represents the fact that a given number of ampere-turns must be applied to the core before it begins to switch. The core, during the switching operation, acts as a resistance R_c,4 which is proportional to
4 E. A. Sands, "Behavior of rectangular hysteresis loop magnetic
materials under current pulse conditions," PROC. IRE, vol. 40, pp.
1246-1250; October, 1952.

the total flux of the core divided by the product of the switching coefficient5 of the material in the core and the mean path length around the core. The core, in switching, generates a voltage which forces current into the load Z_L, which represents the solenoid and its sensing wires. In order to switch current effectively through the core, it is desirable that R_c be made large compared to Z_L. One possibility is to make the flux in the core large.
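Read as a Norton-style source (our interpretation of the Fig. 8 model, with invented R_c and Z_L values), the overdrive beyond the NI_0 sink divides between R_c and the load, so a large R_c/Z_L ratio delivers nearly all of it:

    def load_current(NI, NI0, Rc, ZL):
        """Drive delivered to the load by the Fig. 8 model, read as a
        Norton source: overdrive past the NI0 sink, shunted by Rc."""
        overdrive = max(NI - NI0, 0.0)   # below the sink level nothing switches
        return overdrive * Rc / (Rc + ZL)

    # With the 2.4 amp-turn net drive and 0.6 amp-turn sink quoted above,
    # raising Rc relative to ZL approaches the stated 1.8-amp solenoid drive:
    for rc in (2.0, 10.0, 50.0):         # invented Rc values; ZL = 1
        print(rc, round(load_current(2.4, 0.6, rc, 1.0), 2))   # 1.2, 1.64, 1.76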

Fig. 8-An equivalent circuit for a biased core switch for a constant current input I. N is the ratio of input to output turns. The switch core is represented by the current sink NI_0 and the resistance R_c.

However, the total back voltage on a selection line is a function of the total inductance of the cores on the line. To reduce the inductance and hence the back voltage, it is desirable to keep the core cross section, and thus the total flux, small. The minimum flux must be sufficient to supply an output voltage to drive the required current into the load Z_L long enough to complete the sensing.
The resistance of the solenoid, its inductance, and the flux which must be switched in each of the wrapped wire elements contained in the solenoid determine the required flux of the access core. Since the flux is equal to ∫v dt, a convenient flux unit is the mv-µsec. The flux required by the 26 wrapped wires is about 30 mv-µsec. The flux which must be supplied to drive the inductance with a current of 2 amp is about 50 mv-µsec, while the flux necessary to drive the current through the resistance of the solenoid for 1 µsec is 100 mv-µsec. A total of 180 mv-µsec of flux must be supplied as a minimum by the access core. The cross section of the core is made sufficiently large to contain 300 mv-µsec of available flux. Only about 200 mv-µsec of the available flux is used.
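The budget just quoted is simple arithmetic; tallied in the paper's mv-µsec unit:

    # Access-core flux budget, in mv-usec, using the figures in the text.
    wire_flux      = 30    # reverse the flux in the 26 wrapped-wire bits
    inductive_flux = 50    # establish the 2-amp solenoid current
    resistive_flux = 100   # hold that current through the solenoid
                           # resistance for 1 usec

    print(wire_flux + inductive_flux + resistive_flux)  # 180, the stated minimum
    print(300 - 200)   # the core holds 300; only about 200 is actually used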
The access core is made of ferrite containing cadmium as well as manganese and magnesium. The core has an extremely flat hysteresis loop, B_r/B_s ≈ 0.93, and a very low coercive field, H_c ≈ 0.15 oersted. If a permalloy core were used, only a few wraps would be required to supply the 300 mv-µsec of flux. Tape cores with a small number of wraps of 1/8- or 1/4-mil tape are not as square as the ferrite core. The tape core has a higher dynamic resistance for a given flux than the ferrite core, but this factor is less important than the superior squareness and the lower cost of the ferrite core. In fact, the dynamic resistance of the ferrite core is so large that the current
5 N. Menyuk and J. B. Goodenough, "Magnetic materials for digital computer components," J. Appl. Phys., vol. 26, pp. 8-18; January, 1955.

regulation of the switch is extremely good. The biased
core switch improves the rise time of the current delivered to the load compared to the rise time of the drive
currents.
SUMMARY

A 512-word, 26-bit-per-word module of a magnetic matrix memory has been developed for the storage of semipermanent information. The memory is capable of random addressing at high cycling speeds. The information is stored in an array of permanent magnets. The presence of a permanent magnet is sensed by a wire wrapped with magnetic tape placed adjacent to the permanent magnet. The magnetic materials used in the memory are permalloy tape, Vicalloy tape, and ferrite cores. As used, all materials are relatively insensitive to any change in the ambient temperature and, consequently, the memory may be operated over a wide temperature range.
The electrical characteristics of the memory module can be compared to those of a ferrite core memory. By the use of four turns on the access core, the drive current into the magnetic switch becomes 600 ma. The back voltage, however, is low. The output signal at the word location is 8 mv. The transmission properties of the sense wire must be considered for large memories. The propagation time per bit is about 0.06 mµsec/bit. Since the sense wire is resistive, the output signal may be attenuated as much as 1 db per 512-word module. Larger memories may be made by interconnecting several


modules, but the number of modules which may be connected to one sense amplifier is limited by the attenuation of the wrapped wire.
The present memory is an initial model which has been developed to demonstrate the feasibility of the system as well as to meet certain operational requirements. Models will be made soon which have improved characteristics. In particular, the size of the memory may be reduced by a factor of two or more. The output signal from the sensed bit may be increased. The transmission line properties of the structure may be improved by making the conducting wire larger.
ACKNOWLEDGMENT

The semipermanent memory is the result of the effort of several people, and the author is acting as the reporter for the group. J. Janik suggested the use of the permanent magnet and wrapped wire scheme. Dr. H. L. Stadler contributed the magnetic design, while A. J. Munn has been responsible for the mechanical design of the memory module. Test apparatus design and operation have been contributed by J. A. Ruff and J. L. Smith. The design of the permanent memory has been carried out in parallel with the development of a variable memory under the supervision of A. H. Bobeck. The permanent memory has benefited from the early design and the suggestions of this group. The present project was under the supervision of Dr. F. B. Humphrey. The author gratefully acknowledges his assistance in the preparation of the present paper.

A Card Changeable Nondestructive Readout
Twistor Store *
J. J. DEBUSKE†, J. JANIK, JR.†, AND B. H. SIMONS†

INTRODUCTION

WITH the steady increase in the required speed
and complexity of computing and data processing equipment necessary to handle today's problems, the role of the associated storage systems has been
expanding. Generally, a rather large amount of permanent or semipermanent storage capacity is required to
store information such as programs, constants, and
tables. Besides having a large bit capacity at low bit
cost, such stores must be fast, reliable, and flexible in

* The work reported in this paper was done for the U. S. Department of Defense under Contract DA-30-069-ORD-1955.
† Bell Telephone Labs., Inc., Whippany, N. J.

addressing. The twistor sensing an array of small permanent magnets meets these requirements admirably.
The "Twistor" as a memory element was conceived
by Bobeck.1 It may be used as either a memory or a
sensing device. It is as a sensing device that it is used
in the store described in this paper. The details of the
magnetic structure are given in a companion paper. 2
This memory matrix utilizes cards containing a space
for a small magnet at each bit position. A magnet is
1 A. H. Bobeck, "A new storage element suitable for large-sized memory arrays-the Twistor," Bell Sys. Tech. J., vol. 36, pp. 1319-1340; November, 1957.
2 D. H. Looney, "A twistor matrix memory for semipermanent
information," this issue, p. 36.



present if a zero is stored and absent if a one is stored.
The model described in this paper is word-organized.
One row across a card forms a word. These cards are
placed over columns of twistor wires, one twistor wire
for each bit which is used to sense the presence or absence of a magnet for the bits of the word being addressed.
By combining memory cells using the cards and
twistor wire arrays with suitable access, timing, and
readout circuitry, a store is obtained compatible with
high-speed data systems. Furthermore, the information
on the cards is permanent and easy to check. The
stored information can be readily changed by removing
the cards and replacing them with others.


THE 512-WORD, 26-BIT-PER-WORD, STORE

A block diagram of the store is shown on Fig. 1. Information is stored on 16 cards. Each card stores 32 26-bit words. Computer access to this stored information
is via the timing and memory access circuitry. In order
to sense the information stored in a particular word
location, the computer must supply a "read start" signal
and the binary coded address of the desired word. Upon
receipt of the "read start" signal, the timing circuit
generates the internal timing necessary to complete the
operation. The information read out of the memory
module is detected and amplified by the read detectors
before being passed on to the computer at a logic level
of -3.5 volts.

Fig. 1-Basic block diagram of the store.


THE TWISTOR MEMORY MODULE

Each of the 512 words has a word access core and a
word solenoid associated with it, as shown on Fig. 2. A
group of 26 twistor wires, encapsulated in a plastic tape,
threads the word solenoids. The region along each
twistor wire, defined by the width of the associated
solenoid, constitutes one bit of a word. A twistor wire consists of a 4-79 permalloy tape of cross section 0.005 inch by 0.0003 inch wrapped around a 0.003-inch copper wire at an angle of about 45°. The 26 twistor wires
are magnetized in the same direction, which is defined as
the set state.
To simplify the description of the operation of the
basic memory cell in Fig. 2, only a vertical winding is
shown threading the cores. A current "I" of sufficient
amplitude in the vertical wire threading a word core reverses the direction of the magnetic flux in the core.
This change in flux induces a current in the solenoid
which produces a magnetic field strong enough to reverse the direction of magnetization of the sections of
the twistor wires within the solenoids.
A twistor bit having an associated magnet is prevented from switching its direction of magnetization by
the presence of the external magnetic field of the magnet
(bit 25, Fig. 2). This field is in the same direction as
the set state of magnetization of the twistor wire. The
rate of change of flux in a twistor bit that is switched induces a voltage in the twistor wire. The passage of cur-

WORD ACCESS
CORE

Fig. 2-Basic memory cell.

rent in the opposite direction through the word access
core resets the core, which in turn restores the switched
twistor bits to their set state. Therefore, in this application, the twistor elements are used to sense the presence
or absence of permanent magnets. The presence of a
permanent magnet represents a stored zero, and the
absence of a magnet represents a stored one.
The 512 core-solenoid combinations are arranged and
wired as a biased core switch3 in a 32 by 16 matrix. (See
Fig. 3.) Note that the 26 tape-encapsulated twistor
wires are in one continuous folded belt. A vertical drive
winding threads each core of a particular column with
four turns. A horizontal drive winding threads each core
of a particular row with four turns. The vertical and
3 J. A. Rajchman, "A myriabit magnetic-core matrix memory," PROC. IRE, vol. 41, pp. 1407-1421; October, 1953.


Fig. 3-Memory module biased core switch.

horizontal drive currents and windings produce aiding flux at a cross point in each core. A constant current bias winding threads each core of the matrix. The coincidence of currents in the horizontal and vertical drive windings of a word core overcomes the effect of the bias, and thus switches the core, resulting in the readout of the information stored in that word location. The switched bits are restored to their former states by the action of the bias current, which resets the word access core when the horizontal and vertical drives are removed. In order to select word 1, Fig. 3, it is necessary to supply pulse currents, of sufficient amplitude and duration, on horizontal drive winding "a" and vertical drive winding "1".
With horizontal and vertical drive currents of 0.6 ampere into 4 turns each, and a bias current of 2.25 amperes, the twistor output for a stored one is approximately 8 mv and about 2 mv for a stored zero. When a word access core is switched, the other 31 cores of the row and 15 cores of the column are shuttled by the drive currents. The effect of these shuttle currents and inductive pick-up noise is reduced by staggering the time of application of the drive currents and by strobing the read detector outputs.
In the present 512-word memory module, the transmission properties of the twistor circuit produce an output signal attenuation of as much as one db and a propagation time of about 35 mµsec. In order not to penalize the signal-to-noise ratio and timing of the words most remote from the read detectors, two modules of the present type may be read with a single set of read detectors.

THE APPLICATION OF THE MEMORY MODULE TO THE STORE

A diode, WECo type 1N2146, is added in each horizontal and vertical drive lead to block "sneak" paths. The leads are interconnected as indicated on Fig. 4.
This arrangement permits the selection of one of the 16
horizontal drive leads by a 4 by 4 selection switch and
the selection of one of the 32 vertical drive leads by an 8
by 4 selection switch. The selection switches and the decoders comprise the memory access circuits. (See Fig.
5.) Under the control of the binary address supplied by
the computer, the decoders select the horizontal and
vertical drivers required for access to a particular word.
The selected drivers supply the necessary drive currents
under the control of the timing circuit.
The outputs of the twistor wires are amplified by the
read detectors to logic level (- 3.5 volts) and passed on
to the computer during the strobing interval. The strobe
gates of the read detectors are controlled by the timing
circuit.
When the timing circuit receives a "read start" signal
from the computer, it generates the horizontal drive,
vertical drive, and strobe signals, as shown in Fig. 6.
Note that the vertical drive signal occurs approximately
0.5 µsec after the horizontal drive signal. The selected
drive currents flow in the word-access core matrix during the interval the drive signals are applied. The solenoid current and the twistor output signal are shown on
the figure for reference. The strobe gates of the read
detectors are activated during the strobe interval of 0.2 µsec commencing approximately 1.4 µsec after the "read
start" signal.
The function of the initial magnetizing circuit (Fig.
5) is to insure the uniform magnetization of the twistor
wires in the same direction. This is accomplished by passing a 10-µsec, 200-ma current pulse through the
twistor wires, and must be done before the store is
placed in operation or whenever the program cards are
installed or replaced in the memory. This circuit is operated by a switch located on the control console of the
computer.
DECODER

The decoding of the address is done in "double-rail" logic. To address the 512-word store, 9 binary address bits and their complements are required, these being divided into 4 groups. Three bits and their complements are used to select one of the 8 top vertical drivers (Fig. 5); 2 bits and their complements are used to select one of the 4 bottom vertical drivers; 2 bits and their complements are used to select 1 of the 4 right horizontal drivers; and 2 bits and their complements are used to select 1 of the 4 left horizontal drivers. The address bits from the computer are the outputs of the flip-flops of the address register and appear as steady ground or -3.5 volt signals. The address bits are assigned so that the first 32 words will appear in order from 1-32 on the first program card in the first slot, and the following words through 512 will appear consecutively in the adjacent slots.
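The grouping lends itself to a short illustrative model. In the sketch below, the particular assignment of address bits to the four groups is assumed for illustration; the paper does not specify it:

    def decode(address):
        """Split a 9-bit word address (0-511) into the four driver groups."""
        assert 0 <= address < 512
        top_vertical     = (address >> 6) & 0b111   # 3 bits -> 1 of 8
        bottom_vertical  = (address >> 4) & 0b11    # 2 bits -> 1 of 4
        right_horizontal = (address >> 2) & 0b11    # 2 bits -> 1 of 4
        left_horizontal  = address & 0b11           # 2 bits -> 1 of 4
        return top_vertical, bottom_vertical, right_horizontal, left_horizontal

    # 8 x 4 = 32 vertical and 4 x 4 = 16 horizontal combinations together
    # address the 32 by 16 = 512 word locations.
    print(decode(0), decode(511))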

Fig. 6-Timing diagrams: read start signal (from computer), horizontal drive, vertical drive, solenoid current, twistor output, and strobe, vs. time in microseconds.

Fig. 4-Interconnection of drive leads.
Fig. 7-Decoder logic diagram.

One of the 2 horizontal decoders is illustrated in the logic diagram of Fig. 7. The inverters are provided for fan-out purposes. The 2 bits and their complements are combined with the horizontal drive signal (in "and" circuits) to produce a 1-out-of-4 selection. The selected lead remains at -3.5 volts for the duration of the horizontal drive signal. Similar logic is used in the vertical decoders for address decoding.

Fig. 5-Complete store block diagram.

DRIVERS

The horizontal and vertical drive currents are each controlled by a pair of drivers capable of providing a 0.6-ampere pulse with a 0.5-µsec rise time. A vertical winding (4 turns, shown as one wire) with drivers and cores is shown on Fig. 8. The bottom driver of the pair has a current monitoring resistor R8 and a feedback network composed of diode CR1 and resistors R7 and R8. The feedback network monitors, and thereby regulates, the drive current of 0.6 ampere. Transistor Q1 is a WECo type 2N560, and transistors Q2 and Q3 are WECo type 2N1072.


Fig. 8-Driver schematic.

Fig. 9-Read detector schematic.
READ DETECTOR

The read detector schematic is shown in Fig. 9. Transistors Q1 and Q2, WECo type 2N559, comprise a stabilized feedback linear amplifier4 whose gain is given to a good approximation by

    µ_21 = 1 + R3/R4.

Transformer Tl is used to match the impedance of the
twistor circuit to the input impedance of the amplifier
stage. The third stage, transistor Q3, is normally held in
a conducting condition by a current from the minus
voltage supply through resistor R7. The signals at the
collector of Q2 are of such a polarity as to tend to cut
off the collector current of transistor Q3. A thresholding
function is obtained because an output signal (one)
from the twistor, amplified by Ql and Q2, must overcome the base bias of Q3. The collector circuit of transistor Q3 is designed to be compatible with the connecting logic circuitry of the computer. Transistor Q4 is
used to "strobe" the output of transistor Q3. This provides discrimination against signals coming from the
memory except when the output signal is expected.
Clearly, both a twistor output signal and the strobe signal must be present simultaneously or no output results.
Both Q3 and Q4 are also 2N559's.
TIMING CIRCUIT

All of the timing for the store is generated within the store. The "read start" signal from the computer is the

4 F. D. Waldhauer, "Wide-band feedback amplifiers," IRE TRANS. ON CIRCUIT THEORY, vol. CT-4, pp. 178-190; September, 1957.

Fig. 10-Timing circuit logic diagram.

only externally generated signal and is used to initiate a series of univibrators (U). (See Fig. 10.) The univibrator is basically a one-shot multivibrator whose pulse width is determined by an RC time constant. One univibrator is used to generate the 3.0-µsec horizontal drive pulse (Fig. 6). Two univibrators are used to generate the vertical drive pulse, one univibrator to provide the 0.5-µsec delay and the other to determine the pulse width. Similarly, 2 univibrators are used to generate the strobe signal.
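The resulting schedule can be summarized as below (the (start, width) representation is ours; the vertical drive width is assumed so that both drives end together, which matches Fig. 6 only approximately):

    # Univibrator-generated schedule, in usec after the "read start" signal.
    timing = {
        "horizontal drive": (0.0, 3.0),   # one univibrator, 3.0-usec width
        "vertical drive":   (0.5, 2.5),   # 0.5-usec delay + width univibrators
        "strobe":           (1.4, 0.2),   # delay + width univibrators
    }

    for signal, (start, width) in timing.items():
        print(f"{signal:16s} on at {start} usec, off at {start + width} usec")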
512-WORD STORE (MECHANICAL CONSTRUCTION)

The store is assembled with its twistor memory module, decoders, drivers, timing circuit, and read detectors as one package. (See Fig. 11.) The weight of the store is 70 pounds and its over-all dimensions are 9 by 21 by 22 inches, excluding power supplies.


Fig. 11-512-word store (mechanical construction).

The memory module is located toward the front of the package for ease of card changing. The read detectors are located immediately to the rear of the memory to keep the low-level signal leads as short as possible. Two partial planes are used to mount the read detectors. Below these are mounted the memory module, timing circuit, and diode circuits. The bottom plane is reserved for the decoders and drive units.
The entire store can be disassembled by separating the individual planes. Connectors at the front and rear of each of the planes carry signals upward and downward between planes. Computer access is outward through rear connectors.
The store contains 200 2N559 diffused base switching transistors, 20 2N560 diffused silicon switching transistors, 40 2N1072 silicon high-current switching transistors, and 48 1N2146 silicon diodes. Approximately 50 watts of power is required for the operation of the store. This power is supplied by 3 direct current supplies, -12 volts, +12 volts, and +32 volts.

PROGRAM CARD

The program card consists of 32 rows of 26 or less Vicalloy magnets per row. (See Fig. 12.) The magnets are 0.060 inch long, 0.020 inch wide, and 0.002 inch thick. Two types of cards are used because the twistor tape folds back and forth through the memory and the twistor wires are uniformly magnetized in one direction. Red cards are used in the even slots and are magnetized left to right; green cards are used in the odd slots and are magnetized right to left.
Each card has a tab on which the programmer marks the card number and the program symbol when the card is prepared. Lines are provided on the card separating the 26 bits of each word into five 5-bit columns and one 1-bit column to assist the programmer. Further, each word is numbered for his use.

Fig. 12-Program card.

SUMMARY

Information is stored on removable cards which can be prepared quickly by stenographic personnel if desired. Twistor elements are utilized to sense the information, which is in the form of tiny permanent magnets. This store uses a basic 512-word twistor memory module and is packaged with all of its associated solid-state circuitry as one integrated and removable unit. It is operated on a 5-µsec cycle time and requires approximately 50 watts of power. A 2048-word, 26-bit-per-word store is under development; in principle it operates in the same manner as the 512-word store, except that four twistor memory modules are wired together.

ACKNOWLEDGMENT

This work has been done under the supervision and
guidance of J. E. Corbin. The mechanical design has
been accomplished by W. T. Drugan and members of
his group, C. H. Williams, W. L. Richardson, and C. W.
Mensch. The read detector was designed by D. C.
Weller, and the organization of the access circuitry proposed by W. B. Gaunt.


Square-Loop Magnetic Logic Circuits
EDWARD P. STABLER†

INTRODUCTION

FERRITES, ferroelectric capacitors, and some ferromagnetic materials all possess bulk characteristics which are usually referred to as square-loop properties. It has been recognized for some time that these properties are particularly compatible with digital-device requirements. This paper attempts to categorize the various methods which are utilized in device synthesis. In addition, it introduces an equivalent circuit model of the devices to determine some of their attributes and limitations. The model is essentially a nonlinear resistive and reactive network whose parameters are determined by the internal state of the device and by its present state of excitation. For the sake of simplicity, the discussion will be limited to magnetic devices. Other authors have pointed out that a direct ferroelectric equivalent exists for multiple-aperture magnetic devices.1

SIGNIFICANT PROPERTIES OF NETWORK ELEMENTS

The significant properties of an element of square-loop material are threshold, memory, and saturation. These properties are described more precisely below. Some rather gross assumptions are made; however, a more refined description does not seem warranted because of the increased analytical difficulty. The basic magnetic element, shown in Fig. 1, is a cylinder whose length is considerably larger than either of its other dimensions. The properties of the element are described in terms of the terminal magnetic variables. The cylinder is a two-terminal circuit element. The magnetomotive force M between the two terminals is a function of the flux φ passing through any cross section. In this approximation, the magnetic field is assumed to be constant over the length of the element.

Fig. 1-Elementary magnetic element: a cylinder of length l and cross-sectional area A, with M = ∮H·dl and φ = ∬B·dA.

Threshold Property

For values of M below a threshold value M_0, the terminal relationship is

    dφ/dt = K_1 dM/dt.

However, if M exceeds M_0, switching may take place. The terminal variable relationship is

    dφ/dt = K_1 dM/dt + K_2 (M - M_0).

The first term represents the reversible flux change and the second term represents the irreversible flux change during switching. The element has a negative threshold as well as a positive one. If M is negative and has a magnitude greater than M_0, switching may also occur; hence

    dφ/dt = K_1 dM/dt + K_2 (M + M_0).

Memory Property

The change in φ during switching is the sum of two terms. The first term, proportional to K_1, is a reversible term. It represents the lossless linear flux change caused by an applied magnetomotive force. The second term, proportional to K_2, is an irreversible change. A net change in φ proportional to K_2 will occur during switching. The implication is that φ has many stable values for zero applied magnetomotive force. The memory of the element is associated with this property. We can define an internal state S, which is related to the flux φ in the element with zero applied field, as

    S = K_0 φ    with M = 0.

Choose K_0 so that the magnitude of S never exceeds unity.

† General Electric Co., Syracuse, N. Y.
1 T. E. Bray and B. Silverman, "Shaping magnetic and dielectric hysteresis loops," Proc. Special Tech. Conf. on Solid-State Dielectric and Magnetic Devices, Catholic University of America, Washington, D. C.; April, 1957.

unity.
Saturation Property

The total irreversible flux change that can take place is limited by saturation. When the material is saturated by a positive drive, the terminal relationship is

    dφ/dt = K_1 dM/dt

for any applied magnetomotive force greater than -M_0. In negative saturation a similar expression is obtained:

    dφ/dt = K_1 dM/dt


for all M less than +M_0. Saturation is related to the irreversible flux changes which have taken place. We have chosen the constant K_0 so that

    S = +1 for positive saturation

and

    S = -1 for negative saturation.
Now we can summarize the complete set of terminal relationships.

1) We define S = K_0 (φ - K_1 M), where we have subtracted the reversible flux contribution caused by an applied mmf.

2) dφ/dt = K_1 dM/dt    for |M| < M_0.

3) dφ/dt = K_1 dM/dt    for [S = 1, M > -M_0].

4) dφ/dt = K_1 dM/dt    for [S = -1, M < M_0].

5) dφ/dt = K_1 dM/dt + K_2 (M - M_0)    for [S < 1, M > M_0].

6) dφ/dt = K_1 dM/dt + K_2 (M + M_0)    for [S > -1, M < -M_0].

The constants K_1, K_2, K_0, and M_0 depend on the bulk material and the dimensions of the element. M_0 is usually a function of S, but this fact will be neglected here.
A fourth property implicit in the above relationships is symmetry. If we conduct an experiment starting from state S and applying a drive M(t), we observe the response φ(t); starting instead from state -S and applying -M(t), we would observe -φ(t).
All of the terminal relationships listed are linear differential equations. The magnetic element is nonlinear
because the applicable differential equation depends on
the applied drive and the internal state. The next section introduces a nonlinear electrical circuit equivalent
to the magnetic circuit element.
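As a check on relations 1)-6), a small numerical sketch (our construction, with arbitrary constants) integrates the piecewise model through a drive pulse; the state S switches toward saturation while the drive exceeds M_0 and is remembered after the drive is removed:

    def simulate(drive, K0=1.0, K1=0.05, K2=2.0, M0=1.0, dt=1e-3):
        """Integrate the piecewise terminal relationships 1)-6) for a
        sampled drive M(t); returns the internal-state history S(t)."""
        phi, m_prev, history = 0.0, 0.0, []
        for m in drive:
            dphi = K1 * (m - m_prev)              # reversible term
            s = K0 * (phi - K1 * m_prev)          # relation 1)
            if m > M0 and s < 1.0:                # relation 5)
                dphi += K2 * (m - M0) * dt
            elif m < -M0 and s > -1.0:            # relation 6)
                dphi += K2 * (m + M0) * dt
            phi += dphi
            # saturation: clamp so that |S| never exceeds unity
            phi = min(max(phi, -1.0 / K0 + K1 * m), 1.0 / K0 + K1 * m)
            m_prev = m
            history.append(K0 * (phi - K1 * m))
        return history

    s = simulate([2.0] * 1500 + [0.0] * 500)   # mmf pulse above M0, then removed
    print(s[1400], s[-1])                      # saturates at +1, then remembered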
EQUIVALENT ELECTRICAL CIRCUIT

The equivalent electrical circuit will consist of linear electrical components and switches which are actuated at the turning points |S| = 1 or |M| = M_0. We choose to relate

    dφ/dt ~ electrical current i,
    M ~ voltage e.

The constraints on dφ/dt and M, imposed when several elements form a magnetic network, are exactly the constraints on i and e in electrical networks. In Fig. 2 we show three two-terminal electrical networks for which the relationship of the terminal variables e, i is identical to the relationship between M and dφ/dt. The equivalent electrical network that should be used is a function of the internal state S and the applied voltage. A capacitor appears in the equivalent circuit because we are forming the analog of the magnetic variable relationship. This relationship is different from that observed at the electrical terminals of a winding on the magnetic element.
If an electrical circuit, such as a drive winding or an output winding, is coupled to an elementary magnetic element, the relationship between the terminal variables M and dφ/dt is modified. Fig. 3(a) shows a magnetic element with a coupled electrical circuit which has been reduced to its Norton equivalent circuit. The previously derived circuit is appropriately modified by the addition of a series impedance numerically equal to N²Y_e and a series voltage source numerically equal to NI_e. Modification of the magnetic-network-element parameter relationships by means of coupling to electrical circuitry is useful in some synthesis problems.
When two or more magnetic elements are coupled by
the same electrical circuit, an extension of the preceding
technique is used to obtain an equivalent circuit. This
procedure is not always desirable because there is no
longer a close relationship between the graph of the derived equivalent circuit and the geometry of the magnetic device. In such cases, it is preferable to solve the
network using a mixed (magnetic and electrical) set of
independent variables.
PHYSICAL INTERPRETATION

It is reasonable to expect that the equivalent circuit
parameters can be determined from the dimensions of
the element and the properties of the bulk material.
From the magnetic material properties, we obtain:

B_s = saturation flux density,
H_0 = threshold magnetic field,
S_w = switching constant,
µ = small-signal permeability.
The magnetic element has a length l and a cross-sectional area A. We solve for the circuit parameters:
    K_0 = 1/(B_s A),

    K_1 = µA/l,

    K_2 = 2 B_s A/(S_w l),

    M_0 = H_0 l.
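For concreteness, a small sketch (ours, with invented material constants) evaluates these parameter formulas:

    # Invented material constants, not taken from the paper.
    Bs, H0, Sw, mu = 2500.0, 0.5, 0.5, 150.0   # flux density, threshold field,
                                               # switching constant, permeability
    l, A = 1.0, 0.01                           # element length and area

    K0 = 1.0 / (Bs * A)           # state per unit flux
    K1 = mu * A / l               # reversible term coefficient
    K2 = 2.0 * Bs * A / (Sw * l)  # switching term coefficient
    M0 = H0 * l                   # threshold mmf

    print(K0, K1, K2, M0)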

Fig. 2-Equivalent electrical circuits.

Fig. 3-Modified equivalent circuits for magnetic element coupled to an electrical circuit.

In practice, these values are reasonably accurate with the exception of the solution for K_1. K_1 represents the reversible flux-change term and depends quite noticeably on dimensional relationships other than those mentioned, upon the electrical winding configuration, and upon the internal state S. One contributing reason for this effect is that the small-signal permeability of the materials used is often less than two orders of magnitude greater than that of air. It has been assumed that the flux entering or leaving the magnetic element through its walls was negligibly small compared to the amount of flux entering or leaving the ends of the element. Careful design is necessary if this assumption is to be a useful one.
Although the reactive nature of the equivalent circuit plays an important part in the analysis of a given device, it does not have the nonlinear property utilized in the digital-device synthesis. For this reason only the nonlinear resistive portion of the equivalent circuit will be used during the discussion of synthesis techniques. As a result, the network will consist of linear resistors, batteries, and switches which provide the nonlinear characteristic. In addition, an internal state of S = ±1 will be indicated by an arrowhead in the usual way. Elements in other internal states will have no arrowhead notation. Once a network has been synthesized in graph form, it is essential to perform a thorough analysis to fix the optimum geometric ratios and winding configurations. Fig. 4 shows the simplified equivalent circuit and notation.
LOGICAL FUNCTION SYNTHESIS

Fig. 4-(a) Elementary unit, (b) simplified equivalent circuits, (c) graphical notation.

One method of synthesis may be called flux steering. In this type of operation, the device is put into an ag-

gregate internal state dependent on the input binary
variables. For each internal state, the output binary
function is either equal to unity or equal to zero. When
the internal state corresponds to a unity output, the
device will respond to a drive D by having a large flux
change all along a closed loop L called the output loop.
The output function is zero when the response to the
drive D is different from that previously described.
Either the path along which switching takes place (along which dφ/dt is large) will differ from L, or there may be no switching at all. Any Boolean function may be written in canonical form as the conjunction of disjunctive polynomials or, alternatively, as the disjunction of conjunctive polynomials. We choose the former

representation to show a direct device synthesis. As an example, let

    g = f(X1, X2, X3, X4)
      = (X1 + X2)(X3 + X4)(X1 + X3),

where g is a binary function of the four binary-input variables.
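The behavior the gate must realize can be tabulated directly from g. A minimal sketch in Python (illustrative only, merely enumerating the required truth table):

    # Truth table of g = (X1 + X2)(X3 + X4)(X1 + X3),
    # where + is disjunction and juxtaposition is conjunction.
    from itertools import product

    def g(x1, x2, x3, x4):
        return (x1 or x2) and (x3 or x4) and (x1 or x3)

    for x1, x2, x3, x4 in product([0, 1], repeat=4):
        print(x1, x2, x3, x4, '->', int(g(x1, x2, x3, x4)))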
A device which will generate this function is a three-input AND gate, shown graphically in Fig. 5. It is shown initially in its rest state prior to the reception of input drives. The closed path formed by l1, l2, l3 is the output loop L. The control legs lc1, lc2, and lc3 may be switched during the input period by drive windings controlled by the input variables X1, X2, X3, X4 placed on the control legs. Fig. 6 shows the approximate equivalent circuit of the device during an input period in which

Fig. 5-Graph of three-input AND gate in initial state.

X1 = 1
X2 = 1
X3 = 1
X4 = 0.

This input signal will cause all three elements forming the output loop L to switch. The device may be designed so that the control legs saturate, or reach S = -1, at the same time that the controlled legs l1, l2, l3 reach S = -1. A subsequent drive tending to switch the output loop L will switch each element of the output loop. If the input state had been

X1 = 1
X2 = 1
X3 = 0
X4 = 0,
then leg l2 would not have been switched during the input period. Consequently, l2 would not switch during the output drive. There are a number of variations of this operation which tend to improve its characteristics. The example given, however, sufficiently illustrates the principle of operation of a class of gates called flux-steering gates. We restate this principle for emphasis. In a flux-steering gate, the internal states of a number of elements in an output loop L are controlled by a number of control elements. Switching takes place in every element in L during the output period if, and only if, the output binary function is equal to unity. It is clear from the discussion that theoretically a single flux-steering gate can generate any combinational logical function.
Gates of this type have been constructed at the General Electric Electronics Laboratory. Reasonable power gain is available. Fig. 7 shows photographs of the output signal of a two-input flux-steering gate for unloaded and loaded operation.

Fig. 6-Equivalent circuit of three-input AND gate
during input period.

A second method of synthesis is named flux summation.2 Again the function is written as the conjunction of a number of disjunctive polynomials. If the Boolean function is the conjunction of n disjunctive polynomials, the device will have n input elements, n - 1 shunt elements, and one output element. The device has a rest state which precedes any input time period. The geometry is such that switching takes place in the output element only when all the input elements are switched during the input period. As an example, we will synthesize the function
    g = (X1 + X2)(X3 + X4)(X1 + X3).

A graph of the device in its rest state is shown in Fig. 8. Elements l1, l2, and l3 are input elements; elements l4 and l5 are shunt elements which may be combined if desired; and element l6 is the output element. All elements have the same cross-sectional area. During the input period, elements l1, l2, and l3 may be driven and switched. The element l6 has a high threshold because of its additional

2 N. F. Lockhart, "Logic by ordered flux changes in multipath ferrite cores," 1958 IRE NATIONAL CONVENTION RECORD, pt. 4, pp. 268-278.

Fig. 7-Flux-steering gate output. Scale: 0.05 v per turn per division (vertical) and 0.2 μsec per division (horizontal). Top: unloaded; bottom: loaded.

Fig. 8-Graph of flux-summation gate.

length. As a result, if two or fewer of the input elements are switched, the output element will not switch. The output element will switch only if all three of the input elements are switched. This results from the fact that the two shunt legs will saturate when any two of the input legs switch under the influence of input drives. If all three input elements are switched, the two shunt legs become saturated before the input elements saturate and cannot undergo further large flux changes. Switching in the output element ensues. The threshold M0 of the output element is made considerably greater than that of the two shunt elements.
It is obvious that the device forms a three-input AND gate as shown. Multiple windings on the control elements perform the disjunction operation. On element l1, two windings are placed; one is excited whenever X1 = 1, and the other is excited whenever X2 = 1. The excitation provided by either winding is sufficient to switch element l1. The other control elements are wound in the same way. Various means are available for sensing the output element. The shunt elements may be combined into a single element or may be used to generate other functions of the input variables. This type of gate is only briefly discussed here but is covered thoroughly
by Lockhart.2 A single gate of this type can theoretically generate any combinational logical function. The principle of this gate is restated for emphasis. In a flux-summation gate, switching of an output element is controlled by the presence of shunt elements in parallel with the output element. During the input period, the switching of input elements will cause switching in the shunt elements. Switching will occur in the output element only if the shunt elements saturate before the input elements saturate. Actually, this description has been restricted in order to emphasize the principle.
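The principle admits a simple behavioral model: with n = 3 input elements and n - 1 = 2 shunt elements of equal cross section, the output element switches only when the switched inputs exceed the shunt capacity. A sketch under these idealized assumptions (Python; a behavioral model, not a magnetic simulation):

    # Idealized flux-summation gate: the two shunt legs can each absorb
    # the flux of one switched input element, so the output element
    # switches only when all three inputs switch.
    def flux_summation_output(x1, x2, x3, x4):
        # Each input element carries two windings (the disjunction).
        inputs_switched = [(x1 or x2), (x3 or x4), (x1 or x3)]
        n_switched = sum(inputs_switched)
        shunt_capacity = 2          # n - 1 shunt elements for n = 3
        return int(n_switched > shunt_capacity)

    assert flux_summation_output(1, 1, 1, 0) == 1  # all polynomials true
    assert flux_summation_output(1, 1, 0, 0) == 0  # (X3 + X4) false

The model reproduces the truth table of g above, which is the point of the geometry: the ordering of saturation replaces explicit gating.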
A third type of logical-function synthesis is the relay analog method. It is related to the flux-steering gates but is sufficiently different to warrant separate treatment. The graphs of the relay analog unit in its two possible input states are shown in Fig. 9. The closed loop formed by l0, l1, and le is capable of storing information in the same way as a magnetic-memory core, by saturation of the three elements in a clockwise or counterclockwise direction. All three elements have the same cross-sectional area, and normally le has a higher threshold than either l1 or l0. The closed loop is initially saturated in one direction or the other by means of an input variable X. Suppose X = 1 corresponds to counterclockwise saturation. Now the element is read out by applying one mmf, M, as shown in Fig. 10. As a result of M, a flux change will take place in Zi (the input lead) and in Z1 (the unity-output lead). Little or no irreversible flux change will take place in Z0 (the zero-output lead). It is clear that these devices can be cascaded to form a wide variety of functions. A symmetric tree of three input variables is shown in Fig. 11, along with the relay equivalent. The output leads are Z0, Z1, Z2, Z3, where the subscript refers to the number of input variables equal to unity. Fig. 11 shows the internal state of the device when X1 = 0, X2 = 1, X3 = 1. Theoretically, devices of this type may be cascaded to generate any combinational function.
Relay analog elements have been built at the Electronics Laboratory. The ratio of the irreversible flux changes which take place in the two output paths, Z1 and Z0, is about 15:1 for a single stage. Fig. 12 is a photograph of dφ/dt for the two paths.
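Behaviorally, the symmetric tree routes the readout drive so that the responding output lead is indexed by the count of unity inputs. A minimal sketch of this input-output behavior (Python, illustrative only):

    # The relay-analog symmetric tree of three input variables: the
    # output lead Z_k that responds to the readout drive is the one
    # whose subscript k equals the number of inputs set to unity.
    def symmetric_tree(x1, x2, x3):
        k = x1 + x2 + x3      # path selected by the stored states
        return 'Z%d' % k

    print(symmetric_tree(0, 1, 1))   # -> Z2, the state shown in Fig. 11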
We have described three different techniques of synthesizing a gate to generate a combinational logical function. The problems associated with the input and output circuitry have not been considered. There are a number of different types of circuitry which may be used to couple the magnetic gates.3-7
3 A. Wang, "Magnetic delay line storage," PROC. IRE, vol. 39, pp. 401-407; April, 1951.
4 R. D. Kodis, et al., "Magnetic shift register using one core per bit," 1953 IRE CONVENTION RECORD, pt. 7, pp. 38-42.
5 V. Newhouse and N. Prywes, "High-speed shift registers using one core per bit," IRE TRANS. ON ELECTRONIC COMPUTERS, vol. EC-5, pp. 114-120; September, 1956.
6 D. Loev, et al., "Magnetic core circuits for digital data-processing systems," PROC. IRE, vol. 44, pp. 154-162; February, 1956.
7 M. Karnaugh, "Pulse-switching circuits using magnetic cores," PROC. IRE, vol. 43, pp. 570-584; May, 1955.

Fig. 9-(a) Relay analog unit, (b) relay analog unit storing a "1," (c) relay analog unit storing a "0."

Fig. 10-Output period drives applied to relay analog unit.

Fig. 11-(a) Relay analog symmetric tree, (b) relay symmetric tree.

Fig. 12-Signal and noise responses of the relay analog unit. Time scale: 1 μsec/cm.

In general, all the techniques successfully used in conventional-core logic may be used for multiple-aperture gates. In the next section, we discuss another technique of interconnection fundamentally different from core logic output circuitry and unique to multiple-path logic gates.

MAGNETIC INTERCONNECTION

The conventional magnetic-core logic interconnection circuitry is used to force switching of the input leg of one gate when switching occurs in the output leg of another gate. Since both these events are merely flux changes, the possibility of coupling the two entirely within the magnetic medium seems promising. The magnetic-interconnection network should permit unambiguous signal propagation and should not adversely affect the operation of the units which it couples.
Fig. 13(a) shows a graph of a magnetic-network element with properties similar to an electrical diode. The approximate terminal characteristics are given in Fig. 13(b). Leg ld is saturated as shown. Leg lr has a much higher threshold than ld, and a much larger cross-sectional area and saturation flux. The element is shown in its initial state. The diode element will be much more responsive to a positive applied M than to a negative applied M. Information can be transmitted from left to right, but not in the reverse direction. If the diode unit is used to transmit information, switching may take place in ld. As a result, it will no longer have the desired diode characteristic and must be reset. A drive applied to lr forces ld back to its original saturation state. These diode units have the required properties for direct interconnection of two magnetic elements. The next paragraph demonstrates their utility by describing a digital-delay unit in which the storage locations are coupled through magnetic diodes. It is important to note that repeated transmission of information in a single direction will lead to saturation of lr so that ld cannot be

Fig. 13-Magnetic diode terminal characteristics.

Fig. 14-Digital-delay element.

reset. In any specific design, techniques can be found to prevent this occurrence. Also, other types of coupling networks can be used in which this problem does not arise. The diode described above is only intended to indicate an approach to the problem.
DIGITAL-DELAY COMPONENT

Fig. 14 illustrates a section of a digital-delay device which uses magnetic diodes to couple information-storage units. In order to advance the stored "1" to the right, two pulse drives occur in sequence. First, all odd-numbered vertical legs are driven upwards. The diodes prevent information flow to the left. Only the storage location storing a "1" will switch. For the state shown in Fig. 14, l1 and l2 will switch. The next pulse resets the diodes. The final state is shown in Fig. 14(c). The stored "1" has moved to the right. The next advance is caused by a pulse drive applied to all the even-numbered vertical legs, followed by a diode reset drive. We have described the digital-delay unit only briefly because we intend to illustrate the possibilities of the synthesis approach rather than to describe a practical device.
The magnetic-diode element may also be used to couple the output leg of a gate to an input leg of another gate. It is possible to construct fairly complicated logical machines in which the information is propagated entirely within the magnetic medium.
ALL-CORE NETWORKS

Networks built entirely of simple cores and copper
windings exist which are equivalent to any of the
multiple-aperture devices which have been described.
A brief introduction to the subject is given here.
When magnetic elements are combined in a network to form a multiple-aperture device, nodal constraints are imposed on the flux levels in the elements. At a node

    Σ φ = 0    and    Σ dφ/dt = 0.

If windings on several different cores are connected in series to form a short-circuited loop,

    Σ e = 0

for the series loop. If there are no other impedances, all the voltages in the loop are induced voltages,

    e = N dφ/dt.

The flux changes which take place in the cores coupled by the series loop obey the following equation:

    Σ N dφ/dt = 0.
This constraint is similar to the nodal constraint of
magnetic networks. Following this procedure, a core
circuit equivalent to any multiple-aperture device can
be found. The relative advantages of the two classes of
circuitry will be discussed in a subsequent paper.
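As a numerical illustration of the series-loop constraint Σ N dφ/dt = 0 derived above (all turn counts and flux rates below are assumed):

    # Three cores linked by a short-circuited series winding loop.
    N    = [10, 5, 20]          # turns linking each core (assumed)
    dphi = [2.0e-3, -1.0e-3]    # flux rates in the first two cores, Wb/s

    # The loop constraint fixes the flux rate in the third core:
    dphi3 = -(N[0] * dphi[0] + N[1] * dphi[1]) / N[2]
    print(dphi3)                # -> -7.5e-04 Wb/s

The constraint plays the same role for a series winding loop that the nodal flux constraint plays at a junction of magnetic legs, which is why a core circuit equivalent to any multiple-aperture device can always be found.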
CONCLUSIONS

We have described an elementary magnetic element. This can be used as a building block for any multiple-aperture device. We have also presented a terminal-variable relationship of the element, and introduced a nonlinear electrical equivalent circuit which may be used to analyze the operation of a multiple-aperture device. Three synthesis techniques have been given for obtaining a multiple-aperture device to generate any combinational logical function. Such devices may be coupled by conventional-core logic circuitry.
The possibility of coupling logical gates entirely
within the magnetic medium has been discussed briefly.

A simple example of a magnetic "diode" unit was used to demonstrate that square-loop materials can be used in interconnection circuitry. If this procedure is followed, many logical operations may be performed on the input information while the information is kept continuously within the magnetic material.
All-core networks may be derived which are equivalent to the multiple-aperture devices. The networks which result have many advantages over their multihole counterparts. These networks will be discussed in a future publication.
FUTURE PROSPECTS

Coupling of logical gate devices within the magnetic
medium has many attractions.

1) Compatibility-the physical input quantities are
of the same magnetic nature as the physical outputs. Direct coupling permits scaling down physical dimensions to reduce the power consumption.
2) Resistance to noise-the magnetic elements are
natural integrating devices. In a noisy environment they will not respond to large but short-lived
noise impulses.
3) Reliability-the reliability of magnetic materials
is seldom questioned. The circuits described here
contain no electrical components except for a
clock pulse source and drive windings.
Future effort should be devoted to developing simple, flexible magnetic-interconnection circuitry suitable for coupling the various types of logical gates.

Relative Merits of General and Special Purpose
Computers for Information Retrieval
A. OPLER† AND N. BAIRD†

INTRODUCTION

INCREASING attention to automatic information processing is being given by all sections of our technology, commerce, and military operations.
One of its more important aspects is the use of computing machines for storage and subsequent retrieval of
information by request.
There have been two simultaneous patterns evolving
in the last decade. One group has borrowed the equipment used for standard accounting and engineering
calculations and has demonstrated the practicability of
automatic information retrieval (IR) using such equipment. A second group has concentrated on designing
special equipment for use solely for mechanized information retrieval.
We are at present in a transition period and it seems
appropriate at this time to review the progress made
using each of the two approaches and perhaps to introduce some constructive cross-fertilization. For the most
part, this field, like many new and exciting disciplines,
has produced much controversy, strong prejudices, and
a tendency to place personal viewpoint above impartial
analysis. Let us hope this "feudal" period is over.

† Computer Usage Co., New York, N. Y.

INFORMATION RETRIEVAL ON GENERAL PURPOSE COMPUTERS

Excluding military systems, there have been approximately eighteen information retrieval systems pro-

grammed and debugged for general purpose (GP) computers. Some of these were written primarily for purposes of exploration and others have actually been made operational. From a review of this activity, we are now able to draw a number of generalizations regarding the performance of general purpose computers in the information-retrieval area.
1) The high-speed computer has proven satisfactory for both exploration and operation. Intermediate-speed machines have been satisfactory only for exploration, for operation of simple searching schemes, or for use with small collections.
2) The files of information to be stored were maintained on magnetic tape (one exception employed
magnetic disk storage).
3) The full gamut of available machines has been
used for information retrieval and it appears that
no currently available logical design is markedly
superior to any other. Information retrieval systems tend to take many forms. For each IR system formulation and for each machine design,
there will be special programming techniques
required.
4) A remarkable variety of document storage formats and retrieval schemes has emerged. With
the general purpose computers, it appears that
the searching system designer is relatively free to
build using the classification system best suited
to his needs. Indexing schemes as simple as Dewey
Decimal and as complex as those required for

describing the details of the kinetic processes involved in the synthesis of pharmaceuticals have been programmed with little real difficulty.
5) Most computer searching systems are designed to input the search requirements in formats suitable for use by personnel unfamiliar with computers and to produce outputs designed for a similar group.
6) Cost of development of such computer systems has been very high, and the absence of a suitable information processing language compiler has been sorely felt.
7) Once a system is developed, the actual searching and file maintenance costs have proved to be moderate.
8) The most common technique is that of searching a magnetic tape file from beginning to end while seeking answers to more than one question (multiplexing); a sketch of this technique follows the list. The searching speeds so obtained have proven better than first anticipated. Common speeds range from 1000-15,000 document interrogations per question per minute. As each new program is written, valuable experience is accumulating regarding the best programming and storage techniques to use.
9) Document collections thus far used have ranged from 1000-50,000 items. In every case, these collections have been of "more than average" value to the sponsoring organization. Typical systems search important collections of patents, developmental chemicals, and reports especially pertinent to the sponsor's area of interest. No one has yet been willing to expend funds to index, code, and store the complete contents of newspapers, encyclopedias, or even technical journals.
10) In no case has the acquisition of a large computer solely for information retrieval been recommended or even suggested. Both the developmental and operational searching systems share machine time with other technical operations.
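A sketch of the multiplexing technique mentioned in point 8, in present-day terms (Python; the documents, descriptors, and questions are invented, and each question is modeled as a conjunction of disjunctive descriptor classes):

    # One sequential pass over the file answers several questions at
    # once.  A document matches a question when every clause (a set of
    # alternative descriptors) intersects the document's descriptors.
    documents = [
        ('DOC-1', {'patents', 'magnetic', 'cores'}),
        ('DOC-2', {'pharmaceuticals', 'kinetics'}),
        ('DOC-3', {'patents', 'kinetics'}),
    ]
    questions = {
        'Q1': [{'patents'}, {'cores', 'kinetics'}],
        'Q2': [{'pharmaceuticals'}],
    }

    for doc_id, descriptors in documents:         # one tape pass
        for q_id, clauses in questions.items():   # all questions at once
            if all(descriptors & clause for clause in clauses):
                print(doc_id, 'answers', q_id)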
SPECIAL PURPOSE INFORMATION RETRIEVAL RELAY MACHINES

Compared to these rather imposing accomplishments, the work done to date by the special purpose retrieval machines has seemed less spectacular. During the last five years, the use of the GP electronic computer has been explored while the special purpose machines developed have been primarily electromechanical. Special purpose electronic machines are about to appear. It will probably be some time before we can obtain as much perspective here as we have on the general purpose electronic computers.
The first group of special purpose devices selects from a collection of separable records (punched-hole, optical film, and magnetic cards). In each of these devices, the file items are passed through a sensing device which analyzes the code stored on each card. The search criteria are established either by wiring patch boards, by inserting special request cards, or by other means. The selected

cards may then be used to prepare printed lists, to control the selection of fuller documentary records or, in
the case of film, to project and photocopy the original
information.
Another electromechanical device in current use employs continuous reels of punched paper tape as input
and still another uses punched card input and paper
tape for intermediate storage. In the former case, the
search requirements are established by patchboard wiring and in the latter, are established by machine option
switches. In both devices, the successfully retrieved
items are typed out by an attached, machine-controlled
typewriter.
From the experience thus far gathered in the operation of these machines, a number of generalizations may be made.
1) While there is a wide gamut of card-processing
speeds, their general performance has been quite
satisfactory. All operate on the site of the information processing activities rather than in a
computation center.
2) The chief limitations have been in treating a) problems where quantification is important (range of boiling points, per cent of each of several ingredients required), b) cases involving the conjunction of a number of disjunctive classes (e.g., a card is to be scanned to determine whether it contains both the code for three American states and the code for no French province), and c) problems involving detailed interrelationships such as sequence, connectivity, and subject-object relationship. These limitations apply to some degree to each of the electromechanical devices, but in some cases they have been surmounted in ingenious fashion.
3) The cost of these machines, with the exception
of the more powerful optical film and magnetic
card devices, has been remarkably low compared
to the high-speed electronic machines and their
full-time use for information retrieval has been
practical. The cost per search, as well as the
capital investment, has been low.
4) The rate of search has been limited by card and paper tape handling speed. Again excluding the optical and magnetic devices, the systems have ranged from 8-500 items per minute.
5) The separable unit record has proved to have a number of advantages and disadvantages. Among the former are the ability to select small subfiles for machine feed, the ease of file maintenance, and the easier access (in many cases) to original information (through film inserts, card drawings, etc.). The disadvantages center around the variability in information content of documents, which causes either inefficient waste of space on punched cards or introduces the difficulties of multiple card evaluation and manipulation. It has of course been possible to simulate the behavior of simple relay and medium electronic computers.

6) Experience with optical film and magnetic cards has been very limited. The specifications and demonstrations have been most impressive.
SPECIAL PURPOSE INFORMATION RETRIEVAL
ELECTRONIC COMPUTERS

We are now entering the era of the special purpose high-speed electronic information retrieval computer. Only one or two published accounts of such machines have appeared, since they are still developmental and frequently related to military applications.
Such devices differ little from some currently announced data processing equipment, and it appears that their usage will be justified only in the largest information processing establishments. These machines will look very much like a general purpose computer but trade unnecessary features (e.g., floating point arithmetic) for hardware embodiments more needed for information retrieval (e.g., a large number of independently operating comparison registers).
Another "special" unit is a special limited version of a standard data processor with certain features eliminated. The belief that this will materially reduce the cost of such a machine is not in accord with the experience of computing machine manufacturers.
Special computers are also required to handle special
problems in information retrieval. One that is receiving
a good deal of attention is the problem of the storage
and retrieval of graphical, geometrical, and topological
configurations. One special computer has already been
designed for work in this area. Special military information retrieval problems have given rise to the design of
special machines geared to these problems.
The following is a description of a nonexistent but typical large, special purpose computer. The external storage might consist of a large bank of high-density, high-speed magnetic tape units which would have the facility (under computer control) to advance or back up rapidly and then to search not for one "key" but for many logical combinations of keys. All tape units would be active simultaneously under independent control of a special tape manipulation and testing unit. As information which satisfies the rough screening conditions is brought from the tapes, it would be sent into a magnetic core unit where a further refining process is carried out. Information which successfully meets all the criteria for one of the many simultaneous searches to be conducted would be sent to the proper output area for both immediate visual display and for subsequent complete printing of full information on high-speed equipment.
The hypothetical computer just described represents only one direction in the development required. We assume that any good information retrieval system is sufficiently flexible to meet the logical requirements of the searching system and to search at sufficiently low unit cost. Beyond these, the two most critical factors are investment cost and the size of the document collection that can be manipulated in some reasonable time. The typical relay card-selecting device represents a relatively low investment but with correspondingly small collections manipulable in practical times. The typical search program running on available electronic computers represents a high investment cost with moderate-sized collections searchable in practical times. The special purpose electronic computer, such as the hypothetical one described above, represents an attempt to obtain increased performance at an increased investment cost. Thus, we see that no major breakthroughs have occurred to date.
The challenge to the computer designer and to the
system designer is the development of techniques for
handling large collections on relatively inexpensive devices. It is hoped that the developments in computer
components, computer logic, storage devices, and systems operation will lead to the development of improved devices. Hand-in-hand with such development
must be the maturing of our understanding of the theoretical and practical bases for the retrieval of information.
BIBLIOGRAPHY

[1] P. Bagley, "Electronic digital machines for high-speed information searching," Master's thesis, M.I.T., Cambridge, Mass.; 1951.
[2] R. H. Bracken and H. A. Tillitt, "Information searching with a 701 calculator," J. Assoc. Comp. Mach., vol. 4, p. 131; 1957.
[3] S. R. Moyer, "Automatic search of library documents," Computers and Automation, vol. 6, p. 24; May, 1957.
[4] J. J. O'Connor, Univac Applications Research Center Tech. Rept. No. 18; 1957.
[5] A. Opler and N. Baird, Proc. Internatl. Conf. on Sci. Information, Area IV, p. 37; 1958.
[6] B. K. Dennis, "Rapid retrieval of information," Computers and Automation, vol. 8, p. 8; October, 1958.
[7] W. H. T. Davison and M. Gordon, "Sorting for chemical groups using Gordon-Kendall-Davison ciphers," Amer. Documentation, vol. 8, p. 202; 1957.
[8] C. Mooers, Zator Co., Boston, Mass., Tech. Bull. No. 59; 1951.
[9] C. Mooers, Zator Co., Boston, Mass., Tech. Bull. No. 64; 1951.
[10] A. Opler and T. R. Norton, "A Manual for Programming Computers ... ," Dow Chemical Co., Midland, Mich.; 1956.
[11] T. R. Norton and A. Opler, "A Manual for Coding Organic Compounds ... ," Dow Chemical Co., Midland, Mich.; 1956.
[12] A. Opler, "A topological application of computing machines," Proc. WJCC, pp. 86-88; 1956.
[13] A. Opler and T. R. Norton, Chem. and Engrg. News, vol. 34, p. 2812; 1956.
[14] A. Opler, Chem. and Engrg. News, vol. 35, p. 92; 1957.
[15] A. Opler and N. Baird, paper presented before American Chemical Society, Div. of Chemical Literature; April, 1958.
[16] R. A. Carpenter, et al., "Correlation of structure and physical properties with utility of chemical compounds," paper presented before American Chemical Society, Div. of Chemical Literature; April, 1958.
[17] W. H. Waldo and M. DeBacker, "Printing chemical structures electronically; encoded compounds searched generically with IBM 702," Proc. Internatl. Conf. on Sci. Information, Area IV, p. 49; 1958.
[18] L. C. Ray and R. A. Kirsch, "Finding chemical records by digital computers," Science, vol. 126, p. 814; 1957.
[19] H. R. Koller, E. Marden, and H. Pfeffer, "The HAYSTAQ system: past, present, and future," Proc. Internatl. Conf. on Sci. Information, Area V, p. 317; 1958.
[20] J. J. Nolan, paper presented before American Chemical Society, Div. of Chemical Literature; April, 1958.
[21] J. W. Perry and A. Kent, "Tools for Machine Literature Searching," Interscience Publishers, New York, N. Y.; 1958.
[22] Ibid., ch. 19.
[23] Digital Computing Newsletter, vol. 10, no. 4, p. 4; October, 1958.
[24] Patent Office Res. and Dev. Repts., No. 13; November, 1958.

A Specialized Library Index Search Computer
B. KESSEL† AND A. DELUCIA‡

INTRODUCTION

THE NEED for mechanizing library information searches has become apparent during the past two decades. The phenomenal increase in the size and number of library establishments, in conjunction with the requirement for greater speed in servicing information requests, have been key contributory factors in focusing attention on this situation. A large variety of
developmental and commercially available devices have
been used for putting library card catalogs into forms
more amenable to automatic searching. It was only a
matter of time before many researchers became aware
of the advantages of digital computers in mechanizing
this operation. Government and commercial organizations, by writing new programs, were able to adapt
those computers which were already available to them.
Early in 1958, a program was initiated by the Rome
Air Development Center for the design and fabrication
of an Index Searcher that was to be tailored specifically
to the needs of library documentation. The Index
Searcher was to contain only those logical functions that
would be of value in library information searching. The
Searcher was to be developed primarily as a research
vehicle for use in studying various index and information retrieval approaches. The basic design was to include facilities to allow the Searcher to be used as a
fully operational device. In April, 1958 a contract was
awarded to the Computer Control Company with delivery of the Index Searcher to occur early in 1959.
Design concepts of the Index Searcher have evolved in part from a knowledge of the Minicard Selector.1
However, the Index Searcher will be used in library
situations where index data and graphics material are
stored separately, rather than together as on Minicards.
The Index Searcher will search through large volumes
of index data serially and print out the identification of
those documents, reports, or graphic materials that
satisfy the requirements of the search criteria. A consideration of the problems of library mechanization indicated that the following features should be included in
the Searcher design: 1) high-speed searching of the index
data; 2) the ability to reproduce all or part of the index
library cheaply and quickly; 3) a minimum of delay in
effecting the search beyond the setting up of the search
criteria; 4) the capability for handling a wide variety
of index and classification schemes; 5) ease of operation; 6) flexibility in permitting frequent updating of

† Computer Control Co., Inc., Framingham, Mass.
‡ Rome Air Dev. Center, Griffiss AFB, Rome, N. Y.
1 J. W. Kuipers, A. W. Tyler, and W. L. Myers, "A Minicard system for documentary information," American Documentation, vol. 8, pp. 246-268; 1957.

the file; 7) search for more than one question at a time;
and 8) a growth potential allowing for relatively efficient
use of the Searcher either singly or in groups as the size
of the library increased. The system design of the Searcher as a special purpose system was to result in an information handling capability that could be matched in the general purpose computer field only by a considerably larger and more expensive machine.
GENERAL DESCRIPTION

The Index Searcher uses magnetic tape as the storage
medium for index data. An index entry is made on the
tape for each document, report, or other piece of physical material which is to be made available for rapid
automatic searching. An entry might consist of anything from a title to a complete text, depending upon
the storage and recovery system to be used. In typical
applications the entry made for a given document will consist of a document number, title, author, date, and several descriptors that define the subject matter of the document.
The descriptors can be grouped into sets that are
designated as phrases. The value of this feature can be
illustrated by considering the indexing of a technical
paper that describes a machine using transistorized logical circuits and a magnetic core shift register storage.
Listing of just the four descriptors for transistorized,
logic, magnetic cores, and shift registers could result in
the false selection of this document during a search for
magnetic core logical circuits. Use of phrase boundaries
in the proper places assures that the descriptors will be
properly associated with each other during the searches.
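A sketch of the effect of phrase boundaries (Python; the descriptor spellings and entries are hypothetical):

    # Without phrase boundaries, the document below is falsely
    # selected for "magnetic-core logic"; with them, the search
    # correctly fails because the two descriptors lie in different
    # phrases.
    entry_flat    = {'transistorized', 'logic',
                     'magnetic-cores', 'shift-registers'}
    entry_phrased = [{'transistorized', 'logic'},
                     {'magnetic-cores', 'shift-registers'}]

    want = {'magnetic-cores', 'logic'}

    print(want <= entry_flat)                               # True (false drop)
    print(any(want <= phrase for phrase in entry_phrased))  # False (correct)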
The standard machine word for the Searcher is 42 bits in length. This consists of seven alphanumeric characters of 6 bits each. The system also includes provision for handling double- and triple-length words so that it can accommodate clear text as well as coded index data. A typical document index entry might consist of twenty machine words, making a total of 840 bits of information.
Search criteria are specified in terms of question
words plus logical connectives to group the question
words into question phrases and to group the question
phrases into complete questions. The question words are
stored in the internal memory of the Searcher. The
memory has a capacity of 20 machine words.
The type of comparison to be made between question words and the document index entry words is specified individually for each word by plugboard wiring. The specification can be for "equality," "less-than," "greater-than," or any combination of two of these types of comparison. For example, a question might specify that a document is required on a particular subject that would be identified by descriptor equality comparisons; published after 1956, identified by a "greater-than" comparison; and with a security classification no higher than confidential, identified by an "equal-to or less-than" comparison.
The question words are grouped into question phrases by means of plugboard-connected logical circuits. Fifteen phrase elements are available for composing up to 15 different question phrases. Two or three phrase elements can be cascaded to make a larger phrase than can be handled in one logical element. Each phrase can use any desired combination of question words, in either assertion or negation form, as inputs, and each question word can be used in as many of the 15 different phrases as desired.
Complete search criteria questions are made up of question phrases by plugboard-connected select logical elements. The same flexibility exists here as in combining words into phrases. Ten question elements are provided, resulting in the ability of the Searcher to search simultaneously for documents meeting ten different search criteria.
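A sketch of this three-level logic (Python; the entry fields, word comparisons, and phrase wiring below are invented stand-ins for plugboard connections, following the worked example of the preceding paragraphs):

    # Word comparators, phrase elements (disjunctions, with optional
    # negation), and select elements (conjunctions of phrases).
    entry = {'subject': 'magnetic cores', 'year': 1957, 'class': 1}

    word_results = {
        'W1': entry['subject'] == 'magnetic cores',  # equality
        'W2': entry['year'] > 1956,                  # greater-than
        'W3': entry['class'] <= 1,                   # equal-to or less-than
    }

    # Each phrase is a disjunction of asserted or negated word results.
    phrases = {
        'P1': any([word_results['W1']]),
        'P2': any([word_results['W2'], not word_results['W3']]),
    }

    # A complete question is a conjunction of phrases.
    question_1 = phrases['P1'] and phrases['P2']
    print('selected' if question_1 else 'rejected')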
Searching consists of scanning through the complete document index tape and comparing the contents of each index entry with the question words and logic stored in the Searcher memory and plugboard connections.
The result of a successful search is a print-out of the
document numbers of those document index entries that
have met the search criteria. An identifying number
printed beside each document number shows which of
the several search questions is answered by that document. A picture of the Index Searcher is in Fig. 1.
FUNCTIONS

The Index Searcher has six different modes of operation. Listed in the order in which they would be used, these are 1) Document Insert, 2) Regenerate Tape, 3) Question Insert, 4) Search, 5) Print, and 6) Edit. Each of these will be described after a brief reference to the system block diagram shown in Fig. 2.
Each of these will be described after a brief reference to
the system block diagram shown in Fig. 2.
The major blocks making up the Searcher, and their
functions, are as follows:
A) Magnetic Tape Unit-stores and scans document
index data.
B) Tape Buffer-serves as a buffer to and from the
Magnetic Tape Unit and Flexowriter.
C) Flexowriter-serves as a punched-paper-tape reader for document and question insertion, and as
output printer for searching.
D) Word Input Buffer-accumulates magnetic tape
information frames to form complete machine
words.
E) Print-Out Buffers-store tape data which is to be
printed out if the document is a desired one.
F) Word Storage Buffer-stores complete magnetic
tape index words for comparison with question
words.

Fig. 1-Index Searcher.

Fig. 2-Block diagram.

G) Memory-stores question words.
H) Comparison Circuits-compare tape index words
with question words from memory.
I) Plugboard Circuits-use results of word comparisons to make complete question comparisons.
J) Control Circuits.
The actions of the Searcher in each of its operating
modes follow.

Document Insert
This mode is used to place new document index entries into the Searcher's magnetic tape storage. Primarily it is a punched paper tape to magnetic tape conversion process. New index entries are submitted to the
machine in the form of punched paper tapes. Paper tape
frames are accumulated in the tape buffer until the buffer is full. Those contents of the buffer which represent
complete index entries are transferred to the magnetic
tape as one block of tape data. Any partial entry left in
the buffer is then completed with the next paper tape
information to arrive, and is followed by more documents. The process continues automatically to the end
of the paper tape. Each magnetic tape block contains
an integral number of complete index entries. The actual lengths of blocks on the tape are variable.

Regenerate Tape
This is a simple magnetic-tape to magnetic-tape routine, which allows the file tape to be duplicated as insurance against loss of file data through accidental damage to the tape on the Searcher. This requires an additional magnetic tape unit that is not part of the Searcher as originally built, although space has been left for it
in the racks.

Question Insert
This mode transfers question words from punched paper tape to the Searcher Memory. This is accompanied by insertion of a question plugboard that specifies the nature of the comparison to be made for each word and the combination of the question words into phrases and complete questions. The plugboard connections can also specify one or two words per selected document to be printed out in addition to the document number. A simplified symbolic representation of the plugboard and its circuits is shown in Fig. 3.

Fig. 3-Simplified representation of plugboard circuits.

Search
This is the primary operating mode of the Searcher. This mode performs the scanning of stored document index entries in search of those that meet specified question criteria. The tape can move in either the forward or reverse direction to accomplish this search. While the tape is scanned, tape frames are accumulated into complete machine words, and are compared with the twenty stored question words. A plugboard word storage element remembers a successful comparison with any of these words until an end-of-phrase designation occurs in the tape data. At that time the word storage outputs are sensed in the phrase element logic to determine whether any complete question phrase criteria have been satisfied. If so, a phrase storage element remembers this fact as scanning continues through the remainder of the tape index entry. When the end of tape data for the document is reached, sensing the outputs of the phrase storage elements determines whether or not a complete question criterion has been satisfied. During the scanning of the document entry, the Print-Out Buffer automatically receives the document number and two other plugboard-specified words. When a document answers a question, the Tape Buffer receives the contents of the Print-Out Buffer and a number identifying the question answered by the document. This information is printed out on the Flexowriter. Further searching ordinarily continues during print-out. However, if a series of successive selections results in filling the Buffer faster than the maximum print-out rate, the tape automatically stops until adequate Buffer capacity is available for further document data. When the end of the recorded portion of the tape is reached, the tape unit automatically stops and positions itself ready for the next search in the opposite direction.

Print
This mode is used to print out entire document index entries rather than just the three words possible in a normal search. The machine operates in much the same manner as for normal searching until a selection is made. The tape must move in the forward direction. Selection of a tape index entry causes the tape to stop, reverse, reread the entire selected block into the Buffer, print out the complete selected entry, and then resume search.

Edit

This mode is used to delete unwanted index entries
from the tape. Document entries to be deleted are specified by document number or other normal question criteria. Operation is similar to the Print mode up to the
point of bringing into the Buffer the block of tape information containing the document to be deleted.
At this point the block is recorded in the same place
it formerly occupied on the tape, but with blank characters in the position which had been occupied by the
deleted entry.
PARAMETERS

The Searcher scans through magnetic tape document index entries at an effective rate of 218,000 bits per second, or about 5200 machine words per second, where each machine word contains 7 alphanumeric characters. For the typical document index entry length of 20 words mentioned earlier, this amounts to 260 documents per second. While searching at this rate, the machine seeks documents satisfying up to ten independent search criteria.
The Searcher uses 2400-foot rolls of one-inch magnetic
tape for document index storage. One reel stores
57,600,000 bits, which is about 68,500 twenty-word
documents. Uninterrupted search time for a complete
reel is approximately 4.5 minutes.
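These figures follow directly from the word and tape parameters already given; a short arithmetic check (Python), agreeing with the quoted 5200, 260, and 68,500 and with the roughly 4.5-minute search time to within rounding:

    bits_per_word = 7 * 6                      # 42-bit machine words
    words_per_sec = 218_000 / bits_per_word    # about 5200
    docs_per_sec  = words_per_sec / 20         # about 260 for 20-word entries

    reel_bits = 57_600_000
    docs_per_reel  = reel_bits / (20 * bits_per_word)  # about 68,500
    search_minutes = reel_bits / 218_000 / 60          # about 4.4

    print(round(words_per_sec), round(docs_per_sec),
          round(docs_per_reel), round(search_minutes, 1))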
The input-output rates of the Searcher are presently
limited to the 10-character-per-second rate of the
Flexowriter for document insertion, question insertion,
and selection print-out.
Therefore, the document insertion rate, based on 20-word entries, is about 4 documents per minute, and the selection print-out rate, when printing out 3 words per selection, is approximately 25 documents per minute. A 20-word question insertion takes about 15 seconds.
These rates can be substantially increased by use of
high-speed paper tape reader units and high-speed
printer or punch outputs.

Programmed Interpretation of Text as a Basis
for Information-Retrieval Systems
L. DOYLE†

TWO conditions have made it almost inevitable that we have an information-retrieval project at the System Development Corporation (SDC).
First, our internal documentation. It has been estimated that we acquire 10,000 documents a year both
from internal and external sources, not including books
and periodicals. Internal distribution of these documents runs into millions of copies. We have not been
able to afford to abstract and subject categorize more
than 10 per cent of our 10,000 documents a year. With
a good retrieval system we might be able to make many
thousands more of our documents accessible by subject
without increasing documentation expense.
Secondly, one of SDC's major activities is computer
programming for air defense, and as a result of this we
have charge of several large general-purpose digital
computers.
Thus, we have both the motives and the equipment
to study the computerization of the handling of documented information. And so, about six months ago a
three-man project was created at SDC to do research
and development in this area.
PROGRAMMED SELECTION OF DESCRIPTORS FROM TEXT

Previously we had considered using, as stop-gap solutions, some of the already existing well-known information-retrieval formulas, such as Uniterm, or marginal punched cards, or peek-a-boo, or something of this nature; but we thought, "Where are we going to get the people to read and categorize 10,000 technical documents a year?" We realized that we were already in a situation which was going to become increasingly common as time goes on: a setting where skilled man-hours are harder to find than time on a large computer.
At this time we switched to a machine-centered philosophy and began to explore the possibilities for performing the chores of documentation on digital computers and EAM equipment. The crucial difficulty in implementing such a philosophy is finding a way to use computer programs to interpret natural English text for the purpose of effectively subject-indexing documents, so that they can be retrieved precisely without any person's having read them for purposes of categorizing, picking descriptors, or encoding in any manner.
MECHANICAL INDEXING AS A PRELUDE

Our thinking about the interpretation and retrieval
of natural text has changed greatly since we started.

† System Development Corp., Santa Monica, Calif.

For example, one of our realizations has been that it is premature to set oneself up for pure machine searching of natural text. Machine searching is superb if you know exactly how to describe what you are looking for and if you are sure that you know how to choose from among many possible searching strategies. I doubt if anyone is yet in this comfortable position with respect to machine searching of text. What is needed is a searching setup which is fast and convenient, while at the same time allowing the human mind itself, with its versatility and its powers of observation, to take part in the search. Our solution has been to employ mechanical indexing using an artificial language, which I shall describe later.
THE SYSTEM

We now have an experimental abstract searching system at SDC which was set into motion by our retrieval
project. Through the use of it, we hope to find out the
basic difficulties of natural text retrieval and to come
up with new principles of language data processing,
some of which may be useful for purposes other than
information retrieval. It is also a research tool, through
which properties of language and of collections of information may be subjected to analysis-for example, by
making frequency counts. Later I shall describe an instance of the use of the system for research.
Fig. 1 shows the process we now use to encode abstracts. At the left is keypunching of the text, which in itself tends to undermine the purpose of natural text retrieval; however, we assume that input technology will develop to the point that keypunching will no longer be a barrier. For the present, keypunching limits our input to small chunks of text. We now work only with abstracts; however, we can handle any fragment of text of about abstract size (approximately 100 words).
As the text material arrives in computer storage (as
6-bit Hollerith), the text-compiler program translates
the raw text into a more condensed form suitable for
searching. There are two stages of this condensation.
First, selection of subject terms from the text. The text compiler has at its disposal a table of allowable subject words, and it searches the text for these words. Secondly, all of the subject words or terms which the text compiler finds in an abstract are replaced by binary numbers which have been predesignated to represent the subject words when they are stored on tape. These binary number tags take up less than a third as much space in storage as would the subject words themselves.

Fig. 1-Conversion of text to condensed code for searching.

By these means the text compiler will condense an
abstract from about 130 storage registers of text down
to about 5 or 6 registers of binary numbers. With this
condensation one can represent 10,000 abstracts on
about 5 per cent of the total length of a standard IBM
tape reel. We now have a tape layout which contains
the boiled-down essential information of many abstracts, and any time we wish to make reference to these
abstracts, they can be read into computer memory at
the rate of 500 per second. The text compiler can process 4 abstracts per second.
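A sketch of the two-stage condensation performed by the text compiler (Python; the subject-word table, the binary tags, and the sample text are all invented for illustration):

    # Stage 1: select allowable subject words from raw text.
    # Stage 2: replace each by its predesignated binary number tag.
    subject_table = {'radar': 0b0001010,
                     'warning': 0b0100110,
                     'simulation': 0b1001000}

    abstract = "A simulation of radar warning nets is described."

    tags = [subject_table[w.strip('.,').lower()]
            for w in abstract.split()
            if w.strip('.,').lower() in subject_table]
    print(tags)   # the condensed, searchable form of the abstract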
At this point we have our choice of two courses of action. We can either machine search the tape for particular combinations of subject words, or we can convert the entire tape to an alphabetical code, which can be printed out as an alphabetized format and searched by eye.
MACHINE SEARCHING

If one does decide to machine search this tape, it can be done in a very short time. With a large digital computer, such as the IBM 709 or the AN/FSQ-7, the tape-stored abstract plots can be searched much faster than they can be read in. Since both these computers can execute instructions while in-out operations are going on, it is possible to conduct 100 simultaneous logical product searches during the 20 seconds it would take to read in 10,000 abstracts.
THE HUMAN MIND AS A SEARCHING INSTRUMENT

However, at this point in our development we value
searching flexibility much more than we value speed or
efficiency. Speaking of speed, it is still difficult to duplicate mechanically the feat of a person looking up a
number in a telephone book. Let's consider this point.
The Los Angeles Central Zone Directory has enough
alphanumeric content to fill six IBM tape reels. But almost anyone is capable of finding an entry in this directory in from 10 to 30 seconds. This illustrates the power
of familiarity with alphabetical order, which is a typical
(and therefore unappreciated) human ability. If you
have about seven RAMAC's you might just be able to
exceed this common everyday performance by humans.
If you now imagine a whole room filled with telephone-directory-sized books, each of which contains en-


tries confined to a small portion of the alphabet, it is
easy to see that it would take some person less than
twice as long to find an entry in these many books as it
would take to find something in just one book. Of
course, for this to be possible, all the books have to be
stacked in alphabetical order. We now have a very
cheap searching mechanism, which no computer can
equal in speed for such a large volume of material. There
is, of course, the cost of publishing all these books to be
considered, but when such things as microfilm scanners
are available, the possibilities for exploiting human
familiarity with alphabetical order are something to be
seriously considered.
Another human skill of importance in searching is
the ability to judge the meaning of a symbol by its context. It will take a great amount of programming and
even more preliminary research before this skill can be
challenged by electronic instruments.
From the standpoint of research and development,
the most important thing about humans as searching instruments is that they are observant. They repeatedly
notice things that they are not programmed to notice.
At this stage, our progress strongly depends on the application of these powers of observation. Our intent is to
use astute people to search small document collections
in order to find out how computers should be programmed to search large collections. Our hope is that
the state of the art of programming to produce highly
condensed, information-rich formats for search by the
human eye will improve at such a rate that pure machine searching of natural text may not for many
years catch up in convenience or effectiveness with
computer-assisted eye searching.
Our present alternative to machine searching, which is illustrated in Fig. 1, is the generation of an alphabetized printout consisting of two-letter code words for subjects. So, instead of using the binary subject number tape as input to a searching program, we feed it to another program which converts all the binary numbers to two-letter symbols, which we call bigrams. The conversion process is totally analogous to binary-to-decimal conversion, the only difference being that instead of subtracting powers of ten, we subtract powers of 26, the number of letters in the alphabet. These bigrams are punched out on regular IBM cards, after which EAM equipment can offset-reproduce and alphabetically sort the cards prior to the printing out of a format.
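The bigram conversion is, in other words, exactly conversion to base 26. A minimal sketch (Python):

    # Convert a subject number (0..675) to a two-letter bigram by
    # subtracting powers of 26 instead of powers of ten.
    def to_bigram(n):
        assert 0 <= n < 26 * 26
        return chr(ord('A') + n // 26) + chr(ord('A') + n % 26)

    print(to_bigram(0), to_bigram(27), to_bigram(675))   # AA BB ZZ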
Why do we use bigrams for our alphabetized printout instead of octal numbers, decimal numbers, or for
that matter the original subject words themselves? One
answer is that we get greater condensation. We can
store all the subject words in one abstract on one IBM
card. And we are now in a position to use EAM reproducing and sorting equipment to alphabetize on every
subject word in every abstract. Fig. 2 shows how the
reproducing is done on the contents of one abstract. One
can see from this that if one has an abstract containing

Fig. 2-Cyclic permutation of code symbols to produce index entries.

n subject words, then this EAM reproducing process
will yield n cards, each beginning with a different subject word, and each constituting an entry in an index.
Each entry carries, along with the indexing word itself,
the remaining subject words in the abstract, which act
as context for the indexing word.
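The cyclic permutation is simple enough to state as a program. The sketch below (Python; the function name and card layout are illustrative, not the EAM plugboard wiring) produces, from one abstract's subject codes and document number, the n index cards just described:

    def index_cards(subject_codes, doc_number):
        # One card per subject term: rotate the code list so that each
        # subject code in turn stands first, then append the document
        # number, as the offset-reproducing step does.
        n = len(subject_codes)
        return [subject_codes[i:] + subject_codes[:i] + [doc_number]
                for i in range(n)]

    # index_cards(["RF", "LM", "AY"], "XY 123") yields three entries,
    #   RF LM AY XY 123
    #   LM AY RF XY 123
    #   AY RF LM XY 123
    # and sorting all such cards alphabetizes the index on every
    # subject word of every abstract.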
Whenever I start explaining bigrams to people, their
reactions convince me of the undying popularity of indexes which are in plain English. Nevertheless, I maintain that no spoken or written language was designed
with alphabetized indexes in mind, and am convinced
that special indexing languages, understood and used
by librarians and other specially trained people, will
play an important part in information retrieval as soon
as the programs are available to translate from English
to these languages.

A BETTER ARTIFICIAL LANGUAGE

Our present system at SDC is an adaptation from something that started out to be a pure machine searching system, and therefore it has some of the vestigial organs and imperfections of an evolved creature. One of these imperfections is that the bigrams are arbitrarily assigned. As long as we feel the necessity to use an artificial language, we might as well try to derive one which is tailor-made for indexing. What properties should such a language have?
1) All words should be short and uniform in length. This is the only property of the four I am going to discuss which our present bigram code actually has. Shortness and uniformity are desirable not only because of convenience in manipulating the language on EAM equipment, but also because more information is brought onto one page, which aids eye scanning.
2) No synonyms. The importance of not having synonyms is that it makes it possible for most searches to be satisfied by proceeding to one page or region of the index. Skipping around will be necessary only when one wishes to follow association trails to related topics.
3) Relationship between meaning and alphabetical order. This, I think, would be one of the handiest features of any artificial language especially designed for indexing. It will greatly simplify the searching, it will make the artificial language easier to learn, and it will allow generic searching, the lack of which is probably one of the major disadvantages of natural text retrieval based too closely on the natural language itself.
4) Quantitative dependence of assignment of English meanings on the contents of the library. A document is retrieved from a library, but even so it is not obvious that the contents of the library should be as important a factor as the contents of the document itself in affecting the way the document is encoded as a search item. In our artificial language, the nature of its correspondence to English words ought to be governed by how much of what is in the library. Application of this principle is difficult because it must certainly involve a means of frequency-counting everything in the library. But the benefits are many. One possible benefit is that the state will be approached where words of the artificial language will be equally used, which of course is an information theory ideal. It is important to apply this ideal if one is to get predictable effectiveness from a searching system. The equal use of artificial words tends to lead to equal retrieval precision for all documents and also has the effect of insuring that all parts of an index will be equally used. I have seen several automatically generated word indexes, and they all suffer from the presence of very large blocks of entries beginning with the same word. Their heterogeneous structure causes about half of the space to be taken up with blocks which are seldom used because they are so large. There are a great many arguments in favor of isotropic indexes, where entry blocks are similar in size regardless of the subject.
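The equal-use ideal admits a crude computational form. The sketch below (Python; entirely illustrative, and ignoring for simplicity the no-synonym and meaning-order properties discussed above) assigns English terms to a fixed stock of artificial words so that each artificial word accumulates roughly the same total frequency of use, which is what keeps the index blocks similar in size:

    import heapq

    def equalize(term_freqs, codes):
        # Greedy balancing: give the most frequent unassigned term to
        # the artificial word whose accumulated frequency is lowest.
        heap = [(0, code) for code in codes]
        heapq.heapify(heap)
        mapping = {}
        for term, freq in sorted(term_freqs.items(), key=lambda kv: -kv[1]):
            total, code = heapq.heappop(heap)
            mapping[term] = code
            heapq.heappush(heap, (total + freq, code))
        return mapping

    # equalize({"radar": 90, "time": 60, "picket": 30, "vessel": 20},
    #          ["AA", "AB"]) splits the terms so the two codes carry
    # nearly equal total frequency (110 and 90).

A real assignment would start from the library-wide frequency counts that property 4 calls for.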

LIBRARY ANALYSIS

One of the general aims of our project is to develop
means of analysis of very large collections of verbal information. We think of a collection of documents as
having an inherent structure, which is not affected by
physical rearrangement of the documents, a structure
which, incidentally, we are fully able to probe only with
the aid of fast computers. Fig. 3 shows how significant
the structure of a document collection can be.
One question in my mind has been: Is it possible for
a computer program to determine what is a subject
word without having available any subject-word table
or dictionary prepared by some human? In other words,
do subject words have distribution characteristics within a library that a computer program can detect, thereby permitting distinction from nonsubject words? The
data in this figure indicate that the answer is very probably "yes."
The dictionary which we now use as input to our text-compiler program contains no nonsubject words, but
it does contain some borderline cases like the word
"time," which many people would regard as too general
a concept to be a good subject word. "ADC" on the

[Figure: curves of the probability of the plotted correlation occurring by chance, from 0.001 per cent to 1 per cent, against rank in strength of correlation with the indicated term; frequency of occurrence in 600 abstracts: ADC, 40; TIME, 43.]

Fig. 3-Probability of highest observed correlations
as a test for subject words.

other hand is a good solid subject word, standing for
"Air Defense Command." Our alphabetized index enables us easily to select the words which correlate most
highly with these two topics, and we can calculate and
compare probabilities.
One can reason that a good subject word should have
certain other words which co-exist with it in the same
documents with a frequency much greater than expectable from chance distribution, but that nonsubject
words, which are likely to be used by anybody writing
about any subject, should not have high correlations.
To test out this notion I calculated the probabilities
for the highest observed correlations occurring by


chance, both for "ADC" and for "time," given that all
words in the library are randomly assigned to documents. Of course, the words are not randomly assigned,
so I got some very low probabilities. As Fig. 3 shows,
the ADC correlations are very much more improbable
than those for "time."
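The exact statistic is not given here; one standard way to compute such a chance probability, under the stated assumption that words are randomly assigned to documents, is the hypergeometric tail sketched below (Python; the function name and the sample figures in the comment are illustrative):

    from math import comb

    def chance_cooccurrence(N, a, b, k):
        # Probability that a word present in a of the N documents and
        # a word present in b of them would, under random assignment
        # alone, co-occur in at least k documents.
        return sum(comb(a, j) * comb(N - a, b - j)
                   for j in range(k, min(a, b) + 1)) / comb(N, b)

    # With N = 600 abstracts and "ADC" present in 40 of them, a term
    # present in 30 abstracts that shares 15 of them with "ADC" gives
    # chance_cooccurrence(600, 40, 30, 15), a very small probability.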
Now these correlations will become weaker for any word, subject word or otherwise, as the word is present in fewer documents. This means that in order to apply any sort of correlation test to select subject words, one has
sort of correlation test to select subject words, one has
to make allowance for the frequency of the word. Unfortunately, the correlation test will fail altogether when
a word is present in only 3 or 4 documents. One has to
have a large enough sample. Also author biases in use
of common words could conceivably cause many nonsubject words to pass a correlation test. But, fortunately, as collections increase in size these effects
should become less important, and it is in very large
collections where this sort of methodology will be
needed.
CONCLUSION

As a final comment: libraries and other collections of
written information can be thought of as realms of nature, subject to scientific observation. Science brings
valid simplicity to that which is apparently complicated, and it is hard to find anything more complicated
than masses of ideas recorded on paper. And so I make
explicit an idea which I hope has been implicit in this
presentation-that general-purpose computers of today
give us the opportunity to apply scientific method to
uncover the principles of the nature and use of information in order that we may put to better use the vastly
more powerful computers of tomorrow.

A Theory of Information Retrieval
CLINTON M. WALKERt

THE mathematical formula which best describes my conclusions from reading the literature on information retrieval (IR) is the following:
4UOK4$=ET
For you, better for dollar equality.

This states that a more economical approach for organizations interested in information retrieval might be
collectively to support as an information retrieval center
some nonprofit organization, such as SRI or SDC. Such

t Hughes Aircraft Co., Culver City, Calif.

an organization could be a center for receiving and dissemination of up-to-the-minute retrieval literature of organizations concerned; could advise on the practicability of certain undertakings; and could perform experiments in the field of IR.
Aside from this one equation, formulation should
proceed from basic principles. Perhaps the most basic
of all principles is that meaning, rather than information alone, needs to be retrieved. Just how does one
produce or obtain meaning? Take the example of a
small child. All a child knows at first is himself. He gets
acquainted with his hands and feet, and then with his
near associates by relating them to himself. He gradu-


ally learns to classify things in terms of roundness,
which things he might call a ball; in terms of use, such
as food. New things learned are related to things already known. For example, at an early age, any man
might be classified as "daddy." Throughout his life,
meaning is obtained by relating what is familiar to that
which is unfamiliar.
We might christen this process the "Mew-Mew"
theory of meaning. Each of us, as a "me," looks at something else as a "you," which we interpret in terms of the
"me" or what is known, but which we might also take
back into the relative "you" for objective evaluation.
This process of classification is an operational way to produce meaning. A language, in effect, classifies nouns;
descriptions of "which," "what kind of," and "how
many" have meaning when related to other nouns. What
the nouns do-and how, when, and where they do it-has meaning when related to what other nouns might
be doing.
So, an operational language is one in which classification takes place in familiar areas or domains. In these
domains, dictionaries can be constructed of key nouns;
definitions can include relationships to other key nouns
within the area. For retrieval purposes, reference to a
key noun could have a built-in potential reference to
other key nouns, thus providing a built-in meaning
potential. The prospects are exciting. But, before we develop the idea further, let us lay down some basic postulates.
We can set up a number series as a set R of objects called nouns, with the relationships defined by three operations denoted by ~, π, and j. Concomitant with this set is another set M whose members can be derived from certain operations on the set R. The symbol → means "results in a relationship of," or "implies that"; the symbol "+" means "and"; the symbol "-" means "not"; "( )" are used in the usual enclosure sense. An IR specific operation is one denoted by the symbols ~, π, or j. An IR nonspecific operation is any other operation in real or complex variable theory. We will assume that IR nonspecific operations will follow the manipulative rules of real and complex numbers for IR specific operations. For example, the operations are additively commutative.

(A~B) + (C~D) = (C~D) + (A~B)
(AπB) + (CπD) = (CπD) + (AπB)
(AjB) + (CjD) = (CjD) + (AjB).

The operations within the parentheses are IR specific; those between the parentheses are IR nonspecific. Additional postulates are required for defining the operation processes in an uncompleted operation. The following postulates and definitions are offered for consideration.
DEFINITION

The domain of A consists of all subcategories and subsequent subcategories under A.

AjB states that a word, A, which is classified in category B is put in a relationship such that A is in a hierarchy less than that of B, and that the domain of A includes not more than the domain of B. That is, A is part
of B.
POSTULATE 1

AjBjC → AjC

states that a member of a subcategory is also a member
of a category; for example, shoelace is a subcategory of
shoe, which is a subcategory of clothing, which implies
that shoelace is also a subcategory of clothing.
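Postulate 1 amounts to saying that the relation j is transitive, which is directly computable. A minimal sketch (Python; the pair representation is an assumption, not from the paper):

    def transitive_closure(j_pairs):
        # Close the classification relation j under Postulate 1: from
        # (A j B) and (B j C), derive (A j C).
        j = set(j_pairs)
        while True:
            derived = {(a, d) for (a, b) in j for (c, d) in j if b == c} - j
            if not derived:
                return j
            j |= derived

    # transitive_closure({("shoelace", "shoe"), ("shoe", "clothing")})
    # contains ("shoelace", "clothing"), as in the example above.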
POSTULATE 2

AjB → -(BjA)

means that a category cannot be a member of a subcategory unless it is the only member. True, in ordinary
language, sight can be thought of as a subcategory of
sensing and perhaps sensing at the same time can be
thought of as a subcategory of sight; but, for the convenience of constructing an unambiguous dictionary, we
can exclude this possibility until such time as we find it
absolutely required. Thus, we shall construct a dictionary
in a domain with rigid hierarchical relationships among
nouns. If later, we want to relax this requirement, we
may find some interesting experiments available in the
realm of "machine thought processes."
DEFINITION
A~B

means that A is synonymous with B.
POSTULATE 3

(AjC) + (A~B) → BjC

means that synonyms within an area are necessarily members of the same category.
DEFINITION

AπB means that A and B are related by means of the
characteristics of some domain. We shall call these
words "relatives."
POSTULATE 4

(AπB) + (AjC) → BjC

means that relatives are subcategories of the same category.
DEFINITION

M is a set of elements of meaning derived by categorizing two or more elements of R at the same time.
POSTULATE 5

(B~C) + Aj(B + C) → (AjB) + M = (AjC) + M

means if B and C are synonymous, and A is a common
subdivision of both of them, then the classification of A


into B and A into C simultaneously adds meaning to
one of them, but it would be redundant to use both
classifications.
POSTULATE 6

(AπB) + (A + B)jC = (AjC) + (BjC) + M

means that when relatives A and B are, together, classified as members of C, they contain an element of meaning which is not present when they are separately so
classified.
POSTULATE 7

(BπC) + Aj(B + C) → AjB + AjC + M

means that to categorize a subdivision of two relatives, B and C, is to add meaning to both of them.
We have, by these postulated operations, created a language of classification-an operational linguistics which should be compatible with operational mathematics. Hopefully, a classification of nouns accessible by data processing equipment can relieve the information seeker of the trouble of searching the entire haystack for his needle of information and thread of meaning. Purposely sacrificed is the richness of redundant normal language in favor of the more important feature of exactness. Not only do we attempt to be more exact, but also to minimize ambiguity, to allow easy translation of concepts, to assure objective criteria of meaning, and to provide a basis of agreement in discrimination.
Postulate 1 tells us which words can be classified in a given domain. Postulate 2 prevents common words from being counted in esoteric categories, unless they are subsumed under those categories. Postulate 3 permits the counting of synonymous words in the same frequency tally. Postulate 4 permits the discovery of alternative paths for continued search. Postulate 5 permits singling out of the representative path among equivalent paths to be followed. Postulate 6 shows that two words are more significant if the context does classify them together. Postulate 7, finally, shows that paths which originally diverge become significant upon reconvergence. In all those postulates which have symbol M as an added element, significance is increased, since M represents meaning and meaning is of prime importance in transference.
In any system of information retrieval, there are factors of cost, speed, and power. These three criteria can be used to determine, under a given circumstance, which of several alternatives is to be preferred. In many instances, the major purpose of the retrieval system is to perform a rough scanning job for a literature searcher, to determine for him whether a particular document is worth further reading. Often the author can furnish, in addition to his name and topic, a list of his main ideas and purposes. He might even estimate a degree of correlation between the concepts embodied in his document and a list of key nouns in its general area.
To apply the power criterion, the machine or human doing the segregation of valuable from useless information can be simulated by a filter separating relevant message from total signal. Assuming homoscedasticity and linearity in the specified direction, a function

F = Σ(y - bX)²

can be constructed, the parameter b minimized by least squares, and a Pearson correlation coefficient, r, obtained between simulated relevant message and simulated total message. The parameter, b, would represent the error term between, for example, time and amplitude. An autocorrelation can also be performed, minimizing the error between message power and an amplitude-attenuation factor representing noise.
With power evaluated, we can set whatever boundaries we desire as to speed and cost and make our choice by linear programming.
An example of a low-cost, high-speed retrieval system with fair retrieval power is one based on the key nouns with which an author titles his document. Other key nouns are likely to be found in the same sentences as the title key nouns; therefore, the searcher, machine or human, can reduce the volume of the document to a desired degree of abstractness by selecting the frequency and location of the sentences containing these title nouns which he wishes to extract. A simple experiment was performed by the author, using as an abstract the first sentence containing a title noun in each major subdivision. Questions pertaining to the documents concerned were asked participants in the experiment, some of whom had read the author's abstract; some, the abstract of key words; and some, the entire document. Results of the scoring were roughly comparable for the three categories, for equal reading time. The experiment itself is not so important except as an illustration of the power of the use of key words. Properly categorized, the use of key nouns could become an effective means of speedy, powerful, and, in large volume, relatively inexpensive information retrieval.
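The abstracting rule used in that experiment is concrete enough to program. A sketch (Python; the sentence lists and the crude word matching are assumptions for illustration):

    def key_noun_abstract(subdivisions, title_nouns):
        # From each major subdivision, given as a list of sentences,
        # keep the first sentence containing any title noun.
        nouns = {n.lower() for n in title_nouns}
        abstract = []
        for sentences in subdivisions:
            for sentence in sentences:
                words = {w.strip('.,;:()"').lower() for w in sentence.split()}
                if nouns & words:
                    abstract.append(sentence)
                    break
        return abstract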


The Role of USAF Research and Development in
Information Retrieval and Machine Translation
ROBERT F. SAMSONt

INTRODUCTION

THE United States Air Force has numerous and varied types of data handling problems. This paper reviews some of the developmental approaches and contributions that the Air Force has made toward the solution of semantic-graphic information handling problems. Some of the interesting problems encountered in development of techniques and equipment in this field are presented.

BACKGROUND HISTORY OF ROME AIR DEVELOPMENT
CENTER EFFORTS IN THE FIELD OF INFORMATION
RETRIEVAL AND MECHANICAL TRANSLATION

In the past four years the Intelligence Laboratory of the Rome Air Development Center (RADC) has learned, through trying experiences, how to get the required movement in this data handling field. Understandably there are many approaches or philosophies, if you will, of how to develop the right synthesis of index, logic, hardware, etc., for any particular informational retrieval solution. The same can be said of mechanical translation (lexicon-logic and hardware). If we allow our minds to review these years and look at the situation as it was when we began our efforts, without today's vast knowledge of hindsight, I believe our approach would be quite similar to the one we took then. We would see the information retrieval problem growing at a staggering rate. The linguistic side was getting some attention, but the hardware, very little. The need and requirements for the Air Force were there and all that remained was to gather funds, select approaches, and secure contracts. I presume you recognize the humor of the previous sentence.
At that time the Air Force started to lend support to various projects already underway as well as to initiate entirely new work in this field. We knew the field needed much development effort, and involuntarily the Air Force took on the role of "catalyst" in information retrieval and mechanical translation developments. Note that I am not saying we were first with most; indeed not-we slipped into a "role" that was important to the Air Force and I believe it has done justice to the problem of both information retrieval and mechanical translation. You will note that I imply the existence of a common problem area in my use of the term "both information retrieval and mechanical translation." Indeed, with the exception of the problem of physically handling documents and their contents, the Air Force Research and Development (R&D) program has been based on the premise that R&D effort in these two areas should be mutually cooperative. To illustrate this part before passing on: it gives a good return for effort expended because the two fields are interrelated, and advance in one usually means advance for the other. For example, if we were interested in information storage and retrieval alone, the Mechanical Translation (MT) field would be suffering for lack of a high-density storage that now seems quite practical. They "complement" one another from a development point of view, not only in hardware as mentioned but also, and perhaps more importantly, from the study of the rudiments of language.

SOME CONTRIBUTIONS BY RADC TO THE LARGE-SCALE INFORMATION RETRIEVAL PROBLEM

Several years ago RADC could not begin to say what type of development catalyst was needed. It could have been in the form of heat generated from "blowing off steam" about the "vast amount of data that must be handled" or it could have been in the form of a stimulating hardware development acting as a catalyst inserted into all the ingredients and by "stirring around to bring about enough agitation to get something done in the field." As mentioned in literature, the old cliche of bewailing the fact that we are being overwhelmed with vast amounts of data and consequently develop only half-vast ideas, was not all correct, although I remember using the expression more than once. We accepted the approach of getting "something" underway and in so doing we became a doer in the field as well as the cause of the needed catalystic actions. From the start we realized we would have to accept the empirical approach; by this I mean a single superior approach was lacking. We accepted the empirical approach not in total ignorance, for we knew if one was to develop working tools, theoretical analysis alone would be of little help. In our search for new storage media in the field of information retrieval, we came across a high storage density, experimental technique which, when coupled with high readout rates, could well be the answer to a practical and economical MT look-up or dictionary device. There was one storage medium known at that time that had possibilities of handling densities in the order of 10⁶ bits per square inch; this was the work of King and Ridenour in the use of photographic emulsion on glass disks. The work that followed is now history-the disk photoscopic memory, handling 3 × 10⁶ bits/square inch, was made feasible, providing us with an extremely valuable empir-

t Rome Air Dev. Center, Griffiss AFB, Rome, N. Y.


Fig. 1.

ical tool for further research in both information retrieval and MT. This is an example to illustrate the
point mentioned earlier, of getting better value in development investment when a development group has interrelated fields. While I am discussing the development of the photoscopic memory, I would also like to
illustrate an interesting point. This is of particular interest because it illustrates a sometimes neglected point
in developing an equipment that is dependent on a new
technique. Referring to the photo disk memory, the
equipment necessary to produce a disk was considerable,
but of course necessary, if one was to get a high-density
storage medium (see Figs. 1 and 2).
These two pieces of equipment by themselves do not
represent all the necessary capability required to make
a disk, but do show quite clearly the development involved in reaching a goal of practical and economical
storage. The point here is that development in these two
adjacent fields does not require only development of
data handling equipment per se. It requires development
of all those components that have anything to do with
the creating of the media. Actually, the development
breakthrough here in terms of what had to be done to
produce the required density, was not the disk itself; although this is the end product, it was the precise components that allowed us to make this disk from the raw
data on the tapes, thereby providing a facile method of

Fig. 2.

trying many types of stored data. This is not unusual
in development programs of this type, but it is the unheralded side. In terms of engineering toil, it represents
70-75 per cent of the work and a substantial portion of
the development dollars. Feasibility is a wonderful expression but a tricky term when it comes to development work. It was feasible to reduce a "bit" in terms of
laboratory tests-the emulsion always had the resolution-putting the emulsion on optical flat glass had
been done-reducing the bits to concentric tracks to
disk to show feasibility for a digital store-all this then
is "laboratory feasibility," and the cost is quite insignificant when compared to the cost to reduce many
millions of bits in a unit of time on a disk at the accuracy required. This had to be done precisely and
accurately, and peripheral equipment had to be designed and constructed to make the photoscopic memory
workable. The first set has been fabricated and improvements are now underway to perfect the disk-making
equipment. The electronic logic used for reading in and
out of the photoscopic memory comprise the "other
half" of the development.
Now as to the empirical approach in information re-


trieval. Although it seems that a device would exist that
allowed compact physical storage and efficient retrieval
for large-scale document libraries, such was not the
case several years ago. Today after some doing instead
of speculating, several methods exist. I shall speak of
three RADC equipment developments that cover conceptual voids in the field of storage and retrieval. These
developments can be broken down into two general
categories, both of which depend on environmental and
operational conditions when selection is made.
Category 1-The separation of the index from the
text. Defined, it is the index of the document removed
from the document itself so that the index search is
separate from the physical document. The physical
document is retrieved by an identification number as a
subsequent operation.
Category 2-The combination of the index with the
text. The index will also produce the physical document
when selection is made.
Under Category 1, at RADC we have the Magnacard
Development and the Index Selector. Both these developments come under the heading of technical development, which means in reference to these particular
equipments that we are developing "working" tools for
experimental use at RADC. In the case of Magnacard,
the storage medium is magnetic material deposited on
segmented tape, 1 X 3-inch plastic cards. Engineers at
RADC feel that Magnacard has excellent potential for
files that require high-speed extraction of information
and also where ease of updating and extensive file manipulation by categories is required.
The Document Data Index Set or the Index Search
Computer for specialized library as reported in another
paper at this conference by Ben Kessel of Computer
Control Corporation, is an Index Searcher designed for
library mechanization. It searches a large volume of index data and prints out the identification of the document, graphic material, etc., that satisfies the search requirements. The Index Searcher uses continuous magnetic
tape as the storage medium and the scan is serial in
fashion.
Under Category 2, that is, index and document text
stored together, we find the Minicard program. As one
would suspect, this philosophy is based on usage with
extremely large files; and, aside from its ability to perform random search, it of course reduces file space and
bulk document handling problems considerably. We at
RADC consider this as one of the outstanding examples
in empirical development approaches, and state without reservation that this technical accomplishment is
unsurpassed in the storage and retrieval field. Incidentally, this development has now reached a point
where one can say, "It works." It is our sincere hope
that large-scale empirical data will be obtained by its
application that will give still further impetus to storage
and retrieval development. Also at this point, it might
be of interest to those in this field that achievements
of this kind do not come easily, and I am sure designers

and engineers in this field realize fully that 4½ years is
certainly a short period of time to develop an aggregate
of ten complex equipments having many thousands of
interrelated problems involving optics, emulsions, mechanisms, and electronics.
What does all this mean? It means we have mechanized library equipments that will simultaneously give
improved operations and serve as tools by which we
can experiment with various known library languages
and in a relatively short time show the hidden problems
in these index schemes themselves. It will also be quite
natural to design the index around the logic and structure of the tool. We can prove the worth of indexes by
constant evaluation while building a file.
I was asked to include in my paper all the work being
done by RADC in the field of information retrieval and
MT. In this respect I would like to mention that we are
very much involved in the field of character recognition.
This interest at first came about through the input
problems associated with MT and subsequently considered for all input problems in data handling such as
auto indexing, abstracting, etc. We have sponsored a
development model which reads one English type font
including numerals, both upper and lower case letters,
space, and punctuation. We also are under way in developing a Cyrillic character reading machine which
will give the MT field a tremendous boost in cutting
down the transcription cost.
New York University has recently completed the first
phase of a study for RADC on Russian printing matter.
This study included such problems as the variety and
frequency of Russian type faces and sizes in current use;
the reflectance data of the printed type, the reflectance
data of the Russian paper, the absorption and reflectance data on inks used in Russian printing, the predominant method of printing, and also the frequency of printing errors.
RADC is also doing other work in the MT field besides developing hardware. A contract with the University of Washington has brought forth a lexicon in
the order of 500,000 words with Russian as the "source"
language and English as the "target" language. These
words will be used on the photo memory of the mechanical translator. RADC scientists are also aiding
others in supporting the very interesting work of Dr.
Oettinger at Harvard in linguistic work in producing
scientific dictionaries automatically. We are also supporting the longer range efforts of the Cambridge Language Research Unit of Cambridge University. This
research centers about the use of logical methods
utilizing the thesaurus approach in obtaining a translation breakthrough in the multiple meaning problem.
Here thesaurus1 means "an organization of word usage
in an ordering dependent on logical content (rather

1 Report on the work of the Cambridge Language Research Unit
for the National Science Foundation prepared by Gilbert W. King
dated July, 1958.

than on alphabetic content as in a dictionary)." These
two efforts are supported jointly with the National
Science Foundation.
RADC scientists are also aiding in the support of the
Research Group of the Center of Studies on Linguistic
Activity and Cybernetics, University of Milan, Italy.
This work is a continuation of the research studies performed on mental operation and semantic connections.
The Research Group is pursuing the approach that man
has fundamental order in his thinking process and that
these are elements of a correlational net. Taking this
correlational structure of thinking and mastering the
semantic connections which link the input and output
language within this structure, they believe, will be a
solution to some of the more difficult problems in
mechanical translation. 2
We have a development that is completed and although it is classed in the field of information dissemination, we mention it here because it is used in association
with storage and retrieval devices. We feel that dissemination exists as an important problem in the continuous
flow of data in the field of data handling. This function
can be automatized; the equipment referred to is the
automatic disseminator jointly developed by RADC
engineers and Magnavox Research Laboratory. The
disseminator determines what groups are qualified to
receive a given document and controls the production
and addressing of copies so as to insure that the qualified
groups get their copies quickly. The disseminator must
determine on the basis of the subject and geographical
area of coverage of a given document who is qualified
to receive a copy of that document. The disseminator input, as used in one case by RADC, is the flexowriter
tape that was used in the Minicard camera for control
and code input. The information on the tape is compared to the stored requests in the disseminator as
stored in a magnetic drum. The output is tape that contains control data for manufacturing duplicate Minicards based on a match in the disseminator.
2 S. Ceccato, "Mechanical translation," Automaz. e Automat., p. 1, April, 1958.
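The matching step the disseminator performs can be indicated schematically. In the sketch below (Python), the request table, subject codes, and area codes are hypothetical stand-ins for the profiles actually held on the magnetic drum:

    def qualified_groups(doc_subjects, doc_area, stored_requests):
        # Compare a document's subject and geographical-area codes
        # against each group's stored request; a match qualifies the
        # group to receive a copy.
        return [group
                for group, (subjects, areas) in stored_requests.items()
                if doc_area in areas and subjects & set(doc_subjects)]

    # requests = {"group A": ({"radar", "warning"}, {"arctic"})}
    # qualified_groups({"radar"}, "arctic", requests) -> ["group A"]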


SOME REQUIREMENTS OF THE FUTURE

After this cursory review (and I hope some insight) into information retrieval and mechanical translation development efforts of the RADC, we come to a question of
what lies ahead in these two fields. Before I go too far in
this direction, I would like to mention that the Air Force
has a cardinal interest in the national problem concerning technical information. As can be seen by our efforts,
we are going through a "development era" which we feel
will have a great influence on the national technical information picture. This is a natural feeling to come from
a group that is engaged in developing techniques and
hardware such as language research, print readers,
automatic language translators, storage and retrieval
devices, and disseminators. Equipment such as this
will, out of necessity, play an important part in the national picture in both centralized and decentralized information systems efforts.
Being in the development field, one supposes we
should have fine prediction qualities in the semanticgraphic data handling field. Frankly it boils down to
studying the trends, following the curves and coming
out with the statement that future equipment in these
fields should have faster scanning rates, higher access speeds, greater packing power, lower power requirements, lower cost, etc. However, anyone can make those
predictions, but in speaking for a group which has a real vested interest in these fields, we feel the empirical exploitation of developed equipments should be aggressively pursued and that much more should be done in language research for both information retrieval and MT. We feel some effort is "coming about" in this field
but many more "bold steps" must be undertaken.
Mechanical translation by itself is a language problem,
and, by its solution and future use, we only add more
literature in the already heavily loaded field of storage
and retrieval. Being engineers and scientists we tend
perhaps as a group to shy away from the language research side of information retrieval and MT. However,
we have slowly learned over the past few years that
herein lies the ultimate solution to our immediate common problem.


Computing Educated Guesses
E. S. SPIEGELTHALt

TO DISTINGUISH himself from the poor benighted man-in-the-street, the computer sophisticate is apt to refer to the beasts as "so-called giant
brains" or as "lightning-fast idiots." He knows, as do
we, the great gulf which separates the human brain from
the general-purpose digital computer. Still, the exact
dimensions of that gulf are quite unknown, and the
desire to show that the hiatus between man and machine is smaller than many suspect impels both the adventuresome and the iconoclastic. The attempt, the
successful attempt, to automate one area hitherto considered an exclusively human domain constitutes my
topic today.
We are all familiar, if only by hearsay, with the
troubles that can beset the best of computer programs
if the input to the program is not thoroughly debugged.
For a program designed to test the putative behavior of,
say, a proposed steam turbine, where the input consists
of a scant dozen or so parameters, input debugging is
hardly a problem. The situation is quite different for a
data-processing operation, particularly when the input is massive, as it usually is. Three alternatives, all unpleasant, present themselves to the supervisor of such
a large-scale data-processing operation. He can build a
wide variety of error-detecting features into his program, flagging all input errors for subsequent human
correction, he can employ a host of human pre-editors to
clean up the input, or he can hope that input errors are
rare, and let it go at that.
Unhappily, there are many applications where errors
are not rare, where the do-nothing solution is obviously
frivolous and where, consequently, a sizeable group of
humans is necessary, either as pre-editors or as on-line
trouble-shooters. Nor is it always the case that the
necessary human beings can be clerical types. Certain
input debugging calls for sophisticated and knowledgeable practitioners. We are all hopeful-almost all, anyway-that keypunch machines and operators will sooner
or later be superseded by character-reading devices and
the like. There is no philosophical difficulty in conceiving of typed, printed, or handwritten characters being
translated directly into computer language without any
human intervention, provided, of course, that those
characters were correctly typed, printed, or written to
begin with. Suppose, however, that the source characters are incorrect. Consider the ingenuity expended in
the Post Office just in recognizing all the variations of
"Albuquerque." Our Russian colleagues are supposed to
be far advanced in the domains of automatic translation
and character-reading, but present their machines with


t General Electric Co., Bethesda, Md.

a first edition of "Cybernetics," with all its typographical errors, and horrible difficulties would ensue. Our
choice, then, is clear. Either we admit that many important data-processing applications are impossible to
automate completely, or we find a way to mechanize the
human capacity for making educated guesses. We believe that, for some applications at least, we have found
a way.
While the techniques we have developed were conceived with one particular application in mind, I shall
describe them without reference to that application,
successful as it was. The principal reason for taking this
tack is to be able to present the basic, quite general,
features of our method without being tripped up by the
special form-fitting required by the actual problem. So,
let us be general, and consider any language with which
humans attempt to communicate with one another.
These may be natural languages, like English or German, or artificial languages like Esperanto or certain
telegraphic codes. There are all sorts of personal reasons for communication being difficult-ignorance,
dogmatism, poor sentence structure, etc.; however,
even if these factors did not exist, all sorts of nonhuman noise would beset would-be communicators. Information theory makes much of "redundancy" as an
aid in error-detecting and error-correcting when a noisy
channel is being used. Indeed, even humans who have
never heard of information theory make continual, and
skillful, use of redundancy in unscrambling all sorts of
garbled communications, whether the trouble be crosstalk in a telephone conversation or missing letters in a
crossword puzzle. Without attempting to build a model
of the brain, replete with neural nets and such, let us see
if we can single out the functions performed by human
redundancy-exploiters. If these functions turn out to be
performable without recourse to extrasensory perception or to the psychokinetic effect, our automation
problem is essentially solved. There remain only the
minor problems of collecting all the necessary data,
carrying out a rather gruesome programming task and
finding a computer fast enough and capacious enough to
make our solution practicable. I shall return later to
this question of practicability. At the moment, allow
me to sketch the functions which, when suitably programmed, allow a general-purpose computer to simulate
a redundancy-exploiting, error-detecting, and errorcorrecting human being.
Rather than jump into a completely general and abstract formulation, let me use a concrete illustration.
Fig. 1 shows two familiar sights, a correctly prepared
mailing envelope and, below it, a somewhat sloppier version of the same thing. We shall assume at first that a


John I. Zilch
40 Blueberry Lane
Boston 3, Mass
Computers Ltd.
12345 E. 152nd St.
Phoenix, Ariz.
Att.: Mr. P. B. M. Smith
(a)

JohnIZilch
40 Blueberry Line
Bost. Massachuesets
Computers LId.
112345 E 152
Pheonlx,A.
Attention Smith
(b)

Fig. 1-Envelopes, (a) good, (b) bad.

"perfect" character-reading device has "read" the perfect envelope, and consider the functions which must be
performed by a machine to "understand" the envelope,
i.e., to route it to the correct addressee by the desired
means, e.g., air mail or first class. We shall then consider
a fallible character-reading device reading the lower,
garbled, envelope, and see what can be done there. It
should be emphasized that, in this application, we are
not concerned with the essentially straightforward task
of actually routing the envelope to its destination. Our
job here is just to ascertain the information needed by
the routing program.
Our machine must perform two separate functions on
each "word" read from the envelope. A word here is any
group of contiguous characters on one horizontal line,
not containing any embedded blanks or commas. Given
any such word, the machine must first ascertain the
class of words represented by this word. In our example,
the machine must determine that "Phoenix" is the
addressee's city, and that "I." is an initial of the sender.
This first function is called the "identification" of the
word. The second function is that of "recognition."
Having established the class to which a word belongs, it
is next necessary to determine which one of the class
members the given word represents. In our first example,
the recognition process is simple, almost trivial.
"Phoenix" is matched against every element in a master
list of cities and, 10 and behold, it is found that "Phoenix" is "Phoenix." A glance at the lower envelope on
Fig. 1 will reassure you that the recognition problem is
not always a trivial one.
The identification process is fairly easy for a Gestaltperceiving, pattern-recognizing human who is himself
accustomed to writing envelopes according to the standard format. The machine needs a little help in this direction. Fortunately, we can provide this help. On the one
hand, we can program our machine to elicit the same
data that our pattern-recognizing facility allows us to
obtain. Clearly, our character-reader will be able to
note, for each word, its relative position with respect to
all other words on the envelope, and its position with
respect to the envelope itself. For each word, then, we


start off with the knowledge of the line it is on, its position on the line (left end, right end, interior) and the
words which flank it on either side. With a little extra
programming effort we can determine the length, i.e.,
the number of characters of each word, its character
pattern (is it all alphabetic, all numeric, some sort of
hybrid?) and, perhaps, the presence in the word of some
salient feature, e.g., the colon following "ATT.:". Indeed, we can usually determine quite easily much more
information than we need for the identification of our
words. Much more, that is to say, when we are dealing
with a noiseless channel, and/or a communication format as simple and relatively invariable as the front of
an envelope.
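Gathered into one record, the per-word evidence just listed might look as follows (Python; the field names and classification are my own illustration, not the program described):

    def word_features(word, line_no, position, left_neighbor, right_neighbor):
        # Evidence per word: its line, its position on the line, its
        # neighbors, its length, its character pattern, and a salient
        # mark such as a trailing colon.
        if word.isalpha():
            pattern = "all alphabetic"
        elif word.isdigit():
            pattern = "all numeric"
        else:
            pattern = "hybrid"
        return {"line": line_no,
                "position": position,   # "left end", "right end", "interior"
                "left": left_neighbor,
                "right": right_neighbor,
                "length": len(word),
                "pattern": pattern,
                "ends_with_colon": word.endswith(":")}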
Of course, whether this information is adequate,
overly complete, or inadequate depends on how we use
it. At this point in the identification process, the machine
must turn to its accumulated store of factual knowledge,
a store which is compiled by a subsidiary program in
advance of production running. This store consists of
lists and tables of probabilities, and provides the data
which, in conjunction with the specific information for
each envelope, allow each word to be identified with a
high probability of correctness. Our basic technique
here is the use of Bayes Factors as instruments for
weighing evidence. Fig. 2 gives the essentials of this
technique.
For each class of words that can occur in the specific
type of communication in question-mail envelopes, in
our example-an a priori probability is given for the
occurrence of a representative (or two, or n) of that
class. This probability, like all the others we use in this
process, is derived from frequency counts on sufficiently
large samples of the data to be processed. Also for each
class, we provide the probabilities that, for example, a
specific representative of that class will have length
3, or 4, or 5, or that the class representative will be
found at the beginning, or the end, of a line. In brief,
for every piece of information we scan each envelope
for, we have a corresponding set of probability distributions, one set for each class of expected words.
In the identification phase of our program, we consider one actual word at a time, testing that word
against the hypotheses that it is a representative of expected class A, B, etc. Eq. (1) in Fig. 2 gives the skeleton
of such a test. Here we are testing the hypothesis that
the word "Smith" is a representative of the "zonenumber" class. Our frequency counting is supposed to
have informed us that the a priori probability that any
word on our envelope is in the zone-number class is
0.017. We first test our hypothesis by using the empirically-determined fact that "Smith" has length 5. This
gives us our second term on the right side of (1), i.e., the
Bayes Factor for the "length event." The product of the
Bayes Factor and the a priori probability is the a
posteriori probability that "Smith" is a zone-number.
Not very surprisingly, this is a small number. We now
compare this number with two thresholds. If the a


P(H/E1) = P(H)·P(E1/H)/P(E1)
        = P(H)·P(E1/H)/[P(E1/H)P(H) + P(E1/H')P(H')]                    (1)
        = (0.017)(0.0001) = 0.0000017  > 0.000001 = Rejection Threshold
                                       < 0.9 = Acceptance Threshold

P(H/E1·E2) = P(H)·P(E1/H)·P(E2/H·E1)/P(E1·E2)
           ≈ P(H)·[P(E1/H)/P(E1)]·[P(E2/H)/P(E2)]
           = (0.0000017)(0.00001) < 0.000001 = Rejection Threshold      (2)

where
H = hypothesis that "Smith" is a "zone-number"
E1 = the event that the length of "Smith" is "5"
E2 = the event that the pattern of "Smith" is "all alphabetic"
Fig. 2-Hypothesis testing.

posteriori probability exceeds the acceptance threshold,
we accept the hypothesized identification and turn our
attention to the next actual word; if the probability
falls below the rejection threshold, we reject the hypothesis, and test the actual word against the next expected word class. Finally, if our probability falls between the two thresholds, we test the same hypothesis
against the next event, using our a posteriori probability
as the new a priori probability. In our example in Fig.
2 we have been generously low with our rejection
threshold, so that it is necessary to go to (2) where we
test the hypothesis, "Smith = zone-number," against the
character pattern, and allow the low probability of a
zone-number consisting exclusively of letters, to push
our hypothesis into limbo. If we were scanning French
addresses, with zone-numbers given in Roman numerals,
the Bayes Factor in (2) would be very different.
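The test of Fig. 2 reduces to a short loop. In this sketch (Python), the prior, the Bayes Factors, and the thresholds are the illustrative values of the "Smith" example; in practice all would come from the frequency counts described above:

    def identify(prior, bayes_factors, accept=0.9, reject=1e-6):
        # Multiply the a priori probability by one Bayes Factor per
        # piece of evidence; stop as soon as the running a posteriori
        # probability crosses either threshold.
        p = prior
        for factor in bayes_factors:
            p *= factor
            if p >= accept:
                return "accept", p
            if p < reject:
                return "reject", p
        return "undecided", p

    # identify(0.017, [0.0001, 0.00001]): "Smith" as a zone-number
    # survives the length event (1.7e-6 > 1e-6) but is rejected on the
    # all-alphabetic pattern event, as in (1) and (2).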
After scrutinizing all the actual words on the envelope
in this manner, we may find that certain words are still
unidentified. In this case, we iterate through our process
once again. However, certain features of the process will
have changed. Suppose that we have identified two
different zone-numbers in the first pass. Since we expect
to find no further zone-numbers, we no longer test any
of our undecided actual words against the hypothesis
that they are zone-numbers. This not only reduces our
processing time-it also changes the a priori probabilities of the remaining word classes, and affects the numbers entering into all the Bayes Factors. Another change
in the second pass is that new evidence can be used to
give rise to Bayes Factors. A word identified as a zonenumber in the first pass provides strong evidence that
the word to its left is a city name. Clearly, the topological relationships subsisting between words cannot be
utilized until some words have been identified.
If successive identification passes still leave a residuum of unidentified actual words, as might happen if,
for example, two or more words were run together, thus
appearing to the machine as one word, there are subsidiary tricks that can be played. Due to time limitations, I shall have to leave these tricks to your imagination, and move on to the recognition phase.

In the simplest case, all actual words will have been
correctly identified and, if the words are all correctly
spelled and correctly ingested by our character-reader,
recognition will consist of little more than finding the
exact match in the proper list, a list determined by the
identification of the word. It is possible to make even
this simple process simpler or, at least, faster. To search
a list of all the cities in the United States can be time-consuming, particularly if the list must be transferred
from tape to core memory. However, if the corresponding state has previously been recognized, then a much
reduced list of cities can be inputted and searched. Suppose further that the corresponding zone-number has
been recognized as "25." Then we need consider only
those cities in the given state which have at least 25
zones.
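List reduction itself is easily stated. In this sketch (Python), the table mapping each city to its state and number of zones is an assumed layout:

    def reduce_city_list(cities, state=None, min_zones=None):
        # Use already-recognized context to shrink the list searched:
        # keep only cities in the recognized state having at least the
        # recognized number of zones.
        return [name
                for name, (city_state, zones) in cities.items()
                if (state is None or city_state == state)
                and (min_zones is None or zones >= min_zones)]

    # With the state recognized and zone-number "25" recognized,
    # reduce_city_list(cities, state="Ariz.", min_zones=25) leaves
    # only the largest cities to be matched.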
If we are bound to get a direct match whether we
scan a big list or a little list, this process of list reduction
is of secondary value only. It is when a direct match is
not forthcoming that this technique assumes greater
importance. In the absence of a direct match, we are
constrained to use brute force techniques of a more or
less sophisticated nature. If we are fortunate enough to
reduce a list down to one entry, then we can avoid brute
force completely. Failing this, we can expect two advantages to accrue to the use of a reduced list, in general.
First, for the same elapsed time, we can employ more
brute force techniques per list entry; second, we can at
least hope that, by reducing our initial list, we will expunge spurious candidates to which our brute force
techniques might give scores equal to, or even greater
than, the score of the correct candidate. For example,
Fig. 3 gives one horrible example, often quoted in this
connection. Recognizing (a) as being either "New
York" or "Newark" is an awful job. A non-brute-force
technique, such as list reduction, which removes the
false entry from consideration is a welcome way of
cutting this Gordian knot.
Again for lack of time, I must give the actual brute-force techniques a very hasty treatment. Let me mention just two techniques of the many available. To
match a word which has had two letters transposed [as

in (b) in Fig. 3], against the original word, we look for
list entries with the same letter composition as our actual word, i.e., entries with the same number of A's,
B's, C's, etc. Scanning these for a single transposition is
relatively easy.
A second technique is useful when a letter or two (or
more) has been erroneously dropped from, or added to,
a word. (c) in Fig. 3 is due to a stuttering typist who repeated the first letter of the word. Two words run together provide further examples of this kind of noise.
What we try here is a direct match of our actual word
with a proper subset of our list entries, and vice versa.
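Both tricks can be sketched briefly (Python; the routines are illustrative):

    from collections import Counter

    def same_letter_composition(actual, entry):
        # Transposition filter: a word with two letters transposed has
        # exactly the same letter counts as the original, so "Pheonix"
        # passes against "Phoenix" while most entries fail at once.
        return Counter(actual.lower()) == Counter(entry.lower())

    def subsequence_match(actual, entry):
        # Dropped- or added-letter test: does the list entry read
        # through the actual word in order, with extra letters skipped?
        # "Boston" matches the stuttered "Bboston" this way.
        remaining = iter(actual.lower())
        return all(ch in remaining for ch in entry.lower())

Applied in both directions, the second test covers letters dropped from the actual word as well as letters added to it.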

(a) Newyark
(b) Pheonix
(c) Bboston
Fig. 3-Typical typographical errors.

If no amount of brute force seems to work, and certain words just cannot be recognized, we can either give
up gracefully at this juncture or we can admit, even
more gracefully, that one of our educated guesses might
have been wrong. If we choose the latter alternative, we
have the messy job of deciding whether we went haywire in the recognition phase, or all the way back in the
identification phase. In either case, it is still necessary
to find a likely spot for picking up the dropped stitch
without causing the entire garment to unravel. Sometimes, indeed, we are left with the original ball of wool.
These, however, are almost always the cases which
stump human editors.
This ability to iterate back, and back, and back, can
of course lead to excessive use of computer time. It does
have its advantages though. It means that a bad guess
is not an irrevocable misstep. It also means that various
parameters, the identification acceptance and rejection
thresholds, for example, are not nearly as critical as they
would be in a once-through process. Since these are
among the hardest parameters to estimate accurately,
any diminution of their sensitivity is a positive gain.
At this point, I should like to restate our major techniques in somewhat folksier terms than "Bayes Factors" and "list reduction." In our identification phase,
we attempt to use the constraints imposed by the format, mailing envelopes in our example, plus the constraints of the language itself, the length and character
patterns of the expected word classes, to provide a rudimentary form of pattern recognition. We then use our a


priori knowledge of word statistics and interword relationships to find the most probable matching of the
actual words to the expected word classes. Human beings presumably use rank orderings of hypotheses,
modified by intuition, to perform such matchings. Since
machines lack intuition and since we have not yet developed a calculus of rank orderings, we use the paraphernalia of Bayes Factors to accomplish the same task.
Without undue stretching of the terms, we might say
that, in the identification phase, we exploit the syntactic
constraints on the language we are processing, whereas
in the recognition phase, by our use of the list reduction
technique, we exploit the semantic constraints. We
might subsume both types of constraint under that
much-abused word, redundancy. As for a folksy term for
our brute-force techniques, the most accurate that occurs
to me is "knowledgeable cynicism." We expect errors to
be made and, usually, we have some information as to
the kinds and sources of error, as well as their frequencies
of occurrence. If we know that typists frequently hit a
key next to the one they should hit, we store the keyboard pattern in our program; if we know that our
character-reader frequently confuses "0" with "c," that,
too, goes into our dossier.
A final word now as to the applicability and practicability of our techniques. What with tape searches and
Bayes Factor computations, processing time may, but
need not always, be excessive. The preparation of all the
lists required in the recognition phase is a painful task.
With a relatively stagnant language, this list-making
can be a one-shot ordeal; with a volatile language requiring frequent updating of the lists, the pain might be
unbearable. What has been said about lists also applies
to the preparation of the probability tables for the
identification phase. A final pause-giving consideration
is the amount of redundancy in the language to be
processed, particularly when the processor cannot choose the language, as he sometimes can. Our
private feelings are that a language sufficiently low in redundancy to be unintelligible to a machine will also be
unintelligible to a man. I won't press a point which
treads so heavily on anthropocentric toes.
In summary, then, we feel our techniques can be useful in some massive data-processing applications, in
automating post offices, in translating natural languages,
where every second word in the source language has
several correlates in the target language, and we know
our techniques have worked at least once. I won't ask
that you take this on faith, though I'd appreciate it if
you would.


A Memory of 314 Million Bits Capacity with Fast
and Direct Access-Its Systems and
Economic Considerations

N. BISHOP† AND A. I. DUMEY‡

† Time Inc., Springdale Labs. Div., Springdale, Conn.
‡ Systems Consultant, Roslyn Heights, N. Y.

THE graphic arts laboratory of Time Inc. in
Springdale, Conn., is a laboratory largely devoted
to improving the arts of printing and papermaking. Of equal importance to a publisher of weekly
news magazines is a large and well-serviced list of subscribers. This may serve to explain why there exists today in Springdale a completed and working engineering
prototype of a large direct-access memory. This equipment was developed by the laboratories to prove the
technical and economic feasibility of storing and processing large basic record files in memories providing
direct access facilities to individual records. Since our
program of systems study, equipment development, and
economics study has indicated such feasibility, a review
of our findings may well open new avenues of approach
to the solution of present and expected future problems
in the expanding field of automatic data processing.
The data-processing systems designer is constantly
striving to strike an optimum balance between the costs
of data storage, computer time, job requirements of input and output, and manpower. So fast is the art progressing that today's best balance for customer A may
be quite different from that for customer B a year later,
even though A and B present identical job requirements.
The concept of large data-storage capacity coupled
with direct access to individual records has long intrigued the systems man. His interest is usually diminished when he starts to consider the cost of storing large
basic files in such a manner with the presently available
memories providing such direct-access facilities. Either
the storage cost per bit is too high or the access time is
too high to keep up with the daily work load. Serial
storage provides high capacity and low cost per bit, but
it does not provide for the often essential requirement of
direct access to individual records. There appears to be
a definite gap in the systems man's bag of tricks and
hardware which needs filling by a direct-access memory
of high capacity, reasonable access time, and reasonable
cost per bit for storage.
In designing a high-capacity direct-access memory
for use in very large basic record files, it is necessary to
make as many records as possible accessible to each
processing station, if low cost per bit of storage is to
result. At the same time, access times must be low
enough to handle traffic requirements. Excessive packing factors result in intolerable tolerance requirements
for manufacture and maintenance. Excessive travel distances from record to processing station result either in
immoderate acceleration and torque requirements or
excessive access times, if acceleration and torque requirements are kept within bounds. Too many records
per access mechanism, with consequent increase in
traffic per unit, can overburden the designer with access-speed requirements. The key to an approach which will
meet systems and economic requirements is expressed
in one word-moderation. A combination of moderate
packing factors, moderate travel distances, moderate
accelerations, and moderate average access times can
result, and has resulted, in a fast and direct access
memory with a record capacity sufficient for economical
large basic file application. Let us now discuss its operational characteristics.
The significant characteristics of a direct access
memory to a systems man are those which answer the
questions: "How much data can I store?" "How many
entries can I process per working day?" "What is the
best available estimate of the cost per bit of storage
plus provision for direct access?"
As far as the bit capacity of presently available direct-access equipment is concerned, the upper bound seems
to be a few tens of millions; and, on the cost side, the
lower bound seems to be some ten bits for the penny.
Access times in such memories vary from a second down
to a few milliseconds, the latter figure reflecting the
need to cover high activity against the file, as in the
case of short-term airline reservations.
The engineering prototype of our memory has a total
information bit capacity of 314,500,000. This capacity
is distributed in 262,144 (2¹⁸) separate storage locations within the memory, each location providing a
message capacity of 1200 bits. The operation of the
memory is under the control of three basic commands
derived from peripheral equipment. The first command
consists of an 18-bit parallel-fed address, followed by a
seek record signal. The second is a process command
calling for a read, write, or erase cycle. The third is a
release-record command, which restores the memory to
readiness for operation on the next series of commands.
The maximum time required to execute a seek record
command is 3.87 seconds and the minimum is 0.61
second. Each process cycle, read, write, or erase, takes
0.45 second. On completion of the last process cycle, 0.6
second is required to restore the memory to a condition of readiness for the next series of commands.
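These timings combine additively into working cycles; as a quick arithmetic check of the figures worked out below, a small sketch using only the values just quoted:

    SEEK_MAX, SEEK_MIN = 3.87, 0.61   # seconds for a seek record command
    PROCESS = 0.45                    # per read, write, or erase cycle
    RELEASE = 0.6                     # restore to readiness

    def working_cycle(seek, n_process_cycles):
        return round(seek + n_process_cycles * PROCESS + RELEASE, 2)

    # One read cycle (the information-retrieval case):
    print(working_cycle(SEEK_MAX, 1), working_cycle(SEEK_MIN, 1))  # 4.92 1.66
    # Five process cycles (the subscription-file case):
    print(working_cycle(SEEK_MAX, 5), working_cycle(SEEK_MIN, 5))  # 6.72 3.46
    # The quoted averages (3.14 and 4.94 seconds) correspond to an
    # average seek time of about 2.09 seconds.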
These are the basic figures. Here are some examples


of how they would be applied to practical situations. In an information-retrieval system, in which additions to the file tend to occur rarely, the over-all average working cycle is 3.14 seconds, which includes 0.45 second for one read cycle. Maximum and minimum figures would be 4.92 and 1.66 seconds, respectively.

In our own operation, about 95 per cent of the actions on the file call for five processing cycles per access. This results in an over-all working cycle of 6.72 seconds maximum, 4.94 seconds average, and 3.46 seconds minimum. These last figures imply that, under an even traffic flow, over 2 per cent of the storage locations can be consulted and processed in an eight-hour day.

Obviously, file arrangement, input form, and other considerations are important factors in arriving at a working average of access and processing cycle for any direct-access memory. The maximum, minimum, and average figures will permit a rule of thumb estimate of the applicability of this equipment to various problems. In our own case, there were serious queueing considerations and addressing limitations, which made the actual speed of the device available as a safety factor for an actual file activity of about 1 per cent of the file per day.

Neither time nor circumstances permit a complete description of our engineering prototype at this time; however, some of its physical characteristics may be of general interest. Information bit rate in and out is 20,000 serial bits per second. Information input and output levels are adequate for computer communication purposes. Signal-to-noise ratio, measured as the ratio of peak signal voltage read from a recorded location to the peak noise voltage read from an erased location, exceeds 40 db. Semiconductor components are used throughout. The circuits within the memory cabinet provide all systems communication and control facilities except buffering. Power requirements are 550 watts average and 750 watts peak load on a 115-volt 60-cycle single-phase circuit. The prototype, which weighs 4800 pounds including its glass and steel enclosure, occupies a floor space of 51 inches by 62 inches and is 95¾ inches high. It is designed to operate in average office conditions of temperature, humidity, and dust content.

This equipment is now set up for demonstration and cycled testing in the laboratory. It operates under the control of a device which was designed to run it in a test mode. The test equipment consists of a Flexowriter, 1200-bit shift register buffer, comparator and control circuits which also provide coding interlocks. Keyboard control, tape reader control, or a combination of the two is possible; and information may be stored, edited, justified, transmitted, and received. Special test tapes have been prepared which permit repetitive cycles of access and processing at rates, and in a manner, simulating hard usage in a system. The comparator and counter circuits associated therewith permit a wide variety of error checks. Input and output can be represented on hard copy, punched tape, or a combination of the two. Many test runs have clearly indicated the few desirable design modifications for inclusion in a production unit.

In order to complete our economic studies on the application of a direct access approach to the handling of our subscription records, we had an independent manufacturer prepare a manufacturing cost estimate on the basis of duplicating the prototype memory in quantities necessary for such application. This estimate indicated a cost of storage of 2 cents per bit or less. Attractive as this figure of cost per bit for direct-access storage may be, the cost of storing billions of bits of data, such as a large publisher's subscriber records, represents a considerable investment for storage alone. However, the compensating factors are considerable. First, a potentially better grade of subscriber service is possible, based on faster action on subscriber requests; and second, a considerable saving can be made in costly computer decision-making time, as a result of limiting such computer time requirements to entries demanding decision.

The original specifications for our memory were based on the concept of storing all file data on each subscriber in one addressable location of 1200-bit capacity. As our studies progressed, it became apparent that it is not necessary to store all file data under direct access, since many of the data are not required for daily file maintenance, and can be stored to better systems operational advantage in serial access memories. Since we planned to store the name and address of subscribers on serial tape for operating address label printers, and the name and address always appear as input with each action against the file, it is unnecessary to store these data under direct access. Complete billing information can also be stored on tape, as it is not required for the daily maintenance job. It was decided to limit the storage per subscriber under direct access to the minimum required for decision-making in daily file maintenance plus the few characters necessary for minimizing duplication. The data retained are sufficient to provide systems communication with related files. Thus, it is now possible to store the decision-making data on two subscribers in one 1200-bit storage location. While traffic and overflow requirements have not permitted a 50 per cent reduction in the number of direct access units required in our system, they have enabled a 30 per cent reduction without impairing systems performance.

Efficient usage of direct-access storage capacity requires particular attention to the matter of distribution of entries within the file. This is especially true when input to the file is uncontrolled, as in a subscription service operation. An answer to efficient distribution within the file was found in an addressing procedure which translates a subscriber's name and address into a random and reproducible address within the file. Overflow capacity within each memory is reserved to handle new entries addressed to fully occupied storage locations.

An action against the file usually begins with a letter including the subscriber's name, address, and requested action. The file location for a subscriber, whether a new or an old customer, is derived from the most reliable
portions of the name, street address, city, and state. As
stated above, to attain efficient usage of memory capacity, the data supplied by subscribers must be manipulated by a computer into addresses which distribute
randomly over the field of available storage locations;
and furthermore, identical subscriber data must always
produce the same address. We have chosen a computing
technique which, in brief, considers the subscriber-supplied data as a binary number, divides it by a prime
number almost as large as the number of addressable
locations, and uses the remainder as the address.1-3
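In later terminology this is division-method hash addressing. A minimal sketch, with an illustrative prime divisor (the prototype had 2¹⁸ = 262,144 locations; the authors' actual divisor is not given):

    N_LOCATIONS = 2**18      # addressable locations in the prototype
    PRIME = 262_139          # a prime almost as large as that (illustrative)

    def file_address(subscriber_data: str) -> int:
        # Consider the subscriber-supplied characters as one binary
        # number, divide by the prime, and keep the remainder.
        as_number = int.from_bytes(subscriber_data.upper().encode(), "big")
        return as_number % PRIME

    # Identical subscriber data always produce the same address:
    print(file_address("J DOE 123 MAIN ST SPRINGDALE CONN"))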
Such randomizing techniques lead to occasional duplication, and the system must be capable of distinguishing
between such cases and of providing accessible storage
for each. Entry data comparison with data stored at an
occupied address in the memory provides this distinction. If the comparison indicates that the entry is in fact
a new one, and if the location is fully occupied, the new
entry is referred to a directory of unallocated storage
locations specifically reserved for such overflow cases.
The first such location address readout directs the
access to this location where the new entry data are
stored. This address is, at the same time, erased from
the overflow directory and recorded in the original storage location. Subsequent access to the new entry is attained without reference to the directory, as the overflow address is now stored in the addressable location
computed from subscriber-supplied data. The directory
of available unallocated storage locations is kept current by re-entering storage locations allocated to overflow when they are emptied by subsequent file action.
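The overflow bookkeeping can be modeled directly; a hedged sketch, with an invented two-entry location layout standing in for the 1200-bit locations:

    LOCATION_CAPACITY = 2    # decision data for two subscribers per location
    file = {addr: {"entries": [], "overflow": None} for addr in range(8)}
    overflow_directory = [6, 7]          # reserved, unallocated locations

    def store(addr, entry):
        loc = file[addr]
        if len(loc["entries"]) < LOCATION_CAPACITY:
            loc["entries"].append(entry)
            return addr
        if loc["overflow"] is None:
            # The first address read out of the directory takes the new
            # entry; it is erased from the directory and recorded in the
            # original location, so later access needs no directory.
            loc["overflow"] = overflow_directory.pop(0)
        return store(loc["overflow"], entry)

    store(3, "DOE J"); store(3, "ROE R")
    print(store(3, "POE E"))             # 6: the entry went to overflow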
This approach to the attainment of economical distribution of entries in a direct-access file system appears to
result in a favorable balance between the total number
of addressable locations required and the multiple accesses required to reach entries stored in locations allocated to overflow. Other equally favorable solutions
may result from specific choice of final systems componentry.
The principal output of a systems design for subscriber service is in the form of address labels printed
on a weekly schedule. Conventional tape storage is the
logical way to store the mailing list for efficient control
of output printers. This requires weekly preparation of
a new mailing tape, which is produced by collating a
weekly change tape with the previous week's mailing
tape. Computer requirements for this relatively simple
collating operation are moderate, and our studies indicate that low-cost computers are adequate for this operation. A tape converter is included to translate the output of the file maintenance portion of the system into
the tape format used by the collator. Decision-making
computer requirements for file maintenance have been
1 L. N. Korolev, "Coding and code compression," J. Assoc. Comp. Mach., vol. 5, pp. 328-330; October, 1958.
2 W. W. Peterson, "Addressing for random access storage," IBM J. Res. Dev., vol. 1, pp. 130-146; April, 1957.
3 A. I. Dumey, "Indexing for rapid random access memory," Computers and Automation, vol. 5, pp. 6-9; December, 1956.

minimized, since the daily work load is limited by direct
access storage to an estimated average of 100,000 items
per day against a total file of 12,000,000 entries, some
nine to ten million of which are active.
We proposed to use a certain medium size computer
for updating, because, although the list size is 12,000,000,
the daily work load of this portion of the system is in
the range of 100,000 items. In other words, instead of
having to cope with about 400 items per second in order
to run through the file in one day, the file maintenance
equipment has to handle about four. In recapitulation,
we can say that the more complex items, totalling about
100,000, make a day's work for a medium-size computer, while the 9,000,000 items of mailing information
(only ¾ of the file is active) are handled by simpler tape
collators.
Communication equipment was required to join the
memories with the main computing element. Our plans
called for a combination of a limited number of buffers,
and a cross-bar switching arrangement, the speeds of
each being consistent with the expected ratio of working cycles to access or rest times. Other details, too
bound up with the specific use of the equipment to be
of interest, will not be discussed. We did consider, as
every user of new equipment must, the problems of
conversion, including build-up of the file and parallel
operation of the new and old system.
Our systems studies indicated that conventional techniques and readily available equipment could handle
the systems functions of input, communication between
computers and associated memories, control and
switching, and output printing. We shall not dwell on
these systems aspects other than to say that their combination with the processes described above resulted in
a positive answer to the question, "Can direct access
memories be applied with good economy to the daily
maintenance of very large subscriber files?"
Our subscription service is a large-inventory, highvolume, data-handling problem with many complexities
due to the nature of the weekly news magazine business.
The amount of equipment necessary to the solution of
this problem is determined by the job requirements and
not by the operational characteristics of the direct-access memory we have described. There are other applications where the combination of such a memory with a
small-scale computer will perform all necessary systems
functions. Newly developed and working equipment
opens consideration of new fields for electronic data
processing, and it is our belief that this development will
provoke such consideration.
We have shown that very favorable costs per bit can
be achieved if traffic requirements are modest. Systems
considerations have been noted. Testing experience has
been given. Addressing and retrieval ideas have been
set forth. We believe that a workable, large-scale, direct-access memory, with an access time of a few seconds,
has a place in the roster of useful data-processing equipments, and we invite your attention to its possibilities.


Information Retrieval on a High-Speed Computer
A. R. BARTON†, V. L. SCHATZ†, AND L. N. CAPLAN†

† General Electric Co., Cincinnati, Ohio.

THIS discussion concerns a mechanized information retrieval system for the technical library of
General Electric's Aircraft Gas Turbine Division.
The system is confined to the technical reports and
papers available in the Division's Library. Textbooks
have not yet been included, since they are carried on
a Library of Congress system and did not readily fit
into the manual scheme which was developed. This discussion will be in two parts. The first part will cover the
information retrieval system prior to mechanization,
and the second part will cover the mechanization of this
system.
The technical library was established in 1953 using a
uniterm or key word coordinate indexing system. To
understand this system, we will follow the progress of a
publication through the various steps of the system.
First, as a publication is entered into the library, it is
assigned a five-digit access number (see Fig. 1). This report is typical of the technical reports generated within
the Division. Another example of reports in the library
would be NACA technical reports. The next step in this
system is the abstracting of the document. The abstracting is done by professional librarians and then
posted to a card file as shown in Fig. 2. This card is
controlled according to the previously assigned access
number. The next step in the system consists of reviewing the title, abstract, and document to select the most
descriptive words which will identify the document.
These words are primarily nouns. They become the uniterms. In our hypothetical case, these words are shown
on the right in Fig. 2, just as they appear in the system.
These uniterms, along with the appropriate access
number, are then posted to the uniterm file (see Fig. 3).
There are 100 numbers per side of card when full. Both
sides are used, and in the case of general terms such as
these, they are heavily posted so that several cards are
required. Certainly this system appears cumbersome at
this point, but it has the advantage that, in any given
technical area, the number of uniterms tends to level
off at a specific number after the system is developed.
In our case, this number is something under 9000. The
combined system is shown schematically in Fig. 4. I
think now you can see how information is recalled from
this system. The requestor discusses his problem with
the librarian. They decide on the uniterms to search.
The librarian then furnishes the appropriate uniterm
cards to the requestor. Once again referring to Fig. 3,
the problem facing him can be seen. He must cross coordinate the cards to find numbers which apply to all
uniterms. In our case, we have used three uniterms. A
more typical case would be four to six uniterms but
with fewer access numbers per term. In any case, if the
requestor is persistent, he will come up with some of the
matching numbers. The librarian can then go to the abstract file for the abstract cards. These are perused by
the requestor who in turn selects from the abstracts
those documents he desires to read in full. In the case
of the three uniterms we have selected, each has over
1000 access numbers posted to it. It is easy to see the
difficulty of cross coordinating these cards. This difficulty did little to promote the use of the technical library. The result was duplication of .experiments, technical studies, etc., with the attendant delays in time and
increases in cost.
Obviously, something better was needed. That "something better," we feel, was the program written for our
IBM 704. This program is basically a mechanization of
the manual system with very little effort to change the
system itself. This program is in two parts. Part one is
file updating and the cross coordinating of the master
uniterm tape, or, referring to the manual system, the
uniterm card file. Part two concerns the selection and
printing of abstracts. (Part one can be run independently of part two if desired.)
Cross coordinating, at present, is done on a strict
AND system. That is, an access number must appear
under each uniterm used in the search. The possibility
of an AND/OR system was considered and rejected.
However, the AND/OR approach is being used on a
modification of this program for a different type of information retrieval system. If no information is found,
a modified list of uniterms can be developed by the requestor and the librarian, and another run made.
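In modern terms the strict AND system is an intersection of posting sets; a compact sketch of part one's search step, with toy postings standing in for the master tape:

    # Master file: each uniterm with the access numbers posted to it.
    master = {
        "TURBOJET":   {10117, 10227, 20014},
        "COMPRESSOR": {10117, 10327, 20014},
        "VIBRATION":  {10117, 20014, 30551},
    }

    def cross_coordinate(search_terms):
        # An access number qualifies only if it appears under every
        # uniterm used in the search (strict AND).
        postings = [master[t] for t in search_terms if t in master]
        return set.intersection(*postings) if postings else set()

    print(sorted(cross_coordinate(["TURBOJET", "COMPRESSOR", "VIBRATION"])))
    # [10117, 20014]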
It was decided during the development of the program that one of the features of part one was to update
new information into the file at the time of cross coordinating. This decision was based on the knowledge
that in any information retrieval system there is a major
problem of keeping the system updated. Thus, all uniterms with their associated access numbers were stored
on the master uniterm tape in alphabetic order.
All search information and updating information can
be read into the machine from either the card reader or
tape. This information is processed to determine if it is
in alphabetic order and if there is any updating information. If the input is not in alphabetic order, it will be
sorted within the machine. If there is no updating, a
new uniterm master tape will not be made up. Both of
these decisions are made within the machine by interpreting the input information.
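Updating a sorted master from sorted input is the classic tape-merge step; a brief sketch with in-memory stand-ins for the two tapes (uniterms and numbers invented):

    import heapq
    from itertools import groupby

    # Old master and update "tapes": (uniterm, access number) pairs,
    # each already in alphabetic order by uniterm.
    old_master = [("BOMBER", 12345), ("BOOK", 9005)]
    updates    = [("BOND", 124), ("BOOK", 9906)]

    # heapq.merge streams the two sorted sequences into one, just as
    # a tape merge would; groupby gathers each uniterm's postings.
    merged = heapq.merge(old_master, updates)
    new_master = [(term, sorted(n for _, n in group))
                  for term, group in groupby(merged, key=lambda rec: rec[0])]
    print(new_master)
    # [('BOMBER', [12345]), ('BOND', [124]), ('BOOK', [9005, 9906])]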
To utilize best the 32,000 locations of memory in the
704, it was decided to use long records of information.

Fig. 1-Title page of AGT Report. [Figure: sample title page, "A Study of Compressor Vibration Under Varied Loading Conditions," by J. Doe, Jet Engine Department, Aircraft Gas Turbine Division, General Electric Company, Cincinnati 15, Ohio; Report No. M57AGT3471, AGT-Technical Information Series.]

Fig. 2-Abstract card with uniterms. [Figure: abstract card for access number 10117, with its uniterms listed at the right.]

Fig. 3-Uniterm wheel cards. [Figure: cards for the uniterms TURBOJET (1249 access numbers posted), COMPRESSOR (1795), and VIBRATION (1029), each side filled with columns of five-digit access numbers.]

Fig. 4-Flow chart of manual system. [Figure: a technical report is assigned a five-digit access number, abstracted, and posted to the uniterm file.]

Many of the uniterms, with only a few access numbers
posted to them, are combined into one record with a
total length of not over 7000 words. A total of 9000 locations is reserved for reading in these records. If 'any
record exceeds 7000 words: the program will try to
separate this record into two records. If the record exceeds 8995 words and cannot be broken down, the program will be halted. Because of this factor, no uniterm
can have over 8995 access numbers posted to it. For a
larger system, this could be easily modified so that there
would be no limit to the number of access numbers
posted to a uniterm. Another feature to help cut down
on the length of records is that whenever a new uniterm is added, it will start a new record. This will also tend to break one record into two records, because the new uniterm may fall alphabetically between two uniterms that are in one record (see Fig. 5).

Fig. 5-Updating uniterm master tape. [Figure: master-tape records before and after updating; the uniterms BOMBER, BOND, and BOOK with their posted access numbers, separated by end-of-record (E.O.R.) marks.]
As the machine does the cross coordinating, each
search is stored in variable length buckets in memory.
The total length of these buckets is also 9000 words.
If the amount of information stored in these buckets exceeds 9000 words, the uniterm that caused the overflow
is stored in a temporary area and the search continues.
As more uniterms are read and cross coordinated, the
length of the buckets is decreased, permitting the addition of another search at the end of the last bucket.
When the master tape has been completely read, the
program rewinds the tapes and makes a second pass on
the tape using the uniterms from the temporary area.
At this point, let me stress that a second pass is only
made if there is an overflow on the first pass (usually
over 50 searches). In most instances, this will not occur.
If on the second pass all uniterms cannot be processed,
the program will notify the operators that the search
is too large and must be made smaller.
The present system contains about 35,000 abstracts
with an average of ten uniterms each, or a total of over
350,000 numbers posted to the master uniterm tape.
Part one will handle 99 searches at once with no limit
on the number of uniterms per search. It will also handle
unlimited updating. A normal run is about thirty
searches and updating of about 2000 access numbers
into the general file. The time required is about two
minutes for searching and four minutes for updating, or
six minutes total. Tape assignments for part one are shown in Fig. 6.

Fig. 6-Tape assignment part 1. [Figure: tape-unit assignments around the 704 with 32K memory.]
Part two is the printing out of all abstracts which correspond to the access numbers discovered in the first
part. One million abstracts have been allowed for in the
program. Timing for part two is approximately four
minutes per 10,000 abstracts. At the present time,
10,000 abstracts are placed on a master tape in numerical order by access number. A statistical study is now
being conducted so that abstracts will be in order by
frequency of use. This will significantly improve the
timing for part two as we expand our system. Tape assignments for part two are shown in Fig. 7.

Fig. 7-Tape assignment part 2. [Figure: tape-unit assignments around the 704 with 32K memory; input supplied by part one.]

Originally it was thought that the program would, in
most cases, go right on into part two. Therefore, referring to Fig. 6, it can be seen that the only tapes available at this time are tapes 7 and 8. The first abstract
master to be read is found on unit 7, the next on unit 8.
While the program is searching tapes 7 and 8, the
operator can mount other abstract master tapes on
units 4, 5, and 6. After each master tape is searched,
another abstract tape can be mounted. A continuous
loop is set up selecting tapes 7, 8, 4, 5, 6, 7, 8, 4, etc.
The number of tapes to be read is determined by an input card. All access numbers found in part one are
sorted into numerical order before starting part two.


At the beginning of each master tape is an indication
of the range of access numbers of that tape. Each number that the program is looking for is compared against
this range, and if it is not on this tape, an on-line comment will tell the operator we are finished with this tape.
The program will then modify all addresses to pick up
the next tape unit in sequence.
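The range test is a simple filter applied tape by tape; a sketch of the idea only (ranges and access numbers invented):

    tapes = [{"unit": 7, "range": (0, 9999)},
             {"unit": 8, "range": (10000, 19999)}]

    wanted = sorted([10117, 351, 20014])   # sorted before part two begins

    for tape in tapes:
        lo, hi = tape["range"]
        hits = [n for n in wanted if lo <= n <= hi]
        if hits:
            print("unit %d: print abstracts for %s" % (tape["unit"], hits))
        else:
            print("unit %d: finished with this tape" % tape["unit"])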
The output of this system is in two parts as shown in
Fig. 8. At the top is a sample listing of those access
numbers located by the system. Below is a set of single
sheets of printing. Each sheet contains the abstract corresponding to one access number as shown in the column
on the left. Each sheet is pre-addressed for direct mailing. If there is a security classification, it is shown. If for
any reason it is considered necessary to suppress printing of security or proprietary information, this suppression is under the control of a sense switch. It is now
possible for the requestor to review the abstracts, select
those for which he would like to receive the original
document, check the appropriate access number on the
list on the left, and return this single sheet to the
library.

Fig. 8-704 output parts 1 and 2.

I think that many of the advantages of this system are
obvious, among which are speed, cost, designation of
security classification, and the direct mailing feature.
Advantages that are not obvious are:
1) Reduction in amount of human handling with the
resultant errors.
2) All information is in narrative form and is alphabetic.
There is no coding.
3) Complete abstracts are readily available to the
requestor.
4) The need for extensive manual files is eliminated.
5) No information ever need be removed from the
system so no information can be lost.
This system has been in operation in General Electric
since September, 1958. Some of the modifications that
we have found to be advantageous through experience
are:
1) In most cases it is desirable to stop the program
after part one and examine the print-out before
going on to part two.
2) It is advantageous to be able to supply to part two
the ranges of access numbers wanted on each
search.
3) An upper limit should be placed on the number of
abstracts to be printed if it is requested that the
program continue on into part two without stopping after part one.
4) A method of combining several uniterms into one
composite term on any individual search is extremely necessary.


5) If any uniterm reduces the number of access numbers found to zero, this term is eliminated and the
search continues as if this word was not given in
this search.
These are only a few of the conclusions we have
reached. Future experience, we are sure, will dictate
many other additions or deletions to this program. Of
course, we are looking forward to the time when with
new equipment we will be able to search tapes simultaneously, thereby reducing our running time by a factor of approximately 10 to 1.
We are presently planning to expand this system to
mechanize the records involved in checking material
into and out of the library. We also plan to develop an
automatic overdue notice system.
This information retrieval program has enjoyed wide
acceptance in our plant. We have received requests to
modify the program to process various types of personnel registers, engine test data files, specialized blueprint
files, and various other types of information systems.
In any place where the key word coordinate indexing
system or some variation of it can be used, this program
seems to be the answer to many of our problems.


The Next Twenty Years in Information Retrieval:
Some Goals and Predictions*
CALVIN N. MOOERS†

HISTORICAL PERSPECTIVE

ALTHOUGH information retrieval has lately become quite a fad, I intend in this paper to stand
back and take an unhurried look at what is going
on, and try to predict where this field must go and what
it must do in the future. "Information retrieval" is a
name that I had the pleasure of coining only eight years
ago, at another computer conference.1 The name has
come a long way since then.
In thinking about a definition of information retrieval
and in considering the future of this field, we must take
an evolving view. At the present time, information retrieval is concerned with more than the mere finding and
providing of documents; we are already concerned with
the discovery and provision of information quite apart
from its documentary form. To encompass future developments, as we shall see, even this broad view of information retrieval will have to be modified and extended.
When we speak of information retrieval, we are really
thinking about the use of machines in information retrieval. The purpose of using machines here, as in other
valid applications, is to give the machines some of the
tasks connected with recorded information that are
most burdensome and unsuited to performance by human beings. At all times, it is important to remember
that it is the human customer who uses the information-retrieval system who must be served, and not the
machine. It makes a difference who is served, and this
little matter is sometimes forgotten in computer projects.
To get a historical perspective of the introduction of
machine methods to information retrieval, let us look
back over a bit of history. I think that it can be said that
the introduction of machine methods has followed the
realization of a need, backed by pressure and means to
do something about the need. Thus, although quite
powerful mechanical methods could have been de-

* This work has been supported in part by the U. S. Air Force
Office of Sci. Res. through Contract No. AF 49(638)-376. All opinions
are those of the author.
† Zator Co., Cambridge, Mass.
1 C. N. Mooers, "The Theory of Digital Handling of Non-Numerical Information and its Implications to Machine Economics," Zator Co., Cambridge, Mass., Tech. Bull. No. 48, 1950; paper presented at the March, 1950 meeting of the Association for Computing Machinery at Rutgers University, New Brunswick, N. J.
C. N. Mooers, "Information retrieval viewed as temporal signalling," Proc. Internatl. Congr. of Mathematicians, Harvard University,
Cambridge, Mass., vol. 1, p. 572; August 30-September 6, 1950.

veloped by the technology of the Hellenistic Era for the
Library of Alexandria, other methods of retrieval, presumably based upon human memory, and the making of
lists, were apparently considered quite satisfactory. The
simple, though powerful, mechanical technique that
could have been used at Alexandria is the method of perforated stencils invented by Taylor in 1915, which has
sometimes more recently been called "peek-a-boo."2
The British inventor Soper in 1920 patented a device
which was an improvement upon Taylor's perforated
stencils, and Soper described the use of his mechanism
for information retrieval employing some truly advanced conceptions.3
Much more attention, however, has been given to the
development of devices for scanning and selecting upon
film strips. This work was apparently spurred by the
perfection of motion picture films and cameras. Goldberg in 1931 patented one of the earliest film-scanning
and photographic-copying devices.4 Independently,
Davis and Draeger during 1935, in the early days of the
American Documentation Institute, in connection with
their pioneering work in microfilm documentation, investigated the feasibility of a microfilm scanner using a
a decimal coding.5 Apparently stimulated by reports of
this work, V. Bush and his students at M.I.T. in 1938-1939 built perhaps the first prototype machine along
these lines, a microfilm scanner with each frame of text
delineated by a single decimal code for the subject, and
a photoflash copying method. However, they were unable to interest any commercial or governmental organization in the device, and wartime distractions intervened soon thereafter, so the project was dropped. Two
more "rapid selectors" based upon these same general
principles have been built,6,7 but for various reasons
neither of them has operated in a fashion that is consid-
2 H. Taylor, "Selective Device," U. S. Patent No. 1,165,465;
December 28, 1915 (filed September 14, 1915).
3 H. E. Soper, "Means for Compiling Tabular and Statistical
Data," U. S. Patent No. 1,351,692; August 31, 1920 (filed July 23,
1918).
4 E. Goldberg, "Statistical Machine," U. S. Patent No. 1,838,389;
December 29, 1931 (filed April 5, 1928).
5 R. H. Draeger, "A Proposed Photoelectric Selecting Mechanism
for the Sorting of Bibliographic Abstract Entries from 35 mm
Film," Documentation Inst. of Science Service, Washington, D. C.
(now American Documentation Inst.), Document No. 62; July 27,
1935.
6 R. Shaw, "Rapid selector," J. Documentation, vol. 5, pp. 164-171; December, 1949.
7 Anonymous, "Current Research and Development in Scientific
Documentation," National Science Foundation, Washington, D. c.,
Rep. No.3, p. 27; 1958.


ered generally acceptable, and neither is currently in
actual use. At the present time, much attention is focused upon the Eastman Minicard machine, a cross between a rapid selector and a Hollerith punched-card machine.8
Card-sorting devices, such as those based upon the
Hollerith card (the card used by IBM), as well as those
based on other cards such as Perkins' marginally
punched card, were recognized at an early date to be far
too slow to cope with problems such as those of Chemical Abstracts. Within the past few years, there have been
a number of instances of the use of electronic computing
machines to perform information retrieval. As computing machines are presently designed, they are not
matched to the job of information retrieval-they can
do it, though not efficiently-and the situation of using
a computing machine for this job is like using a bulldozer
to crack peanuts. Oftentimes, if the information collection is small enough to allow the problem to fit upon the
computer, there are easier methods to perform retrieval.
If the collection is large, it does not have to be very
large to tie up all the computer's memory capacity. It
is clear that special computer-like devices will be called
for if we are to perform efficient large-scale information
retrieval.
Although we have been trying to build high-speed selecting machines for information retrieval over the past
twenty years, since the date of Bush's machine, at the
present time I do not think that it can honestly be said
that we have done too well. We do not really have a machine which is an altogether happy answer to the problems of search and selection on collections ranging in
size upwards from fifty or one hundred thousand items.
The problem becomes even more unmanageable at the
million point, since this size of collection requires reasonably high-speed processing and decision on a scanned
record of something like 10⁹ bits.
However, the hardware will be built-and is being
built. But what about the classification terminology,
the subject headings, the descriptors, and the like? One
after another, various machine projects have foundered
on this problem, especially those projects that have
copied library classification decimal systems or made
use in a detailed way of their indexing techniques. We
should appreciate that new mechanisms deserve new
methods, and that there is a consensus of opinion (although it is not unanimous) that the method of putting
together independent idea-expressing terms and selecting upon their correlative occurrence constitutes the desired point of departure from the historic methods of the
library.
A highly developed form of this point of view is the
method of "descriptors," which was introduced and de-

8 A. W. Tyler, W. L. Myers, and J. W. Kuipers, "The application
of Kodak Minicard system to problems of documentation," Amer.
Documentation, vol. 6, pp. 18-30; January, 1955.

veloped in theory in 1948-1950 in a number of papers in
conjunction with a mechanical card selector.9 The descriptor method, which makes a great point of employing precisely defined terms composing a limited vocabulary, is a refinement of a number of earlier practices.
The method was implicit in the work of Soper, it was
toyed with and dropped by the librarian Bliss, and it
was used in one fashion or another by a number of scientists and chemists with Perkins cards in the 1940's, e.g.,
by Bailey, Casey, and Cox.10 People seem to confuse
descriptors with Uniterms. The latter might be described as a crude form of a descriptor system, originally
making use of words lifted from titles and texts. The
Uniterm approach, since it was introduced in 1951,
seems however to be migrating both in concept and usage towards the descriptor methods, as is clear from
many reports coming from projects where they claim to
use Uniterms.
The problem of classification terminology or language
symbols for machine retrieval is well toward a solution,
even for complex and structured kinds of information.
An example is the work on the coding of chemical structures for machine retrieval.11,12 However, it should be
noted that considerable work on retrieval of structured
information, especially for chemical compounds, has
sometimes resulted in symbolism that is not completely
suitable for machine use, as for example some of the
methods considered by the International Union of Pure
and Applied Chemistry.
THE PRESENT STATE OF AFFAIRS

Although we may soon have suitable machines for
large-scale information retrieval and although the situation with respect to the language symbols of retrieval is
in a reasonably satisfactory state (that is, ahead of the
machines) we are not yet finished with our problems.
Presuming that we have a machine completely capable of dealing with a collection of one million-or even
a hundred million-items, who will read these items or
documents and assign the descriptors? Experience has
shown that this is a difficult and time-consuming job.
For example, in my experience in reading and coding
patents, it takes me about fifteen minutes of reading, on
the average, merely to figure out what the inventor is
driving at. The Patent Office has some three million of
such patents.
This is exactly the kind of burdensome job that
should be turned over to the machine. In fact this problem is under active consideration and study in a number
9 C. N. Mooers, "Zatocoding and developments in information
retrieval," ASLIB Proc., vol. 8, pp. 3-22; February 1956. This paper
summarizes these developments.
10 C. F. Bailey, R. S. Casey, and G. J. Cox, "Punched-cards techniques and applications," J. Chem. Ed., vol. 23, pp. 495-499; 1946.
11 C. N. Mooers, "Ciphering Chemical Formulas-The Zatopleg
System," Zator Co., Cambridge, Mass., Tech. Bull. No. 59; 1951.
12 L. C. Ray and R. A. Kirsch, "Finding chemical records by
digital computers," Science, vol. 126, pp. 814-819; October 25, 1957.

of places. It is not an easy task to give to a machine. It
contains a great many aspects that would seem to require the exercise of real "intelligence." Fortunately,
we already have one remarkable accomplishment which
shows that this seemingly intellectual job is not completely incompatible with mechanization. I speak of the
work by Luhn on his "auto-abstractor."13 By the method
of Luhn, the computer takes in the text of an article,
statistically picks out the unusual words, and then
chooses sentences containing these words to make up
the auto-abstract. If this process were terminated at the
point of picking out the words, we would have Uniterms. If the words picked out in this fashion could be
replaced by standardized words having approximately
the same meaning, that is, if the synonyms could be
eliminated, then we would have descriptors. It should
be noted that this kind of treatment of synonyms, which
has been going on in retrieval for some years, has lately
been given the fashionable name of "the thesaurus
method." In the interests of precision in terminology, I
should like to point out that there are significant differences in Roget's concept of a thesaurus and the set of
equivalence classes of terminology that are required
for retrieval. Indeed, this is precisely why I introduced
the new terminology "descriptor" some years ago, that
is, to give a verbal handle for a group of new conceptual
methods with language symbols.
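Luhn's procedure can be caricatured in a few lines; a sketch of the idea only, with an invented stop list and threshold rather than Luhn's actual measures:

    from collections import Counter

    STOP = {"the", "of", "a", "and", "is", "in", "to", "it"}  # illustrative

    def auto_abstract(text, n_sentences=1):
        # Statistically pick out the significant words, then choose
        # the sentences richest in those words for the abstract.
        sentences = [s.strip() for s in text.split(".") if s.strip()]
        words = [w for w in text.lower().split() if w not in STOP]
        significant = {w for w, _ in Counter(words).most_common(5)}
        return sorted(sentences,
                      key=lambda s: sum(w in significant
                                        for w in s.lower().split()),
                      reverse=True)[:n_sentences]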
Such a take-off on Luhn's method would not be the
final answer, because as Luhn has set it up, the machine
is operating in an essentially brainless fashion. To do
better than merely picking up words on a statistical
basis, we would have to build into the method the capability of handling the equivalence classes of words and
phrases. This gets us into language translation. After
the statistical approach has segregated words of high
import from the text, we need to translate these words
into the standardized descriptor terminology for further
retrieval. However, even building up the equivalence
classes of the terminology is a burdensome job, and this
too should be turned over to the machine. Not only
should the machine build up these equivalence classes,
but it should be made to refine its performance with respect to using these terms and getting the descriptors,
and it should even be made to learn how to improve its
performance.
Fano and others have suggested the use of statistics
on the way people come in and use the library collection
in order to provide feedback to help a machine improve
its performance.14 While the suggestion is in the right
direction, I think that this kind of feedback would be a
rather erratic source of information on equivalence

13 H. P. Luhn, "The automatic creation of literature abstracts,"
IBM J. Res. Dev., vol. 2, p. 159; April, 1958.
14 R. M. Fano, "Information theory and the retrieval of recorded
information," in "Documentation in Action," J. H. Shera, A. Kent,
and J. W. Perry, eds., Reinhold Publishing C9rp., New York, N. Y.,
ch. 14-C, p. 241; 1956.


classes, because people might well borrow books by Jack
London and Albert Einstein at the same time. While this
difficulty can be overcome, there is a more severe problem. Any computation of the number of people entering
a library and the books borrowed per day, compared
with the size of the collection, shows, I think, that the
rate of accumulation of such feedback information
would be all too slow for the library machine to catch up
to and get ahead of an expanding technology.
In this respect, it is my speculation that a more powerful source of educational material for a machine is already available, and it should be tapped. Despite the admitted limitations of such material, the subject entries,
the decimal classification entries, and the other content
typed on catalog cards contain a great deal of ready information that can be used in teaching a machine how
to assign descriptors to documents. Other collections,
besides those in the libraries, also often provide a ready
source of classificatory information that should be
tapped. For instance, in the Patent Office, in each case
record of each application for patent, there is a great
amount of specific reference to other related patents,
and this information, along with the assigned class numbers, is readily available for machine digestion without
further high-level human intellectual effort.
In order to do these things, we shall need a machine
with some rudimentary kind of "intelligence;" or more
accurately, we shall need an "inductive inference machine" in the sense used by Solomonoff.15 An inductive
inference machine is one that can be shown a series of
correctly worked out examples of problems, that can
learn from these problems, and that can then go ahead
on its own (probably with some supervision and corrective intervention) to solve other problems in the same
class. While an inductive inference machine can be quite
capable at a given class of jobs, it need not have "brains"
or "intelligence" in the general sense.
As I mentioned before, putting the descriptors on the
documents-that is, delineating the information in the
text by symbols for retrieval-is a form of crude language translation. It is crude because the machine does
not need to worry about grammar in the target language, since the grammar of descriptors is nonexistent,
or at most, is rudimentary. As I see it, machine translations of this kind for the purposes of information retrieval will be an area of early pay-off for work in inductive inference machines.
If inductive inference machines can be built at all,
then it certainly should be possible for us to feed them
with subject headings and classification numbers on the
one hand, and with the titles of book chapters and section headings on the other hand, in order to teach the
machines how to do at least some rudimentary kind of
job of library subject cataloging. With librarians at
15 R. J. Solomonoff, "An inductive inference machine," 1957 IRE
NATIONAL CONVENTION RECORD, pt. 2, pp. 56-62.


hand to provide suitable intervention or feedback, the
machine's performance should improve, and by further
work, even the categories or descriptors which are used
can be improved by the machine. Thus I feel that we
can expect in the next few years to see some interesting
results along this line.
GOALS FOR THE NEAR FUTURE

There are a number of other applications of machines
for purposes of information retrieval of a kind that have
not yet been seriously undertaken, and others that have
not yet been considered. In my discussion I shall bypass
treating such useful and imminent tasks as the use of
machines to store, transfer, and emit texts, so that at the
time that you need to refer to a paper, even in an obscure journal, you can have a copy in hand within, say,
twenty-four hours. Neither shall I consider the application of machines to the rationalization and automation
of library ordering, receiving, listing, warehousing, and
providing of documents. Neither shall I consider the
application of machines to the integration of national
and international library systems so that at any first-rate library, you will have at your command the catalogs of the major collections of the world. These are all
coming-but it should be noted with respect to them
that the problems of human cooperation ranging from
person-to-person to nation-to-nation cooperation are
more serious than some of the machine and technical
problems involved.
The first of the rather unusual applications of machines to information retrieval that I want to talk about
can be introduced as follows. When a customer comes to
an information retrieval system, he comes in a state of
ignorance. After all, he needs information. Thus, his
problem of knowing how to specify pieces of information
that are unknown to him is a severe one. For one thing,
the vocabulary of the retrieval system, and the usages
of the terms in the system, may be slightly different
from the language that he is used to. For another thing,
upon seeing some of the information emitted according
to his own retrieval prescription, he may decide that an
entirely different prescription should be used. In short,
the customer definitely needs help in using a machine
information retrieval system, and this help should be
provided by the machine.
An indication of what kind of system needs to be provided, and how it can be done, is given by certain of the
simple sorted-card retrieval systems. Some of the sorted-card systems do very well in this respect, others do not.
It has been common practice in Zatocoding systems,
which use a simple schedule of a few hundred descriptors, to employ a descriptor dictionary system having
many cross-references from words in the ordinary technical usage to the appropriate descriptors.9 Thus the
customer can find his way into the system starting out
with his own terminology. After the customer is referred
to a descriptor, he finds there a carefully drafted scope
note explaining the range of meaning attached to the

particular descriptor. In another tabulation in the
descriptor dictionary, the descriptors themselves are
grouped or categorized into fifteen or twenty groups,
and each group is headed by a question pertinent to the
descriptors under it. Thus, under a question "Are geometrical shapes involved?" would be found descriptors
such as "round," "square," "spherical," etc.
These simple card systems provide another source of
assistance to the customer because they are able to emit
cards within a minute or less from the time the retrieval
search is begun. Thus if the search is headed into the
wrong direction, the customer, upon looking at the
cards or documents, will immediately detect this fact,
and can reframe his request to the machine before any
further searching is done. It is deplorable, but true, that
many contemporary proposals for machine systems may
be so slow in providing feedback that the feedback time
is measured in hours or days, with the consequent waste
of machine sorting time and accumulation of human
frustration.
The problems of customer assistance are going to be
severe with the large machine retrieval systems of the
future, and these problems must be faced. The descriptor vocabularies are going to be large. Another possibility is that some of the machines will operate internally on vocabularies or machine code systems that are
quite unacceptable for external communication to the
human operators. There has already been some successful experimentation with symbol systems of this kind in
coding chemicals.11,12 Such symbol systems work beautifully inside the machines, but people should not be
forced to use them. For these reasons, in order to translate the customer's requests into forms suitable for the
machine, machine assistance is going to be desirable.
Holt and Turanski see this problem of processing the
customer's request at the input to the machine as being
very similar to the presently developing customer use of
automatic programming for mathematical problems.
The more advanced systems of automatic programming
provide for a succession of stages of translation, with the
symbolism at each stage moving further and further
from the human word input to the abstract symbols and
the detailed machine orders required for internal operation of the machine. In mathematical programming, the
machine programs itself, and then carries out the program. In retrieval programming, the machine will form
the proper machine prescription, and carry out the
search. To my mind, there is an important difference.
In retrieval, the machine should check back with the
customer as it builds up the prescription in order to
make sure that the search will be headed in the right
direction; then it should search a sample of the collection and check again to make sure that the output being
found is appropriate to the customer's needs. If we are
to have larger and more complex machine retrieval systems, we must come to expect a great deal of back-and-forth man-machine communication during the formulation of a search, and as it is going on.

Quite another approach to handling the customer's
input problem is advanced by Luhn, who suggests that
the customer write a short essay detailing what he
thinks is descriptive of the information he wants.16 The
essay text would then presumably be handled in the
fashion of the auto-abstract method (though Luhn is a
little sketchy here on the details of his proposal), and
the words selected from the short essay would be compared with words similarly selected from the document
texts. When there is a sufficient degree of similarity,
selection occurs. Although the Luhn proposal does put
the load of translation of the customer's request upon
the machine, it does not provide for customer guidance
into the resources of the machine's selective language
possibilities, or into the resources of the collection. Help
in both of these directions would surely be of great assistance to a customer in extracting the maximum value
from information in storage.
Another possibility is to use an inductive inference
machine, because it is open to learning a great variety
of tasks. It would be able to provide a generalized approach to the problem of customer assistance. But, however customer assistance is provided, I think it is safe
to predict that we must build information retrieval systems with the planned capability to communicate back
and forth with the customer so that he can better guide
the machine in retrieving what will be useful to him.
RETRIEVAL VIEWED AS A PROCESS OF EDUCATION

If the machine aids the customer by guiding him in
the use of the retrieval system, the machine is necessarily educating the customer. Let us take this viewpoint, and look upon a machine retrieval system as an
educational tool. This viewpoint provides a number of
new tangents to consider. We have seen how the
customer can use some coaching by the machine in order
to tap efficiently the information resources during the
search process. But, as anyone knows who has had a
large batch of documents sent his way, maybe the
customer can also use some machine help in reading the
mass of documents emitted from a retrieval system!
It is my prediction that some of the machine information retrieval systems of the future will go considerably
beyond the tasks of mere retrieval or citing or providing
document texts. I believe that some of them will also
help the customer assimilate or read the output provided
by the machine. This prediction is not at all fanciful,
even though it is yet quite a way into the future. How
far into the future it is we can only guess, or estimate by
recalling that the Minicard follows a full twenty years
after the first suggestions for a film selector, or that the
widespread use of descriptors came about forty years
after Taylor actually used something very much like
them in information selection.

16 H. P. Luhn, "A statistical approach to mechanized encoding
and searching of literary information," IBM J. Res. Dev., vol. 1,
p. 309; October, 1957.


Machines can be very effective in teaching human beings. This is shown by the work of Skinner at Harvard
where, in recent experiments, written modern languages
and college mathematics have been set up on machine
lessons.17 Essential to the process is rapid feedback, or
communication between the machine and the human
learner, so that the human knows immediately that he
is on the right track, and so the machine can apply corrective action as soon as errors appear. Skinner's machines at present employ written materials prepared in
advance by human beings, the machine performing on
the basis of a fixed internal sequence of morsels of information of graded difficulty. However, machines need
not be restricted to doing their teaching according to a
preset sequence of lesson elements of this kind. In the
same way that we are currently looking for techniques
to allow machines to assign descriptors from texts, so
can we contemplate the development of teaching procedures and machines whereby the machines by themselves will be able to pick out a graded sequence of information morsels from the documentary record retrieved and will then present them to the human
customer.
Taking this view of a machine information center acting both as a retrieval device operating upon a store of
information and as a teaching device for the human
customer, we can see that the process of input request
formulation and the process of giving out information
will merge into a sustained communication back and
forth between the customer and the machine. Of course,
once the customer is on the track of documents containing information particularly pertinent to his interests,
he will very likely desire to see the original text. This
can be done, and a customer will have a choice of how
much or how little of any particular actual document he
wishes to read directly.
The range of future possibilities is even greater when
these ideas are combined with the possibilities inherent
in mechanical language translation devices. Of course,
we should expect that future information centers will be
able to provide translation from one ethnic language to
another of the texts that the retrieval system provides.
Let us look further. As is well known, one of the problems in machine language translation is to provide
sentences in the target language in the required form, that is, to provide a smoothly running, colloquial translation. For example, in going from German to English,
we must rescue the verbs from the end of the German
sentence and put them up where they belong in the middle of the English sentence. Any machine capable of
doing a high-grade language translation must be able to
arrange and rearrange idea units and word units to make
acceptable text out of them. This being the case, it is
reasonable to predict that the information morsels that a teaching machine would put out could be given as input to a machine technique patterned on the output half of a language translator. The resulting textual output would be in the nature of a written article having at least some degree of acceptable style.

17 B. F. Skinner, "Teaching machines," Science, vol. 128, pp. 969-977; October 24, 1958.
This means that you could go to an information center,
describe a certain kind of information needed, have
the machine assist you in making your request more
definite, and then order it: "Provide me with an 800-word article, not requiring more than an undergraduate
chemistry background, on the deterioration of polyisomers by sunlight." After a short, but decent, interval,
the machine would come forth with such an essay.
There is an important corollary to this notion of the
machine central being able to provide essay articles
upon request. We are all aware of the considerable duplication of information found in the technical literature.
The same, or very similar, piece of information is repeated in one article after another. Many articles are
summaries of other articles, or are derivative upon
other articles, and provide little or nothing that is new.
If the output from an information system does not need
to be in the form of a graphic image of the original text,
or a type-out of the text itself, then it is possible to consider the storage of new information only in the machine. A machine could store facts alone, and only new
facts; it would not store text. By eliminating the dependence upon the original text, and avoiding the duplication of the same information written over and over,
it might be possible to secure considerable increase in
the machine's storage capabilities.
Yet there are problems of a kind that will occur to
any thoughtful individual. I do not think we want to
throw away the original record which we have already, the printed books and articles in our libraries. Neither
am I sure that we want to give up entirely our system of
printed publication. But, putting these problems aside,
let us do some more speculating. It might be possible for
the scientist in his laboratory to feed his raw (or nearly
raw) results directly into a machine for computation,
checking for acceptability, correlation with earlier facts,
and ultimate storage. Thus, instead of a scientific
archive existing almost solely on paper, as we now have,
it is possible that a part of the archive in the future will
be in machine form. The only way that such a machine
archive would be tapped would be by having the machine write a summary or article upon specific request.
When we are thinking about information machines of
this kind, I wish to stress that we should not think in
terms of some big single machine central. This is important. It would be foolish and expensive to build up

a single central "bottleneck." If such machine central
information systems as I describe will be at all possible,
they will be important enough to be set up at a large
number of installations, quite in the same way as we
now are making use of a large number of electronic computer installations. There will be both large and small
information machines. Some of these machines will be in
intercommunication with each other, while others will
operate in isolation. At various times, the machine
memory from one or several of the machines can be
played out onto tape, and the tape record, containing a
vast amount of information, can be incorporated into
the memory systems of many other information centrals.
If machines can store and correlate laboratory facts,
and can communicate with laboratory workers, we shall
have to expect that the machines will find gaps in the
information as a part of the correlation, and they will
point out to the laboratory workers the need for further
experimentation in certain areas. How far we can expect this kind of active feedback to extend is hard to
guess. The present work with pattern recognition will
ultimately lead to a kind of a machine eye, and we already have machine hands for the handling of radioactive materials. An information central machine system,
aided by such receptors and effectors, would become, in
effect, a laboratory scientist.
At this point I would prefer to terminate my speculations on the excuse that we are now perhaps more than
twenty years into the future, the limit that I set for myself in this paper.
In summary, I think that it can be said that mechanical information retrieval has started rather slowly; it
has taken from about 1915 or 1920 until now to become
as popular as it is. At the moment, except for certain
highly integrated small retrieval systems, we are yet
only dabbling in the subject. We do not now honestly
have any appropriate large-scale machine for collections
involving millions of items. We are only beginning to
get a widespread recognition of the capabilities of suitable retrieval language systems, and there still remains
the problem of getting machines with internal digital
operations that are as suitable for retrieval and information work as the operations of addition and multiplication are suitable for mathematical work.
In any event, it is useful for us to know what some of
our future targets are likely to be. With such knowledge,
we will be in a better position to steer our activities in
the present. This is the excuse for the predictions, which I take very seriously, that are contained in this
paper.


Simulation of an Information Channel on
the IBM 704 Computer*
E. G. NEWMAN† AND L. O. NIPPE†

INTRODUCTION

EXPRESSIONS for the probabilities of multiple
error patterns in symbols consisting of N bits are
needed to evaluate the effectiveness of applying
error correcting codes to these information channels.
For example, if a Hamming1 single error correcting code
is applied to the information, the probabilities of all
errors greater than one should be established. This will
indicate the number of times the single error correcting
code has failed.
In addition, if the effects of channel asymmetry and
regional error dependence on the probabilities of multiple errors are known, this information can be used to
develop channel characteristics leading to the minimum
number of errors.
Expressions for error probabilities have been obtained
for a binary symmetric channel2 and for an asymmetric
channel [see (2) or (6) in the Appendix] when errors are
assumed to be independent. Attempts to introduce error
dependence were made by using relationships between
stochastic processes and familiar network concepts.3,4
All results, however, for even relatively simple cases
became too unwieldy to be useful. Therefore, a simulation program which gives approximate values for the
desired error probabilities was developed.
Many practical information channels exist which exhibit both error dependence and asymmetry. An example of such a channel is a magnetic tape channel. A symbol of N bits can be recorded on N parallel tracks (Fig.
1). The error dependence is primarily the result of defects extending over certain regions of the tape which
influence the correct transmission of successive bits in a
particular track or in a number of adjacent tracks.
A single defect may produce errors on a single track
only (Defect I, Fig. 1), or larger defects may cause errors
in a number of adjacent tracks (Defect II, Fig. 1). These
large defects produce multiple errors for all consecutive
N-bit symbols which fall into this defective region.
Multiple errors can also be produced by the simultaneous occurrence of different defects, each affecting the
information in a particular bit of an N-bit symbol
(Defects I and III, Fig. 1).

* This work has been supported by the U. S. Dept. of Defense.
† IBM Product Dev. Lab., Poughkeepsie, N. Y.

1 R. W. Hamming, "Error detecting and error correcting codes," Bell Sys. Tech. J., vol. 29, pp. 147-160; April, 1950.
2 L. Brillouin, "Science and Information Theory," Academic Press, Inc., New York, N. Y., ch. 6, pp. 62-70; 1956.
3 W. H. Huggins, "Signal flow graphs and random signals," PROC. IRE, vol. 45, pp. 74-86; January, 1957.
4 S. J. Mason, "Feedback theory-some properties of signal flow graphs," PROC. IRE, vol. 41, pp. 1144-1156; September, 1953.

Fig. 1-Effects of regional defects on the generation of multiple errors in an N-bit symbol. (Tracks or channels 1 through N are shown against time or location; defect regions covering one or more tracks divide the tape into numbered sections.)

Depending on the type of recording used, certain
types of defects may cause errors in the transmission of
ones without affecting the transmission of zeros. Other
types of defects or noise bursts affect the transmission
of zeros only. Some defects can produce errors in both
ones and zeros. These phenomena lead to channel asymmetry. The degree of this asymmetry is governed by the
distribution of the various defect types.
For example, for a modified nonreturn to zero type
of recording (Fig. 2), a one is recorded by changing the
magnetization of the tape from one saturation level to
the other. A zero leaves the magnetic tape at a previously established saturation level. As the read head
senses only a change in the flux linking it, only ones
produce signals.
Tape defects in the form of high spots in the surface
of the magnetic coating or loose particles of oxide material produce a head-to-tape separation, which results
in a loss of signal. This affects the transmission of ones
only. The characteristic loss in signal amplitude is
shown in Fig. 2. This phenomenon is not unlike the
amplitude "fading" of radio signals and may, at high
recording densities, produce a number of consecutive
errors.
Other tape defects, such as pin holes in the magnetic
coating, can produce errors in the transmission of zeros
(Fig. 3). At the boundaries of the hole in the magnetic
coating, a flux change will be sensed by the read head
as it moves from a region free of magnetic particles to
one where magnetized magnetic particles exist. This
change of flux could be sensed as a one in place of a zero.
Fig. 2-Effect of tape defect causing head-to-tape separation on signal amplitude (modified nonreturn to zero recording).

Fig. 3-Effect of defect in magnetic oxide (pin hole) on signal amplitude (modified nonreturn to zero recording).

In addition, amplifier noise can occasionally exceed a certain clipping level. If this noise peak occurs at the

time when a zero should be read, it may be sensed incorrectly as a one.
If the hole is large, ones recorded in this region will,
of course, not be sensed. A hole in the magnetic oxide or
a nonmagnetic particle in the magnetic coating can,
therefore, produce errors in both ones and zeros.
These are some typical examples of the error-producing mechanisms which cause the complex characteristics of the magnetic tape channel.

CHANNEL SIMULATION

Because of the difficulty of obtaining analytic expressions for the probabilities of multiple error patterns in terms of the complex characteristics of a tape channel, simulation means had to be developed.

The simulation procedure can best be described by comparing it to the manufacturing of tape. The defects in the tape, produced during the manufacturing process, can be considered to be distributed in a random manner over the length of the tape. The program simulates this process by assigning random locations to each of the defect regions, as they are read from a list of inputs. This list gives the number and class of all the various defects which must be placed on a particular length of tape. This defect listing can usually be prepared on the basis of error statistics obtained from tests involving a single track or at the most a few adjacent tracks. A theoretical description of the channel characteristics can also be used. The program is completely flexible; that is, tape of any specifications can be "manufactured."

Random information could now be "recorded" over the whole length of the hypothetical tape and errors introduced in accordance with the defect type. The program, however, concerns itself only with errors. Thus, information is placed only into the defective regions and the resulting errors are classified and counted. This results in a considerable saving of computer time.

THE 704 PROGRAM

The 704 Information Channel Simulator Program (Fig. 4) might best be considered by breaking the program into three major phases as indicated below:

Phase I: LOCATION GENERATOR (random generator for defect location)
Phase II: SORT (sort by defect location)
Phase III: ERROR ANALYSIS (classification and count of errors)

Phase I might be called the "tape manufacturing" phase because it is responsible for the actual placement of defects on the hypothetical tape. All defects are assumed to be placed on the tape simultaneously and previous to the analysis phase. Since the defect locations are generated randomly, it is next necessary to sort (Phase II) the defects by location so that they may later be processed in sequence. Phase III performs the random generation of information to be written on the tape, the analysis of the influences of each defect on this information, and the classification and counting of resulting errors.

Fig. 4-General flow diagram.

INPUT DATA

Before discussing any of the three phases of the program in detail, it is necessary to have an accurate picture of the input data, which are prepared on the basis of actual test data or from a theoretical description of the channel characteristics. To formulate the input data for the program the following statistical information is needed:

1) The a priori probabilities that defects of a particular length will occur on a track. These defects can produce errors in ones or zeros or both on this track only.
2) The a priori probabilities that "centers" of large defects of a particular size will occur on a particular track. These defects can produce errors in ones or zeros or both in a region which encompasses a number of adjacent tracks.

For program use, the above information is converted into a set of values K, L, T, and C, where

K = the number of defects per track of a certain type to be placed on the hypothetical tape,
L = the length of the defect in bits,
T = the track number of the defect center, and
C = the defect class, which indicates whether the defect involves errors in ones, zeros, or both, and whether one, two, or three adjacent tracks are involved. For the defects which produce errors in more than one track, the same defect length per track was assumed.

The above set of values is defined as an input data set. A different set may be used for each track.

Phase I-Location Generator

Each input data set read per track represents K defect areas. In order to assign a random longitudinal location X to each, a random number modulo M is generated. The pseudo random number generator program used here is PERAND. It produces a 35-bit random number by multiplying two odd, 35-bit numbers, selected from a group of ten such numbers, to produce a 70-bit product. The center 35 bits of this product are used as the random result and may be divided by a previously specified number, M, to make the result modulo M. This generator has been thoroughly tested. The probability that any bit of the resulting random number is a one lies between 0.45 and 0.55. This was considered entirely satisfactory for this application.

The defect location just generated is stored along with the appropriate length, track, and class. Any given defect may involve one or more adjacent tracks as specified by its class. Arbitrarily, the program was written to handle defects extending over no more than three adjacent tracks. The program converts these adjacent track defects to individual track defects of equal length and assigns the same location to each.
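The mid-product scheme just described is easy to render in outline. The following Python fragment is only an illustration of the method, not the PERAND routine itself; the ten odd 35-bit starting numbers and the pairing rule are stand-ins, since the actual constants are not given here.

    # Illustrative mid-product generator of the kind described above
    # (not the actual PERAND routine; the seeds are arbitrary stand-ins).
    import itertools

    MASK35 = (1 << 35) - 1
    SEEDS = [((104729 * 104729 * k) | 1) & MASK35 for k in range(3, 13)]  # ten odd 35-bit numbers
    PAIRS = itertools.cycle([(i, j) for i in range(10) for j in range(i + 1, 10)])

    def random_mod(m):
        """Return a pseudo random integer modulo m."""
        i, j = next(PAIRS)
        product = SEEDS[i] * SEEDS[j]        # up to 70 bits
        center = (product >> 17) & MASK35   # the center 35 bits of the product
        return center % m

Successive calls then supply the defect locations X modulo the tape length M.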

Phase II-Sort

After a random location has been generated for each defect, these defects are sorted by location fields. There are two general sort programs available for the 704 through the SHARE organization. Either may be used, or the sorting may be done on another machine.

Phase III-Error Analysis

It can be seen from the flow diagram of Fig. 4 that there are three basic operations other than decision making that must be performed by this phase. They are

1) Comparison of defect locations,
2) Random generation of information in the form of N-bit symbols, and
3) Classification and counting of errors within the affected N-bit symbols.

As each defect is read from the sorted list, its location must be compared with the location of other defects to see if any of the defects overlap. When defects on different tracks overlap, multiple errors may result. When defects on the same track overlap, they are joined into one defect.

When the number of overlapping defects changes, a new "section" is formed (Fig. 1). The program keeps track of the number of overlaps involved at any instant.
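The joining rule for same-track overlaps can be sketched briefly. The record layout below (location, length, track, class) is a convention assumed for illustration only; the section bookkeeping of Fig. 1 is indicated by a comment.

    # Sketch: join overlapping defects on the same track.  Input records
    # are [location, length, track, class], already sorted by location.
    def join_same_track(defects):
        last_on_track = {}          # track number -> most recent record
        merged = []
        for loc, length, track, cls in defects:
            prev = last_on_track.get(track)
            if prev is not None and loc <= prev[0] + prev[1]:
                # same-track overlap: extend the earlier defect
                prev[1] = max(prev[0] + prev[1], loc + length) - prev[0]
            else:
                record = [loc, length, track, cls]
                merged.append(record)
                last_on_track[track] = record
        # Overlaps between *different* tracks are kept distinct; whenever
        # their number changes, a new "section" is formed (Fig. 1).
        return merged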


The number of bits involved in these overlapping
defects represents the maximum number of errors which
can occur in the symbols in the overlap. For example,
consider Section 2 in Fig. 1. This particular section is
two bits long and involves Defects I and II. Thus, no
more than four bits may be in error in this particular
overlap, two bits in each of two N-bit symbols. The
program provides three counters for double, triple, and
higher order overlaps.
Next, the program must generate random information for each N-bit symbol in the overlap. Considering each bit individually, it will be in error if any one of the
following three conditions exists:
1) The bit is a one and the defect produces errors in
ones,
2) The bit is a zero and the defect produces errors in
zeros,
3) The defect produces errors in ones and zeros.
A test for errors is now made on the basis of these three
conditions. The previously described PERAND program is used to generate the random information.
Each bit in the affected N-bit symbol is tested, and,
if an error has occurred, the appropriate error counter
is updated. The program provides counters for double
and triple errors.
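The three conditions translate directly into a short test. A minimal sketch, with the class encoding assumed here for illustration:

    # Per-bit error test for the three conditions above.  The class
    # constants are illustrative; the actual encoding of C is not shown.
    import random

    ONES, ZEROS, BOTH = 1, 2, 3

    def bit_in_error(bit, defect_class):
        return (defect_class == BOTH
                or (bit == 1 and defect_class == ONES)
                or (bit == 0 and defect_class == ZEROS))

    def errors_in_symbol(class_per_track):
        """Count errors in one N-bit symbol; class_per_track[k] is the
        defect class covering track k, or None for a clean track."""
        count = 0
        for cls in class_per_track:
            if cls is None:
                continue
            bit = random.getrandbits(1)     # random information bit
            if bit_in_error(bit, cls):
                count += 1
        return count

The count returned for each affected symbol is what updates the double, triple, and higher-order error counters.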
MACHINE TIME ON THE IBM 704 COMPUTER

Table I indicates approximate machine times per
phase for typical runs involving 3500 and 10,000 defect
regions, respectively. It shows that the time required to
run Phases I and III is linearly related to the number
of defect regions involved, while the time for Phase II
(Sort) increases more rapidly with the number of defects. This indicates that a larger number of runs involving fewer defects will give shorter over-all machine
times.
TABLE I
704 MACHINE TIME IN MINUTES PER RUN

Number of Defect Regions    Phase I    Phase II    Phase III    Total
3500                           3           4           4          11
10,000                         9          22          12          43

RESULTS OF SIMULATION PROGRAM

The program was tested by using it to simulate a symmetric binary channel with no error dependence. For
this channel, exact error probabilities can be computed
[see (1) in the Appendix]. Results of these computations
were compared with results of the simulation program.
A few of these results are shown in Fig. 5 for N equal
to 24, 12, and 8. This channel had an a priori error probability per track of p = 2 × 10⁻³. The average error for
the double error probability, as compared with the
theoretical results, varied in these examples from 0.76
to 1.77 per cent.

Fig. 5-Error in per cent for double error probability. Symmetric channel (defect length = 1); a priori error probability, p = 2 × 10⁻³. (Results for N = 24, 12, and 8; total number of double errors: 1673 in 3 runs at N = 24, 399 in 3 runs at N = 12, 330 in 6 runs at N = 8.)

As may be expected, the accuracy of the results depends on how many runs have been made. The number
of runs required will be different for each problem and
will depend on how long it takes to accumulate a significant count for the various error patterns of interest.
The theoretical limit to the final accuracy of the results
of the simulation program is set by the inherent inaccuracy of the pseudo random number generator and
on the accuracy with which the mathematical model
describes the actual information channel.
The simulation program has been designed for information channels with rather complex characteristics.
An example of such a channel is a Z channel. In a Z
channel, one of the binary symbols is always transmitted correctly; thus, it is a channel with the greatest
possible degree of asymmetry. For the example selected,
the probabilities of occurrence of defects of various
lengths are shown in Fig. 6. This hypothetical distribution was developed to study the effect of channel asymmetry and error dependence on error probabilities. In
this distribution, the defects contain as many single bits
per track as in the previous example for the symmetric
channel.
For this Z channel, the approximate double error
probabilities for N=24, 12, and 8 are shown in Fig. 7.
The results for this example indicate that good approximations for multiple error probabilities can be obtained by assuming that each long defect region is split up into as many separate regions, one bit long, as there are bits
in the original defect region.
The multiple error probabilities for this Z channel
(p1 = 2 × 10⁻³) can be computed, using (3) or (8) in the
Appendix.


Fig. 6-Hypothetical probability of occurrence of defects of various lengths. (Probability of occurrence versus defect length in bits.)

SUMMARY

The results of the simulation program indicate that
good approximations to the probabilities of multiple-error patterns in symbols consisting of N bits can be
obtained.
The computer time required for the simulation is
quite reasonable and compares favorably with the costs
for the testing of numerous complete versions of a system. The simulation procedure is, therefore, particularly useful during the early stages of development.
The program, because of its flexibility, also lends itself to purely theoretical investigations of error
statistics.

Fig. 7-Double error probability for a Z channel for the error dependence shown in Fig. 6. (For N = 24, 12, and 8: the average double error probability for a number of runs is compared with that for an "equivalent" Z channel without error dependence.)

Fig. 8-The binary asymmetric channel. (Transmitted symbols 0 and 1 are received with the transition probabilities defined below.)

APPENDIX
EXPRESSIONS FOR ERROR PROBABILITIES FOR AN ASYMMETRIC BINARY CHANNEL WITH NO ERROR DEPENDENCE

The binary asymmetric channel can be represented by Fig. 8, where

q0 = the probability of a transmitted zero being received as a zero, q0 = 1 - p0,
q1 = the probability of a transmitted one being received as a one, q1 = 1 - p1,
p0 = the probability of a transmitted zero being received as a one,
p1 = the probability of a transmitted one being received as a zero.

Let there be N bits in a character, of which r are ones and (N - r) are zeros. The probability P_N(Z) of Z errors in a character consisting of N bits can be expressed for a symmetric channel2 where p1 = p0 = p:

$$P_N(Z) = \binom{N}{Z} p^{Z} (1-p)^{N-Z}. \qquad (1)$$

For an asymmetric channel, if all N-bit symbols are assumed to be transmitted with equal probability,

$$P_N(Z) = \frac{1}{2^N} \sum_{r=0}^{N} \binom{N}{r} \sum_{x=0}^{Z} \binom{r}{x} \binom{N-r}{Z-x}\, p_1^{x} q_1^{r-x} p_0^{Z-x} q_0^{(N-r)-(Z-x)}, \qquad (2)$$

and, for a Z channel, p0 = 0 and q0 = 1:

$$P_N(Z) = \frac{1}{2^N} \sum_{r=Z}^{N} \binom{N}{r} \binom{r}{Z}\, p_1^{Z} q_1^{r-Z}. \qquad (3)$$

and, if $p_1 = p_0$,

$$\phi = \frac{N!}{x!\,(N-x)!} \qquad (4)$$

and

$$M(z) = \frac{\phi}{2^N}. \qquad (5)$$

Simplifications can be made in (1)-(3) if systems of inherently high reliability are considered; that is, if $p_1 \ll 1$ and $p_0 \ll 1$. Expanding $(1-p)^k$ in terms of a power series and retaining only the first term, (1)-(3), respectively, become

$$P_N(Z) \approx \binom{N}{Z} p^{Z}, \qquad (6)$$

$$P_N(Z) \approx \frac{1}{2^N} \sum_{r=0}^{N} \binom{N}{r} \sum_{x=0}^{Z} \binom{r}{x} \binom{N-r}{Z-x}\, p_1^{x} p_0^{Z-x}, \qquad (7)$$

$$P_N(Z) \approx \frac{p_1^{Z}}{2^N} \sum_{r=Z}^{N} \binom{N}{r} \binom{r}{Z}. \qquad (8)$$
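Expressions (1) and (3) are convenient to evaluate directly when checking simulation runs; a minimal sketch:

    # Direct evaluation of (1) and (3) above.
    from math import comb

    def p_symmetric(N, Z, p):
        """Eq. (1): probability of exactly Z errors, symmetric channel."""
        return comb(N, Z) * p**Z * (1 - p)**(N - Z)

    def p_z_channel(N, Z, p1):
        """Eq. (3): probability of exactly Z errors in a Z channel
        (p0 = 0), all 2**N symbols transmitted with equal probability."""
        q1 = 1 - p1
        total = sum(comb(N, r) * comb(r, Z) * p1**Z * q1**(r - Z)
                    for r in range(Z, N + 1))
        return total / 2**N

    # Example: double errors for N = 24, p = 2e-3, as in Fig. 5.
    print(p_symmetric(24, 2, 2e-3))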

ACKNOWLEDGMENT

The authors would like to thank the U. S. Department of Defense for permitting the publication of this
paper. Valuable suggestions by C. L. Christiansen,
R. J. Sippel, and P. J. Nelson contributed greatly to
this paper.

A Compiler with an Analog-Oriented Input Language
M. L. STEIN†, J. ROSE‡, AND D. B. PARKER‡

INTRODUCTION

ANALOG computation is, for many problems, more convenient than digital computation but lacks
the precision obtainable from a digital-computer
solution. A compiler has been developed which, for
problems involving differential equations expressible in
the form
$$\dot{y}_i = f_i(y_1, y_2, \cdots, y_n, t), \qquad i = 1, 2, \cdots, n \qquad (1)$$

combines to a considerable extent the desirable attributes of both types of computation. The compiler
achieves this by deducing from a description of an
analog setup diagram the differential equations which
the analog computer solves, and then compiling a program for solving the equations.
Two drawbacks are avoided. The differential equations represented by a diagram are deduced with no
attempt to simulate the analog computer, and a sophisticated integration procedure is used. Therefore, an
accurate and relatively efficient digital-computer program is obtained. Even though the solution is obtained
from the differential equations, the identity of the output of each element on the diagram is not lost, so that
the results may be visualized more easily as the response to the physical system being studied.
The integration procedure used in the final program
is the Gill1 version of the fourth-order Runge-Kutta

† University of Minnesota, Minneapolis, Minn. The work of this author was supported in part by Convair-Astronautics.
‡ Convair (Astronautics) Div., General Dynamics Corp., San Diego, Calif.
1 S. Gill, "A process for the step-by-step integration of differential equations in automatic digital computing machines," Proc. Cambridge Phil. Soc., vol. 47, pp. 96-108; June 3, 1950.


method. Since no starting procedure is required, and
because the discontinuities introduced by nonlinear
analog elements cause no difficulty, this method was
adopted.
The compiler is not the first digital computer program
with an input language related to differential analyzers;
for example, there are the programs DIDAS2 and
DEPI.3 However, the program which this paper describes differs from other similar programs of which the
authors are aware in one or more of the following
respects:
1) The input language is closely related to an extensively used electronic analog computer.
2) The user need not provide his own input and output program. The compiler provides a complete
program ready to run.
3) Since the final program produced is in machine
language, it is efficient in terms of execution time
as compared to an interpretive program.
4) Rather than simulating a differential analyzer, the
compiler deduces from a setup diagram the differential equations represented by the diagram.
In producing a program to solve the differential equations expressed by an analog setup diagram, the compiler uses a technique of increasing popularity.4 This
technique is the use of another processor as an inter-

2 G. R. Slayton, "DIDAS," presented at the Twelfth Natl. Meeting of the Assoc. for Computing Machinery; June, 1958.
3 F. H. Lesh and F. R. Curl, "DEPI, An Interpretative Digital-Computer Routine Simulating Differential-Analyzer Operations," Jet Propulsion Lab., California Inst. Tech., Pasadena, Calif., Memo. No. 20-141; March 22, 1957.
4 Communications of the Assoc. for Computing Machinery, vol. 1, no. 7, p. 5; July, 1958.

mediate step. In this case the intermediate processor is
Fortran,5 an automatic coding system developed by
IBM which accepts statements closely resembling the
ordinary language of mathematics as input. The output
of the compiler is in the input language of Fortran, and
is translated by Fortran into machine language.
Through the use of Fortran, the task of developing the
compiler was greatly simplified.
The method used to deduce the differential equations
represented by a setup diagram through analysis of its
description is developed in a previous paper by two of
the authors.6 Therefore the method will not be developed here. Instead it will be illustrated by example. A
description of the preparation of problems for the compiler and a description of the compiler will be given.
The application of the compiler will be illustrated with
the solution of a simple problem, and in conclusion, experience in its use will be discussed.
THE ANALOG SETUP DIAGRAM DESCRIPTION

The conversion of an analog setup diagram to a
digital-computer program involves several steps. The
diagram is described. The description is processed by
the computer and a Fortran program is produced. The
program is compiled by Fortran into a machine language program ready to be run. All of these steps except
the first are performed by an IBM 704. This first step,
the description of the diagram, will now be discussed.

Analog Elements Recognized
In order to analyze the description of an analog setup
diagram, the compiler must be able to recognize the
more commonly used analog elements. Those elements
available on the Electronics Associates PACE electronic analog computer were chosen as representative.
These elements are listed in Table I. The diagram symbols for the elements are those most commonly used fqr
PACE setup diagrams. The mathematical expression
for the function of each element is chosen so that problems prepared for PACE computers can be converted
to digital programs with little or no change in the setup
diagram. For this reason, the scale factors associated
with multipliers and dividers are included, and resolvers
involve angles measured in volts where one volt equals
two degrees.
Normally, scaling requirements for an analog computer are quite restrictive; but since the operations in
the digital computer program are performed in floating-point form, all numbers, including parameters associated with elements, can vary over a much wider range
in the digital computer. Therefore, diagrams to be
processed by the compiler need not be scaled for an
analog computer.
5 J. W. Backus, et al., "Programmer's Reference Manual for the Fortran Automatic Coding System for the IBM 704"; December, 1957.
6 M. L. Stein and J. Rose, "The Automatic Deduction of Differential Equations from Analog Setup Diagrams," Mathematical Pre-Print Series, Pre-Print No. 13, Convair-Astronautics, San Diego, Calif.


In addition to the use of amplifiers as summers and
integrators, special one-amplifier circuits representing
more complicated transfer functions are sometimes used
on electronic analog computers in order to save analog
elements. Such use of amplifiers is not permitted on
diagrams whose description is to be processed by the
compiler. One-amplifier circuits generating special
transfer functions must be replaced with equivalent
circuits using elements in Table I.
Two of the elements, element 13 and element 14,
listed in Table I do not correspond to actual analog elements, but have been included for convenience and for
the compilation of more efficient digital programs. Element 13, the Fortran statement, is included to allow
the replacement, if desired, of feedback loops generating
functions such as X^{2/3}, e^X, etc., by a Fortran statement
utilizing a library subroutine in order to obtain a more
efficient digital program. Element 14, the external input,
is included to allow the combination of compiler-generated Fortran programs with other Fortran programs.

The Preparation of a Diagram Description
The compiler converts an analog-comp'uter problem
to a program by processing information supplied in a
description of the analog setup diagram. Therefore, the
preparation of an accurate diagram description by the
user of the compiler is an essential step in the process of
obtaining a correct digital program.
Before beginning the description of a diagram, however, it should be determined that the problem is of a
type suitable for conversion. While the analog computer
is capable of solving several types of problems, the compiler is restricted to problems involving differential
equations expressible in the form (1). That is, no implicit relationships are allowed among derivatives or
among other variables. The compiler detects implicit
relationships by discovering the presence on the diagram
of feedback loops without integrators, a procedure
whose suitability for detecting implicitness is demonstrated in a previous paper.6 Therefore, the diagram
should be examined for such loops before preparing the
description.
Having determined that the problem is of a suitable
type, the diagram must be examined for elements other
than those listed in Table I. Circuits involving other
elements must be replaced by equivalent circuits using
acceptable elements. Circuits for obtaining powers,
roots, arctangents, exponentials, and natural logarithms
may be replaced, if desired, by Fortran statement elements.
After it has been determined that the problem is of a
suitable type and that all circuits are acceptable, the
actual process of describing the diagram is clerical in
nature. Consequently, the description may be done by
an engineering aide, freeing the engineer who originated
the problem for tasks making better use of his skills.
TABLE I
ELEMENTS RECOGNIZED BY THE COMPILER

No.   Name                            Compiler Symbol   Function
 1    Reference                       RF                P = R, R = constant
 2    Potentiometer                   PT                P = KI, K = constant
 3    Summer                          SU                P = -(a1 I1 + a2 I2 + ... + a7 I7), ak = 1, 5, or 10
 4    Integrator                      IN                P = C - integral from t0 to t of (a1 I1 + a2 I2 + ... + a7 I7) dt; ak = 1, 5, or 10; C = constant
 5    Servo Multiplier                MS                P1 = I1 × I2/100, P2 = I1 × I3/100, P3 = I1 × I4/100
 6    Electronic Multiplier           ME                P1 = -I1 × I2/100, P2 = -I1 × I2/100
 7    Divider                         DI
 8    Limiter                         LM                P = U, I, or L as I ≥ U, U > I > L, or I ≤ L, respectively
 9    Rectangular to Polar Resolver
10    Polar to Rectangular Resolver                     P1 = I2 sin (I1), P2 = I2 cos (I1)
11    Switch                          SW
12    Function
13    Fortran Statement                                 P is defined by a Fortran statement
14    External Input                  EI                P must be defined by another Fortran program

The first step in preparing a diagram description is to assign to each element on the diagram an arbitrary
number between 1 and 999. Using a suitable input form
of the type shown in Fig. 1, the elements are described
one at a time in any convenient order. An element's
number is entered on the form. Then its type is designated as given in Table I, for example, SU for a summer. If the element has parameters associated with it,
the values of the parameters are entered on the form.
If the output of the element is to be printed, an N for
normal output or a C for checkout output is written on
the form. The inputs to the element are then listed by
entering the numbers of the elements from which the
inputs come. If the element is an integrator or a summer, the scale factors associated with the inputs are
listed. As an example, Fig. 2 shows the complete entry
for element 2, an integrator whose single input is equal
to five times the first output of element 6, an electronic
multiplier, and whose initial output is -45.
Each element is described in a fashion similar to that
above on one line of the input form. When all elements
have been described and the descriptions checked, the
form is given to a key-punch operator, who punches
each line on a card. These cards become the input to the
compiler.
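To make the clerical format concrete, the following sketch parses the single entry of Fig. 2: element 2, type IN, initial output -45, normal print, input from element 6 with scale factor 5. The blank-separated field layout here is an assumption made purely for illustration; the actual input form fixes the card columns.

    # Illustrative parse of one element-description card (layout assumed).
    from dataclasses import dataclass

    @dataclass
    class Element:
        number: int        # arbitrary element number, 1-999
        kind: str          # compiler symbol from Table I, e.g. "IN"
        param: float       # parameter (here the initial condition)
        output: str        # "N" normal, "C" checkout output
        inputs: list       # numbers of the source elements
        scales: list       # scale factor for each input (1, 5, or 10)

    def parse_card(card):
        f = card.split()
        n_in = (len(f) - 4) // 2                      # input/scale pairs
        return Element(int(f[0]), f[1], float(f[2]), f[3],
                       [int(x) for x in f[4:4 + n_in]],
                       [float(x) for x in f[4 + n_in:]])

    print(parse_card("2 IN -45.0 N 6 5"))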

Fig. 1-Input form for the compiler.

Fig. 2-Input form entry for one element.


THE TRANSFORMATION PROCESS

In order to transform a description of an analog setup
diagram into a digital-computer program, the compiler,
using a procedure developed in a previous paper,6 must
deduce the differential equations which the diagram
represents. Having deduced the equations, the compiler
must then produce a Fortran program suitable for
solving the equations. These two major phases of the
transformation process will now be described and will
be related to the complete compiler program.
First, the differential equation of a simple diagram
will be deduced. In Fig. 3 we see the diagram for the
equation

$$\dot{y} = R - Ky. \qquad (2)$$

P1 represents the output of element 1, P2 the output of element 2, etc., and D2 represents the total input to element 2, an integrator. Examination of the types of elements 1 and 3 (see Table I) permits the equations

$$P_1 = R \qquad (3)$$

$$P_3 = KP_2 \qquad (4)$$

to be written. Examination of element 2 discloses that
it is an integrator. The equation for its input can be
written as

$$D_2 = P_1 + P_3. \qquad (5)$$

If (3) and (4) are substituted into (5), the equation

$$D_2 = R + KP_2 \qquad (6)$$

is obtained. Recalling that

$$P_2 = C - \int_{t_0}^{t} D_2\, dt,$$

as shown in Table I, (6) is seen to be equivalent to (2). Therefore, given values for C (the initial value of P2), R, and K, a solution for (2) can be obtained through use of a suitable numerical integration procedure.

Fig. 3-Diagram for $\dot{y} = R - Ky$.
Actually, (3)-(5) need not be combined if they are
evaluated in the proper sequence, since all that is required to carry out the numerical integration is that
it be possible to obtain a value for D2, given a value of
R, K, and P2. If equations are not to be combined, deducing the differential equations from a diagram description consists of determining the proper sequence
for evaluating each element and then producing equations for evaluating the output of each element with the
exception of integrators, whose input must be evaluated
as seen above. Leaving equations uncombined leads to a
somewhat less efficient computer program. However, it
has the advantage of preserving the identity of the output of every element, a useful property when programs
are debugged, and also aids in visualizing the results as
the response of the physical system being studied. Since
a procedure for deducing differential equations is easier
to implement, too, if equations are left uncombined,
this method was chosen.
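As a worked illustration of this uncombined scheme, the element equations of Fig. 3 can simply be evaluated in sequence inside an integration loop. The sketch below uses a plain Euler step in place of the Gill Runge-Kutta procedure, and the parameter values are arbitrary choices; the compiler itself emits the corresponding Fortran statements rather than executing them directly.

    # Uncombined evaluation of the Fig. 3 diagram.  Under the Table I sign
    # convention the integrator output P2 is the negative of y, so y = -P2.
    # Euler integration stands in for the Gill Runge-Kutta method; the
    # values of R, K, C, and dt are arbitrary choices for this sketch.
    R, K = 2.0, 0.5           # reference and potentiometer setting
    C = 0.0                   # initial output of integrator element 2
    dt, steps = 0.01, 500

    P2 = C
    for _ in range(steps):
        P1 = R                # element 1, reference:      (3)
        P3 = K * P2           # element 3, potentiometer:  (4)
        D2 = P1 + P3          # input to the integrator:   (5)
        P2 = P2 - dt * D2     # element 2: P2 = C - integral of D2 dt
    print(-P2)                # y approaches R/K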

Fig. 2-Diode "and" circuit. (Inputs X and Y, output Z; logical 0 = -5V and logical 1 = +5V. The output is 1 only when both inputs are 1.)

work is that for each combination of input values there
will be a unique output value, represented by signals on
the output lines. Since the output from a logical network
of this sort is determined by the inputs to the logical
network, it is possible to construct a logical network that
will perform any function having the characteristic that
for each state which the inputs assume, there will be a
unique output. It may be seen that this does not limit
the function performed to transcendental or straightforward mathematical relationships, but allows any input-to-output relationship.
When a logical network contains several input and
output lines, it is possible to simplify the design problem
by considering only one output line at a time. The signal on this particular line is determined by the values
of the inputs to the logical network at that time, and
may be specified by means of a Boolean function of the
input variables. This expression will be equal to 1 when
the output from this particular line of the network is 1
and to 0 when the output is 0. An expression that represents the operation of a single line of a logical network is
called the transmission function for that line of the network. A fully developed transmission function, when
written in the sum-of-products form, consists of a number of terms, each of which is a product of all of the input variables, certain of which may be complemented. This type of expression is referred to as a canonical expansion for the circuit transmission.

Fig. 4-Transistor inverter. (X = 0 at -3V yields X' = 1 at 0V; X = 1 at 0V yields X' = 0 at -3V.)
The first step in the design procedure for a network
with several input and output lines consists of deriving
the canonical expansion for each output line of the network. Table I shows the derivation of the canonical expansion for the transmission function for one output
line bit of a small network which will yield the sine of x
within the limits 0° to 90°. The leftmost column of the
table lists the angle x in degrees, starting with 0° and
increasing by increments of 6° to 90°. The next column
contains the same set of angles coded as binary numbers, starting with 0000 and increasing to 1111 by steps
of 0001, which increases the angle by 6° in each case.
The values of sine x, expressed in binary form, are listed
in the next column of the table. To the right of the column representing the sine values is a column listing the
Boolean symbols for the input variables in productterm form. The variables are primed or unprimed depending on whether or not the respective input value is
0 or 1. The computer program generates the sine value
for each input angle and then, examining one bit of the

sine values at a time, develops and stores the canonical expansion for each output line of the logical network. In effect, the computer proceeds down a column of output values until it finds a 1. It then stores the input values which generated the 1 output in the output table. Table I illustrates the derivation of the product terms for the second bit of the four output bits. Each 1 in the second-output-bit column is underlined, as is the product term for this particular output. The complete canonical expansion for the output line representing the second least significant output bit of this particular table is illustrated at the bottom of the figure. An expression is therefore generated for each output line of the logical network.
TABLE I
CANONICAL EXPANSION FOR SIN X FROM 0° TO 90°

x           x (binary form)    Sin x (binary form)    Product
(degrees)   a b c d            (1) (2) (3) (4)        Terms
 0          0 0 0 0            0 0 0 0                a' b' c' d'
 6          0 0 0 1            0 0 0 1                a' b' c' d
12          0 0 1 0            0 0 1 1                a' b' c  d'
18          0 0 1 1            0 1 0 0                a' b' c  d
24          0 1 0 0            0 1 1 0                a' b  c' d'
30          0 1 0 1            1 0 0 0                a' b  c' d
36          0 1 1 0            1 0 0 1                a' b  c  d'
42          0 1 1 1            1 0 1 0                a' b  c  d
48          1 0 0 0            1 0 1 1                a  b' c' d'
54          1 0 0 1            1 1 0 0                a  b' c' d
60          1 0 1 0            1 1 0 1                a  b' c  d'
66          1 0 1 1            1 1 1 0                a  b' c  d
72          1 1 0 0            1 1 1 1                a  b  c' d'
78          1 1 0 1            1 1 1 1                a  b  c' d
84          1 1 1 0            1 1 1 1                a  b  c  d'
90          1 1 1 1            1 1 1 1                a  b  c  d

T(2) = a'b'cd + a'bc'd' + ab'c'd + ab'cd' + ab'cd + abc'd' + abc'd + abcd' + abcd
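The derivation of such an expansion is mechanical, as the following sketch shows. The 4-bit sine values are assumed here to be the integer part of 16 sin x (capped at 1111), an assumption that reproduces the table above, including the nine product terms of T(2).

    # Derive the canonical expansion for one output bit of the sine table.
    # Sine values are assumed quantized as int(16·sin x), capped at 15.
    import math

    def product_term(k):
        """Input value k (0-15) as a product term in a, b, c, d."""
        bits = [(k >> (3 - i)) & 1 for i in range(4)]
        return "".join(v if b else v + "'" for v, b in zip("abcd", bits))

    def canonical_expansion(out_bit):
        """out_bit = 1..4, the sine-bit column of Table I from the left."""
        terms = []
        for k in range(16):
            # small epsilon guards against floating-point undershoot
            s = min(int(16 * math.sin(math.radians(6 * k)) + 1e-9), 15)
            if (s >> (4 - out_bit)) & 1:      # is this output bit a 1?
                terms.append(product_term(k))
        return " + ".join(terms)

    print("T(2) =", canonical_expansion(2))
    # T(2) = a'b'cd + a'bc'd' + ab'c'd + ab'cd' + ab'cd
    #        + abc'd' + abc'd + abcd' + abcd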

The minimization technique which has been programmed is based on work done by Quine1,2 of Harvard, and by McCluskey3 during the preparation of his doctoral dissertation at M.I.T. The input to this particular program is the canonical expansion developed by the preceding program, and the output is a minimized equivalent sum-of-products expression.

1 W. V. Quine, "The problem of simplifying truth functions," Amer. Math. Monthly, vol. 59, pp. 521-531; 1952.
2 W. V. Quine, "A way to simplify truth functions," Amer. Math. Monthly, vol. 62, pp. 627-631; 1955.
3 E. J. McCluskey, Jr., "Minimization of Boolean functions," Bell Sys. Tech. J., vol. 35, pp. 1417-1444; 1956.

The first step of the minimization procedure consists of matching each term of the canonical expansion with each of the other terms, and if the terms differ in only one variable, eliminating that variable. This matching procedure actually consists of the repeated application of the theorem φa + φa' = φ. Table II illustrates the first step of this matching procedure, using the product terms which comprise the expression developed in Table I. It is important that a given term be matched with all of the other terms and not dropped after the first match. For instance, abc'd' in Table II must be matched with both abc'd and abcd'. After the first set of matches, the resulting terms will each have one variable less than the original set of terms. Since, in Table II, the first set of terms consists of four variables each, the next column, containing the shortened terms, will consist of terms of three variables.

TABLE II
DERIVATION OF PRIME IMPLICANTS
Theorem (φa + φa' = φ)

Letter form:

First Cycle    Second Cycle    Third Cycle
a' b' c  d     -  b' c  d      a  -  -  d
a' b  c' d'    -  b  c' d'     a  -  c  -
a  b' c' d     a  b' -  d      a  b  -  -
a  b' c  d'    a  -  c' d
a  b' c  d     a  b' c  -
a  b  c' d'    a  -  c  d'
a  b  c' d     a  -  c  d
a  b  c  d'    a  b  c' -
a  b  c  d     a  b  -  d'
               a  b  -  d
               a  b  c  -

Binary (computer) form:

First Cycle    Second Cycle    Third Cycle
0 0 1 1        - 0 1 1         1 - - 1
0 1 0 0        - 1 0 0         1 - 1 -
1 0 0 1        1 0 - 1         1 1 - -
1 0 1 0        1 - 0 1
1 0 1 1        1 0 1 -
1 1 0 0        1 - 1 0
1 1 0 1        1 - 1 1
1 1 1 0        1 1 0 -
1 1 1 1        1 1 - 0
               1 1 - 1
               1 1 1 -
b

sine values at a time, develops and stores the canonical
expansion for each output line of the logical network.
In effect, the computer proceeds down a column of output values until it finds a 1. It then stores the input
values which generated the 1 output in the output table.
Table I illustrates the derivation of the product terms
for the second bit of the four output bits. Each 1 in the
second-output-bit column is underlined, as is the product term for this particular output. The complete canonical expansion for the output line representing the second least significant output bit of this particular table is
illustrated at the bottom of the figure. An expression is
therefore generated for each output line of the logical
network.
The minimization technique which has been programmed is based on work done by Quine 1.2 of Harvard,
and by McClusky 3 during the preparation of his doctoral dissertation at M.I.T. The input to this particular
program is the canonical expansion developed by the
preceding program, and the output is a minimized
equivalent sum-of-products expression.
1 W. V. Quine, "The problem of simplifying truth functions,"
Amer. Math. Monthly, vol. 59, pp. 521-531; 1952.
2 W. V. Quine, "A way to simplify truth functions," Amer. Math.
Monthly, vol. 62, pp. 627-631; 1955.
3 E. J. McCluskey, Jr., "Minimization of Boolean functions,"
Bell Sys. Tech. J., vol. 35, pp. 1417-1444; 1956.


In Table II the missing variables are indicated by dashes. If any term of the expression does not match with any other term, it is then a "prime implicant" term and will be a term of the final prime implicant expression.

After the original terms have all been compared, a second set of terms, each one variable shorter than the original terms, will have been derived. These terms are then matched with each other, using the same theorem φa + φa' = φ. If the remaining variables are maintained in their original positions in the terms, and the eliminated variables indicated by means of a dash, as in Table II, the matching process may be made somewhat simpler to perform. In this case the terms are matched on the following basis: first, the dashes indicating the missing variable or variables must be in the same position (that is, ab-d' and ab-d may be matched but there is no possibility of matching a-cd with ab-d) and, second, the remaining variables must all be identical save one (that is, ab-d' can be matched with ab-d, yielding ab--). This process is continued until no further matches can be made.

Third and further cycles of this process are continued, using the same rules, until a single pass through a cycle yields no matches. The remaining terms plus all the


terms that did not match during the process comprise the prime implicants.

The right half of Table II illustrates how the above matching process is performed in the computer. The terms of the original expression are stored in binary form, using positional notation to maintain the identity of the variables. A 1 is used to indicate an unprimed variable and a 0 to indicate a primed variable, so that ab'cd is expressed as 1011. Since variables will be eliminated in this process, it is convenient to utilize another register of computer storage as a mask, and to alter the mask as variables are eliminated, while maintaining the variables in order. Each term is therefore represented by two registers, one containing the "values" of the variables and another the mask for the term. The mask is altered to indicate the eliminated variables. For instance, if two terms 1011 and 1010 and their masks are matched, the resulting term is 101- or ab'c. The first step in the matching process in the computer, therefore, is to compare the masks to see if they are identical. If the masks are identical, the terms are then matched and if they differ in only one variable, a new term is formed along with a new mask which indicates the missing variable or variables.
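In outline, the two-register comparison reads as follows. Terms are (value, mask) pairs; the particular mask polarity (a 1 marking a variable still present) is an assumption of this sketch.

    # One match step on the two-register representation described above.
    # A term is (value, mask); mask bit 1 = variable still present.
    def match(term_a, term_b):
        val_a, mask_a = term_a
        val_b, mask_b = term_b
        if mask_a != mask_b:                    # dashes must line up
            return None
        diff = (val_a ^ val_b) & mask_a         # variables that differ
        if diff != 0 and diff & (diff - 1) == 0:      # exactly one bit
            return (val_a & ~diff, mask_a & ~diff)    # eliminate it
        return None                             # no match: keep both terms

    # 1011 (ab'cd) matched with 1010 (ab'cd') gives 101- (ab'c):
    print(match((0b1011, 0b1111), (0b1010, 0b1111)))  # (10, 14): value 1010, mask 1110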
In order to shorten the matching problem, McCluskey
has shown that the terms may first be sorted according
b'--!4----+
to the number of 1 's in each term. Table· III illustrates
e--t4------1---f-t+--+----<>
the same set of terms after they have been sorted and
d --1+----'
be'd'+ b'ed + a e + ad
arranged in tabular form. Each section of a column of
the table contains terms with one more 1 than the terms
a --t4--------.
in the preceding section of the table. I t is necessary to
e --14-----.. .
match only the terms from one section of the table with
the terms in the preceding and following sections of the
a --i+------,
table, for two terms which differ by more than one 1
d - - i + - - - - -.....
cannot match: 1011 cannot match with any term conFig. 5-Two-level diode circuit.
taining only one 1, for instance 1000, for the two terms
must, of necessity, vary in more than one variable.
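In outline, the register-and-mask matching step may be sketched as follows. This is an illustrative reconstruction in present-day Python, not the original Whirlwind program; all names are ours. Each term is a pair of registers: the variable values, and a mask whose 1 bits mark the eliminated variables (the dashes of Table II).

    # Illustrative sketch of the mask-based matching step (four variables).
    def match(term_a, term_b):
        """Return the shortened term if the two terms match, else None."""
        value_a, mask_a = term_a
        value_b, mask_b = term_b
        if mask_a != mask_b:              # first, the masks must be identical
            return None
        diff = (value_a ^ value_b) & ~mask_a & 0b1111
        if bin(diff).count("1") != 1:     # the terms must differ in exactly one variable
            return None
        # form the new term and the new mask marking the eliminated variable
        return (value_a & ~diff, mask_a | diff)

    # Example: 1011 (ab'cd) matched with 1010 (ab'cd') yields 101- (ab'c).
    print(match((0b1011, 0b0000), (0b1010, 0b0000)))   # -> (10, 1), i.e. value 1010, mask 0001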
Further, if the terms from one section of the table are matched with those of the next section, the resulting shortened terms can be matched only with shortened terms formed by matches in the preceding and following sections of the table. This technique significantly reduces the number of matches which must be made and also materially lessens the amount of fast-access memory that is required at a given time. The number of terms formed by the matching process tends to increase considerably for large problems before finally decreasing. By storing only the sections of the table which are being matched in high-speed memory, and storing the rest of the terms on tape or drums, large problems may be handled more easily.
The set of terms derived in this manner, if collected in sum-of-products form, will form an expression equivalent to the original expression. An important characteristic of these terms is that none of them can be shortened by omitting a variable. Quine has shown that the shortest sum-of-products expression must consist of a subset of these terms. It may be shown that certain of these terms are common to every possible minimal subset, while the remaining terms of each subset may be chosen, subject to a set of constraints. The computer program has been designed to print automatically those prime implicant terms common to every minimum expression. The computer then prints a table of the remaining prime implicant terms along with the necessary constraint information. Choice of the remaining terms in the final expression is made from the table. This final choice has been left separate from the computer programs to permit flexibility in the design of the circuitry.
Fig. 5 illustrates a two-level diode circuit which performs the function developed in Table I. Fourteen diodes are required to construct this particular network. Since the original canonical expansion would have required 45 diodes, 31 diodes were saved by the minimization process.

[Fig. 5-Two-level diode circuit realizing bc'd' + b'cd + ac + ad.]

The entire process was programmed for the Whirlwind computer located at the Massachusetts Institute of Technology. This is a general-purpose, stored-program computer with a word length of 16 bits. The entire set

of programs is slightly over 3000 orders in length. The
program requires about 40 seconds to generate the
canonical expansion for each output line of a 12 input-line by 14 output-line problem and from eight to ten
minutes to minimize and print the final expression.
About 25,000 registers are required to store partial results during the processing. As a result, drum storage is
used during the minimization procedure. The procedure
is now being programmed for an IBM 709 located at
the laboratory, making possible the solution of larger
problems and somewhat shortening the programs' running time due to the large core storage of the 709.
The design procedure described here appears very
flexible. It can be used to perform automatically the
logical design of circuitry which will perform any function which has a unique value of the dependent variable
for each value of the independent variable.


To date, networks which yield sine, arc sine, and
the square root of the input value have been constructed. The concept of programmed logic as an aid
to computer design appears quite attractive for the
design of future machines.
ACKNOWLEDGMENT

The computer programs described in this report were
written by H. C. Peterson who has also contributed
many helpful suggestions. The original problem of designing computer instructions which would yield transcendental operations arose during Lincoln Laboratories'
ballistic missile early warning system studies and was
suggested to the author by W. I. Wells. The author also
wishes to thank V. A. Nedzel, R. W. Sittler, and J. F.
Nolan for their helpful comments during the writing
of this report.

The Role of Digital Computers in the Dynamic
Optimization of Chemical Reactions
R. E. KALMAN† AND R. W. KOEPCKE‡

I. INTRODUCTION

ALONG with the increasing availability of high-speed, large-storage digital computers, there has been growing interest in their utilization for real-time control purposes. A typical problem in this connection and one of long-standing interest is the optimal static and dynamic operation of chemical reactors.1,2 To
our knowledge, no digital computer is being used for
this purpose, chiefly because of the many difficulties encountered in utilizing real-time machine computation in
reactor control. These difficulties range from the unavailability or inadequacy of hardware (i.e., transducers,
measuring instruments, low-level analog-to-digital converters, etc.) to the lack of a well-established body of
fundamental theoretical principles. Although a great
deal is known about the basic concepts governing control systems,3,4 present methods cannot be readily applied to designing a program for a real-time digital con-

† Res. Inst. for Advanced Study, Baltimore 12, Md.
‡ IBM Res. Center, Yorktown Heights, N. Y.
1 T. J. Williams, "Chemical kinetics and the dynamics of chemical reactors," Control Engrg., pp. 100-108; July, 1958.
2 R. Aris and N. R. Amundson, "An analysis of chemical reactor stability and control," Chem. Engrg. Sci., vol. 7, pp. 121-155; 1958.
3 J. G. Truxal, "Automatic Feedback Control System Synthesis," McGraw-Hill Book Co., Inc., New York, N. Y.; 1955.
4 J. R. Ragazzini and G. Franklin, "Sampled-Data Systems," McGraw-Hill Book Co., Inc., New York, N. Y.; 1958.


trol computer. This is because the existing design methods are applicable primarily to fairly small-scale systems, whereas the use of a digital computer (in fact the
very attractiveness of computer control) arises primarily in connection with large-scale problems.
The role of the digital computer in real-time control
consists essentially of "digesting" large amounts of information obtained from the primary measuring instruments and then calculating, as rapidly as possible, the
control action to be taken on the basis of these measurements.
One purpose of this report is to provide a broad outline of a new approach to designing control systems for
chemical processes which are to be built around a fast,
general-purpose digital computer operating in real time.
The specific engineering details of the computer will not
be of any interest here; rather, we have concentrated on
studying the types of computations the computer is to
perform. To lend concreteness to the discussion, the
chemical process under consideration will be a continuous-flow, stirred reactor. After the fundamental concepts have been established, the detailed analytic equations (in the linear case) leading to the dynamically
optimal (and thus also statically optimal) design of the
reaction control system are given in Section III. The
equations of Section III represent a special case of the
new design theory of linear control systems formulated


by the authors.5,6 The performance of the dynamically optimized control system is illustrated with the aid of a numerical example.
In Section IV the limitations of the linearity assumption or, rather, the additional steps necessary to attack realistic practical problems, are briefly discussed. It is impossible to give more than a rough sketch of these new methods in a short report; however, specific details, mathematical proofs, and discussion of engineering problems may be found in the literature.5-12
One of the mathematical tools used in the new approach has been called dynamic programming by its developer, Bellman.13 This is a new method for the solution of problems in the calculus of variations where dynamic constraints play the central role. It turns out that our new approach to the description of control-system dynamics leads to concepts which are also the "natural setting" for solving the optimization problem by dynamic programming. A second purpose of this report is to provide a better appreciation of the advantages as well as the limitations of dynamic programming, thereby promoting its use in the solution of engineering problems.
Perhaps the most outstanding advantage of the use of dynamic programming in our problem is that it reveals the intimate connection between the static and the dynamic optimization of the process. In other words, the problem of selecting the operating conditions of the process to obtain optimum yield or optimum product quality cannot be realistically divorced from the problem of providing effective regulation to maintain the process at these conditions. Although these matters are well known to workers skilled in the control art, they are often not clearly understood by others.
In addition to providing some practical means for the solution of reactor control problems, it is hoped that this report will help clarify a number of basic questions.

II. FUNDAMENTAL CONCEPTS

A. Description of Chemical Reactor

The continuous-flow, stirred-tank type of chemical reactor with which we shall be concerned here is shown in Fig. 1. The principal inputs to the reactor consist of liquid streams carrying the various raw materials. The volume-flow rates of the input streams in Fig. 1 are denoted by M1, M2, M3. Each stream carries one or more compounds, whose concentrations (measured in terms of moles/unit volume in Fig. 1) are denoted by U1, ..., Uk. Other inputs to the reactor may include a catalyst stream (with flow rate M4 in Fig. 1) and provisions for cooling or heating (with heat-flow rate M5 in Fig. 1). The numbers X1, ..., Xn denote the concentrations of the various compounds inside the reactor (some of which come from the input streams and some of which are formed chemically inside the reactor); one of the Xi will denote the temperature of the material inside the reactor. Due to agitation, the concentrations of the various compounds as well as the temperature are assumed to be approximately the same at every point inside the reactor and in the output stream. In most cases, it is desirable to keep the amount of material in the reactor constant. This is achieved by means of a level controller which keeps the output stream (F0 in Fig. 1) at all times approximately equal to M1 + ... + M4.

5 R. E. Kalman and R. W. Koepcke, "Optimal synthesis of linear sampling control systems using generalized performance indexes," Trans. ASME, vol. 80, pp. 1820-1826; 1958.
6 R. E. Kalman and R. W. Koepcke, "Dynamic optimization of linear control systems. I. Theory. II. Practical aspects and examples." (Scheduled for publication in the IBM J. Res. Dev., vol. 4; 1960.)
7 R. E. Kalman and J. E. Bertram, "General synthesis procedure for computer control of single and multiloop linear systems," Trans. AIEE, vol. 77, pt. 2, pp. 602-609; 1958.
8 R. E. Kalman, "Optimal nonlinear compensation of saturating systems by intermittent action," 1957 IRE WESCON CONVENTION RECORD, pt. 4, pp. 130-135.
9 R. E. Kalman and J. E. Bertram, "A unified approach to the theory of sampling systems," J. Franklin Inst., vol. 267, pp. 405-436; 1959.
10 R. E. Kalman, L. Lapidus, and E. Shapiro, "On the optimal control of dynamic chemical and petroleum processes." (Scheduled for publication in Chem. Engrg. Progress.)
11 P. E. Sarachik, "Cross-coupled multi-dimensional feedback control systems," Ph.D. dissertation, Dept. of Elect. Engrg., Columbia University, New York, N. Y.; 1958.
12 R. E. Kalman, "On the general theory of control systems," Proc. Internatl. Congr. on Automatic Control, Moscow, U.S.S.R., Academic Press, New York, N. Y.; 1960.
13 R. E. Bellman, "Dynamic Programming," Princeton University Press, Princeton, N. J.; 1957.

Fig. 1-Continuous-flow, stirred-tank reactor, with agitation, input streams, and outflow F0.

The object of the reactor is to produce a certain concentration of chemicals in the output streams. To accomplish this with the given types and concentrations of raw materials in the input streams, one can vary the flow rates M1, ..., M5. Since reactions take place more rapidly as the temperature increases, control can be exerted by changing the temperature in the reactor which, in turn, is achieved (subject to the dynamic lags of heat transfer to the reactor) by changing the heat-input flow rate M5. The amount of catalyst present in the reactor also affects the reactions; the amount is controlled by changing M4 (subject to a time constant = reactor volume/F0 if the amount of catalyst is not affected by the reaction). Similarly, some measure of con-

trol can be exerted by changing the flow rates M1, M2, M3; the effect of these changes is complicated and depends on the reaction dynamics.

B. Statement of the Control Problem

The principal objectives in designing a reactor control
system may be stated as follows:
Problem: Given the desired values X1d, ..., Xnd of the concentrations in the output stream at time t0, manipulate the control variables in such a manner as to bring rapidly the actual concentrations existing in the reactor at time t0 as close as possible to the desired concentrations and then keep the actual concentrations constant at all times despite changes in the concentrations of the input streams, ambient temperature, etc. If, at time t1 > t0, the desired values of the concentrations are changed, the above process is repeated.
We now examine this problem in more detail. In doing
so, we shall specify precisely what is to be meant by "as
close as possible" and "rapidly."
C. Reaction Dynamics

Let us assume that p molecules of compound A and q molecules of compound B combine chemically to form a new compound C. If the concentrations XA, XB, XC of the various compounds are small, the rate of increase of the concentration of compound C is given by the well-known Arrhenius equation:1,2

    dXC/dt = kAB(T) XA^p XB^q.     (1)

In (1), the reaction rate coefficient is given by

    kAB(T) = aAB exp(-EAB/RT),     (2)

where aAB is a constant, EAB the activation energy of the
reaction, T the absolute temperature, and R the gas
constant. Moreover, the rate of decrease of the concentration of compounds A and B resulting from the reaction is equal to p resp. q times the right-hand side of (1).
In qualitative physical terms, the Arrhenius equation
has the following interpretation. Consider a small volume with diameter equal to the effective range of intermolecular forces. If p molecules of A and q molecules of
B have entered this small volume, a reaction takes place,
but not otherwise. In a dilute solution, the probability
of a molecule of some compound entering the small
volume as a result of thermal agitation is proportional
to the thermodynamic factor exp (- EAB/ RT) and the
concentration of the compound, but independent of the
concentration of the other compounds. The probabilities
of independent events multiply, hence (1).
In general, the assumptions which lead to the particular form of (1) are not true, but the reaction rate is still a function of the temperature and concentrations. Thus, in general, one would replace (1) by

    dXC/dt = kAB(XA, XB, XC, T),     (3)

where kAB is some scalar function of the four variables indicated.
It follows that the reaction shown in Fig. 1 can be described by the set of differential equations

    dXi/dt = fi(X1, ..., Xn; M1, ..., Ml; U1, ..., Uk)     (4)
    (i = 1, ..., n; k, l, n = integers).

This is a good place, conceptually and in order to simplify the symbolism, to introduce vector-matrix notation. Thus, let X be a vector (n × 1 matrix) with components X1, ..., Xn. Similarly, M and U are defined as an (l × 1) and a (k × 1) matrix, respectively; f is a vector function of k + l + n arguments with components f1, ..., fn.
In terms of the new notation, (4) becomes

    dX/dt = f(X, M, U).     (5)

The vector X is called the state of the reactor and the
components of X are known as the state variables. The
reason for this terminology is that if the reactor inputs
M(t) and U(t) are specified for all time t ≥ t0, then the
knowledge of X(to) supplies the initial conditions from
which the solutions of the differential equation (5) can
be uniquely determined (subject to some mild mathematical restrictions) for all future values of time.
Thus, the state is a fundamental mathematical concept
for describing the reactor dynamics; it is also a physical
concept. The temperature and various concentrations
can be physically measured (at least in principle); thus
the state at time to may be regarded as the information
necessary to determine the properties of the material
inside the reactor at time to.
The behavior of the reactor through time may be
visualized as a succession of changes in state. This gives
rise to the concept of the state-transition function. In
fact, the function f in the differential equation (5) may
be regarded as specifying the incremental state transitions taking place during the interval (t, t+dt). For
present purposes, it is more convenient to deal with
finite-interval state transitions which are obtained by
solving the differential equations. Anticipating the later
discussion, let us note that for control purposes it is
sufficient to sample the state of the process; i.e., observe
the state only at discrete instants in time, called sampling instants. Usually, the sampling instants are separated by equal intervals τ of time (τ is called the sampling period); i.e., the sampling instants occur at times t0, t0 + τ, t0 + 2τ, ....
Now suppose that τ is chosen to be so small that in the interval (t0, t0 + τ) the functions M(t), U(t) in (5) may be adequately approximated by the constants M(t0), U(t0). Then (5) can be readily integrated (if necessary by numerical methods) and we get

    X(t0 + τ) = φ(τ; X(t0), M(t0), U(t0)),     (6)

where φ is a vector function with n components and k + l + n arguments.
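Where (5) cannot be integrated in closed form, the finite-interval transition function φ of (6) can be obtained numerically. The following is an illustrative sketch of ours in present-day Python (a fixed-step fourth-order Runge-Kutta rule is one possible choice), holding M and U constant over the sampling period τ as assumed above.

    import numpy as np

    def phi(tau, x0, m, u, f, steps=100):
        """Integrate dX/dt = f(X, M, U) from t0 to t0 + tau, M and U held fixed."""
        h = tau / steps
        x = np.asarray(x0, dtype=float)
        for _ in range(steps):
            k1 = f(x, m, u)
            k2 = f(x + 0.5 * h * k1, m, u)
            k3 = f(x + 0.5 * h * k2, m, u)
            k4 = f(x + h * k3, m, u)
            x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        return x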


D. Static Optimization
Precisely what is meant by the phrase, "as close as possible to the desired concentrations" in the statement of the basic problem in Section II-B?
The states of the reactor may be represented as points in n-dimensional Euclidean space (the state variables being coordinates of the point) called the state space. Suppose we specify r components of the state vector as desired values, with the remaining n - r components being arbitrary. This step may be regarded as essentially a management decision, relating to the question of how one should try to operate the reaction process. The set of states for which the operation of the reactor meets the management requirements is clearly an (n - r)-dimensional hyperplane. If the state of the reactor at any instant of time does not lie in the hyperplane, we can measure the "badness" of that state by the distance of the state from the hyperplane of desired states (see Fig. 2). The definition of the distance function (technically, a pseudo-metric) is arbitrary and depends on a management estimate as to what types of deviations from the desired values are more harmful than others. One possible definition of the distance function is

    ρ(Xd - X) = Σ_{i=1}^{r} (Xid - Xi)²     (r < n).     (7)

[Fig. 2-The hyperplane of desired states in the state space.]

More generally, if Q is any positive semidefinite matrix, we can define ρ by the quadratic form

    ρ(Xd - X) = (Xd - X)'Q(Xd - X),     (8)

where the prime denotes the transpose of the matrix.
By static optimization of the process we mean selecting a set of constant values M° of the control variables (subject to some magnitude constraints), so that at equilibrium the actual state lies as close as possible to the hyperplane of desired states. By definition, the equilibrium states X* of the reactor are given by:

    dX/dt = f(X*, M, U) = 0,     (9)

M, U being constant vectors. Thus the statically optimal control vector M° and equilibrium state X*° are determined by solving the minimization problem

    Min_M ρ(Xd - X*).     (10)

To find the optimal control vector M° from (10), X* has to be expressed as an explicit function of M from (9). This and the amplitude constraints on the control variables lead to great analytic difficulties when f is a nonlinear function. But even in cases where the static optimization problem can be solved, it does not provide a complete answer to the basic problem. This is because:
1) Static optimization does not provide a guide as to how the control variables should be manipulated to bring an arbitrary state as close as possible (in terms of the arbitrarily adopted distance function) to the desired state (dynamic optimization).
2) The equilibrium state closest to the hyperplane of desired states may not be stable.
3) The values of the control variables computed by static optimization will not remain optimal when some of the process parameters (concentrations in the input flows, ambient temperature, etc.) change. In other words, static optimization does not incorporate the important principle of feedback.
In the following it is shown that it is possible to combine both dynamic and static optimization in such a way that the principle of feedback is retained.

E. Dynamic Optimization

In our basic problem statement in Section II-B, the last remaining word to be defined precisely is "rapidly." A performance index for the reaction under dynamic conditions may be defined as [...; equations (11)-(20), including the definition of the performance index and the derivation of the recurrence relations, are missing from the source]. In other words, the original infinite-step decision process is converted into a finite-step decision process. Proceeding exactly as in the derivation of (16), we find that the successive optimal performance indexes 𝒫N° are connected by the recurrence relations (20). It can be shown under various restrictions6,13 that 𝒫N° converges to 𝒫°, and hN converges to h.

III. DYNAMIC PROGRAMMING IN THE LINEAR CASE

The ease or difficulty of carrying out iterations (20) is determined largely by the complexity of the dynamics of the reaction and by the limits imposed on the control variables. To illustrate these computations concretely, we consider now the very special (but practically important) linear case where:
1) The reaction dynamics are governed by an ordinary linear differential equation with constant coefficients;
2) There are no amplitude constraints on the control variables.
Linear differential equations arise when the dynamic equations (6) are linearized about some equilibrium state X* and the corresponding values of the control variables M*. If we let

    X = X* + x,   M = M* + m,   and   Xd = X* + xd,     (21)

and, if the deviations x, m from the equilibrium values are sufficiently small, (6) leads to the linear differential equation [(22)-(23) missing from the source], whose solution is

    x(t) = Φ(t - t0)x(t0) + ∫_{t0}^{t} Φ(t - r)Dm(r) dr     (24)

for any t, t0. The matrix

    Φ(τ) = exp Fτ = Σ_{k=0}^{∞} F^k τ^k / k!     (25)

The Taylor series is a convenient way of calculating numerical values of Φ(τ). When m(t) is constant during the intervals between sampling instants, (24) takes the simpler form

    x(t0 + τ) = Φ(τ)x(t0) + Δ(τ)m(t0),     (26)

where

    Δ(τ) = ∫_0^τ Φ(τ - σ) D dσ.     (27)

Eq. (26) is the explicit form of (6) in the linear case.
We now give a formal derivation of the explicit equations for accomplishing the iterations indicated by (20). The various formal steps of the derivation can be justified under mild mathematical restrictions.6
If ρ is given by (8) and 𝒫 by (13), it can be shown by induction that the optimal performance index may be written in the form

    𝒫N°[x(t0)] = x'(t0)PN x(t0) - 2x'(t0)RN xd + xd' SN xd     (N ≥ 0),     (28)

PN, RN, SN being n × n matrices, and

    P0 = R0 = S0 = 0.

For simplicity, we now drop the arguments of x(t0) and m(t0); then

    ∂𝒫N+1/∂m = 2Δ'(PN + Q)(Φx + Δm) - 2Δ'(RN + Q)xd     (N ≥ 0).     (29)

Now 𝒫N+1[x(t0)] is evidently a quadratic function of each of the incremental control variables mi(t0). It follows that 𝒫N+1 has a single extremal value [which may be a minimum or a maximum, depending on the value of x(t0)] at that value of m(t0) which makes the right-hand side of (29) zero. It can be shown6 that the extremal value is a minimum for every x(t0). Hence, m(t0) is found by setting (29) equal to zero, which yields the following expressions for m°(t0) and the matrices defining 𝒫N+1:



    m°(t0) = -AN x(t0) + BN xd,     (30)

where

    AN = [Δ'(PN + Q)Δ]⁻¹ Δ'(PN + Q)Φ,
    BN = [Δ'(PN + Q)Δ]⁻¹ Δ'(RN + Q).     (31)

With some further calculations, using (30) and (31) we find that:

    PN+1 = (Φ - ΔAN)'(PN + Q)Φ;     (32a)
    RN+1 = (Φ - ΔAN)'(RN + Q);     (32b)

and

    SN+1 = SN + Q - BN' Δ'(RN + Q).     (32c)

The iterations indicated by (32) can be readily performed on a digital computer. Note that SN need not be computed if only the optimal control vectors are of interest. In the limit N → ∞, all quantities in (32), except SN+1, may be shown to converge under certain restrictions on F, D, and Δ.6
We get by inspection of (30) the important result: In the linear case, the optimal control variables are linear functions of the actual and desired states of the reactor. Since the control variables are linear functions of the state variables, it follows that under closed-loop control the reactor is a linear dynamic system. It can be shown6 that the only possible type of limiting behavior in such systems as t → ∞ is for the state X(t) to converge to an equilibrium state X*. Since dynamic optimization includes static optimization, it follows at once that: In the linear case, the states of a dynamically optimized system tend asymptotically to the same equilibrium state X*° which is obtained under static optimization.
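As noted, the iterations (32) are well suited to a digital computer. The following is a sketch of ours, in present-day Python, assuming the forms of (25), (27), and (30)-(32) as reconstructed above; the routine names are illustrative, not the paper's.

    import numpy as np

    def transition_matrices(F, D, tau, terms=25):
        """Phi(tau) and Delta(tau) by truncating the series (25) and (27)."""
        n = F.shape[0]
        Phi = np.eye(n); term = np.eye(n)
        S = tau * np.eye(n); sterm = tau * np.eye(n)   # S approximates the integral of Phi
        for k in range(1, terms):
            term = term @ F * (tau / k)                # F^k tau^k / k!
            Phi = Phi + term
            sterm = sterm @ F * (tau / (k + 1))        # F^k tau^(k+1) / (k+1)!
            S = S + sterm
        return Phi, S @ D                              # Delta(tau) = (integral of Phi) D

    def optimal_gains(Phi, Delta, Q, steps=200):
        """Iterate (32) from P0 = R0 = 0; return the limiting AN, BN of (31)."""
        n = Phi.shape[0]
        P = np.zeros((n, n)); R = np.zeros((n, n))
        for _ in range(steps):
            G = Delta.T @ (P + Q)                      # Delta'(PN + Q)
            A = np.linalg.solve(G @ Delta, G @ Phi)    # AN of (31)
            B = np.linalg.solve(G @ Delta, Delta.T @ (R + Q))   # BN of (31)
            P = (Phi - Delta @ A).T @ (P + Q) @ Phi    # (32a)
            R = (Phi - Delta @ A).T @ (R + Q)          # (32b)
        return A, B                                    # m° = -A x + B xd, per (30)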
Example: As a numerical illustration of the results obtained by the use of dynamic programming in the linear case, let us consider the following hypothetical reactions:

    (i)   A + B  --k1(T)-->  C;
    (ii)  2B + C --k2(T)-->  2D.

The objective is to convert raw materials A and B by means of reaction (i) into C, obtaining as much quantity of C as possible. The optimization of the process is complicated by the undesired side reaction (ii), which produces the contamination product D. Under steady-state conditions, the resulting concentrations in the outflow as a function of the "hold-up" time (reactor volume/outflow rate) will have the qualitative shape shown in Fig. 4.

[Fig. 4-Steady-state concentrations in the outflow vs. hold-up time (reactor volume/outflow).]

We now derive the analytical form of the dynamic equations of the reactor using the assumption that the Arrhenius equation (1) holds. Denoting the concentrations of A, ..., D by X1, ..., X4, and the flow rates of A and B by M1, M2, we find, using conservation of mass, that:

    dX1/dt = -k1(T)X1X2 + (M1/V)U1 - [(M1 + M2)/V]X1;
    dX2/dt = -k1(T)X1X2 - 2k2(T)X2²X3 + (M2/V)U2 - [(M1 + M2)/V]X2;
    dX3/dt = k1(T)X1X2 - k2(T)X2²X3 - [(M1 + M2)/V]X3;     (33)

and

    dX4/dt = 2k2(T)X2²X3 - [(M1 + M2)/V]X4.

Let T1 and T2 denote the temperatures of the input flows M1, M2; let Tc be the average cooling-water temperature inside the cooling coils of the reactor; and let h be the corresponding average heat-transfer coefficient per unit cooling-water flow. Furthermore, let H1 be the heat generated per molecule of the first reaction; H2 the heat generated per molecule of the second reaction; ρ the average density of the material in the reactor; and c the average heat capacity of the material. Denoting the temperature in the reactor by X5 and the cooling-water flow rate by M3, conservation of energy yields

    dX5/dt = k1(T)X1X2H1 + k2(T)X2²X3H2 + (M1/Vρc)(T1 - X5)
             + (M2/Vρc)(T2 - X5) + (h/Vρc)M3(Tc - X5).     (34)
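In present-day notation, (33) and (34) with the constants of Table I below can be written directly as a right-hand-side function for the integrator sketched after (6). This is our illustration only: the D balance (dX4/dt) is inferred from the stoichiometry of reaction (ii), and k1, k2 are held at their values at the operating temperature.

    import numpy as np

    K1, K2 = 0.05625, 0.00044643          # k1(X5*), k2(X5*) from Table I

    def f(x, m, u):
        """Right-hand side of (33)-(34); V = 1, rho*c = 10, h/(V rho c) = 0.5."""
        X1, X2, X3, X4, X5 = x
        M1, M2, M3 = m                    # feeds of A and B; cooling-water flow
        U1, U2 = u
        r1 = K1 * X1 * X2                 # rate of reaction (i)
        r2 = K2 * X2**2 * X3              # rate of reaction (ii)
        wash = M1 + M2                    # outflow (hold-up) term
        return np.array([
            -r1 + M1 * U1 - wash * X1,
            -r1 - 2 * r2 + M2 * U2 - wash * X2,
            r1 - r2 - wash * X3,
            2 * r2 - wash * X4,
            2 * r1 - 5 * r2               # H1 r1 + H2 r2, with H1 = 2, H2 = -5
            + (M1 / 10) * (100 - X5) + (M2 / 10) * (100 - X5)
            + 0.5 * M3 * (49 - X5),
        ])

    # At X = (10, 4, 21, 3, 120), M = (0.05, 0.05, 0.1), U = (65, 59),
    # f vanishes: the equilibrium of Table I.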


TABLE I
VALUES OF REACTOR CONSTANTS

X1* = 10      U1  = 65      k1(X5*) = 0.05625         T1 = 100      H1 = 2
X2* = 4       U2  = 59      ∂k1(X5*)/∂X5 = 0.005      T2 = 100      H2 = -5
X3* = 21      M1* = 0.05    k2(X5*) = 0.00044643      Tc = 49       V  = 1
X4* = 3       M2* = 0.05    ∂k2(X5*)/∂X5 = 0.00025    h/Vρc = 0.5   ρc = 10
X5* = 120     M3* = 0.1

At equilibrium, the variables entering (33) and (34) are assumed to have the values shown in Table I. Note that the first reaction is assumed to be exothermic, and the second is assumed to be endothermic.
Using these values, (23) yields the following numerical values for the matrices describing the dynamics of the reactor in the vicinity of equilibrium:

        [ -0.325   -0.5625    0           0      -0.200 ]
        [ -0.225   -0.8125   -0.014286    0      -0.368 ]
    F = [  0.225    0.4875   -0.107143    0       0.116 ]     (35)
        [  0        0.1500    0.014286   -0.1     0.168 ]
        [  0.450    0.7500   -0.035714    0      -0.060 ]

Assuming that only the flow rates M1 and M3 can be changed to effect control, we get:

        [  55      0    ]
        [  -4      0    ]
    D = [ -21      0    ]     (36)
        [  -3      0    ]
        [  -2    -35.5  ]

Using a sampling period τ = 1, the transition matrix for the linear system can be obtained using the Taylor series (25):

    Φ(τ) = [5 × 5 numerical matrix; entries illegible in the source],     (37)

and, from (27),

    Δ(τ) = [5 × 2 numerical matrix; entries illegible in the source].     (38)

The performance index is defined as:

    𝒫 = Σ_{k=1}^{∞} { [x3d - x3(t0 + k)]² + [x4d - x4(t0 + k)]² }.     (39)

The optimal control variables for this performance index are given by the following functions of the desired and actual state of the reactor:

    m° = [2 × 5 numerical matrix] xd + [2 × 5 numerical matrix] x.     (40)

To improve the operation of the reactor, it is desirable to increase the yield of C and cut down the yield of D. Therefore, the desired state of the reactor may be defined as:

    x3d = 5,   x4d = -1.     (41)

If the reactor starts out at the old equilibrium state (x1 = x2 = ... = x5 = m1 = m3 = 0) at time t0 = 0, the behavior of the state and control variables as a function of time will be as shown in Fig. 5. It is evident from Fig. 5 that it is possible to achieve almost exactly the new desired state and that the new equilibrium state can be reached rather quickly. It should be noted, however, that the results are valid only if the linearized approximation of the dynamics is valid.

IV. LIMITATIONS OF THE LINEARITY ASSUMPTION

There are a large number of problems which must be considered before fully automatic dynamic optimization of chemical reactions can take place.
1) If state variables are not physically measurable, they must be generated artificially in order to be able to compute the optimal values of the control variables.


[Fig. 5-Behavior of the incremental state variables x1, ..., x5 and of the control variables m1, m3 as functions of time (0 to 10 seconds), with their limiting values as t → ∞; in particular, x3 → 4.998 ≈ x3d and x4 → -1.001 ≈ x4d.]

This calls for simulating some of the reaction dynamics
as an integral part of the control system. This, too, can
be done by means of a digital computer. (Most of the
analog-type control instruments used at present may be
thought of as performing essentially this function.)
2) The dynamic programming equations can be solved,
practically speaking, only in the linear case. In reality,
of course, the reaction dynamics are nonlinear. Moreover, they may change with time due to uncontrollable
or unknown effects. There are essentially two possibilities of attacking these problems.
a) The reaction dynamics are linearized over a certain
region in state space. The reaction is then optimized on a
linear basis, computing the dynamic programming equations in real time. If, as a result of this optimization, the
state moves into another region of the state space, another set of linearized equations is obtained to describe
the dynamics in the new region. These equations are
then used to obtain a new dynamic optimization, etc.
This method of attack is closely related to the problem
of designing adaptive or self-optimizing systems15 about
which little is known at present. The chief difficulty is
R. E. Kalman, "Design of a self-optimizing systemr" Trans.
ASME, vol. 80, pp. 468-478; 1958.
15


the rapid and accurate determination of the linear dynamics in the presence of measurement noise.
b) The dynamic optimization is solved directly by
purely numerical methods. The chief difficulty encountered here is the experimental measurement and representation of the reaction dynamics in a nonlinear form.
Very little is known about this problem at present.
3) The control variables cannot be chosen freely but
must lie within certain prescribed ranges; in other
words, the control variables "saturate." The problem of
designing a control system where the dynamic equations
of the control object are linear but where the control
variables saturate has an extensive literature usually
under the subject heading of "Optimal Relay Servo
Problem." At present this problem is solved only in the
case where 1) the dynamic equations are of the second
order and 2) there is only one control variable. 16 Using
the point of view of this paper, a rigorous method was
recently obtained (which is not subject to the above restrictions, 1 and 2)8 for the computation of the optimal control variables; however, this method is very inefficient. When the control object has nonlinear dynamics, no method of computing the optimal control
variables is known.
Despite these obstacles, much progress can be expected from the utilization of the "state" method of
describing reaction dynamics combined with dynamic
optimization as presented in this paper. These new ideas
will probably be most helpful in attempting to control
(by means of real-time digital computation) dynamic
systems which have many state variables.
LIST OF PRINCIPAL QUANTITIES

Sections II-A and II-B

U; Ui = vector denoting concentrations in input streams; its components.
M; Mi = control vector; control variables.
l = number of control variables.
T = temperature.
X; Xi = state vector; state variables (concentrations and temperature inside reactor).
n = number of state variables.
t; t0 = time; initial time.

Section II-C

f; fi = infinitesimal state transition function; its components.
τ = sampling period.
φ; φi = (finite-interval) state transition function; its components.

Section II-D

ρ = distance function in state space (pseudo-metric).
' = transpose of the matrix.
Q = positive semidefinite matrix.
( )* = equilibrium values.
( )° = optimal values.

Section II-E

𝒫 = performance index.
h; hi = optimal control function.

Section III

x; xi = incremental state vector; incremental state variables.
m; mi = incremental control vector; incremental control variables.
F = infinitesimal transition matrix in linear case.
D = matrix denoting instantaneous effect of control variables in linear case.
Φ(τ) = (finite-interval) transition matrix in linear case.
Δ(τ) = matrix denoting effect of control variables in linear case (finite-interval).
AN, BN, PN, RN, SN = constant matrices.

16 R. E. Kalman, "Analysis and design principles of second and higher-order saturating servomechanisms," App. 2, Trans. AIEE, vol. 24, pt. 2, pp. 294-310; 1955.

Simulation of Human Problem-Solving
W. G. BOURICIUS† AND J. M. KELLER†

† IBM Res. Center, Yorktown Heights, N. Y.

SIMULATING human problem-solving on a digital
computer looks deceptively simple. All one must
do is program computers to solve problems in such
a manner that the computer employs the identical
strategies and tactics that humans do. This will probably prove to be as simple in theory and as hard in
actual practice as was the development of reliable
digital computers. One of the purposes of this paper is
to describe a few of the pitfalls that seem to lie in the
path of anyone trying to program machines to "think."
The first pitfall lies in the choice of an experimental
problem. Naturally enough the problem chosen should
be of the appropriate degree of difficulty, not so difficult
that it cannot be done, and not so trivial that nothing
is learned. It should also involve symbology and manipulations capable of being handled by digital computers.
At this stage of problem consideration, a devious form
of reasoning begins to operate. Usually the people engaged in this type of research will have had a thorough
grounding in conventional problem-solving on computers. Consequently, they are conversant with the full
range of capabilities of computers and have an appreciation of their great speed, reliability, etc. They also know
what kinds of manipulations computers do well, and
conversely, what kinds of things computers do in a
clumsy fashion. All of this hard-earned knowledge and
sophistication will tend to lead them astray when the
time arrives to choose a problem. They will try to make
use of this knowledge and hence choose a problem that
will probably involve the simulation of humans solving
problems with the aid of computers rather than the


simulation of humans solving problems with only paper
and pencil. Consequently, the characteristics of present-day computers may confine and constrict the area of
research much more than is desirable or requisite. What
is liable to happen, and what did happen to us, is that
the experimental problem chosen will develop into one
of large size and scope. If this always happens, then
those human manipulative abilities that are presently
clumsy and time-consuming on computers will never
get programmed, simulated, or investigated. Fortunately for us, the two experimental problems we
chose were of such a nature that they could be easily
miniaturized, and this was done as soon as the desirability became apparent.
The second pitfall which must be avoided is the assumption that one knows in detail how one thinks. This
delusion is brought about by the following happenstance. People customarily think at various levels of
abstraction, and only rarely descend to the abstraction
level of computer language. In fact, it seems that a
large share of thinking is carried on by the equivalent
of "subroutines" which normally operate on the subconscious level. It requires a good deal of introspection
over a long period of time in order to dredge up these
subroutines and simulate them. We believe people assume that they know the logical steps they pursue when
solving problems, primarily because of the fact that
when two humans communicate, they do not need to
descend to the lower levels of abstraction in order to explain to each other in a perfectly satisfactory way how
they themselves solved a particular problem. The fact
that they are likely to have very similar "subroutines"
is obvious and also very pertinent.

Bouricius and Keller: Simulation of Human Problem-Solving
The third pitfall consists of the following fact: any
problem chosen as an experimental vehicle is likely to
produce interesting results if the program is successful. We consider these results to be byproducts of the general problem of studying the methodology of problem-solving, though they do serve as a test for success of the
methods employed. As these byproducts accumulate,
one is increasingly tempted to spend a disproportionate
amount of time on obtaining useful and/or interesting
byproducts. This diverts the researchers and they make little progress along the lines originally intended.
The class of problems we chose was the following:
given a set of elements, and a criterion of compatibility
between any two of its elements, find a subset of mutually compatible elements satisfying given constraints. This covers a large class of problems to which no satisfactory analytic solutions are known. An example
would be: given the set of all airplanes near an airport,
find the largest subset of those which are on compatible
(i.e., noncollision) courses.
The first experimental problem we chose consisted of
this: a word list was given together with a table of
synonyms for each word. The test of compatibility between any two words on the list was provided by a
speech-recognition machine which was not as discriminating as the human ear. The problem was to find a
word list the same length as the original list, but with
synonyms substituted wherever necessary so that all
the words on the resultant list could be unambiguously
identified by the speech-recognition machine. The difficulty encountered in substituting the synonyms is that
each substitution may cause added incompatibility relationships. The final list of mutually compatible words
would be a practicable working vocabulary for voice
control of, say, an air defense center.
We decided that the following three heuristic methods
would probably be used by humans. These are:
Method 1-test the first word against all the following words. Wherever two words are incompatible, substitute synonyms for the second word until a compatible synonym is found, then proceed. Repeat with the second word of the list. After the last two words are tested, reiterate. Eventually a word list meeting the compatibility criteria will be found, or else the tables of synonyms will be exhausted. (A sketch of this method appears after this list.)
Method 2-this method is basically the same as
method 1 with this added sophistication: whenever
a compatible synonym is substituted, it is immediately
tested further for compatibility with all words in the
list occurring before the particular two words concerned.
Method 3-determine which words are incompatible
with the largest number of other words in the list and
substitute synonyms for these highly incompatible words first. Then reiterate.
Two submethods also suggest themselves as processes to apply prior to trying the above methods. These are:
Submethod 1-sort the words on the first phoneme.

117

Submethod 2-try a batch procedure: divide the original list into two or three parts, apply one of the three main methods to each in turn, then put the batches back together before trying the final manipulation.
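In outline, Method 1 might be rendered as follows; this is a rough sketch of our own in present-day Python, not the authors' 704 program, and the compatibility test and synonym tables are supplied by the caller.

    def method_1(words, synonyms, compatible, max_passes=50):
        """Scan the list; replace the second word of each incompatible pair."""
        words = list(words)
        for _ in range(max_passes):
            changed = False
            for i in range(len(words)):
                for j in range(i + 1, len(words)):
                    if compatible(words[i], words[j]):
                        continue
                    for s in synonyms.get(words[j], []):   # try synonyms in turn
                        if compatible(words[i], s):
                            words[j] = s                   # may disturb earlier pairs;
                            changed = True                 # Method 2 re-tests for this
                            break
                    else:
                        return None          # tables of synonyms exhausted
            if not changed:
                return words                 # a compatible word list has been found
        return None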
All of these methods were programmed and put together in a master program which determined the sequence of the application of each of the methods and
submethods. Quite naturally, this sequence was: first,
the "quick and dirty" method 1; then, the more sophistica ted method 2; and last, the more com plica ted and
most powerful method 3. At the start, each of the methods was allotted a certain amount of time, and if no
compatible word list was produced within the allotted
time, then the master program switched to the next
method. This procedure was modified so that the time
was extended if an analysis showed that the method had
a good chance of successfully producing a compatible
word list. To determine which submethod to employ
first, a random choice was made, and lack of success
automatically switched control to the other submethod.
To simulate these methods we employed a random
number generator to generate 18-bit pseudo words, each
consisting of three 6-bit pseudo phonemes. Inasmuch as
the English language contains approximately 43
phonemes, 6-bit pseudo phonemes can reasonably be
expected to represent adequately human phonemes. The
human ear was considered capable of distinguishing between any two 6-bit pseudo phonemes that differed in
one or more bit positions. The hypothetical speech-recognition machine, being not so discriminating, was considered capable of differentiating- between any two 6bit pseudo phonemes if and only if they differed in two
or more bit positions. Mathematically stated, two
pseudo phonemes Pi and Pj are considered compatible whenever

    W(Pi +. Pj) ≥ 2,
where the weight function, W, merely counts the number of ones in the argument, and the operation +. between the two pseudo phonemes is bit-by-bit addition
modulo two, which is equivalent to exclusive OR.
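In code, the phoneme test reads as below. This is an illustration of ours in present-day Python; the word-level rule shown, which asks only that some pair of corresponding phonemes be machine-distinguishable, is one plausible reading of the criterion.

    def weight(x):
        return bin(x).count("1")                  # W: number of 1 bits

    def phonemes(word):
        return [(word >> s) & 0o77 for s in (12, 6, 0)]   # three 6-bit fields

    def phoneme_compatible(pa, pb):
        return weight(pa ^ pb) >= 2               # +. is the exclusive OR

    def words_compatible(wa, wb):
        # Assumed word-level reading: the machine can tell two words apart
        # if some pair of corresponding phonemes differs in two or more bits.
        return any(phoneme_compatible(pa, pb)
                   for pa, pb in zip(phonemes(wa), phonemes(wb)))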
Our experimental byproduct results are given in
Table I, where the "word-list length" is defined as being
the length of word lists that have a 50 per cent chance
of being satisfactorily manipulated.
The second experimental problem consisted of finding
the largest mutually compatible set of 12-bit numbers
satisfying the following criterion of compatibility:
    W(Ai +. Aj) ≥ 5.

This set will have the characteristic that double error
detection and correction is possible when employing it
as an information transmission code.!

1 R. W. Hamming, Bell Sys. Tech. J., vol. 29, pp. 147-160;
April, 1950.


TABLE I
EFFECTIVENESS OF SEVERAL METHODS OF FINDING COMPATIBLE WORD LISTS

Method            Word-List Length    Average Computing Time in Minutes
1 alone           325                 0.65
1 + sub 1         300                 0.74
1 + sub 2         325                 0.78
2 alone           550                 3.70
2 + sub 1         525                 2.55
2 + sub 2         550                 2.95
3 alone           740                 3.82
3 + sub 1         740                 3.94
3 + sub 2         740                 4.50
Master Program    750±                Dependent on Word-List Length

While in general the resulting code is nonconstructive, in this paper we confine ourselves to a subgoal, namely, the results obtained when constraining the set to be a constructive group code. With this constraint, the original compatibility criterion of W(Ai +. Aj) ≥ 2E + 1, where E is the number of errors to be detected and corrected, can be reduced to the following set of constraints on binary numbers whose lengths are those of the check bits only, the so-called parity check matrix:2

    W(Ci) > 2E - 1
    W(Ci +. Cj) > 2E - 2
    W(Ci +. Cj +. Ck) > 2E - 3
    . . .
    W(Ci +. Cj +. Ck +. ... +. Cn) > 0.

The size of the set of C's determines the maximum
number of information bits that can be handled by
check bits whose formulas for construction are given by
the columns of the C's. These constraints were programmed and the size of the matrix determined by exhaustion.
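The constraint check lends itself to a direct program. The sketch below is our illustration in present-day Python, not the authors' 704 code; it grows a set of check-bit columns greedily, enforcing W(XOR of any m chosen columns) > 2E - m for E = 2.

    from itertools import combinations

    E = 2                                  # double error detection and correction

    def weight(x):
        return bin(x).count("1")

    def admissible(columns, cand):
        """True if the candidate keeps every subset constraint satisfied."""
        for m in range(len(columns) + 1):
            for subset in combinations(columns, m):
                x = cand
                for c in subset:
                    x ^= c                 # XOR of cand with the subset
                if weight(x) <= max(2 * E - (m + 1), 0):
                    return False
        return True

    def search(check_bits):
        columns = []
        for cand in range(1, 1 << check_bits):
            if admissible(columns, cand):
                columns.append(cand)
        return columns                     # one column per information bit

A single greedy pass of this kind gives only a lower bound on the matrix size; exhaustion over orderings, as the authors performed, is needed for the maximum sizes reported in Table II.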
The experimental byproducts are given in Table II.
Having satisfied your curiosity regarding the possibility of obtaining useful and/or interesting results
from these methods, we now wish to discuss those aspects that are pertinent to the simulation on computing
machines of human problem-solving strategies and
techniques.
In the beginning we had what we thought were very
good ideas of how to proceed. We had solved all of the
problems on a medium level of abstraction. We soon
learned, however, that an abstraction is nearly always
a refuge from accurate knowledge. Explaining to each
other how to reach a particular goal did not teach us
how to program the simulation of the manipulations in-

2 D. Slepian, Bell 5ys. Tech. J., vol. 35, pp. 203-234; June,
1956.

TABLE II
DOUBLE ERROR-CORRECTION CODE LENGTHS

Number of Check Bits    Number of Information Bits
6                       2
7                       4
8                       >9
9                       >13
10                      ≥16

volved until we descended the abstraction ladder to the
level of the symbolic coding process we were employing.
For example in submethod 1, sorting the words according to the size of the leading 6-bit phoneme is useless.
For detecting 1-bit differences it is a useful procedure,
but for detecting 2-bit differences, one must order the
words in an order that bears a useful relationship to the
compatibility criterion. In our first problem we found
this relationship in the weight of the phonemes, and
accordingly arranged the words in order of increasing
weight of the first phoneme. The result was a quicker
finding and weeding out of incompatible words in isolated cases.
A more subtle difficulty arose in the second of our
problems. Consider the following five beginning members of a larger set satisfying the constraints previously
described:
    1)  11 11 00 00
    2)  00 11 11 00
    3)  00 00 11 11
    4)  11 00 00 11
    5)  10 10 10 10.

If element number 5 was tried and discarded because
it was found not to yield a sufficiently large set of C's,
then the number 01 01 01 01 need not be tried because
it will yield similar unsatisfactory results. When doing
this by hand on a piece of paper, one can immediately
detect the equivalence between the two trial numbers
for the fifth element. The criterion for this equivalence
is that the two sets of five numbers containing different
fifth elements are identical upon column permutation.
It took us literally days to describe this simple relationship to the 704 computer whereas it takes only a minute
to describe it to you. One of the reasons, of course, is
that this concept involves two-dimensional visualization at which humans are particularly good and 704's
are particularly inept. As a result, we wasted a lot of
time trying to circumvent the transposition of the
matrix elements in order to avoid using an excessive
amount of machine time. A second-level difficulty arose
when it was realized that all of the possible equivalences
were not discovered by the original code, which did not
contain column transposition. Consider, for example,
the situation when the fifth member is 10 10 11 00. Then
the number 10 10 00 11 is equivalent, but an involved

column permutation had to be programmed in order to
detect such an equivalence.
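The equivalence just described is easy to state in code (our illustration, in present-day Python): with the earlier elements fixed, two trial sets are equivalent exactly when one can be carried into the other by permuting bit columns, i.e., when the multisets of their columns agree.

    def columns(rows, width):
        return [tuple((r >> (width - 1 - i)) & 1 for r in rows)
                for i in range(width)]

    def canonical(rows, width):
        return tuple(sorted(columns(rows, width)))     # column-permutation invariant

    a = [0b11110000, 0b00111100, 0b00001111, 0b11000011, 0b10101010]
    b = [0b11110000, 0b00111100, 0b00001111, 0b11000011, 0b01010101]
    print(canonical(a, 8) == canonical(b, 8))          # True: equivalent trials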
In conclusion, we would like to give our motivation in
trying to simulate human behavior. At the present
time, there are many problems whose nature is such
that machines do not handle them successfully even
though they are regularly solved by humans. 3 Therefore, it seemed desirable for us to study a "working
model" in action, to determine its component functions
and to analyze each function to find out what part it
played in the solution of these difficult problems. Our
main goal is to be able to solve these kinds of problems,
partly by new and different programs, and partly by
3 A. Newell and H. A. Simon, "Current Developments in Complex
Information Processing," RAND Rept.; May 1, 1956.
H. L. Gelernter and N. Rochester, "Intelligent behavior in problem-solving machines," IBM J. Res. Dev., vol. 2, pp. 336-345;
October, 1958.


new and different machine-instruction sets. We do not
insist that the machines do all the work, because we are
convinced that a combination of a human and a machine working in tandem will always have problem-solving powers superior to those of either working alone. To complement each other, better communication is required
between humans and machines, and this implies communication on higher abstract levels than the ones now
employed. In order for this to be possible the machines
must have either wired-in or programmed "subroutines"
to interpret the directions and/or suggestions given
them by their human colleagues. These "subroutines"
should overlap those of humans in order to be useful.
We feel that this goal will be furthered by continued research along the lines described in this paper.
The authors would like to express their sincere thanks
to N. Rochester for suggesting the problems and outlining the heuristic programming methods utilized.

The Role of the University in Computers, Data
Processing, and Related Fields*
LOUIS FEIN†

INTRODUCTION

SINCE the Fall of 1956, the author has been studying the genesis and operation of university programs in the fields of computers, data processing,
operations research, and other relatively new and apparently closely related fields. The specific purposes
were:


1) To study and evaluate the organization, curriculum, research program, computing equipment,
financing, and facilities of universities in the
United States having computer and/or data
processing and/or related programs.
2) To identify those fields of study (some already accepted and identified as disciplines as well as those
not yet so designated) that are unambiguously
part of the computer and data processing fields
and those closely related fields that might legitimately be part of a university program.
3) To appraise the role of the universities in these
fields and to determine what universities might
do to build distinguished programs in these fields.

* Part of this work was undertaken when the author was a consultant to Stanford University.
† Consultant, Palo Alto, Calif.

During the course of the study it became clear that a
university program in any field is an important function
of the role of the university in society itself. The identification of this role thus became a fourth separately
identifiable purpose of the study.
Source information was obtained by formal interviews and informal sessions with university administrators, directors of computing centers, faculty members,
students, industrial representatives, and other interested persons. Places of interview were universities,
scientific meetings, social gatherings, industrial plants,
and research institutes. Important information was obtained from a few publications.
A questionnaire intended for mailing was considered
and dropped because it became clear very early that
only personal interviews could bring out the important
facts and opinions. Hence, the conclusions and recommendations are not always derived from a mass of accumulated data. They do reflect the author's experience
in the computer and data processing field, information
about what universities are doing and especially what
they are not doing, and the influence of those individuals
interviewed who have experienced and reflected seriously on the same problems as those considered in this
study.

STUDY AND EVALUATION

Modern U. S. Culture
In response to the increasingly large and complex
problems faced by society, our culture has changed in
several important respects. For their solution, these new
problems require new techniques and sometimes refinement of already developed techniques. The requirements for training people in new fields (and in older
ones) and for research, design, and development are
difficult for us to cope with. In fact, we have, as a nation, sought solutions to our problems in training, education, research, design, and development, from -yvhatever institutions or individuals would make themselves
available. On this account, traditional roles of institutions in our society are changing, and in many cases
transformations are almost complete.
In the past, industry, government, and the university
were each roughly assigned specific fields and subject
matter in which they were active almost exclusively.
Research and instruction in the "intellectual disciplines"
were the province of the university, and design and
development belonged mainly to industry and government. Today, one can sometimes scarcely identify
teachers, researchers, designers, and development people as members of a university, industry, or government, for each is involved to a considerable extent with
all three. Nor can they be identified by -their (overlapping) fields of interests. For example, an information
theorist may just as well be found at Bell Telephone
Laboratories as at the Massachusetts Institute of
Technology.
Specifically, the milieu of our society is characterized
by a demand that more phases of our traditional culture
be subjected to a critique guided by rationalistic and
scientific rules of evidence. Thus,
Information-Data-gathering, storage, searching,
classification, cataloguing, retrieval, encoding, decoding, interpretation, sampling, filtering, analyzing,
checking;
Models-mathematical, Boolean algebraic, Post algebraic, logical, stochastic, computational, statistical;
simulators, computers, automation, language translation, switching theory, information theory, coding
theory, cybernetics, decision theory, statistics, operations research, econometrics, psychometrics, management science, linear programming, game theory, automata theory, artificial intelligence, self-organizing
machines, adaptive mechanisms, neural psychology,
learning machines, and numerical analyses
are examples of topics-some old, some new-that are
having a terrific impact on and general acceptance by
the industrial, government, and university community.
This acceptance is due in no small measure to the
spectacular successes enjoyed by some of these fields
when applied to the solutions of problems in many of

the crash programs that characterize the present generation. The following excerpt from the April, 1956 issue
of the Journal of the Institute of Management Science
characterizes a phase of the milieu:
Primarily, what are needed are methods for grappling with multivariable situations as they occur in the world rather than the single-variable methods of classic laboratory science. Hopefully, it should
be possible to start with qualitative or rough quantitative approaches
and gradually develop tools of greater precision. This suggests the
use of mathematical models in both their qualitative and quantitative
aspects. Techniques grounded in mathematics, notably including
linear programming, search theory, and game theory (particularly
non-zero sum multi-person games) have outstanding potential contributions. Marketing budgets involve commitments on too large a
scale to permit executives to continue to accept historical patterns
in a management world in which rational explanations of alternative
strategies and planned optimizing are becoming standard operating
procedures.

Industry, business, and government have varied requirements for trained people to do the work requiring
talents in these fields. Need for investigating these fields
from the scholar's viewpoint has also been recognized
although support for this endeavor lags. The problems
of coping with practical problems in and with these
fields, training professionals, training scholars, and doing research, have been handled in a variety of ways by
industry, government, and the university when indeed
they have tried to handle the problems at all. On the
whole, each of these three important segments of the
community has been unprepared by virtue of its traditional organization, philosophy, responsibilities, and
procedures to incorporate and cope with these new situations-situations whose fundamental implications
usually require a modification of their organizations,
philosophies, responsibilities, and procedures.
Thus, the government has set up RAND Corporation(s) for research and project work; it has set up and
supported institutes at universities for research and
project work; it has indirectly supported development,
if not exploratory work, in these fields as byproducts
of government contracts to industry.
Industry and business have set up separate departments charged with over-all responsibility of applying
these techniques to company operation, management,
design, manufacturing, sales, etc. Graduate-level schools
for instruction of professionals in these fields are being
run by industry itself.
The scholars and practitioners in these new fields are
uncertain both as to the nature and structure of the
fields and their relation to each other. As would be expected, new societies and magazines devoted to these
fields have sprung up. The following quotes (from
Transactions of the Institute for Management Sciences)
bespeak the growing pains of the community of practitioners and scholars in these youthful fields:
The writer has a strong feeling that those who worked so hard and
successfully to organize TIMS were on the right track. However, it
appears that they were led more by broad feelings than by logical
reasoning. They knew (in the sense of having faith) that there was a
need for something not yet in existence but they failed to develop a
clear and distinct picture of what they intended to accomplish.

Further, the "Statement of Editorial 'Policy" from the
September, 1957 issue of the Journal of Information
and Control,
The theories of communication, computers, and automatic control
were initially of interest primarily to mathematicians and engineers.
The papers relating to these fields have therefore appeared in a wide
variety of engineering and mathematical journals. Some of the results
of this work have been applied by scholars in such areas as linguistics,
psychology, statistics, physics, genetics, neurophysiology, and even
philosophy. Such applications have led to results which are of interest
in their own areas, but which in some instances have also suggested
models and raised questions of fundamental interest to the theories
being applied. Unfortunately, they appear in many entirely different
sets of professional publications, which few readers of engineering
and mathematical journals see.
It is the purpose of this new journal, Information and Control, to
publish papers which make significant contributions to the theories
of communication, computers, and automatic control and also papers
which present experimental evidence or theoretical results bearing on
the use of ideas from such theories in any field to which the ideas are
relevant. Papers will be published, for example, on such topics as:
Theory of communication
Theory of automata
Theory of automatic control systems
Description and analysis of language and other natural information sources
Communications and control systems which may include links
within or between organisms
Information aspects of physics and of the theory of observation
and measurement
Organization, processing and retrieval of data.
Any statement of policy for a new enterprise is, at best, an educated
guess, and this one is no exception. It is, of course, subject to gradual
change without formal notice, as the Journal takes shape under the
diverse pressures of authors, editorial board, editors, and readers.
THE ROLE OF THE UNIVERSITY

The universities, as institutions, are having a hard
time learning to cope with their new role in society in
general and in particular learning how to effectively incorporate these new fields into the academic structure.

Policy
Probably 150 universities and colleges are engaged in
some kind of activity in the fields of our concern. The
purpose of the schools that first got involved was "to
get their feet wet" in fields that seemed to have appeal.
Usually, one man who had a strong interest in this or
that field would give a course in whatever department
he happened to be in. Several institutions have embarked on a program of building large-scale digital computers. Some were successful. Some are now being built.
Later, as the pressures for information in these fields
grew, courses were added, some equipment was obtained,
centers and institutes were established. Only a few universities have made a determined effort to select a field
of interest, set up a policy and goal, and implement it.
Most were feeling their way. Most are now feeling their
way.
The most important impact on university programs
in these areas has been the educational program of
IBM. IBM has a manpower problem now; they know
it will be severe in ten years. Their problem is two-fold.
They need professionally trained people to help sell
their product. They want their customers to have professionally trained people to use their product properly.
IBM has "presented" 650's to over 50 universities by
now under the condition (among others) that a couple
of courses in data processing and numerical analysis be
given. A 709 has been "given" to U.C.L.A. for "emphasis on the study of business management problems."
M.I.T. has been "given" a 704 for "research and education of students in computing techniques." The University of California has a retread 701. Sperry Rand,
Burroughs, Bendix, Royal McBee, and a handful of
other computer manufacturers have also "contributed"
computers to universities (see the Appendix).
It is fair to say that, in many cases, to the extent that
a university computer activity has a purpose at all, it
has been made for them by IBM. It is true that for
many universities, this is good. Otherwise, they may
never have developed a program at all. Nevertheless,
there are no distinguished academic centers of computers, data processing, and related fields, and I believe
that this is so because not enough attention has yet been
given to the development of an integrated program and
policy in response to the needs and conditions of the
whole community rather than as a supplement to computers obtained at a "bargain."
The scramble to get in on a "free" 650 computer from
IBM is a disgrace in some cases. Course titles and contents have been created on the spur of the moment to
fit the IBM requirements. Faculty have been assigned
on the basis of their not having a full teaching load-more evidence of what is done without a clear, well-defined policy and program.

Organization
The university entity most popular as the center of
activity for computers, data processing, operations research, industrial engineering, mathematics, business,
etc., is the computing service center. Sometimes an
interdepartmental committee is in charge of the service
and of a few courses. Sometimes, all courses are given
in one department without any necessary relation between course content and department. Some schools run
one-week seminars. Others run symposia. In many
cases, institutes, not altogether part of the academic
structure, are involved.

Faculty
The range and variety of faculty participating in university computer and data processing programs compares favorably to the range and variety of the programs themselves. Everyone from novices to people
with ten years of experience is participating. Part-time
instructors from industry and government have been
used. Interest, without regard to experience and competence, is often the main requisite of the teacher. With the exception of some instruction in programming, courses designed for faculty
training hardly exist.


There are several reasons for this unfortunate situation. First, the fields are so new and the need so great,
that not enough experienced and competent people have
developed to handle this need. Secondly, many people
who could most adequately fill the need are being more
highly paid for their services by industry or by the
government. Thirdly, competent researchers ordinarily have better facilities and equipment for their work in industry and government, work which is often identical to what they would be doing at a university.

Curriculum
A few selected examples of topics in computers, data
processing, and related fields covered in the present
curriculum of universities are:
Use of electronic data processing
Logical design of computers
Computer electronics
Electronic digital computers-digital computer circuitry
Theory of and operation of computing machines
Statistics in business forecasting
Dynamic programming
Numerical mathematical analysis
Numerical mathematical analysis laboratory
Matrix analysis
Matrix analysis laboratory
Numerical solution of differential equations
Numerical solution of differential equations laboratory
Principles of digital computers
Programming for digital computers
Business and industrial analysis
Statistical methods-regression
Probability models
Linear programming
Game theory
Monte Carlo techniques
Data processing
Systems and analysis
Information theory
Switching and computing circuits
Theory of coding
Information storage and retrieval
Documentation and classification.
Such topics are covered not only in normal courses,
but in special courses, seminars, symposia, and lectures.
It may be seen that even this incomplete list covers
aspects in the design and design models for computers,
programming, and applications of interest to people in
the business schools, engineering, the physical and biological sciences, applied mathematics, statistics, industrial engineering, logic, etc. These courses reflect an
appreciation of the ability of a computer to handle new
computational models. They reflect an appreciation of
the theoretical techniques of computer design.
What is impressive is the lack of courses reflecting appreciation of the computer as an aid to routine mental

effort, a theory of computers, a theory of programming, a
theory of applications. This in turn is probably a reflection of the youth of the fields. Such theories have not
yet been born. When they are created and developed,
courses will undoubtedly follow.
Few, if any, of the university curricula are integrated.
Even those that do have an aspect of integration-as
the one at the Engineering School at the University of
Pennsylvania, or the one at the Business School at
U.C.L.A.-concentrate on the interests of people in
particular disciplines interested in computers or data
processing as tools for their own use exclusively. This
is not to say that this is bad. But it is narrower than a
university program might be.
Research-Subject Matter
The range and variety of potential research activity
in the design, programming, and utilization of computers, and in closely related fields is enormous.
Theses and research papers have been written on
topics in computer logic, automatic coding, switching
theory, coding theory, neurophysiology, inventory control, production control, office automation, machine
translation, organization, classification and retrieval of
knowledge, and solutions of problems having computational models in almost every quantitative discipline.
Here, as in the curriculum, the fields of computer
theory, application theory, model theory do not yet appear to have been successfully attacked.

Equipment Facilities
Most of the equipment (see the Appendix) now present in universities in the United States has been selected
on the basis of a single criterion, the cut-rate price, in
spite of the fact that the selection committee would
sometimes have preferred other equipment on the basis
of technical and operational criteria. The effect of this
on the quantity and quality of the research output cannot be measured. However, it is clear that the basis for
selection of equipment ought to be the maximum benefit
to the university community. It is difficult to see how
any of the universities, now selling computing time to
sponsored contracts (in order to pay the bill, to be sure)
and having little time (usually the second shift) for
legitimate academic research and instructional pursuits,
will gain an outstanding reputation for solving university research problems with computational models.
They will become known for providing bargains for
service to sponsored (usually government) research.
The main objection to this sort of thing is that the
benefit to the university research staff derived from the
activity is fortuitous and incidental rather than the
planned, aggressive activity it ought to be and must be
for maximum benefit to the proper university function.
Some universities are still building their own computing equipment. There seems to be little reason to
attempt this now, even at the present prices of commercially available computers.

SOME OBSERVATIONS FOR A UNIVERSITY PROGRAM

Probably the most important reasons for the universities' inability to cope adequately with their new
role in instruction and research in these fields, are their
failure to identify clearly the role they want to play, and
their failure to try to fit the new fields to their role.
This section includes some observations that should be
aids in the conscious selection by universities of their
particular roles and of how to implement these selected
roles in these new fields.

The Function of the University
The legitimate function of a university in any society
has been the subject of ancient and continuing controversy. The controversy has revolved around the questions loosely described as training vs education; practical vs theoretical subject matter; routine vs nonroutine activities; scholarly vs professional endeavor;
and the various shades of grey in between. A. N. Whitehead stated and elaborated on one position at the dedication of the Harvard Business School in 1927,
This article will only deal with the most general principles, though
the special problems of the various departments in any university
are, of course, innumerable. But generalities require illustration, and
for this purpose I choose the business school of a university. This
choice is dictated by the fact that business schools represent one of
the newer developments . .. of university activity ....
There is a certain novelty in the provision of such a school of
training, on this scale of magnitude, in one of the few leading universities of the world. It marks the culmination of a movement which
for many years past has introduced analogous departments throughout
American universities. This is a new fact in the university world; and
it alone would justify some general reflections upon the purpose of a
university education, and upon the proved importance of that purpose
for the welfare of the social organism.
The novelty of business schools must not be exaggerated. At no
time have universities been restricted to pure abstract learning . ...
There is, however, this novelty: the curriculum suitable for a business
school, and the various modes of activity of such a school, are still
in the experimental stage. . . .
These reflections upon the general functions of a university can
be at once translated in terms of the particular functions of a business
school. We need not flinch from the assertion that the main function
of such a school is to produce men with a greater zest for business.
... Business requires a sufficient conception of the role of applied
science in modern society. It requires that discipline of character which
can say "yes" and "no" to other men, not by reason of blind obstinacy,
but with firmness derived from a conscious evaluation of relevant alternatives . ...

Whitehead spoke well. If we substitute for the field of
business, computers, data processing, and closely related fields (let us call them the "computer sciences"),
then universities may well select as their role:
1) Training of professionals in these fields of "computer sciences" of interest and competence in the university.
2) Training scholars in these fields.
3) Doing exploratory research in these fields.
4) Developing the subject fields into new disciplines.

Disciplines and the Computer Sciences
Many fields are now providing models for many other disciplines. Thus, work under the province of operations research, or game theory or decision theory or management science, or linear programming, or econometrics or
statistics is being applied to business problems like
market analysis, inventory control, long-range planning
etc. In many instances, a computer is used to process
data, solve equations, do analyses, even make decisions
based on criteria it has been given.
Switching theory, coding theory, information theory,
Boolean algebra provide models not only for design and
programming and applications of computers, but also
of analogous fields like neurophysiology.
The computer thus provides a significant link among
various established disciplines as well as those fields of
endeavor of intense present interest. Computers are related to other fields in one or more of three ways:
1) Workers in these fields use the computer as a
mechanical or a mental aid.
2) These fields provide models useful in the design,
programming, and applications of computers.
3) Analogies exist in the internal structure and organization of a computer with structures and organizations in other fields.
It seems plausible to designate the fields mentioned
above and those enumerated in the Introduction, as the
"computer sciences" since they are related to each other
in one or more of the three ways enumerated above.
We must expect that some of these fields will coalesce
and develop into disciplines on their own. These will
then almost certainly be universally accepted as the
legitimate province of the university scholar. Others
may not turn out to be disciplines and will gradually be
abandoned by universities. But not knowing now which
field(s) will meet which fate, the university scholar must
presently be interested in all of them.
The term "discipline" has been tossed around loosely,
so perhaps we had better try to define it.
Although a set of agreed-to criteria does not exist by
which one can determine whether or not a field rates as
a discipline, there are several characteristics of accepted
established "disciplines" that may be referred to when
one has the problem, as we do, of deciding whether or
not a given field is potentially a discipline.
Established disciplines, like mathematics, have the
following characteristics:
1) The terminology has been established, a glossary
of terms exists.
2) Workers in the field do nonroutine intellectual
work.
3) The field has sometimes been axiomatized.
4) The field is open, i.e., problems are self-regenerating.
5) There is an established body of literature, textbooks, sometimes treatises-even handbooks-and professional journals.
6) University courses, sometimes departments, and
indeed schools are devoted to the field.


Most aspects of computers, data processing, and the
'related fields discussed in this study now meet these
specifications or may be meeting them in the next ten
years. It is clearly the job of university people to help
create the axiomatization, theory, terminology, curriculum, etc. Computer science is not an isolated field. It is
interdisciplinary. It is analogous to a library, or mathematics, where library science or mathematics are disciplines in themselves as well as providing service tools
to other disciplines. Thus, future university courses of
instruction and research should be provided in the unit
devoted to the disciplines of interest and in units using
these disciplines as tools.

The Computer
Too much emphasis has been placed on the computer
equipment in university programs that include fields
in the computer sciences. Curriculum and research programs have been designed as supplements to the computing equipment. The reverse should be the case.
Computers should be supplements to a well-organized
and integrated university program in the computer
sciences! An important supplement, to be sure, but a
supplement. Indeed, an excellent integrated program
in some selected fields of the computer sciences should
be possible without any computing equipment at all!

Universities and Their Competition
Universities must recognize and design around the
fact that both industry and government have already
undertaken and will continue to participate in the university's traditional role of instruction and research in
the computer sciences. This implies competition with
government and industry for personnel, equipment, and
financing. The university structures and policies built
over the years to cope with traditional situations may
not be adequate to cope with the present situation.

A RECOMMENDED UNIVERSITY PROGRAM

The following is a detailed integrated program designed around the observations made in the previous
section.

Organization
A graduate school, called the Graduate School of
Computer Sciences, should be formally created. A policy
committee, including a dean and his academic and
service department heads, should be appointed. Its
primary function will be to set up and implement policy
and budget on the function of the school, the curriculum
for an integrated program of instruction and research,
its relation to other schools and departments of the university, and other pertinent points, including financing.
A "five-year plan" should be formulated.
The dean of the school should report to the president.
His responsible administrators will be scholars first and
foremost. Their professional fields of interest will be in
the fields of interest of the school, although they will

have an appreciation and respect for the problems,
philosophy, and ideals of scholars in fields other than
those of their own interest. They will be enthusiastic,
competent, and experienced.
The school administrators will not be promoters. The
problems of budget, financing, equipment support, and
public relations-all of which normally require the
talents and time of a promoter type-will, however,
be assigned to professionals in these fields. They will
report to the administration and will be observers in
policy meetings. Theirs will be a service activity and not
an academic one.
None of the academic administrators will be required
to show a profit on his activity. All activities with an
objective of making money will be organized under the
service activities associated with, but not an essential
part of, the academic unit.
The declared policy of the school concerning its fields
of interest should be that it is interested in instruction
and extension of knowledge in all fields that provide
either quantitative models, or techniques for solutions
of problems in quantitative models for phenomena of
interest to the scholar. The theoretical and experimental
fields enumerated earlier in this report are examples of
these fields.
Since it is recognized that some of these fields are disciplines in themselves as well as tools for use by other
fields, the Schools of Business, Industrial Engineering,
Electrical Engineering, Mathematics, Physics, Psychology, etc., should be encouraged, when they find it
desirable, to give courses and even to do research in
fields normally covered by the new graduate school.
But it is the responsibility of the new school to program
an integrated activity for the whole university.
The declared and announced policy of the school
should be that its function is fourfold: 1) To train professional scientists; 2) to train scholars; 3) to do exploratory research; and 4) to develop the new disciplines.
These are only a few policies recommended for adoption. It cannot be emphasized too strongly, however,
that the significant recommendations here are not so
much in the detailed recommendations enumerated
above, but in the recommendations for deliberate policy
making and policy review of these matters.

Departmental Structure
The departmental structure of a new school must of
necessity be arbitrary. One structure for consideration
would consist of four departments as follows:
Department 1-Computer Department: This department would include groups concerned with computer
equipment and service, faculty instruction in programming and computational models, model making, automatic programming, logic, computer organization, computer mathematics, computer
theory, component and circuit research (hardware) if
not covered in other departments, systems research
(hardware) if not covered in other departments, etc.

Instruction and research in these fields will fall in
this department. The computer equipment used in a
service bureau or for instruction or research will be
under the cognizance of the Computer Department
head.
Computation center: The purpose of the "computation
and data-processing center" is to provide the tool for
solutions of problems that have been cast into computational models by members of the university community.
The head of this center, who should be a scholar
primarily, will be responsible for organizing the center
operation for maximum efficiency. This will involve
educating the general faculty, providing programming
and operating assistance, promoting research and development of automatic programming and handling
techniques, as well as assisting in the creation of computational models. By whatever means the equipment,
facilities, and personnel are financed, the head of the
computing center should not be obligated to show an
operating profit. If profit is a prerequisite, then he
should by all means go into business' for himself. Functionally, the computer center should be organized into
groups individually responsible for: 1) faculty education; 2) programming, coding, and operation assistance;
3) model construction assistance; 4) research in automatic coding, handling, etc.; and 5) a planning, scheduling, and monitoring activity.
The future demand for the products and services
made available by a computing center depends, of
course, upon the willingness and ability of the research
faculty at the university to use them properly, as well as
upon the utility and applicability of computational
models in their work. It is true now that there is hardly
a field in the physical, mathematical, biological, or social
sciences that has not used computational models for research purposes. Future researchers in these fields will
be using computational models more and more. A computing center will fill an important need on the university campus. It may be said, "To the extent that a researcher does not use such facilities when they are available, applicable, and potentially useful, to this extent
a researcher is not doing a competent job."
Department 2-Operations Research: This department
will cover instructional and research activities in operations research, linear programming, dynamic programming, game theory, queueing theory, and decision
theory.
Department 3-Information and Communication: This
department will cover instruction and research activities
in information theory, switching theory, coding theory,
automata theory, artificial intelligence, learning, language translation, and theory of simulation.
Department 4-Systems: This department will cover
instruction and research activities in management
science, econometrics, systems theory, information classification, indexing and retrieval, model theory, self-organizing systems, and adaptive mechanisms.


Curriculum
The curriculum will contain courses, seminars, lectures, etc., that constitute an integrated program for
students pursuing an advanced degree. What constitutes an integrated program will be determined as part
of school policy. Special courses, seminars, or symposia
may be given as the need arises. The curriculum will
be expanded or modified in accordance with periodic
scheduled evaluations by curriculum personnel. Extensive special studies of what other universities and
industry and government are doing will make appropriate information available to the curriculum evaluators.
The curriculum will also reflect the needs of other
departments and schools within the university.

Faculty

One of the reasons for recommending an independent
entity such as a graduate school-rather than a department in an already existing school or interdepartmental
committee-is to allow a freedom of choice in policy
that can respond to the practicalities of the situations
faced by the university today, without the constraints
imposed by an existing structure designed to cope with
situations that no longer exist. Thus, the department
heads, if not others, may command perhaps $20,000 a
year for their services in today's competitive market. If
this is indeed so, then the school should plan to put itself in a position to obtain these people and to pay such
prices.
Each course, lecture, and symposium will be conducted by faculty members who are competent to teach
the course, give the lecture, or conduct the symposium.
The faculty will be interested, enthusiastic, and competent, and preferably experienced. They will be paid a
salary commensurate with their experience and ability
on a scale more in keeping with present industrial scales,
not present university scales.
No faculty member will be assigned to instruct because his teaching schedule isn't full, or because he is
a competent researcher who "ought" to teach, or because he is enthusiastic but inexperienced in the field.

Research and Related Activities
Research by graduate students or the faculty cannot
be a planned affair, of course. One would expect to see
research activity on selected topics of interest and
within the capabilities of the school.
However, there are related activities that may be
arbitrarily categorized as research having to do with the
development of new disciplines and the incorporation
of information developed by research into the curriculum. A well-established discipline usually has a well-established terminology and a glossary, axiomatization,
and textbooks, treatises, handbooks, journals, and
organizations. Thus one should expect to see the research and instruction faculty engaged in establishing
terminology, axiomatizing a field, writing or editing


books, journals, and other such material, and even
helping to organize professional societies.

Manual Aids
Manual aids, such as desk computers, computresses,
bookkeeping machines, punched card machines, and the
like, have been extremely valuable to university scholars
and students. The availability of modern high-speed equipment means that the quality and quantity of university
output should be improved to the extent that this kind
of manual aid is made available and used efficiently.
It may be here noted that the quantity of the research output should increase not only because of the
high speed and versatility of the electronic aid, but also
because more of the problems, tasks, and the models
created in the various disciplines become tractable with
the use of the new aid. The quality should improve for a
similar reason. There exist models of various phenomena
in almost all disciplines whose problems have not been
soluble with present techniques.
In many cases simplifying assumptions are made and
the model is revamped in order to make the problems
soluble. In these cases, the wrong problems have been
solved quite accurately. Present high-speed equipment
makes possible the accurate solution of more of the
right problems.
The availability of high-speed computing systems
and efficient operating crews has a much greater impact
on the university output than, for instance, the availability of batteries of desk calculators and operators. The
curriculum, research program, and even faculty education are affected. Scholars and students will have to
learn enough about the available devices to use them
well with their present models. This knowledge should
serve to motivate them to the creation of new machine-soluble models. In fact, a general theory of techniques for machine-soluble model construction in any discipline seems clearly to be a job for the
university scholar.
This is an unexpected but inescapable development
growing out of the computer as a manual tool!
In addition, those techniques peculiar to the design
and programming of high-speed equipment should be
developed, if they can serve to make it a more efficient
aid for manual routine work and especially if it results
in an aid that could help handle more models. As a
basis for this endeavor, there will undoubtedly be required at least a competent theory of computing machines,
a competent theory of models, and a competent theory relating the two.

Mental Aids
Equipment for aid in routine mental tasks has fewer
precedents than equipment as an aid for routine manual
tasks. Computers are now being used to check results of
some mental efforts. For example, they are being used
to check the logical design of other computers. The ability to use equipment as an aid to do routine mental
work, and thus make it literally an adjunct of one's self
in creating new disciplines or developing existing ones,
may turn out to be a discipline in itself. To be sure, this
is rather vague, but this in no way weakens the conviction that it is a legitimate interest of the university. The
university scholar must also be interested in the peculiar
programming and design techniques of an equipment
that will make the equipment most useful as an aid to
routine mental work.

Financing and Student Body
Specific recommendations for sources of funds are
not made here. The strong conviction exists that the
present growing demand for this kind of activity by industry and government is so great that students and
money should be knocking on the door where such a
program is available.
Students will come from government, industry, and
undergraduate schools. They will be interested in later
applying their new knowledge to industrial or business
pursuits; others will wish to be educated to become researchers and scholars in selected fields. How many of
these aspirants attend the university will be a measure
of the appeal, if not the distinctiveness, achieved by the
university in these fields.
APPENDIX
SUMMARY OF COMMERCIALLY AVAILABLE COMPUTERS INSTALLED AT UNIVERSITIES USED FOR UNIVERSITY RESEARCH AND INSTRUCTION

Computer                 Number Installed    Estimated Number to Be Installed in 1959
IBM 650                  65                  No estimate
IBM 709                  1                   No estimate
IBM 704                  2                   No estimate
IBM 701                  1                   No estimate
Sperry Rand 1101         1                   )
Sperry Rand 1103         2                   ) 5 large-scale machines
Sperry Rand 1105         2                   )   (models undetermined)
Univac 1                 4                   )
Burroughs 220            1                   2
Burroughs 205            7                   6
Bendix G-15              12                  12-24
Royal McBee LGP-30       13                  20
NCR 102D                 3


The RCA 501 Assembly System
H. BROMBERG†, T. M. HUREWITZ†, AND K. KOZARSKY†
† RCA, Camden, N. J.

CURRENT techniques in automatic coding attempt to shift the user's tasks away from the computer and closer to his application. By sacrificing coding details in this way, it is believed, monumental savings will result both in computer acceptability and utilization, since everyone is now able to describe his own programs. Universal acceptance of problem oriented languages has, for several reasons, not yet
followed. One influence is that the generated object
programs reflect the adroitness of the executive routine
and the remoteness of the input language.
In a recent count, there appear to be well over one
hundred automatic coding systems produced for twenty
or more different computers. This reflects the recognition of the disparity that exists between the methods of
problem preparation and actual problem solution.
By most methods of classification, these hundred-odd
automatic codes range more or less continuously between extremes. They vary considerably in complexity,
extent of problem area of useful application, and in
range of intended user.
One categorization used to differentiate these automatic codes is by the sophistication of the input language-particularly whether this language is "problem
oriented" or "machine oriented." However, it must be
admitted that if a ranking of these automatic codes is
made according to efficiency of ' the object program, the
list would tend to be in nearly inverse order to that obtained by ordering on the level of the input language.
It should also be noted that evaluations of the academic
aspects of these automatic codes are often greatly at
variance with the judgments of the occasionally unfortunate users of these routines.
This is not to say that object program efficiency is
the only value criterion of an automatic system. Frequently for short programs or where the capacity of the
data-processing equipment greatly exceeds the required
performance, it is almost irrelevant. But, in instances
where object program efficiency is significant, alternative coding procedures are desirable.
It is conceded that the Problem Oriented Language
deservedly has greater prestige than the Machine Oriented Language and greater theoretical interest (at least
from a philosophic or linguistic point of view). Nevertheless, the current mechanization of these languages
and the distribution of computer expenses dictate demands for both types. It is recognized that direct, facile
communication between the layman and his computer
as well as the advantage of interhuman communication
of the problem definition are obtainable from a Problem Oriented Language. However, there are also needs for
programs handling tasks near the limits of the equipments' capabilities as well as for infrequently changing,
very highly repetitive, data-processing routines.
One of the often expressed goals in automatic coding
is the development of complete problem oriented languages entirely independent of any computer. To produce any "most efficient" coding in this circumstance
means that, among other things, psychological inferences as to the intentions of the writer are to be made
by the automatic code. Furthermore, the apparent
trend in machine design toward many simultaneous
asynchronous operations, multiprogramming and the
like, increase the problems associated with producing
efficient machine programs from a problem oriented language. It is hardly unreasonable for a user of the new
potentially powerful systems to request a coding scheme
capable of using these complex, expensive features.
It appears likely, then, in the near future at least,
that some problem oriented languages will be augmented by some prosaic statements, directly or indirectly computer-related, which will permit attainment of a
more "most efficient" machine code. Similarly, machine
oriented languages may also yield to this trend and incorporate some features, within their inherent limitations, which tend to be associated with problem language codes.
All this may justifiably be construed as motivation for
the automatic routines offered with the RCA 501. The
first of these, a machine oriented automatic code, the
RCA 501 Automatic Assembly System, is described in
this paper.
The Assembly System provides for: relative addressing of instructions and data; symbolic references for
constants and data; macro-instructions and subroutines; variable addresses; and descriptor verbs.
SOME MACHINE CHARACTERISTICS

It is appropriate, as background for what follows, to
describe briefly some of those features of the 501 Computer (Fig. 1) which have influenced the design of the
Automatic Assembly System. (Incidentally, this computer has been in operation in Camden since April,
1958.)
The 501 Computer has a magnetic core storage with
a capacity of 16,000 characters, which is increasable in
steps of the same size to a maximum of 262,000 characters.
Each character, consisting of six information bits and
one parity bit, is addressable, although four characters
are retrieved in a single memory access. Binary addressing of the memory is provided, requiring 18 bits or three
characters per address.
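To make these figures concrete (a modern Python sketch; the constant and function names are ours, not RCA's): eighteen bits address 2^18 = 262,144 locations, in agreement with the stated maximum, and pack exactly into three six-bit characters.

BITS_PER_CHAR = 6          # six information bits per character (parity excluded)
CHARS_PER_ADDRESS = 3      # "18 bits or three characters per address"
ADDRESS_BITS = BITS_PER_CHAR * CHARS_PER_ADDRESS   # 18
MAX_MEMORY = 2 ** ADDRESS_BITS                     # 262,144, the stated maximum

def address_to_chars(addr):
    """Split an 18-bit address into three 6-bit characters, high-order first."""
    assert 0 <= addr < MAX_MEMORY
    return [(addr >> (BITS_PER_CHAR * i)) & 0o77 for i in (2, 1, 0)]

def chars_to_address(chars):
    """Reassemble three 6-bit characters into an address."""
    addr = 0
    for c in chars:
        addr = (addr << BITS_PER_CHAR) | (c & 0o77)
    return addr

addr = 16000
assert chars_to_address(address_to_chars(addr)) == addr
print(MAX_MEMORY, address_to_chars(addr))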


Fig. 1-The 501 computer.
Fig. 2-A description of one instruction: operation code (O), A address (AAA), address modifier (N), B address (BBB).
Fig. 3-Control registers: A and B store the addresses of operands during execution; P stores the address of the next instruction; T stores a third memory address.

The instruction complement consists of 49 two-address instructions (see Fig. 2). Each instruction consists of:
One character for the operation symbol
Three characters for the A address
One character for the selection of address modifiers
Three characters for the B address,
for a total of eight characters or sixteen octal digits.
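In modern notation the layout may be sketched as follows; the numeric field values are illustrative only, since the actual 501 character codes are not reproduced here.

# Sketch: pack a 501-style two-address instruction into 8 six-bit
# characters (48 bits, printable as sixteen octal digits). Encodings
# are illustrative, not RCA's.
def pack_instruction(op, a_addr, modifier, b_addr):
    chars = [op & 0o77]                                   # operation symbol
    chars += [(a_addr >> s) & 0o77 for s in (12, 6, 0)]   # A address, 3 chars
    chars.append(modifier & 0o77)                         # address-modifier selection
    chars += [(b_addr >> s) & 0o77 for s in (12, 6, 0)]   # B address, 3 chars
    word = 0
    for c in chars:
        word = (word << 6) | c
    return word

word = pack_instruction(op=0o23, a_addr=16000, modifier=0o5, b_addr=32)
print(f"{word:016o}")   # sixteen octal digits, as stated in the text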
There are several control registers, each of which
stores 3 characters, or one address (see Fig. 3). The
A and B registers are used to store the A and B addresses of operands during their execution. The P or
program register stores the memory address of the next
instruction in sequence. The T register stores a memory
address which is made use of by certain instructions
which require three addresses during their execution. All
of these registers are addressable and therefore directly
accessible to the program.
Seven address modifiers are available. Four of these
are standard memory locations and three are the A, T,
and P registers. Use of the P register as an address
modifier permits the writing of self-relative machine
coding which may be operated, without modification,
in any part of the memory.
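That is, when the P modifier is selected, the contents of the program register are added to the address field at execution time, so the coding is position-independent. A minimal sketch in modern notation, with our own names standing in for the hardware:

# Sketch of self-relative addressing via the P register (model ours).
def effective_address(field, modifier, p_register):
    """Add the P register (address of the next instruction) to the
    address field when the P modifier is selected."""
    return field + p_register if modifier == "P" else field

# The same instruction, operated in two different parts of the memory:
print(effective_address(24, "P", 1000))   # 1024
print(effective_address(24, "P", 5000))   # 5024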
The RCA 501 exploits certain control symbols (Fig. 4)
in the data. The Start and End Message control symbols define a message on tape, and in the memory also
act as control symbols for certain instructions. The Item
Separator control symbol is not used as such on tape,
but is used in the memory to control certain operations.
These control symbols permit variable item lengths
and variable message lengths both on tape and in the
memory. The entire message may be variable, dependent upon the number and size of the individual items.
The instruction complement includes both symbol-controlled and address-controlled operations.
The 501 includes provision for simultaneous read-write, read-compute, and write-compute. This is accomplished by designating magnetic tape instructions
as "potentially simultaneous" and establishing a program controlled gate between the normal and simultaneous modes of operation. Thus, the programmer can

Fig. 4-Control symbols: Start Message (<) and End Message (>) define a message on tape and control operations in memory; the Item Separator (·) controls operations in memory. A typical message: <·12564·John-Doe·8934·7>.

permit completely automatic switching of tape instructions to the simultaneous mode or he may optionally
bracket off portions of the program where such switching is inconvenient.
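The gate may be pictured as a single program-settable flag consulted whenever a potentially simultaneous tape instruction is issued. The following sketch is our own model of that behavior, not RCA's specification:

# Sketch of the program-controlled simultaneous gate (model ours).
class TapeController:
    def __init__(self):
        self.gate_open = True    # automatic switching permitted by default

    def open_gate(self):
        self.gate_open = True

    def close_gate(self):
        self.gate_open = False

    def issue(self, tape_op):
        """A 'potentially simultaneous' tape instruction runs overlapped
        with computation only while the gate is open."""
        mode = "simultaneous" if self.gate_open else "normal"
        print(f"{tape_op}: {mode} mode")

tc = TapeController()
tc.issue("READ TAPE 1")     # switched to the simultaneous mode
tc.close_gate()             # bracket off a region where switching is inconvenient
tc.issue("WRITE TAPE 2")    # forced to the normal, sequential mode
tc.open_gate()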
As for the 501 Assembly System, there are two programmer-prepared inputs. To guide the Assembly System in generating a running program from the pseudocode, the user provides the system with a description of
the data files which the program is designed to process.
A portion of a data sheet is illustrated in Fig. 5.
Certain auxiliary computations are performed on
these data sheets and the results printed out for the
programmer's information-such as average message
lengths, approximate tape passage time, and weighted
average.
DATA ADDRESSING

Completely variable length data, on the one hand,
yields economies in tape storage and effective file passage time; on the other hand, it presents certain problems with respect to symbolically addressing data items
in the memory. These problems are handled in several
different ways:
1) Those items whose lengths are fixed relative to the
beginning of the message may, of course, be directly addressed by the data name designated in the
data sheets.
2) The variable length items of a message may be
transferred to a working storage area of memory
where space is allocated for the maximum possible
size of each variable length item. A single pseudoinstruction performs this function. From this
point on, the variable length item may be directly

addressed by the data name preceded with a W (representing working storage).
3) It is possible to locate the address of any item in a message by using an instruction which scans a message, searching for and counting the control symbols defining items. This instruction leaves the address of the item in an address modifier.

Fig. 5-The automatic code data sheet. (Its columns include item number, sub-item, abbreviation, description, number of characters-maximum and average-% use, sign, and weighted average.)
Fig. 6-The automatic code program sheet. (Its columns are instruction number, comments, OP, A address, B address, T address, CSG, and three IF-GO TO entries.)
Fig. 7-Assembly pseudocode entries. (Sample lines show mnemonic operations such as LRF, TEST, SC, DEFK, DA, and ADV, symbolic addresses such as POLCY, DATE, KBEN, RATE, TAX, and WRATE, the literal "122558", and the variable address +V4.)
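The effect of the scanning instruction of 3) can be sketched in modern notation; Python stands in for the machine instruction, and the message follows the conventions of Fig. 4:

# Sketch: locate the nth item of a variable-length message by counting
# the control symbols that define items (model ours).
START, END, SEP = "<", ">", "·"

def find_item(message, n):
    """Return (position, text) of the nth item (counting from 1)."""
    count, start = 0, None
    for pos, ch in enumerate(message):
        if ch == SEP:
            count += 1
            if count == n:
                start = pos + 1              # item begins after its separator
            elif count == n + 1:
                return start, message[start:pos]
        elif ch == END and start is not None:
            return start, message[start:pos]
    raise IndexError("item not found")

msg = "<·12564·John-Doe·8934·7>"
print(find_item(msg, 2))    # (8, 'John-Doe'); the hardware would leave
                            # this position in an address modifier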
The second input, whose format is shown in Fig. 6, is
the pseudocode written on the program sheets. This is seen to be an expansion of the machine code
format which normally includes the operation field and
the two addresses A and B. This is augmented on this
sheet by the T or third address which for several machine instructions requires presetting. Furthermore,
there are provisions for 3 "IF-GO TO" statements providing for conditional or unconditional transfers of con'
trol.
The inclusion of these IF-GO TO statements as an optional part of every pseudoinstruction line has two
primary motivations. First, it accommodates as a single
pseudocode statement the function "Compare and
Jump" which has a relatively high frequency in dataprocessing problems. Second, about i of the 501 instructions automatically set a register to 1 of 3 states depending on conditions encountered during the operation of
the instructions. Branching instructions may then be
used to select different paths depending on the setting
of the 3 state register and are easily designated in the
IF-GO TO columns.
A single character entry in the CSG column generates
an instruction to open or close the simultaneous gate,
controlling the phasing of simultaneous tape operations.
VARIABLE INSTRUCTION GENERATION

An interesting feature of the Assembler is the handling of the normal complement of 501 machine instructions. Each of these instructions, when used in the expanded pseudocode format, assumes the identity of a
macro-instruction. Up to five two-address machine instructions may be generated by employing a single
machine operation code with appropriate entries along
the pseudocode line. As an example, the 501 instruction, Decimal Subtract, may accomplish not only a
three address subtraction, but also a simultaneous gate
operation, and transfers of control dependent upon the
sign of the difference.
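The fan-out from one pseudocode line may be sketched as follows; the operation names and expansion rules are our reconstruction from the text, not the Assembly System's actual tables:

# Sketch: expand one pseudocode line into up to five 2-address machine
# instructions (field names follow Fig. 6; expansion rules are our guess).
def expand(line):
    machine = []
    if line.get("gate"):                       # single-character CSG entry
        machine.append(("GATE", line["gate"], None))
    if line.get("t"):                          # preset the T (third) address
        machine.append(("SET-T", line["t"], None))
    machine.append((line["op"], line["a"], line["b"]))    # the instruction proper
    for condition, target in line.get("if_goto", []):     # up to three IF-GO TO
        machine.append(("BRANCH-" + condition, target, None))
    return machine

# Decimal Subtract with a gate operation and sign-dependent transfers:
line = {"op": "DS", "a": "RATE", "b": "TAX", "t": "WRATE",
        "gate": "OPEN", "if_goto": [("+", "PR040+1"), ("-", "PR023+5")]}
for instr in expand(line):
    print(instr)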
SYMBOLIC AND RELATIVE ADDRESSING

Included in the automatic system are such features as
mnemonic operation codes, symbolic and relative addressing including the use of both alphanumeric and
octal literals for constants, and acceptability of machine
code should it be desired. Fig. 7 shows examples of pseudocode entries.
The Instruction number is composed of characters
designating the page and line number of the instruction.
When addressing an instruction, reference is made to
that instruction whose elements appear in the Operation, A, and B fields of the program sheet. Since, however, one single pseudoinstruction may account for as
many as five machine instructions, it is desirable to
address those. Therefore, a stipulated suffix to an instruction number will allow a reference to any generated instruction and to any field or desired character
within one of these instructions.
Relative addressing of these symbolics enables the
programmer to refer to the Nth pseudoinstruction following or preceding a given symbolic. The program will
be ordered by page number in alphabetic sequence and
within pages by line number before any processing is
undertaken. Accordingly, to accomplish an insertion,
one need only assign appropriately sequential labels to
the desired instructions and the program will place
them in the proper positions.
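In miniature, with instruction numbers invented for illustration, the ordering rule looks like this:

# Sketch: order pseudoinstructions by page designation (alphabetic),
# then by line number, as the Assembler does before processing.
import re

def sort_key(instr_no):
    page, lineno = re.match(r"([A-Z]+)(\d+)", instr_no).groups()
    return (page, int(lineno))

program = ["PDQ8", "PRO1", "PDQ18", "PRO20", "PDQ9"]
print(sorted(program, key=sort_key))
# ['PDQ8', 'PDQ9', 'PDQ18', 'PRO1', 'PRO20'] - an insertion needs only
# an appropriately sequential label to land in the proper position.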
It is not necessary, however, to label every instruction. Relative addressing allows reference to be made
to unlabeled instructions in the program. One would usually expect the first instruction of a sequence performing a logical function, and those to which frequent reference is made by other instructions, to be labeled; but this is left solely to the discretion of the programmer.

VARIABLE ADDRESSING

Another programming aid incorporated within the
system is the variable address feature. A variable address allows the specification of addresses or constants
to be symbolically named and to be defined later in the
program. A variable may be substituted for any other
machine or symbolic address in any instruction. This
feature, for example, permits tagging, as a variable, the
address of an instruction not yet written. It is only
necessary then, at a subsequently convenient moment,
to employ the Descriptor Verb, "Define V," to supply
the actual address of this variable for every place it was
used.
It is also possible to use variable addresses in addition to any machine or symbolic address. This is accomplished by placing the variable address in the same
column as the one to which the variable is to be applied
and in the directly succeeding line. A plus or minus prefix will then specify addition or subtraction of the variable address. A variable to be added or subtracted will
not be applied until the variable is converted to an actual machine address. The use of variable addresses,
then, allows for symbolically designated modification of
the program at the actual or machine code level.
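A two-pass sketch of the mechanism in modern notation; the representation is ours, and only the "Define V" behavior and the plus or minus prefix come from the text:

# Sketch: resolve variable addresses after the fact, as with "Define V".
def resolve(instructions, definitions):
    """Each address is a number, a bare variable name, or a
    (base, '+'/'-', variable) triple; definitions maps variables
    to the actual machine addresses supplied later."""
    out = []
    for op, addr in instructions:
        if isinstance(addr, tuple):            # variable applied to a base address
            base, sign, var = addr
            delta = definitions[var]
            addr = base + delta if sign == "+" else base - delta
        elif isinstance(addr, str):            # variable standing alone
            addr = definitions[addr]
        out.append((op, addr))
    return out

code = [("TRA", "V4"), ("DA", (1500, "+", "V4"))]
print(resolve(code, {"V4": 120}))    # [('TRA', 120), ('DA', 1620)]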
LITERALS

Literals, or constants whose address and name are
identical, are used in the assembler. Two types of literals are provided, alphanumeric literals for operations
with data, and octal literals for operation with instructions which, it has been noted, are binary coded.
A literal is normally carried along with the segment in
which it appears. However, a terminal character of the
literal may be used to specify that the literal be stored in
a common constant pool available to all segments of a
program. A terminal character may also be used to
designate and to differentiate among duplicated copies
of the same literal in the program. Here, too, these
duplicate literals may be associated with the segment
or with the common pool of constants.
Alternatively, of course, constants may be defined by
a "Define Constant" Descriptor Verb and assigned an
arbitrary symbolic address. Terminal characters on
these constants perform the same functions with these
constants as those just described.
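The storage decision may be sketched as follows; the terminal-character marker used here is our invention, since the text does not say which characters serve the purpose:

# Sketch: route a literal to its segment or to the common constant pool
# according to a terminal character (the marker '*' is invented).
segment_literals, common_pool = {}, set()

def store_literal(literal, segment):
    if literal.endswith("*"):                      # terminal character present:
        common_pool.add(literal[:-1])              # one copy shared by all segments
    else:                                          # carried along with its segment
        segment_literals[(segment, literal)] = segment

store_literal("122558", "SEG1")     # stays with segment SEG1
store_literal("122558*", "SEG2")    # goes to the common pool
print(segment_literals, common_pool)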

DESCRIPTOR VERBS

Descriptor verbs constitute an important part of the
automatic code. These verbs contribute, in general, only
to the description of the program and do not become a
directly converted active part of the machine code.
These special verbs perform a variety of functions
such as the definition of program segments, overlaying
memory regions, reserving areas of memory, extracting
the machine address corresponding to any symbolic
name, defining constants and variables and providing
for insertions, deletions, and corrections in pseudocode. These verbs are executed during assembly and are deleted from the final program.

MACRO-INSTRUCTIONS

The macro-instructions included with the Assembler
create 2-address symbolic coding which is spliced directly into the main body of coding in place of the macro-instruction pseudocode call-line. A single macro-instruction will generate all of the instructions required to perform some task which would normally require the writing of a sequence of machine instructions.
Parameters, which the macro-instruction uses, are
specified at the pseudocode call-line by the programmer. No restrictions exist as to number or size of these
parameters. If a macro-instruction is to be generative, it
contains one other part aside from the main body of
stored coding. This part decides, from an interrogation
of call-line parameters, which particular set of macro-instruction coding is to be included in the main routine.
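A generative macro may be sketched in modern notation; the macro body and its parameter test are invented for illustration, and only the call-line mechanism follows the text:

# Sketch of a generative macro-instruction: the stored body interrogates
# its call-line parameters to decide which coding to splice in.
def move_item_macro(params):
    """Generate 2-address symbolic coding for a MOVE, choosing the
    fixed- or variable-length sequence from the call-line parameters."""
    if params["length"] == "VARIABLE":
        return [("SCAN", params["src"], "MOD1"),      # locate the item first
                ("MOVE", "MOD1", params["dst"])]
    return [("MOVE", params["src"], params["dst"])]   # direct, fixed-length case

# Two call-lines, one macro, two different expansions:
print(move_item_macro({"src": "DATE", "dst": "WDATE", "length": "FIXED"}))
print(move_item_macro({"src": "NAME", "dst": "WNAME", "length": "VARIABLE"}))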
SUBROUTINES

The assembly system provides for an expandable library of subroutines to be available to the programmer.
These subroutines generate assembly language pseudocode and as such may use all the assembly features such
as macro-instructions, descriptor verbs, and so forth.
Subroutines may be open or closed and generative or
fixed.
Parameters for subroutines are specified at the pseudocode calling line. For open subroutines, parameters
are incorporated during the operation of the assembler.
These parameters may merely be substituted in the
subroutine as in the case of a fixed routine or may be
subject to considerable testing and manipulation as occurs with a generative subroutine.
Closed subroutines may either incorporate parameters during assembly or use parameters generated by
the running machine program. In this case the parameters are located relative to the subroutine call-line.
The design of the system is open to the extent that
any useful number of macro-instructions and subroutines may be added.

PROVISIONS FOR PROGRAM MODIFICATION

The Assembly System offers two main listings for program up-dating. First, listings are given of the object machine code and the Assembly language pseudocode. Second is a list of all symbolic addresses and those instructions referring to them. In addition, the Assembly System generates an information block preceding each object program. This block, which contains all program stops, breakpoint switches, and tape addresses, is available for input to a service routine which will modify any corresponding entries within the object program.
There are two types of error indicators used by the Assembler. One causes the Assembly System to print the source of trouble and stop immediately. The other and major class consists of on-line printed statements indicating the type and location of errors. In this case the Assembly System continues its functions, ignoring the "guilty" statements until all such indicators have been found. This permits the user to specify corrective measures for all errors at one time.
In summary then, the 501 Assembly System lies in an intermediate category. On the one hand, it is definitely machine oriented, amplifying the 501 instruction complement and requiring a knowledge of the 501. However, it also provides for a flexibility of order statements, not confined to the 2-address machine order code. A variable number of machine instructions are generated dependent upon the number and types of entries made on each pseudocode line. Both macro-instructions and subroutines may be of the generative type and, since the library is open-ended, may be augmented whenever necessary.
In short, the RCA 501 Assembly System is a programmer's aide, enabling him to make maximum use of machine capabilities with a minimum of clerical effort.

ACKNOWLEDGMENT

The authors acknowledge the extensive contributions
of M. J. Sendrow, who participated in the planning
and creation of the RCA 501 Assembly System.

A Program to Draw Multilevel Flow Charts
LOIS M. HAIBT†

INTRODUCTION

THE preparation of a program for a digital computer is not complete when a list of instructions
has been written. It still must be determined that
the instructions do the required job, and if necessary the
instructions must be changed until they do. Also, a description of the program should be written for others
who may want to understand the program. A useful tool
for the last purpose is a graphical outline of the program-a flow chart.

† IBM Res. Center, Yorktown Heights, N. Y.

Flow charts serve two important purposes: making a
program clear to someone who wishes to know about it,
and aiding the programmer himself to check that the
program as written does the required job. A flow chart
drawn by the programmer would serve for the first purpose, but drawing one is often a tedious job which may
or may not be done well. For the second purpose, it is
important to have the flow charts show accurately what
the program does rather than what the programmer
might expect it to do. Consequently, it was decided to
write a program, the Flowcharter, for the IBM 704
to produce flow charts automatically from a list of instructions. Another reason for the project was to get
further insight into the characteristics of computer programs.
Since programs in many programming languages and
even for different machines differ mainly in superficial
aspects, such as names and numbers for various operations and names and types of registers, it was decided to
have the Flowcharter do the main part of its work on a
common, machine-independent language and to have a
set of preprocessors, each of which would translate one
programming language into this common internal
language.
We also felt it was desirable not to attempt to show
the whole program as one chart, which for a moderate
size program would either present a confusion of detail
or be too general to serve the purpose. In order to provide both a good general picture of the program or of
any part of it, and a more detailed description of a
smaller piece of program, the Flowcharter produces a
series of flow charts on a number of levels of detail; each
part of a chart is shown in more detail on a succeeding
chart. How to determine the makeup of the charts was
one of the most difficult problems encountered in planning the Flowcharter.
Another feature of a flow chart is a description of
the procedure represented by each box. The Flowcharter provides a summary of the machine input-output done in the box and a summary of the computation
done in the box, listing the quantities computed and
those used in the computation of each of them.
DESCRIPTION OF THE FLOWCHARTER

The Flowcharter is composed of four main parts: the
preprocessors, the flow analysis, the computation summary, and the output program.
The preprocessors each do a simple translation from
the external instructions to the internal language. For
most machines, an instruction may represent several different processes done in the machine, such as fetching from memory, storing into memory, and instruction sequencing. These operations are each described separately in the internal language. One external instruction
is translated by the preprocessor into a suitable list of
these operations.
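A minimal sketch of such a preprocessing step, in modern Python and with a hypothetical three-entry opcode table standing in for a real external order code, might read:

    def preprocess(instruction):
        """Translate one external instruction into the list of internal,
        machine-independent operations it implies."""
        op, operand = instruction.split()
        table = {  # hypothetical fragment, not an actual machine's order code
            "CLA": [("fetch", operand), ("set", "AC"), ("sequence", None)],
            "STO": [("fetch", "AC"), ("store", operand), ("sequence", None)],
            "TZE": [("fetch", "AC"), ("branch-if-zero", operand)],
        }
        return table[op]

    print(preprocess("CLA ALPHA"))
    # [('fetch', 'ALPHA'), ('set', 'AC'), ('sequence', None)]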
Many of the problems which arose in designing a
Flowcharter were in the section which determines what
is to be shown on each chart. It is very easy for the programmer to mark off his program into logical parts, but
to determine these from the program itself is quite difficult in most programming languages. We have worked
out a set of techniques which we feel will do quite well
for most programs and will be acceptable in other cases.
We have also provided facilities for the programmer to
specify how he would like the breakdown done on various levels if he does not like the choices made by the
Flowcharter. The techniques used depend mainly on
analysis of the flow properties of the program but provision is also made in the Flowcharter for using the data
to help in the analysis. The Flowcharter is written in


such a way that various techniques and combinations of
techniques can be tested to see what results they give.
This flow analysis is done by iteratively forming regions from groups of subregions. The smallest subregions are individual instructions. In general, each
region will be represented by one flow chart and each
box drawn on the chart will represent a subregion of
that region. However, when it is reasonable, two or
more regions, each consisting of only two or three subregions, will be shown on one flow chart. This is done
to keep the output moderately compact. Also, those
regions which are formed directly from instructions are
not shown as flow charts but are given as a list of the
instructions in the region, with a reference to the page
on which this region is shown in context. (The Appendix
shows an example of this.)
The techniques used for region formation are of two
kinds, combination and division. A combination technique is one which starts with individual instructions
and, by repeated applications, combines them into
larger and larger regions. A division technique is one
which starts with the whole program and divides it into
smaller parts. Each of these parts is in turn divided
until each part consists of not more than six or seven of
the regions formed by the techniques of the first type.
Each technique is represented by a subroutine.
Each combination subroutine searches for a particular
configuration of flow in the program. Three such subroutines are: STRING, DIAMND, and TEST, which
look for "strings," "diamonds," and "test sets."
A "string" (see Fig. 1) is an ordered set of regions
satisfying the condition that every region, except the
first, has an entry only from the preceding region and
each, except the last, has an exit only to the next one.
A "diamond" (see Fig. 2) is a set of regions containing
a first region F, a last region L, and some intermediate
blocks. Each intermediate block must not have any
predecessor other than F nor any successor other than
L. All successors of F and predecessors of L must be in
the "diamond."
A "test set" is a set of regions which together make up
a compound test. A set of regions forms a "test set"
if each region ends with a test of the same special
register. Also, every region except the first may have
only one predecessor which must also be in the set.
Finally, only the special register tested may be changed
by the instructions in any of the regions except, possibly, the first one. For example, consider the 704 SAP
instructions:
CLA  ALPHA
TZE  ISZERO
SUB  ONE
TZE  ISONE
SUB  ONE
TZE  ISTWO
SUB  ONE
TZE  ISTHRE

Fig. 1-In each case the dotted lines enclose a "string." (Circles represent regions formed earlier and solid lines represent paths of flow in the direction of the arrow.)

Fig. 2-In each case the dotted lines enclose a "diamond." (Circles represent regions formed earlier and solid lines represent paths of flow in the direction of the arrow.)

The pair CLA, TZE, and each pair SUB, TZE make
up a region found by STRING; then these four regions
will be combined by TEST.
It should be pointed out that the first two configurations, "strings" and "diamonds," are sufficient to describe most programs. Iterative loops do not have to be
taken care of separately; when the program within the
loop is combined into a region, the return path of the
loop is also included in the region. For example, in Fig.
1(a) and Fig. 2(d), the return path, P, although not a
part of the string or diamond, is a link between the subregions forming the region and is therefore included in
the flow chart of that region. The example used to show
the output of the Flowcharter is a program for which
only STRING and DIAMND are needed.
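The condition STRING searches for can be stated compactly. The sketch below, which assumes a region graph given as successor and predecessor sets (the graph and all names are illustrative, not drawn from the Flowcharter itself), grows one such chain; the return path from C to A, as in Fig. 1(a), is simply left inside the resulting region.

    def find_string(succ, pred):
        """Return one "string": a chain in which every region except the first
        has entry only from the preceding one, and every region except the
        last has exit only to the next."""
        for start in succ:
            chain, node = [start], start
            while len(succ[node]) == 1:          # a single exit ...
                nxt = next(iter(succ[node]))
                if len(pred[nxt]) != 1 or nxt in chain:
                    break                        # ... into a region with other entries
                chain.append(nxt)
                node = nxt
            if len(chain) > 1:
                return chain
        return None

    succ = {"A": {"B"}, "B": {"C"}, "C": {"A", "D"}, "D": set()}
    pred = {"A": {"C"}, "B": {"A"}, "C": {"B"}, "D": {"C"}}
    print(find_string(succ, pred))               # ['A', 'B', 'C']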
The division subroutines attempt to discover particular configurations "in the large." Two such subroutines are UNWRAP and SPLIT which look for loops
and easily separable parts of the program. The division
subroutines are not allowed to separate the regions already built up by the combination routines.
UNWRAP determines if the program is essentially
one large loop; that is, it has an entry block E1, which has only one successor S, an exit block E2, which has only one predecessor P, and there is a path from P to S. In this case, the region representing the program is made up of three subregions: E1, E2, and the subregion
including everything else. The last now becomes the
"program" to be divided further.

SPLIT looks for the situation where the program is
composed of several essentially distinct parts, each of
which has only one entry point and one exit point for
paths to or from other parts. Each such part is one
subregion and is divided further if necessary.
At present, STRING, DIAMND, and TEST are
used repeatedly until none of them can do any further
combining. If there are no more than six regions left,
these are combined to make the region representing the
entire program. If there are more than six left, UNWRAP and SPLIT are used repeatedly until they have either divided the entire program into the regions left by the combination routines or cannot divide it any further. In the latter case, at present, arbitrary divisions are made until the program is so divided.
This method should be adequate for most programs;
however, the Flowcharter is written in such a way that
routines can be added and other methods tried easily.
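In outline, that discipline might be rendered as below. The rule functions stand in for STRING, DIAMND, TEST, UNWRAP, and SPLIT; each is assumed to mutate the region list in place and report whether it changed anything. The toy combiner is purely for demonstration.

    def form_regions(instructions, combiners, dividers, limit=6):
        regions = list(instructions)       # smallest subregions: instructions
        while any(rule(regions) for rule in combiners):
            pass                           # combine while any rule still fires
        while len(regions) > limit and any(rule(regions) for rule in dividers):
            pass                           # then divide "in the large"
        return regions

    def pair_up(regions):                  # toy combiner for demonstration
        if len(regions) >= 8:
            regions[:2] = [tuple(regions[:2])]
            return True
        return False

    print(form_regions(range(10), combiners=[pair_up], dividers=[]))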
In planning the computation analysis, the major
problem encountered was that of determining when cells
or registers were used only as temporary or erasable
storage. In order to keep the amount of information
down to a readable size, we wanted to list only the cells
actively used in the region. We started with the idea of
labeling a quantity computed but not used as "output,"
and those computed and then used, "tentative outputs" to indicate that they might be erasable cells. A
"tentative output" was carried forward until an exit
from the program or a use of the same quantity was encountered. If there was such a use, the "tentative output" became a real output; if not, it was considered
erasable and would not appear further on the flow
charts for that part of the program. Since a "tentative
output" had to be carried forward on all possible paths
but changed to a real output only on those paths on
which a use was encountered, the bookkeeping necessary
became unmanageable when the flow of the program
was complicated.
If the computation is traced backward rather than
forward, the procedure becomes much simpler. If a
quantity is needed at one point of a program, it must
be available along every possible path backwards from
that point until some point is encountered where the
quantity is computed or until an entrance to the program is encountered. In the latter case, this quantity
must be available at that entrance to the program.
With each region shown on a flow chart, all the quantities computed in that region are listed except those
erasable cells which are used only within the region.
For each quantity computed, there is given a list of
quantities which are required at the entrances to the
region and which enter into the computation of this
item whether directly or indirectly.
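The backward trace amounts to asking, for each quantity used in a region, whether some backward path reaches an entrance to the program without passing a computation of that quantity. A minimal sketch over an illustrative region graph (none of the names are from the Flowcharter):

    def needed_at_entrance(use_region, quantity, preds, computes):
        """True if the quantity used in use_region reaches a program entrance
        (a region with no predecessors) along some backward path on which it
        is never computed; such a quantity is a genuine input, not erasable."""
        seen, stack = {use_region}, [use_region]
        while stack:
            r = stack.pop()
            if not preds.get(r):            # an entrance has been reached
                return True
            for p in preds[r]:
                if p in seen or quantity in computes.get(p, ()):
                    continue                # computed in p: this path is satisfied
                seen.add(p)
                stack.append(p)
        return False

    preds = {"entry": [], "compute_x": ["entry"], "use_x": ["compute_x", "entry"]}
    computes = {"compute_x": {"X"}}
    print(needed_at_entrance("use_x", "X", preds, computes))   # True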
The last part of the Flowcharter arranges and prints
the results of the other sections. The appearance of the
final flow charts will be one of the most important features to anyone using the Flowcharter and will be as
much like hand-drawn flow charts as possible. Each page
will show one region composed of as many as six or seven


smaller regions. Each of the boxes will indicate the
entry and exit locations of the subregion it represents,
the number of the page that has a detailed flow chart
of the subregion, the location of any exits and entrances
in the middle of the region and the exit conditions for
any exit. Each box will be outlined by asterisks and all
transfer paths will be represented by lines to the appropriate points. If the lines go outside the region, the name
and page number of the instruction at the other end will
be given at the top or bottom of the page for entrances
or exits, respectively. The boxes themselves will be arranged in two dimensions to show the flow in the region
as clearly as possible, using horizontal as well as vertical
displacements of the boxes. On the right-hand side of the
page will be listed further information in three columns:
the names and numbers of any input-output units used,
and which memory cells were involved in each case; a
list of those quantities for which values must be available at the entrance to the region; and the computation
summaries mentioned above. Provision is also made for
reproducing comments, box titles, page titles, and other
comments given by the programmer. (See the Appendix
for an example of the output.)

STATUS OF THE PROGRAM

On February first, the program described here was
nearly complete and checked out with a preprocessor
for 704 SAP language. The parts not yet finished were
UNWRAP, SPLIT, the drawing of the boxes and lines
in the output program, and provision for some of the
specifications by the programmer. In each case, much
or all of the planning has been done.
As soon as these are complete, it is planned to write
709 and FORTRAN preprocessors and at least one region-forming subroutine which considers mainly the
data used by the program, and has much less emphasis
on the flow properties than the routines described here.
Also planned is some experimentation with various
methods of region formation.
ACKNOWLEDGMENT

The author wishes to acknowledge the contributions
of Alex Bernstein, who collaborated on the initial phases
of the project, and of James Lagona, who wrote certain
of the subroutines. The author also would like to thank
colleagues in the Programming Research Department
for helpful discussions and advice.

APPENDIX

The source program flow charted here is:

C     A PROGRAM TO MULTIPLY TWO MATRICES AND SUBSTITUTE PLUS ZERO FOR
C     EACH ZERO ELEMENT, PLUS ONE FOR EACH POSITIVE ELEMENT, AND MINUS
C     ONE FOR EACH NEGATIVE ELEMENT.
 10   READ 200 ((M(I, J), I=1,3), J=1,4), ((N(J, K), J=1,4), K=1,5)
 20   DO 140 I=1,3
 30   DO 130 K=1,5
 40   L(I, K)=0
 50   DO 60 J=1,4
 60   L(I, K)=L(I, K)+M(I, J)*N(J, K)
 70   IF (L(I, K)) 120, 100, 80
 80   L(I, K)=+1
 90   GO TO 130
100   L(I, K)=0
110   GO TO 130
120   L(I, K)=-1
130   CONTINUE
140   CONTINUE
150   PRINT 200 ((L(I, K), I=1,3), K=1,5)
160   STOP
200   FORMAT (15I4)

EXPLANATION OF CONTENTS OF BOXES IN THE FLOW CHARTS

[Box legend, given here in outline: arrows entering at the top of each asterisk-outlined box mark the paths into the subregion, and arrows leaving at the bottom mark the paths out. Within the box appear the first and last locations of the subregion, a page reference showing where to find more details about the subregion, and the exit conditions. To the right of each box run three columns: READING/WRITING, VALUES REQUIRED, and COMPUTATION DONE.]

PAGE 1
A PROGRAM TO MULTIPLY TWO MATRICES AND SUBSTITUTE PLUS ZERO FOR
EACH ZERO ELEMENT, PLUS ONE FOR EACH POSITIVE ELEMENT, AND MINUS
ONE FOR EACH NEGATIVE ELEMENT.

[Top-level flow chart, from the entrance to the program to the exit. Boxes: 10-20 (READ CARDS into M(I, J) and N(J, K); I set to +1; detail on p. 3), 30-30 (K set to +1; detail on p. 3), 40-130 (detail on p. 2; exit test: K against 5, GREATER or LESS/EQUAL), 140-140 (exit test: I against 3; detail on p. 3), and 150-160 (PRINT L(I, K); STOP; detail on p. 3). The side columns list the reading and writing done, the values required at entry, and the computation done in each box.]

PAGE 2

[Detailed flow chart of region 40-130 of page 1, entered from box 30. Boxes: 40-50 (L(I, K) set to +0; J set to +1; detail on p. 3), 60-60 (L(I, K) set to L(I, K)+M(I, J)*N(J, K); J set to J+1; exit test: J against 4), 70-70 (exit test: L(I, K) against 0), 80-90 (L(I, K) set to +1), 100-110 (L(I, K) set to +0), 120-120 (L(I, K) set to -1), and 130-130 (K set to K+1; exit test: K against 5), exiting to box 140 on page 1.]

PAGE 3

INSTRUCTIONS                                                  FOR CONTEXT SEE PAGE

C     A PROGRAM TO MULTIPLY TWO MATRICES AND SUBSTITUTE PLUS ZERO FOR
C     EACH ZERO ELEMENT, PLUS ONE FOR EACH POSITIVE ELEMENT, AND MINUS
C     ONE FOR EACH NEGATIVE ELEMENT.
 10   READ 200 ((M(I, J), I=1,3), J=1,4), ((N(J, K), J=1,4), K=1,5)          1
 20   DO 140 I=1,3                                                           1
 30   DO 130 K=1,5                                                           1
 40   L(I, K)=0                                                              2
 50   DO 60 J=1,4                                                            2
 60   L(I, K)=L(I, K)+M(I, J)*N(J, K)    (END OF DO AT 50)                   2
 70   IF (L(I, K)) 120, 100, 80                                              2
 80   L(I, K)=+1                                                             2
 90   GO TO 130                                                              2
100   L(I, K)=0                                                              2
110   GO TO 130                                                              2
120   L(I, K)=-1                                                             2
130   CONTINUE    (END OF DO AT 30)                                          2
140   CONTINUE    (END OF DO AT 20)                                          1
150   PRINT 200 ((L(I, K), I=1,3), K=1,5)                                    1
160   STOP                                                                   1

A Compiler Capable of Learning
RICHARD F. ARNOLD†

INTRODUCTION

WE would like to consider a new approach to
the general problem of programming computers.
To date, the methods of handling programming
problems can be roughly classified into two families,
each of which have certain characteristic advantages
and disadvantages which seem to complement those of
the other.
The first group, developed from the subroutine philosophy, includes all interpretive schemes, as for example
the "Bell Labs Interpretive System" for the IBM 650.
The advantages of interpretive routines are that they
are very versatile in the languages they can interpret
and are comparatively easy to write. It is a fairly simple
matter to write an interpretive routine to simulate another computer and thus achieve program compatibility between different machines. The crippling drawback
is the excessive time needed to execute routines inter-


† Michigan State University, East Lansing, Mich. This research
was supported in part by a grant from the Natl. Sci. Found. and
taken from a thesis written under the direction of G. P. Weeg.

pretively. Higher order interpretive schemes increase
execution time exponentially.
The second group consists of compilers and assembly
programs. They are characterized by the fact that, unlike interpretive routines, they produce object programs
which may be executed in reasonable amounts of time.
Compilers, however, are difficult to write. "Fortran,"
for example, took twenty-five man-years to write. A
second difficulty of compilers such as "Fortran," is that
although they are becoming more and more versatile,
they still fail to express certain types of operations, and
it has become necessary to make it possible to adapt the
compiler so that the "Fortran" language may be temporarily left and programming done in a language closer
to the initial machine language. Of course, this is a
desirable feature for a compiler to have, but it does not
solve the initial problem for which it was created, namely, to avoid machine languages completely. A further disadvantage is that as a compiler system becomes adapted for use on more than one computer, many of the
"coding tricks" will have to be avoided. This may be desirable from the point of view ()f the compiler writer, but


optimum programs will not be written. This will be particularly true as newer generations of computers are built
which may have less similarity to present ones than we
expect.
Also, as new computers come into existence, it is
likely that it will become increasingly difficult for even
the trained programmer to use the machines efficiently.
Only time can tell whether or not compilers and their
improved offspring are the answer for the future.
However, we would like to offer a different approach
to the problem which, although it will probably not
prove to be a better solution for the present, may indeed
offer a much better line of attack for the future. Let us
review first what is desired. We would like a scheme
that would accept a program, either in the pseudo code
of a compiler or in a different machine language, and
would produce an object program for a given machine.
We would like this program to take full advantage of
all the special features of the machine and to be coded
in a fashion such that running time and program length
are at a minimum. We would also like to avoid too much
work in adapting this scheme to a new machine. Since
we may not know ourselves just how best to program a
given machine, we would like it if the scheme itself could
find the optimum procedures and apply them. Finally, and this is the one requirement which will be most difficult to meet, the scheme must produce a program in a
reasonable length of time and not require a tremendous
amount of memory space.
With the exception of the last requirement, we believe
that the following scheme is capable of satisfying all
these requirements, and that perhaps this one too may
be met. The techniques used have almost all been suggested in other contexts.1 Freidberg's programs2 also
have some organizational resemblances to the compiler
described.

A COMPILER USING A RANDOM PROGRAM GENERATOR

The Compiler
Let us consider a new kind of compiler. It is similar in
function to other compilers in that it accepts a program
in language A and produces an equivalent program in a
language B. Since the requirement of our languages is
that each order must specify exactly an operation, these
languages define computers. Therefore, we may sometimes speak of computers A and B. The compiler operates as follows. rt first generates a program in language
B at random, as will be described below. Although the
program must be of specified length, it may be any conceivable combination of orders. It then proceeds to determine whether this candidate program is equivalent
to the given program in language A. The method of determining the acceptability of the program involves the
1 "Automata Studies," Ann. Math. Studies, no. 34, pp. 215-277;
1956. Note last 4 articles by Ashby, Mackay, and Uttley.
2 R. M. Freidberg, "A learning machine: part I," IBM J. Res.
Dev., vol. 2, no. 1, pp. 2-16; January, 1958.

use of two interpretive routines, A and B, which are capable of executing the orders in the two languages. The
subject program and candidate program are then both
executed using identical data to ascertain whether the
candidate program B is capable of producing identically
the same output. If it is determined that they are equivalent, then this program will be punched out and the
translation will be complete. If not, a new candidate
program is produced and tested in a similar fashion. The
process is repeated until an acceptable program is produced.
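Reduced to a toy, the cycle might look as follows; both "machines" are shrunk to single-register order codes, and every name is illustrative only. The step bound plays the part of the "clock" discussed below, and the comparison on several random inputs anticipates the statistical acceptability criterion.

    import random

    ORDERS_B = {"INC": lambda x: x + 1, "DBL": lambda x: 2 * x, "NEG": lambda x: -x}

    def run_B(program, datum, limit=100):
        """Interpreter B, with a crude clock: reject overlong executions."""
        for step, order in enumerate(program):
            if step >= limit:
                return None
            datum = ORDERS_B[order](datum)
        return datum

    def generate(length):
        """Each order drawn with equal probability from all possible orders."""
        return [random.choice(list(ORDERS_B)) for _ in range(length)]

    def subject_A(x):                    # stands in for interpreting language A
        return 2 * (x + 1)

    def translate(length=2, trials=100000):
        for _ in range(trials):
            candidate = generate(length)
            data = [random.randint(-99, 99) for _ in range(5)]
            if all(run_B(candidate, x) == subject_A(x) for x in data):
                return candidate         # identical outputs: accept
        return None

    print(translate())                   # e.g. ['INC', 'DBL']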

The Random Program Generator
The random program generator will have the following characteristics. Provided with the length desired,
the first order of the program will be chosen at random
from all possible orders that might appear there, so that
the probabilities associated with the choosing of all
orders are identical. The second order will be chosen in
the same manner, and then each successive order in the
program, until it is complete. This becomes the candidate program. This method could generate any conceivable program of a given number of orders. However, the
probability that a particular program is generated is exactly the same for all. The program generator will utilize
a random-number generating routine, and the range of
the random numbers will be partitioned in such a way
that by assigning equal intervals to each order and selecting that order which corresponds to the interval containing the random number, a program may be generated in the desired fashion. rt is recognized that the
computer can generate only "pseudo" random numbers
and that this will introduce difficulties. It is not too
much to expect that the numbers be distributed rectangularly. Of more importance, however, is the sequence that the generation of these numbers may take.
It will be necessary that the random numbers not appear in a sequence such that the corresponding programs
do not contain certain sequences of orders. We must
continue to watch for this possibility even if the compiler works, since it may be producing only certain kinds
of programs.
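The partition itself is a one-line computation; as a sketch, with an illustrative five-order code:

    import random

    def select_order(orders, r):
        """Map a random number 0 <= r < 1 onto one of k equally probable
        orders by giving each order the interval [j/k, (j+1)/k)."""
        return orders[int(r * len(orders))]

    orders = ["CLA", "ADD", "SUB", "STO", "TRA"]       # illustrative order list
    program = [select_order(orders, random.random()) for _ in range(4)]
    print(program)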

The Acceptability Criterion
The Interpretive Routine Simulation of Computers A
and B: The simulation of the A computer will be straightforward. All one really needs to know is what the output
is for a given input. However, we shall provide the interpretive routine that executes the A program with one
additional feature, an "address stop." That is, the programmer, or in this case perhaps some other part of
the compiler, will be able to specify, before transferring
control to the A interpreter, a particular location which,
if appearing in the control counter will cause the routine
to jump to some previously designated location outside
the interpretive routine itself. Any order which would
otherwise cause the computer to stop would be treated
in a similar fashion.

The B computer will be simulated in a like manner
with the address stop feature. The B interpreter will
also be equipped with a "clock" that will keep track of
the running time of the program it is executing as if it
were being executed directly by a real computer B.
Thus, it will be possible to discriminate between two
acceptable programs which have different running
times.
Also, by placing a limit on the length of time a program may run on interpreter B, it will have a means of
discovering and stopping endless looping. The "clock"
would be checked before each order was executed to see
if the maximum time permitted had been exceeded.
Our final specification will be that these interpretive
routines be in some standard form as it is hoped that to
adapt the compiler to any other two languages, it will
only be necessary to put in the interpretive routines for
those languages and to provide the random program
generator with a list of the orders comprising the object
language.
The Use of Randomly Generated Data in Acceptability
Tests: A first definition of equivalent programs only required identical outputs given identical inputs. It would
be possible of course to require that for two programs to
be considered equivalent, they must use the same algorithm. However, given a program, it is not possible to
recover the algorithm used, and the only alternatives
are either that the routine itself be treated as the algorithm or that we consider only results. Acceptance is
therefore determined by a statistical criterion and the
probability of correct answers on a production run may
be made arbitrarily close to 1 by increasing the number
of randomly selected sets of input data for which the
candidate program must successfully compute the correct answers. It will, however, only be necessary to use
one set of data until a candidate program succeeds in
producing the correct output for that one. It will also be
necessary that the range of the data variables be specified along with the subject routine.
The procedure will be to examine the output and compare it with that of computer A at the completion of the
running of a program. If the output is different, the
candidate program will be rejected as it would also be if
a specified time on the "clock" were exceeded or if the
computer "hung up." If the output is found to be identical to that of computer A, then a new set of data would
be generated and run through computer A with the subject program to determine the correct output; and computer B would be given the same data and the candidate
program executed again. When an acceptable program
is found, it may either be punched out on cards or tape
or else retained and a search made for a better program
in terms of number of words and running time.
We would be fortunate indeed if the compiler, as it
has been described, could produce a program of moderate size in the life of the machine, much less within the
hour or so maximum that might be allowable. Nevertheless, aside from this difficulty, this compiler would do


everything else that was desired. If the compiler were
allowed to run long enough so that it could choose
among the best of many acceptable programs, it would
not only tailor the program to the machine involved but
also in fact find many new "coding tricks" that a programmer might never stumble upon. How then can the
expected time involved in a translation be cut down?
Two methods will be discussed. The possibility of breaking the subject program into parts and translating one
part at a time will first be considered. Then we shall
consider how the program generating part of the compiler may be replaced by a unit which will gradually
modify the probabilities associated with the generation
of programs in a manner such that they will produce acceptable programs more frequently.

Sectioning the Subject Program
Criteria for Sectioning: If it is desired to break the
subject program into sections, it will be necessary that
for a given section there be a criterion for the acceptability of that section. Also, it would be required that this
criterion be such that if all the sections were correct, the
entire program would also be correct. If a method of
sectioning can be found and these requirements met,
the compiler could then handle one section at a time in
the same way it previously handled the whole program,
and such a procedure might be much quicker.
The acceptance or rejection of the program described
in the previous section was based on the fact that both
computers had input and output devices and, furthermore, that these input and output devices were enough
alike in the form in which they handled the data to make
comparison easy. However, when considering a section
of a program, we can no longer define correctness in
these terms because it is unlikely that input and output
operations even occur in that section. The input and
output devices were treated as equivalent parts of the
computer. However, other parts may also be treated as
equivalent, and this will make possible the development
of adequate criteria for the testing of sections. Since it
will be the procedure to test sections by executing them,
it will be necessary to specify a set of states, one of
which computer B should be in at the termination of the
execution of a section. Since this set must be determined
by the state of computer A at the same stage, there
must be a one-to-many mapping of the states of computer A onto B. This is obtained by defining equivalent
parts. One natural equivalence is that the contents of
the two memories should be in some sense analogous. It
might be said that the state of A is equivalent to the
state of B if their memories contain identical information. If it were required that every part of computer A
have an equivalent part in B, then the normal operations of B would have to be abandoned and B made to
mimic A's every move. This indeed would be undesirable since, for example, if it were required that B be in an
equivalent state to A after every order, then even if it
were a decimal machine, B would have to find a way in


which it could do operations such as forming logical
products, which are extremely awkward in any system
other than a binary one. Therefore, equivalent parts
will only be defined when the equivalence is natural.
Initially, this will mean that the memories, control
counters, and input and output devices must be made
to correspond. If only equivalent parts may be examined, it is necessary that only those parts of the computer may contain information relevant to the program.
This then requires that our sections be constructed so
that if the state of all other parts of computer A were
altered at the time when tests for equivalent content
were being made, there would be no interference with
the correct operation of the program. This requirement
will be used in sectioning the subject program. The subject program will be "cut" between those orders when
the state of all nonequivalent parts of computer A may
be altered without affecting the running of the program.
The procedure will be that all possible "cuts" be made,
the states of those nonequivalent parts altered on the
basis of a random number, and the program continued.
Those cases where no errors are introduced will then be
recorded and the program divided at those points.
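A toy rendering of this cut-finding procedure (the miniature machine and all names are illustrative, not from the paper): the scratch state is scrambled just before each candidate point, and points at which the output never changes become section boundaries.

    import random

    def run(program, state, perturb_at=None, scratch=("TMP",)):
        """Execute state-transforming steps, scrambling the nonequivalent
        (scratch) cells just before step perturb_at."""
        state = dict(state)
        for i, step in enumerate(program):
            if i == perturb_at:
                for cell in scratch:
                    state[cell] = random.randint(-999, 999)
            step(state)
        return state["OUT"]

    program = [
        lambda s: s.__setitem__("TMP", s["X"] + 1),    # TMP used only locally
        lambda s: s.__setitem__("OUT", 2 * s["TMP"]),
        lambda s: s.__setitem__("OUT", s["OUT"] - s["X"]),
    ]
    reference = run(program, {"X": 5})
    cuts = [p for p in range(1, len(program))
            if all(run(program, {"X": 5}, perturb_at=p) == reference
                   for _ in range(20))]
    print(cuts)    # [2]: TMP is dead there, so the program may be cut at step 2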
The comparison of the inputs and outputs, if any,
and the control counters is fairly straightforward. The
addresses specified in the control counter would have
to be compared by ascertaining whether or not the sections referred to by the two addresses are equivalent.
The comparison of the two memories is more difficult.
Regarding memory itself, it is clear that locations which
have in them numbers that are computed functions of
the data should be compared for equality. Orders which
have been modified by a section must somehow be examined to see if they will perform correctly in their modified form. Also, the contents of other locations may have
information which depends for its form on the order
code of the particular computer, for example, "binary
switches" and "dummy" orders. It is necessary then to
establish the correctness of such locations by linking
them with other sections. Thus we can conclude that if
these sections are acceptably executed throughout the
entire program, then the location in question must be
acceptable. A similar procedure is used for sections
whose function is to modify instructions. For each section this entails making a list of sections whose acceptance must come prior to the acceptance of that section. In some cases it may be necessary to work backwards from sections which only operate on data and
may be checked immediately through several different
sections, the acceptance of each one being a necessary
requisite for the acceptance of a prior one.
Admittedly, the above discussion is not complete and
there may be situations arising which we have not considered. Nevertheless, it is felt that the method itself is
powerful and can probably be adapted to new difficulties as they arise.
Expected Running Times: The purpose of sectioning
the program is, of course, that it is desirable not to

throw out a program completely when only a part of it
is incorrect. However, though it might seem obvious
that the expected running time would be reduced, this
does not follow just because the program is being compiled in sections, as will be seen.
Consider first the probability that an entire program
generated by the random program generator is exactly
a given program. If the program contains l orders, each of which might be any of k different orders, this probability will be 1/k^l. The expected number of trials until it is constructed will be k^l trials. Now, if the program is sectioned into g parts of lengths m_1, m_2, ..., m_g, then the probability of a particular candidate section being a particular one is 1/k^{m_i}, the expected number of trials for that section is k^{m_i}, and the expected number of trials for the entire program is

$$\sum_{i=1}^{g} k^{m_i}.$$

However, this is not the situation. In fact, there are many acceptable programs. If there are c acceptable programs of length l, then the expected number of trials to hit one of them will be k^l/c, and if the program is divided into g sections with lengths m_1, m_2, ..., m_g and there are c_i acceptable ways of writing the ith section, then the expected number of trials to translate the entire program will be

$$\sum_{i=1}^{g} \frac{k^{m_i}}{c_i}.$$

If

$$\prod_{i=1}^{g} c_i = C,$$

then it could be concluded that the expected number of trials for the procedure using sectioning is much less than the other one, provided the minimum expected number of runs for any section is greater than g^{1/(g-1)}. For g > 1, 1 < g^{1/(g-1)} ≤ 2, and the expected number of runs for any section will never become that low. However, by requiring the program to achieve certain criteria at many points during its execution, many programs that would satisfy the criterion of identical outputs will be rejected. Therefore, in general

$$\prod_{i=1}^{g} c_i < C.$$

There is no a priori method then of saying conclusively
that sectioning will lower the expected number of runs,
although it seems that it would except in unusual circumstances. However, it might at some point be possible to pool some of the sections and reduce expected
translation time, while at the same time increasing the
probability of producing a better program because of
the fewer restrictions.
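Plugging illustrative numbers into the formulas above makes the potential saving vivid (these figures are not from the paper):

    k, l, g = 40, 4, 4                     # 40 orders; a 4-order program
    m = [1, 1, 1, 1]                       # cut into g = 4 one-order sections
    unsectioned = k ** l                   # 2,560,000 expected trials when c = 1
    sectioned = sum(k ** mi for mi in m)   # 160 expected trials when each c_i = 1
    print(unsectioned, sectioned)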

THE ADAPTION OF LEARNING MODELS TO
THE COMPILER

The Alteration of the Random Program Generator to Permit Learning
So far, the random program generator has selected each of the k possible alternatives at each step with equal probabilities. It is entirely possible to allow the orders to be selected with other probabilities. Given a set {P} of probability vectors P = (p_1, p_2, ..., p_k) such that

$$\sum_{i=1}^{k} p_i = 1,$$

where p_i is to be associated with order O_i, the selection of an order may be made in the following manner. Given a pseudo random number R such that 0 ≤ R ≤ 1, and a vector P, then O_j will be selected when

$$\sum_{i=1}^{j-1} p_i \le R < \sum_{i=1}^{j} p_i.$$

If R has a rectangular distribution, as it should have, then O_j will be selected with probability p_j. A specific selection of an order will constitute a response, and the set of probability numbers (p_1, p_2, ..., p_k) will be abbreviated P.
Intuitively, it is felt that some P's will be more satisfactory than others, and a method must be found to arrive at some "best" one. First, some measure of performance is needed. The measure might include not
only how often a particular P produces an acceptable
program, but also whether the program is concise and
has a short running time. However, it will be more convenient at present to consider only U(P) = Pr(a candidate program using P is acceptable). It would then
be desirable to find the maximum of this function where
P ranges over the set {P} and use the corresponding P.
At present there is no information about this function.
A guess might be made that it is continuous but has
several modes. It will be assumed, however, that it has
but one, the hope being that if any of them are discovered, a fairly satisfactory selection of orders will ensue.
One method that might be used would be to modify the p_i's whenever an acceptable program is achieved, or, if sections are being dealt with, an acceptable section. Given that a particular order O_c were in the acceptable section, the current set of p_i's would be replaced by

$$\left[\left(p_1 - \frac{p_1}{d}\right),\ \left(p_2 - \frac{p_2}{d}\right),\ \ldots,\ \left(p_c + \frac{1 - p_c}{d}\right),\ \ldots,\ \left(p_k - \frac{p_k}{d}\right)\right],$$

where d is a parameter that would reflect the magnitude of the desired change. This procedure would be repeated for each order in the acceptable section. The value of P can be shown to approach, in expectation, the value at which U(P) has its mode. It is desirable to let d be a function of the number of modifications already made, so that as a modal value is approached, the variance about it will be decreased.
This procedure might be called learning because it
permits an increment in performance to occur as a result of the "experience" of the compiler.
The Stimulus Situation
Consider now the situation of the response selector at a given point in its selection of orders for a candidate section. Previously, the P's used for the selection of each order in the program have been identical. However, if P_y is the set of probability vectors that might be associated with the selection of a particular response y, it would seem that max U(P_y) will occur at different values of P for different values of y. In order to devise a method for obtaining these different values of P, there must first be a method of classifying the y's. It may seem at first that y could be classified according to its location alone in the program, but this would not help a great deal. This seems to have been one of the difficulties of Freidberg's programs. It is clear that most of the deviation of P_y from P may be explained in terms of y's relation to the orders around it in the program. For example, P_y should certainly depend on the selection that has already been made for (y-1). Since max P_y is conditional upon the value of (y-1), a set of k P's may be used, one for each value of (y-1). In general, not only will max P_y be dependent on (y-1) but also on (y-2), (y-3), and in fact on many other variables that might not even be defined in the candidate program.
Other variables that should be considered are the particular orders that make up the subject section being translated and the contents of certain locations in both the interpretive routines used. While the relationship of the subject orders might be obvious, that of the interpretive routines is probably not. The interpretive routines do contain variables in the sense that (y-1) is a variable; for example, the "left-right counter" necessary in an interpretive routine simulating Princeton-type machines having two orders stored in one location in memory. In fact, it might be suspected that since a human being can know all there is to know of a computer's importance to programming by examining an interpretive routine simulating it, indeed a great deal might be gained by considering the relationship between max P_y and some of these variables. The reason for the concern with the dependence of all these variables with max P_y is that prior to selecting the value of y, the particular values that these other variables have taken will be known; and if the relationship is also recognized, a better P_y may be used. Certain definitions follow naturally. If X = (x_1, x_2, ..., x_n) is a set of variables whose values may be determined prior to the selection of order y, and S = (x_{1i}, x_{2i}, ..., x_{ni}) is a set of particular values for X, then max P_{y,S} will mean that value of {P} for which U(P_{y,S}) = Pr(a response chosen using P_{y,S} is acceptable, given X = S) is maximized. S is called the stimulus vector, and x_1, x_2, ..., x_n the stimulus variables.

The Selection of the Response
Considering again the selection of the value of y, it is desired to find max U(P_{y,S}). If the best P_{y,S} were determined independently for each possible S and if d = 1, that is, if p_c becomes 1 when c is the correct response for the specified S, and if all possible stimulus variables were included, the scheme would have these characteristics. It would classify completely all the correct responses corresponding to the given stimulus vectors. But the same situation, that is, S, would never be encountered twice unless the scheme were again asked to translate the same program. Therefore, it would never be any better than a scheme which took no cognizance of the stimulus situation. Of course, this method is impossible because of the space requirements. Restricting the number of stimulus variables might be one way out. However, if one such variable could take on the average 40 values, the p's expressed to five binary digits and k = 40, the 512,000,000 words of 40 bits necessary to store the p's for just five variables would be prohibitive.

A Linear Model
Clearly the trouble lies at least partly in allowing max P_{y,S} to be freely determined for every S. While it is true that in general the effect of one stimulus variable x_i on max P_{y,S} will depend upon the values of the other stimulus variables, it is not necessarily the case that the effect of x_i on max P_{y,S} will change for every change in the value of any stimulus variable.
A procedure that would take advantage of this fact might be developed along the lines of linear regression theory. The model assumes that max P_{y,S} is a linear combination of the max P_{y,x_i,j}, where P_{y,x_i,j} represents the jth probability vector associated with the stimulus variable x_i. Given then an S = (x_{1i}, x_{2i}, ..., x_{ni}), the value P_{y,S} to be used in the selection of y would be

$$P_{y,S} = b_1(\max P_{y,x_{1i}}) + b_2(\max P_{y,x_{2i}}) + \cdots + b_n(\max P_{y,x_{ni}}).$$

The max P_{y,x_i,j}'s would be estimated in the standard way; P_{y,x_i,j} would be modified to increase p_c when O_c was an acceptable order occurring at y when x_i had value j.
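As a sketch of the linear combination, with an explicit renormalization added so the blended vector remains a probability vector (the renormalization is an assumption; the text leaves the constraint implicit):

    def blend(vectors, b):
        """Weight the per-variable vectors by b_i and renormalize."""
        raw = [sum(bi * v[h] for bi, v in zip(b, vectors))
               for h in range(len(vectors[0]))]
        total = sum(raw)
        return [x / total for x in raw]

    vectors = [[0.7, 0.2, 0.1],            # max P for x_1 at its observed value
               [0.1, 0.8, 0.1]]            # max P for x_2 at its observed value
    print(blend(vectors, b=[0.5, 0.5]))    # [0.4, 0.5, 0.1]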
b_i should be increased as all the individual probabilities within {P_{y,x_i,j}} tend away from max P (max P taken in the sense of the last section). This could be accomplished by letting b_i increase in increments according to a sample taken from a random variable highly correlated with

$$\sum_{h=1}^{k} \left(p_{y,x_i,j,h} - p_h\right)^2,$$

where p_{y,x_i,j,h} is the hth component of max P_{y,x_i,j} and p_h is the hth component of max P. Likewise, b_i should be decreased as x_i shows increasing correlation with the other variables of X.
It is suggested that a measure of how much b_i should be lowered because of its correlation with other variables might be some function of the similarity of the particular P_{y,x_i,j} to the other n-1 vectors determined by the particular S.
A method for the selection and rejection of stimulus
variables would follow naturally. At periodic intervals
that Xi whose bi were lowest would be discarded, and a
new one selected at random from the remaining variables not currently represented in the stimulus variable
set.
The model may be made even more flexible by permitting variables which are the products of two or more
stimulus variables.
A dubious future would seem to be ahead for the
model described above if required directly to "learn"
how to translate programs from an 1103A to 704 language.
The unfortunate situation is, however, that almost
nothing is known of the joint distributions of what have
been called the stimulus and response variables in such
a translating endeavor, and therefore our judgment of
any model, prior to its incorporation in a translator program, must be intuitive.
AN EXAMPLE

For the purpose of exhibiting some of the general
characteristics of the type of translator proposed, a
program has been written that does in fact compile complete programs for an imaginary one-address machine
given those for an imaginary three-address machine. To
be sure, the two machines used are neither of the size nor complexity of current machines, nor are they as different from each other as current machines are. Nevertheless, they are sufficiently powerful to compute
transcendental functions, invert small matrices, etc.
Our experience with this compiler is limited and a
full report on its performance is not yet ready; however,
our results to date have proved both educational and
encouraging.
Our estimate of the initial expected time of translation was on the order of one hour per instruction of the subject program. After gaining experience, the translator should reduce this to about 90 seconds per subject instruction. In the first programs translated, the translator did considerably better than anticipated. Upon
examining the programs produced, the reason became
evident. We had overlooked a large class of acceptable
programs, namely, those in which numbers were left in
the arithmetic register of the one-address machine at
the termination of a section; those same numbers could
be of use in the succeeding section. The translator had
promptly taken advantage of this.
The principal difficulty encountered so far has come
with conditional transfers of control. The trouble lies

in that the acceptance of a sequence containing a conditional transfer is contingent on its correct operation
at several points in the program. Therefore, a large portion of the program must be executed before a candidate
unit may be rejected or accepted. This implies that for
very long programs the probability of success for a
particular candidate unit containing a conditional
transfer is going to have to be much higher than would
be necessary for a purely arithmetic unit. This same difficulty will arise in the situation of units whose function
is to modify instructions.
The translations produced have been far from optimal, primarily because the length of unit translated at
one time was one subject order. If two at a time had
been taken, the initial expected running times would
have been much too great. However, the present translator may now be modified to use its experience on one-at-a-time translation in translating two at a time, and
the initial expected running times will be reasonable.
This suggests that eventually the translator should
choose the size of the unit it attempts to translate according to the subject orders involved. Thus the translator would reduce the size of the unit attempted if it


were recognized as one with too high an expected translation time, and increase the size as it gained experience.
CONCLUSION

It was suggested at the beginning of this paper that
the type of compiler described might not be of immediate use. In the author's opinion, this is not because our
machines are too small and too slow, although certainly
larger random access memories would be helpful. The
cause seems to lie more in our ignorance of machine design and programming, which amounts to essentially
the same thing. Any increases in speed are quickly absorbed in the realm of combinatorics.
It is hoped that the ideas expressed here will have
some value in finding the solution to this large class
of problems, which includes not only compilers but
human-language translation, game playing, and other
problems where the initial complexities suggest a solution that is self-improving. Development of information processing theory will certainly make these modified British Museum techniques obsolete. Nevertheless,
they may be of value in the interim and may be able to
contribute to the more rapid development of that theory.

Special-Purpose, Electronic Data Systems-The Solution to Industrial and Commercial Automation
WILLIAM V. CROWLEY†

INTRODUCTION

THE concept that any "standard" electronic data
processing or computing system should be adapted
solely through programming to suit a particular
business or industry is no longer supportable, except as
an interim step for testing the planning and design of
the data system through simulation. Nor does the
answer to making "standard" electronic data systems
more easily adaptable to various business applications
lie in the area of so-called automatic programming. This
is not to imply that this endeavor has not or will not be
valuable and useful.
Its main contribution, however, will be to force data
definition discipline in business systems so that more
powerful electronic data processing equipment, with
more pertinent and functional instructions and commands, will be built.
A careful study and comparison of the current data
systems of many different types of business enterprises


† Information Systems Dept., Ramo-Wooldridge Corp., Los
Angeles, Calif.

will show considerable similarity in the general structure and media of the data systems, such as files, action
documents, information documents, reports, etc., but
will disclose numerous differences in the composition
and arrangement of the data itself, as well as management's philosophy of and concern with data handling.
Some of these differences are degree of variability of the
length of the data items; relative occurrence of alphabetic and numeric information; language idioms or usage peculiar to the business or industry; attitudes and
practices concerning the coding of information; government regulations; accounting customs, etc. Because of
these existing variable factors and the social and economic resistance to absolute standardization, industrial
and commercial data systems require equipment designed, not adapted, for their particular situation.
NEGATIVE INFLUENCES

Let us review some of the important factors which
tend to inhibit this inevitable trend toward the building
of electronic data systems tailored to a particular industry, type of business, public utility, or governmental organization.
Perhaps the most important single deterrent to more rapid development and use of "specialized" general purpose electronic data systems is the lack of realization, understanding, and acceptance by top management officials of the ultimate advantages of a tailored, controlled data system in efficiency; compression of traditional time cycles; provision of real cost reductions; and detection and elimination of errors. For example, most manufacturing executives are willing to invest in and work out financing for an expensive machine tool to improve performance and output. Many are still reluctant to invest the money and effort associated with implementing a large electronic data system. As the middle management of today moves into the top jobs, the pace of electronicization of data flows will increase markedly.
The second most influential inhibiting factor is that
the business machine manufacturing industry is largely
anchored to the past both philosophically and economically. Conceptually, the idea of integrating the informational aspects of business enterprises, and the design of
techniques to accomplish this, did not come primarily
from machine manufacturers, but from business systems
analysts and imaginative managers who first understood
the data flow requirements of business and then sought
to learn of the capability of existing electronic machines.
Many of the first electronic data systems were oversold
through use of traditional marketing approaches. The
state of the art of design and manufacture of electronic
machines has advanced rapidly in the last four years.
The art of applying the machines effectively has also
advanced, but not as rapidly. Some commercial enterprises such as insurance companies have advanced
markedly in data systems understanding. Others, such
as manufacturing concerns, have not progressed as
much.
Another important reason for a lag in advancing toward the use of special electronic data systems for general purpose use in a particular industry is the matter of communication and agreement between the equipment designers and manufacturers; business systems analysts; equipment programmers and operators; and business operating and staff managements. The differing objectives of these groups and the shortage of managers who understand all these points of view have impeded the faster introduction and use of electronic data systems in business. The designers and manufacturing personnel are faced with keeping costs down, and as a result important equipment functions may be left out. Similarly, a clever equipment design feature may contribute nothing but higher cost to the user. Competent business systems analysts who really understand the objectives of the enterprise and the data processing requirements are considered staff and are often not placed sufficiently high in the organization to enforce their improvements. Theirs is a continual job of "selling" both to top management and operating management. The programming

staff is always seeking to ease its function, and if a very
precise statement of the problem is not presented, interpretations will be made which may weaken or change
the desired end result. Top management wants quick results, whereas the relative efficiency of the existing system, the specified objectives, and the inherently
complicated planning and implementation procedure
require a considerable investment in time, personnel,
and money. Since a justifiably efficient EDP system
implies functional integration, operating management
may be reluctant to accept the inevitable re-centralization of authority.
Many attempts are being made to reconcile these
points of view through EDP equipment users' organizations, industry committees, conventions such as this
one, and university and special courses which attempt
to explain and present the various points of view adequately.
FAVORABLE INFLUENCES

There are several distinguishable factors and developments which promote and support the concept of
"specialized" general purpose electronic data systems.
Perhaps the most important positive indication of
this trend is the existence and development of several
such specialized electronic data systems and projects
in different organizations. Some of the better known examples are electronic equipment to maintain the status
of airline reservations; equipment tailored to banking
operations; equipment suitable primarily for the control
of process industry operations; equipment especially designed for hospital operations; and government special systems for capturing, assimilating, and presenting defense warning information for field army operations,
and data needed in U. S. Air Force logistics.
Increasing costs of comparable standard systems
and progress in the state of the art make it possible to obtain a specialized electronic data system at not too much
greater cost than the ultimate cost of implementing a
"standard" adapted system. Furthermore, the buyer
is assured of a well operating system because of specially
negotiated contractual terms. The increasing costs of
standard systems are associated with increasing marketing costs imposed by competition, the need for more
highly trained people to harness the more powerful
equipment, and higher development and manufacturing
costs.
Another significant indication of this trend is found
in the actions of certain well-known accounting and
business machine manufacturers to concentrate on a
specific industrial or commercial business.
BENEFITS OF THE TREND

There are many advantages which will accrue from
the extension of this predicted trend, toward the use of
specialized general purpose electronic data systems.
For the user this development will mean that the
buyer and user are assured of getting a data system that

will work. Because of individually negotiated contracts,
there will be no vague paragraphs in contracts about
training of personnel, library and subroutines to be
furnished, assistance in system analysis and programming, availability of various techniques of adaptation,
and the attempt to force the user's system into an artificial equipment usage schedule which is not related to
the specific situation. Under present terms the user will
get a more or less good job done depending on the
strength or weakness of the local equipment manufacturers' office.
For the equipment manufacturers the most advantage
will accrue to those who accept this trend first. It will
also mean some changes in marketing and operating
methods. The highly skilled sales team approach to selling will be required. Thus many manufacturers will be
placed at a temporary disadvantage depending on their
present situation. Ultimately, manufacturers will benefit in that, although they may have to apply more selling effort and customer assistance, there is a greater
chance that they will be paid more fairly and adequately
for their efforts. The relationship of manufacturers and
buyers will be clarified, and there will be less chance for
dissatisfied users-a condition which is harmful to the
broad concept of electronic data systems and the entire
industry.
For the state of the art, advancement will accelerate
because, with the closer user-manufacturer relationship, the buyer will be willing to pay for advanced equipment built especially for him.
For personnel trained and experienced in electronic
data system techniques and an understanding of the
business data requirements of the various commercial,
industrial, and governmental enterprises, there will be
expanded opportunities for employment and innovative
work.
For the economy as a whole, we can look for continued increased productivity per individual, and the
ultimate, virtual elimination of the wasteful errors of
carelessness, misinterpretations and the varying application of logical rules which plague the modern business
enterprise in its daily operations.
RESEARCH AND DEVELOPMENT REQUIRED

Before generalized special electronic data systems
reach the anticipated rate of installation, more pure and developmental research must be done in several areas.
The first field of development has to do with further
scientific investigation into the information requirements of the various levels of management in the several types of business enterprises. In many industries
the creation, maintenance, and dissemination of data
has become an end in itself. De-integration and compartmentalization of larger business organizations have
given custody and control of certain portions of vital
operational data to superficial or minor organizational
units which are their only source of power. Often these
data are in the form of reference files, where the chief


file clerk is a force to be reckoned with in the existing
administrative mechanism. Other artificial administrative data terminals may be illustrated by a coding section where data are coded for facilitation of processing
and summarization. Still another artificial administrative data terminal is found in the specialty of cost
estimating. Usually the data required to estimate any
particular cost reside in or are generated by:
1) Vendors' files outside the company
2) The engineering section
3) Material control and purchasing section
4) Production planning section
5) Accounting department records
6) Payroll department
7) Written or verbal policies of profit margins and markups.

These data are also required for many other purposes.
Why then should not one central company file of necessary business data be maintained for presentation to
the functional area officials as required? Under this concept management can truly manage, by exercising direct
control over that vital body of company operational intelligence.
A second area of investigation should be into the
nature of various functional areas themselves. For example, such obvious questions as: What is production
control? What information is used in making production
control decisions? Where does it originate? Where does
it go? How is it used? etc. For every type of business a
study must be made by function, noting any peculiarities of a particular function in the specific business.
The peculiarities themselves must then be analyzed to
determine the significance of the differences. After logical conclusions are reached a foresighted, enlightened
management is required to implement the findings.
These investigations can best be performed by competent independent organizations such as the business schools, research organizations, consultants, and industry committees.
EQUIPMENT DESIGN

Concurrent with applications, research should be continuing in equipment design. It should be universally
accepted among business data processing specialists
that information files are the centers or center from
which data to form decision patterns must come. Therefore, the handling of file maintenance, file reference, and
data organization should be the primary area of research. The other significant problem is the accurate,
rapid capture of raw data as they occur. In addition to
the physical devices needed to capture the data there is
often the problem of transmitting the data to some
central location. This problem is more or less simple depending on the distance involved and the format of the
data.
Most of this research is and should be carried on by the various manufacturers. There is an economic cost to this, of


course, and unless some of the research is done by endowed organizations, the immediate costs are likely to
be high.
There must also be a closer liaison between the digital
data processing engineers and the communications engineers. As electronic data systems become more responsive, communicating and transmitting devices will
be needed to connect the data processing center with
the various segments of the system. Terminal data
transfer and translation problems must be solved to
permit the ultimate automation of data manipulation
that is logically feasible.
CONCLUSION

The demise of the medium and large-scale general
purpose electronic data processor or computer for business purposes is in sight. A sufficient number of industrial and commercial procedural analysts are now able
to specify their data system requirements with cognizance of the speed and ability of electronic devices so
as to build what is needed-not use just what is available. Small general purpose computers and large capacity computers for scientific calculation will continue
in long usage.
Many large companies with special electronic data
handling problems have found the traditional large manufacturers of business machines unwilling to do more
than tie together existing standard lines of equipment.
Often unwilling to entrust the smaller electronic manufacturers with their problems, several companies have
embarked on their projects of tailor-made electronic
systems.
I predict that this trend will continue until, or unless,
some better-known companies enter the field.

The Residue Number System
HARVEY L. GARNER†

INTRODUCTION

IN THIS PAPER we develop and investigate the properties of a novel system, called the residue code or residue number system. The residue number system is of particular interest because the arithmetic operations of addition and multiplication may be executed in the same time as required for an addition operation. The main difficulty of the residue code relative to arithmetic operations is the determination of the relative magnitude of two numbers expressed in the residue code. The residue code is probably of little utility for general-purpose computation, but the code has many characteristics which recommend its use for special-purpose computations.

The residue code is most easily developed in terms of linear congruences. A brief discussion of the pertinent properties of congruences is presented in the next section.

CONGRUENCES

The congruence relationship is expressed as

A ≡ a mod b,

which is read, A is congruent to a modulo b. The congruence states that

A = a + bt

is valid for some value of t, where A, a, b, and t are integers; a is called the residue, and b the base or modulus of the number A.

As examples of congruences, consider

10 ≡ 7 mod 3
10 ≡ 4 mod 3
10 ≡ 1 mod 3.

In these examples the integers 7, 4, and 1 form a residue class of 10 mod 3. Of particular importance is the least positive residue of the class, which in this example is one. The least positive residue is that residue for which 0 ≤ a ≤ b.¹

Consider the following set of congruences. Given

A1 ≡ a1 mod b
. . .
An ≡ an mod b.

Then:

1) Congruences with respect to the same modulus may be added and the result is a valid congruence.

† University of Michigan, Ann Arbor, Mich.
¹ The equality sign may exist on only one side of the expression.

It follows that terms may be transferred from one side of a congruence to the other by a change of sign, and also that congruences may be subtracted and the result is a valid congruence.

2) Congruences with respect to the same modulus may be multiplied and the result is a valid congruence:

∏ Ai ≡ (∏ ai) mod b.

It follows that both sides of the congruence may be raised to the same power or multiplied by a constant and the result is a valid congruence.

3) Congruences are transitive. If A ≡ B and B ≡ C, then A ≡ C.

4) A valid congruence relationship is obtained if the number, the residue, and the modulus are divided by a common factor.

5) A valid congruence relationship is obtained if the number and the residue are divided by some common factor relatively prime to the modulus.

The material of this section has presented briefly, without proof, the pertinent concepts of congruences. Additional material on the subject may be found in any standard text on number theory.²

² G. H. Hardy and E. M. Wright, "An Introduction to the Theory of Numbers," Oxford University Press, London, Eng.; 1956.

TABLE I
REDUNDANCY OF A NON-RELATIVELY-PRIME BASE REPRESENTATION

Number | Mod 2 | Mod 6 | Mod 3 | Mod 4
   0   |   0   |   0   |   0   |   0
   1   |   1   |   1   |   1   |   1
   2   |   0   |   2   |   2   |   2
   3   |   1   |   3   |   0   |   3
   4   |   0   |   4   |   1   |   0
   5   |   1   |   5   |   2   |   1
   6   |   0   |   0   |   0   |   2
   7   |   1   |   1   |   1   |   3
   8   |   0   |   2   |   2   |   0
   9   |   1   |   3   |   0   |   1
  10   |   0   |   4   |   1   |   2
  11   |   1   |   5   |   2   |   3
  12   |   0   |   0   |   0   |   0
  13   |   1   |   1   |   1   |   1
  14   |   0   |   2   |   2   |   2

TABLE II
NATURAL NUMBERS AND CORRESPONDING RESIDUE NUMBERS

Natural Numbers | 2 3 5 7 | Natural Numbers | 2 3 5 7 | Natural Numbers | 2 3 5 7
       0        | 0 0 0 0 |       10        | 0 1 0 3 |       20        | 0 2 0 6
       1        | 1 1 1 1 |       11        | 1 2 1 4 |       21        | 1 0 1 0
       2        | 0 2 2 2 |       12        | 0 0 2 5 |       22        | 0 1 2 1
       3        | 1 0 3 3 |       13        | 1 1 3 6 |       23        | 1 2 3 2
       4        | 0 1 4 4 |       14        | 0 2 4 0 |       24        | 0 0 4 3
       5        | 1 2 0 5 |       15        | 1 0 0 1 |       25        | 1 1 0 4
       6        | 0 0 1 6 |       16        | 0 1 1 2 |       26        | 0 2 1 5
       7        | 1 1 2 0 |       17        | 1 2 2 3 |       27        | 1 0 2 6
       8        | 0 2 3 1 |       18        | 0 0 3 4 |       28        | 0 1 3 0
       9        | 1 0 4 2 |       19        | 1 1 4 5 |       29        | 1 2 4 1

DEVELOPMENT OF THE RESIDUE CODE

A residue code associated with a particular natural
number is formed from the least positive residues of the
particular number with respect to different bases. The
first requirement for an efficient residue number system
is that the bases of the different digits of the representation must be relatively prime. If a pair of bases are not
relatively prime, the effect is the introduction of redundancy. The following example will illustrate this
fact. Contrast the residues associated with bases of magnitude 2 and 6 against the residues associated with bases
of magnitude 3 and 4. In the first case, the bases are not
relatively prime while in the second case the bases are
relatively prime. The residues associated with the bases
of magnitude 2 and 6 are unique for only 6 states while
the residues associated with the bases of magnitude 3
and 4 provide a unique residue representation for 12
states. This is further clarified by Table I.
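The redundancy just described is easy to check mechanically. The short sketch below is a modern illustration added to this text, not part of the paper, and its function name is invented; it counts how many consecutive integers receive distinct residue tuples before the code repeats, reproducing the contrast of Table I:

    # Count distinct residue tuples before the code repeats.
    def period(bases):
        seen = set()
        n = 0
        while tuple(n % m for m in bases) not in seen:
            seen.add(tuple(n % m for m in bases))
            n += 1
        return n

    print(period((2, 6)))   # 6  -- the bases share the factor 2: redundant
    print(period((3, 4)))   # 12 -- relatively prime: all 12 states distinct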
An example of a residue number system is presented
in Table II. The number system shown in Table II uses
the prime bases 2, 3, 5, and 7. The number system,
therefore, contains 210 states. The 210 states may correspond to the positive integers 0 to 209. Table II shows
the residue number representation corresponding to the
positive integers 0 to 29. Additional integers of the
number system may be found by congruence operations.
Let a, b, c, and d be the digits associated with the bases
2, 3, 5, and 7, respectively. The following congruences



TABLE III
NUMBER OF STATES AND DIGITS ASSOCIATED WITH A RESIDUE REPRESENTATION

 i | Pi | Sum of Pi | Product of Pi | Bits for Pi | Sum of bits
 1 |  2 |      2    |            2  |      1      |      1
 2 |  3 |      5    |            6  |      2      |      3
 3 |  5 |     10    |           30  |      3      |      6
 4 |  7 |     17    |          210  |      3      |      9
 5 | 11 |     28    |        2,310  |      4      |     13
 6 | 13 |     41    |       30,030  |      4      |     17
 7 | 17 |     58    |      510,510  |      5      |     22
 8 | 19 |     77    |    9,699,690  |      5      |     27
 9 | 23 |    100    |  223,092,870  |      5      |     32

define a, b, c, and d for the residue representation of the
number N:
N ≡ a mod 2
N ≡ b mod 3
N ≡ c mod 5
N ≡ d mod 7.

The residue number system is readily extended to
include more states. For example, if a base 11 is added
to the representation, it is then possible to represent
2310 states. Table III shows the product and sum of the
first nine consecutive primes greater than or equal to 2.


The product of the primes indicates the number of
states of the number system, while the sum of the
primes is a measure of the size of the representation in
terms of digits. Table III also includes the number of
bits required to represent each prime base in the binary
number system.
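Since each digit is simply a least positive residue, the encoding reduces to independent modulo operations. The following sketch is a modern illustration added here, not code from the paper, and its names are invented; it reproduces rows of Table II for the bases 2, 3, 5, and 7:

    BASES = (2, 3, 5, 7)   # pairwise relatively prime; 2*3*5*7 = 210 states

    def to_residue(n, bases=BASES):
        """Tuple of least positive residues of n with respect to each base."""
        return tuple(n % m for m in bases)

    # to_residue(29) -> (1, 2, 4, 1), matching Table II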

RESIDUE ADDITION AND MULTIPLICATION

The residue number representation consists of several digits and is assumed to be in one-to-one correspondence with some positive integers of the real number system. The digits of the residue representation are the least positive residues of these real positive integers with respect to the different moduli which form the bases of the residue representation. It follows as a direct consequence of the structure of the residue number system and the properties of linear congruences that operations of addition and multiplication are valid in the residue number system subject to one proviso: the residue system must possess a number of states sufficient to represent the generated sum or product. If the residue number system does not have a sufficient number of states to represent the sums and the products generated by a particular finite set of real integers, then the residue system will overflow and more than one sum or product of the real number system may correspond to one residue representation. For a residue number with a sufficient number of states, an isomorphic relation exists with respect to the operations of addition and multiplication in the residue system and a finite system of real positive integers.

Each digit of the residue number system is obtained with respect to a different base or modulus. It follows, therefore, that the rules of arithmetic associated with each digit will be different. For example, the addition and multiplication of the digits associated with moduli 2 and 3 follow rules specified in Table IV. No carry tables are necessary since the residue number system does not have a carry mechanism. Addition of two residue representations is effected by the modulo addition of corresponding digits of the two representations. Corresponding digits must have the same base or modulus. Modulo addition of digits which have different bases is not defined. Multiplication in the residue system is effected by obtaining the modulo product of corresponding digits. The operations of addition and multiplication of two residue numbers are indicated by the following notation:

S = A ⊕ B
P = A ⊙ B.

TABLE IV
MOD 2 AND MOD 3 SUMS AND PRODUCTS

sum mod 2          sum mod 3
⊕ | 0 1            ⊕ | 0 1 2
0 | 0 1            0 | 0 1 2
1 | 1 0            1 | 1 2 0
                   2 | 2 0 1

product mod 2      product mod 3
⊙ | 0 1            ⊙ | 0 1 2
0 | 0 0            0 | 0 0 0
1 | 0 1            1 | 0 1 2
                   2 | 0 2 1

Consider a residue number representation with bases 2, 3, 5, and 7. We assume an isomorphic relation between the residue number system and the real positive numbers 0 to 209. An isomorphic relation then exists for the operations of multiplication and addition only if the product or sum is less than 210. The following examples employing residue numbers illustrate the addition and multiplication operations and the presence of an isomorphism or the lack of isomorphism in the case of overflow. Residue numbers will be distinguished by the use of parentheses.

29 + 27 = S = 56

29 ↔ (1 2 4 1)
27 ↔ (1 0 2 6)
56 ↔ (0 2 1 0)

  (1 2 4 1)
⊕ (1 0 2 6)
  (0 2 1 0)

The following operations are considered in performing the addition of the two residue representations:

1 + 1 ≡ 0 mod 2
2 + 0 ≡ 2 mod 3
4 + 2 ≡ 1 mod 5
1 + 6 ≡ 0 mod 7.

Consider the addition of two numbers which produce a sum greater than 209.

S = 100 + 200

  (0 1 0 2)
⊕ (0 2 0 4)
  (0 0 0 6)

The residue representation (0 0 0 6) corresponds to the real positive number 90. In this particular example, the sum has overflowed the residue representation. The resulting sum is the correct sum modulo 210:

300 ≡ 90 modulo 210.

Finite real number systems and residue number systems have the same overflow characteristics. The sum which remains after the overflow is the correct sum with respect to a modulus numerically equal to the number of states in the finite number system.
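Because there is no carry mechanism, both operations reduce to independent digit-wise modular arithmetic. The sketch below is an added illustration under the same assumptions (bases 2, 3, 5, 7; invented names), not code from the paper:

    BASES = (2, 3, 5, 7)

    def res_add(x, y):
        # modulo addition of corresponding digits; no carries propagate
        return tuple((a + b) % m for a, b, m in zip(x, y, BASES))

    def res_mul(x, y):
        # modulo product of corresponding digits
        return tuple((a * b) % m for a, b, m in zip(x, y, BASES))

    # res_add((1,2,4,1), (1,0,2,6)) -> (0,2,1,0)   i.e. 29 + 27 = 56
    # res_add((0,1,0,2), (0,2,0,4)) -> (0,0,0,6)   i.e. 300 = 90 mod 210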

The following is presented as an example of the process of residue multiplication:

P = 10 × 17 = 170

10 ↔ (0 1 0 3)
17 ↔ (1 2 2 3)
170 ↔ (0 2 0 2)

  (0 1 0 3)
⊙ (1 2 2 3)
  (0 2 0 2)

The process of multiplication involved consideration of the following relations for each digit:

1 × 0 ≡ 0 mod 2
1 × 2 ≡ 2 mod 3
0 × 2 ≡ 0 mod 5
3 × 3 ≡ 2 mod 7.

An overflow resulting from a multiplication is no different from the overflow resulting from an addition. Consider the product obtained from the residue multiplication of the numbers 10 and 100. The result in the modulo 210 number system is 160, since

1000 ≡ 160 mod 210.

SUBTRACTION AND THE REPRESENTATION OF NEGATIVE NUMBERS

The process of subtraction is obtainable in the residue number system by employing a complement representation consisting of the additive inverses of the positive residue representation. The additive inverse always exists, since each of the elements of the residue representation is an element of a field. There is no basic problem associated with the subtraction operation. There is, however, a problem associated with the representation of negative numbers. In particular, some mechanism must be included in the number system which will permit the representation of positive and negative numbers. This problem is discussed here and in the following section.

The additive inverse of a residue number is defined by the following:

a ⊕ a′ = 0.

The formula may be considered to apply to a digit of the residue system or equally well to the whole residue representation. Consider the following examples with reference to the modulo 210 residue number system:

a = (1 2 4 1)

then

a′ = (1 1 1 6),

since

  (1 2 4 1)
⊕ (1 1 1 6)
  (0 0 0 0)

The following examples have been chosen to illustrate the subtraction process and to some extent the difficulties associated with the sign of the difference:

D = A ⊖ B = A ⊕ B′.

We consider first the case where the magnitude of A is greater than B.

Let A = 200
    B = 100.

In residue representation,

B′ = (0 2 0 5)

and

  (0 2 0 4)
⊕ (0 2 0 5)
  (0 1 0 2)

The residue representation of the difference corresponds to positive 100 in the real number domain. We consider next the case where the magnitude of B is greater than the magnitude of A. Let

A′ = (0 1 0 3)

then

D = A′ ⊕ B

and

  (0 1 0 3)
⊕ (0 1 0 2)
  (0 2 0 5)

The difference (0 2 0 5) is the additive inverse of (0 1 0 2). Unless additional information is supplied, the correct interpretation of the representation (0 2 0 5) is in doubt. (0 2 0 5) may correspond to either +110 or -100.

The difficulties associated with whether a residue representation corresponds to a positive or negative integer can be partially removed by the division of the residue number range into two parts. This is exactly the scheme that is employed to obtain a machine representation of positive and negative natural numbers. For the system of natural numbers, two different machine representations of the negative numbers may be obtained and are commonly designated the radix complement representation of negative numbers and the diminished radix complement representation of negative numbers.

The complement representation for a residue code is defined in terms of the additive inverse. Thus, the representation of negative A is A′ where A ⊕ A′ = 0, and the range of A is restricted to approximately one half of the total possible range of the residue representation. This can be illustrated by consideration of a specific residue code. This residue representation, employing bases of magnitude 2, 3, 5, and 7, is divided into two parts. The residue representations corresponding

to the natural numbers 0 to 104 are considered positive.
The residue representations corresponding to the
natural numbers 105 to 209 are considered inverse
representations and associated with the negative integers from -1 to -105. The range of this particular
number system is from -105 to +104. The arithmetic
rules pertaining to sign and overflow conventions for
this particular number system are the same rules normally associated with radix complement arithmetic.
The complement representation does eliminate in
principle any ambiguity concerning the sign of the result of an arithmetic operation. However, there is a
practical difficulty. The determination of the sign associated with a particular residue representation requires the establishment of the magnitude of the representation relative to the magnitude which separates the
positive and negative representations. The determination of relative magnitude for a residue representation
is discussed in the next section, where it is shown that
the determination of relative magnitude is not a simple
problem.
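As a quick illustration of the complement convention, the sketch below is an added modern example with invented names, assuming the 210-state system above; the additive inverse is formed digit by digit and subtraction is performed by adding it:

    BASES = (2, 3, 5, 7)

    def res_add(x, y):
        return tuple((a + b) % m for a, b, m in zip(x, y, BASES))

    def res_neg(x):
        # additive inverse: x "plus" res_neg(x) gives (0, 0, 0, 0)
        return tuple((-a) % m for a, m in zip(x, BASES))

    def res_sub(x, y):
        return res_add(x, res_neg(y))

    # res_sub((0,2,0,4), (0,1,0,2)) -> (0,1,0,2), i.e. 200 - 100 = +100
    # res_sub((0,1,0,2), (0,2,0,4)) -> (0,2,0,5), read as either 110
    # or -100 until the relative magnitude (hence the sign) is known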

CONVERSION FROM A RESIDUE CODE TO A
NORMAL NUMBER REPRESENTATION

It is frequently desirable to determine the natural
number associated with a particular residue representation. The need for this conversion occurs frequently
in investigation of the properties of the residue system.
The residue representation is constructed in such a
manner that magnitude is not readily obtainable. The
presence of digit weights in the normal polynomial type
number representation greatly facilitates the determination of magnitude. However, it is possible to assign
a weight to each digit of the residue representation in
such a manner that the modulo m sum of the digitweight products is the real natural number in a consistently weighted representation. m is the product of
all the bases employed in the residue representation.
The conversion technique is known as the "Chinese
Remainder Theorem." The material which follows describes the conversion technique but omits the proof.
A simple and straightforward proof is found in Dickson.³
The proof does not refer specifically to residue number
systems, but rather to a system of linear congruences.
If so regarded, a system of congruences defines a component of a residue number system.
Consider a residue number system with bases
m1 . . . mt. The corresponding digits are labeled a1 . . . at. The following equations define the conversion
process:

m

. + atA t -mt == S mod m
a L. E. Dickson, "Modern Elementary Theory of Numbers,"
University of Chicago Press, Chicago, Il1., p. 16; 1939.

where

Ai (m/mi) ≡ 1 mod mi

and

m = ∏ mj  (j = 1, . . . , t).

The conversion formula for a particular residue number
system is now obtained.

m = 210

105 A1 ≡ 1 mod 2,  so A1 = 1
 70 A2 ≡ 1 mod 3,  so A2 = 1
 42 A3 ≡ 2 A3 ≡ 1 mod 5,  so A3 = 3
 30 A4 ≡ 2 A4 ≡ 1 mod 7,  so A4 = 4

105 a1 + 70 a2 + 126 a3 + 120 a4 ≡ S mod 210.

The conversion formula is now used to determine the
natural number corresponding to the residue representation (1 2 0 4).

105 (1) + 70 (2) + 126 (0) + 120 (4) = 725
725 ≡ S mod 210
S = 95.
The conversion process described above requires conventional multiplication and modulo addition.
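The weighted conversion can be written compactly. The sketch below is an added modern illustration of the Chinese Remainder Theorem computation above, with invented names; Python's pow(x, -1, m) supplies the multiplicative inverse:

    from math import prod

    BASES = (2, 3, 5, 7)
    M = prod(BASES)   # 210

    def from_residue(digits):
        """Recover the natural number S from its residue digits."""
        total = 0
        for a, mi in zip(digits, BASES):
            mm = M // mi              # m / m_i
            Ai = pow(mm, -1, mi)      # weight A_i: A_i (m/m_i) = 1 mod m_i
            total += a * Ai * mm
        return total % M

    # from_residue((1, 2, 0, 4)) -> 95, matching the worked example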
Other conversion techniques exist. In particular it is
possible by means of a deductive process to determine
the magnitude of a particular residue representation.
This requires both a knowledge of the nature of the
residue system and the natural number representation
associated with at least one residue representation.
Due to the deductive nature of the process, it is more
suitable for human computation than for machine computation. The process is explained using the residue
number of the previous example (1 2 0 4). The knowledge of the residue representation for unity which is
(1 1 1 1) is assumed. Consider the effect of changing the
second digit from one to two. The change adds the
product m1m3m4 = 70 to the number, since 70 is congruent to 1, modulo 3. The resulting residue representation (1 2 1 1) corresponds to 71. The effect of changing the third digit is to change the magnitude by some multiple of the product m1m2m4 = 42. The correct change in magnitude is 42x where 42x ≡ 4 mod 5; so 42x = 84 and the residue representation (1 2 0 1) corresponds to 155. The fourth digit is modified by the addition of a three. The effect of this change is determined by 30x ≡ 3 mod 7.
The magnitude change is 150. The sum of 150 and 155
modulo 210 yields the correct result 95, in correspondence with (1 2 0 4).

Sign determination for the residue code is dependent on the determination of a greater than or less than relationship. A possible method might involve the conversion techniques described previously. Such a scheme would involve the standard comparison techniques associated with the determination of the relative magnitude of two numbers represented in a weighted representation. An alternate conversion procedure yields a conversion from the residue code to a nonconsistently based polynomial number representation by means of residue arithmetic. Consider a residue code consisting of t digits. The t digits of the residue code are associated with t congruence relationships as follows:

S ≡ a_i mod m_i,  1 ≤ i ≤ t.

S is the magnitude of the number expressed in normal representation. It is also possible to express the number S as

S = a_i + A_i m_i,  1 ≤ i ≤ t.

A_i is the integer part of the quotient of S divided by m_i. In regard to a greater or less than relationship, the determination of A_i divides the range of the residue representation into m/m_i parts. We proceed to calculate A_t from the set of t equations given above. Let

S = a_t + A_t m_t,  A_t < m/m_t.

This equation is then used to replace S in the remaining t-1 equations, yielding t-1 equations of the form

A_t ≡ (a_i + a_t′) /m_t^i mod m_i,  1 ≤ i ≤ t-1

or

A_t ≡ d_t^i mod m_i

where /m_t^i is the multiplicative inverse of m_t with respect to base m_i. The multiplicative inverse is defined as⁴

x_t · /x_t^i ≡ 1 mod m_i.

d_t^i is the least positive residue of (a_i + a_t′) /m_t^i with respect to base m_i. a_t′ is the additive inverse of a_t.

Let A_t be expressed as

A_t = d_t^{t-1} + A_{t-1} m_{t-1},  A_{t-1} < m/(m_t m_{t-1}).

If this expression is substituted for A_t, a set of t-2 equations remains. The equations are of the form

A_{t-1} ≡ [d_t^i + (d_t^{t-1})′] /m_{t-1}^i mod m_i,  1 ≤ i ≤ t-2

or

A_{t-1} ≡ d_{t-1}^i mod m_i.

The system of equations shown below is generated by repetition of the above substitution process until no equations remain.

S = a_t + A_t m_t
A_t = d_t^{t-1} + A_{t-1} m_{t-1}
A_{t-1} = d_{t-1}^{t-2} + A_{t-2} m_{t-2}
. . .
A_3 = d_3^2 + A_2 m_2
A_2 ≡ d_2^1 mod m_1.

The equations are combined to yield

S = a_t + m_t {d_t^{t-1} + m_{t-1} [d_{t-1}^{t-2} + m_{t-2} (d_{t-2}^{t-3} + · · · )]}
  = a_t + m_t d_t^{t-1} + m_t m_{t-1} d_{t-1}^{t-2} + m_t m_{t-1} m_{t-2} d_{t-2}^{t-3} + · · ·

⁴ The existence of the multiplicative inverse requires that x_t and m_i be relatively prime.
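The substitution process above amounts to repeatedly stripping off one base by residue arithmetic. The sketch below is an added illustration of the same idea with invented names; for simplicity it strips the bases from the low end rather than from m_t, which yields an equivalent mixed-radix form:

    BASES = (2, 3, 5, 7)

    def to_mixed_radix(digits):
        """Digits (d1, d2, ...) with S = d1 + d2*m1 + d3*m1*m2 + ...;
        magnitude comparison is then possible digit by digit."""
        ds, ms, out = list(digits), list(BASES), []
        while ds:
            a0, m0 = ds.pop(0), ms.pop(0)
            out.append(a0)
            # replace S by (S - a0)/m0 in the remaining congruences
            ds = [((d - a0) * pow(m0, -1, m)) % m for d, m in zip(ds, ms)]
        return tuple(out)

    # to_mixed_radix((1, 2, 0, 4)) -> (1, 2, 0, 3):
    # 1 + 2*2 + 0*6 + 3*30 = 95, agreeing with the conversions above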

Fig. 7-Typical subsystem block diagram.

Fig. 8-System mock-up fixtures.

Subsystem Test
After the component has been completely evaluated,
the next step for system completion is to integrate the
components together into the various subsystems as
determined by a logical sequential build-up. Fig. 7
demonstrates an integration of one subsystem consisting of four components. Simulation equipment needed
in this phase is less than during component test. The
example shows control and decoder components that
have the facility for entering data into a special-purpose
computer which steps through a wired program cycle
and stores information on a magnetic drum. Parts of
this information are used in the control component during system operation. This makes the sub-system a small closed loop within the system. Logical tie-in and timing errors can be found and solved during this part of system completion. Simulation equipment for subsystem test usually consists of inhibiting signals that affect the closed-loop operation and generating all those other signals which are necessary to make the loop operate. In the example shown, X and Y coordinate data in Gray Code, simple operator control buttons, and radar antenna position signals were the only signals that needed to be simulated. Parts of existing component test fixtures can be used during subsystem test as they contain the necessary simulation equipment.

System Test and Evaluation

This phase is the culmination of all the test and evaluation effort that has been performed previously. All the elements, units, and components have been proved to perform within the framework of the several subsystems, and now it is necessary to prove complete system operation. This is done in two phases. Since the final military installation is a vehicle that has limited working space, a laboratory mock-up is provided. This mock-up, as shown in Fig. 8, simulates the trailer installation as closely as possible, yet provides ample working space for many more of the engineering personnel so that much system testing and trouble shooting can be carried on simultaneously in many areas of the system. An additional advantage gained by this 2-step operation is provided by the ability to modify the final wiring installation as required as problems are encountered in the mock-up test phase. All system errors will be discovered in the mock-up phase and corrections can be made to the equipment before installation into the vehicle. Upon completion of the tests in the mock-up area, the equipment is then transferred to the vehicle and the complete system can be integrated with a minimum of personnel due to the fact that the system has been completely tested and all system errors removed. In fact, the only difficulties to be encountered are wiring errors caused by human inefficiencies. The vehicle interior working area is shown in Fig. 9.

Fig. 9-Final system installation.

In the process of the design of the laboratory test equipment, many of the simulation concepts evolved are readily useful and on occasion can be incorporated


into the system as self-test features. This equipment can
be utilized in the initial system evaluation as well as
later during normal test modes of system operation.
Since the end user of this equipment will be military
personnel, many self-test and automatic indicator devices were incorporated to decrease the training requirements for operation and maintenance of this equipment.
Another requirement that is very often specified for
military equipment is that of providing operation for
23 out of 24 hours. This fact dictates the requirement
for having a very rapid means of performing operational
and preventive maintenance checks by semiskilled
personnel.
Military requirements include a controlled complete
system test to prove that the system meets the initial
specification. A comprehensive system test plan is most
often written by the system engineers to test all the
functions of the system. During this test only external
system inputs must be simulated; following the test, the
system is ready for operational use and field evaluation.

SYSTEM TEST PERSONNEL TRAINING

The previously mentioned steps in providing for sequential testing of all components up to the complete integration for system test and evaluation allow certain personnel to acquire gradually the system knowledge necessary to perform efficiently and rapidly the complex task of testing such a large system. It is obvious that no one individual, no matter how magnificently endowed with mental powers, can be expected to understand all necessary details of such a complex system containing equipments involving such diverse fields as displays, conversion, and data processing. A plan was evolved for certain specialists to become facile in the over-all system concepts, yet utilize certain portions of their specialty to as large an extent as possible. This control is achieved in the following manner: in the initial design phases, each component or allied group of components is assigned to a cognizant circuit engineer whose responsibility during this phase is to design the logical circuitry for the component, or in the case of the display subsystem, to implement the original specifications. Once these data have been released for equipmenting and packaging, this same engineer proceeds to design and build the necessary unique test equipment for component evaluation. As part of this task the engineer also writes the procedurals to be followed in the testing of the component. This is the first step in causing the engineer to investigate external requirements of the component assigned to him. As the component is integrated into a subsystem area, it is necessary for the engineer to become more familiar with the input-output requirements of the adjacent components, so that, while he remains a specialist for his assigned component, his system knowledge must perforce increase because of the interdependency of the components in the subsystem. Since all components have been completely tested, there need be only one engineer now assigned to each subsystem. The remaining component engineers, however, are available for consulting as needed. When all the components are finally integrated as a system, there remain but a few engineers necessary for systems testing, each with a broad knowledge, rather than many component engineers with limited specialized knowledge. Final installation can be completed more efficiently with a minimum of personnel.

CONCLUSIONS

It is apparent that any complex system can be tested and evaluated by a step-by-step instrumentation. Providing the necessary special-purpose instrumentation has proved to be more rapid and economical than the accumulation of general-purpose testing devices. In the testing of special-purpose computer components within a system, there are many instances in which general-purpose instrumentation devices would not suffice, no matter how much and how varied the instruments could be interconnected. In each of the stages of the system integration, particular classes of errors and failures can be uncovered. During element tests, electronic part failures and mechanical errors are discovered and corrected. After element testing, each element is considered operative and the troubles found in unit tests cannot be attributed to the elements. During unit testing, logical and timing design errors can be uncovered and intra-unit connections are ascertained to be correct. At the completion of the unit test, each unit is considered to be completely operative. Therefore, during the component test phase, any difficulties discovered cannot be attributed to the unit, but rather to logical tie-in errors between units and inter-unit wiring. Similarly, the problems within the subsystem test are related to only those difficulties encountered in integrating more than one component because of the completeness of the component evaluation. System testing is merely an extension of the previous statements, but now referring to problems encountered in integrating subsystems. The sequential building of test complexity gives us the advantage of solving small problems first before becoming involved with the intricacies and troubles inherent in any large system integration.

Finally, the experience of the personnel involved in the test build-up enables a better understanding of the system operation, thereby decreasing the time required to integrate a large system made up of many discrete and special components.

1959 PROCEEDINGS OF THE WESTERN JOINT COMPUTER CONFERENCE

159

Automatic Failure Recovery in a Digital
Data-Processing System
R. H. DOYLE,† R. A. MEYER,† AND R. P. PEDOWITZ†

INTRODUCTION

PERFECT reliability in digital computers has not yet been achieved by simply designing ruggedness into the equipment components. Nevertheless, it
is essential for a computer to perform dependably under
all conditions. In certain computer applications, errors
resulting in unscheduled maintenance delays can be tolerated, but only at the cost of expensive computer time.
In some special military and civil applications, such as
the SAGE system and air-traffic control systems, poor
equipment reliability can be disastrous since input information not processed when the system is inoperative
can become obsolete during the time required for manual recovery.
Although it is virtually impossible to guarantee that
failures will never occur, it is possible to maintain high
over-all reliability of the system by immediately recovering from these failures with a negligible loss of operational time.
The FIX program was designed to effect automatic
recovery from failures either by


1) reinitiating the operation that failed,
2) preventing the operational program from processing incorrect data, or
3) determining the effect that a particular failure
would have on a word of information and then
modifying the information to compensate for this
failure.
The error-detection circuitry of the computer is relied
upon to indicate the existence of an error in computer
operations. When an error is detected by this equipment, the FIX program will be operated in an attempt
to diagnose the failure and to compensate for it.
Although FIX was specifically designed to work with
the Air Defense Program of the SAGE Computer, the
technique employed may be modified for other operational or production systems.
Several other methods for maintaining system reliability have already been developed. Some of these methods will be briefly outlined in the preliminary section of
this paper, followed by a detailed description of the
structure and operation of the FIX program.
RELIABILITY TECHNIQUES

In a complex computer system, component quality
standards are necessary but cannot in themselves insure

† IBM Corp., Kingston, N. Y.

complete reliability. To approach the goal of high reliability, a more sophisticated viewpoint has been taken
in designing both the equipment and the computer programs.
In the SAGE system, for example, the complete central computer has been duplexed, and the two computers
alternately perform the operational program and a
standby program on a 24-hour schedule. Special alarm
circuits provide for alerting the standby computer when
the active computer breaks down so that the standby
machine will prepare to assume the active role. A portion of the standby-computer time is devoted to attempting to predict potential failure conditions before
they occur. This technique, known as "marginal checking," consists in operating and testing various circuits
while an abnormal voltage is supplied to them. In this
simulated aging of the equipment, the potential failure
spots are anticipated.
Modern computing equipment is usually designed
with built-in circuitry¹ that will automatically detect
the majority of errors that occur during system operation. Many operational programs are written to take
advantage of this circuitry by including alarm-interrogation routines which will automatically repeat any
operation that generated an alarm.
Error-checking routines have also been incorporated
directly into operational programs.² In programs where
it is necessary to store blocks of information on auxiliary
storage drums or tapes before reusing it, the accuracy of
the transferred information may be checked by comparing the arithmetic sum of the block before it is
stored to a similar sum obtained after the block is
brought back from storage. If the two check sums are
not equal, the reliability of the information block cannot be depended upon and the program should be rerun. If the program is of considerable length, this task
may be shortened by periodically saving the environment of the program as it operates. This will provide a convenient recovery point should it be necessary to regenerate a particular block of information.
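A minimal sketch of such a check-sum comparison follows; it is an illustration added here with invented names, and the word width is an assumption, not taken from the paper:

    WORD = 1 << 16   # assumed register width for the illustration

    def checksum(block):
        """Arithmetic sum of the words in a block, kept to one word."""
        return sum(block) % WORD

    def transfer_ok(block_before, block_after):
        # compare the sum taken before storage with the sum after readback
        return checksum(block_before) == checksum(block_after)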
Elaborate equipment and coding systems, such as the
Hamming Code,³ can provide for automatic self-correction of errors and for detection of multiple errors. This
¹ C. J. Swift, "Machine features for a more automatic system for digital computers," J. Assoc. Comp. Mach., vol. 4, p. 172; April, 1957.
² J. H. Brown, J. W. Carr, III, L. Boyd, and J. R. McReynolds, "Prevention of propagation of machine errors in long problems," J. Assoc. Comp. Mach., vol. 3, p. 348; October, 1956.
³ R. W. Hamming, "Error detecting and error correcting codes," Bell Sys. Tech. J., vol. 29, p. 147; 1950.


is accomplished by dividing the information to be
checked into groups of bits and by parity-checking each
group. The groups of bits are chosen in a manner such
that an error in any bit in the entire word will generate
alarm indications for a unique combination of these
groups. Conversely, incorrect parity counts for any
combination of these groups will uniquely identify the
erroneous bit in the word. Since the incorrect bit can be
identified, circuitry can be provided to correct the error.
This ingenious coding system achieves excellent results,
but only at considerable expense. Channel capacity of
the equipment must be increased to provide for enough
checking bits to represent a number equal to the total
number of information bits plus the checking bits.
Although FIX incorporates some of the techniques
described above, the distinguishing feature of this program is that it achieves automatic failure recovery by
means of programming techniques after an error has
been detected by machine circuitry. While variations of
the FIX concept will be necessary for other operational
systems, depending upon the error-detection circuitry
of the computer and upon the form of the operational
program, this paper will serve to illustrate the general
principles of the FIX technique.
Errors in a computer can occur either during the
actual processing of data, such as sorting, collating,
arithmetic computations, etc., or during the transfer of
information between the central computer and the various auxiliary drum storage units. Since the Air Defense
Program requires a large storage area, it is stored on
auxiliary drums, and a considerable number of information transfers continually occur during normal operations as the various subprograms and their data tables
are brought into core memory to be operated. It is extremely important that these transfers be performed
correctly; hence a large portion of this paper discusses
the technique of monitoring and correcting errors in
such transfers.
Errors incurred during either central computer or
transfer operations may be either transient or "solid" in
nature. Errors which are due to high stresses of voltage,
temperature, shock, etc., and which have a low probability of recurring, will be referred to as "transient errors." Those errors which are a result of a persistent
equipment malfunction and which can continually be
expected to reappear whenever the submarginal area of
equipment is used, will be referred to as "solid errors."
The FIX program has achieved a high degree of success
in automatically recovering from most of the classes of
errors described above.
PROGRAM DESIGN

Storage requirements for the present version of the
FIX program are 50 core-memory registers and 5000
auxiliary-drum-storage registers. This represents approximately 3 per cent of the total storage available in
the SAGE computer. During any alarm condition, the
short FIX routine that is permanently stored in core

memory will save a portion of the operational program
in order to provide a working area for the main section
of FIX. This same routine will then read the appropriate diagnostic FIX routine into core memory.
Fig. 1 is a flow chart of the logical structure of the FIX
program. This structure may be analyzed in terms of
four functions:
1) Alarm monitoring and control
2) Diagnosis
3) Logging
4) Recovery.
These functions are closely related and, although the
above list represents the over-all time sequence of the
operations to be performed, there will be considerable
overlapping in the detailed structure. Since the design of
the FIX program is a function of the makeup of the
operational program and of the system to be monitored,
some of the features of the SAGE system, including the
error-detection equipment of the computer and the
structure of the Air Defense Program, will be discussed
during the analysis of the FIX program.
ALARM MONITORING AND CONTROL

The operation of the FIX program is greatly dependent upon the means by which FIX can be notified
of an error occurring in the monitored system. In the
SAGE computer, this is provided for by error-checking and alarm-control circuitry.
Self-checking is performed by the use of parity-code
generation and checking circuits that determine if the
correct number of bits in a binary word have been transferred from register to register during the normal dataprocessing operation. This is accomplished by increasing channel capacity to allow for one redundancy bit to
be contained in the information transferred. As each
instruction or data word is stored in the computer, it
passes through a buffer register, which counts the number of "one" bits in the word. The parity bit associated
with each word will be set to a "one" or a "zero" to give
an odd number of "one" bits in the word, including the
parity bit.
The parity bit will then be stored with the rest of the
word. When this instruction or data word is referred to
by the program, a parity-check count is again performed in the buffer register as the word is brought out
of storage. If the total parity count is not still odd at this
time, the word is presumed to be incorrect and a parity
alarm will be generated. If no error is detected, the operation will continue and a new parity assignment will
be performed prior to storing the word after it has been
operated upon. The parity circuitry is used to check the
correctness of all data transfers that occur in the system. It should be noted that a major shortcoming of the
parity-checking system is that if two bits in the word
are altered as the result of some failure, the odd parity
count will not be disturbed and the error will not be
detected by the parity circuitry. Such an error might
remain unnoticed, in which case the final result would

Fig. 1-Flow chart of the logical structure of the FIX program.


be incorrect, or it might result in other error indications
which could be detected.
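The odd-parity rule described above can be stated in a few lines. The following is an illustration added to this text, not SAGE circuitry or code; the names and word width are assumptions:

    def parity_bit(word):
        """Set the parity bit so the total count of one-bits is odd."""
        ones = bin(word).count("1")
        return 0 if ones % 2 == 1 else 1

    def parity_ok(word, p):
        return (bin(word).count("1") + p) % 2 == 1

    # A single altered bit fails the check; two altered bits cancel out,
    # which is the shortcoming noted above.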
Automatic detection of other abnormal conditions in
the SAGE computer is also provided by circuitry.
Sometimes because of circuit failure or an undetected
parity error, or because of a peculiar set of environmental circumstances unanticipated by the program
designer, the computer can begin a nonterminating
cycle of meaningless operations, commonly referred to
as an "illegal loop." Similarly, the computer might begin
an inactive period during which it does nothing but wait
for some anticipated event. If for some reason the event
can never occur, the computer will remain in this inactive condition indefinitely. Special circuitry designed to
impose time limits on such conditions can, upon sensing
an illegal delay in computer operations, terminate the
condition and by means of an inactivity alarm indicate
to the computer that the delay existed. The inactivity
alarm will be activated if a pulse is not generated by the
program at regular intervals or if too many of these
pulses are generated within a given time period, usually
about eight seconds. The programmer must, therefore,
insert the pulse-generating instruction at regular intervals throughout his program if he intends to use this
circuitry. If the program operates normally, the pulses
will be generated at regular intervals. If the program is
"illegally" delayed in a routine, or if it continuously
loops through a few instructions, either too few or too
many signals will be generated, and the inactivity alarm
activated.
Finally, in certain instances an error in a series of
arithmetic operations may result in the attempted development of a sum or quotient which has increased
in size beyond the physical limits imposed by the
register capacity of the computer. This condition, too,
can be sensed by machine circuitry in the SAGE computer and indicated by means of an overflow alarm.
The programmer can choose various modes of operation by using switch settings when planning the reactions of the SAGE computer to these alarm conditions.
These options can be set to have the computer automatically
1) stop on alarms,
2) branch on alarms, or
3) continue on alarms.
Under option 3 the program retains the ability to
interrogate the alarms at some convenient time before
taking any automatic action.
The mode of operation used by the FIX program was
determined by the nature of the errors that would be
encountered. Certain types of computer malfunctions
demand immediate transfer of control to the FIX program. For example, if there is a parity alarm when the
computer refers to its internal memory for a new instruction step or operand, further operational steps
would be useless and might even destroy information.
An inactivity alarm, too, will cause an immediate transfer, since to continue in this case means to continue the
abnormal function. The overflow alarm can also cause
an automatic branch to FIX, but this feature is designed so that the alarm may be suppressed in the operational program when it is known that overflows may
occur during normal operation.
An automatic transfer of control to FIX is effected
by setting the core memory parity, inactivity, and overflow alarm switches in the "active" position and the
stop-branch switch in the "branch" position.
Transfers of data between core memory and magnetic drums may be monitored in another manner.
Erroneous information that might be included in such
a transfer cannot adversely affect the computer until
used. Therefore, a drum parity alarm need not cause
an immediate branch to the FIX program. Instead, the
drum-parity-alarm switch is set to continue on alarm.
At the conclusion of every block transfer, FIX checks
the drum-parity-alarm indicator by means of a program instruction. Since the Air Defense Program was
designed so that all transfers are controlled in one section of the program, the insertion of one interrogation
instruction is the only modification of the operational
program necessary to enable FIX to perform its entire
monitoring function. If the alarm indicator is sensed inactive, there is no change in the normal sequence of
events in the operational program. If at the end of a
block transfer of data into core memory the appropriate
alarm indicator is tested and found to be active, a programmed branch to the drum recovery section of FIX
is effected.
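In outline, the interrogation amounts to one conditional branch at the end of each block transfer. The fragment below is a hedged sketch of that control flow only, with invented names; the actual mechanism is an alarm indicator and a single interrogation instruction, not Python:

    def end_of_block_transfer(drum_parity_alarm_active, enter_drum_recovery):
        # the one interrogation instruction inserted in the transfer section
        if drum_parity_alarm_active:
            enter_drum_recovery()   # programmed branch to FIX
        # otherwise the operational program proceeds unchanged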
DIAGNOSIS

At this point FIX will attempt to perform all diagnostic work necessary for recovery. Where time permits, FIX will also perform several diagnostic operations that are desirable for corrective maintenance
studies. In all cases the results will be saved for logging
and, depending on the circumstances, they may also
be displayed immediately.
In the event of any type of alarm, initial FIX action
would save the contents of all the computer registers,
such as the accumulator, index registers, buffer register,
program counter, etc., as they existed at the time of the
error. This information is used as an aid to diagnosis, as
part of the record of the error for maintenance purposes,
and also to enable FIX to restore the environment of
the Air Defense Program prior to effecting a recovery.
The type of error, such as would be indicated by a drum
parity alarm or memory parity alarm, will be determined by considering the mode of entry to the FIX program and by sensing the various alarm indicators.
If there has been a memory parity alarm, FIX will
refer to the program counter setting as it was at the
time of the error and will locate the incorrect information that was being operated upon at that time.
When the erroneous information is referred to a second time, a second memory parity alarm may or may


not be generated. Considering the case where a second parity alarm is not generated, FIX will continue its diagnosis by comparing this instruction or operand to the word that was parity-checked in the buffer register at the time of the error. If the contents of the buffer register match either the instruction or data word, FIX concludes that there was a false parity error, i.e., an error in the parity-checking circuitry itself, and that the operation was in fact completed correctly.

If upon comparing the buffer register to the memory register FIX finds that the buffer register was completely zero at the time of the error, this would indicate that the alarm was probably due to a failure to get a start memory pulse and that no operation had begun when the alarm was generated.

Finally, a condition may arise where the buffer register is neither all zero nor equal to the instruction or data word in memory that supposedly generated the alarm. This would indicate a memory readout failure and an incorrectly completed operation. If a second parity alarm is generated on the second reference to the instruction or operand in core memory, the error is considered genuine, and, once again, the operation could not have been completed correctly.

The results of the investigation of each memory parity error are included in a record for maintenance purposes and will also serve as a guide to proper recovery action. No diagnostic action is taken in the event of an inactivity or overflow alarm other than saving the contents of the computer registers, and recording the type of alarm and the alarm exit location for the maintenance records.

Errors incurred during the transfer of information from the magnetic drums to the central data-processing unit are treated according to the class of drum involved. Input status drums represent the supply of new information to the computer from an external source, such as a radar site. Under ordinary circumstances input data cannot be stored on a status drum by a program, nor is it possible to transfer the same information from a status drum to the central computer more than once. Consequently, a status drum is not normally available for complete testing and diagnosing by FIX without undue delay of the operational program.

Recovery from status-drum errors will vary according to whether the failure was transient or solid. Therefore, when a status-drum error is detected, FIX will examine the block of transferred information in core memory and, on the basis of the number of errors found, classify the failure as transient or solid. An erroneous status-drum transfer is classified as a solid failure if the number of errors contained in that transfer is more than five (an arbitrary figure). Further diagnosis for recovery and maintenance consists in determining the identity of the failing drum-input channels and the total number and frequency of similar errors.

Addressable drums serve as an auxiliary information-storage area. All data transfers to and from addressable drums are performed under program control. Addressable drums are therefore readily available for diagnosing by the FIX program.

Upon noting an erroneous transfer of this type, FIX will search the block of transferred information in core memory for the information in error. The original version of data transferred incorrectly will remain unchanged on the drum until it is deliberately replaced with new data. For this reason, when an incorrect word is found, FIX will locate the original information on the drum and repeat the transfer of the incorrect word to determine whether the error was transient or solid. If the second attempt succeeds, a correct version of the word is now properly transferred, and by a comparison of the correct and incorrect information, the exact cause of the failure may be determined and saved for logging. This process is repeated if a second word in the transfer is in error.

The timing requirements of the operational program will not permit the luxury of individually treating more than two such words solely for maintenance purposes. If more than two words in a given transfer are found to have been in error, the remainder of the erroneous transfer is repeated at once, sacrificing additional diagnostic information for increased speed in recovery.

If any one of the recovery transfers is not successful on the second attempt, the error is considered to be of a solid nature. Further diagnosis is necessary, but recovery can still be achieved. Test data of known structure may be transferred over the same channels and checked by return transfer. Since the failure is solid, these transfers will also fail, but this time the exact nature of the failure can be determined. FIX maintains a history of results obtained in this way in a Learning Table. This table is used by FIX to compensate for future errors and also serves as a guide for corrective maintenance.

Fig. 2(a) represents an abbreviated word, consisting of five bits plus a parity bit, which FIX has determined had been incorrectly transferred into core memory from an addressable drum. The total number of "one" bits, including the parity bit, must be odd in order to be correct. The parity alarm which identified this word for FIX was a result of a check which indicated only that an even number of "one" bits was transferred in this word. Therefore, further diagnosis is necessary to determine which bit has been modified in transit.

Fig. 2(b) illustrates the standard method of testing the transfer channels to and from an addressable drum. By using a pattern of all "ones" and then of all "zeros," the channels may be tested for evidence of bit modification. This method, however, precludes the possibility that a unique pattern of bits in a word contributed to the failure of one particular channel. Investigation has established that failures may sometimes be uniquely associated with one word pattern and not with another.

In the critical circumstances under which FIX is activated, it is felt that the extreme importance of being accurate has justified using a more detailed testing


[Fig. 2 diagrams: (a) a five-bit word plus a parity bit originally assigned as a one by the computer; (b) a test pattern of all ones or all zeros written onto the drum with the parity bit set to zero prior to transfer, the test word read back into core memory with bit D failing to return correctly; (c) the modified original word written onto the drum with the correct parity count, bit D again failing to return.]

Fig. 2-(a) Even parity count indicates word incorrectly transferred but does not indicate which bit failed. (b) Standard test-pattern technique for checking transfer channels; comparing "before" and "after" words discloses the discrepancy. (c) FIX technique using the original word [see 2(a) above] to check one channel at a time. The bit discrepancy is double-checked by "before" and "after" comparison.

procedure than the common test pattern. FIX checks the
transfer channels by using the original pattern of bits in
the incorrect word as nearly as possible.
When a solid failure is detected, FIX first checks the
Learning Table for a history of solid errors on this particular drum. If such records exist, FIX checks each
transfer channel indicated by the Learning Table as
having failed before. Fig. 2(c) illustrates a channel being
tested in this manner. A bit which was suspected of having been lost in transit is changed to a "one" in core
memory, and the entire word is then transferred to and
from the drum to test this channel. If this bit fails to return as it was sent out-in this case as a "one"-this
discrepancy is recorded. The Learning Table search will
usually take no more than about 50 msec. If only one
bit is found to be erroneous in this word at the completion of the Learning Table test, the results will be used
to effect a recovery. If the Learning Table examination
is not fruitful, recovery may still be achieved by further
diagnosis.
In this event the next step would be for FIX to conduct a complete examination of the word. This is basically the same as the Learning Table test, except that the
entire word is tested for evidence of failure instead of
only those channels indicated by the Table. Each bit in
succession is complemented, transferred to and from
the drum along with the rest of the word, tested for
evidence of modification in transit, and restored to its
original state.
Any bit that fails to return in the same form as it
was sent out is recorded. The bit-by-bit findings are
accumulated until the end of the examination, which
takes about one second. If the complete examination
discloses that only one bit has failed in this word, the
results will be used to effect a recovery.
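Read together, the Learning Table test and the complete examination differ only in which channels are retried. A minimal sketch, assuming a simulated drum round trip (all names invented; a real failing channel is hardware, and is modeled here by a bit position that drops "ones"):

    # Sketch of the channel tests described above. round_trip stands in
    # for a transfer to the addressable drum and back; the simulated
    # solid failure drops "one" bits on channel 2.
    def round_trip(word, failing_channel=2):
        return [0 if i == failing_channel else b for i, b in enumerate(word)]

    def examine_word(word, channels_to_test):
        """Complement each listed bit in turn, transfer the word out and
        back, and record any channel whose bit fails to return as sent."""
        failed = []
        for i in channels_to_test:
            probe = list(word)
            probe[i] ^= 1                  # bit under test, as in Fig. 2(c)
            returned = round_trip(probe)
            if returned[i] != probe[i]:
                failed.append(i)
            # the word is restored before the next bit is tested
        return failed

    word = [1, 0, 0, 1, 0, 1]   # word as received in core memory (bit 2 dropped)
    print(examine_word(word, [2]))               # Learning Table test -> [2]
    print(examine_word(word, range(len(word))))  # complete examination -> [2]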
LOGGING

All information gathered by the FIX program will be
either printed on the teletype monitor, displayed immediately, or recorded in the Learning Table. The recovery that will follow is meant to improve the system reliability, not to shield equipment failures. If failures were
not logged when recovery was achieved, the equipment
could deteriorate with age until, without warning,
catastrophic failure occurred.
During any alarm condition where automatic transfer occurs, FIX saves the current contents of all the
computer registers for logging. The various alarm indicators will be tested to determine which type of alarm
occurred. This information, together with the identity
of the operational routine interrupted by the alarm, the
data that was being processed at that time, and the details of the error as diagnosed will be logged on the teletype printer immediately after the error occurs. A record of the number of such errors will also be maintained
on a display.


Status- and addressable-drum errors are internally recorded in the Learning Table and are also displayed as
they occur. Each time a status-drum error occurs, the
drum field in error and its input channel are recorded
and displayed together with the total number of such
errors recorded up to this time. The record and display
for addressable-drum errors includes the drum field,
failing transfer channel and the nature of the failure,
i.e., whether the erroneous bits were "ones" or "zeros,"
whether the failures were solid or transient, and the
total number of such errors.
Enough pertinent information concerning each failure incident is logged to permit maintenance study
teams to attempt to duplicate the trouble and keep detailed statistics on the reliability of the circuits in the
system. Maintenance personnel can resolve machine
difficulties only if this kind of logging is done. As experience is gained, the difficulties will recur less frequently,
since equipment and program design improvements
will be suggested by the statistics.
RECOVERY

The function of the recovery sections of FIX is to
perform all operations necessary to restore control of
the computer to the operational program. The choice
of the method depends on the following factors:
1) The nature of the interrupting malfunction
2) The results of the diagnosis
3) The time spent in detecting and diagnosing
4) The number and frequency of this type of failure incident.

Some failure incidents in the central computer may
be rectified by restoring the contents of the computer
registers and internal memory to their original values,
as saved by the alarm monitoring and control sections
of FIX, and then transferring computer control back to
the operational program at the point of interruption.
This recovery method is most efficient, requiring up to
about 30 msec, and is used, wherever possible, in the
case of memory parity errors.
If diagnosis indicates that the operation was completed correctly, as in the case of a false parity error, the
environment of the Air Defense Program will be restored, and recovery will be effected by reinitiating
operations with the next program step following the one
that operated at the time of the error. The exception to
this would be if the interrupted instruction involved a
transfer operation, in which case program control would
be returned to the same transfer instruction.
If the instruction was never completed correctly, as
in the case of a failure to get a start memory pulse or a
memory readout failure, recovery will be attempted by
reinitiating the Air Defense Program with the instruction that originally failed.
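The restart rules just given amount to a small dispatch on the diagnosis. A schematic restatement (names invented; the word-replacement and startover cases are described below):

    # Schematic restatement of the restart rules above.
    def restart_point(diagnosis, pc, was_transfer_instruction):
        """Program-counter value at which the Air Defense Program is
        reinitiated after its environment has been restored."""
        if diagnosis == "false_parity":
            # The operation actually completed: resume at the next step,
            # except that an interrupted transfer is itself repeated.
            return pc if was_transfer_instruction else pc + 1
        if diagnosis in ("no_start_memory_pulse", "memory_readout_failure"):
            return pc   # repeat the instruction that originally failed
        raise ValueError("handled by word replacement or startover (see below)")

    print(restart_point("false_parity", pc=100, was_transfer_instruction=False))  # 101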


When an error is diagnosed as a genuine memory-parity error, the erroneous word must be corrected in
core memory before the operational program can be
resumed. At the present time FIX cannot automatically
regenerate a correct version of a faulty word in core
memory. However, since all of the program instructions and most of the data that are used by the Air Defense Program are stored on magnetic drums, FIX will
attempt to locate a correct copy of a word in auxiliary
storage and substitute it for an erroneous word in core
memory. Whenever this is possible, recovery will then
be achieved by reinitiating the Air Defense Program
with the correct version of the instruction that originally failed.
If any of these methods are not successful in achieving
recovery, i.e., if the identical failure is immediately encountered, or if these methods are not feasible, as in the
case of inactivity or overflow errors, an alternate method
of recovery is available.
The Air Defense Program consists of a series of subprograms, each of which operates in its turn upon the
latest input data fed to it. The entire process is an
iterative one. After the last program has been completed, the first program will be called on to repeat its
function upon the latest available data.
When an operation cannot be resumed at the point
of interruption, recovery can often be achieved by reinitiating the Air Defense Program at the beginning of
a cycle or frame of operations so that completely new
input data may be processed. The startover procedure
takes approximately five seconds. Thus, an abnormal
condition which was the result of a transient failure will
not degrade the performance of the system. A more
serious or solid failure in the central computer will of
necessity cause repeated restarts of the operational program. It is left to the discretion of the operator to impose limits on the number or frequency of the attempts
to recover in this manner.
FIX does not distinguish between central computer
errors that occur while the Air Defense Program is operating and those that occur during FIX operation. If a
second error is encountered while one error is being corrected, the later error will take precedence. FIX will attempt to correct a central computer error within itself
in exactly the same manner as an error occurring within
the Air Defense Program. The original error, however,
will then be disregarded, recovery of the Air Defense
Program being achieved via startover. Of course a serious error in a vital section of FIX will preclude automatic recovery, and manual intervention will be necessary.
Recovery from transient status-drum errors is
achieved in another manner. In the event of a status-drum failure, the testing procedure is limited by the
fact that there is no practical way to make an experimental transfer between the central computer and a

status drum without a prolonged interruption of the Air
Defense Program. Since FIX cannot determine exactly
what failed in this type of transfer, it is not possible to
estimate what the data should have been.
The solution is to render the erroneous data harmless
by eliminating them from core memory, adjusting the
Air Defense Program's records to compensate for the decrease in the amount of input information to be processed, and then returning to the logical operation in the
main program that would normally follow the status-drum transfer. The momentary loss of some input data
to the computer from a radar site, for example, will
have no more effect on the Air Defense Program than
would the slight interruption of radar fixes that are expected to occur during normal operation. Such temporary losses, or "miss fixes," as they are commonly called,
are not unusual and the Air Defense Program provides
for this by extrapolating or filling in for missing radar
fixes when they occur. In this manner an accurate plot
of the velocity of a hostile ship can easily be maintained
despite the fact that a few positional fixes are missing,
if the available fixes are dependably correct. A much
smaller number of incorrect fixes can destroy the accuracy of a course plot if no means is provided to prevent these fixes from being included in the plotting computations; the elimination of incorrect input data is therefore much more desirable than treating such information as valid.
FIX cannot allow the situation to continue where a
large number of consecutive errors reduces the flow of
information to the Air Defense Program below an acceptable minimum. An excessive number of errors in one
status-drum transfer will require that recovery be attempted by reinitiating the Air Defense Program at the
beginning of a new frame of operations so that new
input data can be called for and processed.
A dynamic display of all facts which are pertinent to
this type of error is up-dated each time an error occurs.
By observing the display, maintenance men may be
able to determine the input channel from which most of
the errors are coming. They may thus be able to eliminate or reduce the quantity of status-drum errors by
substituting a spare input channel without interrupting
the operational program.
Recovery from addressable-drum failures may be
achieved by modifying a bit in the incorrect word or
words in core memory according to the nature of the
failure. Transient errors are corrected during the diagnosing operations.
In a solid failure during the transfer of a block of information to core memory from an addressable drum,
many words may be expected to have transferred incorrectly. In an operational period, time does not permit FIX to diagnose each word individually before correcting it. When FIX is satisfied that the Learning
Table test or the complete examination has disclosed


the failing transfer line for one word, it will use this information to correct this word and the remaining words in error in the same transfer. In Fig. 3(a), let us say that Words 1-10 were incorrect in the block of transferred information shown. Suppose that a complete test of Word 1 indicated that channel A had failed, and that the bit in position A should have been a "one." After correcting Word 1 in core memory as shown in Fig. 3(b), FIX would continue to examine the remaining incorrect words. If a check of bit position A in each incorrect word indicates that it was possible that the identical error occurred in each incorrect word, these words would also be corrected in the same manner as Word 1. [See Fig. 3(b), Words 2-7.]

In an actual transfer, several thousand words might be involved. If a solid failure occurred, the number of words in error could be expected to be quite large. Consequently, if the error initially found in the complete test did not apply to all of the incorrect words in a transfer, it is felt that this fact would soon be obvious. Continued examination of the remainder of the incorrect words should reveal at least some words which could not have failed in the same manner. In Fig. 3(a), Words 8 and 10 now contain a "one" in bit position A. Therefore, that bit could not have dropped during the original transfer. This would indicate that another channel had failed in transferring these words and might also have failed during the transfer of any of the previously "corrected" words. This inconsistency would invalidate the "corrections" made earlier to these transferred data. In such a situation, recovery is effected by restarting the operational program at the beginning of a frame so that new input data may be processed. If more than one channel is found to have failed, this information will be immediately logged prior to initiating a startover of the Air Defense Program. The maintenance men may then determine whether or not it will be possible to permit the Air Defense Program to continue to process new input information.

Fig. 3-(a) A portion of a block of data transferred into core memory from an addressable drum. Words 1-10 were incorrectly transferred. (Note the even parity count in each erroneous word.) An examination of Word 1 indicated that bit A had dropped in transit. This bit would be corrected in Word 1 and all other incorrect words which could have failed in the same manner. (b) Bit position A has been corrected in Words 1-7. The inconsistency in Word 8 prevented further attempts at recovery in this transfer.

The main section of the FIX program is also stored on an addressable drum. If an error is encountered in reading FIX into core memory, the diagnostic and repair sections cannot be used to correct this error. Instead, the permanent FIX routine in core memory will initiate a startover of the Air Defense Program. If this technique is unsuccessful for recovering from an error, manual intervention will be necessary.

RESULTS

After a six-month study of failures during 87 missions of the Air Defense Program of the SAGE computer, it was determined that the causes for failures were distributed as follows:

Drum parity (status and addressable): 53 per cent
Memory parity: 10 per cent
Inactivity: 21 per cent
Miscellaneous (power failures, stopped by operator, etc.): 16 per cent

On the basis of these results, FIX theoretically is capable of automatic recovery from 84 per cent of all failures occurring in the period studied.

Shortly after this study FIX was employed for an extended number of evaluation missions on the SAGE computer. During this large trial period of computer operation, FIX provided automatic recovery from more than 92 per cent of the failures. Of the remaining errors, only about 2 per cent require unscheduled maintenance, the other 6 per cent being due to operator and program errors.

It appears that the inclusion of FIX in the tests increased computer efficiency. In addition to the improvement in system performance achieved with FIX, a long-term gain should be realized in basic equipment reliability and useful operational time because of the increased accuracy of the maintenance information supplied by FIX.

In summary, the advantages offered by FIX are these: Recovery from most errors is accomplished automatically, almost immediately, and accurately, thus assuring the correct results with a negligible amount of lost operational time. Table I is a summary of FIX action for each type of error.

TABLE I
SUMMARY OF FIX ACTION FOR EACH TYPE OF ERROR

False memory parity (within Air Defense Program and/or FIX): Reinitiate program with next instruction following the one that operated at the time of the alarm.

Fail to get start memory pulse, or other memory readout failure (in Air Defense Program and/or FIX): Reinitiate program by repeating instruction that operated at the time of the alarm.

Genuine memory parity (in Air Defense Program and/or FIX): Replace incorrect word with good copy from auxiliary storage drum and reinitiate program by repeating instruction that operated at the time of the alarm.

Inactivity, overflow, genuine memory parity that cannot be corrected, or solid memory parity (within Air Defense Program and/or FIX); solid status-drum failure, i.e., an excessive number of errors in one transfer (in Air Defense Program only); drum parities while bringing in FIX (FIX only): Reinitiate Air Defense Program at the beginning of a new frame of operation. FIX imposes no limits on the number of "startovers" that may be initiated in attempting to recover. The operator must determine if an excessive number of restarts is cause for manual intervention.

Transient status-drum parity (in Air Defense Program only): Eliminate erroneous information from core memory, adjust records of Air Defense Program, and continue operations at point following transfer.

Transient addressable-drum parity (in Air Defense Program only): Repeat transfer. Re-enter Air Defense Program at point following transfer operation.

Solid addressable-drum parity (in Air Defense Program only): Correct erroneous words in core memory. Restart Air Defense Program at point following transfer. If diagnosis is inconclusive for purposes of correcting the error, restart the Air Defense Program at the beginning of a new frame of operation.

Any catastrophic error from which FIX does not successfully recover: Manual intervention.

Errors which do not result in memory parity, drum parity, inactivity, or overflow alarms: None; FIX is not activated except by the alarm circuitry of the computer.

The Learning Table record permits more immediate correction of some errors the second time they occur.

Solid errors in addressable-drum transfers, for example,
have been virtually eliminated as a source of reduced
computer efficiency.
The logging feature of FIX affords a detailed record of
all errors exactly as they occurred, as an improved aid
to corrective maintenance.
Although the advantages and results obtained so far
have been limited to one specific application in the
SAGE system, it is felt that variations of the FIX
concept can be successfully applied to other operational
and production programs written for a data-processing
system.
ACKNOWLEDGMENT

We wish to acknowledge the cooperation of those who
assisted with the first trials of the FIX program, which
were made at Lincoln Laboratory, Lexington, Mass.


A High-Speed Data Translator for Computer
Simulation of Speech and Television Devices
E. E. DAVID, JR.†, M. V. MATHEWS†, AND H. S. McDONALD†

INTRODUCTION

THIS PAPER describes a data translator which,
when used with a digital computer, permits simulation of speech- and television-processing devices.
The usual function of such devices is to realize efficient codings of information-bearing signals thereby
economizing on transmission requirements. The merit
of any particular coding must be based upon human
judgment. Such subjective reactions can seldom be predicted by analytical means. For example, the evaluation
of speech-transmission systems relies heavily upon
quality and intelligibility tests using human observers.
Thus, instrumentation of operating models to generate
test speech samples invariably becomes a part of any
such evaluation. In addition to transmission codings,
signal-processing devices have been used to investigate
automatic recognition of abstractions such as phonetic
elements in speech or geometrical figures. Such recognition functions typically require even more extensive
instrumentation.
Models for studying coding and recognition schemes
currently being considered are complicated, involve
great logical complexity, and require large flexible
memories. The cost of construction is high not only in
money, but in the time and energy of the researcher.
Several years are often involved in adequately testing
relatively simple notions, and systematic evaluation of
some well-founded proposals has been delayed more
than 20 years. Clearly, the need for extensive instrumentation is a bottleneck limiting creativity in this
area.
It is now possible for most laboratory purposes to
eliminate the construction of much complicated equipment by means of simulation with general-purpose
digital computers. A computer can be made to act the
part of any specified device for the duration of an experiment and requires only a cha'nged program to
transform its action to simulate a new device. Not only
are savings in money and time realized, but many other
equally important advantages can be achieved. For
instance, the range of ideas which can be considered is
greatly expanded, the simulated device and the data can
be precisely controlled, and device parameters can be
easily and flexibly modified.
Simulation of speech and television signal processing
devices requires an input and output translator capable
of delivering such signals into the computer and recover-

† Bell Telephone Labs. Inc., Murray Hill, N. J.

ing them from the computer without appreciable degradation. We have described in a previous paper1 one attempt to satisfy this requirement. A block diagram of
that translator is shown in Fig. 1 and we shall review its
operation and limitations before describing a new high-speed version. The analog input is filtered to a band
limit of W cps, and then sampled 2 W times per second.
An analog-digital converter codes each sample into 11
binary digits which appear in parallel at its output. Ten
of these digits are monitored by the recording and playback electronics whose function it is to arrange the digits
in the proper format and record them on a magnetic
tape medium. The initiation, cessation, and timing of
these events are assured by the control unit. This translator is bilateral, recording tapes for computer input,
and decoding tapes written by the computer. Thus the
processed computer output can be displayed audibly or
visually for subjective evaluation.

[Fig. 1 block diagram; labels: analog input or output; band-limited input or sampled output.]

Fig. 1-Original data translator.

The operating parameters of the translator are set
by the tape format, which of course is in turn determined by the computer input requirements. Our translator is designed to match the IBM 704 computer
which calls for a seven-track nonreturn-to-zero recording with digits recorded simultaneously in each track.
There need be 200 such seven-digit characters per inch
on the tape. The translator operates in two modes. In
mode I, each input sample is represented as a six-digit
binary number (the remaining four digits are discarded)
and is recorded in one tape character (only six places
are available in each character since the seventh place is
reserved for a parity check digit). In mode II, each input
sample is represented as a 10-digit binary number and
1 E. E. David, Jr., M. V. Mathews, and H. S. McDonald, "Description and results of experiments with speech using digital computer simulation," 1958 WESCON CONVENTION RECORD, pt. 7, pp.
3-10.


is recorded in two tape characters. The five most significant digits, identified as such by a zero in the sixth
place, occupy the first character, while the five least
significant digits, identified by a one in the sixth place,
occupy the second character.
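The packing rule is compact enough to state as code. A sketch (not the actual 704 input routine; which of the six places carries the identifying digit is an assumption here):

    # Sketch of mode II packing: a 10-digit sample split into two
    # six-place tape characters, with the sixth place flagging the half
    # (0 = five most significant digits, 1 = five least significant).
    def pack_mode_ii(sample):
        assert 0 <= sample < 1024           # 10-digit binary number
        hi = (sample >> 5) & 0b11111        # five most significant digits
        lo = sample & 0b11111               # five least significant digits
        first = (0 << 5) | hi               # zero in the sixth place
        second = (1 << 5) | lo              # one in the sixth place
        return first, second                # parity digit added by the recorder

    print([format(c, "06b") for c in pack_mode_ii(0b1011001101)])
    # -> ['010110', '101101']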
In either mode I or mode II, the character density
on the tape together with the linear tape speed fixes the
input sampling rate. For instance, at a tape speed of 50 inches/second, 50 × 200 = 10,000 characters per second
are recorded. In mode II, this figure corresponds to
5000 input samples/second. Three tape speeds are
available on the tape transport, and the corresponding
input sampling rates are summarized in Table I. These figures show both an upper and lower limit on the sampling rate. Experience with speech and television simulation experiments has shown that these limits are inconvenient constraints2 on the analog-signal bandwidth. For instance, unless the analog signal is subjected
to a time-scale transformation before its introduction
into the translator (such a transformation can be accomplished by recording and reproducing the analog
signal at different tape speeds), a 5000-cps speech wave
can be represented to only six-digit accuracy. Even
time-scale transformations do not alleviate these limits
greatly, since record-reproduce speed transformations
using analog recorders introduce their own signal degradations and thereby have restricted application.
TABLE I

Tape Speed        Mode I            Mode II
(Inches/Second)   (Samples/Second)  (Samples/Second)
12.5              2500              1250
25                5000              2500
50                10,000            5000

The synchrony of the sampling and recording processes in this translator not only places an upper and
lower bound on the available sampling rate, but in addition restricts that rate to a number of discrete values.
This limitation has likewise proved to be an inconvenience limiting the flexibility of the translator. Two
further shortcomings result from this same basic cause.
First, for ease of programming signal-processing experiments, it is desirable to have the digital data
divided into several fixed-length blocks called "records,"
each separated by a gap in which nothing is recorded.
The present translator has no provision for so punctuating the tape with "record gaps." Recording must
proceed continuously during the presence of an analog
input if it is to be sampled without interruption. Such
a procedure often results in long recordings which are
inconvenient for the computer to handle.
The second shortcoming is of even greater importance
and arises from irregularities in character spacing on
2 R. E. Graham and J. L. Kelly, Jr., "A computer simulation chain for research on picture coding," 1958 WESCON CONVENTION RECORD, pt. 4, pp. 41-46.

output tapes written by the computer. These irregularities are introduced by the low-inertia, fast start-stop
tape transports associated with the computer. When
such tapes are reproduced through the translator, these
fluctuations appear as "flutter" and "wow" in the
analog output. Though this effect has been minimized
by careful selection of tape transports, it remains the
most severe limit on the fidelity of reproduction.
To recapitulate, all of the limitations imposed by the
translator, namely: 1) upper and lower bounds on input
sampling rate, 2) only discrete sampling rates available,
3) data blocks on digital tape of arbitrary size, and 4)
analog output having flutter and wow, result from the
inherent synchrony between the input-output sampling
rate and the recording-reproducing rate on the digital
tape. The remainder of this paper describes the design
and construction of a new translator which avoids these
problems by divorcing the two rates.
SYSTEM DESIGN OF DATA TRANSLATOR

The limitations in the existing translator have been
overcome by incorporating a buffer storage between the
analog-digital converter and the digital tape recorder.
During recording, the buffer stores the samples of the
input while the tape recorder is inserting record gaps.
Thus convenient fixed-length records may be produced
without interruption of input sampling. During playback, the buffer has sufficient capacity to continue delivering characters (to the digital-to-analog converter)
during record gaps in the digital tape, thereby assuring
an uninterrupted flow of output samples. In addition,
the buffer can smooth the jitter in the flow of samples,
producing a uniformly timed sequence to the digital-toanalog converter.
A buffer of practical size can hold only a small amount
of data compared to the amount of data passing through
it in most applications. Consequently, to prevent completely emptying or filling the buffer, the average rate
of recording samples must be equal to the average rate
of sampling the input. (This discussion will be limited
to the recording process in most cases. Extension of
these considerations to the playback process is usually
obvious.) Matching the average rates while maintaining
a uniform character spacing on the digital tape requires
controlling the average tape speed. This control could
be accomplished either by servo control of the speed
or by using a fast start-stop tape mechanism and varying the idle period. The latter procedure was chosen because it results in a much simpler mechanism.
A block diagram of the buffered system during recording is shown in Fig. 2. The system is constructed to
allow any sampling rate from zero to the maximum
allowable, to produce constant-length records, and to
automatically control the start-stop cycle of the tape
recorder so as to prevent the buffer overflowing. The
analog signal is sampled, digitized, and the digits transmitted into the buffer each time an external synchronizing signal is applied.

Fig. 2-Buffered recording system. [Block diagram; labels: analog signal, external sync signal, digital samples, buffer, preset reversible counter, "buffer almost full" signal.]

A preset reversible counter keeps track of the contents of the buffer, and when sufficient samples almost to fill it have been accumulated, the counter emits a signal which starts the tape recorder.
For each character to be recorded, the recorder produces
an unload pulse which causes the buffer to deliver one
character's worth of digits. The unload pulse also
counts down on the reversible counter and up on a
record-length counter. The latter, after a preset number of characters have been recorded, stops the tape recorder, which then waits until the next "almost-full" signal is received to start recording the next record.
Buffer loading continues during the recording cycle.
This procedure requires the buffer to interleave loading
and unloading operations in any sequence, and possibly
to load and unload simultaneously. A modified commercial unit described in the next section meets these
requirements.
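The counter arrangement can be sketched in a few lines (all names and figures invented; the actual control is built from logic circuits, not a program):

    # Sketch of the recording control described above.
    BUFFER_SIZE, ALMOST_FULL, RECORD_LENGTH = 1092, 1000, 800
    fill = 0               # preset reversible counter: buffer contents
    recorded = 0           # record-length counter
    tape_running = False

    def load_sample():     # one sample per external sync pulse
        global fill
        fill += 1          # count up on load

    def unload_pulse():    # one pulse per character written on tape
        global fill, recorded, tape_running
        fill -= 1          # count down on unload
        recorded += 1
        if recorded == RECORD_LENGTH:
            recorded, tape_running = 0, False   # stop in the record gap

    def maybe_start_tape():
        global tape_running
        if fill >= ALMOST_FULL:   # "almost full" signal starts the recorder
            tape_running = True

    for _ in range(ALMOST_FULL):
        load_sample()
    maybe_start_tape()
    print(tape_running, fill)     # True 1000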
During its ON cycle, the recorder operates at a constant speed, thus simplifying the recording electronics
over that required for a variable speed machine, and
increasing the reliability. If the number of characters in
a record is set to be less than the size of the buffer, and
input sampling rate is less than the rate at which the
buffer can be emptied, then the input sampling rate is
entirely independent of the recording rate. Indeed,
asynchronous sampling can be used equally well.
The maximum sampling rate depends on the recording rate, the record length, and the minimum idle time of the tape transport. The relation for the maximum is

    maximum input rate = recording rate / (1 + minimum idle time / record writing time)

where the minimum idle time is the minimum time to
produce one interrecord gap and the record writing
time is the time to write the characters in one record.
For the equipment described below, the recording rate
is 30,000 characters per second and the minimum idle
time 4.7 milliseconds. Thus for a record length of 1000
characters, the maximum input rate is very close to
26,000 characters per second (26,000 mode I samples
per second or 13,000 mode II samples per second).
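The quoted figure checks directly against the relation (Python used here only as a calculator):

    # Maximum input rate for the equipment described in the text.
    recording_rate = 30000.0      # characters per second
    min_idle = 4.7e-3             # seconds per interrecord gap
    record_len = 1000             # characters per record

    record_writing_time = record_len / recording_rate
    max_input_rate = recording_rate / (1 + min_idle / record_writing_time)
    print(round(max_input_rate))  # about 26,000 characters per second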
During playback the digital-tape machine supplies
characters to the buffer and the digital samples from the
buffer are converted to analog output samples at the


command of an external synchronizing signal. The operation sequence is such that one record of characters
is put into the buffer from the tape initially. Thereafter,
each time the buffer becomes almost empty, as indicated
by a preset reversible counter, another record is put into
the buffer. The tape transport is started by the "almostempty" signal, and stops at the end of each record of
data. All starting and stopping is thus done in the record
gaps, and hence no data are read while the tape is accelerating or decelerating. This control prevents the
buffer from becoming empty and divorces the rate of
delivering output samples from the rate at which data
are coming from the tape.
The buffered recording system thus overcomes the
principal limitations of jitter, inflexible sampling rate,
and record length which harassed the operation of the
original data translator. In addition, by using a higher-speed transport, a substantially faster recording rate is
achieved. The details of the new translator are described in the next section.
DATA TRANSLATOR DESCRIPTION

A photograph of the new high-speed data translator
to perform the task of recording speech and television
signals on digital tape is shown in Fig. 3. The unit is
contained in three racks. The rack at the left houses the
tape transport and read-write electronics. The center
rack contains the control circuits and the buffer storage
unit, while the rack on the right contains the analog-todigital and digital-to-analog converters.
Many of the components of the new high-speed system are commercially available stock items. The tape
transport is an Ampex FR-300 Instrumentation Tape
Transport with a tape speed of 150 inches/second. It
was found necessary to construct special reading translators to attain the desired reliability. The recorder
produces all features of the IBM tape format including
lateral parity bit, longitudinal parity character, and
end-of-file marks and gaps. During playback a clock
generator which forms a logical "or" of all the seven
tracks to produce a character synchronizing signal is
required to provide timing. The IBM format of using
six data bits plus an odd parity sum makes this unit
necessary since it is possible to have a character with
only one "one" in it and that "one" can occur in any
of the seven tracks.
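The clock derivation can be stated in one line (a sketch; the track data below are invented):

    # Character clock as the logical "or" of the seven tracks: odd
    # parity guarantees at least one "one" per character, but it may
    # fall in any track, so all seven must be combined.
    def character_clock(tracks):
        """tracks: seven parallel bit streams of equal length; returns
        one synchronizing pulse per character time."""
        return [int(any(bits)) for bits in zip(*tracks)]

    tracks = [[0, 1, 0], [0, 0, 0], [1, 0, 0], [0, 0, 0],
              [0, 0, 1], [0, 0, 0], [0, 1, 0]]
    print(character_clock(tracks))   # -> [1, 1, 1]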
The buffer storage unit is a Telemeter Magnetics
1092-BU-7R buffer which is capable of storing 1092
seven-bit characters. The buffer is capable of either
loading or unloading one seven-bit character in 10 µsec, but it must not be loaded and unloaded simultaneously. In this application, input data must be loaded during writing on the tape, so the buffer is augmented by the addition of a seven-bit storage register and a guard circuit. In the composite unit, loading of the buffer is unchanged except that it is delayed for 10 µsec by the guard circuit. This circuit establishes a zone which starts 10 µsec before and ends 10 µsec after the actual loading of


Fig. 3-The new high-speed data translator.

the data. If an unload is attempted during the time of
this zone, the actual unloading is delayed until the end of the zone. In this manner, the actual loading and unloading of the magnetic cores can never occur within 10 µsec of each other. As a result, unloading is no longer uniform in time and may be delayed up to 20 µsec depending upon the loading conditions. An additional storage register removes the 20 µsec uncertainty in unloading by holding the output data from the magnetic
core unit until the subsequent unload operation is attempted. The augmented buffer unit, comprising the
core storage, the guard circuit, and the seven-bit storage
register can now be loaded and unloaded simultaneously. However, the changes have reduced the maximum rate of cycling of the buffer from 50 kc to 33 kc
and have introduced a delay of one unload-sample-time
in the output.
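The guard rule reduces to a small scheduling function (a sketch; times in µsec, names invented):

    # Sketch of the guard zone described above: core loads and unloads
    # must be at least 10 µsec apart; a conflicting unload is held in
    # the extra storage register until the zone ends.
    GUARD = 10

    def schedule_unload(requested_time, load_time):
        """Time at which the core unload actually occurs."""
        zone_start, zone_end = load_time - GUARD, load_time + GUARD
        if zone_start <= requested_time < zone_end:
            return zone_end        # delayed, by up to 20 µsec
        return requested_time

    print(schedule_unload(95, 100))   # -> 110 (delayed)
    print(schedule_unload(130, 100))  # -> 130 (outside the zone)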
The analog-to-digital and digital-to-analog conversions are performed by two Datrac B-611 converters
manufactured by Epsco, Inc. These units are equipped
with a sample-and-hold feature and can sample and
convert data at any rate up to the tape writing rate.
Eleven binary digits are available to form two tape
characters which specify the input signal to 0.05 per
cent. The FR-300 allows simultaneous recording and
playback. During recording, the tape is monitored by
the playback heads and the resultant pulses are converted to analog samples by the digital-to-analog converter. Because the buffer is occupied during recording,
the output samples are not buffered and appear only
when the tape is in motion, but such a signal is adequate
for monitoring with an oscilloscope.

The control circuits are composed of about 130 solidstate logic units which perform the functions of data
sorting, data counting, and control to integrate the tape
transport, the buffer, and the converters. This unit,
along with the packaging of the other components, was
designed and constructed to our specifications by the
D. G. C. Hare Co. of New Canaan, Conn. A great deal
of care was taken to prevent uncertainties from creeping
into the time of input sampling, the output sampling
rate and duration of the output samples. During playback, the sample duration is controlled by the time between positive and negative zero crossings of the external synchronizing signal. The operator supplies this
signal, so presumably the duration can be made as precise as desired. Barring any digital errors, the only signal distortion is the quantizing noise, which is less than
0.05 per cent and timing uncertainties which are functions only of externally supplied synchronizing signals.
There are several functions which are manually executed by means of front panel controls. The digital tape
can be punctuated by an end-of-file mark by means of
"Write End of File" control. Any residue of data remaining in the buffer at the end of a recording can be
dumped on the tape by the "End of Data" button. During playback the unit will read out data until an end of
file is sensed, at which time it will stop and disconnect
the external synchronizing signal as if the stop button
had been depressed. A reset control clears the buffer and
all of the logic. Front panel indication of buffer full,
buffer empty, and parity error are provided.
EFFECTIVENESS OF NEW TRANSLATOR

The principal advantage of the new translator is its
high speed and accuracy. This unit is capable of recording data in computer format on tape moving at 150
inches/second.
The unit has a channel capacity of 150,000 binary
pulses per second. These pulses are used to represent
analog signals to either 11- or 6-bit accuracy at sampling
rates up to 12.5 and 25 kc, respectively.
The rate of sampling input data is independent of
digital tape speed and data density on the digital tape.
The rate of output samples from digital tapes is also
independent of these factors. It is possible, for instance,
to record a speech wave sampled 10,000 times a second
and play it back at two samples per second into a pen
oscillograph.
There are no discontinuities in gathering data, yet the
translator produces a digital tape with convenient punctuation gaps at frequent intervals. This punctuation is
removed automatically when reproducing the signal.
There is no wow or flutter in the output due to fluctuations in the tape motion.


Some Experiments in Machine Learning
HOWARD CAMPAIGNE†
EVER since the development of automatic sequence computers it has been possible for the
machine to modify its own instructions, and this
ability is the greatest single faculty in the complex that
tempts the term "giant brain." Friedberg1 demonstrated
a new technique in the modification of instructions; he
allowed the machine to make alterations at "random,"
and lent direction to the maneuver by monitoring the
result. This technique is far from being a feasible way to
program a computer, for it took several hundred thousand errors before the first successful trial, and this was
for one of the simplest tasks he could imagine. A simple
principle of probabilities shows that a task compounded
of two tasks of this same complexity would take several
hundred thousand times as long, perhaps a million computer hours. It is the object of this study to examine
techniques for abbreviating this process.
The work reported here is not complete, nor is it
likely to be for several years. The field is immense, the
search proceeds slowly, and there are few clues as to
where to look. This paper is, therefore, in the nature of
a preliminary report.
The memory of a computer can be pictured as a piece
of scratch paper, ruled into numbered cells, on which
notes can be written or overwritten. I set aside a portion
of this memory in which the computer is supposed,
somehow, to write a program. Some of this dedicated
space is for data and some for instructions. I programmed
a generator of random numbers, and instructed the computer to use these numbers to write a program (and
later to modify a program already written). The
method of using the random numbers is such that all
addresses generated refer to the data, and all instructions generated are from an approved list. After a little
thought one sees that by putting sufficient limitations
on what the computer is allowed to write the result will
be a foregone conclusion. The object is to design an experiment which leaves the computer relatively free from
limitations but which will still lead to a meaningful result. Friedberg did this.
One way for the machine to write the instructions is
to have it write random bits into a form word, the form
and the dedicated spaces being selected to be coherent.
Now a simple task is envisioned and a monitor routine written to test whether the randomly generated program (which Friedberg christened HERMAN) has accomplished the task. If it has it is tried again until the
probability is high that the task is being done correctly.
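A present-day sketch of this generate-and-test loop (the instruction format, data area, and monitor below are invented for illustration; only the principle, random programs filtered by a monitor routine, is from the text):

    import random

    OPS = ["LDQ", "STQ", "COQ", "QJP"]   # an approved instruction list
    DATA_ADDRESSES = range(8)            # addresses confined to the data area

    def random_program(length=4):
        return [(random.choice(OPS), random.choice(DATA_ADDRESSES))
                for _ in range(length)]

    def monitor(program):
        """Stand-in for the task test: here 'success' is merely that a
        load is followed somewhere by a store."""
        ops = [op for op, _ in program]
        return "LDQ" in ops and "STQ" in ops[ops.index("LDQ"):]

    trials = 0
    while not monitor(random_program()):
        trials += 1
    print("failures before first success:", trials)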

† American University, Washington, D. C.
1 R. M. Friedberg, "A learning machine," pt. I, IBM J. Res. and Dev., vol. 2, p. 2; January, 1958.

If at any step the task is not done then a change is made
before another trial. It is clear that once a routine which
can do the task has been arrived at no further changes
will be made. The procedure can be pictured as a random walk. The number of possible programs is finite; in
one of my experiments it was 2^96. The rule for going from
one trial to another can be chosen in various ways, but
in the same experiment there were just 64 alternatives
at each step. The number of routines which satisfy the
test must be, judging from my results, on the order of
2^96/2^12 = 2^84. For some tasks and for some repertories of
instructions it can be estimated directly.
AN EXAMPLE OF A MACHINE WRITTEN PROGRAM

Operation       Operand Address
QJP 27 00       0447
COQ 24 00       0443
LDQ 22 00       0453
STQ 23 00       0457
COQ 24 00       0452
QJP 27 00       0452
LDQ 22 00       0445
QJP 27 00       0457
STQ 23 00       0444
LDQ 22 00       0444
STQ 23 00       0450
QJP 27 00       0454
COQ 24 00       0445
STQ 23 00       0442
COQ 24 00       0447
LDQ 22 00       0451

Address         Data
0440            13351604
0441            77777777
0442            0
0443            57312317
0444            0
0445            77777777
0446            77777777
0447            77777777
0450            0
0451            0
0452            0
0453            77777777
0454            54040476
0455            77777777
0456            77777777
0457            0

In designing these experiments one has a tremendous
number of choices. There is the repertory of instructions
from which the machine chooses to make up its routine.
The instructions of this repertory need not be selected
with equal probability; some can be used more often
than others. There is the size of the area of the memory
dedicated to the random routine. There is the task to be
performed. There is the time allowed the routine to
make its trial. And there are a multitude of other variations, some of which will be mentioned later.
I first used the simplest repertory to be found capable
of performing the Sheffer "stroke" function. It is:
LOAD from y into the accumulator,
STORE at y from the accumulator,
COMPLEMENT the accumulator,
JUMP to y if the accumulator is positive.
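A minimal interpreter for this repertory (a sketch: one-bit cells, "positive" read as a one in the accumulator, and a step limit standing in for the running-time check described below; none of this is Campaigne's actual machine):

    # Minimal interpreter for the four-order repertory above.
    def run(program, data, steps=20):
        acc, pc = 0, 0
        while pc < len(program) and steps > 0:
            op, y = program[pc]
            steps -= 1                 # time check: excessive running fails
            if op == "LOAD":
                acc = data[y]
            elif op == "STORE":
                data[y] = acc
            elif op == "COMPLEMENT":
                acc ^= 1
            elif op == "JUMP" and acc == 1:
                pc = y
                continue
            pc += 1
        return data

    # The transfer-a-word task on one-bit words: copy cell 0 into cell 1.
    print(run([("LOAD", 0), ("STORE", 1)], [1, 0]))   # -> [1, 1]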
This differs drastically from that used by Friedberg.
The area of the memory devoted to the random routine must be in two parts, one for data and the other
for instructions; otherwise the machine would try to
execute data and come to an intolerable halt. Therefore
the address y of the JUMP order must be interpreted
differently from those of the LOAD and STORE orders.
A problem related to that which leads to the separation of the two dedicated areas is that endless loops are
highly probable and intolerable. This problem can be


solved by the routine which interprets the jump addresses. At the occasion of a JUMP a calculation is
made about the time of running; if the time is excessive
then the trial is terminated and called a failure. If the
time is acceptable then the address to which the jump
is made is interpreted.
In an earlier experiment I used a clock to time the
routines and adjusted the jump addresses with a B-box.
The use of a clock made checking and debugging very
difficult and eventually forced me to the later method.
I have used several sizes of dedicated spaces and plan
to try still more, but most of the trials have been with
sixteen lines of coding. Friedberg used sixty-four lines.
Clearly one must allow enough lines to permit coding
the task. More than enough slows the "learning" process. To allow just enough prejudices the event and is
not "fair." A felicitous solution to this quandary would
be to find a way to expedite the "learning" so that very
large areas of memory can be dedicated, thus guaranteeing that the answer has been supplied by the machine.
It is the object of this study to find such a solution.
The time allowed to execute the routine can be varied.
Its absolute minimum is that for three instructions. The maximum could be quite large if the routine were retraced several times. Oddly enough the time allowed on
each trial has only a small effect on the number of trials
before learning. This may be because when the machine
has extra time it uses that time to destroy what it may
already have accomplished. If so then two effects tend
to neutralize each other; the freedom of more time tends
to lengthen the learning period, and the greater number
of potential solutions tends to shorten it (Table I).
TABLE I
THE EFFECT OF TIME ON LEARNING

Time    Median number of trials    Number of experiments
14      4400                       31
10      5800                       47
7       6800                       30
4       15,500                     13

The task selected was to transfer a word from one
place to another. This was chosen because of its simplicity. An easier task was to copy a word at a prescribed spot from one of several places (redundant input). A harder task would be to copy a word at several
places (redundant output). With redundant input (the
key word written at two accessible places) the median
number of trials was 2600, compared with 5800 in nonredundant experiments.
The procedure for the machine making changes in the
routine offers many opportunities to be different. In one
series of trials the routine was rewritten completely
after each failure. In another only one instruction was
rewritten, the instructions being taken in turn in successive tests. In another series just one instruction was rewritten, this time chosen by an elaborate procedure involving "success numbers." The success numbers were
accounts, one for each instruction of the routine, which
were increased when a success or what appeared to be a
success happened, and decreased when a failure occurred. This procedure was roughly the same as that of
Friedberg's, although not quite as elaborate.
A comparison of the results of these alternative methods shows that "learning" occurred most rapidly with the
first method, about twenty-five hundred errors before a
success. It was less rapid with the second, about six
thousand trials, and slowest with the last, over one
hundred thousand trials. The "learning" is an abrupt
process, a "flash of insight." The procedure with success
numbers resembles that used in other experiments with
self-improving programs, such as those used by Oettinger. 2 It seems to be inappropriate here since a line 01
coding is not itself a unit. If the program were organized
into units to which success numbers could be appropriately applied then it would begin to appear that the
answer to our problem was being built into our approach. Success numbers will have an important place
in the ultimate "learning" machines, but some sort of
self-improvement without them will also be required.
A SAMPLE OF EXPERIMENTS ARRANGED IN ORDER

These experiments differed only in the time allowed each trial. The time is measured by the number of program steps possible. Each experiment was run until the machine had demonstrated that it succeeded (one hundred consecutive successes) and then the number of trials was recorded. In this set of experiments about 38 per cent of the trials appeared by accident to be successes. Thus the number of different programs tried was about 62 per cent of the total number of trials. As the time was shortened it became more difficult to find a successful routine.

The median has been quoted here to avoid a bias which might affect the average. The range of the number of trials is large, and there might be some prejudice against the longest runs, such as stopping them to check for faults.

No. of Trials   Apparently Right        No. of Trials   Apparently Right
       20             20                      576             273
       38             32                      592             280
       54             37                      598             303
       68             39                      612             317
       72             46                      614             297
       76             40                      618             290
       76             47                      680             331
       88             53                      800             385
      156             77                      996             509
      160             89                     1090             526
      162             88                     1100             526
      234            131                     1246             595
      222            124                     1324             629
      276            130                     1412             688
      294            153                     1420             678
      330            160                     1528             753
      338            173                     1616             782
      352            160                     2016             990
      386            181                     2872            1366
      521            249                     3198            1522

2 A. G. Oettinger, "Programming a digital computer to learn," Phil. Mag., vol. 43, pp. 1243-1263; December, 1952.

TABLE II
SYNOPSIS OF EXPERIMENTS

Length   Time   Median no. of trials before learning
   2       2        548
  14      10        960
  14      10       1530
  16      12       1567
  16      12       1775
   8       4       1902
  16      10       2097
  16      10       2210
   8       4       2280
  16      10       2608
  16      10       2683
  16      10       2998
  16      10       3125
  16      10       3610
  16      10       4200
  16      12       4300
  16      10       4401
  16      10       4406
   8       4       4988
  16       8       5177
   8       4       5280
  16      10       6700
  16       7       6800
  16      10       7273
  16      10       8600
  16      10       9266

[The original table also marked each experiment for type of change (all at once or one instruction at a time), task (transfer a word or choose an exit), and competing twins (yes or no), with the notes "Reversed exit" and "Input redundant" against certain rows; these check-mark columns could not be aligned from the scan.]

In these experiments the minimum possible space, two orders, was allowed for the program. The learning was judged complete when 20 successive right answers were given. Thus there is a chance of something less than one in a million that a given experiment did not succeed, despite appearances. Notice the wide range, the slowest taking 160 times as long as the fastest (Table II).

These experiments throw some light on this kind of "learning," and particularly on which versions learn fastest. The process is analogous to evolution, since a routine survives only if it meets the challenge of its environment. This analogy suggests some further experiments where each trial is built on two successful trials, perhaps by taking lines of coding from each, much as genes are taken from chromosomes. I have hopes that routines can be built in this manner to meet complex environments and that the number of trials will be only the sum of the number for each individual requirement rather than the product.
In the experiments described above, once a task has been "learned" there is no further improvement. It would be desirable for the routine to improve its performance even after it had demonstrated acceptable skill.
This can only be done by some sort of flexible criterion of satisfaction, and by some way of keeping the progress already made. I have tried to do this with twin learning routines. The twins compete to see which will first learn the task. When one of them has learned, the other continues to try, but now it must complete the task more quickly than its sister. In this way some improvement takes place. It is not quicker than the alternative of insisting on a high standard ab initio.
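In sketch form, with learn_until_success standing in for the whole trial-and-error loop and assumed to return the successful routine together with the time it used:

    def competing_twins(learn_until_success, task, budget):
        """Two routines learn the same task in competition.  Once one
        has learned it, the criterion of satisfaction tightens: the
        sister must now complete the task in fewer steps."""
        first, steps_used = learn_until_success(task, budget)
        second, _ = learn_until_success(task, steps_used - 1)
        return first, second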
Other techniques need to be tested. One of these is to
use as components not lines of coding but subroutines.
In this way the average coherence of the trials should
be raised. Another is some way of accumulating successes and then using them cooperatively to meet more
and more complex environments. This resembles biological history, where evolution has produced increasingly complex organisms which become more and more
effective in dealing with their environment.
If someone could invent a technique which produced
programs as effective as organisms in a time which is
electronic rather than biological it would be a revolution in programming.


Some Communication Aspects of
Character-Sensing Systems
CLYDE C. HEASLY, JR.†
INTRODUCTION

A LARGE number of bad jokes have been made
about the "real characters" encountered in the
reading machine business. If the jokes are not
too funny, they serve nonetheless to highlight a significant fact. Character-sensing machines must read real
characters if the machines are to be useful outside of
the laboratory.
Practical character-sensing machines have been in
use for a number of years. Enough experience has now
been accumulated to warrant an analysis of the environment in which they must operate. Specifically, this
will involve an understanding of the nature of real characters, the factors which determine their nature, the
techniques which may be employed to improve these
factors, and some of the considerations which must be
judged in determining which techniques are appropriate to a given system.

A CHARACTER-SENSING SYSTEM MODEL

Fig. 1 shows a character-sensing machine enmeshed
in the conventional model of a communication system.
Messages are sent along this communication system by
selecting a series of symbol shapes and transferring the
shapes in inked form to a document. The document is
then moved by various handling processes to the point
where the message is to be received. There the document is scanned, the shapes converted to signals and
the signals decoded.
As the model is considered in greater detail, the encoder is found to degrade the symbol shapes by variation in strength of impression and in amount of inking,
and by inking noise. On typical business equipment,
these factors may result in line-width variations of five
to one, missing portions in light impressions, blotted
portions of heavy characters, random additional interference on either light or dark characters, and a pronounced ribbon pattern.
The handling process further degrades the symbol by
superimposing interference. In the worst cases this may
be offset endorsement stamp ink, or in less drastic
cases, a mild smudging of the printed characters.
Finally, the sensing mechanism and quantizing apparatus cooperate to produce a signal representative of the
character shape. Whether the decoding of this signal
will yield the original intended characters depends on
how carefully information has been conserved all along
the channel.

† Intelligent Machines Res. Corp., Alexandria, Va.

Fig. 1-The character-sensing machine is the decoder
of a communication system.
THE ENCODER-PRACTICAL DATA PRINTERS

Selection of a typeface which complements the reading scheme or combats typical interference is one of the
least expensive changes. Apart from tooling costs, type
bars or wheels are inexpensive. The symbols must be
similar for the conventional characters which they
represent and should conform to the limitations imposed by the printing techniques to be used. Within
these constraints, however, there is ample room for the
designer to increase the amount of information which
represents each character.1
Regardless of the type shapes used, it is intended
that the shape be completely black and that everywhere
else the document be white. It is the job of the printing
process to approach this intended ideal as closely as possible. Direct printing from inked type and lithography
border on the ideal. Unfortunately, such processes can
only be used for fixed or serial data which are applied
before the document is in field use.
Variable data are frequently printed by key-driven
business equipment, typewriters, adding machines,
cash registers, and the like. Generally these equipments
print with ribbons and metal type. When the recorder
is built from scratch, it is desirable to use controlled
impression and direct-transfer ribbons or the newer ink-saturated rubber type. Another class of data recorder
is the identification imprinter which prints from embossed plates by means of ribbons, ink-saturated rollers, or carbon papers.
1 C. C. Heasly, Jr., "Selfcheck-a new common language," presented at ACM Symp., Los Angeles, Calif.; May 8, 1958. Copies available at Intelligent Machines Res. Corp., Alexandria, Va.

However, the data recorder is the most numerous element in many character-sensing communications. The
ability to use existing business equipment with only
minor modifications often provides the economic impetus for a character-sensing system. Thus the system
designer may find himself limited to changing typeface
and perhaps use of an improved ribbon.
A third source of variable data is tabulators and various high-speed printers which record the output of data
processing systems. These printers for the most part
use ribbons and, because of their high speed, a direct
transfer ribbon becomes a very costly nuisance.
The preponderance of ribbon printers and low-cost
recording devices virtually forces system designers to
examine their output and to find ways to recover the
information which they record. The principal characteristic of ribbon printing is the dot structure imposed by
the ribbon itself. The size of the dots varies with ribbon
inking, strength of impression, and chance alignment
between type and ribbon. Under light impression and
light inking, some dots may fail to print. Under normal
inking, the dots blend together to form lines of appreciable width with irregular boundaries. Heavy inking and
pressure may result in considerable bleeding of ink well
beyond the intended boundaries of the character shape.
In nearly all cases, unintended spots occur in the nominally white areas of the character, their number and
darkness being dependent on inking and impression.
These dots of varying size and intensity are not individually reliable. However, they may be aggregated by
statistical operations to yield reliable bits which define
the strokes of the characters. The printing resolution of
the dots, usually 8 to 10 dots from top to bottom of a
character, may be statistically "quantized" to a reliable
vertical resolution of five bits. This is equivalent to
stating that a five-bit code might be printed reliably in
the vertical space occupied by a character symbol. As
will be shown, this resolution is about the minimum
which can be used to print symbols which resemble conventional character shapes. Fig. 2 shows enlarged views
of typical characters under three different impressions.


Fig. 2-Character line widths and noise vary
greatly with inking and impression.
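Such statistical aggregation can be sketched as a voting rule over small cells of dots; the 2-by-2 cell and the simple majority threshold below are assumptions, the text saying only that statistical operations reduce a printing resolution of 8 to 10 dots to a reliable vertical resolution of five bits.

    def statistical_quantize(dots, cell=2, threshold=0.5):
        """Aggregate a matrix of unreliable printed dots (0/1) into
        reliable bits: each cell of dots is declared black or white
        according to the fraction of its area that is inked."""
        rows, cols = len(dots), len(dots[0])
        bits = []
        for r in range(0, rows, cell):
            bit_row = []
            for c in range(0, cols, cell):
                patch = [dots[i][j]
                         for i in range(r, min(r + cell, rows))
                         for j in range(c, min(c + cell, cols))]
                bit_row.append(1 if sum(patch) / len(patch) >= threshold else 0)
            bits.append(bit_row)
        return bits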

THE COMMUNICATION CHANNEL-HANDLING NOISE AND DISCRIMINATING INK

Handling noise, surprisingly, can often be controlled by fairly simple methods. For example, on bank checks the largest sources of noise are the handwritten figures, signature, etc., and wet endorsing ink offset from the back of one check to the front of the next. Intelligent format design can isolate handwritten data from the reading field. Endorsing stamp offset can be controlled by restricting the variety of ink colors to those which can be discriminated by color analysis scanning.

Regardless of the printing method, the choice of printing medium offers interesting possibilities. Conventionally, the sole requirement of inks is good optical contrast. However, where the primary source of noise is in the handling rather than in the printing process, it is possible to use inks which have more than one sensible property. The most widely known is magnetizable ink, which has optical properties for human recognition and magnetic properties for machine sensing. A second approach which has likewise been used in the banking problem is the use of glossy inks. Here varnish agents produce a glossy character surface which can be sensed in combination with direct black to discriminate against all but the extremes of handling noise. An even more powerful approach can be achieved by use of a combination fluorescent ink which appears black under normal illumination but fluoresces under special illumination. Here again, combination sensing techniques may be used to overcome interference due to handling.

It is worth noting that selection of a printing medium is in no way helpful in combating the noise resulting from the printing process itself. For example, the carbons used in credit identification invoice forms are the chief source of noise in that system. Efforts to reduce this noise by improved carbon have been very rewarding, but no advantage would be obtained by using magnetic or fluorescent material, since the noise would have the same property as the characters.

SENSING METHODS
Perhaps the most important system choice and certainly the most interesting from an analytical point of
view is the choice of a sensing method. The two choices
worth serious consideration are amplitude scanning and
two-dimensional scanning.
For quite a while these scanning schemes were closely associated with the medium in which the characters
were sensed. Magnetic character sensing has been associated almost exclusively with amplitude scanning,
while optical character sensing has been based on two-dimensional scanning. However, two-dimensional scanners have recently been developed for magnetic characters, and an amplitude scan system has been announced for optical character sensing.


The principal difference between the two schemes is
the way in which they respond to the two-dimensional
information in a character. Two-dimensional scanning
covers each point of the area, usually in a scanning
raster similar to television. Amplitude scanning compresses information in one direction into an amplitude
signal. For example, in magnetic sensing, amplitude
scanning is accomplished by passing the characters
under a vertical gap. The scan signal responds to the
total amount of magnetic ink passing under the gap.
As reading progresses, the read head produces an output function related to the horizontal distribution of
magnetic material along the character. Although it responds to the vertical amount, it cannot respond to the
vertical distribution of the contributing elements. This
results in a loss of information.
Superficially considered, the amount of information
lost by amplitude scanning may seem trivial. However,
even casual analysis will reveal that the signal recovered from scanning a conventional "2" is virtually
identical with that of a conventional "5." Similarly, the
difference between conventional "4" and "9" or "0" and
"8" are alarmingly small. In practice these pairs may
be rendered different by changing the horizontal scale
of one or by changing widths of vertical strokes. This is
a perfectly valid solution to the problem, but these examples serve to underscore that shape differences
easily detected in a two-dimensional scan are obscured
by amplitude scanning. It is of some interest to gain a
more exact knowledge of information contained in characters and of the ways in which it is preserved or lost.
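The "2"-"5" ambiguity is easy to reproduce. In the sketch below the two block shapes are invented, but their column sums, which are all an amplitude scanner can deliver, come out identical.

    def amplitude_scan(bitmap):
        """Collapse a character bitmap (rows of 0/1 pixels) to one value
        per column: the total amount of ink passing under the vertical
        read gap.  The vertical distribution within each column is lost."""
        return [sum(col) for col in zip(*bitmap)]

    two  = [[1, 1, 1], [0, 0, 1], [1, 1, 1], [1, 0, 0], [1, 1, 1]]
    five = [[1, 1, 1], [1, 0, 0], [1, 1, 1], [0, 0, 1], [1, 1, 1]]
    assert amplitude_scan(two) == amplitude_scan(five) == [4, 3, 4]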
ANALYSIS OF SCAN INFORMATION

The maximum amount of information which can be
used to convey a set of character symbols may be said
to be the number of bits which can be resolved by scanning and quantizing the "printing cell" occupied by one
character. It will be shown presently that the value of
this resolution will depend on the scanning method,
but for the moment let resolution be considered independently of scanning method. This may be thought of
as the channel capacity in bits per symbol. While channel capacity could be used as the independent variable,
it is both more convenient and more meaningful to relate results to an independent vertical resolution which
may be defined as the number of bits which can be resolved along a vertical line extending from the top to
the bottom of the cell.
The manner in which amplitude scanning reduces information can be easily understood from an example
based on a vertical resolution of "4," although better
resolutions are required if the constraint that the symbols resemble conventional characters is to be obeyed.
With a vertical resolution of 4, 16 different inputs Xi may occur. The amplitude scanner can respond to such inputs by providing 5 different outputs, Yj. Fig. 3 is a matrix which shows the joint probabilities Pij relating the 16 inputs and the 5 outputs.

[Figure: the joint probability matrix Pij for an amplitude scan at vertical resolution 4, relating the sixteen possible four-bit inputs Xi, each with input probability Pi = 1/16, to the five possible outputs Yj, with the output probabilities qj along the margin.]
Fig. 3-Probability matrix shows how amplitude scanning response loses information.

The information content H(X) of the input (which
for the present is assumed to be equiprobable) can be
seen to be 4 bits.

H(X) = -Σ Pi log2 Pi = -16(1/16 log2 1/16) = 4.
The probabilities, Qj, of the five scanner outputs are
found by summing the elements of each row of the
matrix. From these in turn the output information
H( Y) is found.
H(Y) = -Σ Qj log2 Qj = -(2/16 log2 1/16 + 8/16 log2 4/16 + 6/16 log2 6/16) = 2.03 bits.

Thus, for every vertical column of 4 bits in the input
channel, an amplitude scanner captures 2.03 bits and
loses 1.97 bits. Regardless of the number of columns in the
cell, nearly half of the input information is destroyed.
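These figures may be checked mechanically; in the short computation below (the function name is ours) the output probabilities are the binomial weights obtained by counting black bits in an equiprobable column of n bits.

    from math import comb, log2

    def scan_entropies(n):
        """Input and output information for an amplitude scan of a column
        of n equiprobable bits; the scanner reports only the count of
        black bits, so its outputs are binomially distributed."""
        h_x = n                                    # 2**n equiprobable inputs
        q = [comb(n, k) / 2**n for k in range(n + 1)]
        h_y = -sum(p * log2(p) for p in q)
        return h_x, h_y

    print(scan_entropies(4))    # (4, 2.03...), as in the text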
The graph of Fig. 4 shows how the efficiency of this
type of scan decreases as the resolution increases. The
actual channel capacity (input information), conditional information (lost information) and transinformation (output information) are shown for a character cell
having aspect ratio 5 to 4 (vertical to horizontal).
CHARACTER INFORMATION

Whether or not this lost information is meaningful,
of course, depends on the extent to which the input symbols utilize the channel capacity. Fig. 5 shows a set of
symbols based on a vertical resolution of 5 with 5 to 4
aspect ratio. It can be seen that this is about the poorest resolution which can be used and still obtain ten
symbols which resemble the numeric digits. It can also

be seen that variations in style are quite restricted. This font has been deliberately designed to circumvent the limitations of amplitude scanning by changing the horizontal scale of the "2" to differentiate it from the "5." The channel capacity is 20 bits. The type font utilizes 15.4 bits, of which 8.18 bits are recovered by the amplitude scan.

[Figure: curves of channel capacity H(X) (input information), conditional information Hy(X) (lost information), output information H(Y), and the utilization index in per cent, plotted against vertical resolution in bits, for an amplitude scan with symbol aspect ratio 5/4.]
Fig. 4-Efficiency (utilization index) of amplitude scanning decreases as resolution increases.

Fig. 5-Exemplary numeric font based on vertical resolution of 5 uses 15.4 bits (out of possible 20).

VERTICAL RESOLUTION AND QUANTIZING

It may not be readily apparent that the vertical resolution as defined does not have the same value for a two-dimensional scan as for an amplitude scan. This difference arises from the methods whereby quantizing may be used to effect resolution.

A two-dimensional scan system offers the possibility of quantizing in two steps. The first step involves examining each elemental area of the image to determine whether it is intended to be black or white. The second step involves performing a statistic on the results of the first step to determine whether a particular small area is intended to be black or white. It is in this sense that the resolution has been defined, the small areas which can be resolved in a printing cell being the bits which comprise channel capacity.

An amplitude scan must perform the equivalent quantizing in a single step and must quantize to a larger number of levels; in the 5 to 4 case just shown, an amplitude scan must have the ability to quantize to six output levels in order to resolve the five vertical bits.

The significance of this relationship between quantizing and resolution is more apparent in the case of nonuniform impression. Fig. 6 shows two examples of the character 5, one printed light and one dark. It is at once obvious that a two-dimensional scanning system could easily resolve either input character with a vertical resolution of 5 bits. The transmitted amplitude-scan information is shown below the two examples. If the two amplitude-scan signals are normalized so that maximum responses are equal, quantizing to six levels will produce the two different results shown. In order for printing to have a resolution of 5 bits for amplitude scanning, the vertical line widths of horizontal strokes have to be constant within plus or minus one-sixth of the nominal line width. It can be shown that this is equivalent to a vertical resolution of twenty-five bits for a two-dimensional scan.

Thus it is seen that, if amplitude scanning is used, the freedom from handling interference which magnetizable inks and magnetic sensing offer is purchased at the cost of a considerable loss of recovered information. This may be compensated by greatly improved resolution. Either the printing cell must be greatly increased in area or the printing must be held to very close tolerances. Since character area is at a premium in many applications or fixed in advance by the printing mechanism, close printing tolerance is apt to be the only answer.

QUANTIZING AND DECODING

In most practical character-sensing decoding equipment, it is difficult to draw a fine line between the quantizing mechanism and the decoding mechanism, since these system elements often work in close cooperation. It is probably true that the real sophistication in character-sensing machines is in the quantizing mechanism. It is in improved quantizing methods that information may be preserved and engineering considerations of simplicity, reliability, and cost may be most profitably pursued.

Once the information in a character has been sensed and quantized, the decoding of the resulting signal is fairly straightforward. Choice of logic schemes is not as complicated as might be supposed. The success of one decoding scheme as compared to another will depend primarily on the efficiency with which it utilizes the redundancy of the quantized signal to promote accurate decoding.


[Figure: enlarged light and dark impressions of the character "5," with their raw amplitude-scan wave shapes, the normalized wave shapes, and the output wave shapes quantized for a vertical resolution of 5.]
Fig. 6-Amplitude scanning is greatly affected by variations in impression.
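The normalization and quantizing just described can be sketched in a few lines; the rounding rule is an assumption.

    def normalize_and_quantize(signal, levels=6):
        """Scale an amplitude-scan signal so that its maximum response is
        full scale, then quantize to six levels (the number required to
        resolve five vertical bits in the 5 to 4 case)."""
        peak = max(signal) or 1
        return [round((levels - 1) * s / peak) for s in signal]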

Thus the character is seen to be a code which is superior to other conventional codes only in the excess of
information it embodies and in the ease with which decoding can be performed by human beings who have
learned to read. It might be remarked that a fair portion of the information present in real characters has to
be used in combating the low fidelity and noise introduced by conventional printing methods. These considerations aside, there is little choice between bar or
dot codes and conventional characters so long as area
and printing resolution are identical.
Once the nature of the character is identified, it becomes quite apparent that any decoding scheme will
give better performance if it has more information as
its input.
SUMMARY

The attempt here has been to consider the entire
communication system of which the character-sensing
equipment is but one functional part and to identify
both the factors which reduce information transmitted
by the system and the techniques which may be used to
insure that the message is finally received ungarbled.
If there is any single conclusion to be drawn, it is perhaps that the best way to achieve economical character sensing is to consider the whole communication system in which it is embedded and to improve the information transmitted by every element of the system insofar as economic and system requirements permit.

Beyond this the points at which the most improvement
per dollar could be obtained might be re-emphasized.
Since the printers are frequently the most numerous
elements in the system, the extent of change is usually
limited. Changes of typeface are inexpensive and may be
quite effective in improving the amount of information
per character. Improvements which enhance printing
resolution are the most rewarding. A simple increase in
the character cell area is as effective as improving the
printing tolerance, but frequently neither improvement
can be made without printer redesign. Reduction in
printer interference is particularly worthwhile, especially if special inks are needed to combat handling
interference.
The best solution to handling interference is to control the source of interference. If interference cannot be
eliminated by document design and improved handling
procedures, it may be possible to restrict the kinds of
interference to those which can be discriminated by
special sensing techniques. If this is not possible, then
use of special inks which may be detected outside of the
interference medium is useful.
The scanning employed should recover as much of
the character information as possible. From an information point of view, it may prove to be better to use
scanning which conveys all of the character information even though degraded by interference, than to scan
in a medium which is free of interference with a scanning technique which degrades the character information. This last conclusion is particularly true when
close printing tolerance cannot be maintained, when alphabetic or alphanumeric information must be transmitted, when the typeface cannot be controlled in advance, or when serious interferences are inherent in the
printing process.
With respect to quantizing, considerable care is warranted to insure that the information is quantized
rather than some other variable which is only casually
related to the information.
In this regard it is well to note all a priori information
about the symbols and the printing process. Full utilization of this information in the quantizing is the only
way to insure maximum information at the decoding
stage.
Finally, it might be said that system design of character-sensing communication systems is primarily a
matter of selecting that set of system elements consistent with the problem requirements which will convey
the maximum amount of information. It is hoped that
this survey of system elements and their characteristics
has been helpful to that end.


An Approach to Computers That Perceive,
Learn, and Reason*
PETER H. GREENE†

THE purpose of this paper is to mention some of
the problems which I believe will be central ones
in the effort to design machines that can in some
sense be said to perceive, think, learn, make reasonable
inferences, or learn to perform acts of skill. First I wish
to discuss a general conceptual outlook which has important consequences for our approach to the design of
such machines. I shall show how this outlook calls for
modifying or supplementing some current approaches;
however, specific techniques for dealing adequately
with the problems raised remain to be found. The
second part of the paper will point to specific techniques which now exist for dealing with the problem of
reasonable inference and inductive judgment, but which
to my knowledge have not been considered in connection with computers and control mechanisms.
Polya [34] gives certain heuristic principles whereby
we tentatively arrive at judgments of the likelihood of
propositions. Yet he categorically states that there can
never be a machine which can perform such plausible
inferences. Why is this? I suggest that one reason is
merely that he did not anticipate progress which has
occurred since his book in the analysis of such inferences.
I shall discuss this progress later in this paper.
Another reason for Polya's pessimism is more fundamental. Present-day machines cannot really be said to
contain the meaning of the propositions they handle.
Every proposition is reduced to a hole or an electric
charge or a magnetic field at some point of space. Thus
the propositions may be sorted and combined but their
meaning resides in the mind of the person who looks at
a list which says that a hole in this place means that
proposition. Pattern recognition, too, depends in all but
a few experimental computers upon preset responses
which do not take meaning into account. Thus it has
become fashionable to talk in terms which exclude
meaning. We recognize the next letter or word, so it is
said, because we know the transition probabilities.
This is only a partial solution; it ignores the outstanding
fact that we are able to use relevance and meaning as
guides, rather than probabilities alone. The first topic
will concern a general conceptual outlook which is essential if computers are ever to become capable of
having this too.


* This research was supported by the U. S. Air Force through the
Air Force Office of Sci. Res. of the Air Res. and Dev. Command
under Contract No. AF 49(638)-414. Reproduction in whole or in
part is permitted for any purpose of the U. S. Government.
† Committee on Mathematical Biology, University of Chicago,
Chicago, Ill.

Most of our perception is too rich to be described in
what is called a discursive code, a one-dimensional sort
of language that can be written on a typewriter or
spoken or put on magnetic tape. Thus we shall need to
make use of non-discursive symbols (or presentational
symbols in Susanne Langer's terminology) which are
rich enough to embody complex patterns in their entirety as presented to the computer. Such recognition,
pattern by pattern instead of element by element, has
been discussed before-for instance, in connection with
Uttley's conditional probability machine [35]. However, the basic mechanism for the recognition of patterns in all these studies is that of abstraction, which is
supposed to account for the recognition of complex patterns and general ideas. This makes use of association,
the formation of a linkage between representations of
similar patterns.
That abstraction as generally defined is totally inadequate as a fundamental principle has been pointed
out convincingly by many authors, who unanimously
arrive at that judgment from diverse starting points.
For their arguments I refer you to the bibliography on
judgment and concepts. A first inadequacy of abstraction as the fundamental mechanism of concept formation involves the person's or computer's way of approaching the matter to be examined. If I hold up my
hand with one finger extended, what makes you look in
the direction of my finger instead of the direction of my
shoulder, or indeed look in any direction whatever? The
answer is that the meaningfulness of the gesture depends upon a background of conventions which determine one's way of approach and which must precede
any alleged abstractions. Thus more is needed than
abstraction [12].
The second fundamental difficulty is that abstraction
is said to proceed by the formation of groups of things
which are similar in regard to the abstracted concept.
But the formation of these classes presupposes some
notion of the concept just to determine which things are
similar in regard to it. It presupposes at least a certain
"point of view" from which the elements can be designated like or unlike. The development of the concept
then involves a further articulation of this general point
of view [3].
The third fundamental difficulty is that abstraction
works by forgetfulness. Cassirer points out that this
separates us more and more from the pattern and produces something superficial. He contrasts this with the
generality reached, say, when a mathematician generalizes. For instance, it is a generalization to say that cir-


cles and ovals are all instances of the same concept, namely ellipses of various eccentricities; but this generalization, rather than neglecting the particular features, embodies a universal rule for the connection and
ordering of the particulars. In general, the essential
feature of the general concept is a generating relation
which gives a principle of ordering and dependence. We
have seen that this is presupposed by the ability to form
classes of similar elements. It is also presupposed by the
ability to make reasonable judgments about a whole
group of forms without running through the particular examples-often an infinite set-individually. To do
this we need some representation of a generating rule
of construction [3].
Thus the first task which precedes abstraction is not
the comparison of representations but the formation of
impressions into representations adequate to sustain
abstraction. We tend to think that abstraction alone is
sufficient because this first task has for the most part
already been performed in infancy and in the construction of our traditional language. We therefore presuppose that the raw materials for our judgments are
not disconnected particularities, but are already given
to us as a connected manifold. The essence of these arguments was first arrived at by Kant in 1781 [2], [5]-[7], [9], [10].
The consequences for computer design are many.
First, we must find a way of building up meaning,
which, though it may construct hierarchies of patterns
in a way called for by many authors, must use essentially richer means of construction than the means
called for by those authors. That is to say, we need more
than the principles of association and abstraction, although these will be important ingredients. We shall
see later that abstraction itself, in its valid aspect, requires more than association. Second, the construction
of the unitary rule of connection of a manifold-the "first universal" of the 19th century logician Lotze-requires a principle of construction which supplements
the relation of class inclusion which is appropriate to
the formation of the "second universal"-the class of
similar objects. It is this latter relation alone which has
been studied in any detail by people interested in computers. Third, since the truly general concept contains
structure which includes the generative principle of the
system in which it stands, it follows, as Kant and Cassirer indicate, that no content can be posited without
thereby positing a complex of other contents. In other
words, true generality embodies the judgments of causal
relations which we are seeking to embody in computers [2], [5], [6].
Fourth, we must not regard the linguistic concepts of
the computer as copies of a definite world of facts; if so,
we are doing the machine's job of drawing the significant
outlines of meaningful units. Rather, the computer, before it compares experiences, must in Cassirer's words
concentrate the experiences and distill them down to
one point. This point is the unit which is at present

handled in the ingenious ways known to computer designers, but here we see that an involved process must
go into its formation. Since the particular outlines constructed depend upon generative principles, it would be
expected that any conceptual product of such a computer would be constructed out of manifolds of related
precursors formed around individual generative principles built into the machine. This appears to be the
case in human thinking [1], [13], [17], [19]-[21].
Fifth is the need to supplement those notions of the
relations between a language and the world that have
so far been worked out, for example, Carnap's treatment of mathematical semantics. In treatments like this
a sentence is true if and only if the fact to which it refers
happens to be the case. Thus they assume an extralinguistic world of facts open to observation. In our
problem, on the contrary, observation is possible only
through symbolic forms contained in our language;
hence we may say that for us, or for the machine which
we envisage, the world of observables comes into existence along with the language. The same drawback is
apparent in the analysis of meaning given by Morris,
which is very frequently cited in connection with our
problem. The first link between information theory and
meaning, namely the notion of semantic information of
Carnap and Bar-Hillel, suffers from the plight of logical
atomism, the doctrine that complex ideas may be
analyzed into one of the 2N possible conjunctions of N
atomic facts or their negations. Please bear in mind
that I am not making a single complaint against any of
the methods I have cited. These methods, together with
all existing proposals for logical networks, must be supplemented by something which produces the meaningful
symbolic forms which are the raw material for these
methods and networks. At present it is the human
operator, not the machine, which produces this raw
material [11], [29]-[31].
I shall conclude this conceptual part of the paper
with a few remarks on what we would mean by subjective action and thinking in a machine. The question is
very involved and I am not competent to provide an
adequate answer. I shall therefore name only a few
properties which we can all agree on as necessary, and
which will lead to the discussion of specific techniques
which are now available.
We may all agree that one feature that we seek is the
presentational nature of the internal symbolism of the
machine, involving the ability to perceive patterns too
complex to be communicated verbally and the ability
to acquire skills which we are not able to program discursively for\he machine. Since the internal workings of
the machine would proceed by the manipulation of nondiscursive symbols, while the only known means of
communication with the machine would be via discursive symbols, we could find out how the machine was
arriving at its conclusions only by a long process of inference, and we could modify its basic patterns only
through a gradual process of "training." The patterns


would have enough internal structure so that new information could be obtained by operations based upon content, rather than the external operations of sorting and propositional logic which present computers use. The symbolic operations, rather than the visible receptor structure, would be the entities closely related to subjective meaning. For instance, we should not want to call the pattern on a television screen or the visual cortex subjective. These are patterns which are causal transformations of external objects. The construction of subjective meaning from these raw materials must proceed in the ways which I have discussed in the first part of this paper, and in fact does proceed in that way in human beings; the fundamental units of perception being dynamic, unstable visual effects which are combined into a coherent image, and which may become dissociated in pathological states. Feeling is more closely related to these processes of combination than to nerve pathways, as we observe when we feel the texture of an object with another object such as a pencil held in our hand and tend to localize the meaningful feeling in the end of the pencil rather than in our hand. By consulting the references provided on psychology, it will be possible to learn how all the concepts I have discussed are embodied in examples of what I believe to be the only plausible and nontrivial psychological results and hypotheses which actually deal with the mechanisms for the synthesis of meaning. Anyone interested in the problem of intelligent machines should consult these references to see how one might go about building meanings on a basis of non-discursive symbols [13]-[24].

Next I wish to discuss that aspect of subjective meaning that involves our feeling that the meaning should in some sense be self-contained-that the machine read its own dials and in some sense understand them the way we would. In this way the machine's consciousness would be a collection of internal feedbacks which had some special feature. What is this feature? I think one aspect may be seen by means of an example. Goodman [32] uses a tick-tack-toe diagram to represent isomorphically the pattern of hostilities existing among four gorillas in a zoo. The four lines stand for the four gorillas, and the four intersections stand for the four relations of hostility which happen to obtain. To us intersecting lines seem unlike gorillas so we say that the full meaning is not self-contained. On the other hand, our mental gorillas seem just like physical gorillas so we say that we contain the meaning. Since physical gorillas are known to us only by means of mental gorillas, of course they seem similar. For our present purposes we might say that the machine contains the meaning of what it perceives in case its internal dials register symbols whose complexity compares favorably with the complexity of our own ideas.

This notion suggests a topic which I think is worth investigating. Reichenbach [26] remarks that it seems to be a general property of semantical systems that precise concepts in the object language may arise from vaguer concepts in the metalanguage (the language which talks about the object language). Can we find languages embodying the complexity of our experience which are based upon a hierarchy of meta- and metametalanguages and so on down to a language so simple that it can be adequately embodied in the electronic units (or the physiological units) available to us? This possibility would be consistent with the psychological views which I have mentioned.

The last aspect of subjective meaning that I wish to discuss is the one which will allow us to proceed from the terra incognita of the synthesis of symbolic forms to more welcome regions in which valuable resources have already been discovered. This is our well-known ability to become acquainted with causal relations and with the potentialities of things and actions, and to make appropriate generalizations and inductive inferences. This part of my paper involves the recent progress mentioned in connection with Polya's pessimism regarding intelligent machines. This progress involves methods which were devised mainly with reference to meaning and inference in the methodology of science. However, we can use them to handle the content of our science, which involves just these problems of inference. Perhaps it will be best to list a substantial number of specific logical tools so that you may get a general idea of the scope of these methods and turn to the reference for details of any that sound interesting [25]-[28].

The basic fact to remember is that all these methods deal with the weighing and balancing of judgments against a background of relevant material which in general is not explicitly expressed, and the resulting rules of logical manipulation differ considerably from the customary symbolic logic of isolated propositions. Among other differences, the validity of a statement will depend not only on what is presented but on how it is presented, with logically equivalent statements often not having equal validity. Validity will involve matters of past linguistic usage and thus the whole previous history of how the world has been described and anticipated by use of the language. Thus these techniques deal realistically with common sense judgments in their full complexity of context and suggest that inductive judgments in machines may be a real possibility in the not too distant future.

To begin the list we may consider a problem treated by Goodman [25]. This is the problem of contrary-to-fact conditional statements, or counterfactual conditionals. These are statements such as "If I had struck the match it would have lighted," which we regard as true, and "If I had struck the match it would not have lighted," which we regard as false. Since we did not strike the match, and the match may not exist at present, these conditionals (which are thus contrary-to-fact) may not be directly verified, and the question arises as to what criterion we use to establish the truth of one and the


falsity of the other. The ability to judge what consequences would result from certain circumstances or actions which are not actually present is an essential part
of intelligent behavior. It turns out that the problems
to be met and criteria to be formulated in judging
counterfactuals suggest specific tasks that would have
to be performed by components of intelligent machines.
The first problem is to define the set of relevant conditions which are necessary for the truth of the counterfactual (oxygen, dryness, etc., in our example). We cannot include all true statements because many of these
conflict with the antecedent of the conditional, since by
definition it is a contrary-to-fact conditional. Here is
another blow at simple association as an adequate basis
for relevance. We need more structure. It turns out to
be a difficult problem to specify in advance what type
of information must be considered. We shall come to
an application of this specification later.
The next consideration is that not all conditionals
may be used counterfactually, but only certain ones
which express laws as opposed to mere facts. The structure which distinguishes laws from facts turns out to
have potential applications to structures in machines,
and is moreover intimately connected with the rule that
it is not the truth of laws but their inductive verification that makes them laws. Reichenbach in particular
gives rules for the structure of laws that may be translatable into rules of structure and function in inductive
computers. The problem of counterfactuals leads
Goodman to the problem of establishing dispositions-the potentialities of things to behave in certain ways
under certain circumstances-on the basis of behavior
which has been actually manifested. The ability to embody dispositions is another important feature of intelligence. Now the class of objects having a given disposition (e.g., flexible) is broader than the class of objects having the manifest property (in this case, is
flexed). The problem of extending the smaller class to
the larger, which Goodman calls projection, leads him
to a consideration of induction. The main problem here
is that of choosing among the infinite variety of generalizations consistent with given evidence, and this is
another important feature of intelligence. Goodman and
Reichenbach give rules for the resolution of conflicts
between generalizations which seem as though they
could be embodied in a conditional probability machine
like that of Uttley, or else in models of the nervous system proposed by Hebb and others. The exact form of a
proposition is important to its mode of confirmation
since it is possible for the same evidence to confirm one
proposition yet be irrelevant to the truth of a logically
equivalent proposition [25] ("paradox of the ravens").
Reichenbach [26], [27] concentrates upon the formal
properties of implication in ordinary and counterfactual
conditionals and in laws which make them seem reasonable or unreasonable. It is his great achievement to have
formulated rules concerning the detailed logical structure of propositions which amazingly allow statements

that seem reasonable to common sense and exclude most
statements that seem unreasonable, despite the fact that
the difference is one which seems most subtle and elusive
and which depends upon the exact form in which the
statement is expressed in addition to its extensional
truth value. I shall select certain topics which will be
applicable to computer design. His rules are generally
such that they could probably be programmed for computers with no more trouble than any other program
now employed.
Reichenbach first discusses the rules for a form of
implication (which he terms connective) which expresses necessary connections, and thus differs from the
usual form of implication of symbolic logic (or adjunctive implication), which merely expresses the fact that
certain conditions are adjoined. To know the truth or
falsity of the adjunctive implication "if A then B" one
must first ascertain the truth or falsity of "A" and of
"B" and call the conditional true in case these truth
values stand in a certain relation to each other, whether
or not there is any natural connection between A and B.
The connective implication, on the other hand, asserts
a connection between A and B; it asserts that if "A"
were true then "B" would be true, and we can decide
whether such a connection exists without knowing the
truth or falsity of "A" and "B." Connective implications
are involved in most reasonable inferences.
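The contrast can be made concrete in one line of code: the adjunctive implication is a mere truth function of its parts, while the connective implication cannot be written this way at all.

    def adjunctive_implication(a, b):
        """The ordinary 'if A then B' of symbolic logic: true or false
        solely according to the truth values of A and B, whether or not
        any natural connection holds between them."""
        return (not a) or b

    # No such two-argument truth function can express the connective
    # implication, which asserts that if A were true B would be true
    # and must be judged without knowing the truth values of A and B.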
Using his notion of connective implications, Reichenbach is able to define a class of nomological statements,
which correspond to laws of nature and logic, and a
narrower class, the admissible statements, which correspond to ordinary reasonable statements. He distinguishes three orders of truth, analytic, synthetic
nomological, and factual, and is able to arrive at a
characterization of reasonable statements based solely
upon the orders of certain parts of the statements.
Then he states a variety of rules for the transformation
and combination of reasonable statements which differ
substantially from the ordinary rules of symbolic logic.
He proposes modified rules to cover what he terms
semi-adjunctive implications, which make assertions
about individual events with no claim that the same
result will obtain upon repetition.
With these rules, also conceivably adaptable to computers without too much trouble, we come to our last
group of possible applications to intelligent computers.
First we consider the problem of structuralization
raised by the question of what are the relevant conditions. Practically everybody who has considered the
problem of learning in computer or person considers the
basic operation to be a certain kind of association whose
inner structure is not known or else assumed to be
simple. Sellars shows that if it is the case that the operation of entailment is a simple connection, then unreasonable results can be avoided only if the entities considered
in the judgments are classified into four types. The system must be able to distinguish among and deal with
expressions for kinds of things, kinds of circumstances,


something done to a thing, and something it does in
return. The descriptive function "x is F at time t" obeys
different logical rules depending upon which category
is represented by "F." Even the phrase "at time t"
means something different in each case. On the other
hand we can simplify the entities and put all the complexity into the notion of entailment, which must then
contain specifiable logical complexities. These considerations are of greatest importance in deciding upon basic
operations adequate to produce reasonable judgments.
Second, another form of counterfactual, the counterfactual of noninterference, states " 'A' is true. Even if
'B' were also true 'A' would still be true." Although
these are more complicated grammatically than the
ordinary counterfactual, Reichenbach shows that the
criteria for their validity require only notions of conditional probability. Hence they could be built into
Uttley's machine with only minor modifications.
Third, Reichenbach shows that for many purposes
we may use the ordinary adjunctive operations instead
of the more complicated connective ones and not arrive
at unreasonable consequences provided that we do not
admit negations of lawful statements. When in this
usage we wish to deny a connective implication, we
must do so by saying "Do not use!" in the metalanguage. This is another sense in which negations may be
absent in addition to the frequently remarked absence
of negations of presentational symbols.
Fourth, Reichenbach discusses the important case in
which some result follows lawfully from a factual statement. Although the result is a fact, and hence not nomological or lawful, it is, however, nomologically derivable
from a fact, and it is termed nomological relative to the
fact. In the interesting case we use an elliptic form of
speech which omits mention of the law. If this can be
validly done, the relative nomological statements are
called separable. These account for the important kind
of reasoning which uses implications for counterfactuals
even though they do not directly represent laws of
nature. The rules of manipulation of these statements
predict correctly that what we regard as reasonable
logical operations may vary with the extent of our
knowledge. Since in practice we never draw the boundaries of the system under consideration so wide that
all relevant information is explicitly included, the
separable statements assume the greatest importance.
Fifth, Reichenbach employs the term proper implication to refer to a class of separable implications all taken
relative to some convenient reference set of background
information. A majority of our implications will be of
this form. According to Reichenbach, the rules for the
application of connective logical operations to these
statements are so complicated that in general, in order
to construct derivations one must use the simpler but
less appropriate adjunctive calculus and then check the
results to see whether they are reasonable. He concludes
that the class of reasonable statements is thus not complete, in the sense that in order to construct derivative


relations between members of this class we have to go
beyond the class. It would in this sense appear that
not only on the lowest levels of the synthesis of meaningful symbols, but at the highest propositional levels,
too, reasonable statements may be produced only by a
process of rejection of a multitude of lawfully produced
but unreasonable statements.
In summary, we have seen that present-day computers deal primarily with external relations among
concepts which are given in a form that does not represent their "inner structure." In order for a machine to
discover new patterns and to make inductive generalizations, it appears that the following notions must be
taken into account.
The primary challenge to be met is that of forming
impressions into logical elements. This involves Kant's
notion of the synthesis of a connected manifold of impressions-that is, the individual element must contain
the schemata for its connections with other elements.
No conceptual product can exist in isolation from all
other conceptual products; rather, each concept must
contain a partial representation of many other concepts.
Thus, as Kant first noted, spatial and causal relations
are not derived from experience; experience is derived
from them-they are prerequisites for experience of
certain types. The consequence for computer design
is that one must not seek to build a machine that perceives and conceptualizes and then inserts into its concepts independently achieved schemata of spatial and
causal relations. On the contrary, in order to perceive
at all it must perceive according to schemata, and the
spatial and causal schemata must be an inherent part
of the percepts and concepts themselves.
It appears that any activity of a computer rich
enough in structure to deserve the term concept must
be expressed in a language which is given meaning by a
hierarchy of metalanguages whose initial members are
simple enough to be organized rather directly around
structures in the computer of a technological level which
is perhaps within our present command.
Much of the activity in the earlier levels of this hierarchy would then consist in operations upon nondiscursive symbols, and in their development from a
state in which the basic computer structures are represented by a multitude of particular presentational
symbols to a progressively articulated state amenable to
the rules of discursive logic. In short, it seems that if a
computer is to "think," its concepts will have to undergo
the process of development that Freud [13] termed the
primary process.
Finally, the logic of inferences and inductive generalizations made against a background of tacit premises and
relevant information is quite different from ordinary
propositional logic, but some of its rules are already
known in a form which may be used at the present time
in programming computers. A computer will probably
have to arrive at reasonable inferences by selection from
a larger set of inferences which are not all reasonable.
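This generate-and-reject pattern can be stated schematically. The fragment below is purely illustrative (nothing of the kind appears in the text, and derive and is_reasonable are hypothetical placeholders for an adjunctive derivation routine and a reasonableness test):

    def reasonable_inferences(premises, derive, is_reasonable):
        # Generate candidates lawfully, with the simple adjunctive
        # calculus, then reject those that fail the reasonableness test.
        candidates = derive(premises)
        return [c for c in candidates if is_reasonable(c)]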

186

1959 PROCEEDINGS OF THE WESTERN JOINT COMPUTER CONFERENCE

I hope that this selection of ideas from a mass of
technical, logical, and philosophical investigations, and
from certain psychological ideas not generally considered in connection with computers, may help provide
a program for computer research, and that bringing this
program to your attention may be of some use in adding
to the present capabilities of computers and control
mechanisms.


BIBLIOGRAPHY

Judgments and Concepts-Analytical Studies
[1] E. Cassirer, "Language and Myth" (1925). Translation by S. K.
Langer, Dover Publications, Inc., New York, N. Y.; 1946. See
pp. 8-14, 23-28 (function of language).
[2] E. Cassirer, "The Philosophy of Symbolic Forms, Vol. I. Language" (1923). "Vol. III. The Phenomenology of Knowledge"
(1929). Translation by R. Manheim, Yale University Press, New
Haven, Conn.; 1953, 1957. See vol. 1, pp. 13-15, 27, 50-52, 97-98, 101-102, 104-105, 278-283; vol. 3, pp. 13-15, 61, 64-65, 107-108, 114-117, 123-130, 139-141, 142-143, 150-154, 160-161, 164,
167, 169, 172-173, 176-179, 191, 202-203, 270-272, 287-288,
308-313.
[3] E. Cassirer, "Substance and Function" (1910). Translation by
W. C. Swabey and M. C. Swabey, Dover Publications, Inc.,
New York, N. Y.; 1953. See chap. 1 (abstraction); pt. 2, chap. 5
(induction and generalization).
[4] P. Geach, "Mental Acts: Their Content and Their Objects,"
Routledge and Kegan Paul, London, Eng.; 1957. See chaps.
6-11 (abstraction).
[5] I. Kant, "Critique of Pure Reason," 2nd ed. (1787). Translation
by N. Kemp Smith, Macmillan Co., New York, N. Y. and
London, Eng.; 1956. See pp. (B) 104-105, 122-124, 129-131,
136-138, 176-181 (connected manifold).
[6] I. Kant, "Prolegomena to Every Future Metaphysics That May
Be Presented as a Science," (1783). Selections, translation by C. J.
Friedrich. From "The Philosophy of Kant," C. J. Friedrich, ed.
Modern Library Inc., New York, N. Y.; 1949. See secs. 17,20,
26-28, 30-31, 34 (the conditions of experience). (See sec. 3 of
Introduction by Friedrich for Kant's terminology.) A lucid summary of Kant's methods.
[7] S. Korner, "Kant," Penguin Publications, Harmondsworth,
Middlesex, Eng., and Baltimore, Md.; 1955. See chap. 3, secs.
2, 3, 5; chap. 4, secs. 1, 6.
[8] S. K. Langer, "Philosophy in a New Key," Mentor Press, New
York, N. Y.; 1942. See pp. 75-80 (presentational symbols).
[9] H. Lotze, "Logic" (1874). Translation by B. Bosanquet, 2nd
ed., Clarendon Press, Oxford, Eng.; 1888. See book 1, chap. 1
(concepts).
[10] H. Lotze, "Outlines of Logic" (1885). Translation by G. T. Ladd,
Ginn and Co., Boston, Mass.; 1887. See pt. 1, chap. 1 (concepts).
[11] J. O. Urmson, "Philosophical Analysis: Its Development Between the Two World Wars," Clarendon Press, Oxford, Eng.;
1958. See chaps. 2, 5 (logical atomism), 9, 10 (defects of logical
atomism).
[12] L. Wittgenstein, "Philosophical Investigations," Blackwell &
Co., Oxford, Eng.; 1953. See secs. 72-74 (what is common to
particulars), 81-86 (rules and conventions).
Thinking Mechanisms-Psychological Studies
[13] S. Freud, "The Interpretation of Dreams" (1900-1930). Translation by J. Strachey, Basic Books Publishing Co., New York,
N. Y.; 1954. See pp. 277-278, 279-281, 305-308, 310-314, 330,
339-340, 488-493 (representation of thoughts in non-discursive
images), 536-542, 565-567, 593-597,598-603, 610-611,616-617
(genesis of thoughts and images).
[14] J. Piaget, "The Construction of Reality in the Child" (1937).
Translation by M. Cook, Basic Books Publishing Co., New
York, N. Y.; 1954. See pp. vii-ix, 86-96, 209-214, 308-319,
350-364.
[15] J. Piaget, "The Origins of Intelligence in Children" (1936).
Translation by M. Cook, International University Press, New

York, N. Y.; 1952. See pp. v-vii, 32-35, 42-45, 122, 125-133, 137-143, 147-148, 150-152, 153-156, 188-189, 192-195, 210-211, 225-230, 236-240, 247-248, 253, 264, 294-305, 311-314, 322-327, 343-344, 351, 353-355, 357-419 (detailed study of the stages from automatic actions to manipulation of mental representations).
[16] J. Piaget, "Principal factors determining intellectual evolution from childhood to adult life" (1937), in D. Rapaport, "Organization and Pathology of Thought," see ch. 6 of [18] below.
[17] D. Rapaport, "The psychoanalytic theory of thinking" (1950), and "The conceptual model of psychoanalysis" (1951), in "Psychoanalytic Psychiatry and Psychology," R. P. Knight and C. R. Friedman, eds., International University Press, New York, N. Y.; 1954. See pp. 259-273, 221-247 (condensed summary of the views on thinking referred to in text).
[18] D. Rapaport, ed., "Organization and Pathology of Thought: Selected Sources," Columbia University Press, New York, N. Y.; 1951.
[19] D. Rapaport, "Toward a theory of thinking," in D. Rapaport, "Organization and Pathology of Thought," see pt. 7 of [18].
[20] P. Schilder, "Mind: Perception and Thought in Their Constructive Aspects," Columbia University Press, New York, N. Y.; 1942. See chaps. 1-4 (basic units of perception), 17-18 (memory and thinking).
[21] P. Schilder, "On the development of thoughts" (1920), and "Studies concerning the psychology and symptomatology of general paresis" (1930), in D. Rapaport, "Organization and Pathology of Thought," see ch. 24, 25 of [18] (stages in the synthesis of a thought).
[22] W. H. Thorpe, "Learning and Instinct in Animals," Methuen & Co., Ltd., London, Eng.; 1956. See chap. 2.
[23] N. Tinbergen, "The hierarchical organization of nervous mechanisms underlying instinctive behavior," in "Physiological mechanisms of animal behaviour," Symp. Soc. Exptl. Biol., vol. 4, pp. 304-312; 1950.
[24] N. Tinbergen, "The Study of Instinct," Clarendon Press, Oxford, Eng.; 1951. See ch. 5 (hierarchy of stages of synthesis of an action).

Reasonable Inferences
[25] N. Goodman, "Fact, Fiction and Forecast," Harvard University Press, Cambridge, Mass.; 1951.
[26] H. Reichenbach, "Elements of Symbolic Logic," The Macmillan Company, New York, N. Y.; 1947.
[27] H. Reichenbach, "Nomological Statements and Admissible Operations," North-Holland Publishing Co., Amsterdam, The
Netherlands; 1954.
[28] W. Sellars, "Counterfactuals, dispositions, and the causal modalities," in "Minnesota Studies in the Philosophy of Science,"
vol. 2, "Concepts, Theories, and the Mind-Body Problem,"
H. Feigl, M. Scriven, and G. Maxwell, eds., University of
Minnesota Press, Minneapolis, Minn., pp. 225-308; 1958.
Semantic Analysis
[29] R. Carnap, "Meaning and Necessity: A Study of Semantics and
Modal Logic," University of Chicago Press, Chicago, Ill., 2nd
ed.; 1956.
[30] R. Carnap and Y. Bar-Hillel, "An outline of a theory of semantic
information," Res. Lab., M.I.T., Tech. Rep. No. 247; 1953.
[31] C. Morris, "Signs, Language and Behavior," Prentice-Hall Inc., New York, N. Y.; 1946.
Miscellaneous
[32] N. Goodman, "The Structure of Appearance," Harvard University Press, Cambridge, Mass.; 1951. See p. 24 and passim.
[33] D. O. Hebb, "The Organization of Behavior," John Wiley &
Sons, Inc., New York, N. Y.; 1949.
[34] G. Polya, "Mathematics and Plausible Reasoning, Vol. II. Patterns of Plausible Inference," Princeton University Press,
Princeton, N. J.; 1954.
[35] A. M. Uttley, "The Conditional Probability of Signals in the
Nervous System," Radar Res. Establishment, Malvern, Worcestershire, Eng., Memo. No. 1109; 1955.
[36] A. M. Uttley, "Conditional probability machines, and conditioned reflexes," in "Automata Studies," C. E. Shannon and J. McCarthy, eds., Princeton University Press, Princeton, N. J., Annals of Mathematics Study, No. 34, pp. 253-275; 1956.


Automatic Data Processing in the Tactical Field Army
A. B. CRAWFORD†

THE use of digital computer techniques has become
mandatory in the whole realm of ballistic missile
design and guidance. And we are rapidly approaching the era in which computer techniques similarly might well become a necessity in many phases of
ground combat. The feasibility of applying these techniques in a tactical warfare environment has been established, and the Department of the Army has a high priority program underway to place a prototype Automatic
Data Processing System (ADPS) into its tactical organization within the next four years. This paper will explain the design objectives of this system and the progress to date in the applications area, and describe the
extension of several techniques required to make the
system function.
The technical design objectives may be summarized
in about three key phrases-integrated, common-user
service center, employing general-purpose hardware.
The system in employment has been likened to the widespread tactical communication system, that is, a network of switching centers operated by specialists but
tying together all users or customers, largely through
the use of common-user trunks and switchboards. More
specific design guidelines include the following:


1) Common-user facilities with a minimum number
of single-user processors;
2) Integrated data files;
3) Simple foolproof input devices;
4) Interrogation-immediate reply;
5) GP, building-block hardware;
6) Automatic operation;
7) Flexible output.
The tactical ADPS will consist of several elements.
(See Fig. 1.) Simple, foolproof input devices will be
located at the source of data input. Connecting the inputs to the data-processing centers, and the centers with each
other, is the Army's area communication system. According to the work load and applications being processed,
each center will employ one or more general-purpose
computers. Finally, a whole range of output devices is
available according to the type of output desired, or
rather required, by the commander to facilitate his
making a sound decision-volatile but rapid video-display graphical or map-overlay form, or hard-copy reports.

† Automatic Data Processing Dept., U. S. Army Electronic Proving Ground, Fort Huachuca, Ariz.

[Figure: block diagram; labels include USERS & INFO SOURCES, AREA COMM SYSTEM, DATA PROCESSOR (CONTROL, COMPUTATION, FILES, STORED PROGRAM), OUTPUT, DISPLAY FOR HUMAN DECISION, HUMAN MONITOR, DECISION IN ELECTRICAL FORM, and DECISION MACHINE.]

Fig. 1-The elements of the tactical ADP System are located in such a fashion as to minimize the burden on the users of System operation.

Functionally, the system is portrayed as an integration of subsystems according to the military staff function mechanized. The Army's traditional categorization
is the well-known G-1, 2, 3, and 4 or Admin., Intelligence, Operations, and Logistics. In each of these categories, however, the uses of information are similar - that is, immediate reaction, longer-range planning, and
historical preparations. Our system design must be
tailored to meet the rigorous specifications of high speed
for the immediate reaction use as well as the capacity
for large amounts of file maintenance and processing.
Looking at the over-all system configuration, one sees
a far-flung complex of data processors and data transmission links, the magnitude of which is presently
unique. As is well known, the tactical Army's command
structure runs from the Company headquarters to Battle Group, Division, Corps, Army, etc. It appears desirable-and in fact a necessity-to install a data-processing capability within each headquarters from Battle
Group back. The physical size and processing abilities
will of course increase as we proceed toward the rear of
the Combat zone. In fact, at Corps and Army more than
one computer will be linked together to make up that
data-processing center.
Now, everything described to this point might be
called long-range planning. The title of the system discussed is ARMYDATA, and the general time-frame is
beyond the immediate future. Nevertheless, we are
guided in the immediate systems developments by these
objectives. The remainder of this paper will describe the
Army's project to attain a prototype ADP System as
an intermediate step toward reaching the ultimate.
Slightly less than two years ago, a major project was
launched within the Army to place into operation as
soon as possible a system which could support the ever-


increasing military requirements for mobility and effectiveness in combat. Because of the long lead-time
inherent in the R&D cycle for complex digital computers, the decision was made to parallel the development of the system's hardware and the system's concepts and applications. (The full description of the so-called FIELDATA family of equipment has been left
to the companion paper prepared by Captain Luebbert
of the Signal R&D Laboratories.)
At the time the decision was made to proceed on the
hardware, a plan of action was laid out whereby the
entire Army's methods for information handling in the
field were to be scrutinized in great detail. The ultimate
objective was a detailed description of each user's data-processing requirement, with the more immediate by-product of streamlining the manual methods.
Those persons associated with business data-processing systems are familiar with the peculiarity of the systems problems involved in mechanizing integrated
business procedures of any magnitude, and here I use
the word "system" to mean "procedural" structure. We
propose to integrate some 50 or more separate procedures (for as many customers) from concepts developed initially in an isolated fashion. That is, the
logistics applications are being derived through systems
analysis by the logistics specialists, the combat intelligence applications by G-2 specialists, and so on. To
take full advantage of the great power of automatic
data processing, we must mold or integrate these individual requirements into a realistic yet imaginative
working system which meets the needs of the diverse
staff requiring its services. The types of applications to
be satisfied again can be broken down into the traditional areas of Operations, Intelligence, Logistics, and
Admin. A different categorization according to data-processing techniques might be Combat Control (random access), Combat Support (batched processing),
and Combat Computation.
Approximately twenty separate study groups are analyzing some seventy separate potential ADP applications. To assure that these studies will be more directly
usable, an instructional booklet has been distributed
setting forth a very rigid report format and content.
The fact is that the area specialists are not experienced
in the techniques of systems analysis, so we designed the
booklet more or less to lead the study group "by the
hand" through the steps of an ADP application study or
systems analysis. Further, through contracts and the
use of our own systems analysts, we are providing technical assistance to the study groups, particularly as
they reach the phase of proposing an automated procedure for their problem.
Upon completion the individual study reports are subject to a technical review to determine feasibility, completeness, and the amount of dependence on other applications. Analysts return to the agency which con-

ducted the study to fill in details and to eliminate ambiguities. The object of the next step is to define the
application to the detail required for programming and
to derive certain workload data to be used in the over-all
system design.
About this time, too, the preparation of digital models
of the system and its parts is to begin. This, I believe,
implies the underlying approach to our whole problem:
simulation! We shall be utilizing all means and all levels
of digital simulation techniques. First of all, interpretive simulation must be employed to permit the preparation and checking out of the computer programs
prior to the delivery of the militarized computers. A
successful simulation of the MOBIDIC on the IBM
709 has been completed, and so far we have put through
successfully a few math subroutines, an intelligence
filing and retrieval experiment, a combat surveillance
target acquisition simulation, and a limited payroll
problem. To extend this technique we are attempting
now to develop a simulator generator which will enable
us to experiment inexpensively when altering machine
parameters or order codes. (This latter is an attempt
to lay the groundwork for the later definition of the
advanced system characteristics and computer designs.)
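For illustration only, interpretive simulation amounts to a table-driven fetch-decode-execute loop of the following shape; the two-order code shown is invented for the example (the actual MOBIDIC order code is not given here), but a simulator generator could emit just such a loop for varying order codes or machine parameters:

    def op_add(memory, address, next_pc):
        # Add the addressed word into the accumulator.
        memory["acc"] += memory[address]
        return next_pc

    def op_jump(memory, address, next_pc):
        # Transfer control to the addressed instruction.
        return address

    ORDER_CODE = {"ADD": op_add, "JMP": op_jump}

    def simulate(program, memory, order_code=ORDER_CODE):
        # Interpret the target machine's orders one at a time.
        pc = 0
        while pc < len(program):
            op, address = program[pc]
            pc = order_code[op](memory, address, pc + 1)
        return memory

    # e.g. simulate([("ADD", 0), ("ADD", 1)], {"acc": 0, 0: 2, 1: 3})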
Before the detailed design of the prototype system is
launched, a subset of the available application areas
will be selected for implementation. Only then can we
say specifically how comprehensive the FIELDATA
system will be in covering tactical procedures. Since it
is recognized that the hardware components are experimental in nature, in order to meet a tight schedule without a crash program the FIELDATA system will itself
be experimental in nature.
Parallel to the detailed programming of each application will be the use of a digital model of the system information flow to permit a prediction of the needed data
rates, alternate routing, and potential bottlenecks. A
model is being used now using the 709 which simulates
a general army area communications complex. Probability distributions and Monte Carlo techniques are
employed throughout, from the preparation of a message entering the system to human switchboard operator
actions and to message processing as it progresses
toward its destination. A statistical analysis routine performs data reduction on the model runs and also recommends a sample size according to the level of confidence desired.
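A greatly reduced sketch of such a model follows; the single-operator queue, the exponential timing, and the numerical rates are all assumptions made for the illustration, but the pattern of drawing random message histories and reporting a confidence half-width (from which a recommended sample size can be worked back) is the one just described:

    import random
    import statistics

    def simulate_messages(n, arrival_rate, service_rate, seed=1):
        # Draw n random message histories through one switching point.
        rng = random.Random(seed)
        t = free_at = 0.0
        delays = []
        for _ in range(n):
            t += rng.expovariate(arrival_rate)   # message enters the system
            start = max(t, free_at)              # waits while the operator is busy
            free_at = start + rng.expovariate(service_rate)
            delays.append(free_at - t)           # total time in the system
        return delays

    delays = simulate_messages(5000, arrival_rate=0.8, service_rate=1.0)
    half_width = 1.96 * statistics.stdev(delays) / len(delays) ** 0.5
    print(statistics.mean(delays), half_width)   # 95 per cent level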
To explain the next type of simulation, first I shall
outline the organization of the ADPS Test Facilities at
the Proving Ground. The computer test facility is built around a
large-scale computer center, specifically the IBM
709. Herein all of the simulation work will be conducted,
and herein will be the means for conducting controlled
environmental tests for predicting the validity of our
proposed computer procedures. The field-test facilities

are for the purpose implied by the name and will naturally be employed to try to prove out the simulation
and model results. Even during these field-test phases
we are to utilize simulation techniques. In this context
the 709 will be linked to that part of the system being
subjected to evaluation in a field operational environment; a single thread employment of equipment will be
supplemented by simulation of the remainder of the
system. (See Fig. 2.) That is, the computer will provide
the data source and sink to introduce input into the system and to absorb output from those echelons actually being operated.
The Computer Center became operational in February, 1959. The larger part of the application studies is complete, and detailed analysis has begun on
several of them with demonstration runs already made
on at least two. The major hardware items of the prototype system are on order, with the first to be delivered
this coming Fall. Combining this progress with that
reported by Captain Luebbert on hardware, transmission, and programming aids, we remain confident that
our objectives can be attained.
In summary, this paper has reported on an ambitious
and futuristic program undertaken by the Signal Corps
to provide the Army with a vast tactical Automatic

Fig. 2-The IBM 709 will serve as a source and destination of system input and output by simulating the missing echelons during field tests.

Data Processing System. The proposed system in prototype form is to be operational by 1963 and will incorporate the very latest developments in digital techniques, i.e., new miniaturized general-purpose data-processing devices, computer-to-computer communications, and automatic programming. The research efforts in this project, and certain standards derived, are
bound to have an effect on and contribute to related
commercial data-processing activities.

Data Transmission Equipment Concepts
for FIELDATA
W. F. LUEBBERT†

FIELDATA is an integrated family of data processing and data transmission equipment being developed for Army use. A unique feature of this
family is the almost complete disappearance of conventional distinctions between communications and data
processing. This paper deals primarily with the concepts
and techniques developed to create this evolutionary
merger, emphasizing the ways in which conventional
communications concepts have been adapted to achieve
a high degree of interoperability with computers and
other data processing equipment, and an extraordinary
degree of flexibility and adaptability of application.
In order to explain and illustrate the FIELDATA
concepts, this paper makes extensive use of specific examples of design decisions, particularly those dealing
with common features such as codes, voltage and impedance levels, data rates, etc. Among the equipments

† U. S. Army Signal Res. and Dev. Lab., Fort Monmouth, N. J.

of the FIELDATA family developed in accordance with
these concepts and common standards are the following,
all of which are scheduled for completion prior to the
end of 1960: the MOBIDIC computer (Sylvania), the
BASICPAC and LOGICPAC computers (Philco), the
AN/TSQ-35 19,200 bit/second data transmission equipment (Bendix-Pacific), the AN/TSQ-33 2400 bit/
second data transmission equipment (Collins), the
AN/TSQ-32 1200 bit/second data transmission equipment (Stelma), the DATA COORDINATOR, a facilities coordination and control equipment for an integrated communications and data processing system
(IBM), and a host of miscellaneous equipments such as
magnetic tape transports (Ampex), a flexowriter-like
electric typewriter (Smith-Corona), high-speed printers (Anderson-Nichols), security equipment (Collins),
etc.
The fundamental capabilities of data processing
equipment can be described as the ability to transform


information into more desirable or useful forms. Using
the same viewpoint, the fundamental capability of conventional communications equipment can be described
as the ability to transfer information to a more desirable
or useful location. Of course, from an abstract logical
viewpoint a transfer is merely one special kind of transformation, one in which only physical location is
changed. Combining these descriptions, one is led to the
concept of a generalized data handling system capable
of performing generalized transformations, of which
conventional one-location data processing would be one
special case and conventional no-processing data transmission would be another special case. In such a system
there would be no fundamental distinctions between
data processing and data transmission, but only distinctions of convenience based upon application, use, and
design emphasis.
Is such a concept a reasonable one, and does it have
any practical utility? The answer to the first part of this
question will be examined in detail in this paper; the
answer to the second part will be determined by the success in actual use of the equipment and system concepts
it generates, the concepts explained in this paper which
are receiving initial implementation in the FIELDATA
family of equipments.
An examination of the information flow and manipulation in typical data processing and data transmission
equipments shows almost immediately that there is a
very considerable mixture of functions going on in both
types of equipment. A very considerable amount of the
activity going on inside any computer or data processor
consists of simple transfers of information from one part
of the processor to another. Within data processing
equipment a major part of the activity is concerned with
the generation, manipulation, and other processing of
information used for control, supervisory, and error reduction purposes. In many cases nearly identical operations go on in both data processing and data transmission equipments, with the differences, if any, being matters of design emphasis based upon application and use.
Many practical cases of this similarity are immediately obvious, particularly in the area of devices used
for data entry and output. For example, computers frequently use paper tape readers similar to those used in
teletypewriter transmitter distributors, and paper tape
punches that could easily be used in teletypewriter
reperforators. Similarly the idea that kinds of input-output devices such as card readers and punches and
magnetic tape transports which are widely used for data
processing can effectively be adapted for use on communications lines is also being exploited in equipments
such as the IBM transceiver, the Collins Kinetape, etc.
FIELDATA emphasizes this kind of exploitation to
the extreme, particularly encouraging it by assuring
that interconnection of semiautonomous equipment
modules be made in accord with common standards
without distinction whether these equipments are conceived primarily for computer-associated or transmission-associated functions. This makes it possible for the
same data terminals to operate not only with data processing-type inputs and outputs such as computers, paper
tape, magnetic tape, and IBM cards; but also to operate with real-time weapons system data and with
telegraphic data.
Control circuitry is so devised that pure binary as
well as alphanumeric (alphabetic-numeric) data may be
handled. Since any digital code, be it Baudot (teletypewriter) code, Hollerith (IBM card) code, or any of a
wide variety of computer codes may be represented in
binary bit-by-bit form, the FIELDATA devices have
the potential of transmitting or handling any type of
digital data. Thus, they could be used with digitized
voice, digitized facsimile, or other types of digitized analog signals.
The use of common standards, codes, and standard
data rates makes possible the kind of data transmission
equipment concept shown in Fig. 1. This concept leads
naturally to a division of the subassemblies of data
transmission equipment into three kinds:
1) Input-output transducers are devices for converting information from some human or machine usable form such as paper tape, magnetic tape,
punched cards, analog electrical voltages, strokes
on a keyboard, etc., into digital form.
2) Transmission transducers are devices for converting data in digital form into appropriate signals
for transmission over radio, wire or other kinds of
propagation media.
3) Embolic¹ equipment, normally inserted between
input-output transducers and transmission transducers, is used primarily to perform control and
supervisory functions, error detection and/or correction, buffering, and/or speed conversion, code
conversion, or encryption necessary for proper
system operation of the data transmission equipment. The functions of embolic equipment are information processing functions. Inputs and outputs will both be digital in form, although supplementary analog information may also be available
particularly in some kinds of error control
schemes. A general-purpose computer is potentially a very powerful and flexible type of embolic
equipment, but the necessary functions can often
be performed much more economically by specialized equipment.
This division of subassemblies may be a physical division into separate items of equipment, into semiautonomous parts of a single equipment (either in a
separate box or in a single box) or it may be merely
conceptual with no physical implementation.
1 Embolic is a coined term from the Greek embolisimos meaning
to put between or insert. This word is also used in medical, astronomic,
and ecclesiastic literature to describe other specific kinds of intercalations.


[Figure: any of M input or output devices chosen to meet user data form requirements (paper tape reader or punch, magnetic tape transport, card reader or punch, electric typewriter, printed character reader, high-speed printer, computer input or output, etc.) connect through common language equipment for error control, supervision, cryptography, etc., to any of N transmission devices chosen to meet communications medium and propagation requirements (amplitude modulation sequential transmission, frequency-shift keying sequential transmission, phase reversal keying sequential transmission, frequency-shift keying multiple subcarriers, phase quadrature modulation multiple subcarriers, etc.).]

Fig. 1-FIELDATA equipment modularity concept.

Obviously maximum flexibility and adaptability can
be achieved if any input-output transducer can operate
with any transmission transducer. This permits one to
tailor equipment to meet the requirements of a particular situation. It permits one to choose the in-out device
suitable for the user's most convenient data form (paper
tape, magnetic tape, IBM cards, etc.) independently of
the nature of the transmission facility available. One
can then choose the transmission transducer on the
basis of the transmission medium used (VHF radio relay, HF radio, loaded cable, carrier transmission on wire,
etc.) independently of the user's data form. One can then join the two together, taking advantage of common standards of interconnection or intercommunication between modules, to create a well-tailored combination.
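Schematically (the class and method names below are invented for the illustration and are not taken from the FIELDATA standards), the point is that any source emitting the common digital form can be paired with any sender that modulates it, so M + N module types cover all M times N combinations:

    class PaperTapeReader:
        # One of M possible in-out transducers; emits the common form.
        def __init__(self, symbols):
            self.symbols = symbols

        def read_symbols(self):
            yield from self.symbols

    class FSKSender:
        # One of N possible transmission transducers.
        def send(self, symbol):
            print(f"modulate {symbol:06b}")

    def data_terminal(source, sender):
        # Any source works with any sender via the common interface.
        for symbol in source.read_symbols():
            sender.send(symbol)

    data_terminal(PaperTapeReader([0b000110, 0b000111]), FSKSender())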
The question then arises, "Why embolic equipment?"
Certainly by proper choice of common interconnection
or intercommunications characteristics one can minimize requirements for code conversion, buffering, speed
conversion, etc. Ideally one should be able to join the
input-output transducer to the transmission transducer
without embolic equipment, so why have it? The answer
is that there are three important functions in a data
terminal which seem convenient to separate from both
in-out transducers and transmission transducers: 1)
communications supervision, 2) error control, 3) cryptogra phic sec uri ty.
Supervisory functions are conveniently separated
from the input-output transducers and the transmission
transducers because supervisory requirements may be
strongly influenced by both. Error control requirements
may also be strongly influenced by both the in-out data
and the transmission situation, and may even in some
cases not be desired at all. Thus it is best treated as a
module which can be tailored to the paired requirements of in-out and transmission or completely omitted.
Cryptographic security is conveniently a separate
module so that it may be omitted in those cases where
security is not required or desired.

In order to discuss supervisory activities conveniently
it is desirable to make a distinction between two kinds
of information which flow through a communications
system:

1) Primary information is that which a user wishes
transferred to another location, that is, the information to be communicated.
2) Secondary information is added to the primary
information either by the originator or by equipment of the communications system which is used
to perform functions of supervision, routing, error
control and related activities necessary or desirable
to permit the primary data to be effectively communicated. This information is used by communications equipment and personnel and is normally
of no use or interest to the ultimate recipient of
the primary information.
A basic requirement for maximum flexibility and
adaptability is that the user, who enters data at an input
device and receives data at an output device, need not
be required to exercise judgment or knowledge, to perform special activities, to use or to interpret secondary
information. This, in turn, makes it desirable to isolate
secondary information from the input-output devices,
which are the user's point of contact with the communications system. A convenient way of establishing and
enforcing this isolation is the creation of a distinct embolic module which generates, receives, interprets, and/
or acts upon secondary information and passes on action requirements derived from this secondary information over local control lines to its associated input-output transducer and transmission transducer. It is important to note that such equipment need not have any
ability to interpret or act upon primary data, and thus
its operation can be made completely independent of
the coding used for primary data so long as there is a
unique method of distinguishing primary from secondary data.


In the FIELDATA family of equipments the segregation of primary and secondary data is accomplished in a
particularly simple way by providing separate transmission symbol "alphabets" for primary and secondary
data. The basic unit of data in FIELDATA codes is a
six-bit character. For primary data this may be merely
a six-bit block with no specific meaning assigned to specific bits within it, and no restrictions on the permissible
code combinations which may be transmitted. If this
information is alphanumeric in form so that specific bit
meanings must be assigned for electric typewriters,
printers and similar input-output transducers to operate, then the FIELDATA alphanumeric code given in
Appendix I is used. A similar but distinctly different six-bit FIELDATA supervisory data code has also been
created and is outlined in Appendix II. Message heading and addresses, dialing information, multiplexing information, signals indicating start and end of message
blocks, and control signals are all created using this
alphabet.
In many situations, for example inside computers, it
is desirable to use both the six-bit primary data alphabet
and the six-bit FIELDATA supervisory code alphabet
with implicit differentiation between them similar to
the implicit differentiation between data and instructions in the computer. However, other situations arise
where it is desirable to provide an explicit identifying
tag to specify which alphabet is being used.
A basic tagging method has been adopted for use on
interconnecting cables employed for joining equipment
modules. A seventh or tag bit is added to each six-bit
symbol: a binary "one" if the symbol is primary data, and a binary "zero" if the symbol is from the
FIELDATA supervisory alphabet. In the interconnecting cables an eighth bit in the form of an odd parity
bit is also added to provide error protective redundancy.
This basic eight-bit form creates the appearance of an
eight-bit code. It has widespread use, but cases do exist
where other alphabet tagging techniques and other
redundancy is preferable. For example, serious complications would arise if this basic eight-bit form were used
for the actual punching of FIELDATA information on
paper tape. This occurs because control difficulties occur with a paper tape punch-reader system when primary data symbols are allowed to use either the blank
tape condition (no channels punched) or the deleted
tape condition (all channels punched). Since the use of
the basic eight-bit form inevitably results in one or the
other of these symbols being a primary data symbol
depending upon whether a hole is interpreted as a
binary "zero" or as a binary "one," a slightly different
tagging and parity scheme must be adopted. This, of
course, results in another eight-bit form which could
be interpreted as an eight-bit code. Other tagging and
redundancy schemes could lead to other apparent codes.
Thus, there is no such thing as a unique eight-bit
FIELDATA code, although it may be convenient to
think of the basic eight-bit form as an eight-bit code.
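As a concrete restatement, the sketch below packs a six-bit symbol into the basic eight-bit form; the bit ordering (parity bit, then tag bit, then the six data bits) is an assumption made for the illustration, since it is not fixed above:

    def to_eight_bit_form(symbol, is_primary):
        # Pack a six-bit symbol (0-63) with a tag bit (one = primary
        # data, zero = supervisory) and an odd-parity bit chosen so
        # that the total count of one-bits in all eight bits is odd.
        assert 0 <= symbol < 64
        tag = 1 if is_primary else 0
        seven_bits = (tag << 6) | symbol
        parity = 0 if bin(seven_bits).count("1") % 2 == 1 else 1
        return (parity << 7) | seven_bits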


In some situations transmission errors may create
important reasons for using other tagging and error protective redundancy schemes. Errors which cause loss or
change in supervisory information such as dialing information, message headings, start and end of message
indications, and so forth, may completely disrupt the
proper functioning of a communications system. Thus
they may require protective redundancy many times
more powerful than the simple parity used in the basic
eight-bit form. However, compared to these secondary
data, primary data may be capable of tolerating considerably higher error rates. For example, if the data are
English text the inherent redundancy of the language
may permit significant corruption without loss of intelligibility. Numerical data may consist of successive observations of a physical phenomenon in such a form that
inconsistent data may be deleted and ignored, or they
may be protected by numerical checks similar to those
used by accounting systems to detect bookkeeping errors. In a situation where error rates were severe it
might not be desirable to apply as powerful error control to these less demanding primary data as to secondary data; thus one might desire to use methods of
differentiating between primary and secondary data
which are more resistant to corruption by errors than the
simple tags used in the basic eight-bit form together
with different error control schemes for primary and
secondary data.
This kind of differentiation illustrates a fundamental
assumption about error control which is part of the
FIELDATA concepts. Specific error control requirements should be determined by the nature of the data
and their use. As mentioned above there may be cases
where primary data can tolerate frequent errors with
little loss of usefulness, but there are other cases, such
as the transmission of computer programs, where very
low error rates are required. In some cases if errors
above the desired maximum rate occur and are detected,
the erroneous information may be deleted without significant harm; in other cases they must be corrected. In
some cases where correction is required it may be deferred and handled by a service message; in other cases
it must be corrected before the data are released, and so
forth.
Notice that all these requirements are determined by
the use of the data, and that these demands remain the
same regardless of the transmission path and types of
transmission equipment through which the data may
be required to flow. However, the occurrence of errors
is anything but independent of transmission factors. Although errors do occur in input-output devices and
embolic equipment, by far the most variable and difficult-to-control errors normally occur in the transmission
portion of a data link. These errors are often quite variable in frequency and interrelated in occurrence. They
are quite strongly dependent upon the nature of the
propagation medium and the kinds of disturbances to
which it is subject. For example, an HF radio link sub-


ject to severe fading and multipath disturbances could hardly be expected to have the same error problems as a wire and cable link subject to intracable crosstalk and impulse-type switching noise. Furthermore, for a given propagation medium and kind of noise and disturbances, the frequency and interrelationships of errors are strongly dependent upon the characteristics of the transmission transducers used. For example, if amplitude modulation is used one would expect different errors than if frequency shift keying were used; if sequential transmission of short bauds on a single subcarrier is used one would expect different error problems than if parallel transmission of long bauds on multiple subcarriers is used; and if sampling or nonintegrating types of detection are used one would expect different error problems than if full integrating detection schemes were used.

The number, variability, and difficulty of measurement and analysis of the various factors which contribute to the frequency and interrelationships of transmission errors on particular circuits is staggering. At the present time the state of the art is such that only crude estimates of frequency of error can be made for typical equipments when exposed to disturbances other than Gaussian noise, and that practically nothing can be estimated in advance about the interrelationships of errors under practical conditions of impulse noise, crosstalk, propagation variations, etc. This is particularly unfortunate because the effectiveness of the various digital error control schemes available is strongly dependent upon the interrelationships of errors. For example, a simple parity check is capable of detecting single errors but not double errors. If errors occur randomly, double errors will seldom occur and this very simple check will be quite powerful. Thus if the bit error rate is 10^-4, a simple parity check will reduce the undetected error rate to 10^-8. On the other hand, if errors tend to be clustered a parity check will be rather ineffective. Thus if the conditional probability of a second error immediately following the first is 0.5 and the bit error rate is 10^-4, then a parity check will reduce the undetected error rate to only 0.5 × 10^-4. Given knowledge of the interrelations, checks can be designed which give high protection with a minimum amount of checking equipment. Unfortunately this knowledge is usually unavailable.

If the most important sources of errors are in a data communications link, is it proper to incorporate the major error control features of the link into the transmission transducers? The FIELDATA concepts answer this question with a resounding NO! Why? The key reason is that while the occurrence of errors and the raw error rate and characteristics are determined primarily by transmission factors, the error requirements are determined by the use of the data. The means and techniques of error control appropriate to a particular situation obviously depend upon the interaction of these factors. However, it is a fundamental modularity principle of FIELDATA that the characteristics of transmission transducers should as nearly as possible be independent of the details of user input-output characteristics and data employment.

If one were to incorporate the error control features into the transmission transducers and one desired to provide p classes of controlled error service to users with q modulator/demodulator assemblies, then one would require p times q types of complete transmission transducers. The obvious answer is to separate modularly, making the error control module an item of embolic equipment. This allows one to take advantage of similarities in requirements and raw error characteristics among the different situations to reduce the variety of equipment to be constructed.

In view of the present difficulties in predetermining the specific error characteristics of transmission transducers prior to construction and test, it also permits construction of new transmission transducers and their use with existing error control embolic equipments until the specific error characteristics of the transducer can be measured and new error control embolic equipment designed if necessary to meet user requirements with the measured transmission error characteristics.

In addition to these practical advantages of placing error control responsibilities in embolic equipment modules rather than in transmission transducer modules, there are conceptual advantages associated with maintaining the simplest possible information flow patterns and division of activities among the three basic kinds of assemblies.

In general, transmission transducers pay no attention to the information content of the digital information they convert to modulated transmission form, neither knowing nor caring whether the data are primary or secondary, whether they are redundant or irredundant, or what code or codes they use. In contrast to this, embolic equipments normally act as information processing devices. Thus, in supervising a transmission link embolic supervisory equipments act on sensory information received from transmission transducers and in-out transducers and generate, process, and interpret secondary supervisory information to control the over-all operation of the communications link. The error control problem is exactly parallel. Acting on information about user requirements from the in-out transducer side, and information and sensory information about transmission errors from the transmission transducer side, error control equipment is required to generate, process, and interpret error control information using it to control (often via supervisory operations) the over-all operation of the communications link in such a way as to control its errors. Thus it is obvious that from the information processing viewpoint the performance of error control as an embolic function has a close parallel to other embolic functions and is distinctly different from the functions otherwise performed by transmission transducers.
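The two figures just quoted follow from elementary probability; the back-of-envelope check below is not part of the original argument but confirms the arithmetic:

    p = 1e-4   # raw bit error rate

    # Independent errors: a single parity bit misses only even numbers
    # of errors, and a second independent error in the same character
    # has probability of the order of p, so the undetected rate is of
    # the order of p * p.
    print(p * p)       # 1e-8

    # Clustered errors: if a second error follows a first with
    # conditional probability 0.5, about half of all error events are
    # undetectable double errors.
    print(0.5 * p)     # 0.5e-4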


Some of the FIELDATA equipments being developed
in implementation of these concepts are listed in Table I.
The transmission transducers of the FIELDATA
family are limited in number. However, the choice of
the 75 × 2^n pattern of data rates permits widespread
augmentation by minor modification of existing or developmental teletypewriter multiplexed transmission
equipments. In addition, future expansion is simplified
by the fact that future equipments will be able to
utilize the same embolic and input-output equipments,
and will thus be cheaper to develop than data transmission equipments which require development of embolic
and/or input-output equipment as part of the same
package. Expected future expansion will place greater
emphasis on transducers for radio circuits.
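For concreteness, the 75 × 2^n family runs as follows; n = 0 through 8 spans the range from the 75-bit/second teletypewriter channel rate to the 19,200-bit/second rate of the AN/TSQ-35:

    rates = [75 * 2 ** n for n in range(9)]
    print(rates)   # [75, 150, 300, 600, 1200, 2400, 4800, 9600, 19200]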
TABLE I

Transmission Transducers
AN/TSQ-32    600-1200 bit/second    Stelma Corp.
AN/TSQ-33    600-2400 bit/second    Collins Radio
AN/TSQ-35    19,200 bit/second      Bendix-Pacific

General Purpose Computers
MOBIDIC                             Sylvania
BASICPAC/LOGICPAC                   Philco
INFORMER/DATA COORDINATOR           IBM

In-Out Transducers
Electric Typewriter                 Smith-Corona
Paper Tape Reader                   Smith-Corona
Paper Tape Punch                    Smith-Corona
High-Speed Printer                  Anderson Nichols
Paper Tape Transport                Ampex Corp.
Magnetic Tape Transport             Ampex Corp.
Tacden                              Aeronutronics Systems

Special Embolic Equipment
CV-689    Cryptosecurity Adaptor    Collins Radio
CV-690    Control Equipment         Collins Radio
CV-691    Data Concentrator         Collins Radio

The AN/TSQ-32 is a transmission transducer capable of accepting data from a standard FIELDATA
connection and of transmitting it serially as frequency-shift modulation of a single subcarrier over a standard
voice channel. Transmission rates are 1200 bits/second
(1500 words per minute) over good quality circuits,
and 600 bits/second over poorer circuits.
The AN/TSQ-33 is a transmission transducer capable of accepting data from a standard FIELDATA connection and of transmitting it as 8 channels as synchronous phase quadrature modulation of four subcarriers
over a standard voice channel. Transmission rates are
2400 bits/second (3000 words per minute) over good
quality circuits, with 1200 and 600 bits/second available for use over poorer circuits. This equipment is
essentially a militarized, miniaturized, FIELDATA
compatible version of the Kineplex TE-206.
The AN/TSQ-35 is a transmission transducer capable
of accepting data from a standard FIELDATA connection and of transmitting it by amplitude modulation of

8 subcarriers orthogonally spaced in frequency and
time. The transmission baseband is the 48-kc band between 12 and 60 kc used for military and commercial
cable and carrier circuits. Representative circuits are
the Army spiral-four and associated AN/TCC-7, 8, and
11 equipments; and commercial types N and K carrier
equipments. In addition to point-to-point full-duplex
operation, this equipment has special features for multiple station "common net" round-robin type operation.
In addition to these transmission transducers specifically designed for FIELDATA, a wide variety of existing military and commercial teletypewriter transmission equipments can be used for FIELDATA transmission with only minor modification. Examples of such
equipment are the AN/FGC-54 capable of transmission
and diversity reception of FIELDATA information at
2400 bits/second over long-haul 3-kc radio circuits using
32 channels each operating at 75 bits/second. Another
example is the AN/FGC-29 potentially capable of
transmission and diversity reception of FIELDATA information at 1200 bits/second using 16 channels each
operating at 75 bits/second. Yet another example is the
AN/TCC-30 potentially capable of transmitting 1200
bits/second using 16 channels operating at 75 bits/
second.
The FIELDATA computers form the most complete
and well-balanced portion of the FIELDATA family.
The FIELDATA computers may act as either input-output transducers or as embolic equipments for data
transmission purposes, having the ability to accept,
process, and emit either primary or secondary data. All
are designed for direct operation with data transmission
equipment. In their employment the MOBIDIC, which
was the first machine to have its characteristics frozen,
is the least capable of transmission of machine data; it
has serious restrictions on its use of the supervisory code
functions. The most flexible in its employment of data
transmission will be the DATA COORDINATOR, a
newer equipment which will have the capability of terminating a large number of data transmission circuits simultaneously and which will have a number of special
capabilities and console positions to facilitate its use as
a facilities coordination processor for an integrated communications/data processing system.
All the FIELDATA computers are general-purpose
processors of modular design and great flexibility. Their
relative computation speeds are shown on the bar chart.
All are designed for field use with operation and maintenance simple enough for field personnel. The largest,
MOBIDIC, mounts in a semitrailer van, while the
others mount in shelters which can be carried on a
truck. It is interesting to note that the minimum assembly which forms a fully operational stored program
computer of the BASICPAC/LOGICPAC type or the
INFORMER/DATA COORDINATOR type weighs
in either case less than 150 pounds, subdividable into 50-pound or smaller packages. However, these central processors are normally used for data processing purposes

in vehicular assemblies which far exceed these weights
because of auxiliary equipments such as multiple tape
transports, special real-time input-output units, additional magnetic core memory modules, data transmission equipment, input-output converters, etc. For example, a vehicular BASICPAC would augment the
central processor with four magnetic tape transports, a
high-speed paper tape reader and punch, an AN/TSQ-33 transmission transducer, and other auxiliary equipment. Fig. 2 indicates the relative computational capabilities of the various FIELDATA processors.

[Figure: bar chart; recoverable labels include FIELDATA IBM-BASE, IBM 705, FIELDATA IBM-BASE +ACC, IBM 704, and FIELDATA MOBIDIC.]

Fig. 2-Speeds of computation, FIELDATA processors.

The input-output transducers of the FIELDATA
family serve double duty as computer input-output devices and as data transmission input-output devices.
The group under current development constitutes a
minimum group of general usage items, a number of
which are in only partially militarized form. This minimum group will be augmented by future field teletypewriter equipments which will utilize FIELDATA code,
and by advanced equipments now under study to provide specialized input-output capabilities. Since the
items are mostly quite conventional, detailed descriptions are omitted here in order to save space.
The specialized embolic equipments in FIELDATA
provide cryptographic security, interconnection of input-output and transmission transducers, and tie-in of
FIELDATA circuits to existing teletypewriter circuits.
Three major items of special embolic equipment are
being developed for FIELDATA. The CV-689 is a special cryptosecurity adaptor which permits an existing
type of security equipment to be inserted just before the
transmission transducer in any FIELDATA data transmission assembly, thus providing cryptographic security.
The CV-690 is a device which provides supervisory
control, error control, and synchronizing buffer facilities
for connecting paper tape or magnetic tape units to the
AN/TSQ-32 or 33. Although future plans call for similar special militarized embolic equipments for other
kinds of input-output equipment such as card equipment, none is now under development. However, rather
minor modifications of commercial Collins Kinecard
equipment will permit nontactical employment of
AN/TSQ-33 equipment for card transmission, and at
least some versions of the MOBIDIC will include card
equipment which, through the computer, can reach
transmission facilities.
Although FIELDATA concepts make provision for a wide variety of error control and supervisory control systems, only the particular system used in the CV-690 will actually be used initially except for computer-to-computer transmission, since other embolic devices using different error control or supervisory control schemes are still under development. It is expected that when
experiments determine the actual frequency and interrelationship of errors for particular transmission transducers, more effective schemes will be devised. However,
the very simple two-dimensional (interlaced) parity error detection scheme, followed by request-back or rerun-type error correction, used in the CV-690 is particularly easy to implement and is expected to suffice for initial testing.
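A minimal sketch of such a two-dimensional check follows, with the framing assumed for the illustration: each symbol already carries its own (row) parity bit, and a final check character accumulates one (column) parity bit per bit position; a failed check would lead the receiving terminal to request a repeat of the block rather than to repair it in place:

    def column_check(block):
        # XOR of all symbols yields one parity bit per bit column.
        check = 0
        for symbol in block:
            check ^= symbol
        return check

    def block_passes(block, received_check):
        # On failure, request-back correction: ask the sender to
        # repeat the block (cf. the Repeat Block supervisory symbol).
        return column_check(block) == received_check

    block = [0b0000110, 0b1000111]
    assert block_passes(block, column_check(block))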
The CV-691 Data Concentrator is a device designed
to bridge the gap between existing large-scale 60- and
100-wpm teletypewriter facilities and FIELDATA
equipment. Although all FIELDATA processors have
normal provision for a paper tape reader which accepts
teletypewriter tape as well as one for FIELDATA tape,
there exists a significant need, especially at locations
where FIELDATA processors might not be available, to accept multiple channels of teletypewriter information and convert it into FIELDATA form to take advantage of FIELDATA transmission transducers and error control, permit recording on FIELDATA magnetic or paper tape, permit printing on FIELDATA
high-speed printers, simplify entry into FIELDATA
computers, etc. The CV-691 accepts up to 25 (or 50)
teletypewriter inputs, stores the information in a buffer
core memory (made up of the same memory planes as
used in MOBIDIC), assembles it into message blocks,
converts it into FIELDATA form, applies the same error and supervisory control as the CV-690, and emits
the data in FIELDATA form at rates up to 2400 bits/
second (3000 words per minute). The receive side performs the inverse functions.
It is expected that as more and more of the voice and other analog communications systems convert to pulse code modulation and other digital forms, additional
types of FIELDATA embolic equipment will be required to perform the error control, supervisory functions, and buffering/synchronization necessary to tie
input-output transducers to their digital bit streams.
The FIELDATA family is an attempt to create an
integrated family of data transmission equipments to
meet Army needs. Though lacking many of the features
and equipments of an ideal family of data transmission
equipments, it will make available in experimental
quantities by the end of 1960 the first integrated family


of equipments for experimental establishment of a truly
integrated communications/data processing system. It
will be the first system in which data processing and
communications equipment both utilize the same input-output and storage devices, the same voltages, impedance levels, codes, and other common interconnection characteristics, and in which the equipments are so
designed that in many cases the only way to determine
whether a device is used for communications or data
processing is to look at its specific application in the
system.
APPENDIX I
FIELDATA ALPHANUMERIC CODE
A key step in achieving the required compatibility
between the various elements of an automatic data system is the adoption of a common "language" for the
storage and transmission of data throughout the system. The basic 6-bit alphanumeric code for use in this
family shall consist of 2 indicator bits and 4 detail bits.
The pattern of character assignment for the code is as
follows with the 2 indicator bits determining the choice
of column and the 4 detail bits determining choice of
row in Table II.
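The mechanics of the assignment are simple to state in modern terms (an illustrative sketch of the stated bit structure, not of any FIELDATA hardware): the two indicator bits and four detail bits of a 6-bit character are recovered by shifting and masking.

def split_fieldata(code6):
    # The 2 indicator bits select the column of Table II and
    # the 4 detail bits select the row.
    indicator = (code6 >> 4) & 0b11
    detail = code6 & 0b1111
    return indicator, detail

# Example: indicator bits 01 with detail bits 0110.
assert split_fieldata(0b010110) == (0b01, 0b0110)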
APPENDIX II
FIELDATA SUPERVISORY CODE
The FIELDATA supervisory code is used for message
headings, dialing, multiplex identification, supervisory
control, and other activities associated with secondary
data. This code is similar to the FIELDATA alphanumeric code used for primary data. It also consists of 2
indicator bits and 4 detail bits. The pattern of control
assignment is as follows with the 2 indicator bits determining the choice of column and the 4 detail bits determining the choice of row in Table III. When it is
not necessary to provide alphabetic supervisory information, only the latter two columns are used. In this
case, when the basic 8-bit FIELDATA form is used,
an OR of the first indicator bit and the tag bit will
provide clocking for the 96 legitimate characters of the
8-bit form.

TABLE II

Detail    00 (Upper and    01 (Upper and    10 (Upper    11 (Lower
Bits      Lower Case)      Lower Case)      Case)        Case)

0000      Master Space     K                )            0
0001      Upper Case       L                -            1
0010      Lower Case       M                +            2
0011      Tab.             N                <            3
0100      Car. Ret.        O                =            4
0101      Space            P                >            5
0110      A                Q                _            6
0111      B                R                $            7
1000      C                S                *            8
1001      D                T                (            9
1010      E                U                "            '
1011      F                V                :            ;
1100      G                W                ?            /
1101      H                X                !            .
1110      I                Y                ,            Special
1111      J                Z                Stop         Back Space

TABLE III

Detail    00                       01    10                        11
Bits

0000      BLANK/IDLE               k     Dial 0                    Ready to Transmit
0001      Control Upper Case       l     Dial 1                    Ready to Receive
0010      Control Lower Case       m     Dial 2                    Not Ready to Receive
0011      Control Tab.             n     Dial 3                    End of Blockette
0100      Control Carriage Ret.    o     Dial 4                    End of Block
0101      Control Space            p     Dial 5                    End of File
0110      a                        q     Dial 6                    End of Control Block
0111      b                        r     Dial 7                    Acknowledge Receipt
1000      c                        s     Dial 8                    Repeat Block
1001      d                        t     Dial 9                    Spare
1010      e                        u     Start of Control Block    Interpret Sign
1011      f                        v     Start of Block            Non-Interpret Sign
1100      g                        w     Spare                     Control Word Follows
1101      h                        x     Spare                     S.A.C.
1110      i                        y     Spare                     Special Character
1111      j                        z     Spare                     Delete


A High-Accuracy, Real-Time Digital Computer
for Use in Continuous Control Systems*
W. J. MILAN-KAMSKI†

IT HAS become evident during the last few years that
the accuracy requirements of analog computers
have become too stringent to be easily satisfied. The
rising pressure to achieve better computational accuracy has led to significant improvements in the computational techniques used in analog computers. These
new improvements have made it possible to achieve a
high degree of precision so that a 0.1 per cent accuracy
has gradually become a realistic figure in many analog
machines.
However, present-day analog computer technology is
completely helpless if accuracy requirements approach
the magnitude of 1 part per million, or 0.0001 per cent.
The only available computers which can achieve this
degree of accuracy are obviously digital computers.
Many attempts have been made to design digital
computers so that they might be used as direct replacements for analog computers. However, a rather unexpected difficulty has arisen. Digital computers, which
have received a great deal of publicity as being the
fastest computational tools, are extremely slow when
compared to analog computers. Since the comparison is
made between digital and analog computers, the operation of the digital computer must be such as to satisfy
the bandwidth requirements of the analog computer.
By this equivalence, the bandwidth of a digital computer can be defined as the bandwidth of an equivalent
analog computer.
There are three distinct approaches in solving the
problem of designing high-accuracy, real-time digital
computers. All three of these approaches are directed
toward building high-accuracy digital computers which
can replace analog computers in applications where accuracy requirements exceed present capabilities of these
machines.
At least one approach has come from engineers whose
experience and background have been chiefly in the
field of analog computers. Their basic approach was to
replace various analog computer elements by equivalent
digital operational blocks. For example, an integrator
which consists of a motor with appropriate velocity control can be replaced by a reversible counter; a potentiometric multiplier can be replaced by a digital element
which is called a rate multiplier, and so on.


Since the operation of a computer of this type is incremental, its design approach led to the development
of a family of computers called incremental digital computers.
The second approach was to translate the problem
into a differential equation and then to solve the differential equations by integration. Since the solution of
differential equations is done using finite increments,
the family of digital differential analyzers is closely related to the family of incremental computers. The output function of incremental computers and of the digital
differential analyzers is determined by the increment of
the input function and by the internal state of the machine. These computers, therefore, can be regarded as
deterministic transducers with infinite memory.
The third family of real-time digital computers is represented by machines which go through a complete computational cycle every time a new input sample is taken.
These computers normally adopt computational techniques which have been developed in programming
general-purpose digital computers.
These machines normally have short memories or, in
many cases, no memory at all. Their output is always
uniquely determined by the input.
The latter group of computers is particularly suited to
applications in which a number of problems must be
solved simultaneously and concurrently. This is achieved
usually by interleaving several programs.
The computational speed of digital computers is
usually defined as the number of additions or multiplications which the computer can perform within a certain
period of time. This computational speed is extremely
high when compared to the computational speed of a
desk calculator. In real-time computation, however, the
speed of operation is defined as the ability of the computer to generate output functions, which vary rapidly
with time. Not only must the output function contain
large values of higher order derivatives, but it also must
not be delayed by the finite computational time of the
computer. The transfer function of real-time computers
is often complicated and usually contains trigonometric
functions. If a high degree of accuracy is desired, the
word length required may be as large as 30 binary digits
or more.
It is possible to show that a high-accuracy machine
has a limited ability to generate output functions which
contain large values of output function derivatives. The
computational time increases very rapidly as the word
length increases.

* Presently being developed under a subcontract from the Military Products Dept. of Detroit Controls, in Norwood, Mass., for the
U. S. Navy.
† Epsco, Inc., Boston, Mass.


The design of a real-time digital computer is usually
based on an input-output accuracy specification and on
the bandwidth requirements. For a digital computer, the
bandwidth requirement can usually be expressed in
terms of the amplitudes of output function derivatives.
Maximum possible values of the derivatives can normally be determined by analyzing the geometry and the
dynamic character of the output function.
The first trial in the determination of the maximum
permissible computational time can be accomplished by
first calculating the greatest possible velocity of the output function, and then by selecting a computational
time such that the change of the output function within the computational time will not be greater than a
maximum permissible error.
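As a numerical illustration of this first trial (the figures are invented for the example, not taken from any particular system): if the output function can change at a rate of at most 100 units per second and the maximum permissible error is 0.001 unit, the computational time may not exceed 0.001/100 = 10 microseconds.

def max_computation_time(max_velocity, max_error):
    # The output may not move by more than the permissible
    # error within one computational period: T <= e/v.
    return max_error / max_velocity

print(max_computation_time(100.0, 0.001))   # 1e-05 second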
Errors due to quantization, truncation, round-off,
function approximation, etc., must be considered separately as additional system errors. In certain problems,
the computational time calculated from the investigation of the maximum output velocity may be extremely
short. Extremely short computational times can be
realized with incremental computers. However, internal
rates of several megacycles are necessary in order to
construct incremental machines which have equivalent
bandwidths equal to the bandwidth of analog computers
and accuracy of 1 part in 10,000.
In many applications, long computer memory is undesirable as, for example, in all real-time control and
stabilization computers. Computer response to step inputs in target tracking applications must be excellent.
Errors must be self-correcting, and the accuracy of the
computer must be independent of the accuracy of previous computations.
These requirements cannot be readily satisfied by
purely incremental computers. The selection of a certain type of real-time computer should be based on the
specific requirements of each problem.
The best results can be achieved if the design of real-time computers is specially tailored to each problem.
The specification for a real-time computer is usually
determined by accuracy requirements and the characteristics of the time function to be controlled.
There are usually several other factors which are normally well specified; for example, the weight and size
of the computer and the type of hardware to be used.
These requirements, combined with the environmental
specification, usually determine the maximum practical
internal rate of the machine.
Several design parameters must be considered to determine the optimum combination of computer accuracy, internal speed of operation, approximations
used, sampling rate, and the time of computation.
The maximum permissible computational time can
be determined by analyzing the nature of the output
function. The output function can always be expressed
in terms of a Taylor series. The actual mathematical
manipulation can be quite involved. It may also be
difficult to determine the maximum possible values of

all the derivatives of the output function. However, if
the motion of a physical object is considered, it is usually
sufficient to analyze only the first two or three derivatives in order to describe adequately the output function. Rapid changes in acceleration are very rare, and,
therefore, higher order terms of the expansion can be
disregarded.
The Taylor expansion can be regarded as a polynomial
in t. It is possible then to substitute a polynomial for
the output function. The period of time in which the
polynomial substitution is valid can be determined by
calculating the difference between the polynomial approximation and the output function. The difference
must be less than the maximum permissible error. The
higher the order of the polynomial used, the longer the
period of time over which the substitution is valid. The
computational time can then be determined by the time
it takes the output function to diverge by a certain predetermined amount from the polynomial approximation.
The minimum sampling rate and the maximum
computation time can then be determined for each order
of the polynomial used as the output approximation.
Computation times are progressively greater as the order of the polynomial increases.
The determination of the coefficients of the polynomial requires the determination of the appropriate derivatives of the output function. The polynomial coefficients can be calculated on the basis of several samples computed at given time intervals. Using Newton's
backward interpolation formula, it is possible to determine the coefficients of the polynomial by simply calculating the differences on the basis of several samples
of the output function. (See Fig. 1.)
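A minimal sketch of the difference computation (modern Python with invented sample values; the paper itself gives no explicit formulas): for a second order polynomial, the first and second backward differences of the last three samples supply the coefficients, and Newton's backward formula extrapolates ahead of the newest sample.

def backward_extrapolate(x0, x1, x2, s):
    # x0 is the newest sample, x1 and x2 the two before it, all
    # spaced by the sample interval T; s = elapsed time / T.
    d1 = x0 - x1                 # first backward difference
    d2 = x0 - 2.0 * x1 + x2      # second backward difference
    # Newton's backward interpolation formula, second order:
    return x0 + s * d1 + s * (s + 1.0) / 2.0 * d2

# Samples of x(t) = t*t at t = 0, 1, 2 predict x(3) = 9 exactly.
assert backward_extrapolate(4.0, 1.0, 0.0, 1.0) == 9.0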

[Fig. 1 block diagram: input functions X1, X2, X3, ..., Xn enter the main computer; its output, delayed by the computation time, together with a difference computer, feeds a polynomial substitution that produces the polynomial approximation to the output function.]

Fig. 1-Computer block diagram. Output function is compensated
for the computational delay by means of a polynomial substitution.

The computation of function differences involves subtraction. Since random errors are not correlated, they
are not subject to cancellation. In systems in which
random and bias errors are of the same magnitude, a
second order polynomial is probably the highest order
which can be practically used. The computation of the
terms of the polynomial makes it necessary to memorize
the results of several computations. In other words, it is
impossible to construct a computer which uses polynomial approximation and has no memory. However,
the memory is relatively short. If a second order polynomial is used, the computer memory is equal to only
three computation cycles.

The use of a polynomial approximation to the output
function offers an added advantage which may be important in certain applications. The output function
can be generated in steps which are smaller than the
maximum permissible errors. The need for this form of
output may arise if a high performance servo is controlled by the output of the computer. It can be seen
from Fig. 2 that the actual output of the computer consists of a sequence of polynomial segments and that
there is a discontinuous jump from a polynomial to the
polynomial whose terms have just been calculated. This
discontinuity can be made as small as desired. The reduction of the output function steps, however, can be
achieved only at the expense of the computational time.
It is possible, then, to trade computation speed for
accuracy and vice versa.

Fig. 2-Computer output function. Output function is
approximated by polynomial substitution.

In between the computational times, the output function is not directly controlled by the input functions.
However, the nature of the output function is such that
it cannot possibly diverge from the approximated value
by more than a certain predetermined value. This maximum deviation can be calculated by taking the terms
of the Taylor expansion of the output function which
do not appear in the polynomial approximation.
Once the sampling rate and the order of the polynomial approximation of the output function are determined, it is possible to determine the bandwidth of the
computer. The bandwidth can be calculated by evaluating the accuracy of the computer as a function of the
output function frequencies.
The frequency of the output function is postulated,
and the rms value of the errors due to the polynomial
approximation is calculated. For every frequency, a
certain value of the rms error can be determined. The
bandwidth of the computer can then be defined as the
maximum frequency at which the rms error is still within the permissible limits.


In all real-time control and stabilization computers,
it is always necessary to compute some trigonometric
functions. There are many ingenious schemes for computing these functions by using incremental techniques. All these techniques, however, suffer from the
limitation of having infinite or very long memories. In
the Epsco STARDAC Computer, the trigonometric
functions are calculated using the Tchebycheff polynomials. Sine and cosine functions are usually needed
simultaneously. In the Epsco STARDAC Computer
they are calculated concurrently by using the powers of
the argument and multiplying the results by the appropriate Tchebycheff coefficients. A very high degree
of accuracy can be realized if the Tchebycheff polynomial is used within an interval of 0° to 90°. Simple logic
is used to accommodate arguments outside of this range.
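The flavor of the method can be reproduced with modern tools (an illustrative sketch only; the actual STARDAC coefficients, orders, and word lengths are not given in the paper): Chebyshev fits on 0° to 90° plus simple quadrant logic cover the full circle.

import numpy as np
from numpy.polynomial import chebyshev as cheb

# Fit low-order Chebyshev polynomials to sine and cosine on 0-90 deg.
r = np.radians(np.linspace(0.0, 90.0, 181))
sin_fit = cheb.Chebyshev.fit(r, np.sin(r), 7)
cos_fit = cheb.Chebyshev.fit(r, np.cos(r), 7)

def sincos(angle_deg):
    # Simple logic reduces the argument to the first quadrant.
    quadrant, a0 = divmod(angle_deg % 360.0, 90.0)
    s, c = sin_fit(np.radians(a0)), cos_fit(np.radians(a0))
    if quadrant == 0:
        return s, c
    if quadrant == 1:   # sin(90+x) = cos x, cos(90+x) = -sin x
        return c, -s
    if quadrant == 2:   # sin(180+x) = -sin x, cos(180+x) = -cos x
        return -s, -c
    return -c, s        # sin(270+x) = -cos x, cos(270+x) = sin x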
In this high-accuracy, real-time system, error analysis
is probably the most important phase of the system
design. All possible sources of accuracy-limiting factors
must be carefully analyzed.
In the applications in which the computational time
cannot be disregarded, a polynomial substitution for
the output function is used to offset errors due to the
computation time. The polynomial substitution can be
only approximate, and consequently an error is introduced. Truncation and round-off errors can be determined by analyzing the number of significant digits lost
in the computations. Errors introduced by the substitution of Tchebycheff polynomials for the trigonometric functions can be determined.
Output errors due to the errors present in the input
functions must be carefully analyzed since these errors
determine the maximum realizable accuracy of the system.
The accuracy of the input function has a profound
effect on the decisions which must be made in the design
of the computer. If the computer is designed correctly,
the errors it introduces are normally smaller than the
output errors caused by the errors in the input functions. However, the propagation of the input errors
through the computer must be carefully analyzed since
some of them can be amplified in the computer more
than others. The input function errors can be divided
into two categories, bias and random.
Bias errors can be defined as those whose magnitude
is consistent. In other words, the magnitude of an error
can be predicted with a certain accuracy on the basis
of the errors present in several previous measurements.
On the other hand, random errors can be defined as unpredictable. The random error in any sample has a
probability which is independent of the errors present
in the previous samples.
The propagation of these errors through the computer can be traced easily by using appropriate partial
derivatives. This error analysis is well known to those
who have designed fire control computers. However, the
relative magnitude of bias and random errors in real-time digital computers is normally different from the


Fig. 5-Digital computer module, assembled.

Fig. 3-Computer with covers in place.

Fig. 6-Digital computer module, disassembled.

Fig. 4-Computer with covers removed, showing access for servicing.

relative magnitude of bias and random errors in, for
example, radar returns.
In real-time control computers, input random errors
are usually small and they are very often introduced
only by the input quantization. The quantization random error has a rectangular probability distribution

with a maximum possible error equal to one half of the
least significant digit.
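A standard consequence, not spelled out in the paper but worth recording: for a rectangular distribution over plus and minus one half of the least significant digit q, the maximum error is q/2 and the rms error is q divided by the square root of 12.

import math

def quantization_errors(lsb):
    # Rectangular distribution over [-lsb/2, +lsb/2]:
    # maximum error lsb/2, rms error lsb/sqrt(12).
    return lsb / 2.0, lsb / math.sqrt(12.0)

# Example: a 30-bit binary fraction, so lsb = 2**-30.
print(quantization_errors(2.0 ** -30))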
Various methods can be used in order to minimize the
effect of random errors on the output function. Input
random errors are particularly harmful if differences are
employed in the computation of the polynomial which
is used as the approximation to the output function.
For example, if a second order polynomial is used, the
third difference is calculated and is used to smooth out
the output function. This compensation is valid only if
the noise level is such that the third difference of the
output function is much smaller than the measured
third difference due to random input errors. This method, however, leads to relatively complicated equations.
It is often possible to obtain a significant improvement
by simply reducing the quantization errors. This is obvious since bias errors are not amplified as much in the
computation of differences as are random errors.
Accuracy analysis would not be complete without a
description of the selection of the control equations. In


real-time, digital, control computers, accuracy can be
greatly limited if a large number of mathematical operations must be carried out in order to compute the output function. Long computations are undesirable for
two reasons. Large numbers of computations are time-consuming; and also, in each arithmetic addition as
much as one half of the least significant digit may be
lost. It is then necessary to know exactly what is the
largest possible number of operations which might be
necessary under the worst possible combination of input variables. The number of computations, sometimes,
is very difficult to predict. This is particularly true if
the computer function involves division and if the denominator, under certain conditions, approaches zero.
Unfortunately, this condition arises often in all problems in which spherical geometry is involved; this happens, for example, if it is necessary to compute an angle
whose tangent is determined by a ratio of two expressions which, in turn, are determined by some other
trigonometric functions. The angle itself is uniquely determined for the whole interval from 0° to 360°; however, the tangent is discontinuous at 90° and 270°.
In the STARDAC Computer, this problem was solved
by the use of an iterative routine, which made it possible
to compute the argument even if the tangent of the
angle approached infinity.
As mentioned before, the STARDAC Computer has a
built-in sine-cosine function generator. First a number is
substituted for the value of the argument and the computer calculates the sine and the cosine. Then the sine
of the argument is multiplied by the denominator and
the cosine of the argument is multiplied by the numerator. In the second step of the computation, a comparison
is made between the two products. The difference is
then added directly to the number which was substituted for the argument. Then the cycle is repeated.
Mathematical justification for this operation is almost
self-evident if the numerator of the fraction is represented as sin A and the denominator as cos A. The term
which is added to the argument can be expressed as

    Δ = K sin θ cos A - K cos θ sin A,

but θ was selected at random and was not equal to A.
So the equation can be rewritten as

    Δ = K sin (A + Δθ) cos A - K cos (A + Δθ) sin A

or

    Δ = K sin Δθ.

For small Δθ, the value of Δ is equal to KΔθ. The function converges rapidly if the value of the coefficient K
is close to unity, and in a few iterations the error becomes negligible even for systems which require extremely high accuracy. The program is simple. No
ambiguities arise and the arithmetic operations contain
only multiplications, additions, and complementing. All
these operations are particularly easy if performed in
straight binary code.
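The iteration is easy to simulate (a modern sketch under the assumption that the numerator and denominator are roughly sin A and cos A of some unknown angle A; the machine's fixed-point arithmetic is of course not reproduced):

import math

def iterate_angle(num, den, k=1.0, steps=10):
    # Each cycle compares num*cos(theta) with den*sin(theta); their
    # difference is K*sin(A - theta), which is added to theta, so
    # theta converges to A even where tan A approaches infinity.
    theta = 0.0
    for _ in range(steps):
        theta += k * (num * math.cos(theta) - den * math.sin(theta))
    return theta

# Example: recover an angle of 89.9 degrees, near the tangent pole.
a = math.radians(89.9)
print(math.degrees(iterate_angle(math.sin(a), math.cos(a))))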
trigonometric functions. The angle itself is uniquely deThe packaging techniques used in the construction of
termined for the whole interval from 0° to 360 0 ; how- the STARDAC Computer can best be presented by
ever, the tangent is discontinuous at 90° and 270°.
referring to Figs. 3-6. Fig. 3 illustrates the computer
In the STARDAC Computer, this problem was solved complete with power supplies and input-output equipby the use of an iterative routine, which made it possible ment. Fig. 4 shows the computer with covers removed
to compute the argument even if the tangent of the and the frames pulled out for servicing. Figs. 5 and 6
angle approached infinity.
show typical modules used in the computer.
As mentioned before, the ST ARDAC Computer has a
I t is felt at Epsco that a family of real-time computers
built-in sine-cosine function generator. First a number is such as described in this paper will find broad applicasubstituted for the value of the argument and the com- tion in the field of high-accuracy real-time control sysputer calculates the sine and the cosine. Then the sine tems such as stabilization computers, fire control comof the argument is multiplied by the denominator and puters, navigation computers, autopilots, etc.
the cosine of the argument is multiplied by the numeraA computer whose design is based on the approach
tor. In the second step of the computation, a comparison outlined in this paper can offer an ideal solution to the
is made between the two products. The difference is problem of maintaining extremely high internal acthen added directly to the number which was sub- curacy. It is believed that the need for these computers
stituted for the argument. Then the cycle is repeated.
will grow together with the need for miniaturized, genMathematical justification for this operation is almost eral-purpose computers. It is felt that this new type of
self-evident if the numerator of the fraction is'repres- computer will soon establish itself as a member of the
ented as sin A and the denominator as cos A. The term family of computers together with the stored program.
which is added to the argument can be expressed as
general-purpose machines and analog computers.


The Man-Computer Team in a Space Ecology
J. STROUD†

AND

IN OUR coming adventures in the conquest of space
we have more problems than the well-publicized
one involved in merely getting off the earth; once
in space, man has the problem of survival in an extremely hostile environment. We are adapted to life on
this earth, not in space nor on any other planet of which
we have knowledge. In our first frantic efforts to find
solutions to the many problems attendant on this, we
might reflect that once upon a time man was not
adapted to live on the surface of this earth either. In
order to survive on land, the many specialized cells
which cooperate to make up modern man found it
necessary to bring their environment with them when
they crept ashore. These cells now live and replicate in
a miniature sea in which the temperature and even the
salinity are controlled at about the same values as those
of the ocean in which they once lived.
So it seems that the solution to the problem of survival in space is obvious: To stay alive man must again
take a reasonable facsimile of his accustomed environment with him.
We have found the answer!-Or have we? The only
way we can think of implementing our solution would be
to arrange an "air lift" to man's abode in space, and thus
keep him supplied with his every need. The trouble with
this is the high cost of the "lift." Current American
freight rates from earth to a minimum satellite orbit,
the staging area for our journey into space, happen to
be about $30,000 a pound. And present indications are
that it will be a long time before the rates are reduced
by a factor of ten-and a lot longer before they can be
considered "reasonable." So when we consider that a
man eats and breathes his own weight in food and oxygen every two months, and that he will certainly need
other supplies, we find that our first solution just isn't
feasible.
Stated another way, we are forced to conclude that
to survive and replicate himself in space, man must be
equipped before he leaves his staging area to "make do"
with what he finds in space, namely, abundant radiant
energy from the sun and some very raw, raw material
from an occasional asteroid.
Certainly there is sufficient energy; more than a kilowatt per square meter at the distance of the earth from
the sun. This is an amount which, if applied to a man's
hide, would be sufficient to run him if he were only an
efficient transducer. It would also be enough if applied


† Naval Electronics Lab., San Diego, Calif.
‡ Convair-Astronautics, San Diego, Calif.

J. McLEOD‡

to the hide of the space ship, or better, an extension of
it, to propel it about the solar system from the inside of
the orbit of Mercury to the orbit of Jupiter.
Of course, as is sometimes said of the impossible,
solving the problem of converting asteroid chips to support life in space may take a bit longer. But with energy and sufficient know-how, it can be done. Theoretically man can maintain himself in a space-ecology indefinitely. In fact, we can say with some certainty that
someday precisely this process will take place, and that
the human population of the solar system may very well
increase a millionfold in a couple of thousand years,
with 99.9 per cent of this population living in comfort
and even luxury in little artificial worlds in space.
Most of the knowledge which allows us to live on this
earth is currently locked up in our own genes and those
of the plants and animals which do the real work of our
planetary ecology, while we exist as the prime parasites.
To re-educate the necessary genes to live in space would
be at best a slow evolutionary process, which we have
neither the knowledge nor the time to accomplish. We
must find a fast, a revolutionary, way to furnish man
with the necessary know-how.
Admittedly, to satisfy our reactionary genes will take
a lot of know-how. It is beyond the scope of this paper
-that is to say, it is beyond the combined abilities of
the two authors-to say just how much a man or a
colony must know to survive in a space ecology.
Or, much more to the point, considering that the cost
of a first-class (and only class) one-way ticket just to the
staging area for a space mission is about half a megabuck per person, the question becomes: How few people
can collectively know enough to survive in space? So
far as we know no one knows, or is making a serious
effort to find out. This is either a sad commentary on
the direction of our space effort or a happy one on our
security system-if anything about security can be
happy. However, if it is true that we are not making
an all-out effort to answer this question, to point out
that it is already frighteningly late to begin such a
fundamental task is to belabor the obvious. Suffice it to
say that we must get on with the task as though our
very existence depends on it, as well it might.
But no matter what amount of know-how is found to
be required to exist in space, it is the burden of this
paper that computers are the only means by which the
necessary knowledge can be made available to men out
there at freight rates we can afford.
Having stated the problem and suggested a broad
solution, we leave the details to the fertile imaginations

of our bright young colleagues while examining in more
detail one important aspect of the problem: the man-computer relationship which will be required.
First let us take a brief look at man. His internal
memory is so unreliable that he must depend on external
aids, usually written records. These must be 50 to 75
per cent redundant to be understood, and they cannot
contain more than 100 bits of information per square
inch and still be read. Microfilming can increase the
density of information but it cannot improve man's
reading rate beyond the five-bits-per-second human limitation.
Moreover, there are very severe limitations as to
what man can stand physically and emotionally. His
sensitivity to his environment, already mentioned, may
be reflected in a quite understandable anxiety concerning his own well-being under one set of conditions and
boredom under another. Neither is conducive to reliable
performance.
Computers, however, can read non-redundant material with densities as great as a billion bits per square
inch, and they can read at rates of the order of a million
bits per second. That they can do literature searches and
prepare abstracts is hardly news, but it is also true
that they can perform feats of symbolic logic and deductive "reasoning." Moreover, they don't get dangerously
upset under some conditions and negligently bored
under others.
For these and other reasons, computers are being
widely used on earth to augment man's efforts in science and industry, even when the salary and overhead
of the individuals being augmented may not exceed
$30,000 a year.
Certainly then, all will agree that when man is sent
into space where the cost of his per diem and transportation is nothing short of fabulous, he must have computer support.
Note that we have used the words "augment" and
"support." Even in our enthusiasm for some of the superior capabilities of computers we do not suggest that
man will continue to allow them to explore space without him, although it is evident that several generations
of computers will have acquired quite a wide experience
in space operations before man ventures forth.
Eventually man must go into space; if for no other
reason, "because it is there!" However, for a sufficiently
great number of people to go in one colony to collectively know enough to survive is not practical with any
earth-orbit ferry which we can expect to have in the
foreseeable future.


For these reasons the man-computer team proposed
here for a space ecology does not include a large number
of human specialists, but rather a few humans of unusually broad background who will only have to be
able to ask the right questions and do any inductive
reasoning which might be required. The computers will
select and supply all detailed information needed, make
necessary computations, and make all decisions which
can be reached by symbolic logic or deductive processes.
We must recognize, however, that to use computers
in space effectively as a source of most of the know-how
required for man to thrive will require improving the
art more than a little, and mostly in the area of establishing man-computer rapport.
At present, many people are inclined to refer all too
freely to computers as "high-speed" morons. It is just
possible that this is because the esoteric art called programming is so esoteric that it is practically a cult,
headed by the Senior Programmer as High Priest. If
computers do, in fact, sometimes behave like morons,
perhaps we should ask if anyone has taught them to do
otherwise. We think nothing of spending twenty-odd
years programming a two-legged computer. And the
amount of effort that has gone into developing the material required is measured in man-centuries.
In contrast, we have had high-speed electronic computers for less than twenty years-only a few computer generations. We take, at most, a few man-months to
"educate" them, and then say, in effect, "You do exactly
what I say in exactly the way you are instructed to or
I'll brain-wash you!" If we are to get along well with
our computers, we are going to have to give a great deal
more thought and effort to their education than we have
to date. At present, we are paying some pretty fancy
prices for a very inferior brand of education for some
rather bright hardware because we got started on the
wrong track. Instead of instructing them by rote, we
should teach our computer to be actively curious, to attempt to find a few answers of its own.
Such a radical reorientation of our approach to computer education is going to be quite painful to some. As
the old man said, "If you want to train a dog, ya gotta
be smarter than the dog be!" If the same can be said of
computers, many "trainers" will be immediately disqualified. Do you know what makes you curious? Do
you know how you distinguish sense from nonsense?
Neither do we, precisely. But we had better find out-and teach our computers-if we are to survive in space
or anywhere else!


The RCA 501 High-Speed Printers-The Story of a Product Design
C. ECKEL†

AND

THE 501 High-Speed Printer is an output device
of the RCA 501 Electronic Data Processing System. This system is fully transistorized. It is expandable in both the area of the high-speed memory
and the input-output devices. In the specific case of
the High-Speed Printer, this expandability takes the
form of optional use of the printer mechanism either as
an on-line device or an off-line device.
Initially, the 501 product plan was to design and
produce a minimum cost EDP system. Therefore, the
first printer specifications called for on-line operation
with the printer being driven directly by the computer.
This accomplished two things. It held the cost of printer
electronics to a minimum, but still allowed the system
to have a high-speed output capability.
Subsequent product planning developed the need for
system expandability, that is, a system which could be
enlarged at the user's convenience if the work load increased. To meet this requirement, a program was also
started to design buffer electronics to allow the printer
to be run directly from magnetic tape-off-line.
Fig. 1 shows a scale model of a basic RCA 501 system.
Information is entered through the paper tape reader
at 1000 characters per second. Printed output is available from either the monitor printer or the on-line
printer. Fig. 2 shows an expanded system with provision
for punched card input and output and additional magnetic tape storage. The High-Speed Printer is now an offline device with buffer electronics permitting it to operate from magnetic tape.
The specifications set up for this printer were that it
should be capable of printing at least 600 lines per minute with 120 columns per line. It should be capable of
producing at least an original and three carbon copies,
offset masters and ditto masters, and contain the necessary logic to be applicable in all types of format printing. Not an integral part of the design specifications,
but perhaps most important over-all, were the criteria
that the printer should be low enough in manufacturing
cost, but high enough in performance so that these factors alone could ward off early obsolescence. A corollary
to this was the fact that the length of design cycle must
be held to a minimum and the design cost should be reasonable.
The mechanism selected for the printer was of the
"flying wheel" variety. Basic techniques employed in
printers of this type are well known, therefore we shall

D. FLECHTNER†


† Electronic Data Processing Div., RCA, Camden, N. J.

Fig. 1-Basic 501 system.

Fig. 2-Expanded 501 system.

only dwell on the functional aspects that illustrate the
product development.
Fig. 3 shows a flow diagram for the on-line printer.
Information to be printed is stored in the high-speed
memory of the computer and the format is controlled
by a program in the computer program control. The
memory contents are scanned and a line is printed with
each revolution of the print wheel cylinder. Synchronization of the memory and the character identity coming into position to be printed is accomplished by a
photoelectric code disk assembly, mounted on the same
shaft as the print cylinder. The coded bits emerging
from this disk are mechanically phased with respect to
the character they represent on the print cylinder. This
allows sufficient time for a particular character to be
compared for its occurrence in the computer's memory
and if it exists, a bit is placed in the shift register corresponding to the proper print column. A clock pulse
from the computer then causes printing of this character identity. The process is continued until the computer memory has been examined for all 51 possible
printing characters. The computer next generates a signal indicating the amount the paper should be moved,
and upon receipt of a return signal indicating that the
paper has been moved, the entire process is repeated.

[Fig. 3 shows the computer program control and memory driving the printer's code disk and print cylinder; Fig. 4 shows a tape station feeding magnetic tape through buffering and paper control to the printer.]

Fig. 3-On-line flow diagram.

Fig. 4-Off-line flow diagram.
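The scan-and-fire cycle just described can be sketched compactly (modern Python; a schematic of the described logic with invented function names, not RCA's implementation):

def print_line(line, wheel_characters, fire_hammers, advance_paper):
    # line: up to 120 columns of text to be printed.
    # wheel_characters: the characters in the order the code disk
    # reports them coming into print position; one revolution of
    # the print cylinder prints one complete line.
    for ch in wheel_characters:
        # Scan the line for every column holding this character and
        # set the corresponding bit of the shift register.
        columns = [i for i, c in enumerate(line) if c == ch]
        if columns:
            fire_hammers(columns)   # clock pulse fires these solenoids
    advance_paper()                 # then the paper is shifted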
Fig. 4 illustrates the off-line operation employing
suitable buffering logic. The buffer unit is designed to
accept one line of information at a time from magnetic
tape, store it temporarily, and then print it out. The
line is stored in a core memory, the input to which consists of a coincidence between character identity and
column location. The memory is clocked out by the
photoelectric code disk assembly as each character
identity comes into print position.
Printing is normally accomplished in an asynchronous manner. That is, provision is made to determine
when all character identities to be printed on a line have
been printed. Upon receipt of this signal, another line of
information is immediately read in from magnetic tape
as the paper is shifted. In this manner, basic printing
speeds may exceed 600 lines per minute reaching as high
as 900 for numeric printing. The logical circuitry in this
area also serves as an accuracy check on the number of
characters printed vs the number of characters which
should have been printed.
In order to control the printed format, several features have been incorporated. First, by means of a plugboard, incoming information may be tabulated to any
of 24 predetermined positions; this same feature may
also be used to delete information which is not wanted.
It is also possible to effect multiple printing of the same
data on one line, again by use of the plugboard.
Fig. 5 shows the printer mechanism, which is used for
either the on-line or off-line operation.
Now that the printer has been described, the following discussion will outline some of the factors which influenced the product design.
The need for economical high-speed printers for computer output has persisted. Both the electronic and
electromechanical printers were considered at RCA.
From an economic and state-of-the-art point of view, the
electromechanical seemed more promising. Rotary

Fig. 5-Picture of printer.

wheel printers (Fig. 6) looked to us to be the best compromise as far as simplicity of mechanism and high
printing speed are concerned. The earlier equipment
was designed using mechanical printers of this type.
Since we already had experience with this type of printing mechanism (certain problems were known to be
problems) the new product development for the RCA
501 system consisted of refinements and improvements
in the techniques. We already knew how to make good
print wheels, and how to be consistent with the solenoid
fabrication, and were familiar with the many other
necessary techniques.
An area that we felt needed some investigation was
that of high-speed paper shifting. At the time the project
was initiated, we had a development design of an electromechanical detent spring clutch which gave promise
of very high-speed paper shifting. We found, however,
that ,a magnetic clutch, suitable for paper shifting,
though not quite as fast, was already commercially
available, and so it was adopted.



Fig. 7-Normal printing and 0.005-inch misalignment
above and below a line.

Fig. 6-Print wheel area (ribbon removed).

In order to appreciate the problems involved in fabrication of the print solenoids let us examine for a moment
the general concept of print quality. Admittedly, this is
a subjective type of thing and depends primarily on the
ability and the resolving power of the human eye.
We have found, for example, that it is possible to detect a vertical misalignment of about 0.005 inch between
adjacent characters in a line of print without much difficulty. In the conventional typewriting or typesetting,
where similar misalignments occur more frequently in a
horizontal direction, the effect is not generally displeasing. People are more familiar with this type of printed
copy, and they usually accept it without notice. However, they detect any vertical misalignment quickly and
question its reason for existing.
Fig. 7 shows the word "link" in which the i is 0.005
inch above a line and n is 0.005 inch below the line with
l and k on a line. Here, also, you can see the irregular
horizontal spacing in the top word, a design requirement of wheel printers. We readily accept this difference
in spacing because of the differing widths of letters.
Obviously, vertical misalignment is a normal consequence of rotary wheel printers which must be minimized. The primary way in which this is done is to assure that all 120 solenoids are as near alike as possible
both electrically and mechanically.
From an ease of manufacturing and of maintenance
standpoint, we would have liked to have placed the
print solenoids in two banks, one on each side of the
print wheel center line, that is, 60 per side, so that all the
solenoids would be similar. However, the problem of getting enough energy into and out of a solenoid, which is
only 2/10 of an inch thick, without crosstalk, resulted
in a design compromise of two banks of solenoids on
each side. We found that with the two-bank design, we
could assemble and pot the solenoids in groups of five.
Fig. 8 is a representation of the solenoid area of the
printer.
The potted assembly of five solenoids is machined in
a single operation so that the tolerances can be held.


Fig. 8-Print hammer action.

This technique is similar to that used in fabricating digital magnetic recording heads. By this means, we are
able to make a good solenoid economically with the correct tolerances built in, thus eliminating later assembly
adjustment. Fig. 9 shows the solenoid area with a view
of an individual unit.
Another area of investigation centered in the code
disk. This assembly has the function of establishing the
angular position of character on the print drum and
translating the character into coded notation. The
code disk has perforations corresponding to the 7-bit
code used in the RCA 501 system. Fig. 10 shows the
code disk area.
Here we were able to effect manufacturing economies
by photo etching through a plate to obtain the coding.
The logic of the on-line and off-line printers is implemented with circuit boards of standard configuration.
Most are the same board types used elsewhere in the
system (Fig. 11).
The use of standard plug-in packages, plug-ins that
are also used in the computer and the rest of the units
of the system, also helps to keep manufacturing and
service costs down.
Design for simplification and ease of field maintenance meant that the logic should be straightforward
and easy to understand. All necessary adjustments
should be in convenient locations and the mechanism
should be designed in modules which are easily replaceable, such as the print drum, ribbon drive, and paper
shift assembly. In the on-line case, a small maintenance
panel simulates the computer so that the unit can be
serviced independently without tying up the rest of the
system.


Fig. 11-Typical plug-in transistor circuit board.
Fig. 9-Solenoid area.

CONCLUSION

Fig. 10-Code disk.

In the computer field, the major problem encountered in product design is time. The technology is advancing at a rate which constantly makes new products
obsolete in the design stage. The product design team
must carefully weigh the technological advances which
can be incorporated in a design against the need for
production release so that the device can be made
ready for sale. To insure that the product design remains saleable, the following three items are basic.
First, the design should be functionally good. Second,
it should be reliable, and third, it should be reasonably
priced. When these characteristics are achieved in a
product design, regardless of technological advance, the
product will not become obsolete-it will remain marketable.

A Digital Computer for Industrial Process
Analysis and Control
EDWARD L. BRAUN†
INTRODUCTION

AMONG the more important reasons advanced for the
relatively unexploited use of digital computers in industrial process control systems are

1) a lack of knowledge concerning process dynamics,
2) inadequate development of computers engineered
for and suited to process control applications, and
3) inadequate reliability of current digital computers.

† Genesys Corp., Los Angeles, Calif.

Our purpose here is to describe a computer which has
been designed specifically for industrial process control
applications. It promises to satisfy reliability requirements, and can be of great utility even in the absence
of complete information on the dynamics of a process.
This type of machine can be used for either one or both
of the following major functions: It can be used to advantage in the quantitative determination of the effects
of different controllable parameters on process performance and also as a process optimization control computer.


We shall not consider here the dynamics of a particular process nor attempt to present a quantitative picture
of the benefits to be realized from the use of new instrumentation and digital control computers in various industrial processes. These are the subjects of recent and
continuing studies by a number of organizations. These
studies indicate that the utilization of machines which
allow control to be based on process dynamics as well as
steady-state considerations may offer, in particular
areas, one or more of the following advantages:
1) A reduction in capital investment for new process
plants by the substitution of small responsive
equipment and control systems for some of the
mass and storage capacity on which many plants
presently rely for stability and self-regulation
2) A reduction in expenditures for raw materials,
heating, cooling, catalysts, etc., as a result of more
precise control
3) Improved productivity
4) Improved quality control
5) Realizable, effective control for new processes necessitated by technological progress, and which
must function under conditions beyond the present
limi ts of con trolled process variables.
ANALYSIS OF A PROCESS

The effectiveness of computer control of an industrial
process is dependent to a large degree on the data
available concerning the effects of controllable parameters on
pertinent characteristics of the process. However, this
does not necessarily imply that the exact dynamics of a
process must be known in precise analytical terms before a process can be controlled. The fact is that control
can be and is effected with only a qualitative knowledge
of system behavior, coupled with the use of feedback
methods.
The objective of a process analysis is the determination of the relationships between the major process
parameters and the location of optimal operating regions for the process in terms of these variables. Once
the optimum operating regions have been determined, a
decision can be made in regard to which variables need
be controlled and in what manner in order to maintain
optimization of one or more characteristics of the process in the presence of disturbances that may arise.
Let us consider now a relatively simple procedure for
analyzing the effects of various parameters on system
performance which can be incorporated into the special
purpose computer to be described. The procedure consists essentially of specifying initially allowable variations in a number of variables and then programming
these changes in order to obtain data on their effects on
the various characteristics of the process. The information entered into the computer prior to an investigation
consists of the process parameters that are to be varied,
the size of the incremental steps of the variation (variable over a limited range), and the upper and lower
limits within which the variations must be kept in order

to prevent upset of the process. Where two or more
parameters are to be varied, the program will cause all
combinations of these parameter values (within the imposed limits) to be impressed upon the system. The
only other quantity that need be entered is the stabilization time required for the process to stabilize after each
programmed change. This delay is usually referred to as
dead time or process lag. It allows for the occurrence of
two events. First, it permits a process variable to reach
the steady-state value called for by a command in the
program. Also, it provides time for the over-all process
to adjust to this value.
The values of the various quantities whose effects on
the process are to be investigated are determined by
suitable sensing devices (whose outputs usually are in
the form of dc voltages) which are coupled to the computer via an analog-to-digital conversion system. A convenient way of effecting the programmed incremental
changes in the process variables is simply to generate
commands which cause the set points of the various
controllers in the system to be altered. Just prior to the
initiation of each new "change" command, the computer causes the current values of all input and output
variables of interest to be read out, either in printed
form or graphically.
A useful procedure that can be incorporated into the
program is to cause a reversal in the sign of the subsequent incremental commands whenever either the upper
or lower limits of a particular variable are reached. This
produces automatic cycling between preset limits of a
given variable. A convenient method of programming
changes in a second variable under investigation would
be to cause it to be changed by a single increment each
time the first variable reached a limit and reversed. A
third variable would be advanced by one increment each
time the second reversed, and this type of procedure
could be used for as large a number of variables as desired.
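This cycling discipline amounts to a zig-zag sweep of the parameter space, sketched below for two variables (illustrative modern code; the names and callback structure are invented, and settle() stands for the programmed stabilization time):

def sweep_two(v1, v2, apply_set_points, settle, read_outputs):
    # v1 and v2 are (low limit, high limit, increment) for the two
    # process variables under study. v1 runs between its limits and
    # reverses; v2 advances one increment at each reversal.
    lo1, hi1, s1 = v1
    lo2, hi2, s2 = v2
    log, sign = [], +1
    x2 = lo2
    while x2 <= hi2:
        count = int(round((hi1 - lo1) / s1)) + 1
        points = [lo1 + i * s1 for i in range(count)]
        if sign < 0:
            points.reverse()          # reversed sign after a limit
        for x1 in points:
            apply_set_points(x1, x2)  # alter the controller set points
            settle()                  # wait out dead time / process lag
            log.append(((x1, x2), read_outputs()))
        sign, x2 = -sign, x2 + s2     # advance the second variable
    return log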
Specifically, the analysis procedure outlined takes
place as follows. First, the variables of interest are programmed for relatively large incremental changes within
a limited range around their nominal set points, the latter being determined theoretically, from simulation experiments, or from operator experience. For purposes
of visualization, assume that the effects of two input
variables upon a particular output variable are being
investigated. For each pair of values of the input variables there will be a corresponding value of the output
variable. In general there will be a set of values of the
input variables which produce the same value for the
output and define what may be referred to as process
output contour lines. Examination of these contour
lines in a particular area will indicate in what direction
within the plane to proceed in order to find better values for the output variable. Once the direction has been
determined, the computer investigates the new area in
like manner. When the new area to be investigated becomes relatively small, i.e., convergence to a solution is
approached, the size of the programmed increments

may be diminished. Once the optimum setting for a pair
of variables has been determined (for fixed values of
other process variables), it will be found generally that
the effect of a third variable is to cause both a shift and
change in size of the optimum output variable contour
line. Fig. 1 illustrates the variation in the contour lines
of the V1, V2 plane as a function of a third variable, V3.
The values associated with each contour line are given
by ki.
The same type of procedure may be used not only in
an experimental effort to gain information about a particular process but also in using a computer to control
the process. In this case, small adjustments are made in
the controlled variables until a set of values is obtained
that produces an optimum output. Whenever a deviation from the optimum occurs, the computer initiates a
search for a new set of values of the controlled variables
that will produce an optimum output. Thus, by experiment and successive approximations, an optimum solution can be produced even in the absence of complete
quantitative knowledge of the process dynamics. Useful
data are obtained not only on the relationship between
specified input and output variables but also on the accuracy of control that would be required for an allowable
change in a given output variable.
COMPUTERS FOR PROCESS CONTROL APPLICATIONS

For optimal control of an industrial process, it is usually necessary to maintain close control over a large
number of process variables in a way which takes into
consideration not only the effects of the individual inputs on certain specified outputs but also the relative
effects of these input quantities. The control system
should have the capacity (in the event that a particular
input cannot be made optimum) to generate a compensating change in one or more other variables of the
system. Also, it is desirable that the computer be capable of optimizing different process outputs in accordance with the current economics influencing the relative
desirability of producing different output products, i.e.,
the fluctuations of supply and demand.
Once the computer has, by the processes indicated or
similar ones, produced data on how a given process may
best be controlled, it may subsequently be used to control that process. As a process controller, it is desirable
that it have the capability to
1) monitor, store, and log process data,
2) determine the values of the controlled variables
that will optimize the output,
3) actuate controllers, and
4) check the system and itself to detect malfunctions
in either.
Before proceeding to the description of a digital computer useful for analysis and control of a typical industrial control process like fractionation or distillation,
it is desirable to review the general characteristics and
capabilities of analog computers and the two major
types of digital computers-namely, the arithmetic or integral transfer type of machine and the incremental transfer machine.

Fig. 1-Planar maps showing the variation of an output variable as a function of three input variables.
The relative merits of analog and digital computers
for process control applications have already been considered in the literature. The conclusions reached from
these comparisons are that while an analog computer
may be adequate in certain cases, it does not in general
have adequate capabilities to suit it for more sophisticated control systems. It is limited in its ability to perform operations like the multiplication or division of
variables, the generation of functions of several variables, data correlation, extrapolation, etc. It does not
have the capacity for logical operations, nor does it provide adequate data storage facilities. It is not well suited
for complicated correlation or data processing. Often,
it may not be adequate even for relatively simple computations if there are a large number of them, or if nonlinear functions are involved. In addition, a digital machine offers greater flexibility in the sense of relative
ease of modification of control functions and also in that
it provides a number of facilities in addition to the
computations required for control, such as data storage
(including the storage of calibration data), logging operations, alarm generation, etc. Finally, the analog computer is more prone to faulty operation from marginally
operating components and does not offer the self-checking feature of the digital machine.
The relative merits of integral transfer and incremental transfer machines may be summarized as follows.
The integral transfer machine has excellent data storag~ and processing facilities with a large measure of operational flexibility. The cost of this is the price of ~
large main store and a large number of arithmetic and
control circuits. The upkeep is also high because the
complexity of programming and preventive maintenance procedures demands skilled programming and operating personnel. However, beyond a certain size, additional capabilities and flexibility can be added at little cost.

[Fig. 2 block diagram: process variables (temperatures, pressures, liquid levels, compositions, set points, etc.) enter through a commutator and analog-to-digital conversion into the data input system; a magnetic core-transistor sequential network and a magnetic disk store carry out data storage, calibration, analysis, correlation, alarm control, and set-point control; output functions include plot, log, and display, alarm, plotter, tape recorder, printer, and the controllers.]

Fig. 2-Functional diagram of industrial process analysis and control computer.

Maintenance of this type of machine is facilitated
by its capacity to provide elaborate checks on itself and
the control system in which it is incorporated. The
integral transfer machine is superior to the incremental
machine (even one with variable selectable slewing
rates) in slewing time, and therefore better able to implement decisions. However, it is relatively inefficient in
computations on continuous variables and capable only
of a moderate computation frequency.
Because of its efficiency in computing with continuous variables, an incremental machine with relatively
few elements can provide good capacity and a high
computation frequency. Its relatively small size gives it
considerably better reliability with respect to failures,
though its reliability with respect to malfunctions is
comparable to that of the integral transfer machine.
Also, its maintenance is often complicated because of
some of the logical devices used in its design. In its basic
form it is relatively poor in respect to slewing time. It is
not well suited for problems of logical analysis and is
lacking in certain data processing capabilities.
THE INDUSTRIAL PROCESS ANALYSIS AND
CONTROL COMPUTER

It is apparent that both integral and incremental
computation techniques offer advantages. A basic premise of the machine described herein is that it is good
economics to minimize the number of active storage and
switching elements by the use of incremental computa-

tion and stored logic wherever possible. This results in a
speed of operation that is relatively slow but adequate
for process control applications. A functional diagram
of a machine of this type now being built is shown in
Fig. 2. As indicated, all the functions of the computer
are achieved through extensive use of a magnetic disk
store in conjunction with a small magnetic core-transistor sequential network. The functions for which
the machine was specifically designed include
1) storage of data from process instruments, instrument calibration data, safe limits of variables,
computation constants, etc.,
2) data processing and computational capabilities
like integration, function generation, data correlation, smoothing and prediction, solving of differential equations, etc., and
3) generation of signals for adjustment of controllers,
generation of alarms, etc.
Briefly, these functions are accomplished as follows.
Storage of the values of all variables for a specified
period, say an hour, at a sampling rate of one or two
points per minute per variable is accomplished by use
of a delay line of several thousand bits capacity. Thus,
whenever a variable exceeds prescribed limits, the recent history of the process is available to aid in determining the cause. These data can also aid in operator
supervision of the process, since system trends can be
checked by read-out of the same variables at different
times as a plot.
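The phrase "several thousand bits" is easy to verify with simple arithmetic. The sketch below performs the sizing calculation; the number of variables and the word length per sample are assumptions, since the text states only the sampling rate and the one-hour period.

    # Back-of-envelope sizing of the history delay line described above.
    # Variable count and word length are assumed; the sampling rate (one
    # or two points per minute per variable) and the one-hour period
    # are from the text.
    variables = 20          # assumed number of monitored process variables
    bits_per_sample = 10    # assumed word length per stored sample
    samples_per_minute = 1  # text allows one or two per minute per variable
    minutes = 60

    total_bits = variables * bits_per_sample * samples_per_minute * minutes
    print(total_bits)       # 12000 -- i.e., "several thousand bits"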
Analytical or empirical calibration data for selected
variables are stored on a group of channels referred to as
linearizing channels. To alleviate the problem of instrument drift, measurement by each instrument of a
known quantity can be transmitted to the computer at
specified intervals. From these data, new calibration
constants may be generated and stored as required.
A relatively simple alarm control facility is obtainable
by the allocation of certain channels for the storage of
upper and lower safe operating limits of certain variables. These data are continuously compared with the
data in the long delay line. Exceeding a limit causes annunciators and other alarm indicators to be actuated.
Off-limit data can also be logged or plotted whenever
alarm conditions are indicated. Computational and logical facilities can be provided to produce anticipatory
indications of trouble, e.g., searches can be made for
dangerous trends and the simultaneous occurrence of
events that imply trouble.
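Functionally, the limit comparison reduces to a few lines. The sketch below models it; the variable names, units, and limit values are invented for illustration, and the anticipatory trend checks mentioned above are omitted.

    # Model of the alarm facility: stored upper and lower safe operating
    # limits are compared against the latest samples from the history
    # line. Names and limit values are illustrative only.
    limits = {"top_temperature": (150.0, 210.0),   # (lower, upper)
              "column_pressure": (2.0, 9.5)}

    def scan_alarms(samples):
        """Return the variables whose latest sample exceeds its limits."""
        off_limit = []
        for name, value in samples.items():
            lo, hi = limits[name]
            if not lo <= value <= hi:
                off_limit.append((name, value))   # actuate annunciator, log
        return off_limit

    print(scan_alarms({"top_temperature": 215.0, "column_pressure": 5.1}))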
The analyzer is used principally to investigate overall system behavior. It can serve as a simulator, computing the solution to equations describing the behavior
of the process and its controllers. It can be used to study
the dynamic effects of variations in the choice of controllers, and their set points, utilizing sampled data
from the system itself.
The function of the correlator is to aid in the determination of the effects of different controllable parameters on the behavior of the system in order to ascertain
what variables to control and to compute optimum set
points for their controllers.
Set point control is accomplished with the aid of
channels in which the set points and control parameter
settings believed desirable are stored. These data are
continuously compared with sampled data from the
process to determine current settings for the controllers.
Also, the settings of the controllers are adjusted to
maintain output optimization in accordance with data
from the correlator. Any combination of proportional,
rate, and integral control can be provided.
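The combination of proportional, rate, and integral control corresponds to what is now called PID control. A minimal discrete-time sketch follows; the gains and sampling interval are assumptions, not values from the paper.

    # Minimal discrete-time proportional-integral-rate (PID) correction
    # of a controller setting. Gains and the sampling interval are
    # assumptions for illustration.
    def make_controller(kp, ki, kd, dt):
        state = {"integral": 0.0, "previous_error": 0.0}
        def step(set_point, measurement):
            error = set_point - measurement
            state["integral"] += error * dt                  # integral term
            rate = (error - state["previous_error"]) / dt    # rate term
            state["previous_error"] = error
            return kp * error + ki * state["integral"] + kd * rate
        return step

    controller = make_controller(kp=0.8, ki=0.1, kd=0.05, dt=1.0)
    print(controller(100.0, 92.0))   # correction toward the set point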
A most important requirement of an on-line process
controller is a long mean time between failures-of the
order of six months. Satisfaction of this requirement too
is facilitated by a design that minimizes the number of
active storage and switching components. Various devices employed to reduce the component count while at
the same time maintaining specified capabilities are
1) simulation of active storage and switching elements by passive storage elements, use of the disk store not only for the function of data storage, but also for arithmetic and logical transformations and control functions;
2) multiplexing of stored information and time sharing of active components,
3) the utilization of both incremental and integral transfer techniques,
4) an organizational structure of the stored data which minimizes the amount of control circuitry required for manipulation of the data, and
5) specific logical configurations that capitalize on the particular requirements of the control applications in question.

Active elements are used principally to direct data
from the disk to the controllers and to output equipment, to modify data in any given channel, and to control the flow of data between channels.
A high degree of reliability is also promoted by using
circuits operable over a wide range of parameters, by
underrating circuit components, and by eliminating
components considered inherently unreliable. Fail-safe
operation is achieved by placing the computer in parallel
with the process, i.e., so that it controls only the set
points of the controllers.
Over-all system performance depends not only on
techniques for minimizing the probability of occurrence
of a malfunction but also on rapid detection of a malfunction when it occurs.
To serve this end, the entire disk system is periodically and automatically given a test problem, and there
is also provision for the insertion of special diagnostic
programs by the system operator to allow any detected
error to be traced to the process, including controllers,
or to the computer.
The type of computer system described can serve in
a test and evaluation phase as a process analyzer to determine the feasibility of computer control and as a simulator to investigate the effects of different types of control schemes. It can also function as a fixed-program process-optimization control computer. Though a single machine can be provided with both capabilities, the economics of a particular situation may be such as to justify the use of separate computers, one a flexible analytic computer, and the other a relatively inflexible fixed-program computer for control optimization. Each of these machines would, of course, be less expensive than the machine with both capabilities.

The Burroughs 220 High-Speed Printer System

F. W. BAUER† AND P. D. KING†

† ElectroData Div., Burroughs Corp., Pasadena, Calif.

INTRODUCTION

IN the past few years, computer manufacturers have
been developing and delivering data processors with
faster and faster processing speeds. Less attention,
however, has been paid to getting printed information
from the data processor. As a result, most data processing systems are input/output bound.
The speed of the printing device has not been the only
restriction; expensive and time-consuming central processor runs were necessary to perform the editing and
formatting of the information for the printed page.
In order to bring about a system balance and relieve
the central processor of unnecessary data manipulation,
a high-speed printing system with complete off-line
editing was needed.
To define the problem more closely, we studied such
applications as: the printing of insurance premium
notices and declaration forms; wholesale drug and
grocery billing, oil company and public utility billing;
and the preparation of bank statements, stock transfers, and dividend checks. To handle these applications,
a printer system must not only be fast but also, considering customer relations, capable of producing a neat, legible document.


SPECIFICATIONS

The high-speed, large-volume, complex-editing application, then, was the general framework which defined the boundaries of our system planning.
First of all, the new printer system must be a high-speed device, capable of meeting the printing requirements of a majority of applications. The necessity for
system speed established the need for independent
printer control of format and editing, since high-speed
output which depended on extensive editing by the data
processor could not be considered true speed at all.
Independent operation and speed requirements indicated that magnetic tape was the logical choice for communication between the central processor and printer
system. By using magnetic tape as a buffer, printing as
far as the central processor is concerned, is at magnetic
tape speeds; the central processor need never wait for
the printing device to accomplish its task.
To provide further versatility it was deemed advisable to allow the printer system to work directly with
the central processor for those applications requiring online operation.
As a final requirement we decided we would not want
to prepare special print tapes for off-line operation or
require central processor time for editing when on-line.


In either case our obvious aim was to reduce the amount
of data manipulated within the central processor. Printing from master tapes or records was a must.
This, in brief, is the application framework within
which we established the performance specifications of
our new printer system. The term "printer system" is
significant. In this case, terminology was dictated by a
desire to describe accurately the operation of the units,
independent of direct processor control.
GENERAL DESCRIPTION

The Burroughs 220 High-Speed Printer System (Fig.
1), is a transistorized, buffered, on-line/off-line subsystem with versatile editing capabilities controlled by a
plugboard. The system, which includes a Printer Control Unit and a Printing Unit, is designed to operate on-line with the Burroughs 220 Data Processor, or off-line
with one or two standard Burroughs 220 Magnetic Tape
Storage Units.

Control Unit
The Printer Control Unit (Fig. 2) houses an 1100-digit, random-access core storage used as a buffer, which accommodates up to 100 computer words (10 digits plus sign). The Control Unit also contains the
system's control circuitry, a 120-character print register,
a 120-position bit register, the transistor power supply,
and the plugboard.

Printing Unit
The Printing Unit contains a drum-type, high-speed
printer (Fig. 3), having 120 print positions, with 10
characters to the inch, and a total of 51 characters per
print position. Of these, 15 are special characters, including CR, OD, DR, +, and - symbols.
The Printing Unit (Fig. 4), also contains paper motion controls and the power required to drive the printing mechanism.
Vertical line spacing is fixed at six lines to the inch.
Printing can be positioned any place on a 16-inch form.
The printer can accommodate a maximum form width
of 20 inches for centered printing. The maximum form
length is 22 inches. The printer can print an original and
five carbon copies.
Complete control of paper skipping (Fig. 5) is accomplished by means of a seven-channel punched paper
tape loop, 1 inch wide and photoelectrically sensed.
The paper tape loop provides six predetermined paper
skip positions and a carriage exit position, which is used
primarily for page overflow or for logical decisions.
Thus, jumping to and printing header information on
the next form is easily programmed.


Fig. 1-Burroughs 220 High-Speed Printer System.
Fig. 4-Printer unit.



Fig. 5-Printing unit, side view.
Fig. 2-Control unit.

Fig. 3-Print drum.

Under plugboard control, page-skipping and single- or
double-line spacing before and after print allow easy
accommodation of preprinted forms. Paper moves at
the rate of 25 inches per second, with 9 msec start-stop
time for a single-line space time of 16 msec.
The printer may be operated at print drum speeds of
750, 900, 1500, and 1800 rpm. The effective printing
rate is dependent upon the information to be printed.
For alphanumeric information, the effective printing

rates are 624, 720, 1068, and 1225 lines per minute. If
only numeric information is to be printed the effective
printing rates automatically become 750, 900, 1500, and
1225 lines per minute.
These higher numeric print speeds are accomplished
by detecting in the control unit during the loading of
the print register that only numeric information is to
be printed. (See Fig. 6.) When this condition exists the
control unit terminates the print cycle at the end of the
numeric section of the print drum rather than requiring
a full drum revolution. The time required to traverse
the remainder of the print drum, the alphabetic section, is used by the printer system in spacing paper and
reloading the print register.
It can be seen that at the top drum speed of 1800 rpm
the printing rate is only 1225 lines per minute for numeric information, the same as for alphanumeric information. This is because the time required to load the
print register or space the paper is greater than the time
required to traverse the alpha section when the print
drum is rotating at 1800 rpm.
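The quoted alphanumeric rates can be roughly reproduced from the drum speed and the 16-msec single-line space time alone. The sketch below assumes each alphanumeric line costs one full drum revolution plus one space time; the small deviations from the published 624, 720, 1068, and 1225 lines per minute presumably reflect loading and synchronization details not given here.

    # Rough check of the alphanumeric printing rates, assuming one full
    # drum revolution plus the 16-msec single-line space time per line.
    SPACE_MS = 16.0
    for rpm in (750, 900, 1500, 1800):
        revolution_ms = 60000.0 / rpm
        lines_per_minute = 60000.0 / (revolution_ms + SPACE_MS)
        print(rpm, round(lines_per_minute))   # 625, 726, 1071, 1216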
Fig. 7 is a front view of the print unit showing horizontal paper positioning controls, vertical paper positioning controls, paper tension controls, the ribbon
mechanism, and the upper and lower sets of form tractors. Lateral positioning of printing and form size adjustment are accomplished with the control in the upper right of the picture.

Fig. 6-Character arrangement on print drum.

Fig. 7-Printing unit, front view.

Fig. 8-System flow diagram.
Controls are available for moving all tractors to the
right or left in synchronism for positioning of printing
on the page. In addition, controls are available for moving either the right or the left set of tractors individually
for accommodating form width. Paper tension between
the upper and lower sets of tractors is controlled by the
upper left control knob. By using two sets of tractors,
the upper and the lower, and positive mechanical detenting, paper creep is eliminated. Fine vertical adjustment within a line is accomplished with the lower left
control knob.
OPERATION

With some of the details out of the way, we can now
push a few buttons and see how the Burroughs 220
High-Speed Printer System works. The internal operation of the system is divided into three basic cycles
(Fig. 8):
1) Load cycle-During the load cycle, the core buffer is loaded from magnetic tape or from the Data Processor.
2) Scan cycle-The scan cycle is basically the transferring of selected information from the buffer to the print register. During the scan cycle, all editing, formatting, and selection is accomplished. The scan cycle is normally completed during paper spacing time.
3) Print cycle-During the print cycle the line of information is transferred from the print register to the paper. Since the core buffer is not used during the print cycle, the load and print cycles can occur simultaneously.

Load

When used off-line, the load cycle is initiated from
the plugboard. Information is read from one of two
magnetic tape units. The block length on tape is variable from 10 to 100 words and can be variable within a
run. The number of blocks read per load cycle is selected
from the plugboard, the only restriction being that no
more than 100 words can be loaded into the buffer at
any one time. If the capacity of the buffer is exceeded, a
buffer overflow alarm is available on the plugboard for
automatic corrective action if needed. Selection of the
storage area to which the information will be sent is
plugboard controlled. As a result, it is possible to have
in the buffer at one time any combination of information
from 2 tape units and the data processor.
When used on-line, loading of the buffer is controlled
by two commands from the data processor. The first
command is used to determine if the printer system is
ready to accept information. The second command loads
the buffer with a record from 1 to 100 words in length.
Once the record is loaded, the data processor is freed.

Scan
The scan cycle is so named because information to be
printed is not transmitted by plugboard wires but
rather by internal channels. The scan cycle includes the
transfer of selected buffer information through a translator to the print register for subsequent printing. All
editing of information is accomplished during this phase
of the printer operation. The plugboard wiring is used

for decision making, selection of starting positions of
fields, special character insertion, editing, and formatting functions.
During scan, the starting position of a field is selected;
the field then reads out sequentially to the print register,
with no restriction on field length, and continues until
ordered to a new starting location. After a change of address has been ordered the scan continues sequentially
from the new address. The addressing of the buffer is
controlled by a counter called the character address
counter, which changes the location when set to a new
value. Fig. 9 illustrates the operation during the transfer of one digit from the buffer to the print register. The
character address counter selects the digit to be transferred, out of 1100 possible settings. For each one of
these settings there is an exit hub on the board. These
exit hubs are the source of control pulses, to cause address selection, formatting and control. Other functions
which can be initiated by these pulses include zero suppression and the insertion of blanks, commas, decimal
points or dollar signs. A feature which helps reduce the
amount of information to be manipulated is the character emission of all 51 characters for printing of fixed
information-the date, for example.
When information is read out of the buffer, the digit
value is available on the plugboard and can be used for
controlling and testing purposes. Digits so used need
not be printed; for instance, decisions whether to print
a record or not can be based on the digital value of a key.
The scan cycle is automatically terminated when the
print register has been filled with 120 characters, and a
print cycle is automatically started. At this time an end
scan pulse is available for initiating a buffer load if desired.

Print
At the time the print cycle starts, the print register is
filled with a 120-character line. There is a counter synchronized with the rotation of the print drum which
tells at any instant the next character on the drum in
position to print. By comparing each position of the
print register with this counter the positions to be
printed are determined. At the start of the print cycle
this comparison process begins immediately (Fig. 10).
To illustrate, if the counter were at a value corresponding to R all R's in the print register would be
printed first. When the print drum has rotated through
one position and the counter advanced to S, all the S's
will be printed and so on. The print cycle will be completed in this case when the print drum has rotated back
to R. The printing actually occurs by timed firing of
print hammers. The 120-position bit register contains
the "yes" or "1:0" of whether the print hammer associated with a particular print position will fire at a specified character time. Loading the bit register is accomplished by means of the print register comparisons just
mentioned.
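The comparison process can be expressed compactly as a functional model. The sketch below is not the hardware logic: it simply walks a drum counter through an assumed character set and records, in drum order, every print position whose register contents match the character currently in printing position.

    # Functional model of the print cycle: as the drum counter steps
    # through the character set, each print position whose register
    # contents match the current character fires its hammer. The
    # character set and line are illustrative; the actual drum carries
    # 51 characters across 120 positions.
    CHARACTER_SET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ.,$*-"

    def print_cycle(print_register, start_char="R"):
        fired = []
        n = len(CHARACTER_SET)
        start = CHARACTER_SET.index(start_char)
        for step in range(n):                  # at most one drum revolution
            ch = CHARACTER_SET[(start + step) % n]
            for position, wanted in enumerate(print_register):
                if wanted == ch:
                    fired.append((ch, position))   # bit register says "yes"
        return fired

    print(print_cycle("RSR"))   # the R's fire first, then the S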

Fig. 9-Buffer readout.

Fig. 10-Print cycle.

If spacing is wired on the plugboard to occur after
printing, paper spacing will start immediately after the
print cycle. A start scan pulse is available on the board
to start a new scan cycle, assuming no load cycle is taking place. If a load cycle is not yet completed, the start
scan impulse will be held up until it is. However, a load
cycle will normally be completed during a print cycle,
except at the highest drum speeds.
SPECIAL FEATURES

This, in essence, is the Burroughs 220 High-Speed
Printer System. We will look next at some of the special
features of this system. Fig. 11 lists the functions available to complete the editing ability and enable printing
from master tapes; these are initiated by character address pulses.
The special features of the Burroughs 220 High-Speed
Printer System are described in the following paragraphs (Fig. 12).

Wiring by Exception
The control panel need only be wired when information is to be printed out in a different sequence than that
contained in the buffer. In this case, the character address pulse of the last digit of a field to be transferred is
wired to address the starting position of the next field.

1. Zero suppress
2. Check protecting asterisk insertion
3. Comma and decimal point insertion
4. Insertion of blanks
5. Delete
6. Special sign translation to +, -, CR, OD, DR
Fig. 11-Special functions.

1. Wiring by exception
2. Field selection
3. Exception or selective printing
4. Field interrogation
5. Multiple line printing
6. Retention of fixed information
7. Relative addressing
Fig. 12-Special features.

Fig. 13-Relative addressing.

Thus, field selection is accomplished. Since it is not
necessary to provide plugboard wires for the transmission of all positions of a field, the actual number of plugboard wires is reduced to a minimum.

Exception or Selective Printing
Transferring the digit value of the contents of a buffer location to the plugboard during transfer of information to the print register allows recognition of keys within the information. Thus, logical electronic elements cause or prevent printing. For example, when printing paychecks, the plugboard can be wired to print checks for only those employees with earnings, ignoring all employees with no earnings.

Field Interrogation
Logical decisions can also be made by comparing
buffer information to preset information in 10 rotary
switches. For example, in a statement preparation application, the date would be set in the switches. Only
those accounts scheduled for that date would have a
statement printed. All other accounts would be skipped.
Multiple Line Printing per Buffer Load
Because the buffer is actually a storage device, any
number of lines can be printed from one buffer load.
Repeat printing of any information is possible. Complete documents can be prepared with one reading of
the tape record.

Retention of Fixed Information
The ability to select the starting address during load
enables the retention of information in the buffer, for
example, page heading information. This feature is particularly useful when multipage documents are to be
printed.
Relative Addressing
(See Fig. 13.) This is a form of indexing register which
enables grouped records to be printed in the same format with wiring for only the first of the records. The

character address pulses are always governed by the
character address counter. However, the information is
available from the buffer position determined by the
sum of the setting of the character address counter and
the relative address register. Thus, by changing the
value of the relative address register, different information can be read out of the buffer with the same value
of the character address counter. This feature allows
side-by-side printing without duplication of wiring.
It is the relative address register which determines
the starting address during a load cycle. The character
address counter is always set to zero at the start of a
load cycle. As during the scan cycle, the sum of the character address counter and relative address register accesses the buffer.
CHECKING

Complete checking of all information transfer and
programmer control of error conditions are provided.
Fig. 14 shows the checking points and methods within
the system.
During load, each digit is checked for parity and invalid combinations. In addition, a check is made on the
number of digits in the tape record. If any error is detected, the tape is automatically reread and an error signal emitted from the board. If an error persists after two
retries, the system automatically stops, unless programmed to ignore this stop or to take other remedial
action.
Parity is checked during the transfer from the buffer
to the print register through the translator. Parity and
invalid characters are checked again in the translator,
and errors are indicated on the plugboard.
During printing, another parity check is performed
with errors again indicated on the plugboard. In addition to parity checking during printing, a synchronization check is made which insures that the print position
was fired at the correct time for a given character. This
print check feature (Fig. 15) works as follows.
A reluctance emitter in the printer advances a row counter. This counter determines the character to be read out. A home pulse from a second reluctance emitter is compared with the row counter when the latter's value is at zero. If the two are out of synchronization, the system automatically halts.

Fig. 14-System checking.

Fig. 15-Synchronization check.
This is the only checking feature which cannot be ignored; the print check alarm is not available on the plugboard.
All error checking alarms that are available on the
plugboard can be used to cause retries or brute-force
operation which can be flagged on the printed page.
Thus, in every case but one, the programmer, not the
machine, decides whether an operation is to be halted or
not. Oftentimes, the programmer will want to wire automatic restart procedures on the plugboard.

Because of the versatility of the Burroughs 220 High-Speed Printer System, all operations must be programmed or wired. The operator effectively provides
his own logical operations by wiring.
CONCLUSION

To summarize, the Burroughs 220 High-Speed
Printer System offers a maximum of editing versatility
with minimum plugboard wiring. More important, it
allows swift, simple, but complete rearrangement of buffer information and eliminates the necessity for complex and time-consuming data shifting within the computer or the preparation of special print tapes.
Because of this flexibility and power, the printing
problems of a wide range of applications can be solved
with ease.

The ACRE Computer-A Digital Computer for a Missile Checkout System

RICHARD I. TANAKA†

† Lockheed Missiles and Space Div., Palo Alto, Calif.

INTRODUCTION

THE effectiveness of a missile system is directly dependent upon the proper assembly and subsequent reliability of its various subsystems. A supporting checkout system which enables rapid, consistent, and thorough testing of subsystems is an essential item in insuring the over-all operational success of a complex missile.
This paper describes a digital computer which is used as the central controller in an automatic checkout system. The system itself is called ACRE, for Automatic Checkout and Readiness Equipment; the computer is
referred to as the ACRE computer. The ACRE computer is, essentially, a general-purpose, stored-program
digital computer; particular capabilities, however, have
been emphasized to enable efficient operation of the
checkout processes.
The computer and associated system are required to
perform functions which can conveniently be grouped
as follows:
1) Monitor key quantities which indicate the existence of conditions hazardous to the missile or to
associated personnel.
2) Perform detailed checkout on a newly manufactured missile system to inspect for proper operation or to diagnose possible causes of malfunction.


3) Perform tests on a standby missile system at periodic intervals, to verify tactical readiness. Again,
if a malfunction is detected, a diagnosis is required.
4) Execute, rapidly, the test sequence required prior
to firing. For possibly marginal systems, a quantitative measure of operational success probability
is desirable.
It is possible, of course, to meet the above requirements by utilizing manual methods or special-purpose
test devices. There are obvious disadvantages, however,
to both of these procedures. The described digital computer enables efficient operation in all of the modes described above and affords the following salient advantages.

Adaptability
The system can easily adapt to changes in the missile, in the test procedures, or in the criteria used for evaluating results. If required, the system can be used to test differing kinds of missiles.

Reproducibility of Test
Once a test sequence has been established, there are no unwanted variations in the test procedure thereafter. The operational advantages of a special-purpose checkout device are obtained.

Versatility
In the event that particular test results indicate a system malfunction, the computer program enables a wide variety of alternate procedures to be followed. Some factors for determining alternatives are:
1) the importance of the malfunctioning subsystem;
2) the extent of the malfunction;
3) the particular test mode in progress (i.e., a system in the process of standby check offers alternatives not available for a system which is being checked in preparation for firing).
Possible reactions to an indicated system malfunction might include a retest against less stringent test limits, halt of the test sequence, or automatic switching to standby system components.

Automatic Sequencing
By using a stored program which includes the procedures to be followed if a malfunction is detected, the test sequences become completely automatic in their execution. This feature is particularly valuable during countdown; the test times are accurately known, and human operator errors (which could easily occur during the stress of tactical countdown) are eliminated.

Self-Diagnosis Possibilities
As a corollary to the above, the computer also can be programmed to execute routines intended to monitor the operation of the checkout system itself. The routines can be commanded to occur at periodic intervals, or may be the first step following an indicated system malfunction (to verify that the missile system, not the checkout system, is at fault).

GENERAL SYSTEM OPERATION

Fig. 1 illustrates the functional requirements for a general checkout system. The following operations are required.

Select and Control Test Stimulus
If a signal source is required as input to a subsystem or system, the appropriate signal generators are selected and adjusted to provide the proper levels.

Select Input to Missile
The test stimulus is then channeled to the proper input terminal of the system under test.

Obtain and Convert Results
The resulting output from the system (the test value) is converted, if necessary, from analog to digital form.

Perform Comparison
The test value is compared against previously established test limits.

Evaluation of Test
If the test value falls within the test limits, a "Go" result is obtained; if the test value falls outside the allowable limits, a "No-Go" result is obtained. These two possibilities determine the choice between two paths: a successful test or "Go" result causes the normal test sequence to continue; a "No-Go" result causes execution of evaluation modes as previously described.

Documentation
For later diagnostic purposes, it is extremely important that a detailed and accurate record be kept. Furthermore, the documentation process should be as completely automatic as possible.

Display
For supervisory purposes, a display panel of suitable information (test number, results, etc.) is required.
GENERAL CHARACTERISTICS OF THE
ACRE COMPUTER
The ACRE computer is designed to meet the functional requirements outlined above; further, since the
checkout system must operate under tactical as well as
laboratory conditions, requirements of reliability, maintainability, and environmental suitability are also involved. In a very general sense, the machine satisfies the
latter requirements with conceptually straightforward
logical organization and carefully designed, conservatively operated circuits.

Fig. 1-Generalized checkout system.

Physical Characteristics
The specifications on physical volume enable an overall assembly oriented toward accessibility and ease of
maintenance. The computer circuits, on plug-in etched
card assemblies 6 X 6 inches in size, are arranged in horizontal chassis in groups of 25. The chassis are wired into assemblies termed "pages"; two pages, forming a "book," comprise the computer.
The front of a page is shown in Fig. 2; the interconnection wiring is on the back of each page.
Fig. 3 illustrates the flip-flop card, which contains
three transistorized flip-flop circuits. Diode logic gates
are fabricated on similar cards.
The computer requires approximately 40 flip-flops,
and operates at 100-kc clock rate. Physical dimensions
are 20 X 20 X 56 inches high.
To aid in the maintenance of the computer, a special
maintenance panel is attached, which includes: 1) a
tester for the various circuit cards; 2) a memory simulator which allows static test of the computer with the
magnetic memory disconnected; 3) controls which allow single clock operation and manual setting of flipflops; 4) indicators attached to the individual flip-flops;
and 5) voltage level switches for marginal checking.
Depending upon external system requirements (i.e.,
tie-in with missile subsystems, various converters, etc.),
as many as 30 additional flip-flops may be required.
This quantity varies directly with the desired system
performance requirements.

Memory
The ACRE computer utilizes a rotating magnetic
memory with an addressable capacity of 3904 words,
24 bits in length. These are organized into 60 channels,
each with 64 words, and four faster access channels of
16 words each.
The memory also contains a nonaddressable 64-word
channel which is used for automatic storage and output
tape buffering for the later described documentation
capability. Further, the clock pulses, sector reference
information, and the various one-word registers are
supplied by the magnetic memory.

Fig. 2-ACRE book: front view.

Fig. 3-ACRE flip-flop board.

The six 24-bit registers on the memory are: 1) the A
register, the main arithmetic register in the machine;
2) the D register, an auxiliary arithmetic register; 3) the
B register (Index register), used both for address modification and for tallying purposes; 4) the Order Counter,
which specifies the address location of the next order to
be executed; 5) the Address register, which stores the address portion of an order obtained from memory; and 6)
the Documentation or X register, which receives information either from selected arithmetic registers or
from an external keyboard, for eventual storage on
magnetic tape.


Logical Organization
An Order Counter is used to specify the normal
program sequence; the order structure is single address.
An order word contains a 5-bit command code, a 12-bit
address (designated as m), and single bits each for
parity, B-reference, documentation, and spacer.
Numerical information is represented as sign and
magnitude (fractional binary), with single bits for parity
and spacer.
The machine has a total of 25 commands; these can
be categorized loosely as 5 arithmetic commands, 9
internal transfer commands, 6 control commands (jump
to location m for various conditions), and 5 special
commands which relate directly to the checkout requirements. (The latter 5 are included in the later description of special commands.)
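The order-word format can be made concrete with a packing sketch. The field widths below follow the text (5-bit command, 12-bit address, four single bits), but the bit positions themselves are an assumption; the paper does not give the actual assignment.

    # Sketch of packing and unpacking an ACRE-style order word: 5-bit
    # command code, 12-bit address m, and single bits for parity,
    # B-reference, documentation, and spacer. Bit positions are assumed.
    def pack_order(command, address, parity=0, b_ref=0, doc=0, spacer=0):
        assert 0 <= command < 32 and 0 <= address < 4096
        word = (command << 12) | address
        for bit in (parity, b_ref, doc, spacer):
            word = (word << 1) | (bit & 1)
        return word

    def unpack_order(word):
        spacer = word & 1
        doc = (word >> 1) & 1
        b_ref = (word >> 2) & 1
        parity = (word >> 3) & 1
        address = (word >> 4) & 0xFFF
        command = (word >> 16) & 0x1F
        return command, address, parity, b_ref, doc, spacer

    word = pack_order(command=9, address=668, b_ref=1)
    print(unpack_order(word))   # (9, 668, 0, 1, 0, 0)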
SPECIAL COMPUTER FEATURES

Miscellaneous features, which define some of the
capabilities of the ACRE computer, are described below.

Parity
To detect malfunctions in the transfer and retention
of information, word contents undergo automatic
parity check during transfer. If a word is modified by
arithmetic processes, a new parity bit is generated and
inserted in the word. Parity error causes the computer
to halt; the operator is notified by a suitable alarm
indicator.

Switching Matrix
The ACRE system incorporates a switching matrix
which is directly controllable by the computer. At
present, a relay switching matrix is used, since the speed
of the relay network is sufficient to meet existing system requirements. (Also, relays enable convenient
handling of a wide range and class of test variables without elaborate preprocessing.) If desired, a solid-state
switching network can easily be substituted.

B Register
As previously mentioned, an Index register is provided for automatic address modification and for tally
uses. A one-bit in the B-reference position of an order
word causes the contents of B to be added to the address
portion of the order prior to execution.
For tally purposes, commands which increase or decrease the contents of B are provided. The decrease B
command allows branching on the sign of B. The logic
allows this command to be used either for branching
each time until the final traversal of a loop, or for the
opposite case of not branching until the final traversal.
Because of the similarity of many of the test sequences, the B register enables a significant saving in
storage required. Since, for tactical operation, the program and all associated parameters are stored permanently on the memory, the inclusion of a B register (at a

cost of two flip-flops, associated logic, one read and one
write amplifier) contributes much more than programming convenience alone.
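The effect of the B-reference bit and the decrease-B tally can be modeled in a few lines; the memory contents and the loop body in this sketch are invented for illustration.

    # Model of B-register use: with the B-reference bit set, the
    # contents of B are added to the order's address before execution;
    # a decrease-B command then branches on the sign of B, giving a
    # tally loop. Memory contents are illustrative.
    memory = {100 + i: i * i for i in range(5)}   # block of test parameters

    def effective_address(m, b, b_ref_bit):
        return m + b if b_ref_bit else m

    B = 4                         # tally for five items, indices 4 down to 0
    while True:
        value = memory[effective_address(100, B, b_ref_bit=1)]
        print(value)              # process one item per traversal
        B -= 1                    # "decrease B" command
        if B < 0:                 # branch on the sign of B
            break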

Documentation
The documentation feature enables automatic recording of all information pertinent to the checkout
processes. Normally, the information to be stored on
the output tape is specified by the program and hence
is documented automatically; however, to enable the
operator to insert additional information, input from a
keyboard can be documented during intervals when the
computer is idling.
During computation, a one-bit in the documentation
code position of an order word causes information appropriate to the command to be documented (two
examples: the sum for an addition command; the word
transferred, for any of the various transfer commands).
This information progresses from the one-word X
register into the 64-word special storage channel. When
the channel is filled, the entire channel contents are
transferred automatically to an output magnetic tape.
The output tape recorder uses magnetic tape 1 inch
wide, on 8-inch reels, each with a total capacity of 5
million words of storage (each word 24 bits long). Cross-channel and longitudinal parity are automatically
generated and recorded.

Display
The progress of each test sequence is indicated by a
panel of display lights. The display lights are addressable through the switching matrix; the operator receives
a direct indication of the sequence in progress, and of
test results obtained.
INPUT-OUTPUT

Program Test Loader
The magnetic memory is filled by the Program Test
Loader, a demountable input device using seven-channel,
punched paper tape. The Loader uses channel parity
and cross-channel parity to check information pickup;
as a further safeguard, the memory contents are automatically read back and verified.
For laboratory use, the Loader can be used as a
normal input reader.
For tactical operation (where the test sequences presumably have been generated, tested, and then stored
in their entirety in the memory), the Loader is disconnected. Disconnecting the Loader insures that no inad verten t modification of memory con ten ts can occur.
The stored routines are so designed that the operator
is required only to monitor the results, or, in a few
instances, to initiate sequences by push button control.

External Controls
The operator is provided with a minimal selection of
controls. These include Start, Stop, the documentation

keyboard, and Test Selectors. The latter are individual
buttons which set the Order Counter to preselected configurations, enabling convenient manual selection of
particular tests.
For maintenance purposes, controls affecting physical
conditions, e.g., single clock, marginal checking voltages,
drum simulator levels, flip-flop set and reset switches,
are provided. Further, a Function Switch enables selection among three modes of operation: Normal Operation, Single-Order Execute, and Breakpoint Operation
(the last-named causes halt after execution of each
command accompanied by a breakpoint code-bit).
Output Display
A numerical display device can selectively display
the contents of the Order Counter, the A register, or
the Documentation Register. The Order Counter display consists of four octal digits; the other two registers
appear as sign and four decimal digits.
As previously mentioned, test indicators, primarily
controlled by the switch matrix, are provided. These
indicate, for example, status of various missiles, test in
progress, subsystem under test, test results, etc.
DISCUSSION OF COMMANDS

The commands available in the ACRE computer have
been mentioned in the description of logical organization. Seven of the 25 commands directly related to the
checkout process are discussed in detail below.


Conditional Adjust Test Equipment (caj)
An item of test equipment, designated by the Select
Register, receives the 12 bits of the address m. The bits
are made available only after any previously commanded switching operation has been completed.
The selected equipment could be a signal generator
whose output is adjusted by the 12 bits; it also could
be an analog-digital converter, whose scale adjustments
or whose turn-on signal is derived from the 12 bits.
The switch interlock insures that switch connections
pertinent to the operation of the selected equipment
have been made before the equipment is activated.
Unconditional Adjust Test Equipment (uaj)
This command is similar to "caj" above, except that
no interlocks with the switch operation are provided.
This command is used when the desired connections
are known to be made. The unconditional aspect insures
that a simultaneous switching operation related to
another test cannot inhibit an operation initiated by
this command.
Bring Test Equipment Output (bte)
The output from an item of test equipment, designated by the Select Register, is read into D and also
into position m. This command is used, for example, to
obtain the output of an analog-digital converter for
subsequent comparison against programmed limits.

Compare (cpr)
The contents of the A register are compared with the contents of D; if A is greater than D, the computer will transfer control to the order found in memory position m. Both registers remain unchanged. The comparison process is basic to the operation of the checkout system, and is required frequently. Hence, although the comparison, and transfer of control, can be performed by a subroutine, it is convenient to have the process available as a command.

Switch (swc)
Twenty bits in position m are transferred into the Switching Address Register. At the end of the transfer process, a signal to initiate switching is generated. The switching matrix proceeds to establish the connections specified by the 20 bits; the computer is free to proceed with the program. An interlock is provided, so that if switching is still in progress when a new switch operation is commanded, the computer will idle until the previous operation has been completed.

Select (slt)
The four least significant digits in the address code m are used to set the four flip-flops of the Select Register. The register contents then determine which item of external equipment is to be affected by subsequent commands.

Halt (hlt)
The Halt command causes the computer to idle. Simultaneously with the Halt, the Select Register receives the four least digits of the address code m. The machine will start automatically when one of the 16 signal sources, selected by the contents of the Select Register, turns on. This allows the checkout system to wait for the establishment of various conditions in the missile before proceeding with a test.

CONCLUSIONS

The ACRE system has demonstrated the feasibility of assigning to a digital device of the stored program class all of the central control requirements for a missile checkout system. The advantages inherent to a stored program have contributed significantly to the derivation and application of test sequences at all levels of missile operation, from manufacturing to field readiness to prelaunch countdown. The tests, in turn, are of paramount importance in insuring the highest possible degree of successful missile operation.

ACKNOWLEDGMENT

The writer wishes to acknowledge contributions by B. D. Leitner, P. W. Cheney, and L. D. Healy to the computer logical design. Circuit design and computer assembly were the responsibility of various members of the Lockheed Computer Research Department.

IBM 7070 Data Processing System

J. SVIGALS†

† Regional System Dept., IBM Corp., Los Angeles, Calif.

IN 1953, International Business Machines Corporation (IBM) introduced its first production model
of a large-scale computer-the IBM 701 Data
Processing Machine. Since that time, the company has
engineered and delivered a number of major data processing systems, each designed to meet certain business
and scientific requirements.
These systems have included the 305 Ramac, 650
Basic, 650 Tape, 650 Ramac, 702, 704, 705, 705 III, and
709. In addition to these computer systems, the completely transistorized 608 Calculator has been in productive operation since 1957.
Each of these systems has extended and refined the
growing body of practical knowledge of both the versatility and limitations of data processing equipment. The
result has been an increase in the total productive time
of the machine installations and more efficient use of
these machine hours.
More important, the application of these systems to
actual working problems has produced thousands of
specially trained personnel in the business world as well
as within IBM itself.
The new IBM 7070 Data Processing System is a
product of the ideas and expressed needs of this experienced group. It is designed to serve as a partner of the
man-machine-methods team of modern business.
The purpose of this paper is to describe the general
organization of the IBM 7070. This description will include a discussion of the special features of the system
and its physical and electrical characteristics.

GENERAL ORGANIZATION

The IBM 7070 Data Processing System combines advanced engineering design, based on high-speed, solidstate components, with an equally modern machine organization. The system is capable of efficient solution of
both commercial and scientific applications. The IBM
7070 Data Processing System spans a wide range of
capacities and features. These include punch card, magnetic tape, and magnetic tape/Ramac configurations.
The punch card IBM 7070 system consists of the
central computer, one to three punch card readers, and
one to three card punches or printers. The card readers
operate at a speed of 500 cards a minute; the card
punches operate at a speed of 250 cards a minute; and
the on-line printers operate at a speed of 150 lines per
minute. In the card system configuration, all card input
and output devices can operate completely simultaneously with each other and with internal computation.
To achieve a tape system configuration of the IBM
7070 Data Processing System, two tape channels are added to the computer configuration. Each tape channel is capable of operation simultaneously with the second tape channel and internal computer operation. Each tape channel may have from one to six magnetic tape units connected. These may be any combination of regular-speed (41,667 characters per second) or high-speed (62,500 characters per second) magnetic tape units. This allows for a total of 12 magnetic tape units, any two of which will operate simultaneously with computation.
A tape Ramac IBM 7070 configuration is obtained by the addition of one to four Ramac file units to the system. The Ramac units are interconnected so that they may be operated through either tape channel. This allows completely simultaneous operation of two Ramac read and/or write operations in a manner equivalent to that of the magnetic tape units. Each Ramac file provides three access arms. Experience with the IBM 650 Ramac systems indicates that access to these files can be achieved effectively in zero time. This is accomplished by seeking ahead for the next record while a previous record is processed.

Fig. 1-The IBM 7070 Data Processing System.

A typical IBM 7070 system is shown in Fig. 1. Included are the following units:
Console: This is a separate unit which includes the
console typewriter and a small operator's panel. The
console unit is designed to simplify and expedite the
operator's task and to insure maximum productive machine time. The typewriter is the principal operator's
tool. It replaces many of the indicator lights and control
switches of previous data-processing machines. Operator
error is minimized by the computer's ability to audit
operator commands through a stored program and by a
printed record of all data entered and emitted through
the console typewriter.
Magnetic Tape Units: The magnetic tape units are
seen to the right, rear of the picture. Two types of units
are available. The 729 II reads or writes tapes at a rate of 41,667 characters per second. The Model IV reads or
writes tapes at a rate of 62,500 characters per second.
Card Reader: Immediately in front of and to the left
of the magnetic tape units is a card reader. This unit

operates at a rate of 500 cards per minute with format
control by means of a control panel mounted on the
reader. Data from a full 80-column punched card may
be transferred into the computer simultaneously with
internal computer operations. The card reader is
equipped with a front attended tray feeding hopper and
stacker. As many as three card readers can be utilized
for card input. Selected cards may be offset in the
stacker.
Card Punch: To the right of the card reader is the
card punch. This unit operates at a punching speed of
250 cards per minute with format control by control
panel wiring. Front attended hopper and stacker are
used. Selected cards may be offset in the stacker. As
many as three card punches can be utilized for card output.
Printer: The printer is located to the right front of
the picture. This unit operates at a speed of 150 lines
per minute with format control provided by the control
panel. The printed line output consists of a span of 120
characters spaced ten to the inch.
Central Processing Unit: This unit is seen to the left
rear. It contains most of the system electronics and consists of the following elements:
1) Arithmetic registers and core memory,
2) Indexing hardware,
3) Space for optional floating decimal arithmetic
(with automatic double precision operation),
4) Magnetic core memory of five or ten thousand
words,
5) Two data channels, code translators, data registers
and controls for magnetic tape and Ramac storage
units,
6) Buffers and controls for card input, card output,
and the printers.

Disk Storage Units: To the left of the picture are the
disk storage units. These units consist of a stack of large
disks which magnetically hold up to 12,000,000 digits
each. Information is read from and written onto these
disks by an access mechanism containing a magnetic recording head. This mechanism moves rapidly to any
disk in the file. Three access mechanisms for each storage unit are provided to minimize access time by overlapping the "seek" operation. Up to four disk storage
units can be utilized by the 7070 system, providing a
total storage capacity of up to 48,000,000 digits of rapid,
random access memory. These units are attached to the
system by the same two channels that connect the magnetic tape units to the system. Each of these channels is
linked to a disk storage unit by program control, and
allows any combination of simultaneous read/write/
compute.
Manual Inquiry Station: The manual inquiry stations
are shown immediately in front of the Ramac disk storage units. These units permit fast interrogation of data
stored in the computer core storage, in the disk storage
unit, or on magnetic tape. The station consists of a special typewriter equipped with a solenoid-driven keyboard and transmitting controls. A 16-channel punched mylar tape provides format control. Up to 10 manual inquiry stations can be attached to the system, through two buffers. The stations can be connected to the system by cable up to 2500 feet from the central processing unit.

Fig. 2-Two-out-of-five code.

Fig. 3-Word format.
MACHINE CHARACTERISTICS

Bit-Code Structure
Information is represented by a two-out-of-five code.
The total number of possible combinations is 10, one
for each numerical digit. As shown in Fig. 2 the bit
positions are designated 0, 1, 2, 3, and 6. Each of the
digits 1 to 9 is made up of two bits whose sum equals the
number in question. Zero is designated by the one-two
combination. Alphabetic information is represented by
a dual-digit code.
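The rule can be made concrete with a short sketch (ours, in Python, rather than a description of IBM's circuit logic; the function names are invented for the illustration):

# Illustrative sketch of the 7070 two-out-of-five code (bit positions
# 0, 1, 2, 3, and 6). Each digit 1-9 is the pair of positions whose
# sum equals the digit; zero is the one-two combination.
from itertools import combinations

BIT_POSITIONS = (0, 1, 2, 3, 6)
ZERO_PAIR = (1, 2)

def encode_digit(d):
    """Return the two bit positions that represent decimal digit d."""
    if d == 0:
        return ZERO_PAIR
    for pair in combinations(BIT_POSITIONS, 2):
        if pair != ZERO_PAIR and sum(pair) == d:
            return pair
    raise ValueError("not a decimal digit: %r" % (d,))

def is_valid(pair):
    """Validity test: exactly two 'one' bits, both in legal positions."""
    return len(set(pair)) == 2 and set(pair) <= set(BIT_POSITIONS)

assert encode_digit(7) == (1, 6) and is_valid(encode_digit(0))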

Data Storage
A word in the machine code is composed of 55 bits.
Fig. 3 shows a word which consists of 10 digits plus sign,
or five alphabetic characters with an alphabetic sign. In
each case, automatic recognition of the sign position
indicates to the computer whether the information is
alphanumeric or numeric and all operations are performed automatically according to the sign.

Validity Checking
All information transfers to and from storage within
the 7070 computer are tested to insure that each digit
has two "one" bits, no more and no less, for each five-bit position. Because the checking is for an exact number of "one" bits for every digit, this type of coding
structure affords a complete and consistent self-checking of data flow.

Data Transmission
An important feature of the 7070 is parallel transmission of data to and from core storage. An entire word,
including sign, is moved all at once. A channel for
parallel bit transmission consists of 55 lines, one for each bit in each of the 10 digits and the sign position. This enables a word to be moved to or from core storage in 6 μsec. Data are transmitted between the core registers within the computer at a 4-μsec rate. All
information is transmitted in parallel with the exception
of data transmitted to and from the magnetic tape units.
Parallel transmission is represented by the parallel
lines in Fig. 4. The accumulators all have serial paths
connecting them to the core adder. There also is a serial
data path to and from the input/output synchronizers.


Arithmetic Operations
The arithmetic unit of the IBM 7070 contains three accumulators, each with a capacity of 10 digits and sign. In addition to the accumulators, the auxiliary register and the arithmetic register are each capable of containing ten digits and sign. The interconnection of these units is shown in Fig. 4.


Addition
An amount to be added to an accumulator is brought to the arithmetic register. The number in the accumulator is sent to the auxiliary register. The content of these two registers is added, one digit at a time, and as the result is developed by the adder, it is brought to the arithmetic register. At the conclusion of the operation, the result is sent from the arithmetic register to the designated accumulator.
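A minimal sketch of that data path, assuming unsigned ten-digit operands (the register names follow the text; the decimal modeling and function name are ours):

# Sketch of the add data path: the addend goes to the arithmetic
# register, the accumulator contents to the auxiliary register, and
# the adder combines them one digit at a time, low order first,
# developing the result back into the arithmetic register.
def add_to_accumulator(accumulator, addend, width=10):
    arithmetic_reg = addend % 10**width
    auxiliary_reg = accumulator % 10**width
    result_digits, carry = [], 0
    for _ in range(width):                  # one digit position per step
        s = arithmetic_reg % 10 + auxiliary_reg % 10 + carry
        result_digits.append(s % 10)
        carry = s // 10
        arithmetic_reg //= 10
        auxiliary_reg //= 10
    # at the conclusion, the result returns to the designated accumulator
    return sum(d * 10**i for i, d in enumerate(result_digits))

assert add_to_accumulator(1234, 876) == 2110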

Arithmetic Timing
The duration of an arithmetic operation is determined
by the size, in digits, of the factors. In an add instruction, for example, it is determined by the size of the field
or by the significant digits in the accumulator, whichever is greater. Arithmetic operations take only the
amount of time needed to perform the actual computation; there is no time wasted in accumulating a full ten-digit number for each arithmetic operation. The addition times for fixed-point operation are shown in Table I. These figures include access time for the instruction and an operand.
TABLE I

Length of Operand          Execution Time
1, 2, or 3 digits          48 μsec
4, 5, or 6 digits          60 μsec
7, 8, 9, or 10 digits      72 μsec

Fig. 4-IBM 7070 data-flow schematic.

Instruction Format
Each instruction in the program consists of 10 digits and sign. The digit positions are numbered 0 through 9 (left to right) as shown in Fig. 5. For most operations these digits are utilized as follows:

Sign and positions 0-1:  operation code
Positions 2-3:           indexing word
Positions 4-5:           control digits
Positions 6-9:           address
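The layout can be fixed with a short sketch (ours, treating the word as a sign character and a string of ten digits; names are invented for the example):

# Illustrative decode of a 7070 instruction word: sign and positions
# 0-1 form the operation code, positions 2-3 the indexing word,
# positions 4-5 the control digits, and positions 6-9 the address.
def decode(sign, digits):
    assert sign in "+-" and len(digits) == 10 and digits.isdigit()
    return {
        "operation": sign + digits[0:2],   # up to 200 operation codes
        "index_word": digits[2:4],         # 00 means no indexing
        "control": digits[4:6],            # e.g., field definition
        "address": digits[6:10],
    }

# decode("+", "1502061234") yields operation "+15", index word "02",
# control digits "06", and address "1234".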

Operation Code: The sign and first two digit positions
provide for a maximum of 200 different operation codes
of which 120 are currently used. In addition, some of the
operation codes have multiple functions. These are
accomplished through different values which are placed
in digit positions four and five to further define the operation. For example, in a card operation, position four
specifies a particular input or output unit. Position five
defines the specific card operation such as read, punch,
or print.

Fig. 5-Instruction format.

Fig. 6-Magnetic tape characteristics:

                                    Model II                Model IV
Coding                              Even number out of 7    Even number out of 7
                                    (C, B, A, 8, 4, 2, 1)   (C, B, A, 8, 4, 2, 1)
Automatic validity check from
  tape on writing operation?        Yes                     Yes
Character density                   200-555 characters      200-555 characters
                                    per inch                per inch
Passing speed                       75 inches per second    approx. 112.5 inches
                                                            per second
Character rate (maximum)            41,667 alphanumerical   approx. 62,500 alpha-
                                    characters per second   numerical characters
                                                            per second

Index Word: Positions 2 and 3 of the instruction
specify the indexing word to be used. There are 99 index
words in the magnetic core storage, each of which contains a 10-digit number with sign. They are stored in
memory locations 0001-0099. The indexing word portion of a program step determines which of these 99
index registers will be applied; 00 means no indexing.
Control: In all of the arithmetic instructions, any portion of a word can be processed as easily as the entire
word. Positions 4 and 5 of an instruction determine the
part of the word that will be used. The digit in position
4 denotes the left end of the field. The digit in position 5
specifies the right end of the field. This is called "field
definition." The field definition feature means that several fields with like signs can be stored in a single word
with no inconvenience to the programmer in processing
an individual field. It should be noted that in all arithmetic operations using field definition, information extracted from a word is shifted automatically to the right
when placed into the designated accumulator. In storage operations, a number of digits equal to that specified
by the field definition digits is extracted automatically
from the right-hand portion of the accumulator, shifted
left to the designated location of the word, and then inserted in the word without disturbing the remaining
digits. Field definition operations occur without any
additional execution time.
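The effect can be sketched on digit strings (our illustration; the function names and the sample word are invented for the example):

# Field definition sketch: the digit in position 4 names the left end
# of the field, the digit in position 5 the right end.
def extract_field(word, left, right):
    """Bring word[left..right] into an accumulator, shifted right."""
    return word[left:right + 1].zfill(10)   # right-aligned, zero-filled

def store_field(word, accumulator, left, right):
    """Insert the low-order accumulator digits into word[left..right]
    without disturbing the remaining digits."""
    n = right - left + 1
    return word[:left] + accumulator[-n:] + word[right + 1:]

word = "7418529630"
assert extract_field(word, 2, 5) == "0000001852"
assert store_field(word, "0000009999", 2, 5) == "7499999630"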
Address: The address portion of an instruction, positions 6-9, may refer to the storage location of the data
or the location of the next instruction.
Indexing: As stated previously, memory locations
0001-0099 also may be used as index registers. When a
word is used as an index register, the digits have the following value:
Sign position: This specifies whether the actual value
of the indexing portion shall be added to or subtracted from an address during an index operation.
Digits 0-1: Not used.
Digits 2, 3, 4, 5: This is the indexing portion of the
index word. Together with the sign of the index

word, this portion is added to the data address of
the instruction. All instructions are indexable.
Digits 6, 7, 8, 9: This is the fixed portion of the index
word and may be used by the programmer to store
constants, decrements, increments, or limits.
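A minimal sketch of the address arithmetic, assuming the conventions just listed (names are ours):

# Indexing sketch: digits 2-5 of the index word, together with its
# sign, are added to the data address; digits 6-9 are the fixed
# portion left to the programmer.
def effective_address(data_address, index_sign, index_digits):
    indexing_portion = int(index_digits[2:6])
    if index_sign == "-":
        return data_address - indexing_portion
    return data_address + indexing_portion

# an instruction addressing 1234, indexed by +0050:
assert effective_address(1234, "+", "0000500000") == 1284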

Magnetic Tape Characteristics: The IBM 7070 DataProcessing System provides two types of magnetic tape
units. These are the IBM 729 Model II or the 729 Model
IV. Data from magnetic tape to the system are read into
magnetic-core storage, and a record is written on tape
from the core unit. Any group of locations in core storage can be used for these purposes. The tape control unit
provides two channels, each connecting as many as six
tape units with the main system. The total of 12 magnetic tape units can be used in any combination. The
comparative characteristics of the two tape unit models
are shown in Fig. 6. The primary difference between
them is the greater density of bit storage and passing
speed of Model IV. Both units provide high-speed rewind at a maximum speed of 500 inches per second.
Successive records on tape are separated by a blank
space called the interrecord gap. Every tape-read instruction causes an entire record to be read: the reading is stopped by the interrecord gap. The size of a tape
record has no limitation except the capacity of core
storage used in transferring the information to or from
the tape. When reading from the tape, even this restriction is not rigid if data from only a portion of the tape
record are needed. The program defines the number of
words and locations that can be read into core storage.
Any information from the tape in excess of that amount
is not accepted by the machine.
The coding structure used on magnetic tape is a seven-bit alphanumerical code. This code is identical to that
used by other IBM data-processing tape systems. Every
character must have an even number of bits, and all are
tested for this in every tape read or write operation. In
addition, there is a horizontal check of each record. As
the record is written on tape, a horizontal check character is created to make the number of bits even in each
of the seven channels. The check character itself must
have an even number of "one" bits to make the over-all
total of bits even.
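Both checks are even-parity tests, as this sketch suggests (our illustration; characters are modeled as 7-bit integers over the C, B, A, 8, 4, 2, 1 channels):

# Vertical check: every character must carry an even number of
# 'one' bits. Horizontal check: a final check character makes the
# bit count in each of the seven channels even.
def has_even_parity(char7):
    return bin(char7).count("1") % 2 == 0

def horizontal_check_character(record):
    check = 0
    for ch in record:
        check ^= ch            # XOR leaves every channel's count even
    return check

record = [0b0110011, 0b1010101, 0b0000011]
assert all(has_even_parity(ch) for ch in record)
check = horizontal_check_character(record)
for channel in range(7):
    bits = sum((ch >> channel) & 1 for ch in record + [check])
    assert bits % 2 == 0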
The IBM 729 tape units have an additional checking
feature on tape-writing operations. The reading head
reads the tape just after it is written. This function is
illustrated in Fig. 7. This check is performed automatically and provides immediate verification of the actual
tape record.
Fig. 7-Two-gap read-write head.

NEW MACHINE CHARACTERISTICS

Record Definition Words
An important new concept introduced by the IBM
7070 Data-Processing System is that of record definition
words. These words define the first and last addresses of
a block of data stored in core storage. One of these words
is placed in storage for each block. The words may be
used singly or in tables of record definition words,
as shown in Fig. 8. In each case, the last word in the
table is signified by a minus sign. Record definition
words are used for all data movement of more than one
word at a time. This includes disk storage read and
write, core-to-core block transmissions, movement of
input data from the input buffers and output data to the
output buffers, inquiry and reply information, and
others.
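A record definition word can be pictured as a starting address, an ending address, and a sign; the sketch below (ours) models a small table, borrowing three of the address pairs of Fig. 8:

# A table of record definition words: each entry names one block of
# core storage; the minus sign on the last entry (modeled here as a
# boolean flag) marks the end of the table.
from collections import namedtuple

RecordDefinitionWord = namedtuple("RecordDefinitionWord",
                                  ["start", "end", "is_last"])

rdw_table = [
    RecordDefinitionWord(2371, 2380, False),
    RecordDefinitionWord(4175, 4194, False),
    RecordDefinitionWord(4853, 4877, True),   # minus sign: last word
]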

Scatter Read/Gather Write
A powerful programming tool in 7070 operations is
scatter read/gather write. This is illustrated in Fig. 9.
A single record read from tape can be divided into as
many parts as desired by the programmer. These parts
are distributed to the different blocks of core storage as
the tape is read. The figure shows how an inventory
record is separated by field and category during the
tape reading procedure. This operation is controlled by
the table of record definition words shown in Fig. 8.
This feature applies to writing on tape as well as reading
from it. It enables the program to gather data from various blocks and automatically assemble them into one
tape record.
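The scatter-read half of the feature can be roughed out as follows (our sketch; the record definition words are modeled as start, end, is_last triples):

# Scatter read: one tape record is split across the core-storage
# blocks named by the record definition words, filling each block in
# turn as the tape is read.
def scatter_read(tape_record, rdw_table, core):
    """core is a dictionary mapping addresses to words."""
    words = iter(tape_record)
    for start, end, is_last in rdw_table:
        for address in range(start, end + 1):
            core[address] = next(words, 0)   # 0-fill if the record ends
        if is_last:
            break
    return core

core = scatter_read(list(range(100, 135)),
                    [(3000, 3009, False), (5000, 5024, True)], {})
assert core[3000] == 100 and core[5024] == 134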

Record Mark Words
Words containing the record mark character give an additional flexibility to the scatter read/gather write feature. Detection of the record mark word automatically denotes the end of a block in a scatter read/gather write operation. In scatter read, a tape record will not continue to fill the storage block if a record mark word is read into the storage area. Instead, it goes to the next block and begins to fill it with the characters immediately following the record mark word. In tape writing, a record mark word in core storage will signal the system to stop moving data from that block and start sending from the next block. The tape read or write instruction determines whether the record mark word will be operative. If not, a record mark word is treated as a normal alphabetic word.

Fig. 8-Record definition words (a table at core locations 3211-3221; each word holds a starting address and an ending address, and the last word in the table carries a minus sign).

Zero Elimination
When a numeric word in storage is written on tape, as
many as five high-order zeros can be automatically
eliminated. The sign of a word is combined with its unit
position digit. When the tape is read, the presence of a
sign combined with a digit indicates the unit position of
a word. The core storage word will be filled automatically with zeros in the high-order positions to replace
those eliminated during the write operation. The functions of record mark word and zero elimination are completely automatic and require no additional execution
time other than the time normally required to read and
write magnetic tape at full tape rate. These functions
result in a variable word and variable record size on
magnetic tape.
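The round trip can be sketched on digit strings (our illustration; the real tape format carries the sign within the units-digit character rather than as a separate symbol):

# Zero elimination: on writing, up to five high-order zeros are
# dropped and the sign is combined with the units position; on
# reading, the signed units digit marks the end of the word and the
# high-order zeros are restored.
def eliminate_zeros(sign, digits10):
    dropped = min(len(digits10) - len(digits10.lstrip("0")), 5)
    kept = digits10[dropped:]
    return kept[:-1] + sign + kept[-1]       # sign marks the units digit

def restore_word(tape_field):
    sign = tape_field[-2]
    digits = tape_field[:-2] + tape_field[-1]
    return sign, digits.rjust(10, "0")       # refill high-order zeros

assert eliminate_zeros("+", "0000001852") == "0185+2"
assert restore_word("0185+2") == ("+", "0000001852")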

Fig. 9-Scatter read/gather write example (a single inventory record, with fields such as item number, description, unit weight, unit cost, unit prices, sales this period, on-hand balance, and minimum inventory, passes through the tape channel and is distributed by field into separate blocks of magnetic core storage).

CHANNEL CONTROL

The magnetic tape system of the 7070 has two channels, each connecting as many as six tape units with
magnetic core storage. The channels also connect to as
many as four disk storage units. These disk units can be
programmed to use either channel, whereas a tape unit
can use only the one connecting it to the central processing area.
The primary purpose of the additional channel is to
allow the IBM 7070 to perform two operations simultaneously-reading two tapes, writing two tapes, or
reading one and writing one. Each control channel performs one of the two tape read/write operations; while
these simultaneous operations are taking place the program can continue. The function of the tape channel
is shown diagrammatically in Fig. 10.
Information is moved between core storage and disk
or tape storage by the transmission registers. The purpose of the transmission registers is to change the type
of transmission from serial to parallel, or vice versa, and
to synchronize the speed of the tape or disk with the
speed of magnetic core storage. Data must move to and
from tape and disk storage serially, but they are always sent to and from core storage in parallel.

Fig. 10-Tape/disk channel control schematic.

In reading a tape
or disk storage record, the characters are read serially
into transmission register A until the ten-digit positions
and sign position are filled. At this point they are sent
in parallel to transmission register B. Register A then
starts to fill up serially again with the next characters on
the tape or disk record. The contents of register B are
sent in parallel to the magnetic core storage location
specified. In tape or disk storage write operations, data
go in parallel to transmission register B, then in parallel
to transmission register A, and serially to a tape or disk
storage unit.
The storage locations are controlled by the record definition register, one of which is used for each tape channel. The starting (or working) and stop addresses in the registers are obtained from a table of record definition words stored in the memory. After each word is read from tape, the working address is compared with the stop address. If they are unequal, the working address is increased by one and the next word is moved from tape to the new storage location specified.
This process continues until the working and stop addresses become equal, or until a record mark word is
read. At that point, the last word is transferred, and the
sign position in the record definition register is tested. If
it is plus, the record definition word address register is
increased by one, and a new record definition word is
brought in from storage to designate a new block of
words. When the sign of the record definition register is
minus, this means the last record definition word in the
operation has been reached and the operation is completed.
The operations described here are accomplished automatically within the record read or write time. The
combination of zero elimination, record definition words,
and record mark words provide a completely automatic
method of obtaining variable word size, record size, and
block size on magnetic tape.
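The loop can be summarized in a sketch (ours; words arrive one at a time from the tape, and the record definition register is modeled as a start address, a stop address, and a sign):

# Channel read loop: each word goes to the working address, which is
# compared with the stop address after the transfer; a plus sign on
# the record definition word fetches the next one from the table, a
# minus sign ends the operation.
def channel_read(words_from_tape, rdw_table, core):
    source = iter(words_from_tape)
    for start, stop, sign in rdw_table:      # record definition register
        working = start
        while True:
            word = next(source, None)
            if word is None:                 # end of the tape record
                return core
            core[working] = word             # register B to core storage
            if working == stop:
                break                        # addresses compare equal
            working += 1                     # step the working address
        if sign == "-":
            break                            # last record definition word
    return core

core = channel_read(list(range(7)), [(100, 102, "+"), (200, 203, "-")], {})
assert core[100] == 0 and core[203] == 6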

Segment Forward or Backward Space per Count
These are new commands designed to allow automatic spacing over a series of tape records. The spacing
operation can be performed with the tape moving in a
forward or reverse direction. The tape is moved at full
speed until one or several segment marks are sensed.
The number of segments skipped is defined by a record
definition word. The segment marks are recorded on the
tape under program control and can be used for any
type of demarcation required, such as a fixed number of
tape records or a change in the file sequence control.

Table Look-Up Operations
There are three types of table look-up provided in the IBM 7070.
They are 1) equal or high, 2) equal, and 3) lowest. In
equal or high, a sequential table is searched for the first
entry that is equal to, or higher than, the search criteria.

Table look-up equal searches a random table for an
entry equal to the search criteria. Table look-up lowest
searches a random table for the entry with the lowest
search criteria.
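In outline (our sketch, with Python lists standing in for core-storage tables):

# The three 7070 table look-up types.
def lookup_equal_or_high(sequential_table, key):
    """First entry of a sequential table equal to or higher than key."""
    for entry in sequential_table:
        if entry >= key:
            return entry
    return None

def lookup_equal(random_table, key):
    """An entry of a random table equal to key."""
    for entry in random_table:
        if entry == key:
            return entry
    return None

def lookup_lowest(random_table):
    """The entry of a random table with the lowest value."""
    return min(random_table)

assert lookup_equal_or_high([10, 20, 30], 15) == 20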

Floating Decimal Arithmetic
An optional computing feature of the IBM 7070 is the use of floating decimal operations. This includes automatic floating point double precision operations which provide eight high-order and eight low-order mantissa digits, each with its correct, two-digit characteristic.

Automatic Priority Processing
This new programming feature makes it possible to
process more than one program during the same period.
This procedure eliminates time lost in waiting for input/
output operation to be completed, since one program or
another is constantly functioning. One of the programs,
called the main routine, has a comparatively large number of program steps. The others, called the priority
routines, have relatively few instructions, but involve
almost continuous use of card reader, card punch,
printer, tape unit, or disk file.
The main routine functions normally, while the tape,
disk storage, or input/output unit in the priority routine
is operating. These operations may include reading a
card, punching a card, reading tape, writing tape, seeking a disk file record, reading a file record, or writing a
file record. When any one of the preceding operations is
completed, the main routine is signalled automatically.
The priority routine carries out its program, stops its
input/output or storage unit, and releases priority. The
main routine then takes up exactly where it left off. It is
possible to have more than one tape, disk storage, or input/output unit operating on a priority basis during
the main routine. However, only the main routine can
be signalled for priority; it is not possible to do this to a
priority routine. If a second priority is ready while the
first one is in progress, it will wait until the first one is
completed. The main routine is resumed only when there
are no priority routines waiting.
Priority is determined by the setting of the stacking latches, shown diagrammatically in Fig. 11. There are stacking latches for the card input/output units, one for each tape unit, and one for each of the three read/write heads in each disk file. Each program step in the
main routine tests any or all of these latches to see if the
priority routine is ready. These tests are made without
any delay in the program steps of the main routine.
There are four types of priority: card, tape, disk file,
and inquiry. Card priority is caused by the completion
of a card read, card punch, or print operation. A tape
priority routine is initiated by the completion of a tape
read or write operation, and disk file priority is started
at the conclusion of a disk file read, write, or seek. A
manual inquiry automatically initiates an inquiry priority routine. In all cases, the program may test, set, or
reset any latch. This allows complete control of priority

processing by the program although the entire scanning process is continuous and automatic. In any tape operation requiring special attention, an automatic tape priority is initiated. This includes, for example, operations in which an end-of-tape is recognized.

Fig. 11-Stacking-latch scanning schematic (outer ring: stacking latches; inner ring: priority mask; with a priority waiting latch and a sequencing scanner).
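The scanning discipline can be sketched as follows (our illustration; the latches and the priority mask are modeled as dictionaries keyed by unit name):

# Stacking-latch scan: after each main-routine step, any set latch
# admitted by the priority mask triggers its priority routine, which
# then releases priority; the main routine resumes only when no
# priority routine is waiting.
def run(main_steps, latches, mask, priority_routines):
    for step in main_steps:
        step()                               # one main-routine step
        while True:                          # continuous, automatic scan
            ready = [u for u in latches if latches[u] and mask.get(u)]
            if not ready:
                break                        # main routine picks up again
            unit = ready[0]
            priority_routines[unit]()        # short, I/O-bound routine
            latches[unit] = False            # release priority

latches = {"tape 1": True, "card reader": False}
mask = {"tape 1": True, "card reader": True}
run([lambda: None], latches, mask, {"tape 1": lambda: None})
assert latches["tape 1"] is False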

Physical and Electrical Characteristics of the IBM 7070
Data-Processing System Printed Circuit Cards
The system electronics is housed in a group of sliding
gate cabinets, one of which is shown in Fig. 12. These
cabinets are designed to accommodate printed circuit
cards of the type shown in Fig. 13. These printed cards,
which mount up to six transistors, are designed for automatic fabrication. Approximately 14,000 cards are employed in the basic 7070 system shown in Fig. 1. A total
of 30,000 alloy junction germanium transistors and
22,000 point contact germanium diodes are used in this
system.
Chassis intercard signal wiring is provided by jumper wires which are connected to the card socket by wire-wrap connections. Wiring is accomplished automatically
by wire-wrap machinery. This is shown in Fig. 14. Distribution of power supply voltages to the transistor
cards is accomplished by printed circuit strips. These
features enable a major portion of the electronic system
to be fabricated by automatic equipment.

Physical Planning Requirements
The use of a fully transistorized system has resulted
in a reduction of the physical facilities needed to sup-

port the IBM 7070 Data-Processing System. Table II compares the savings for the IBM 7070 system with those of a comparable IBM 705 II.
TABLE II

Floor space reduced up to          50 per cent
Power reduced up to                58 per cent
Air conditioning reduced up to     58 per cent
Weight reduced up to               50 per cent

These reductions will result in substantial savings
during the physical installation phase of each IBM 7070
system. In addition, existing facilities will accommodate
more powerful IBM 7070 systems within physical facilities now using nontransistorized computers.

Library and Programs for the IBM 7070 Data-Processing
System
An extensive library of programs is provided with the
IBM 7070 system. These reduce the amount of time
and effort normally required in a computer installation.
The major automatic and library programs available
are as follows:
Autocoder: This is a commercial assembler program
based on comparable programs used with the 705
systems.
FORTRAN: This program is a complete compiler for
scientific problems. The FORTRAN language is identical to that currently used on the IBM 650, 704, 705,
and 709 systems.


Fig. 12-Sliding gate housing showing printed circuit cards.

Fig. 13-Printed circuit card.

Sort and Merge: This program provides from two- to
five-tape merging according to equipment available for
the program. It defines a wide range of record length
and control data.
Report Generator: The report generator describes and
assembles a program to produce any desired report in
about 30 to 45 minutes over-all time, including sorting
and merging in the preparation of the report.
Utility Programs: These are a comprehensive, integrated set of utility programs to be used primarily with
the Autocoder. They include the use of modern testing
techniques, loads, memory prints, tape prints, etc.
Ramac Programs: These utility programs are for
the Ramac file, loading, unloading, and searching operations. They are similar to programs now provided for

Fig. 14-IBM 7070 sliding gate housing showing wire-wrap intercard wiring.

the IBM 650. In addition, a set of programs facilitates use of the chaining system, including an evaluation program, a loading program, an additions and deletions program, and an optimum sequencing program.
General: Several programs assist in input/output
areas such as tape labeling, end of file, error routines,
and restart procedures. An input/output routine automatically schedules simultaneous reading, writing, and
processing functions for IBM 7070 card, tape systems,
and Ramac systems. This new routine enables the programmer to think of his job as a serial operation and
automatically provides an efficient overlap of all functions. An analysis of total programming requirements of
previous systems indicates that input/output has
averaged approximately 40 per cent of the total coding
job. With this new program, the IBM 7070 users can
reduce the time substantially since they are not concerned with the implications of input/output scheduling.
Simulation: An IBM 704 program simulates and tests
7070 programs. A 7070 program simulates the 650
card, tape, and Ramac machine with a similar 7070 configuration. This program will run at a speed at least
equivalent to the 650 execution of a program.
SUMMARY

The evaluation of a data processing system depends
upon more than its inherent machine characteristics. It includes the procedures, methods, and programs
provided with the system and the manufacturers' support needed during the pre-installation, installation,
and post-installation phases. All of these necessary and
important features are provided with the IBM 7070
Data Processing System.
Specifically, they are:

Balanced System: The high speed of computing and internal flow of data are balanced by high-speed tape units, rapid-access storage in the disk files, and high-speed card readers and punches.
Maximum Utilization: Automatic priority processing
allows efficient time-sharing and multiprogramming
abilities for input, output, tape, disk file, and inquiry
operations.
Building Blocks: A variety of units in varying capacities permit custom-made systems with the ability to
grow as the user's needs increase.
Application Range: The 7070 can handle a wide range of applications, including batch processing, in-line processing, and computing. It covers the area of medium- to large-scale systems.
Transistors: Solid-state components offer such advantages as high reliability and reduced requirements
for floor space, electric power, and air conditioning.
Access and Use of Storage: Each word in core storage
can be used for a program step, input or output. Scatter
read and gather write minimize the need for additional steps which arrange data or assemble them for tape operations.
Simultaneous Operations: Transferring data to and
from the system can be overlapped with computing
operations.
Fully Alphabetic: A complete 80-column card can be
read in or punched for any combination of numeric and
alphabetic data.
Programming Logic: Field definition, 99 index words, single-address instructions, and many other factors contribute to direct and simple programming logic.
Variable-Length Records: Full flexibility in handling
grouped records of variable length on tape is provided
automatically by the scatter read and gather write feature and automatic zero elimination.
Reliability: Complete checking of input, output, internal operations, and tape and disk storage insures the ultimate in performance.
Programming Systems: Assembly programs and a
number of other library routines assist in the planning
and programming.
Program Testing: Programs can be tested prior to delivery, permitting full operation immediately after installation.
IBM Services: IBM offers training, planning and programming assistance, customer engineering, and other
services. These are the vital steps necessary to insure
that the man-machine-methods team is complete and
will function properly.

An Organizational Approach to the Development
of an Integrated Data-Processing Plan
GEORGE J. FLEMING†

† Boeing Airplane Co., Seattle, Wash.

THE dictionary defines organization in four ways. One of these definitions is: "The way a thing's parts are arranged to work together," and this is the one that most nearly describes the subject.
The term integrated data processing has various meanings. Although originally used to describe common-language machine procedure it has gradually been extended to include all phases of the processing of data and is often used to describe the wedding of two or more data systems. For the purposes of this paper, the broader definition will be used.
The reasons for which data are processed may be categorized as follows:

1) Top-level management reports.
2) Middle-management reports.
3) Functional reports.
4) First-level management and operating reports.

Although these categories are a bit arbitrary, they
are a useful classification for establishing the requirements of an integrated system. Each of these categories
competes with the other for data and each has the following characteristics:
1) Requirement for data and supporting records.
2) Cycles on which they are produced.
3) The degree of accuracy that needs to be maintained.
4) Manner or method of presentation.


The degree to which each category is compatible to
mechanization varies as does the method of collecting,
controlling, transporting, transcribing and processing
of the data.
Top-level reports are usually prepared by a skilled
staff from data already processed and recorded. They
often reflect the considered judgment of the staff and
may be accompanied by documents that serve to analyze and evaluate the results they reflect. The volume
of data required is relatively small, although data may
be collected from many sources. These reports are usually produced on a monthly or longer cycle and the degree of accuracy required is often exacting. Current
data-processing techniques are not usually adaptable
to these reports.
Middle-management reports are also generally compiled from data that have been summarized and recorded for other purposes. The interests in this category
are as wide as the data-collection and processing systems will permit. Requirements range from the critical
to the superficial and the pressures that can be brought
to bear are considerable. The cycles vary from weekly
to monthly and the volumes of data required can be
tremendous. The degree of accuracy is high and methods of presentation are complex.
The need to present data that are meaningful in a
manner that requires minimum analysis is important as
is careful disciplining of the requirements. Generally
these requirements are adaptable to mechanization.
The functional reports include payrolls, inventories,
production control, and other similar types of data processing that service the many functions of the company.
The processing of large amounts of data collected from
many sources is a major characteristic. The reporting
cycle is most often weekly and the work usually involves
the maintaining and up-dating of records. The degree of
accuracy required is high as accumulative errors may
lead to considerable distortion.
The cooperation of many departments is usually required in order to combine families of associated processing into functional applications. An example of such a
combination is the labor handling function which would
include data processing required for the personnel,
timekeeping, payroll, accounting, treasury, and labor
relations departments. Another example is the material
handling function which might include some data-handling problems that originate in the purchasing, receiving, storing, accounts payable and manufacturing
areas of the company.
These processes are highly compatible to machines,
and are generally the forte of the data-processing service
center. Most of the applications being processed on electronic data-processing equipment are, at least for the
moment, in these functional reporting areas. The new
data available to middle management are largely a byproduct of these processes.
First-level management and operating reports are

often the stepchild of our modern high-speed data-processing systems.
The operating levels of management need reports that
reflect yesterday's results the first thing this morning.
The need for corrective action is urgent and must be
taken immediately if it is to be effective. The report formats must be easily understood by many people and
the contents flagged so that a cursory scanning of the
detail will reveal troubled areas. The volume of data
required is often large, as these reports are prepared for
the control of detailed operations. A relatively low level
of accuracy may be tolerated. A daily cycle and simplicity of format are the important considerations. Data
collected in or close to the area being served are normally
the source of these reports.
Format, collection systems and methods need to be
individually tailored to the department served as the
manufacturing or service processes they are designed to
control usually differ within each department.
The data-processing center is rarely equipped with
either the machines or manpower to cope with this category of reporting. Yet, the data needed for these reports
are usually the same data that are required for the functional reporting category.
In order to promote efficient, practical, and economical integrated processing, the data-processing center
must maintain an influential position regarding the determination of data-handling procedures within the
company. The quality and cost of data processing for
the functional and middle-management categories of
reporting will be dependent on the center's success in
establishing informational pipelines into all departments and the development of uniform and standardized
methods of operation. The center will need to maintain
staffs of intelligent personnel trained in data-processing
methods in order to set up and operate the sophisticated
procedures that will be required.
Experience indicates that even this is not enough.
Only when machine techniques are combined with practical operating experience can integrated results be successfully achieved. The establishment of informational
pipelines is only useful when they can be carefully maintained and vigilantly guarded. Sophisticated procedure
may be satisfactory for the highly-skilled machine technician but breaks down rapidly when entrusted to less-informed employees.
It appears that we may have reached an impasse on
our trail towards the integration of data processing.
The requirements of the data-processing center are,
in many respects, in conflict with aspirations of the department being served. The forcing of informational
pipelines and procedural restriction on other departments tends to create awkward human relations problems.
The various departments in need of "on the spot" reporting to keep their first-line management informed
will be inclined to develop their own methods which will

compete for data and make standardizing of data-handling procedures difficult. Arranging these parts to
work together is a challenging problem.
The proposed solution is not an easy way out. However, where the suggested type of organization has been
established by managers who were aware of the many
incompatible aspects of the problem and who have had
a sincere desire to promote efficient methods of data
processing, it has been quite successful.
The proposed plan assumes the following:
1) A centralized concept of data processing has been
established, at least for the use of major equipment items.
2) Integration of the various data processes is considered advisable.
3) The central data service is equipped to take
proper care of most of the functional and middle-management reporting and record-keeping requirements.
4) The center reports to a level of management that
is influential in all departments of the organization.
5) A capable staff of analysts and programmers is
available.
6) Initial applications on major equipment have
been satisfactorily installed.
The major feature of the proposed organizational
plan is an outside (of the center) operation designed to
provide individual service to the using departments
and provide for the informational pipelines required by
the center. Another way of describing these service
groups would be branch or satellite data-processing operations.
The satellite operation may be as large or small as required to provide for the needs of the department being
served and may operate minor data-processing equipment. The equipment may range from paper-tape-equipped adding machines, typewriters, and bookkeeping machines, to small punched-card installations or,
where justified, small-scale electronic machines.
Normally, these satellite operations will be located
in the using departments in order to establish an atmosphere conducive to the wedding of technical machine-processing skills with the departmental experience.
The supervisor of the satellite operation will report
directly to the data-processing center. However, he will
be dedicated to serving the department manager to
whom he is assigned and must be approved by the department manager. It is expected that this operation
will be staffed by a mixture of personnel drawn from
both the data-processing center and the using departments.
This group will, in addition to serving the using department, have the secondary responsibility of establishing and maintaining the pipelines necessary to provide the data-processing center
with the data that are originated or perpetuated by the
using department. It is further expected that as the
satellite group acquires the proper skills they will assume an active role in preparing suggested methods and
procedure for the approval of the department head and,
when approved, assist in their implementation.
Although the size of the satellite operation and the
manner in which it is equipped will vary with the needs
of each department, the duties will remain the same.
Implementing the plan will require a clear understanding of the objectives and duties of each satellite group.
A letter or memo signed by the data-processing manager and the department head is suggested in order to
be certain that the operating ground rules are firmly
established. These rules may be expanded as the group
gains familiarity with the area. However, the line of reporting must be to the data-processing center if maximum benefits are to be attained.
Organization within the data-processing center is
flexible. However, as the number of satellite operations
increases, it may be advisable to appoint a supervisor
over these operations to insure that expected standards
of service are maintained and to coordinate the procedural and data-flow activities.
Several problems in human relations are apparent in
this proposal, such as the acceptance of the satellite
operation into a department, the divided responsibilities
of the supervisor, and the relationship between the department head and the data-processing manager. Difficulties may be expected in this regard, but experience
indicates that wherever a sincere effort is made to overcome these difficulties, they are less serious than those
generated by other types of organization. The proposal
is often questioned from the standpoint of the utilization of equipment and manpower. This factor, if present
at all, is normally offset by superior service rendered
the using department. The data-processing center will
benefit by receiving preprocessed data under carefully
administered controls and in presummarized form.
The most serious problem will prove to be in adequately manning the satellite operation with qualified
personnel. Most often the success of this plan will be
reflected in the ability of the appointed supervisor.
The establishment of these satellite data processing
groups provides an organization which may well serve
to overcome most of the day-to-day problems connected with integrated data processing. Its many benefits include:
1) Providing the using department with a specialized
data-processing service. When properly equipped,
this group can prepare records and reports tailored
to the department's requirements without seriously interfering with the schedule of the central
data service. It also provides a vital service to the
data-processing center by maintaining surveillance over the data-collection system that is so necessary to high-speed processing.
2) Such an organization provides a coordination medium through which standards of operations, procedures and practices may be established. It also provides a natural working medium for the exchange of mechanical techniques and departmental operating experience.
3) By being directly related to the service center, the satellite group is in a position to call for and get expert service and advice from the service center. It may borrow a keypunch operator to relieve a temporary situation, obtain an experienced operator if required, or call in an analyst to assist them. The shifting of temporary overloads to the service center's standby equipment is facilitated and it should be noted that the center will have a built-in additional capacity for weekend or emergency work.
4) The proposed arrangement is useful in establishing control over data-processing equipment to be used within the company.
5) It allows departmental management to concentrate on their prime objective with the assurance that records vital to the department's functions will be properly maintained. The system analysis group within the service center is provided a means of reviewing departmental reports to assure themselves and the company that costly and unnecessarily redundant reports are not being maintained.
6) This plan also provides a means for giving the
technically-trained employee management and
administrative experience which will help to assure a pool of prospective management talent. The
service center will be benefited as this plan provides a line of advancement for their superior employees.
Precedents for the proposed plan are not too difficult
to find. For example: The timekeeping organization, although reporting to one authority, provides services to
many areas. Transportation units often operate from
central pools while maintaining specialized services in
remote departments. Purchasing, normally concentrated in a center, provides associated operations to service
outlying branches. In fact, wherever overlapping services are a requirement, such organizations are not uncommon.
It seems probable that recently devised equipment
combined with advanced techniques and organized in
this manner will permit another step toward the goal
of truly integrated data processing.

Developing a Long-Range Plan for Corporate Methods
and the Dependence on Electronic Data Processing
NORMAN J. REAM†

† Lockheed Aircraft Corp., Burbank, Calif.

INTRODUCTION

I HAVE been asked to speak to you on the subject of the impact of electronic data-processing innovations on corporate systems planning. This subject matter could be a recitation of how we have approached our planning effort at Lockheed followed by a recitation of how it has been adjusted from time to time by innovations announced by various manufacturers of electronic data processing equipment.
However, I feel that this subject can best be approached by first spelling out some of the major problems facing all industry, pointing up some areas that are lacking in development. Then I shall attempt to discuss what appears to be a logical approach to these difficult management problems, what contributions electronic
data processing has made to date, and what contributions future innovations in electronic data processing
will or will not make to the solution of this multitude of
problems.
While my remarks are directed to a corporate administrative systems planning effort, we all realize that the
subjects that are under discussion at this conference
have broad social significance and we must stand ready
to assume our responsibilities.
Our economic system is designed in a manner in which
a majority of the decisions affecting it are made by
thousands of independent managements. This is an advantage to our country and to industry, but it also
poses heavy responsibilities on the shoulders of members
of management. As Americans, we are convinced that
this freedom of action awarded these managements will,
when working within the proper social framework and
business environment, result in the greatest good for our

citizenry. As a nation, we have flourished under the concept of individual freedom and I believe that it holds the only promise for our future.
You are all aware that at the present time we are engaged in an economic "cold war." Khrushchev has publicly announced that it is the Russian intention to bring the United States to its knees by means of economic war. He has also announced that at the end of their latest seven-year plan, Russia will have surpassed the United States in production and that the living standard of the average Russian will be better than we Americans now experience or will be experiencing at that time.
Fantastic claims? Perhaps, but we are engaged in a life or death struggle and the boasted intent of our economic adversary must be taken seriously, for these same intentions were first announced by Karl Marx over one hundred years ago. Therefore, the seriousness of our responsibility cannot be overstated. We must seek ways and means of increasing our over-all productivity and continuing the elevation of our living standards.
To maintain our cherished world position ahead of other world political doctrines will require that the thousands of independent managements in our country provide initiative and leadership in the use of the resources at their command. They are charged with the responsibility of securing greater internal efficiencies within their individual organizations and at the same time maintaining and improving their satisfactory relationships with employees, customers, stockholders, and others.
Drucker1 stated in a recent article that one of the major problems of business is "the lack of any bridge of understanding between the 'macro-economics' of an economy and the 'micro-economics' of the most important actor of this economy, the business enterprise." He said that the only micro-economic concept to be found today in our economic theory is that of profit maximization which may mean short-run immediate revenue or long-range basic profitability of wealth-producing resources that may have to be qualified by a host of unpredictables, such as managerial power drives, union pressures, technology, etc. But this fails to account for business behavior in a growing economy.
According to Drucker, profit maximization is the wrong concept. The relevant question is: "What minimum does a business need?" not "What maximum can it make?" Companies that have attempted to think through the risks of business have found that the survival minimum exceeded the present "maxima" in many cases.
Drucker further pointed out that another crying need is the development of an integrated organization. Twenty years ago it was possible to see a business enterprise as a mechanical assemblage of functions, but today we know that when we talk of business, functions simply do not exist. We speak of business profit, risk, product, investment, and customer relationships: The functions are irrelevant to any of them, yet we also recognize that work has to be done by people who specialize because no one knows thoroughly the ins and outs of a given function today, let alone all functions of a business. A basic problem then is how to transmute functional knowledge and functional contribution into general direction and profitable general results.
To my knowledge the problem of integration has escaped solution and will only be answered by devoted research and development. This will not be easy for we must recognize we are faced with the problem of developing a means of measuring and controlling a complex assortment of interacting groups of variously motivated entities in a flux of decision-making situations that comprise a normal company. The degree of the complexity involved is usually in direct proportion to the size of the organization.

RESPONSIBILITIES OF MANAGEMENT

Let us turn for a moment to a discussion of the responsibilities of management, for certainly the administrative systems planning effort of any company must be addressed to the responsibilities of management.
In the broadest sense we could define the responsibilities of management as the guidance, leadership, and control of a group of individuals toward a common objective. This broad definition indicates a purpose, but fails to give us an insight of how results are obtained. Therefore, it is necessary to define the responsibilities of management by defining their five basic processes:
1) Planning-that is, determining what shall be done. As used here, planning covers a wide range of decisions, including the clarification of objectives, establishment of policies, establishment of programs, and determining specific methods and procedures.
2) Organizing-or grouping the activities necessary to carry out the plans into management units and defining the relationships among the executives and workers in such units.
3) Assembling resources-that is, obtaining for the use of the business the personnel, capital, facilities, and other things needed to execute the established plans.
4) Directing-i.e., issuing management directives. This includes the vital matter of indicating plans to those who are responsible for carrying them out.
5) Controlling-or seeing that operating results conform as nearly as possible to the established plans. This involves the establishment of standards, motivation of people to achieve these standards, comparison of actual results against the predetermined standard, and initiating necessary corrective action when performance deviates from the plan.

1 P. F. Drucker, "Business objectives and survival needs," J. Business, Univ. Chicago Press, Chicago, Ill., vol. 31, pp. 81-90; April, 1958.


All management engages in the processes I have enumerated, and it is clear that various individuals who comprise management spend varying amounts of time at each.
The different members of management are divided
into their functional specialties, such as sales, research
and development, engineering, manufacturing, finance,
industrial relations, etc. Each of these may in turn be
divided into the five basic processes of management previously enumerated. These are then established as two
different approaches to the same management activities.
For example, the vice-president of engineering must
plan, organize, assemble resources, and direct and control in the same manner as any other member of management. His problems may differ in degree, but they
are interrelated and interdependent upon those problems of the balance of the management echelons.
The increase in the scope and complexity of modernday business has resulted in management recognition
of the necessity of further development and increased
use of scientific management techniques. Management
recognizes that our simplifying processes must move
forward in balance with business complicating processes.
Unless our simplifying processes keep pace, we will become a casualty of our self-developed economic and
business complexity.
The next few years will see a tremendous increase in
the use of data-processing systems in the development
of new management methods; however, I wish to emphasize that in my opinion they will only be a tool of
management in the over-all improvement of the management abilities.
OBJECTIVES OF CORPORATE SYSTEMS PLANNING

In any corporate systems planning effort, it is axiomatic that we direct our attentions to the problem of
determining what information is required to operate
business in a coordinated and profitable manner.
Good communication aids in coordinating the known
activities of management. For instance, management
must know promptly whether operations are proceeding
in accordance with plans so that adjustments can be
made when required. Moreover, there are a wide variety
of activities, particularly those of a detailed nature,
that are impractical to plan too far in advance, and
coordination of these is achieved only when the personnel directing and performing them have current information upon which to base decisions.
Management has awakened to the realization that a
business is essentially controlled and directed by decisions based on information supplied by its data-processing system. It has realized that major policy evolves
from a whole series of day-to-day decisions based on the
information currently at hand. And it has awakened to
the bare fact that much desirable information is not
available.
, Further, management is realizing that it is much

simpler to set down ,the logic of problems in the field of
physical sciences than it is to set down the assumed
logic of a business executive when he is making a decision based on incomplete data.
Having established that the communication of essential information within a company is paramount in developing an integrated approach to management problems, it is necessary that we turn our attention to the
administrative systems that are used to supply management intelligence.
The basic objectives in the development of an integrated approach to administrative systems are:
1) Development of improved management intelligence for use in decision-making processes.
2) The reduction and control of time spans.
3) Improved accuracy.
4) Increased productivity.
5) Reduced costs of operation.
None of these basic objectives are new, but the advent
of electronic data-processing systems has given new life
to this whole area. We are now and will be seeking the
proper application of data-processing devices in order to
take full advantage of interrelationships between the
data problems of various segments of management and
to recognize appropriately the dependence of a number
of these segments on the same basic input information.
PLANNED IMPROVEMENT

In attempting to devise a planned administrative
systems improvement program for Lockheed, we recognized we could not realize the desired results merely by
studying, appraising, and converting our existing systems and procedures. We were faced with a research
problem of considerable magnitude, and it could only be
solved by an analysis of the requirements based upon a
knowledge of systems parameters.
We found we had to solve this logical problem: How
can we best take the basic information from our day-to-day
operations and process and distribute it so as to maximize
our profits and minimize our costs? The problem required that we devise a well-planned program to be carried out by creative people acquainted with research
methods.
The most difficult part of any complicated problem, whether administrative or scientific, is the devising of a clear formulation and the establishment of a systematic manner of proceeding. The administrative-systems problem, being primarily concerned with the processing and flow of information, has three basic parts:
1) Formulation of the problem-We must determine
here what inputs are required and what outputs
are required.
2) Logical design-In this part of the study, we set up the internal relationships and describe the detailed information flow so that given inputs will produce required outputs.

3) Detailed systems design-This part is concerned
with the techniques and the tools of the system
and spells out how the operations required in 2)
will be accomplished.
We found that we had to attack these in their order (although at times it is possible to accomplish some of the logical design and the detailed systems design in parallel).
Again, I emphasize that the most difficult area is the
proper formulation of the problem. Until this is accomplished, little return can be expected in attempting
to attack small pieces of the existing system.
The basic requirement is to tie the various segments of management into an effective whole through proper flow of information. It is this binding of the entire organization into an effective, integrated whole that permits the information pipelines to serve as a means of improving not only day-to-day operations but projected operations as well.
Regarding this as a logical problem, we realized that
it is not necessary that each person working on the formulation of the problem or the logical design have a detailed knowledge of the existing systems, accounting,
manufacturing control, etc. The actual requirement is
an ability to use research techniques and an objective
attitude which will not necessarily be influenced by existing administrative systems.
EFFECT OF ELECTRONIC DATA-PROCESSING
INNOVATIONS

In attempting to discuss the effect of equipment innovations on the systems planning effort of a corporation, I would like to preface my comments by stating
that in my opinion industry in general has not learned
to use efficiently the logical abilities of existing data
processing systems. I do not mean to discount the tremendous contribution that electronic data processing
systems have made to management, but I wish to emphasize that in my opinion, we are only on the frontiers
of exploiting their potential values.
Certainly the advent of solid-state systems will increase considerably the over-all productivity of equipment systems and will improve their reliability. However, until we improve our ability to use the logical designs of such new systems, we shall be using them as inefficiently as we are using existing systems, and our
costs will undoubtedly be proportionately higher than
they should be.
Since the delivery of the first Univac system to the
Census Bureau in March, 1951, we have seen tremendous strides in the development of equipment systems.
For instance, in the 700 series we have seen the 702 and then the successive introduction of multiple versions of the 705. Unfortunately, companies that have followed
the trend of accepting all equipment changes that have
come down the pike have done so at a tremendous cost.


I believe changes of electronic data processing systems
must be dictated by management's ability to use them
efficiently after all costs have been considered. Too
many times equipment changes have been made without recognizing the multitude of such hidden costs as site preparation, turn-around costs, reprogramming effort,
etc. A word of caution-evaluate any proposed equipment change carefully.
With the rapid introduction of more advanced data-processing systems by various manufacturers, many
managements are utterly confused and are continually
pressed to answer the supposed problem of equipment
obsolescence. It seems to me that equipment obsolescence falls into four categories:
1) Economically obsolete-It would be beneficial economically to discard equipment and replace it
with equipment of more advanced design.
2) Technically obsolete-Better equipment is immediately available. It is not warranted to discard
presently installed equipment, but if new equipment were being acquired, it would be more feasible to acquire the latest design.
3) Technically obsolescent-Better equipment is in
the design stage, but is not presently available.
4) Conceptually obsolete-Better equipment is theoretically possible, but has not yet been designed.
If you will reflect for a moment you will recognize
the four stages of obsolescence and realize that all
equipments proceed from the stage of being conceptually
obsolete to the point of being economically obsolete.
However, the point of becoming economically obsolete
is usually a long time after they become technically
obsolete, because the marginal cost of operating and
maintaining them is small. If we were in a detailed
discussion of leased or purchased equipment, then we
would have to concede that when equipments are being
leased, the point at which they reach economic obsolescence is much earlier than when they are purchased.
There are two major reasons for a change in electronic data-processing systems:
1) It is desirable to decrease the unit cost, or
2) It is desirable to do jobs that cannot be accomplished on existing equipment.
One of these two must be the basic reason for change to
equipments of increased speed or increased memory
capacity.
The physical difficulties as well as the economic problems of changing equipment, both for the manufacturer and the user, act as an inertia or a brake on the introduction of too many new models, each a slight improvement over the old. In my opinion, an improvement on the order of three to five times is needed before a going machine can economically be supplanted.
Because of the long cycles and lead times involved in
the development of new data-processing systems, there is an upper limit to the speed with which new electronic data-processing systems can be introduced by any one manufacturer. However, as additional manufacturers start at varying places on the time scale,
the result for the consumer is a practically continuous
curve of increased technological advantage. The management problem then becomes one of choosing at
which point to make the change and of assessing the reliability of the various manufacturers' claims.
Another factor that must be considered is whether or
not the user of the proposed new electronic data processing system actually has a fully developed requirement.
In other words, you can find operations where electronic
data processing systems have been changed only to
learn that they have acquired too large a system for the
amount of work to be processed.
To date, most of the attention of management has
been directed towards gathering historical information
and not too much progress has been made in the area of
development of management information for determination of a company's objectives. The advent of electronic data processing systems makes possible the application of techniques that had hitherto been impossible by manual or electromechanical means. We can
look forward to great strides in this area in the next
few years in the development and application of mathematical techniques (operations research) to the problems of management.
The area of financial planning and control is wide
open, for example, in the applications of scientific management techniques. I am sure that electronic data-processing equipment will be a major contributor to the
eventual solution of many existing problems. However,
the equipment will only be a means of developing an
answer after we have determined what information is
required and how it can be developed. Here again, I
would like to emphasize that progress will not come
about merely by having the equipment, but must be
preceded by a fully developed definition of the problem
and the logical design of the system required.
In my initial work in the financial forecasting area, the
first unforeseen difficulty was the problem of language.
As a simple example, discussion with people in financial
operations activities revealed instances of several distinct concepts answering to the same name and a multiplicity of names applied to essentially one concept. As a
result, it was considered necessary to establish for the
accounting aspects of the problem a structural framework which was reasonably complete, self-consistent,
and within which the problem could be formulated with
some assurance that the terms employed were clearly
defined. Eventually it was determined that the problem
could be solved in four steps. First, obtaining a mathematical model for the entire accounting feature of the
forecast. Second, a study of prediction techniques used
in obtaining basic information for the various inputs to the accounting structure. Third, a study of the procedures necessary to make certain quantities of the output a maximum or minimum, while others are held
under specified restrictions. Fourth, the application of
high-speed data processing equipment to the results of
the three previous studies.
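As a purely illustrative aside (ours, not the author's; the two product lines, the figures, and the use of a present-day linear-programming routine are all assumptions), the third step, making certain output quantities a maximum or minimum while others are held under specified restrictions, is the kind of problem now handled routinely by linear programming:

    # Hypothetical sketch of the third step: maximize one output quantity
    # (profit) while other quantities are held under specified restrictions.
    from scipy.optimize import linprog

    profit = [-40.0, -30.0]        # profit per unit, negated (linprog minimizes)
    usage = [[2.0, 1.0],           # machine-hours consumed per unit of each product
             [1.0, 3.0]]           # labor-hours consumed per unit of each product
    available = [100.0, 90.0]      # the restrictions: hours of each resource available

    result = linprog(profit, A_ub=usage, b_ub=available,
                     bounds=[(0, None), (0, None)])
    print(result.x, -result.fun)   # optimal mix [42, 16]; maximum profit 2160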
I might add that work in these areas does not lend itself to rapid solution but is a result of long and arduous
effort on the part of persons who are dedicated to researching this type of management problem.
Information retrieval is another problem area that
holds promise of solution. However, data processing
equipment cannot make a contribution until such time
as the problem is defined and a logical retrieval system is
designed.
These latter two are only representative of a host of
management problems that must be sought out and
solved to insure eventual management survival.
SELLING ELECTRONIC DATA PROCESSING

Those of you who are in the data-processing area of
your management are faced with a real challenge to sell
your ideas to management as well as to employees. You
not only have the problem of selling the use of electronic
data-processing equipment, but you ultimately will have
the responsibility of proving that the introduction of
these systems has reached or exceeded the break-even
point. The break-even point of electronic data-processing systems is still most dubious; in fact, many existing
installations are not economically sound. You are faced
with the challenge of the control of costs after installation and of the continuous effort of insuring that the
programming effort is of prime efficiency.
ORGANIZATION CHANGE REQUIRED

One of the most difficult problems which faces the advocates of the use of electronic data-processing systems
is to awaken managements to their responsibility of insuring that they are using the systems to their best abilities. I agree that computers have been the impetus behind the tremendous interest that is now being focused
on the development of improved administrative systems, but at the same time most managements have
failed to recognize the necessity of integrating their organization in a systematic and purposeful manner.
Usually these changes just happen over a period of time,
arise from temporary expediency, or emerge as a solution to a crisis situation. However, in most instances
functional reorganization has not kept abreast of the
technological change.
The management organization structure is not inviolate and should be treated accordingly. In many cases
changes in the organization structure can eliminate
many of the procedures that complicate an administrative system, thus reducing costs and contributing to the
simplifying process.

CONSOLIDATION OF SCIENTIFIC AND ADMINISTRATIVE
ELECTRONIC DATA-PROCESSING OPERATIONS

One of the most interesting and promising developments in the application of electronic data-processing
systems in business is the apparent determination of
most manufacturers to develop a data-processing system that can be used for both business and scientific
data processing with equal effectiveness. Also, the trend towards the development of these all-purpose data-processing systems in the medium-scale field, with an ability to expand, opens the door for the economic use of computers in many smaller industries. It also gives
good reason to speculate that large companies will find
that the use of these dual-purpose systems will decrease
considerably their over-all data-processing systems
rental costs by consolidation of scientific and business
data-processing operations and maximizing available
computer time.
The establishment of two distinct data-processing groups within a given area of a business to handle independently scientific and administrative data-processing
operations is a very costly venture. The now recognized
means to handle many independently programmed operations simultaneously, that is, business, scientific, or
both, eliminates many of the arguments that were formerly used to substantiate the independence of the two
establishments referred to above. From the standpoint
of the future economics of data-processing systems installation, I believe that this is a major step forward.
PRE-ANNOUNCEMENT OF EQUIPMENTS

One of the most disturbing factors in the application
of electronic data-processing equipments to management problems is the continued practice of some manufacturers of announcing new equipment innovations far
in advance of their actual design completion or availability. I think that manufacturers would serve their
purpose well if they would withhold announcements of
new equipment until such time as they were able to discuss these new equipment systems with the support of
factual information.
I recognize that the industry is extremely competitive
and that various companies are jockeying for the best
possible position. However, I believe that many of
their sales practices have done much to retard progress
by throwing many managements into a state of utter
confusion by moving equipments into a business enterprise long before the planned installation is ready for
their use.
I know of instances where certain manufacturers
have attempted to have management withhold decision on equipment in order to deliver their own new
system many months in the future when full economic
justification of immediately available equipments has
been documented. Also, some manufacturers' sales efforts have attempted to replace a competitor's equipment with their own equipments when there is no economic justification to the user.
Usually the sales tactic behind these equipment preannouncements has been to prove to a potential customer that the manufacturer in question has a corner
on the knowledge and abilities in the data-processing
equipment field. In my opinion, there are many reliable
manufacturers and I know of no company that has a
corner in the area of development or know-how.
LACK OF DEVELOPMENT IN THE INPUT AREA

Just as in the usage of punched-card equipments, the
weakest area of electronic data-processing systems development today is in the input areas, i.e., the collection
of input data at the point of origin. It is true that this
area is one of the most difficult to define-and perhaps
has been side-stepped by both the users and the manufacturers because of this. Yet, I have the feeling that
some manufacturers have long recognized the problem
but have been more interested in protecting their investment in equipments and card plants than in making a
definite contribution to the state of the art.
ELECTRONIC DATA-PROCESSING SYSTEMS CONTRACTS

I would also like to comment briefly on the present
type of contract normally available for the lessee and
the lessor of the equipments. Up to a short time ago,
the rental contracts were certainly all vague and meaningless and were written solely for the benefit of the
lessor. However, recently there has been a trend on the
part of some lessors-that is, the manufacturers of the
equipment-to write more definitive contracts than were
previously available to the user. Even these are not
definitive enough. It is time for the users to demand an
even more definitive contract, one that charges the
manufacturers with performance responsibility. I can
see no reason why any manufacturer should make fantastic performance claims for given equipments during
his sales efforts and then fail, if they are valid claims, to
put these into a leasing agreement. Refusal only means
that the abilities of the equipments have been overstated.
The present type of contract, as I previously stated,
is a lessor's contract, and I believe that its continued
use through the years has caused the user to absorb
many costs due to equipment malfunction that should
have rightfully been borne by the manufacturer.
DEVELOPMENT OF PROGRAMMING TECHNIQUES

Charges have been levelled at the manufacturers of
data-processing equipment that they have failed to develop means of using the logical ability of existing systems to the fullest extent through neglecting to furnish
advanced programming aids and techniques. To a degree it is a valid criticism. However, I feel that this is
buck-passing, and to date our failure to exploit more fully the use of presently available data-processing equipment in business is caused by the lack of the development of logical systems that will fully exploit the
logical abilities of the equipment available. Too much
effort has been expended on trying to transfer available
human systems to these equipments rather than attempting to develop a proper definition of the logical
system required.
CONCLUSION

Electronic data processing is becoming a byword in
the evolution of the techniques of a scientific approach
to the problems of management. The equipments are
not an end in themselves and cannot be considered a
panacea for the ills of management. Rather they are a
tool of management. Their contribution to the improvement of management is entirely dependent on how well
the problems of management are defined by the indi-

vidual practitioners and how ingenious they are at developing means of formulating information for management review and decision-making processes.
Again, we are only on the frontiers of the potential
we seek. Hard work and ingenuity will bring success.
We must move carefully and at all times must be in a
position to justify our activities. The introduction of
improved electronic data-processing systems will undoubtedly contribute to the advancement of the state
of the art, but the feeding of bad inputs into faster and
more capable equipments will only generate more bad
information at a faster pace.
I have tried to point out some of the difficult areas
we are encountering to avoid overoptimism. Yet, there
is no reason to be overpessimistic. Our eventual goal
can be attained, and with the high stakes involved, the
significance of the results warrants the all-out effort.

A General Approach to Planning for Management Use of EDPM Equipment
GOMER H. REDMOND†

† Chrysler Corp., Detroit, Mich.

DURING the past decade we have all been aware
of, or have played a part in, a maximum effort on
the part of certain dedicated people. The objective of this informal effort seemed to be to convert all
clerical and computational work efforts into an automatic, "fire-all-the-clerks, we-can-do-anything" approach to all known business practices. These dedicated
people were mainly comprised of data-processing manufacturers, management consultants, scientists, engineers,
and administrative line and staff executives and assistants. They usually had a good grasp of a specific problem and felt that this problem or series of problems were
sound EDP applications and by extrapolation, proceeded to teach a number of management executives the
economics of EDPM installations.
The influence of this effort and the span of time over
which it took place were both beneficial. Decisionmaking management level executives encouraged and
at the same time discouraged the advance of clerical
automation. Equipment marketing people were forced
to compete in areas that they frankly knew were not
practical. "If I don't submit a proposal for this application, competition will get the business and it's a rough
row getting back in," were frequent comments that we all have heard upon questioning EDP salesmen regarding doubtful or submarginal installations.
Management consultants were asked into the clerical automation area by top executives who respected previous neutral and objective work assignments. Many of these management consultants accepted work assignments with the assumption that old and proven "standard" techniques would serve their purposes in this area as they had in many others; others went in with good staffs and did a "down town" yeoman-like job. Still others capitalized on the confusion and helped equalize mass optimism by forecasting dire results if EDPM installations were contemplated and planned without specific help from their firm.
Not to be outstripped, many business and newspaper
writers and reporters joined in and began to point out
the miracles of electronic hardware and its possible
effect upon American business organizations and operations. Articles in magazines, in newspapers, in trade
journals praised use of EDP equipment as a new industrial revolution and played up the hardware and its
speed with almost no references to planning and application problems and costs. Other articles damned the rapid
acquisition of EDP equipment. These write-ups blew
way out of proportion some pioneering marginal commercial installations, threw serious doubt into management's mind as to the capabilities of their EDP planning people, and condemned organizational structure for permitting such a cancerous growth to survive.
Systems people and EAM operators, also feeling a need to keep abreast, were schooled in electronics. In many cases, they forgot traditional good systems concepts by tailoring work areas and systems to the convenience of a preselected "brain"; this was done to get into business as soon as possible.
To sum this up I would like to quote two individuals
who have different viewpoints and yet a mutual understanding of this problem. John Diebold, President of
John Diebold and Associates, in an address to the
Eleventh International Management Congress said,
"Automation has presented management with a major
new problem. As yet management has not faced up to
this problem and is hardly even grappling with it in any
true sense. This is through no lack of energy or good intentions. On the contrary, the very activity of management in this sphere attests to the progressive spirit and
desire for improvement that characterize the modern
manager. The trouble lies elsewhere. Automation has
turned out to be a much more complex and difficult
problem than was originally thought. This being the
case, the current disposition to minimize its revolutionary and novel aspects is more hindrance than help in
putting automation to work."
The other gentleman that I must quote, to successfully set the stage for this general topic, is an EDP
equipment manufacturing top executive. He is one of a
half-dozen human beings who have successfully bridged
the gap between an exacting technical knowledge of
EDPM and the work that management should plan for
this equipment. Just recently, he asked when American
military, industrial and institutional management was
going to use properly the equipment already produced
and plan the right scientific and data-processing jobs
for it to accomplish. He went on to point out that there
is enough data handling equipment already produced to
process all problems that our economy needs to handle
-providing this equipment is properly programmed and
properly distributed. Sound planning is the answer.
The need for planning prior to major commitment in
most endeavors is apparent. A football team plans so
that all eleven men work together to achieve a first
down or a touchdown. In business, profit planning,
sales planning, product planning, and production planning are present in most organizations and normally are
predominant factors in successful achievement of objectives.
Formal planning has not been adopted for orderly
consideration of electronic hardware by all business,
military, and other forms of organizations. This is
primarily due to the apparent desire of many functions
of an organization 1) to jump on this bandwagon (or get left behind), 2) to control this monster which could completely dissolve the current organization, 3) to merely extend current punch-card applications through the electronic barrier, or 4) to ignore the entire subject until EDP has become a proven practical factor in other organizations.
With the possibility of huge expenditures for programming, for building computer sites, for selection of
appropriate systems applications, and for the hardware
itself, an organization should begin to plan for successful planning, for it is essential in all forms of planning
that a fundamental design and approach to planning a
subject be created. Such a design for EDP planning
must take into consideration scope, organization, and
the ground rules or administration of a planning function itself.
To discuss this on a general basis, due to the number
of extremely different forms and types of organizations
represented here, let us first examine some possibilities
of designing "scope" into an EDP planning function.
SCOPE

Planning latitude may be extremely broad if you are functionally responsible for data processing in a multi-plant industrial complex, or it can be extremely limited if the characteristics of the enterprise are small and processing requirements are simple. To my knowledge, there is no formula for determination of planning scope. Some questions may help in formulating this portion of your design for EDP planning:
1) Should the plan point long range, at the immediate, or both? What should the plan achieve? Are
planning objectives understood by all concerned?
2) Is the planning concerned with a part or all of a
specific application?
3) Should planning review present punch card applications, present manual operations, or should it
also encompass data processing for new management decision-making techniques?
4) Is planning required for all portions of an industrial enterprise or are we only concerned with a
division, a plant, or a single function at corporate,
division, or plant level?
5) Should planning be pointed toward data-processing economy or in specific work areas do accuracy
and speed take precedence?
The scope of planning cannot be completely determined without a review of the planning organization
and its place in the organization.
ORGANIZATION

1) Where should the staff work of EDP planning
take place in the organization? Is central planning
required for product planning, for profit planning,
etc? Can economies be gained by centralized planning or are products and data-processing problems so different that plant or divisional planning
is essential?
2) Can a combination centralized-decentralized form
of planning work? Does this provide greater flexibility?


3) How should the planning group(s) be organized?
What should the relationships be to systems employees?
4) What should be the make-up of the planning
group or groups? How many staff workers should
participate? What background, educational and
experience levels should be utilized?
5) Should committee action be employed?
6) Should line employees contribute?
7) What part, if any, should outside consultants perform?
8) Equipment manufacturers-should they be asked
to contribute data or ideas during planning and
decision making?

After a proper organization is decided upon, management must decide upon a proper set of ground rules or administrative routine within which the EDP planning will operate.

PLANNING GROUND RULES

1) Is planning to be specific or broad? Is the plan to be fixed or flexible? What happens to the plan if broad procedures or organizational patterns are changed during or after completion of the plan?
2) How much documentation will be required? Should progress be periodically or occasionally reported, and if so, to what level or levels of management?
3) What preplanning bench marks can be established to enable management measurement of planning progress?
4) Should management decision be requested during, or only upon completion of, a review of the entire planning scope?
5) In what detail should management participate during planning steps? What information must management have to make final decisions properly-all the detailed technical data or broad approximations of cost and estimates of anticipated results?
6) Should fact finding be accomplished by a planning staff, or should all components contribute factual data?
7) Is it necessary to sell staff work and EDP planning prior to the planning action?

These question areas are representative of the many decisions that should be made before initiating a thorough corporate-wide planning survey or performing a feasibility study at division or plant level. The list of questions is not a complete check list, but it may be helpful in creating a list for a specific organization.

Planning work that can be measured and evaluated is difficult in itself, but when initially directed toward EDP, difficulties in the planning process may be amplified by communication problems and by many organizations' inherent caution and fear of possible changes.

Planning work and staff work are not new to most American military men or to business firms. An important contribution to the acceptance of a complex program is the attainment of prior success, which creates an aura of reliability in those executives who have so performed. Therefore, design for the plan is especially important during an initial planning process; in many cases the "chips" are great, and, as in baseball, the batter either gets on base or he is out. In many more cases, however, successful use of EAM equipment plays a significant role in preparing for formal planning.

At Chrysler our planning had to be initiated by superimposing a fact-finding and investigatory stage over an existing broad and, in many cases, extremely good data-processing system. This system utilized medium and large EDP equipment along with EAM equipment at various levels of organization. It was largely created by breaking off an existing installation (manpower and programs); for geographic purposes, a new installation was then originated. A good amount of systems work had been accomplished, but this area needed extensive work to restrengthen approach concepts as well as to institute much-needed machine-room documentation, administration, and control. A total of 42 machine rooms were in existence, with over 1200 employees working on data-processing equipment. The comptroller was then informed of the need for a formal plan to investigate work areas and techniques; this review would then expand into an over-all systems study that envisioned examination of everything from source-data pickup to finished management action or information reports. He agreed.

EDP planning at Chrysler is the staff responsibility of the Manager, Systems and Procedures. Data processing is an important management instrument in our company, and we have tailored our systems and procedures organization to place proper emphasis, and yet not overemphasis, upon the subject of data processing. Our management believes that a correct balance must prevail between the work and programming effort of good systems-concepts planning and of mechanical and EDP planning people.

An Assistant Manager, Corporate Systems and Procedures, has been delegated the specific and full-time responsibility of EDP and EAM planning and control. His organization comprises three sections, each headed by a supervisor; they include 1) Operations Research, 2) EDP and EAM Administration and Control, and 3) Planning. Our Planning Section is made up of eight employees who have between 3 and 8 years of practical operating and staff experience in this specific field. Most have college degrees in mathematics, in business, or in engineering; however, some of our finest staff work has been accomplished by personnel with much less formal education.

Initially our planning was purposely geared on a broad basis to securing factual data and, on the first pass, to pointing out specific areas in which immediate new or remedial work should be initiated. Our preliminary
plan was half completed when we discovered a need: a need for a better way to handle those jobs (in a large industrial complex) that, because of low unit costs, minimum elapsed-time requirements, and little additional communication cost, made a large computer a natural.
As in any flexible planning technique, new base lines
and facts were developed, and after proper delineation
and documentation of the new computer's planning and
operating organization, the over-all plan was reinstated.
You will note that specific and detailed operating planning, after a management decision has been made, is
separated from the staff planning work and in effect proceeds through its own planning cycle within a specific
predetermined scope and time limitation.
The broad-gauge, corporate-wide planning is then
continued with new factors and perhaps with new
ground rules that have resulted from the major decision
passed down by the top management. Continuation does
not mean starting over, but it does embrace a review of
the planning process to date. This planning process entails 1) reaffirmation of objectives, 2) analysis of the
new situation, 3) determination of planning routes and
manpower requirements, 4) choice of alternate course to be initiated, 5) conversion of choice into action steps,
6) creation of internal and external communication
channels, 7) determination of planning and methods of
appraisal, and 8) then a procession through these same
steps after evaluations indicate that changes in the
planning process should be incorporated.
Specifically, our current plan, which will be presented again to top management in the near future, has the following objective:

It is the objective of the Corporate Systems and Procedures Department, through the development and eventual approval and
adoption of this and succeeding planning efforts, to provide Chrysler
Corporation with the most effective management information systems in the Industry.
The need for constant and continual planning is a fundamental
requirement to successfully integrate, coordinate and effect such a
dynamic data handling system. Competitively this is a must for
Chrysler.
In general, we have just scratched the surface in the data handling
field with the mechanization of some of the "safe" clerical functions.
These functions have provided valuable experience in data processing
techniques. On occasion, we have used the logical abilities of data
handling equipment to provide meaningful data for the control and
regulation of manufacturing processes and to directly help to improve
our product quality. However, at the present time most of the data
generated merely reports the status of operations, produces necessary
checks and notices, but is rarely employed to provide recommended
actions for timely operational decisions. Ultimately, a data handling
system will be evolved that will provide this needed decision-making
information, and select action courses for more consistent and precise
control at all levels in the organization. The program to effect this
plan will not remain static for any long period of time, but must be
continually reviewed to reflect new data processing product development, new systems concepts, and progressive changes within the
corporation.

After formally stating a purpose, it is frequently necessary to state the base line, or present situation, which
helps form the scope and direction of our planning in
the minds of the planners and decision makers. The
present status should be concise and complete; any decisions that have been firmly made, but not as yet implemented, should be "baked in" as part of the plan
initiating point. As decisions are made, this part of a
plan package will change; such decisions should be formally reflected and should be documented so that at
any time that management decision making may occur
(and this process could be almost continuous) an up-to-date document can be presented to insure fact-founded decisions.
A typical Present Status should include:
1) A statement and exhibits to indicate present data-processing installations.
2) An exhibit indicating the size, geographical location of these installations, along with an indication
of communication media.
3) The basic type of equipment employed and their
cost or rental.
4) An analysis of the number of people involved in
data processing and the annual cost including
fringe benefits.
5) A statement and chart that indicates the current
organizational placement of data-processing components throughout the corporation.
6) A chart indicating functions and work areas that
are being handled by data processing for each installation within the corporation.
7) A detailed statement indicating the history of a
representative data-processing component; how it
grew, what jobs it is now performing, what basic
costs would be incurred if automated techniques
were not employed.
8) An indication of the training and supervisory
talents that present job incumbents possess.
9) Examples of areas in which improper control and
improper organization, lack of documentation,
poor systems concepts, etc., are contributing to
erratic and faulty data processing.
Current planning progress should then be indicated
to show positive staff and line, central and decentralized work effort that has resulted from the over-all
planning efforts. This should include such progress as:
1) Training programs initiated by corporate or machine manufacturers to increase the scope of employees' work efforts through greater knowledge
of new concepts or machines.
2) Training programs to teach supervisors and machine procedural planners the latest administrative, documentation, and control techniques.
3) Installation progress of approved EDP equipment
or of new or combined EAM installations.
4) Major conceptual developments initiated at any
EDP installation.
5) Disclosure of preliminary detail plans that look
promising (although in which sufficient staff work
has not been accomplished to secure final decision).


6) Development of community or employee communications designed to "sell" data processing.
7) Progress made in report and problem-solution areas entirely foreign to data processing at this time, using techniques such as simulation or tailoring mathematical models of problem areas, thereby establishing entirely different methods for better and more accurate management decisions.

The next section of a planning package may be a review of the development of short-range programs, as
they relate to a long-range program. This program development review should include:
1) A concise statement of problem areas; personnel

outside of the planning section should be asked to
supplement and/or modify a list of these areas.
2) A brief statement as to the proper remedial action
that must be taken to correct existing problem
areas. Again, the considered opinion of line and
staff executives should be requested during the
formulation of the short-range action plan.
3) New problem areas currently not processed mechanically or electronically; these should be defined and should be investigated in terms of possible application and solution by use of different
source information or processing techniques.
4) A firm schedule against which periodic progress
reports may be measured.

5) Using the aforementioned four steps, a general statement recording 1-, 3-, and 5-year goals of information handling should be spelled out. This
should include a statement specifying the planning
philosophy and the general direction in which
the plan is approaching the stated objectives.

In closing, EDP planning is a repetitive, dynamic process that, to be effective, is flexible and, when intermediate decisions are made, should be re-created. It
may be worthless if used as a pure academic exercise. A
plan is considered and then may be rejected or partially
or completely adopted. In any event, it has served a
purpose. EDP planning should be tailored to the total
enterprise, to top management individuals, and to the
background of previous good or bad experiences with
EAM or EDP applications. Planning recognizes that
staff members do not have a patent on brains. Factual
and idea contribution must be encouraged and developed. It should not always result in an EDP application; in many cases, byproducts of EDP plans and feasibility studies are more valuable than the EDP results
themselves.
De-emphasizing hardware, EDP planning should place much more stress upon approach concepts and related facts.
EDP planning should be kept practical and should
engender continuing enthusiasm on the part of the planning staff. It is not an end in itself. The only reward of
good planning is its influence upon management, which
results in the achievement of stated objectives.

Dynamic Production Scheduling of Job-Shop Operations on the IBM 704 Data-Processing Equipment
L. N. CAPLAN† AND V. L. SCHATZ†

† General Elec. Co., Business Systems Computations, Evendale, Ohio.

FIRST, let us explain what we mean by job-shop operations. In this case we are talking about our Jet Engine Department in the General Electric Company, Aircraft Gas Turbine Division. The Jet Engine Department is the step between the basic concept development and the production shop. Their manufacturing facility is responsible for building engines and components and for testing these engines and components to prove practicability of design, serviceability, manufacturability, and reliability. The prototype engines are also within their manufacturing responsibility.

From this, one can visualize the type of operation
about which we are speaking. It is largely one where
Engineering requests parts to be manufactured for installation into an assembly which is to be tested. The
important thing is that there are a large number of requestors and relatively few items per request. Stated
another way, this means no schedule of incoming work.
Hence, the name, job shop. Fig. 1 represents the flow of
operations just discussed.
Now that we have seen what a job shop represents
in our case, consider the size of our operation. There are
approximately 9000 operations on order in the manufacturing shop at any one time. These operations may
represent two or three thousand requests with several

machine operations per request.

[Fig. 1-Jet Engine Department operations diagram: Engineering requests flow through Production Scheduling to Manufacturing and Assembly, with prototype engines, testing, and field testing downstream.]

This shop contains 50
machine-tool areas. An area in this case represents machines which would perform one type of function-for
example, 5 lathes or 3 grinders or 15 inspectors. The
cycle time for one job through the shop averages about
22 days.
At this point the problem becomes evident. We must
be able to provide a schedule which will process work
through the shop in the most efficient manner. We must
work to a schedule that is set by the requestors' due
dates. We must minimize waiting time for the part requested, we must minimize farm-out to vendors while
idle time exists in our own shop, and we must minimize
the idle operator time in our own machine shop. Also,
we would like maximum utilization of our expensive
machine tools. The scheduling must be completed
quickly; otherwise, it is obsolete by the time it is to be
used. In addition to the size of the job to be done, other
complications to scheduling include the necessity to
make allowance for weekends, allowance for holidays,
consideration for overtime authorizations for extra
shifts, holidays and weekends, sequential machine operations, simultaneous completion of machine operations on all parts going into an assembly, and operations
being performed by outside vendors. It is difficult to
develop the start and finish time for each job when all
these facts are considered.
A punched-card and tabulator system had been used
with some degree of success in the past. The primary
output of this system was a punched card provided to
the shop foremen showing the job to be worked on and
the starting date. In spite of the fact that this is a relatively small shop, the scheduling became nearly impossible on tab equipment. The scheduling was inefficient
to the extent that we were having to farm out work and
at the same time we had idle machine time in our own
shop. The resultant loss ran into thousands of dollars
monthly. Also, in one case we were scheduling for two
months' backlog of grinding work when actually less
than two days' work was available for grinding.
This was more or less the climax which brought about
consideration of the computer by the Production Scheduling people. Initially they were discouraged because of
some of the things they had been told. For example, a
great portion of the machine work would be sorting of
the 9000 work-information cards into priority sequence
and then resorting into job-order sequence. The job number is 19 digits in length. Therefore, to sort and resort the information deck would require passing 342,000 (9000 X 2 X 19) cards through a conventional sorter. This would take well over 10 hours, and the schedule would be obsolete before the sorting was completed. The opinion was that sorting of this type would be prohibitive in cost on
a computer. A second opinion held that the great number of exceptions would make this scheduling impractical to program for a computer. Nevertheless, Production Scheduling brought the problem to the Computations facility to see what could be done. After some
study, we decided that the job could be accomplished on
our 704. Painstaking programming would be required
and new techniques or so-called "tricks" could be developed to do the sorting in a practical time. The existing card system was adapted to computer handling with
a bare minimum of change. One man from Production
Scheduling, working half-time, and a man from Computations, working full-time, completed and placed the
program in operation in about 10 months.
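The sorter estimate quoted above is easy to check. In the sketch below (present-day Python, used purely for illustration; the sorter speed is our assumption, not a figure from the paper), a conventional card sorter makes one pass per digit of the key:

    # Back-of-the-envelope check of the card-pass estimate.
    cards = 9000               # operation cards in the deck
    key_digits = 19            # length of the job number
    sorts = 2                  # sort into priority order, then re-sort into job order
    passes = cards * key_digits * sorts
    print(passes)              # 342,000 card passes, as stated in the text

    rate = 450                 # cards per minute (assumed sorter speed)
    print(passes / rate / 60)  # about 12.7 hours, i.e., "well over 10 hours"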
Let us consider the system and the computer's role.
When a work authorization is received by the machine
shop, the scheduling group assigns a job number and
priority, and breaks the order down into operations
and estimated hours required for each operation. Cards
are then made up for each operation. The card contains
priority, job number, operation number, type of overtime allowed, men required, work area and alternate
work area, a minimum starting date, estimated hours,
part name, quantity, and some other information nonpertinent to the computer operation. The cards for new
work are brought to Computations on a weekly basis
along with cards on work in process. A date card to tell
the machine when the scheduling begins heads the deck.
Priority cards are placed ahead of the operation's cards.
(See Fig. 2.) Within Computations these cards, 9000 on
the average, are placed on tape via card-to-tape converter. The tape and program are placed on the machine. As the tape reads into the machine, the information is checked for possible errors, such as blanks in a
numerical field, impossible data such as February 29th,
non-leap year, and 31 days in 30-day months, or out-of-sequence job numbers. Some errors may be corrected. For
example, if a 31st day is found in a 30-day month, the
machine will make the day read as the first day of the
next month. All errors are listed by job and operation
number. If 50 errors are detected in the input deck, the
operator is notified by an on-line comment to halt the
program. The computer then prints the rest of the errors, if any, in the deck. As the information is being
checked, it is also being transformed into binary words
so that all scheduling information for the 9000 cards
may be fitted into the 32,000-word memory at one time.
Even this is not possible, so the job number (two words) is placed on a tape sequentially. The technique for sorting
this information is as follows. The memory was split into
three parts. Storage 0-9000 will contain words with the priority number and sequence number packed into the same word; 9001-18,000 contains words of
scheduling information in sequential order; and 18,001-27,000 contains other words of scheduling information sequentially. (See Fig. 3.)

[Fig. 2-IBM cards making up data deck: a production scheduling card (fields include priority, job number, planned hours, overtime code, quantity, multiple operators, not-before date, work area, and alternate work area), a priority card, and a date card.]

[Fig. 3-Illustration of memory division for sorting purposes: locations 0-9000 hold the priority (18 bits) and sequence number (18 bits) packed into one word; 9001-18,000 and 18,001-27,000 hold the companion scheduling-information words in sequential order.]
The first 9000 words are sorted into priority sequence.
By looking at the first word and adding 9000 and 18,000
respectively, the computer may pick up the second and
third words belonging to that operation. Sorting into
order requires approximately two minutes. We write
this on a tape. Similarly, we read back into spaces
9001-18,000 and 18,001-27,000 the job number and pick
it up in the same manner as above to merge with the
tape containing our scheduling information. Finally, we
have a tape containing the operations, including job
number written in priority order. This tape feeds back
into the computer and scheduling takes place. Assemblies and sequential operations are taken care of by using
buckets to hold dates until the required pieces are finished. (See Fig. 4.) From this, we get a tape containing
start dates in priority order and sequence number. This
tape is read into memory and merged with our original
input tape to place a starting date on each operation.
(See Fig. 5.) The result is punched out as dispatch cards
for the shop areas. Of course, while scheduling is going
on, the computer is tabulating load per area per day. If
maximum capacity is reached in any area per day, the
tabulation and schedule move to the next day.
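A minimal sketch of the tag sort just described, transcribed into present-day Python (the names are ours, and ordinary tuples stand in for 704 words): only the packed priority-and-sequence words are sorted, and the recovered sequence number then locates each operation's companion words, which in the 704 sat at the fixed offsets of 9000 and 18,000.

    # Sketch of the memory-partitioned tag sort: pack (priority, sequence
    # number) into one word, sort those words only, then use the sequence
    # number as an index into the companion regions.

    def tag_sort(ops):
        # ops: up to 9000 tuples (priority, info_word_2, info_word_3).
        # One packed tag word per operation: priority in the high 18 bits,
        # the operation's original sequence number in the low 18 bits.
        tags = [(prio << 18) | seq for seq, (prio, _, _) in enumerate(ops)]
        tags.sort()                        # only the small tag words move
        for tag in tags:                   # gather in priority order
            seq = tag & ((1 << 18) - 1)    # low 18 bits: sequence number
            # The companion words sit at fixed offsets in the paper's
            # scheme; here they are simply the rest of ops[seq].
            yield (tag >> 18,) + ops[seq][1:]

    # Example: operations with priorities 3, 1, 2 come out ordered 1, 2, 3.
    for op in tag_sort([(3, 'a', 'b'), (1, 'c', 'd'), (2, 'e', 'f')]):
        print(op)

Sorting 9000 packed words in memory in this fashion is what allowed roughly two minutes of machine time to replace the 342,000 card passes estimated earlier.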

[Fig. 4-Example of machine-tool area loading logic, with decision points such as "milling on cone not completed" and "overtime authorized on this job."]

The printed work load (see Fig. 6) is sent to the Production Scheduling Office. The number at the far left
denotes the machine tool areas, such as grinding, milling, etc. The schedule starts as of the date printed at the
top of the page. The work allotted to each area each day
for the next 42 days is printed. Finally, the area's maximum capacity per day is printed on the far right. If they
feel they can do better on work load, they may reshuffle
priority and run again. (Our future plans are to have the
computer determine optimum schedule.) The punched
cards are sent to the machine tool areas. The workmen
fill in the hours worked or finished on the card and the
cards are returned to the scheduling office to be remade
for next week's run. That is, if a workman has completed 15 hours of a 40-hour job, he marks 15 on the
card. The scheduling office will remake the card with
25 instead of 40 estimated hours and place it in the deck
to be sent to Computations. (See Fig. 7.)
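The loading rule can be sketched as follows; this is our own simplified model, not the 704 program, and it ignores weekends, overtime authorizations, alternate work areas, and assembly buckets:

    # Simplified day-loading: walk the operations in priority order and
    # fill each machine-tool area day by day, spilling work into the next
    # day whenever an area's daily capacity is exhausted.

    def load_days(operations, capacity, horizon=42):
        # operations: list of (area, hours), already sorted by priority
        # capacity:   dict mapping area -> hours available per day
        load = {area: [0.0] * horizon for area in capacity}
        start_day = []
        for area, hours in operations:
            day, started = 0, None
            while hours > 0 and day < horizon:
                free = capacity[area] - load[area][day]
                if free > 0:
                    done = min(free, hours)
                    load[area][day] += done
                    hours -= done
                    if started is None:
                        started = day      # first day any work is loaded
                day += 1                   # area full or job done: next day
            start_day.append(started)
        return start_day, load             # dispatch dates and a Fig. 6-style load table

    # Example: two 10-hour grinding jobs against an 8-hour-per-day area.
    starts, load = load_days([('grind', 10), ('grind', 10)], {'grind': 8})
    print(starts)              # [0, 1]: the second job cannot start until day 1
    print(load['grind'][:4])   # [8.0, 8.0, 4.0, 0.0]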
Some interesting sidelights regarding the computer

Caplan and Schatz: Scheduling of Job-Shop Operations on the IBM 704
Fig. 5 - Output for Scheduling Office. (Tabular listing of jobs; columns: Part Name, Area and operation number, Planned Hrs., Job Number, and Start Date.)

Fig. 6 - Listing of punched-card output. (PRODUCTION SCHEDULING, DAY BY DATE LOAD, dated 09/22/58: for each area - 201, 203, 206, 207, 208, 209 - the work load in hours for each of days 1 to 42, with the area's capacity per day printed at the right: 48.0, 32.0, 32.0, 32.0, 32.0, and 16.0 hrs., respectively.)


Fig. 7 - Information flow to Computations facility and back to Production Shop. (Flow diagram: new and old jobs enter as punched cards; cards are created for new jobs, and a new set is re-created, factoring in the latest conditions, for rescheduling; the machine tool operator completes or modifies each card; the computer schedules and reschedules, producing the work load by day for the next month and the vendor load and delivery schedule.)

Some interesting sidelights regarding the computer are the following. Our 32K memory is completely filled
twice during running. Six tape units are required. The
computer running time is 30 minutes. In addition, two
hours' handling time is required to prepare the input.
At one time an experimental task force tried to do the
complete scheduling job by hand. By the end of four
days of round-the-clock work, they threw up their
hands. The machine system has now been in operation
several months with a resultant 4 per cent drop in idle
labor. Savings the first year, above the cost of the program, are estimated at a minimum of $31,000, but are
probably much greater.
In summary, the advantages of the system are
1) realistic, up-to-date and on-time schedules,

2) better work loading for minimum idle time of men
and machines,
3) compact and easy method of placing work in areas,
4) use of the program to predict results of crash programs on delivery schedules and dates.
An added advantage not foreseen is that a tool dispatcher now places tools at all machines prior to the
workmen starting on the job, instead of having the
workmen obtain their own tooling after receiving the
job. This is possible since we know what type of work
will be at each machine tool and on what day. Delays
in schedule due to tool shortages no longer occur.
As for future plans, we are making a study of scheduling the entire integrated operation as seen in Fig. 1.


Numerical Methods for High-Speed Computers - A Survey*
GEORGE E. FORSYTHE†
CLASSIFICATION OF PROBLEMS

NUMERICAL analysis is the science and art of using digital computing machines to carry out scientific computations, excluding pure data processing. Because of the recent changes in computers, the field is dominated today by the problems of using the large stored-program digital computers. Analog computers, though important both for their output and as a source of methods (too seldom exploited) for digital computers, are not considered here. It is possible to give a rough mathematical classification of the types of computing problems ordinarily met. The following classification is adapted from Forsythe [4].

Approximation
1) Evaluate functions of one or more variables. Integrate, differentiate, and interpolate them. Sum a series.
2) Approximate functions of one or more variables
by simpler functions in the sense of least squares, least
deviation, or other norms.
3) Test empirical data for blunders and fill in missing
data. Smooth such data, and fit them with typical
curves. Integrate and differentiate them. Analyze their
spectra.

Algebra and Number Theory
4) Solve a number of simultaneous algebraic or transcendental equations.
a) Linear.
b) Nonlinear, including algebraic eigenvalue problems and polynomial equations.
5) Solve systems of linear or nonlinear inequalities.
a) Programming problems.
b) Problems from game theory.
6) Combinatorial problems (dealing with functions
of permutations).
7) Problems from number theory.

Analysis and Functional Equations
8) Find the maximum of a function of one or more
variables.
9) Solve initial-value problems for one or more
ordinary or partial differential equations.
10) Solve boundary-value problems for one or more
ordinary or partial differential equations.
11) Solve problems for other functional equations,
including, for example,

* The writing of this exposition was sponsored by the Office of Naval Res., under Contract Nonr-225(37), NR 044 211.
† Stanford University, Stanford, Calif.

a) Differential equations with retarded arguments.
b) Difference-differential equations.
c) Conformal mapping problems.

Simulation
12) Simulation of random noise-for example, with a
prescribed power spectrum.
13) Simulation of physical systems as an alternative
to setting up equations for them, e.g., traffic flow, or
animal nervous systems.
THE WIDE SCOPE OF NUMERICAL ANALYSIS

In treating any of the above classes of problems, there
is a very wide class of activities which comes under the
heading "numerical analysis," and an additional group
of almost inseparable activities called "mathematical
analysis" at the origin of the problem, and "programming" or "coding" on the machine end. Let us briefly review these.

Problem Formulation
The formulation of the mathematical problem to be
solved is applied mathematical analysis, and not numerical analysis. However, the computational feasibility of
the problem is a very important factor in the selection
of a mathematical problem by which to model a given
physical problem.

Replacement by an Algebraic Problem
Many of the mathematical problems of problem
formulation A are not algebraic. Basically, digital computers can only add, subtract, multiply, and divide.
Hence, to be tractable by a computer a problem must be
algebraic-dealing with only a finite number of variables. The reduction of a transcendental problem to a
related algebraic problem (or sequence of algebraic
problems) may be called "discretization," and the corresponding error, the "discretization error." The study
of discretization is an important part of numerical
analysis.

Design of a Theoretical Algorithm
Having an algebraic problem to deal with, numerical
analysts design algorithms with which to solve it. At
this stage it is expedient and customary to think of ideal algorithms in which exact arithmetic operations are
carried out with real numbers. Some algorithms are
"direct," terminating with the answer to the algebraic
problem in a finite number of steps. Others are "iterative," and attain the answer only as the limit of a sequence of steps.

250

1959 PROCEEDINGS OF THE WESTERN JOINT COMPUTER CONFERENCE

Convergence and Errors of Iterative Algorithms
Numerical analysts study the convergence and other
asymptotic properties of iterative algorithms. The fact
of convergence is almost essential to their use, while the
speed of convergence is economically important. Knowledge of the asymptotic behavior assists one in accelerating the convergence, which is almost always too slow.
Finally, at a higher level of sophistication, one ought to
have at any stage of the iteration a means of bounding
the difference between the current iterate and the exact
solution of the algebraic problem.

Digitalization and Round-Off Error
We know that digital computers do not deal with real
numbers, but only with rational numbers. In practice,
computers usually use only special kinds of rational
numbers-short terminating fractions to base 2 or 10,
called "digital numbers." The arithmetic operations
heretofore discussed are replaced by various pseudo-operations. This introduces round-off errors and raises
all kinds of difficult questions about the algorithm and
its utility. These, too, are treated by numerical analysis, and represent a major point of contact with actual computers.

Programming and Coding
Actually getting problems on a digital computer, the
goal of programming and coding, is so closely related to
numerical analysis that it is difficult to separate them.
The selection of an algorithm is bound up with the available means for approximating it digitally on a computer. For many problems, like the solution of Dirichlet's
problem over a complicated region, the machine determination of difference equations (hand determination would be unthinkable for large problems) requires
a code far more difficult than the machine solution of
the equations.
The wide range of activity naturally forces practicing
numerical analysts to specialize. I think it is vital that
such specialization be along "vertical" lines, i.e., that
each analyst deal with problems from their formulation
right through their coding and operation, confining
himself, if necessary, to a few problems. In this way the
feedback between machine peculiarities and problem
formulation is most apt to be effective. This feedback
should lead to the most rapid evolution of mathematical
methods for using the new computers. As an immediate
benefit, such a plan of operation should be the truest
safeguard against major blunders, for such blunders
arise most easily in areas of misunderstanding between
different responsible persons.
Unfortunately, the organization of mathematical
groups in many companies is in just the opposite direction. Engineering groups and mathematical analysis
groups emerge, as well as programming groups and
machine operating groups, and there is resistance to relaxed communication among them. It would be a great

contribution to the use of computers-and to our entire
technology-if some organizational genius could find a
method of keeping problems in the hands of very small
responsible groups from their inception to their solution.
CURRENT STATUS OF THE MATHEMATICAL AREAS

It is manifestly impossible to survey here the current
status of all areas of numerical analysis. Let me merely
point to a few sources of new research in the fields, following the outline of the first section. Many exceedingly
important new contributions will necessarily fail to be
mentioned here.

Approximation
A major conference on approximation was held at the
Army Mathematical Center in Madison, Wis., in April,
1958. The proceedings of that conference, to be published, will be an excellent synopsis of current thought
on approximating functions. Problems of numerical
integration, differentiation, etc., can be given a formulation in terms of finding nearest vectors in such function
spaces as Hilbert space. (See Sard [16].) While more
conventional approaches, based largely on polynomial
interpolation, are probably an adequate basis for dealing with functions of one variable, it is probable that
approximation problems for functions of two or more
variables will be solved effectively only in terms of function spaces. And it was quite clear at the Madison conference that the great need now is for suitable methods
for integrating, interpolating, and approximating functions of several variables.
The problem of estimating the power spectrum of a
random process from a sample time function has become
exceedingly important in modern engineering. The
problem is one of statistical estimation, and progress is
coming largely from statisticians like Tukey [13].
One area in approximation theory, that of minimax
approximation, is receiving considerable attention. It
has been known for years that polynomial and rational
functions of least deviation from f(x) over an interval
are characterized by residuals with a certain equal-alternation property. Since a computer can generate
any rational function of x, and only rational functions
of x (we ignore problems of digitalization), it is natural
to seek rational approximations to the common transcendental functions of analysis. Hastings and his collaborators [6] have published a book of useful approximations obtained by hand-tailoring methods. But it is
widely agreed that digital computers can and should
generate such approximations. Work of Remez [14] led
to an algorithm by Novodvorskii and Pinsker [11] for
this purpose. The algorithm has been improved and
coded at various computer centers, but it is not yet clear
that a best procedure has been found. The problem is
really one of linear programming with an infinite number of conditions.
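As a small numerical aside - not the Remez exchange algorithm itself - interpolating a transcendental function at Chebyshev nodes already yields a nearly least-deviation polynomial, and its residual shows the equal-alternation behavior just described. A sketch, assuming only the NumPy library:

    # Interpolate cos x at Chebyshev nodes on [0, pi/2]; the residual
    # oscillates with nearly equal extrema, the characteristic property of
    # the best (least-deviation) approximation. Illustrative only.
    import numpy as np

    deg = 5
    a, b = 0.0, np.pi / 2
    k = np.arange(deg + 1)
    nodes = (a + b) / 2 + (b - a) / 2 * np.cos((2 * k + 1) * np.pi / (2 * deg + 2))

    coeffs = np.polyfit(nodes, np.cos(nodes), deg)   # the interpolant
    x = np.linspace(a, b, 2001)
    residual = np.cos(x) - np.polyval(coeffs, x)
    print("max error on [0, pi/2]:", np.abs(residual).max())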

While it is an attractive and interesting numerical
analysis problem, for example, to find the rational function of degree 10 which best approximates cos x over an
interval, this may not in fact be the most appropriate
approximation to cos x in practice. It may be that a
piecewise quadratic approximation would be faster to
use, only slightly more bulky to store, and easier to determine. Professor C. B. Tompkins has mentioned that
such an approximation is very easily tailored by an
automatic computer to fit detailed specifications. Thus
we must always ask whether numerical analysts are in
fact applying their talents to solve the right problems.

Algebra
Problems of linear algebra and the solution of polynomial equations have been studied extensively during
the present decade. In his 1958 lectures in Ann Arbor,
Wilkinson [19] greatly increased our understanding of
several common methods for these problems. His forthcoming book will contain the same ~ma terial and more,
while a little of the material is already available [2].
Wilkinson's most important basic contribution seemed
to be a satisfactory and realistic analysis of the round-off
error in Crout's method for Gaussian elimination. The
importance of "searching for a pivot" was demonstrated
conclusively.
Wilkinson discusses the calculation of the eigenvectors of a tridiagonal matrix, a necessary part of either
the Givens or Lanczos methods for solving the complete
eigenvalue-eigenvector problem for finite matrices [20].
He also gave the following startling example of the instability of the roots of a polynomial. Let
f(x) = (x + 1)(x + 2) ... (x + 20) = x^20 + 210x^19 + ...

with zeros -1, -2, ..., -20. Suppose just one coefficient of f(x) is slightly perturbed, say because of rounding, so that instead of f(x) one deals with g(x) = f(x) + 2^-23 x^19. Then Wilkinson has found that the zeros of g include numbers near -20.846, and -13.99 ± 2.5i. In the face of this evidence, no one should ever again blindly ask any automatic routine to find the zeros of a polynomial of importance!
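The example is easy to reproduce today. The sketch below uses NumPy's root-finder (not, of course, available to Wilkinson); note that merely storing the coefficients of f in floating point perturbs them further, so the computed zeros will not match Wilkinson's figures exactly.

    # Form f(x) = (x + 1)(x + 2) ... (x + 20), perturb the x^19 coefficient
    # (which is 210) by 2**-23, and recompute the zeros.
    import numpy as np

    f = np.poly(np.arange(-1.0, -21.0, -1.0))   # coefficients, descending powers
    g = f.copy()
    g[1] += 2.0 ** -23                          # the perturbation in the text

    for z in sorted(np.roots(g), key=lambda z: z.real):
        print(f"{z:.3f}")                       # several zeros are now complex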
Householder [7] has reported a new algorithm which
might be an improvement on Givens' very excellent
routine for getting eigenvalues of symmetric matrices.
For unsymmetric matrices the eigenvalue problem remains one for which there are many routines, with none
of them outstanding enough to become standard.
Ostrowski [12] has published one of the first papers
bounding the variation in the eigenvalues of an unsymmetric matrix, due to perturbations in its elements.
This has very important consequences for round-off
error. It will be important to improve Ostrowski's results for various special cases where root multiplicity can
be guaranteed to be less than the order of the matrix.

251

I shall not attempt to report on the progress in dealing
with inequalities, combinatorial problems, or number
theory.

Functional Equations
The status of the numerical solution of ordinary differential equations is being reported in a forthcoming
book by Henrici, starting from the points of view of
Rutishauser, Dahlquist, and others. The well-known
methods of Milne [9] have generally proved unstable on
automatic computers, although recent modifications
may save them [10].
A recent book by Richtmyer [15] treats difference
methods for marching problems of partial differential
equations. A forthcoming book by Forsythe and Wasow
will deal with difference methods for linear partial differential equations of second order, including both
marching and jury problems. The concept of stability
will be quantified somewhat for marching problems.
The work of Young, and Peaceman-Rachford and successors, will be recorded. Work by Kahan and others on
overrelaxation without Young's property A will be mentioned.
As indicated earlier, a large practical problem in putting Dirichlet's problem on a computer is to generate the
difference equations. A very substantial project, called
SPADE, is operating under SHARE to automate the
generation and solution of difference equations. Such
work is as important as it is difficult, and needs solid
sponsorship.
The work of Gerschgorin is vital in bounding the discretization error due to replacing an elliptic boundary-value problem by difference equations. The Gerschgorin result has been extended by Wasow [18]. Recent
Russian work is extending the results to cases where the
unknown solution does not have bounded third derivatives. (See Volkov [17].)
The Gerschgorin theory has not been developed to
deal with internal interfaces, and it would be highly desirable to make the extension. There are many questions open for investigation in connection with graded
nets, regions with corners, and so on.

Simulation
About simulation there is only one topic I wish to
mention. An increasingly important area in modern
engineering is that of random noise. The simulation of
random noise is important in many ways, and methods
for analog computers are discussed, for example, by
Laning and Battin [8]. The author holds that a well-designed digital computer can do anything an analog
computer can, and better, provided that the coders
know what to code. It seems to me time that someone
investigated and wrote about the simulation on a digital
computer of random noise with a prescribed power
spectrum or autocorrelation function.

252

1959 PROCEEDINGS OF THE WESTERN JOINT COMPUTER CONFERENCE

TREATMENT OF ERROR

Before the advent of computers one thought about discretization errors, and these are still important. In dealing with desk computations one had to worry about arithmetic blunders, but these are comparatively rare in automatic computation. But the characteristic error in much automatic computation is an error due to the digitalization of the numbers in a computer, with resultant approximation of the arithmetic operations, together with the cumulative effects of these errors through numerous subsequent operations. One now finds several points of view emerging with regard to these errors.

1) Let me illustrate one new point of view in the context of the solution of a linear algebraic system Ax = b, although it has far wider applications. Here A is a given nonsingular matrix of digital elements, while b is a given vector of digital components. Let h = A^-1 b be the true solution of the system. Suppose one has found a digital vector x which approximately solves the system. How does one measure the error in x?

The conventional concept of error is the vector x - h, or some number ||x - h|| which measures its smallness. The vector x - h answers the question, how wrong is x as a solution of the problem with the given data A, b? It is comparatively difficult to estimate ||x - h||, since it requires knowledge of the size of A^-1.

Givens [5] and Wilkinson [19] are asking a different question in formulating the concept of the error in solving Ax = b. Namely, how necessary is it to alter the given data A, b to new values A1, b1, in order that the approximate solution x correctly solve the altered problem A1 x = b1? They consider the matrix A - A1 and the vector b - b1 as a definition of the error in x. Again, they might adopt some number of the form ε = {||A - A1||² + ||b - b1||²}^1/2 as a measure of the smallness of A - A1 and b - b1.

Of course A1 and b1 are not uniquely determined by x. With suitably chosen norm functions one could force uniqueness by demanding that A1, b1 be closest to A, b, in the sense of minimizing the ε defined above; let min ε denote that minimum value. However, it would probably be difficult to compute min ε or the minimizing quantities A1, b1, and all that is really needed is a reasonable upper bound for min ε. The real power of the Givens-Wilkinson suggestion is that it is easy to bound min ε in solving linear systems by a number of methods. I shall not be able to go into this, except to mention the use of the residual b - Ax.

One simple way to bound min ε has been used for a long time. Let b1 = Ax, and let A1 = A. Then A1 x = b1, and min ε ≤ ||b - b1|| = ||b - Ax||. Here one has concentrated the necessary alteration of the data in the vector b. The vector b - Ax is easy to compute and is often already available as a byproduct of a solution of Ax = b. The idea of measuring the error in x by the size of the residual b - Ax is not a new one, of course, but it has usually been considered only as an expedient substitute for the "real error" x - h. What is new in recent thinking is to consider that such a number as ||b - Ax|| (or the more general ε) may often be a fair measure of the error in solving Ax = b. Of course, the decision as to what is an appropriate measure of the error in solving a system can only be decided after an examination of the physical problem underlying the computation.

2) A second major development is the advent of codes which precisely bound the error of a calculation. Perhaps the most promising are the range arithmetic codes prepared under the supervision of Ramon Moore at the Lockheed Missile Systems Division, Palo Alto, Calif. In these codes, the operands are the range numbers of Dwyer [3], i.e., real intervals. The answer to a range multiplication, for example, is the smallest digital interval which includes all products of pairs of numbers chosen from the intervals of both factors.

Range arithmetic seems to be the most efficient form of error computation by a machine, and machine computation seems far easier and more efficient than human bounding of errors. It is to be hoped that range arithmetic will find its way into all algebraic translators, as an optional form of arithmetic.

3) There have been suggestions by Wilkinson, Carr, and others, that problems be computed several times, for example once each with eight, nine, ten, eleven, twelve, and thirteen decimals. From a comparison of the answers, one should be able to guess the round-off errors quite satisfactorily in some problems. Such ability to do arithmetic with variable word length is not easily available on most machines, but should be made available.

4) Givens has emphasized the importance of having programs which print out guaranteed bounds for answers to problems. For example, if a polynomial is input (perhaps by means of its coefficients as range numbers), an ideal routine might output in effect several rectangles in the complex plane, together with the number of zeros in each rectangle. Such a routine should be without the slightest mathematical error; its assertions should be known to be correct for a specified class of polynomials. And, if the input polynomial turns out not to be in that class, that fact should be output.

One knows in principle how to write such routines for various types of problems. What is lacking is definitive experience with them, for very few have been actually written. With polynomial zero-finders, it is certain that acceptably small rectangles can be found, by use of multiple-precision arithmetic, if necessary. But for ordinary differential equations, for example, it may be that no realistic solution bounds could ordinarily be found, if they must be rigorously correct. A good way to settle this question is to accumulate much experience with well-written codes taking full advantage of the speed of current machines.
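Both points 1) and 2) are easy to make concrete in present-day terms. The sketch below is illustrative only: the matrix is invented, and a genuine range-arithmetic code would also round interval endpoints outward to digital numbers.

    # 1) The residual ||b - Ax|| bounds min eps: x exactly solves the altered
    #    problem A1 x = b1 with A1 = A and b1 = Ax.
    import numpy as np

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    x = np.linalg.solve(A, b) + 1e-6       # a deliberately imperfect "solution"
    print("bound on min eps:", np.linalg.norm(b - A @ x))

    # 2) Range (interval) multiplication: the smallest interval containing
    #    all products of pairs drawn from the two factor intervals.
    def range_mul(lo1, hi1, lo2, hi2):
        p = (lo1 * lo2, lo1 * hi2, hi1 * lo2, hi1 * hi2)
        return min(p), max(p)

    print(range_mul(-1.0, 2.0, 3.0, 4.0))  # (-4.0, 8.0)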

FREE EXPERIMENTATION AND TEAMWORK OF
MAN AND MACHINE


It is safe to say that we are all using automatic digital computing machines very badly. It is characteristic of humans to use new tools as though they were extensions of old tools, and only to devise appropriate new methods very slowly. It would be most desirable to speed up our process of accommodation to the new machines, since otherwise one wonders if numerical analysts will ever catch up with computer engineers!

I suppose that the surest road to progress is to encourage imaginative experimentation by those close to machines, and especially to attract imaginative people to computer laboratories. I feel it would help if machine administrators would allow - yes, and encourage - their most imaginative numerical analysts to play personally with the newest machines as much as possible.

In the early days of automatic computation, research workers with important computing problems, physical or mathematical, would always be present when their computations were being carried out on automatic computers. They would help trouble-shoot the code, and use the preliminary results to suggest new cases to try and parameters to vary. The same method of operation is customary now with analog computation.

However, it was discovered that inquisitive research workers would waste machine time while they pondered the problem, and probably they sometimes got in the way of the machine operators. As a result, in many companies programmers and research workers are effectively discouraged from attending machine runs. Elaborate automonitoring and other efficient routines enable the machine to operate automatically for hours at a time, running codes which are preassembled on tapes. As a result, the apparent efficiency of a computing laboratory increases, when measured in good operating time per month. However, the real efficiency may well be found to decrease, when measured in problems solved per month. The reason is that the machine is being deprived of its most valuable component - an intelligent human who knows the problem. Knowing he will not be present at the console, the coder tries to cover every eventuality, and sometimes asks for ten times as much computing as would be necessary if he could be present. Is this efficiency?

There are sound economic reasons for insisting on reasonable efficiency in the machine room, and I do not advocate wasteful practices. But the gain from reasonable efficiency should include the gain from reasonable use of man as the most important ally of the machine. I feel that if a scientist wishes to work with the machine, it is up to the administrators to design methods whereby this collaboration can be efficiently and effectively carried out. Machine designers should also bear this in mind, and provide enough visual monitors and enough conveniently adjustable console registers, so that man and machine may intercommunicate quickly and effectively. In this way man can return to the automatic digital computing loop in the effective way he is present in other types of computation.

It must be admitted that many administrators disagree with the last paragraph. They feel that an automatic digital computing loop can operate quite effectively at the slower rate which occurs with unattended runs, when the analyst ponders the output overnight, returning next day with his new input. And the collaboration of man and machine may sometimes be more effective because of the extra time spent by the analyst in pondering his next move.¹

I am sure that unattended runs are often adequate, and especially often for the more easily analyzed and better understood classes of problems. But it is characteristic of difficult problems that they are only imperfectly understood, so that the analyst may be in grave doubt about the choice of computing method and about various other matters, and even about where trouble will occur. But even at high speed it is often possible to read orders of magnitude from console registers and thus monitor a computation. For some problems an analyst in attendance can come prepared with various cards, for example, and can interpose changed values or alternate codes very early in a long computation, at a large net saving of computer time over just letting the machine run unattended. Can't we agree to give a scientist a few seconds to make such adjustments during a run? This is not to ask for many minutes for pondering a surprising turn of events - that would be very wasteful by any criterion.

I suppose the decision as to attending a run should be the decision of how a creative but responsible man feels he can best operate. A creative scientist is a rare person, and in his research should be permitted wide latitude in choosing his hours, location, and methods - including his technique of collaboration with a machine. In fundamental terms, computers can be manufactured in great numbers, and there are many persons capable of administering their operation. The really critical shortage remains that of creative scientists and engineers who can make effective use of their tools. Hence, at least on any problem of importance, the time of a scientist is really more valuable than that of a machine, and all obstacles should be cleared away from his use of a machine in the way he feels will solve his problem most effectively. No one should ever compare a creative man with a machine on the basis of their dollar costs per hour!

¹ The author is indebted to Dr. Walter F. Bauer for an exchange of ideas on this matter.

LITERATURE ON NUMERICAL ANALYSIS

In a field growing as fast as numerical analysis, it is difficult even to learn the names of important new books and journals, let alone read them. I shall mention here


the names of a few journals which have a fair number of
articles or abstracts on the numerical analysis appropriate to high-speed digital computers.

Journals
Chiffres (France), The Computer Journal (England), Communications of the Association for Computing Machinery, Journal of the Association for Computing Machinery, Journal of Research of the National Bureau of Standards, Journal of the Society for Industrial and Applied Mathematics, Mathematical Reviews, Mathematical Tables and Other Aids to Computation, Zeitschrift für numerische Mathematik (West Germany), Proceedings of the Cambridge Philosophical Society, Quarterly of Applied Mathematics, Quarterly Journal of Mechanics and Applied Mathematics (England), Referativnyi Zhurnal-Matematika (Soviet Union), Review of the Society for Industrial and Applied Mathematics, Vychislitel'naia Matematika (Soviet Union), Zeitschrift für angewandte Mathematik und Mechanik (East Germany), Zeitschrift für angewandte Mathematik und Physik (Switzerland), Zentralblatt für Mathematik und ihre Grenzgebiete.

Any library associated with a digital computer center should have current subscriptions to all these journals, together with as complete a back file as possible.
Books
Books on numerical analysis which take real account
of automatic computation have been slow to appear,
perhaps because the field changes so rapidly. I should
like merely to call your attention to two books which
are too recent to be well known. Complete citations are
in the bibliography below. The first is "Modern Computing Methods" [2], written anonymously by four
mathematicians in the National Physical Laboratories,
near London. It is thin but full of good material by persons intimately involved in computation, both before
and after the computer revolution. The bibliography of
128 titles is completely annotated.
The second is a general survey [1] of machines by
Alt, including coding and problem analysis. There is a
12-page bibliography, implicitly annotated by cross-references to the text.

REFERENCES

[1] F. L. Alt, "Electronic Digital Computers, Their Use in Science and Engineering," The Academic Press, Inc., New York, N. Y., and London, Eng., 336 pp.; 1958.
[2] Anonymous, "Modern Computing Methods," The Philosophical Library, Inc., New York, N. Y., 129 pp.; 1958 (said to be written by L. Fox, E. T. Goodwin, F. W. J. Olver, and J. H. Wilkinson).
[3] P. S. Dwyer, "Linear Computations," John Wiley and Sons, Inc., New York, N. Y., 344 pp.; 1951.
[4] G. E. Forsythe, "Contemporary state of numerical analysis," in G. E. Forsythe and P. C. Rosenbloom, "Numerical Analysis and Partial Differential Equations," John Wiley and Sons, Inc., New York, N. Y., pp. 1-42; 1958.
[5] W. Givens, "Numerical Computation of the Characteristic Values of a Real Symmetric Matrix," Oak Ridge Natl. Lab., Oak Ridge, Tenn., Rep. ORNL 1574, 107 pp.; 1954.
[6] C. Hastings, Jr., "Approximations for Digital Computers," Princeton University Press, Princeton, N. J., 201 pp.; 1955.
[7] A. S. Householder, "Some Mathematical Problems Arising in Matrix Computations," Oak Ridge Natl. Lab., Oak Ridge, Tenn.
[8] J. H. Laning, Jr. and R. H. Battin, "Random Processes in Automatic Control," McGraw-Hill Book Co., Inc., New York, N. Y., 434 pp.; 1956.
[9] W. E. Milne, "Numerical Solution of Differential Equations," John Wiley and Sons, Inc., New York, N. Y., 275 pp.; 1953.
[10] W. E. Milne and R. R. Reynolds, "Stability of a numerical solution of differential equations," J. Assoc. Comput. Mach., to be published.
[11] E. N. Novodvorskii and I. S. Pinsker, "On a process of equalization of maxima" (Russian), Uspehi Matem. Nauk, vol. 6, no. 6, pp. 174-181; 1951.
[12] A. Ostrowski, "Mathematische Miszellen XXVII. Über die Stetigkeit von charakteristischen Wurzeln in Abhängigkeit von den Matrizenelementen," Jahresber. Deutsch. Math. Verein., vol. 60, pp. 40-42; July, 1957.
[13] H. Press and J. W. Tukey, "Power Spectral Methods of Analysis and their Application to Problems in Airplane Dynamics," reprinted by Bell Telephone System as Monograph 2606, 41 pp.; 1956.
[14] E. I. Remez, "Obshchie Vychislitel'nye Metody Chebyshevskogo Priblizheniia," Akademiya Nauk Ukrainskoi SSR, Kiev, U.S.S.R., 454 pp.; 1957.
[15] R. D. Richtmyer, "Difference Methods for Initial-Value Problems," Interscience Publishers, Inc., New York, N. Y., 238 pp.; 1957.
[16] A. Sard, "Best approximate integration formulas; best approximation formulas," Amer. J. Math., vol. 71, pp. 80-91; January, 1949.
[17] E. A. Volkov, "On the question of solving the interior Dirichlet problem for Laplace's equation by the method of nets" (Russian), Vychislitel'naia Matemat., Akademiya Nauk SSSR, Moscow, U.S.S.R., no. 1, pp. 34-61; 1957.
[18] W. Wasow, "Discrete approximations to elliptic differential equations," Z. angew. Math. Phys., vol. 6, pp. 81-97; March, 1955.
[19] J. H. Wilkinson, lecture notes on matrix methods, to be published by the University of Michigan as part of notes on Advanced Numerical Analysis for Summer, 1958, by the University of Michigan Engineering Summer Conferences, East Engineering Bldg., Ann Arbor, Mich.
[20] J. H. Wilkinson, "The calculation of the eigenvectors of codiagonal matrices," Computer J., vol. 1, pp. 90-96; July, 1958.


More Accurate Linear Least Squares
RICHARD E. VON HOLDT†

† Lawrence Rad. Lab., University of California, Livermore, Calif.
THE LINEAR LEAST-SQUARES PROBLEM

Let A be a given matrix of m rows and n columns, (m ≥ n), so that AW = 0 implies W = 0 (i.e., the column vectors of A constitute a linearly independent set), and let Y be a given m-dimensional vector. We seek an n-dimensional vector X̂, so that |R(X̂)|² is the minimum value of |R(X)|², where

R(X) = AX - Y.  (1)

GEOMETRIC DERIVATION OF THE CLASSICAL SOLUTION TO THE LINEAR LEAST-SQUARES PROBLEM

Let S be that subspace of m-dimensional Euclidean space which is spanned by the column vectors of A. Then for arbitrary X, AX is a vector in S, and R(X) is a vector with initial point at the terminal point of Y and terminal point in S.

Let AX̂ be the orthogonal projection of Y onto S. Then

R(X̂) = AX̂ - Y  (2)

is orthogonal to S, or

AᵀR(X̂) = 0.  (3)

For arbitrary X, we have

R(X) = [R(X) - R(X̂)] + R(X̂) = A(X - X̂) + R(X̂).  (4)

From (3) and (4), for arbitrary X, we have

|R(X)|² = |A(X - X̂)|² + |R(X̂)|² ≥ |R(X̂)|².

Thus |R(X̂)|² is the minimum value of |R(X)|² for arbitrary X. Substituting (2) into (3), X̂ must satisfy the relation:

AᵀAX̂ = AᵀY.  (5)

From the hypothesis on A, we have

AᵀAW = 0 → WᵀAᵀAW = |AW|² = 0 → W = 0.  (6)

Hence AᵀA is a nonsingular n × n matrix and (5) has a unique solution.

THE SOLUTION OF (5) BY DIAGONAL PIVOTS

Let M_k, (k = 1, 2, ..., n), be the matrix obtained from A by deleting all but the first k columns of A. Then the columns of M_k form a linearly independent set and by the argument of (6), M_kᵀM_k is a nonsingular matrix, (k = 1, 2, ..., n).

Let P_1 be the determinant of M_1ᵀM_1 and P_k be the determinant of M_kᵀM_k divided by the determinant of M_(k-1)ᵀM_(k-1), (k = 2, 3, ..., n). Then

P_k > 0, (k = 1, 2, ..., n).  (7)

Using the diagonal elements, in increasing order, as pivots, and combining proper multiples of each row into all following rows to produce zeros below the pivot elements in the column containing the pivots in (5) does not change the value of any of the minors of AᵀA formed by deleting all but the first k rows and all but the first k columns of AᵀA. Thus this process of replacing (5) by an equivalent upper-triangular system of equations yields the successive nonzero pivots:

P_1, P_2, ..., P_n.  (8)

The above described process is equivalent to premultiplying both sides of (5) by a lower-triangular matrix L, with unit diagonal elements, yielding

LAᵀAX̂ = LAᵀY,  (9)

and this upper-triangular system is solved by back substitution.

THE SOLUTION OF (5) BY ORTHOGONALIZATION

Using the columns of A, in increasing order, as pivot columns, and combining proper multiples of each pivot column into all following columns so that the resulting columns are orthogonal to the pivot column, replaces the columns of A by an orthogonal basis for S. Let the matrix which results be denoted by B. Then

B = AU,  (10)

where U is an upper-triangular matrix with unit diagonal elements, and furthermore

BᵀB = D,  (11)

where D is a diagonal matrix.

Premultiplying (5) by Uᵀ and replacing X̂ by UU⁻¹X̂, we have

UᵀAᵀAU(U⁻¹X̂) = D(U⁻¹X̂) = BᵀY,  (12)

and from this,

X̂ = UD⁻¹BᵀY.  (13)

COMPARISON OF METHODS

Since both LAᵀA and Lᵀ are upper-triangular matrices, their product, LAᵀALᵀ, is also upper triangular, besides being symmetric, and is therefore a diagonal matrix. Thus ALᵀ is a matrix of mutually orthogonal columns and since Lᵀ is upper-triangular with unit diagonal elements,

Lᵀ = U.  (14)

Again, since Lᵀ is upper-triangular with unit diagonal elements, the diagonal elements of LAᵀA and D = UᵀAᵀAU = LAᵀALᵀ are identical, or

d_k = P_k, (k = 1, 2, ..., n),  (15)

where d_k denotes the kth diagonal element of D.

Let A_k and B_k be the kth columns of A and B, respectively, and let ε_k be a scalar defined by

|B_k| = ε_k|A_k|, (k = 1, 2, ..., n).  (16)

Since B_k is a projection of A_k, we have

0 < ε_k ≤ 1, (k = 1, 2, ..., n),  (17)

and ε_k is a measure of the figure loss encountered in constructing B_k from A_k by orthogonalization, and there is no further figure loss by cancellation in computing

|B_k|² = B_kᵀB_k, (k = 1, 2, ..., n).  (18)

In the method of diagonal pivots, |B_k|² is formed from |A_k|² by repeated subtractions, and since

|B_k|² = ε_k²|A_k|²,  (19)

our measure of the figure loss in this method is ε_k². Thus the method of orthogonalization has half the figure loss of the method of diagonal pivots.

R(X̂) AS A BY-PRODUCT OF ORTHOGONALIZATION

Let Z_k, (k = 1, 2, ..., n + 1), be defined by

Z_1 = Y  (20)

and

Z_(k+1) = Y - Σ(j=1 to k) B_j(B_jᵀZ_j)/(B_jᵀB_j), (k = 1, 2, ..., n).  (21)

Then

B_(k+1)ᵀZ_(k+1) = B_(k+1)ᵀY, (k = 1, 2, ..., n - 1),  (22)

since the columns of B are mutually orthogonal. Setting k = n in (21) and using (22), we have

Z_(n+1) = Y - Σ(j=1 to n) B_j(B_jᵀY)/(B_jᵀB_j).  (23)

Thus Z_(n+1) is the component of Y orthogonal to S, or

Z_(n+1) = -R(X̂).  (24)

THE INVERSE OF AᵀA AFTER THE APPLICATION OF THE METHOD OF ORTHOGONALIZATION

From (10), (11), and (14), we have

LAᵀALᵀ = D.  (25)

Since L and Lᵀ are nonsingular,

AᵀA = L⁻¹D(Lᵀ)⁻¹  (26)

and

(AᵀA)⁻¹ = LᵀD⁻¹L.  (27)

Since the calculation of D by the method of orthogonalization has half the figure loss of the method of diagonal pivots, the former method using (27) yields a computationally more accurate inverse. The elements of this inverse matrix are useful to statisticians in the Theory of Error Analysis.

DETAILS OF THE METHOD OF ORTHOGONALIZATION

The matrix U = Lᵀ need not be formed explicitly except in the evaluation of (AᵀA)⁻¹ by (27).

Let U_k be the matrix which is the n × n identity except for the elements in the kth row to the right of the diagonal. These elements are the multiples of the kth column of B which are added to the corresponding following columns to yield new following columns, which are orthogonal to the kth column. Then

U = U_1U_2 ... U_(n-1).  (28)

When: 1) the nonidentity elements of U_k have been formed and used to orthogonalize the following columns and Z_k to B_k; 2) B_kᵀY = B_kᵀZ_k has been formed; and 3) |B_k|² has been formed, then B_k is no longer needed and the storage cells used for B_k are now available for storing the nonidentity elements of U_k and the scalar B_kᵀY = (BᵀY)_k. Repeating this process until k = n, X̂ is evaluated by

X̂ = U_1U_2 ... U_(n-1)D⁻¹(BᵀZ),  (29)

where we take advantage of all the known zero elements of the matrices involved in performing the indicated matrix premultiplications.

For calculation of (AᵀA)⁻¹, we have from (27) and (14)

(AᵀA)⁻¹ = (L_(n-1)L_(n-2) ... L_2L_1)ᵀD⁻¹(L_(n-1)L_(n-2) ... L_2L_1).  (30)

The nonidentity elements of the product

L_k(L_(k-1)L_(k-2) ... L_1)

may be stored in the locations occupied by the nonidentity elements of the two factors, (k = 2, ..., n - 1). Having formed L, the diagonal and subdiagonal elements of (AᵀA)⁻¹ can be formed and stored in the locations occupied by D and L.

CONCLUSION

Although the number of operations involved is greater in the method of orthogonalization than in the method of diagonal pivots, the increased accuracy is well worth the time and effort. It is to be noted that the method of orthogonalization for weighted polynomial fitting is equivalent to forming a set of weighted orthogonal polynomials, fitting the data to these polynomials, and reducing the combination of these polynomials to a single polynomial in the manner of Tchebycheff.
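In present-day matrix terms the method of orthogonalization is column-by-column (modified Gram-Schmidt) orthogonalization. The sketch below is a floating-point illustration of (10)-(13), not the paper's storage-overwriting routine; the test matrix and vector are invented.

    # A minimal sketch of least squares by the method of orthogonalization,
    # eqs. (10)-(13): B = AU with mutually orthogonal columns, D = B^T B
    # diagonal, and X = U D^{-1} B^T Y. The data below are invented.
    import numpy as np

    def lstsq_orthogonalization(A, Y):
        m, n = A.shape
        B = A.astype(float).copy()      # becomes an orthogonal basis for S
        U = np.eye(n)                   # upper triangular, unit diagonal: B = A U
        for k in range(n):              # pivot columns in increasing order
            for j in range(k + 1, n):
                # Combine a multiple of pivot column k into column j so the
                # new column j is orthogonal to column k.
                c = (B[:, k] @ B[:, j]) / (B[:, k] @ B[:, k])
                B[:, j] -= c * B[:, k]
                U[:, j] -= c * U[:, k]
        d = (B * B).sum(axis=0)         # diagonal of D = B^T B, eq. (11)
        return U @ ((B.T @ Y) / d)      # X = U D^{-1} B^T Y, eq. (13)

    A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
    Y = np.array([1.0, 2.0, 2.0])
    print(lstsq_orthogonalization(A, Y))        # least-squares solution
    print(np.linalg.solve(A.T @ A, A.T @ Y))    # normal equations (5) agree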


The CORDIC Computing Technique
JACK VOLDER†

† Convair, Fort Worth, Tex.

THE "COordinate Rotation DIgital Computer" computing technique can be used to solve, in one computing operation and with equal speed, the relationships involved in plane coordinate rotation; conversion from rectangular to polar coordinates; multiplication; division; or the conversion between a binary and a mixed-radix system.

The CORDIC computer can be described as an entire transfer computer with a special serial arithmetic unit, consisting of 3 shift registers, 3 adder-subtractors, and special interconnections. The arithmetic unit performs a sequence of simultaneous conditional additions or subtractions of shifted numbers to each register. This performance is similar to a division operation in a conventional computer.

Only the trigonometric algorithms used in the CORDIC computing technique will be covered in this paper. These algorithms are suitable only for use with a binary code. This fact possibly accounts for their late appearance as a numerical computing technique. Matrix theory, complex-number theory, or trigonometric identities can be used to prove rigorously these algorithms. However, to help give a more intuitive and pictorial understanding of the basic technique, plane trigonometry and analytical geometry are used in this explanation whenever possible.

First, consider two given coordinate components Y_i and X_i in the plane coordinate system shown in Fig. 1. The subscript i, as used in this report, will identify all quantities with a particular step in the computing sequence. The given components, Y_i and X_i, actually describe a vector of magnitude R_i at an angle θ_i from the origin according to the relationships,

Y_i = R_i sin θ_i  (1)
X_i = R_i cos θ_i.  (2)

[Fig. 1 - Geometry of a typical rotation step.]

With a very simple control of an arithmetic unit operating in a binary code, the sign of a number can be changed and/or the number can be divided by a power of two. Thus, if it is assumed that the numerical values of Y_i and X_i are available, the numerical values of both coordinates of one of the proportional quadrature vectors, R_i', can be easily obtained:

Y_i' = 2^-j X_i  (3)
X_i' = -2^-j Y_i  (4)

where j is a positive integer or zero.

The vector addition of R_i' to R_i, by the algebraic addition of corresponding components, produces the following relationships:

Y_(i+1) = √(1 + 2^-2j) R_i sin (θ_i + tan^-1 2^-j) = Y_i + 2^-j X_i  (5)
X_(i+1) = √(1 + 2^-2j) R_i cos (θ_i + tan^-1 2^-j) = X_i - 2^-j Y_i  (6)
R_(i+1) = √(1 + 2^-2j) R_i.  (7)

Likewise, the addition of the other proportional quadrature vector, at θ - 90°, to the vector R_i produces the following relationships:

Y_(i+1) = √(1 + 2^-2j) R_i sin (θ_i - tan^-1 2^-j) = Y_i - 2^-j X_i  (8)
X_(i+1) = √(1 + 2^-2j) R_i cos (θ_i - tan^-1 2^-j) = X_i + 2^-j Y_i  (9)
R_(i+1) = √(1 + 2^-2j) R_i.  (10)

If the numerical values of the components Y_i and X_i are available, either of the two sets of components Y_(i+1) and X_(i+1) may be obtained in one word-addition time with a special arithmetic unit (as shown in Fig. 2) operating serially in a binary code.

[Fig. 2 - Arithmetic unit for cross addition: a Y register and an X register, each feeding an adder-subtractor through shift gates.]

This particular operation of simultaneously adding (or subtracting) the shifted X value to Y and subtracting (or adding) the shifted Y value to X is termed "cross addition."
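In present-day notation one cross addition is a pair of shifted additions. A brief sketch, with floats standing in for the serial binary registers of Fig. 2:

    # One CORDIC cross-addition step, eqs. (5)-(7): multiplication by 2**-j
    # stands in for the shift, and the magnitude grows by sqrt(1 + 2**(-2*j)).
    import math

    def cross_add(y, x, j, direction=+1):
        """Rotate (x, y) by +-atan(2**-j); the result is scaled, not unit."""
        return y + direction * x * 2.0 ** -j, x - direction * y * 2.0 ** -j

    y1, x1 = cross_add(0.0, 1.0, j=2)
    print(math.degrees(math.atan2(y1, x1)))   # atan(1/4), about 14.04 degrees
    print(math.hypot(x1, y1))                 # sqrt(1 + 2**-4), about 1.0308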


The effect of either of these two choices can be considered as a rotation of the vector R_i through the special angle plus (or minus) α_i, where

α_i = tan^-1 2^-j,  (11)

accompanied by an increase in magnitude of each component by the factor √(1 + 2^-2j).

Note that this increase in magnitude is a function of the value of the exponent j and is independent of whichever of the two choices of direction is used. If a particular value of j is specified to correspond to a particular value of i in the general expression, and if it is specified that, for every ith term, one and only one of the two permissible directions of rotation is used to obtain the i+1 terms, then the choice may be identified by the binary variable ξ_i, where ξ_i = +1 for positive rotation, or -1 for negative rotation. This gives a general expression for the i+1 terms as

Y_(i+1) = √(1 + 2^-2j) R_i sin (θ_i + ξ_iα_i) = Y_i + ξ_i 2^-j X_i  (12)
X_(i+1) = √(1 + 2^-2j) R_i cos (θ_i + ξ_iα_i) = X_i - ξ_i 2^-j Y_i  (13)
R_(i+1) = √(1 + 2^-2j) R_i.  (14)

After the components Y_(i+1) and X_(i+1) are obtained, another similar operation can be undertaken to obtain the i+2 terms:

Y_(i+2) = √(1 + 2^-2(j+1)) [√(1 + 2^-2j) R_i sin (θ_i + ξ_iα_i + ξ_(i+1)α_(i+1))] = Y_(i+1) + ξ_(i+1) 2^-(j+1) X_(i+1)  (15)
X_(i+2) = √(1 + 2^-2(j+1)) [√(1 + 2^-2j) R_i cos (θ_i + ξ_iα_i + ξ_(i+1)α_(i+1))] = X_(i+1) - ξ_(i+1) 2^-(j+1) Y_(i+1)  (16)
R_(i+2) = √(1 + 2^-2(j+1)) √(1 + 2^-2j) R_i.  (17)

Likewise, the pseudo-rotation steps can be continued for any finite, pre-established number of steps of pre-established increments but arbitrary values of sign. After these steps have been completed, the increase in magnitude of the vector as a result of these steps will be the constant factor

√(1 + 2^-2j) √(1 + 2^-2(j+1)) √(1 + 2^-2(j+2)) ... √(1 + 2^-2m).  (18)

The effective angular rotation A of the vector system will be the value of the algebraic summation of the individual rotations,

A = ξ_1α_1 + ξ_2α_2 + ... + ξ_nα_n,  (19)

where

α_i = tan^-1 2^-j  (20)

and

ξ_i = +1 or -1.  (21)

Therefore, although the magnitude of each individual rotation step is fixed, there now appears the possibility that, by an appropriate choice for each ξ, the algebraic summation of all steps can be made to equal any desired angle.

The requirements for making this sequence of steps suitable for use with any angle as the basis of a computing technique are: 1) a value must be determined for each angle α_i so that for any angle θ from -180° to +180° there is at least one set of values for the ξ operators that will satisfy (19), and 2) these chosen values must permit the use of a simple technique for determining the value of each ξ to specify A.

The following relationships are necessary and sufficient for a sequence of constants to meet these requirements:

180° ≤ α_1 + α_2 + α_3 + ... + α_n + α_n  (22)
α_i ≤ α_(i+1) + α_(i+2) + ... + α_n + α_n.  (23)

The following sequence meets the requirements of (22), (23), and (11):

First term: α_1 = 90°  (24)
Second term: α_2 = tan^-1 2^0 = 45°  (25)
Third term: α_3 = tan^-1 2^-1 ≈ 26.5°  (26)
General term: α_i = tan^-1 2^-(i-2), (i > 1).  (27)

Any angle can now be represented by the expression

A = ξ_1(90°) + ξ_2 tan^-1 2^0 + ξ_3 tan^-1 2^-1 + ... + ξ_n tan^-1 2^-(n-2).  (28)

The combination of values of the operators ξ_1ξ_2 ... ξ_n forms a special binary code which is based on a system of Arc Tangent Radices and will be identified as the ATR code. The values of α selected for this computing technique will be called ATR (Arc Tangent Radix) constants.

In addition, only one more term is required in this ATR system than that required in a perfect binary-radix system for equivalent angular resolution:

(n - 1)th term of perfect binary system (± variable) = 2^-n revolutions  (29)

nth term of ATR system (for large n) ≈ 2^-(n-1)/π revolutions  (30)

2^-(n-1)/π < 2^-n.  (31)

Note that all terms except the first are terms of the natural sequence tan^-1 2^-j (j = 0, 1, 2, etc.) and may be instrumented as shown in Fig. 2.

The computation step corresponding to the most significant radix is simply

Y_2 = R_1 sin (θ_1 + ξ_1 90°) = ξ_1X_1  (32)
X_2 = R_1 cos (θ_1 + ξ_1 90°) = -ξ_1Y_1  (33)

where

θ_2 = θ_1 + ξ_1 90°.  (34)

This step is unique in that no magnitude change is introduced. It may be instrumented with the same circuitry required for all of the other steps by simply disabling the direct input to the adder-subtractor during this step.

The change in magnitude of the components resulting from the use of all of the terms in the series of (28) is the constant factor

√(1 + 2^-0) √(1 + 2^-2) √(1 + 2^-4) ... √(1 + 2^-2(n-2)).  (35)

The value of this magnitude factor is a function of n which can be a constant for any given computer. By arbitrarily solving for the factor for n = 24 and by denoting the value of the magnitude change factor as K,

K = 1.646760255.  (36)

At this point, these individual steps can be fitted into a complete computing technique. Of the two basic algorithms that will be described here, the problem of "vectoring" will be considered first. Vectoring is the term given to the conversion from rectangular-coordinate components to polar coordinates; that is, given the Y and X components of a vector, the vector magnitude R and its angular argument θ are to be computed. In this technique, R and θ are computed simultaneously and in separate register locations.

First consider the problem of computing R. Except for the known magnitude change K, the Pythagorean relationship of the coordinate components is maintained regardless of the value of the summation of rotation angles, A. Then, if the individual directions of rotation, ξ_i, can be controlled so that, after the end of the computing sequence, the Y component is zero and the X component is positive,

R_(n+1) = √(X_(n+1)² + Y_(n+1)²) = X_(n+1) = K√(X_1² + Y_1²) = KR_1.  (37)

The technique for driving θ to zero is based on a numerical nulling sequence similar to nonrestoring division.

Since the vector R_i is described only in terms of its rectangular-coordinate components, the angle of this vector θ_i from the origin (positive X axis) is not known. However, if θ_i is considered to be expressed in a form so that

|θ_i| ≤ 180°,  (38)

then it can be shown that the sign of the Y_i component always corresponds to the sign of the angle θ_i.

Therefore, before each step of the computation, the sign of Y_i may be examined to determine which of the two possible values of ξ_i will drive θ_(i+1) opposite in direction from θ_i, and the action of the adder-subtractor may then be set accordingly to obtain the relationship

|θ_(i+1)| = | |θ_i| - α_i |.  (39)

Regardless of the choice for ξ_i, the shift gates and the register gates can be controlled so that the increments of rotation prescribed by the sequence of ATR constants are used in the same order (most significant first) as shown in (24)-(27).

It can be shown that, by adding another term α_n to the summation of all ATR constants, the summation is greater than or equal to 180° for any value of n:

α_1 + α_2 + α_3 + ... + α_n + α_n ≥ 180°.  (40)

By expressing the angle of any vector in the form given by (38), the following relationship is obtained:

|θ_1| ≤ α_1 + α_2 + α_3 + ... + α_n + α_n.  (41)

Although it has been previously stated, without proof, it can be readily shown that, for the ith term,

α_i ≤ α_(i+1) + α_(i+2) + ... + α_n + α_n.  (42)

Therefore, if the same rules given for (39) are applied to determine θ_(i+1),

-α_1 ≤ |θ_1| - α_1 ≤ α_2 + α_3 + ... + α_n + α_n.  (43)

Then, by applying the inequality of (42) to the left-hand term of the above equation,

|θ_2| = | |θ_1| - α_1 | ≤ α_2 + α_3 + ... + α_n + α_n.  (44)

Likewise, this process may be continued through α_n to give

|θ_(n+1)| ≤ α_n.  (45)

As an illustration of the step-by-step value of the vector, as described by the coordinate components at each step during the vectoring operation, consider the example in Fig. 3.

[Fig. 3 - Step-by-step relationships during nulling: the vector R_1 is rotated through successive positions R_2, ..., R_8 as the Y component is driven to zero.]

If, at the end of the computing sequence, the coordinate components specify a θ_(n+1) equal to zero, the total amount of rotation performed was equal in magnitude but opposite in sign to the angle θ_1 as specified by the original coordinate components Y_1 and X_1.
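A floating-point sketch of the complete vectoring sequence may make the nulling concrete. It illustrates (32)-(33) and (12)-(13) with each ξ_i chosen from the sign of Y_i; it models the arithmetic, not the serial register hardware, and the test vector is invented.

    # A minimal sketch of CORDIC vectoring, eqs. (37)-(45): the sign of Y picks
    # each rotation direction, Y is nulled, X accumulates K*R1, and the angle
    # register accumulates theta_1. Floats stand in for the binary registers.
    import math

    def cordic_vector(y1, x1, n=24):
        # Step 1: a pure +-90 degree rotation, eqs. (32)-(34).
        xi = -1.0 if y1 > 0 else 1.0        # rotate opposite to the angle
        y, x = xi * x1, -xi * y1
        angle = -xi * (math.pi / 2)         # register accumulates -xi*alpha
        # Remaining steps: cross additions with ATR constants atan(2**-j).
        for j in range(n - 1):
            xi = -1.0 if y > 0 else 1.0
            y, x = y + xi * 2.0 ** -j * x, x - xi * 2.0 ** -j * y
            angle -= xi * math.atan(2.0 ** -j)
        return x, angle                     # x = K*R1, angle ~ theta_1

    K, _ = cordic_vector(0.0, 1.0)          # a unit vector scales to K
    x, theta = cordic_vector(3.0, 4.0)      # R1 = 5, theta_1 ~ 36.87 degrees
    print(K)                                # ~1.646760255, eq. (36)
    print(x / K, math.degrees(theta))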


At this point another register (identified as the angle register) and another adder-subtractor may be introduced, and it shall be assumed that the numerical value of each of the preselected ATR constants is stored within the computer and can be made available to the arithmetic unit in the same order as specified in (28). Since each ξ controls the action of the cross addition, each may also control the action of the additional adder-subtractor, so that a subtraction or an addition of the numerical value of the corresponding ATR constant may be made simultaneously in the angle register to an accumulating sum to obtain the numerical value of the original angle θ_1. Then, at the end of the computation, the desired numerical value of θ_1 is in this additional register, and the quantity KR_1 is in the X register.

A block diagram of the complete arithmetic unit necessary for computing both R_1 and θ_1 is shown in Fig. 4.

TABLE I
TYPICAL VECTOR COMPUTING SEQUENCE

(Binary contents of the Y register, X register, and angle register at each step of a vectoring operation: starting from Y_1 = 0.0101110 and X_1 = 1.1000101, successive cross additions drive the Y register to zero while the X register ends at 0.1111100 = KR_1 and the angle register accumulates the corresponding tan^-1 terms.)

p(j, j) = 1 - Σ'_k p(j, k),  (15)

where

A(j, k) = 1/(8Nδ³), if r(α; k) = r(α; j), α = 1, 2, ..., β - 1, β + 1, ..., N, and r(β; k) ∈ C_δ[r(β; j)], β = 1, 2, ..., N;
= 0, otherwise.  (16)

¹ For a case in which the Monte Carlo method leads to an evaluation of Z_N, see Salsburg, et al. [4].

In (16) C_δ[r(β; j)] denotes a cube of half edge δ, oriented parallel to the coordinate planes, with center at r(β; j); the latter notation means the position vector of molecule β in configuration j. The length δ is a parameter of the Monte Carlo process whose value is found to influence the rate of convergence of the process. It is ordinarily small (see Section IV) compared to the edge of the cube V, and in (16) it is expressed in units of the smallest increment in the molecular coordinates; in our calculations, therefore, in units of 2^-17 times the edge of the cube V.
The motivation underlying this specification of p(j, k) is as follows. The factor A(j, k) corresponds to a uniform choice among the limited number of neighboring states contained in C_δ; this limitation is related to the small mean free path in molecular systems at interesting densities. The unsymmetrical form of the Boltzmann weighting factor (14) is chosen in preference to the alternative symmetrical form
    p(j, k) = A(j, k) · e^(−U(k)/kT) / [e^(−U(j)/kT) + e^(−U(k)/kT)],   (17)

mostly for computational convenience, since it leads to
fewer calculations of the exponential function (in the
case of more complicated models than hard spheres),
but also because the unsymmetrical form leads to a
more rapid motion in configuration space.
For hard spheres (14) reduces to

    p(j, k) = A(j, k),  if U(k) = 0,
            = 0,        if U(k) = ∞,   (18)

assuming U(j) = 0, the only case with which we will be concerned since the initial state of the chain has U = 0.
It can be easily shown that if u(r) has only point infinities (say, at r = 0), then p(j, k) defined above also satisfies the ergodic condition. However, the hard sphere u(r) is infinite over a finite interval in r, with the result that U in 3N dimensional space is infinite over regions of considerable extent, from which the state point representing our system is excluded. The topological question of whether the accessible region in which U is zero is then connected or not, depending on the values of N/V and σ, is one to which the answer is unknown, and as a result the equivalence of the Markov and "ensemble" averages is somewhat doubtful. Under such circumstances the p(j, k) defined above may separate the configuration states into separate ergodic classes. Within each class (i.e., depending on the choice of initial configuration) the Markov average will converge to an average equivalent to the ensemble average taken over a restricted region of configuration space. However, it should also be mentioned that the p(j, k) defined by (14) and (15) may actually connect spatially disconnected regions of configuration space, if the thickness of the dividing barrier is not too large compared to the parameter δ. (This is the only circumstance in which the value of δ will affect the convergent average of the Markov chain.)


Such a compartmentalization of configuration space also raises the well-known quasi-ergodic problem in statistical mechanics concerning the equivalence of "ensemble" averages to the time-average behavior of an actual physical system of N molecules moving according to the Newtonian equations of motion for the given potential function. In this connection we call attention to the work of Alder and Wainwright [6], in which these equations of motion for the same systems of hard spheres with periodic boundary conditions have been integrated. Thus it is possible to compare the results of this kinetic theory calculation with the present statistical calculation. In a later part of the paper some comparisons of this sort will be made.
As mentioned already, the topological question of the connectivity of configuration space has not been solved. It is quite evident, for example, that at sufficiently high densities, near close packed, it will not be possible to interchange the positions of two molecules without crossing a region in which U is infinite. Configuration space is then evidently compartmentalized into something like N! disconnected regions. However, for systems of N identical molecules (i.e., a one-component system) this compartmentalization is of no consequence since all ensemble averages have identical values in all such compartments (i.e., for hard spheres, the volumes of the compartments are all the same). It is the question of whether, associated with this trivial partitioning of the configuration space, there is a further, nontrivial partitioning into regions of different properties, which is uncertain. In particular, for example, if such a nontrivial compartmentalization does occur, how does it depend on the choice of N, at constant N/V and σ? Is it an artifact of the periodic boundary condition? We will have occasion to return to these questions when we describe below the results obtained with the method.

III. CALCULATIONAL PROCEDURES

It is convenient to take as unit of distance the edge of the cubical volume V, so that in these units V = 1. For a given choice of N, specification of τ then determines the value of σ, by (11). We first outline briefly, then discuss in greater detail, the steps by which the calculator is caused to develop a particular sample chain according to the Markov process defined by (14)-(16), for a system of hard spheres.
1) Assume that at "time" t the system is in the state j(t), corresponding to subscript j in (14)-(16).
2) A random choice of one of the N molecules is made, corresponding to subscript β in (16). We call this molecule β(t).
3) Each coordinate of molecule β(t) is given a tentative random displacement, uniform on the interval (−δ, δ), corresponding to r(β, k) in (16). Call the configuration in which molecule β(t) is in this displaced position, and the other N − 1 molecules α have position r(α, j(t)), configuration j′(t).
4) Configuration j′(t) is tested for overlaps; i.e., one or more pairs of molecules whose distance between centers is less than the molecular diameter σ.
5) a) If no overlap is found in Step 4, then the next configuration in the chain is j′(t); i.e., j(t + 1) = j′(t).
b) If configuration j′(t) contains an overlap, the configuration at time t + 1 is identical with that at time t: j(t + 1) = j(t).
6) Certain procedures concerned with the averaging process are performed; the procedure then repeats beginning at Step 1, except for occasional interruptions for checking and census procedures. We now describe each of these steps in greater detail.
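Steps 1)-5) map directly onto a few lines of code. The following minimal Python sketch (function and variable names hypothetical) assumes positions stored in units of the cell edge with periodic boundary conditions, and ignores the discreteness of the 2⁻¹⁷ coordinate grid:

    import random

    def mc_step(coords, sigma, delta, N):
        beta = random.randrange(N)                     # Step 2: random molecule
        trial = tuple((coords[beta][d] + random.uniform(-delta, delta)) % 1.0
                      for d in range(3))               # Step 3: tentative move
        for a in range(N):                             # Step 4: overlap test
            if a == beta:
                continue
            r2 = 0.0
            for d in range(3):                         # minimum-image distance
                dx = abs(trial[d] - coords[a][d])
                dx = min(dx, 1.0 - dx)
                r2 += dx * dx
            if r2 < sigma * sigma:
                return coords                          # Step 5b: keep j(t)
        coords[beta] = trial                           # Step 5a: accept j'(t)
        return coords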
A. Specification of a Configuration
Here we describe the information carried in the calculator memory for purposes of specifying a particular configuration. Fundamentally all that is required are the values of the 3N coordinates r(α), which are stored in the r-Table. For economy of calculating time, however, it is desirable to carry along additional redundant information, since the calculation requires a rather large amount of machine time. For example, in connection with the calculation of the "cumulative radial distribution function" n(r), (5), it is desirable to have available for each configuration j(t) a tally, called the C-Table, of the intermolecular distances (squared) r_αβ² into the N_R intervals C(ν) = [σ² + ν·2⁻ˢ, σ² + (ν + 1)·2⁻ˢ], ν = 0, 1, ···, N_R − 1. Here s is chosen to be an integer in order to expedite the tallying process. When we are interested only in estimating the pressure, using (10), so that only g(σ) is required, the range of the C-Table can be made quite small, with a consequent economy of machine time, as will appear below.
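The choice of an integral s makes the interval index a pure shift and scale; a one-line Python illustration (names hypothetical):

    def c_table_index(r2, sigma2, s):
        # interval C(nu) = [sigma^2 + nu*2**-s, sigma^2 + (nu+1)*2**-s]
        # containing the squared distance r2 (assumes r2 >= sigma2)
        return int((r2 - sigma2) * 2**s)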
The search for overlaps in Step 4 can be considerably expedited by a device which avoids the necessity of examining all N − 1 values of r_αβ².² The basis for the device is the fact that the displacement parameter δ is in all interesting cases relatively small compared to unity (the edge of V), so that any actual overlap r_αβ which occurs must be with a molecule α which was fairly close to β in configuration j(t). We proceed as follows. At some point t₀ in the chain (in particular, at its beginning t = 0) we establish a set of tables called the M(α)-Tables, one for each α = 1, 2, ···, N. Each M(α)-Table consists of an indefinite number of entries M(α, γ), γ = 1, 2, ···, and gives all molecule numbers μ(α, γ) for which r_α,μ(α,γ) … (α)-Tables recording for each α–M(α, γ) interaction in j(t) the corresponding interval in the C-Table. In any case, the procedure consists of a tally of these j(t) interactions into the E-Table, but with a negative unit increment; at the conclusion of this procedure the E-Table then contains the desired changes in the C-Table, so that the next procedure is C + E → C. The modification of the r-Table is simply r[β(t); j′(t)] → r[β(t); j(t)]. If the "refresh" procedure for modifying the M-Tables was required in Step 4, we now replace M[β(t)] by the W2-Table described there. This delay is necessary since M[β(t)] must be used for the computation of j(t) interactions.
We have so far not established a good criterion of the rapidity of the convergence process, other than simply a visual inspection of the trend of the results. In order to obtain some quantitative criterion for the choice of the parameter δ, we calculate the sum of the squares of the distances in 3N dimensional space between successive configuration points:

    φ(t) = Σ_(t′=1)^(t) ‖r(β(t′); j(t′)) − r(β(t′); j(t′ − 1))‖².   (22)
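In code, (22) is a running sum over the accepted moves; a minimal Python sketch (names hypothetical), consistent with the rule of Step E2 below that a rejected move contributes nothing:

    def phi_update(phi, old_pos, new_pos, accepted):
        # add one term of (22) to the running sum phi
        if not accepted:
            return phi                      # overlap case: no contribution
        return phi + sum((a - b)**2 for a, b in zip(new_pos, old_pos))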

At present about 7 per cent of the calculation time is spent in Step E1.
E2) Overlap in j′(t)
In this case j(t + 1) = j(t), no changes in the tables are required, and there is no contribution to φ(t).
F. Census and Checking Procedures
For convenience in monitoring the progress of the calculation, averages of the C-Table are taken over successive intervals of fixed length in time t, as well as over the entire length of the chain. For the former purpose an A-Table is carried which is set to zero at the beginning of each time interval, and at successive time steps is incremented by the C-Table. For the over-all average an S-Table is carried which is augmented by the A-Table at the end of each time interval. The cumulative distribution function n(r), (5), is then estimated by

    n(σ² + ν·2⁻ˢ) = (1/Nt)·Σ_(ν′=0)^(ν) S(ν′; t).   (23)

Numerical differentiation (performed later by hand) then gives g(σ), and the pressure p.

Numerous checks are built into the problem, particularly in the "refresh" routine in Step 4, and at the census intervals just described, where the current C-Table is checked by calculating all N(N − 1)/2 interactions. These procedures have proven to be very
worthwhile in giving notice of the occurrence of machine
errors.
About 12 per cent of the calculation time is involved
in these procedures.

IV. RESULTS AND DISCUSSION

The Monte Carlo method was previously applied to the system of hard spheres by Rosenbluth and Rosenbluth [9]. However, the results obtained by Alder and Wainwright [6] at an early point in their kinetic investigation suggested that the Rosenbluth results might be partly in error, so that we undertook a concurrent reinvestigation of the question. The new results shown in Fig. 2 are indeed in rather good agreement with those of Alder and Wainwright.
The new Monte Carlo results have been described briefly elsewhere [10] from the standpoint of their statistical mechanical interest. Here we shall consider them briefly from the standpoint of computational interest, concentrating on aspects of the behavior which are in some sense exceptional.
Plotted as the reduced pressure pv₀/kT vs the reduced volume τ, where v₀ is the close-packed volume per molecule (v₀ = σ³/√2), the results fall on two distinct curves, as shown in Fig. 2. This behavior may well be related to the presence of a first-order phase transition; we leave aside this point in the present discussion. In the region τ < 1.6 the figure shows double-valued pressures. In the region τ = 1.52-1.60 these arise because of the behavior of a representative single chain shown in Fig. 3. These chains show, in addition to the usual statistical fluctuation, a tendency to oscillate between two rather well-defined classes of states, with interclass transitions being so rare (about four hours of 704 time were required to generate the chain pictured in Fig. 3) as to make the over-all averages very poorly determined within a single chain, and differing widely in independent chains generated for the same values of N and τ. If the two classes are averaged separately, however, the inter- and intrachain agreement is quite good; it is these separate averages which are plotted in Fig. 2, and which give rise to the double-valuedness. It is noteworthy that Alder and Wainwright observe the same rare interclass transitions.
In the course of investigating the effect of varying the parameter δ, we generated 14 different Markov chains for the 32-molecule system at τ = 1.55, the chain shown in Fig. 3 being one of these. These chains comprised a total of 12,800,000 configurations, the longest chain in the set containing 3,500,000 configurations and the shortest, 560,000 configurations. A total of 14 well-defined interclass transitions were observed; four chains produced no transitions, eight produced one transition,

Fig. 2-Reduced pressure pv₀/kT vs reduced volume τ, on logarithmic scales (the free-volume approximation is also shown). • Monte Carlo, N = 32; ▲ Monte Carlo, N = 256; + Alder and Wainwright [6], N = 32. For the sake of clarity many of the calculated points have been omitted from the figure, particularly those of Alder and Wainwright for N = 108 and 256.

Fig. 3-The number of molecules inside a sphere of radius 1.026σ around a representative molecule, averaged over intervals of 19,200 configurations, plotted against the total number of configurations t, in a representative chain at τ = 1.55, N = 32, δ = 0.051. The over-all average of these points should converge to the cumulative distribution function n(1.026σ), but the convergence is evidently very slow due to the secular fluctuation between the groups of high- and low-pressure states.

and two chains showed three transitions. Grouped all
together, the results suggest that a transition occurs on
the average of about once in 900,000 configurations.
These 14 chains gave values of pv₀/kT (averaged over the entire length of each chain, not the separate high- and low-class averages mentioned previously) ranging from 5.45 to 6.92, with the average over all 14 chains being 6.23. This value of the average is obtained for two different weightings: 1) each configuration of every chain is given equal weight; or 2) each chain is weighted with its final value of φ(t), (22). The calculated standard deviation of this average is 0.11. We do not wish to
stress the significance of this average, though it is interesting that it lies midway between the two curves of
Fig. 2.


We interpret this transition behavior to be due to a compartmentalization of phase space (see the discussion of this point in Section II) into two regions which are narrowly connected in the region 1.52 < τ < 1.60.

Fig. 4-(a) …; (b) the rate of root-mean-square displacement, √φ(t)/t (in units of the linear edge of the Monte Carlo cell V, per second of IBM 704 calculator time), as functions of δ, for N = 32 and τ = 1.55. In (a) the dashed curve is from the high pressure class of states, the solid curve from the lower pressure class. Similarly in (b): • lower pressure class; × high pressure class.

system of 32 molecules under these conditions would have a rate of root-mean-square displacement about 3 × 10¹² times the maximum in Fig. 4(b). Thus we should not be surprised at the rareness of the interclass transitions during the machine calculation: on the scale of physical time they are relatively frequent, perhaps several hundred per microsecond. For the same values of the parameters, we can also compare directly the physical collision rate, which is estimated as 4 × 10¹⁴ collisions per second in a system of 32 molecules, with Alder and Wainwright's [6] calculation rate, which is about two collisions per second. Thus we see that in either case, the calculational procedures are much slower than the actual molecular system. One should not conclude from these figures that the Monte Carlo procedure is more efficient than the Alder and Wainwright procedure, since the root-mean-square displacement criterion is a very crude basis for comparison. Empirically, in terms of machine time, they seem to produce the interclass transitions at roughly the same rate.
The most important task remaining is to establish the significance of the two branches of the p-V curve at τ < 1.5, i.e., the connectivity and relative volumes of the corresponding regions of configuration space. Aside from possibly developing some more complicated stochastic approach, the chief hope of further progress lies in increasing the calculation speed. We have two approaches in mind, one with respect to the method, the other involving the use of faster machines.
A possible improvement in method lies in the question of how fine a subdivision of configuration space is required in order for a digital calculation to give results practically equivalent to the continuum of states implied by the analytical integral in (1). As already mentioned, we currently use (2¹⁷)³ = 2⁵¹ points in 3N dimensional space. If the number of bits required to represent the molecular coordinates can be reduced from the present 17 to perhaps 13-14 or less, then in the computation of the intermolecular distances we can replace arithmetic multiplication by a faster table-look-up process. With this and other similar modifications we hope to gain a considerable factor, but certainly less than 10, of improvement in speed.
The "Stretch" calculator currently under contract
with IBM, for which coding is now in progress, will increase the calculation rate by a factor of 20-50, and we
hope that the combination of improved programming
and faster calculator will make possible a determination
of the over-all average in the region near τ = 1.55. In
another direction in computer design, it is probably
worthwhile to mention that this problem falls in the
class of those for which parallel computation by several
arithmetic units with common memory and control
could result in a considerable increase in calculating
rate.
We shall close with a brief mention of some applications in which the results have been interesting without being confusing. For the hard sphere system at τ > 1.6, where the chains are well convergent, the results are believed to be essentially exact and have been of considerable utility in calibrating various theoretical approximations leading to analytical treatments. At sufficiently low densities the method gives results in agreement with the "virial expansion" whose first five coefficients are known [9]. On the other hand, at sufficiently high densities (τ < 1.3) the lower of the two pressures calculated from the two separate classes agrees increasingly well with the "free-volume" approximation. This suggests that the latter may be asymptotically correct at high densities.
In the case of "Lennard-Jones molecules," the method yields results [1] which, applied to argon, agree very well with experimental observations at pressures below about 4000 atmospheres, over a region where no analytical approximation has given agreement. At higher pressures there is disagreement, with a suggestion that the experimental observations may be in error. If the disagreement is real it has interesting consequences in the study of intermolecular forces. Just as with hard spheres, there is agreement with the free-volume approximation at very high densities.
The method has also been tested successfully on the
so-called "lattice gas" with nearest neighbor interactions, by comparison with known analytical results [4].
We hope in the future to apply the method to systems
of molecules of different kinds (mixtures) and to nonspherical molecules.

REFERENCES

[1] W. W. Wood and F. R. Parker, "Monte Carlo equation of state of molecules interacting with the Lennard-Jones potential. I. A supercritical isotherm at about twice the critical temperature," J. Chem. Phys., vol. 27, pp. 720-753; 1957.
[2] B. J. Alder, S. P. Frankel, and V. A. Lewinson, "Radial distribution function calculated by the Monte Carlo method for a hard sphere fluid," J. Chem. Phys., vol. 23, pp. 417-419; 1955.
[3] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, "Equation of state calculations by fast computing machines," J. Chem. Phys., vol. 21, pp. 1087-1092; 1953.
[4] Z. W. Salsburg, J. D. Jacobson, W. Fickett, and W. W. Wood, "The application of the Monte Carlo method to the lattice-gas model. I. Two-dimensional triangular lattice," J. Chem. Phys., vol. 30, pp. 65-72; 1959.
[5] W. Feller, "Probability Theory and Its Applications," John Wiley and Sons, Inc., New York, N. Y., ch. 15; 1950.
[6] B. J. Alder and T. E. Wainwright, "Phase transition for a hard sphere system," J. Chem. Phys., vol. 27, pp. 1208-1209; 1957.
[7] N. Metropolis, "Phase shifts-middle squares-wave equation," Proc. Symposium on Monte Carlo Methods, H. A. Meyer, ed., John Wiley and Sons, Inc., New York, N. Y., pp. 29-36; 1956.
[8] O. Taussky and J. Todd, "Generation of pseudo-random numbers," ibid., pp. 15-28.
[9] M. N. Rosenbluth and A. W. Rosenbluth, "Further results on Monte Carlo equations of state," J. Chem. Phys., vol. 22, pp. 881-884; 1954.
[10] W. W. Wood and J. D. Jacobson, "Preliminary results from a recalculation of the Monte Carlo equation of state of hard spheres," J. Chem. Phys., vol. 27, pp. 1207-1208; 1957.

Real-Time Digital Analysis and Error-Compensating Techniques
WALLY ITO†

A PARTICULAR EXAMPLE ILLUSTRATING THE ANALYSIS TECHNIQUE

THE techniques which are described in this paper apply to the digital mechanization of any system represented as an ordinary linear differential equation with constant coefficients, inhomogeneous or homogeneous.
As an example, consider the sinusoidal loop as generated by a DDA excited by a sampled external forcing function, f(t), as shown in Fig. 1. It is

    ẍ(t) + ω₀²x(t) = f(t),   (1)

Fig. 1-DDA loop for ẍ + ω₀²x = f(t) with initial conditions x(0) and ẋ(0). (The loop comprises a sampler on f(t), an ∫₀ᵗ f(τ)dτ block, and Δt integrators.)

which can also be written as

    ẋ(t) − ẋ(0) + ω₀² ∫₀ᵗ x(τ) dτ = ∫₀ᵗ f(τ) dτ.   (2)

However, reference to Fig. 1 reveals that the DDA lashup does not mechanize (2) but does mechanize

    ∫₀ᵗ⁽ᵍ⁾ ẋ_D(τ) dτ − ∫₀ᵗ ẋ(0) dτ + ω₀² ∫₀ᵗ⁽ᵍ⁾ ∫₀^t1⁽ᵍ⁾ x_D(τ) dτ dt₁ = ∫₀ᵗ⁽ᵍ⁾ ∫₀^t1 f(τ) dτ dt₁,   (3)

where the superscript (g) marks the integration operation used in the computer. Let (g) represent the rectangular rule for integration, and designate the Laplace transform of that operation by Q_R(s) = h/(1 − e^(−sh)), where h is the sampling interval in seconds.
Taking the transform of (3), we obtain

    X̄_D(s) = [ẋ(0) + s·x(0) + F̄(s)] / [s² + ω₀²·s·Q_R],   (4)

where X̄_D(s) is the Laplace transform of the digital solution, x_D(t). Consider then the inverse Laplace transforms

    M_D(t) = L⁻¹[ 1 / (s² + ω₀²·s·Q_R) ]

and

    N_D(t) = L⁻¹[ s / (s² + ω₀²·s·Q_R) ] = L⁻¹[ s / (s² + ω₀²·s·h/(1 − e^(−sh))) ].

† Res. Staff Engr., Aeronaut. Div., Minneapolis-Honeywell Regulator Co., Minneapolis, Minn.


If we use the notation

    x_D(t) = ẋ(0)·M_D(t) + x(0)·N_D(t) + ∫₀ᵗ f(τ)·M_D(t − τ) dτ,   (5)

then we can write

    M̄_D + ω₀²h·M̄_D/[s(1 − e^(−sh))] = 1/s².   (6)

Multiplying both sides of (6) by the factor (1 − e^(−sh))², we obtain (7). Inverse Laplace transforming (7), we obtain

    M_D(t) − 2M_D(t − h) + M_D(t − 2h) + ω₀²h·∫_(t−h)^t M_D(τ) dτ = 0,   (8)

the integro-difference equation for M_D(t).
Imposing the mathematical constraint that, except at t = 0, h, 2h, ···, kh, M_D(t) is a constant staircase-type function, we see that the integro-difference equation (8) reduces to

    M_D(t) − (2 − ω₀²h²)·M_D(t − h) + M_D(t − 2h) = 0,   (9)

a linear, homogeneous difference equation with constant coefficients for the eigenfunction M_D(t).
It turns out that

    M_D(t) = [h/sin (γh)]·sin (γt),   (10)

where

    sin (γt) = sin (γkh) for kh ≤ t < (k + 1)h, k = 0, 1, 2, ···

and

    γ = (1/h)·arctan[ hω₀·√(4 − ω₀²h²) / (2 − ω₀²h²) ].   (14)

In a similar fashion, it can be shown that the other eigenfunction is given by

    N_D(t) = L⁻¹[ s / (s² + ω₀²·s·Q_R) ] = cos (γt) + [1 − ω₀²h² − cos (γh)]·sin (γt)/sin (γh),   (11)

where cos (γt) = cos (γkh) for kh ≤ t < (k + 1)h, k = 0, 1, 2, ···.
Thus, we see that the digital mechanization of ẍ + ω₀²x = f(t) by a DDA employing the rectangular rule yields a digital solution given by

    x_D(t) = ẋ(0)·[h/sin (γh)]·sin (γt)
           + x(0)·{cos (γt) + [1 − ω₀²h² − cos (γh)]·sin (γt)/sin (γh)}
           + [h/sin (γh)]·∫₀ᵗ f(τ)·sin [γ(t − τ)] dτ.   (12)

METHODS OF COMPENSATING FOR ERRORS IN THE DIGITAL SOLUTION

The exact solution for (1) is

    x(t) = [ẋ(0)/ω₀]·sin (ω₀t) + x(0)·cos (ω₀t) + (1/ω₀)·∫₀ᵗ f(τ)·sin [ω₀(t − τ)] dτ,   (13)

as contrasted with the digital solution (12). This comparison of exact and digital solutions suggests three successively applied techniques for compensation of digital truncation errors.
The first and most obvious compensation we can introduce is to force γ to equal ω₀. We do this by using a value ω₁² in the serial multiplier so that ω₁² satisfies the relationship (14) with γ set equal to ω₀.
The partially compensated digital solution then takes the form

    x_D′(t) = [ẋ(0)·h + (1 − ω₁²h² − cos ω₁h)·x(0)]·sin (ω₀t)/sin (ω₀h)
            + x(0)·cos (ω₀t)
            + [h/sin (ω₀h)]·∫₀ᵗ f(τ)·sin [ω₀(t − τ)] dτ.   (15)

The solution x_D′(t) suggests that a well-chosen but incorrect value for ẋ(0), called x̂(0), be used, so that

    [x̂(0)·h + (1 − ω₁²h² − cos ω₁h)·x(0)] / sin (ω₀h) = ẋ(0)/ω₀.   (16)
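The uncompensated behavior described by (9)-(14) is easy to reproduce numerically. The following Python sketch (hypothetical names; f = 0 for simplicity) mechanizes the two rectangular-rule integrators of Fig. 1 and satisfies the recurrence (9), so its output oscillates at the γ of (14) rather than at ω₀:

    import math

    def dda_rectangular(w0, h, steps, x0=0.0, xdot0=1.0):
        # x'' + w0**2 * x = 0 with rectangular-rule integration;
        # eliminating v gives x[k+1] = (2 - w0**2 h**2) x[k] - x[k-1]
        x, v = x0, xdot0
        out = []
        for _ in range(steps):
            v -= w0 * w0 * x * h      # first integrator
            x += v * h                # second integrator
            out.append(x)
        return out

    w0, h = 1.0, 0.1
    gamma = math.atan2(h * w0 * math.sqrt(4.0 - (w0 * h)**2),
                       2.0 - (w0 * h)**2) / h    # equation (14), slightly above w0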

With these two digital compensations, the digital solution looks like

    x_D″(t) = [ẋ(0)/ω₀]·sin (ω₀t) + x(0)·cos (ω₀t) + [h/sin (ω₀h)]·∫₀ᵗ f(τ)·sin [ω₀(t − τ)] dτ.   (17)

The third and last compensation technique consists of multiplying f by a constant, c = c(h), so that

    c(h) = sin (ω₀h)/(ω₀h).   (18)
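Since (14) gives γ explicitly, the first compensation can be realized by inverting it numerically; a short Python sketch (hypothetical names), assuming ω₀h is small enough that the arctangent branch of (14) applies, with the factor of (18) then applied to f:

    import math

    def gamma_of(w1, h):
        # digital frequency of the rectangular-rule loop, equation (14)
        return math.atan2(h * w1 * math.sqrt(4.0 - (w1 * h)**2),
                          2.0 - (w1 * h)**2) / h

    def compensated_w1(w0, h, tol=1e-12):
        # choose w1 with gamma(w1) = w0; gamma(w) > w, so search below w0
        lo, hi = 0.0, w0
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if gamma_of(mid, h) < w0 else (lo, mid)
        return 0.5 * (lo + hi)

    c_h = lambda w0, h: math.sin(w0 * h) / (w0 * h)   # equation (18)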

With the three compensations introduced, the digital solution is given by

    x_D(t) = [ẋ(0)/ω₀]·sin (ω₀t) + x(0)·cos (ω₀t) + (1/ω₀)·∫₀ᵗ f(τ)·sin [ω₀(t − τ)] dτ,   (19)

as contrasted with the exact solution given by (13). The only errors left are those generated by the convolution integral

    (1/ω₀)·∫₀ᵗ f(τ)·sin [ω₀(t − τ)] dτ.

INSTABILITY CAN BE INTRODUCED BY WRONG CHOICE OF ALGORITHM OR BY DELAY BETWEEN INTEGRATIONS

If the trapezoidal algorithm is used instead of the rectangular algorithm for integration in the digital mechanization of

    ẍ + ω₀²x = f(t),

the following expression is obtained as the digital solution:

    x_D(t) = ẋ(0)·[h·e^(−ah)/sin (γh)]·e^(at)·sin (γt)
           + x(0)·{e^(at)·cos (γt) + [e^(−ah) − cos (γh)]·e^(at)·sin (γt)/sin (γh)}
           + [h·e^(−ah)/sin (γh)]·e^(at)·∫₀ᵗ f(τ)·sin [γ(t − τ)] dτ,   (20)

where

    γ = (1/h)·arctan[ ω₀h·√(16 − ω₀²h²) / 4 ].

The exponential term, e^(at), where a is positive as indicated, reveals that the truncation errors incurred by use of the trapezoidal algorithm increase exponentially, whereas there is no such tendency toward instability introduced by the rectangular algorithm.
Another source of instability can be delays between successive integrations. For example, suppose we purposely delay one integration, say, one sampling period, behind the other integration. That is, suppose we mechanize according to

    ∫₀ᵗ⁽ᵍ⁾ ẋ_D(τ) dτ − ∫₀ᵗ ẋ(0) dτ + ω₀² ∫₀ᵗ⁽ᵍ⁾ ∫₀^t2⁽ᵍ⁾ x_D(τ − h) dτ dt₂ = ∫₀ᵗ⁽ᵍ⁾ ∫₀^t2 f(τ) dτ dt₂   (21)

rather than by (3).
Taking the transform of (21) and assuming the rectangular rule, we obtain

    X̄_D(s) = [s·x(0) + ẋ(0) + F̄(s)] / [s² + ω₀²·(s·Q_R)·e^(−hs)].

The eigenfunctions corresponding to

    s / [s² + ω₀²·(s·Q_R)·e^(−hs)]  and  1 / [s² + ω₀²·(s·Q_R)·e^(−hs)]

are

    exp[ (log_e √(1 + ω₀²h²))·t/h ]·cos[ (arctan ω₀h)·t/h ]

and

    exp[ (log_e √(1 + ω₀²h²))·t/h ]·sin[ (arctan ω₀h)·t/h ].

The eigenfunctions show that the effect of a one-sampling-period delay between integrations is to introduce a positive exponential-type factor into the solution; in other words, the solution is rendered inherently unstable.
BIBLIOGRAPHY

[1] E. I. Jury, "Synthesis and critical study of sampled-data control systems," Trans. AIEE, vol. 75, pt. 2, pp. 141-149; July, 1956.
[2] R. H. Barker, "The pulse transfer function and its application to sampling servo systems," Proc. IEE, vol. 99, pt. 4, pp. 302-317; 1952.


Automatic Digital Matric Structural Analysis
M. CHIRICO†, B. KLEIN†, AND A. OWENS†
LIST OF SYMBOLS

A = cross-sectional area of bar.
Amax = maximum cross-sectional area of bar.
Amin = minimum cross-sectional area of bar.
Ap = area of panel.
Bmax = length of longer parallel edge of tapered panel.
Bmin = length of shorter parallel edge of tapered panel.
Dx(i) = X component of displacement at the joint i in the coordinate system (X, Y, Z). Dy(i) and Dz(i) are defined similarly.
E = Young's modulus.
Fx(i) = X component of the external force applied at the joint i. Fy(i) and Fz(i) are defined similarly.
G = shear modulus.
Lij = length of the bar with end joints i and j.
Pij = load at the joint i on the bar Bij.
q = shear flow.
q̄ = average shear flow.
δij = the displacement of the joint i in the direction of the bar Bij.
ΔXij = Xi − Xj. ΔYij and ΔZij are defined similarly.
tD = thickness of panel at intersection of its diagonals.
i, j, k, m, n = subscripts.

INTRODUCTION

THE accelerated pace of computer development has made possible the rapid solution of complex problems. The time necessary to set up problems has begun to surpass greatly machine solution time. Consequently, more emphasis is needed on means of making use of digital machines and allied equipment in setting up problems automatically.
Heretofore, the preparation of certain matric equations appearing in structural analysis has been a tedious task. The procedures have required a large amount of judgment and tiresome hand computation. The chances for errors have been prevalent.
The present paper presents a method whereby the above factors can be minimized or negated. Input data are reduced to a minimum. All logical decisions are carried out completely automatically so as to arrange the matrix automatically. Machine time is found to be very small relative to the time previously needed to set up problems. Therefore, this coded program should prove very useful to structures and allied engineers.

† Convair Astronautics, San Diego, Calif.


Some familiarization with Klein¹,² is helpful but not necessary for the understanding of the development in this paper. The basic concepts are the ones of joint, bar, and panel. The joints connect the bars and the bars border the panels.
MACHINE SIMULATION OF ELEMENTS

The code is built around the basic elements: joints, bars, and panels. The information below noted by asterisk (*) is computed.
1) Joints
A group of words is assigned to each joint containing the following information:
a) Its position coordinates x, y, and z
b) Whether it is fixed
*c) All the bars attached to the joint.
2) Bars
A group of words is assigned to each bar with the following data:
a) Its two end joints
b) Whether it is tapered
c) Its cross-sectional area (both maximum and minimum, if tapered)
*d) The direction cosines x, y, and z
*e) The length of the bar Lij.
3) Panels
A group of words is assigned to each panel with information on:
a) The four corner joints
*b) The area
c) The thickness
*d) All of the bordering bars
*e) Bmax and Bmin (lengths of the parallel sides)
f) Whether it is tapered.
The number of cells in a group must be multiples of eight to allow the use of multiple index registers. A joint, bar, and panel, grouped together in 40 cells to economize on space, are not necessarily related.
THE CODE

A) Input
The information without asterisks, described in the previous section, is read into the proper location. The numbers of the joints appearing in the bars and panels are converted into the two's complement of that joint's address. The joint then can be referred to with an index register.

¹ B. Klein, "A simple method of matric structures analysis," J. Inst. Aeronaut. Sciences, vol. 24, pp. 40-46; January, 1957.
² B. Klein, "A simple method of matric structural analysis, part II-effects of taper and a consideration of curvature," J. Inst. Aeronaut. Sciences, vol. 24, pp. 813-820; November, 1957.

B) Bar Cross Referencing
1) The following is computed for each bar Bij:
a) ΔX = Xi − Xj,   (1)
b) ΔY = Yi − Yj,   (2)
c) ΔZ = Zi − Zj.   (3)

2) The bar address complement is stored in the groups of the joints i and j.
3) Every panel is examined. If both the joints i and j are in it, the bar Bij borders it. In this case the two's complement of the bar address is placed in the panel group and the two's complement of the panel address is placed in the bar group.
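In modern terms this cross-referencing pass amounts to building mutual adjacency lists; a minimal Python sketch (data layout hypothetical, with list indices standing in for the two's-complement addresses):

    def cross_reference(bars, panels):
        # bars: list of (i, j) end-joint pairs; panels: list of 4-tuples
        # of corner joints.  A bar borders a panel when both of its end
        # joints belong to that panel (Step B-3).
        bar_panels = [[] for _ in bars]
        panel_bars = [[] for _ in panels]
        for b, (i, j) in enumerate(bars):
            for p, corners in enumerate(panels):
                if i in corners and j in corners:
                    bar_panels[b].append(p)
                    panel_bars[p].append(b)
        return bar_panels, panel_bars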
C) Joint Equations
1) For each joint i the following force equilibrium equations are entered in the matrix:
a) Σj Pij·(ΔX/L)ij = Fx(i),   (4)
b) Σj Pij·(ΔY/L)ij = Fy(i),   (5)
c) Σj Pij·(ΔZ/L)ij = Fz(i).   (6)

D) Panel Area
1) The two parallel bars for each panel are found by comparing the direction cosines. The bars are rearranged so that the longest parallel side is first and the other parallel bar is second.
2) Twice the area Ap is computed by the following (see Fig. 1):

    2Ap = (Bmax + Bmin)·Lij·sin θ₁,   (7)

where

    sin θ₁ = √(1 − cos² θ₁),   (8), (9)

    cos θ₁ = (ΔX/L)ij·(ΔX/L)km + (ΔY/L)ij·(ΔY/L)km + (ΔZ/L)ij·(ΔZ/L)km.   (10)

The cosine above is equal to the dot product only if both sides are directed away from or toward the joint. If the sides are directed in opposite ways, the dot product must be multiplied by minus one.
E) Panel Equation
1) The shear panel displacement equation for the panel in Fig. 1 is

    −(2Ap/GtD)·q̄jmki − Bmax·(δmj + δjm) + Bmin·(δki + δik) − Lij·(δij + δji) + Lkm·(δkm + δmk) = 0.   (11)

Fig. 1-Shear panel.

2) The arrows inside the panel indicate the direction of shear flow. The arrows outside the panel indicate the direction of displacements.
3) The corresponding panel and bar numbering for Fig. 1 is
    panel: Pjmki;   bars: Bmj, Bki, Bij, and Bkm.
4) The panel numbering determines the positive direction of the shear flows, which are directed toward the joints designated by the first and third subscripts of the panel. (See Fig. 1.) The bar numbering determines the direction of positive displacement of the joint, which is toward the joint determined by the first subscript of the bar.
5) The signs of the terms in the equation are:
a) Shear term. This sign is minus.
b) Displacement term. If the direction of
positive displacement is the same as the
direction of positive shear flow along a side,
this sign is plus; if they are opposite, the
sign is minus.
6) The code determines the signs by examining
the joint numbers. If the first bar subscript is
equal to the first or third panel subscript, the
sign is plus; otherwise, it is minus.
7) The length of the side is the factor outside the parenthesis in the displacement term. Note that the length of the opposite side is used in the case of the parallel sides.
F) Bar Equations
1) The axial element equilibrium equation for each bar is

    Pij + Pji + Σn (±q̄n)·bn = 0.   (12)

2) When the bar Bij is one of the parallel bars bordering a panel, bn of (12) is the length of the other parallel bar; otherwise, it is the length of the bar Bij.
3) The sign of a shear term is positive if the positive direction of the shear along an edge of the panel and the positive direction of the displacement in the bar bordering that edge are opposite; otherwise, the sign of the shear term is negative.
4) The axial force displacement equation for a tapered bar is

    Pij + Lij·Σn (±q̄n)·f(Bmin/Bmax, Amax/Amin) + … ,   (13)

where Pij is the force at the narrow end of the bar. The sign of the shear term is plus if the shear and displacement directions oppose, and

    f(Bmin/Bmax, Amax/Amin) = (Amin/Amax)^0.175 / [1 + (Bmin/Bmax)^0.7],   (14)

    A = Amin + [(Amin/Amax)^0.175 / 2]·(Amax − Amin)   (15)

if the taper is linear.
5) For a nontapered bar the equation is

    Pij + Pji + (2EA/Lij)·(δji − δij) = 0.   (16)

G) The Flow Chart follows.

    Each Bar: Compute ΔX, ΔY, ΔZ, L; Bar Address into Joints; Cross-Reference Bars vs Panels
        ↓
    Each Joint: Joint Force Equilibrium Equations Entered into Matrix
        ↓
    Each Panel: Find Parallel Bars and Bmax and Bmin
        ↓
    Each Panel: Compute the Area
        ↓
    Each Panel: Shear Panel Displacement Equation Entered into Matrix
        ↓
    Each Bar: Axial Element Equilibrium and Axial Force Displacement Equations Entered into Matrix
        ↓
    Matric Solution

Fig. 2-Example problem.

Example Problem
The structure in Fig. 2 is analyzed by the code. The
complete input data describing the structure appears
in the next section, which is followed by the matrix
generated and the solution with the matrix column
numbers. The elements without matrix column numbers
are obtained by a simple calculation from matrix computed elements; e.g., Delta 1-7 equals Delta 1-3.
Total computing time, i.e., the time for both matric
setup and solution, is about 0.01 hour.
Problems of a much more formidable nature have
been arranged and solved by the program, and work is
in progress for improving the code. For example, use of
instantaneous coordinates may create many more zero
elements in the matrix.
INPUT

FORCES
JOINT      X        Y         Z
3          0.0      1000.0    0.0
9          0.0      1000.0    0.0

JOINTS
NUMBER      X         Y        Z       FIXED
1          -2.5      16.0      0.1     NO
2          -3.75      8.0      0.1     NO
3          -7.5      16.0      0.1     NO
4         -11.25      8.0      0.1     NO
5          -5.0       0.0      0.1     YES
6         -15.0       0.0      0.1     YES
7           2.5      16.0      0.1     NO
8           3.75      8.0      0.1     NO
9           7.5      16.0      0.1     NO
10         11.25      8.0      0.1     NO
11          5.0       0.0      0.1     YES
12         15.0       0.0      0.1     YES

BARS
JOINT 1   JOINT 2   TAPERED   AMAX      AMIN      E
3         4         YES       1.0       0.66667   1.0
4         6         YES       1.3333    1.0       1.0
10        9         YES       1.0       0.66667   1.0
10        12        YES       1.3333    1.0       1.0
2         1         YES       1.0       0.66667   1.0
2         5         YES       1.3333    1.0       1.0
8         7         YES       1.0       0.66667   1.0
11        8         YES       1.3333    1.0       1.0
1         3         NO        0.66667   0.66667   1.0
4         2         NO        1.0       1.0       1.0
9         7         NO        0.66667   0.66667   1.0
10        8         NO        1.0       1.0       1.0
1         7         NO        0.66667   0.66667   1.0
2         8         NO        1.0       1.0       1.0

PANELS
NUMBER   JOINT  JOINT  JOINT  JOINT   G-SHEAR   THICKNESS

1
2
3
4
5
6

3
4
1
2
7
8

THE MATRIX
ROW

1
1
1
2
2
3
3
3
4
4
4
4
5
5
6
6
6
7
8
8
8
9
9
10
10
10
11
11
11

11

12
12
13
13
13

14
15
15
15
15
15
16
16
16
16
16
16
16
16
16
17
17
17
17
17
18
18
18
18
18
18
18
18
18
19
19
19
19
19
20
20
20
20
20
20

2
5
8

4
6
2
5
8

11

10
12

11

COL

1
2
3
1
2
4
5
6
7
8
9
10
7
8
11

12
13
11

14
15
16
14
15
17
18
19
20
21
22
23
20
21
24
25
26
24
27
28
29
30
31
32
28
4
29
33
34
6
30
31
35
36
29
30
37
38
36
39
29
33
34
40

30
37
41
42
36
37
43
44
42
17
36
39
40

1
2
7
8
9
10

0.4
0.4
0.5
0.4
0.4
0.4
VALUE

0.42443388
-0.42443388
1.0
-0.90545894
0.90545894
-1.0
0.90545894
-0.42443388
0.15437688
-0.15437688
-1.0
1.0
-0.98801203
0.98801203
-0.15437688
-1.0
1.0
0.98801203
-0.42443388
0.42443388
-1.0
-0.90545894
0.90545894
-1.0
0.90545894
0.42443388
-0.15437688
0.15437688
1.0
-1.0
-0.98801203
0.98801203
0.15437688
1.0
-1.0
0.98801203
-3499.9999
-8.8352984
8.0970674
-10.0
-10.0
-3571.4285
-8.8352984
-8.8352984
8.0970674
-8.0970674
-7.5
-7.5
5.0
5.0
-3499.9999
8.0970674
-8.0970674
-1.0
1.0
-3571.4285
8.0970674
8.0970674
-8.0970674
-8.0970674
-7.5
7.5
5.0
-5.0
-3499.9999
8.8352984
-8.0970674
10.0
10.0
-3571.4285
8.8352984
8.8352984
-8.0970674
-8.0970674
7.5

0.07
0.1
0.07
0.1
0.07
0.1
ROW

COL

20
20
20
21
21
21
21
22
22
22
22
23
23
23
24
24
24
24
25
25
25
25
26
26
26
26
27
27
28
28
28
29
29
29
29
30
30
30
30
31
31
32
32
32
33
33
33
33
34
34
34
34
35
35
35
35
36
36
36
36
36
37
37
37
37
38
38
38
38
39
39
39
39

19
37
43
38
35
10
23
30
37
10
23
38
13
26
34

40
40

40

13
26
32
27
9
3
30
31
9
3
32
12
34
6
12
44
41
22
16
37
43
22
16
44
25
40
1~

25
35
27
45
8
35
27
29
8
38
32
7
11

38
32
33
29
11

41
35
46
21
41
35
36
21
44
38
20
24
44
38

VALUE

-7.5
-5.0
-5.0
5.0
-1.0
-1.0
1.0
-0.26666667
-0.26666667
1.0
1.0
-7.5
-1.0
1.0
-0.26666667
-0.26666667
1.0
1.0
-5.0
10.0
-1.0
1.0
0.26666667
-0.26666667
1.0
1.0
7.5
-1.0
0.26666667
-0.26666667
1.0
5.0
-10.0
-1.0
1.0
0.26666667
-0.26666667
1.0
1.0
-7.5
-1.0
0.26666667
0.26666667
1.0
8.0970674
-8.0970674
-1.0
1.0
4.2360996
-4.2360996
-0.1430726
1.0
8.0970674
-8.0970674
-1.0
1.0
4.3028433
-4.3028433
-0.10150823
0.10150823
1.0
8.0970674
-8.0970674
-1.0
1.0
4.2360996
-4.2360996
-0.14307260
1.0
8.0970674
-8.0970674
-1.0
1.0
4.3028433
-4.3028433


ROW

COL

40
40
40
41
41
41
42
42
42
43
43
44
44
44
45
45
45

39
36
24
27
-47
2
27
28
2
32
1
32
4
28
41
48
15

THE SOLUTION
MATRIX
COLUMN
NUMBER

ELEMENT

24
25
26
20
22
21
23

P
P
P
P
P
P
P
P

16
14
15
46
48
13
11

P
P
P
P
P
P
p
P
P

p

12

10
7
8
9
3
1
2
45
47
44
41
38

VALUE

p
p

P
P
P
P

p

P

p

P
Q

Q
Q

ROW

COL

46
46
46
47
47
48
48
48
27
28
31
32
43
44
47
48

41
42
15
44
14
44
17
42
49
49
49
49
49
49
49
49

-0.10150823
0.10150823
1.0
8.8352984
-1.0
1.0
4.6223159
-0.13111821
1.0
8.8352984
-1.0
4.6951448
-0.093026737
0.093026737
-8.8352984
-1.0
1.0

2
3
7
1
4
5
8
1
4
2
3
6
2
4
1
8
9
2
7
10
11
7
10
8
9
12
11 8
12 10
1
1
1
2
2
2
2
3
3
4
4
4
5
6
7
7
7
8
8
8
8
9
9
10
10
10
1
2

3

VALUE
0.0
1631.4448
1631.4448
3299.3447
986.2406
3299.3447
986.2406
418.5939
986.2406
0.0
7443.9683
7443.9683
4150.4514
6515.2640
1631.4448
0.0
1631.4448
986.2406
3299.3448
986.2406
3299.3448
418.5939
986.2406
0.0
7743.9684
7743.9684
4150.4514
6515.2640
407.4740
105.1129
0.0

MATRIX
COLUMN
NUMBER
35
32
27
39
40
36
37

17
43
42
33
34
29
30
4
31
28
19
18
6
5

VALUE
-4.6223159
-0.13111821
1.0
-8.8352984
-1.0
-4.6951448
-0.093026737
0.093026737
-468.75001
-468.75001
-468.75001
-468.75001
-1104.4123
-1104.4123
-1104.4123
-1104.4123

ELEMENT
Q
Q
Q

DELTA
DELTA
DELTA
DELTA
DELTA
DELTA
DELTA
DELTA
DELTA
DELTA
DELTA
DELTA
DELTA
DELTA
DELTA
DELTA
DELTA
DELTA
DELTA
DELTA
DELTA
DELTA
DELTA
DELTA
DX
DY
DX
DY

4
5
6
1
1
1
2
2
2
2
3
3
4
4
4
7
7
7
8
8
8
8
9
9
10
10
10
3
3
9
9

2
3
7
1
4
8
5
1
4
2
3
6
1
8
9
2
7
10
11
7
10
8
9
12

VALUE
0.0
-407.4740
-105.1129
43445.283
6117.8879
6117.8879
26172.822
3698.4027
3898.4027
26172.822
29813.814
151221.70
7396.8052
53067.405
53067.405
6117.8879
43445.282
6117.8875
3698.4027
26172.822
3698.4022
26172.822
-29813.814
151221.70
7396.8045
53067.405
53067.405
-29813.814
180986.33
29813.813
180986.33


A New Approach to High-Speed Logic
W. D. ROWE†

INTRODUCTION

MOST approaches to the construction of high-speed computers and logic systems in the past
have been first, to go from serial to parallel
circuitry, and then to use faster and faster components
to increase the operational speed of the circuitry. High-speed components have reached the state where we are now talking about switching times in the order of the
light-foot. That is, if we have a switching circuit with a
switching time of one millimicrosecond, the theoretical
limit in which we can switch this information from one
point to another requires that the separation distance
be less than one foot. This is the theoretical limit
imposed by the velocity of light and shows that we are
rapidly approaching the limits of switching speed.
It is therefore necessary that we look again at the organization and utilization of logic techniques to determine whether there are not other means of obtaining
high-speed operation from logic circuitry, and thereby
of bypassing the need for extremely fast components.
It has long been recognized! that one solution to this
problem would be the use of canonical logic forms of
either the minterm (sum of products) or maxterm (product of sums) form.2 The practical application of this
technique has long been hampered by the lack of a suitable electronic device capable of responding to the
multiple input-output loadings required. This paper discusses a transistor device, the "Modified NOR Circuit," which is capable of handling up to 25 inputs and 25 outputs with response times of about 80 mμsec. This
device is applied to the design of a high-speed adder and
a high-speed counter.
The design approach used here is termed "parallelparallel" logic. This term arises from the fact that not
only is the function constructed in parallel, but the logic
is also constructed in parallel. A comparison of this logic with others is as follows:

    Serial Logic:             Function: Serial;    Logic: Serial
    Parallel Logic:           Function: Parallel;  Logic: Serial
    Parallel-Parallel Logic:  Function: Parallel;  Logic: Parallel

FUNDAMENTALS OF PARALLEL-PARALLEL LOGIC

With a logic circuit that has an infinite number of inputs and outputs and if all input signals and their complements are available, logic arrays can be constructed
using only one level of logic circuits. The advantage of
this lies in the fact that the complete operation time of a
logic array consists of only one logic circuit propagation
time. This means that the maximum speed of a logic
array is the same as the maximum operating speed of a
single logic component.
To explain this procedure, it can be shown that any logical operation can be written using conventional Boolean notations (where + indicates an OR operation, · represents an AND operation, and an overbar a complement) as a sum of products or a product of sums, i.e.,

    Y = x̄₁x₂ + x₁x̄₂ = (x₁ + x₂)(x̄₁ + x̄₂).   (1)


† Westinghouse Electric Corp., Buffalo, N. Y.
¹ R. K. Richards, "Arithmetic Operations in Digital Computers," D. Van Nostrand Co., Inc., New York, N. Y.; 1955.
² M. Phister, Jr., "Logical Design of Digital Computers," John Wiley and Sons, Inc., New York, N. Y.; 1958.

The left-hand expression in the equation may be represented by a number of AND circuits (first level) working into a single OR gate (second level) (Fig. 1), while
the right-hand (equivalent) expression can be represented by a number of OR gates (first level) working
into an AND gate (second level) (Fig. 2). It is then obvious that if each logic circuit has no limit in its fan-in
and fan-out, all logic can be done on two levels or a
depth of two. Now, the OR operation is electronically
unique, and can often be performed by a simple junction of leads without any logic elements. In the case of
the NOR logic element, to be discussed below, the OR
operation need not be performed at all, since this element accepts multiple inputs and performs the necessary OR operation implicitly.
If all signal complements are not available, a third
level is required for negation. Since most signals come
from bistable registers in parallel-parallel operation,
both the signal and its complement are almost always
available. The insertion of a third level in some inputs
and not others can cause occurrence of certain race conditions if care is not exercised. Memory and counting
circuits also require special consideration.
Previous papers have discussed the use of a particular universal logic circuit called a "NOR" circuit.³,⁴ The logic of this circuit is such that an output exists if, and only if, neither input A, nor B, nor C, etc., is present. (See Fig. 3.) This circuit is universal in that it can perform all logic functions when combined in various forms with other NOR circuits.

³ W. D. Rowe, "The transistor NOR circuit," 1957 WESCON CONVENTION RECORD, pt. 4, pp. 231-245.
⁴ W. D. Rowe and T. A. Jeeves, "The NORDIC II computer," 1957 WESCON CONVENTION RECORD, pt. 4, pp. 85-95.

Fig. 1-Representation of two-level logic: AND-OR.

Fig. 2-Representation of two-level logic: OR-AND. Y = (x₁ + x₂)(x̄₁ + x̄₂).

Fig. 3-NOR logic; (a) diagrammatic NOR; (b) truth table. Logical expression: an output appears at C if neither input A nor input B is present.

Fig. 4-Representation of two-level logic: NOR-NOR. Y = (x₁ + x₂)(x̄₁ + x̄₂).

The right-hand expression of (1) can be expressed simply by operation of several NOR circuits (first level) into another NOR circuit (second level). (See Fig. 4.) (Where the output of the array is fed to other similar circuits the second level may be omitted, as remarked above.)
The particular advantage of using the NOR circuit instead of English logic circuits is twofold. First, only one type of circuit is required, so that only a single propagation constant is necessary; and second, it is possible to build NOR circuits with numbers of inputs and outputs that are quite large, in the order of 25 inputs and 25 outputs. This number is sufficiently large for many applications.

APPLICATION OF PARALLEL-PARALLEL TECHNIQUE TO A FULL ADDER

The examples used so far are trivial in that they are two levels for both the parallel and the parallel-parallel cases. In order to illustrate more fully the advantages of the parallel-parallel method, a slightly more sophisticated example will be examined.
This case is that of a full adder circuit. A basic full adder (Fig. 5) provides the sum of two addend variables and the carry variable from the preceding stage in binary form. The circuit is built up of two half-adders utilizing a maximum of two inputs per logic circuit. The depth is five, so that five propagation times are required. The logic expression for this circuit is shown to be

    s = [x̄y + xȳ + c][xy + x̄ȳ + c̄],   (2)

which is derived directly by the substitution of the output of one half-adder, which can be expressed in two equivalent ways:

    H = xȳ + x̄y = (x + y)(x̄ + ȳ),   (3)

into the input of the second half-adder, expressed as

    s = (H + C)(H̄ + C̄),   (4)

to form (2). Here x and y are the two addend variables and C the preceding carry variable.
Manipulation of this equation brings it into parallel-parallel form so that the equivalent expression is

    s = (x + y + c)(x + ȳ + c̄)(x̄ + y + c̄)(x̄ + ȳ + c).   (5)

The derivation of this equation from (2) is shown in Appendix I.
This equation leads to the circuit of Fig. 6, where multi-input logic circuits are used. The depth of logic is now only two.
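Equation (5) can be checked mechanically: each first-level NOR forms the complement of one factor, and a second-level NOR combines them. A short Python sketch (hypothetical names; complemented inputs assumed available, as the text notes for register outputs):

    def nor(*inputs):
        # output is 1 only if no input is present
        return int(not any(inputs))

    def full_adder_sum(x, y, c):
        xn, yn, cn = 1 - x, 1 - y, 1 - c
        # s = (x+y+c)(x+y'+c')(x'+y+c')(x'+y'+c) realized as a NOR of NORs
        return nor(nor(x, y, c), nor(x, yn, cn),
                   nor(xn, y, cn), nor(xn, yn, c))

    assert all(full_adder_sum(x, y, c) == (x + y + c) % 2
               for x in (0, 1) for y in (0, 1) for c in (0, 1))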

Fig. 5-Full adder consisting of two half-adders, using only two inputs per NOR; depth = 5. (The cascade realizes s = [H + c][H̄ + c̄] with H = (x + y)(x̄ + ȳ).)

Fig. 7-Basic transistor NOR circuit (inputs M through input resistors R_I; outputs N from the collector; positive bias supply V_BB, ground G, and collector resistor R_c returned to the negative voltage supply V_CC).

THE MODIFIED TRANSISTOR NOR CIRCUIT

In order to facilitate the use of parallel logic it is necessary to have a circuit with as large a number of inputs
(fan-in) and outputs (fan-out) as possible. Since it is extremely desirable to use only a single logic circuit, the
transistor NOR circuit was selected for investigation.
The basic transistor NOR circuit is shown in Fig. 7.
A negative voltage on any of the inputs, M, is sufficient
to cause the transistor to saturate and supply a ground
potential signal to the outputs, N. The absence of a
negative voltage on the inputs causes the positive bias
voltage to maintain the transistor at cut-off. Under this
condition, the transistor being in a very high impedance
state, the outputs see a negative voltage equal to
    output voltage = Vcc·R_I / (x·R_c + R_I),

where R_I is the input resistor value of a NOR circuit being driven by one of the NOR circuit outputs, N; R_c is the collector load resistor; and x is the number of outputs, N, actually connected to the inputs of other NOR circuits. Only one NOR circuit input is driven by one NOR circuit output. This equation shows that the output voltage is reduced as more inputs of succeeding circuits are connected to the outputs of the NOR circuit in question.

Fig. 6-Full adder consisting of parallel-parallel logic:
s = (x + y + c)(x + ȳ + c̄)(x̄ + y + c̄)(x̄ + ȳ + c).
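The loading relation is a plain voltage-divider computation, so the fan-out limit can be tabulated directly; a small Python sketch (component values hypothetical):

    def nor_output_volts(vcc, r_i, r_c, x):
        # loaded "one" level: Vcc divided between R_c and x paralleled
        # input resistors R_I of the driven circuits
        return vcc * r_i / (x * r_c + r_i)

    # sweeping x shows the "one" level sagging as fan-out grows, which
    # is why the unmodified circuit is limited to a fan-in, fan-out near six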

In the example, fewer logic circuits are required in the parallel-parallel case, since most of the logic is accomplished in the inputs of the logic circuits and the interwiring. This does not always hold true. The worst case of parallel-parallel logic has a maximum of 2^(n−1) first-level logic circuits, where n is the number of inputs to each first-level logic circuit. There are q second-level logic circuits, where q is the number of desired outputs. Fortunately, most applications are so specialized that only a few first-level logic circuits are required. However, in the worst case, the second-level logic circuits must handle 2^(n−1) inputs.
It is possible to reduce the number of logic circuits by compromising on depth, as was done by Weinberger and Smith.⁵

⁵ A. Weinberger and J. L. Smith, "A one-microsecond adder using one-megacycle circuitry," IRE TRANS. ON ELECTRONIC COMPUTERS, vol. EC-5, pp. 65-73; June, 1956.

If the minimum voltage that appears under the fully
loaded condition is called a "one" (this voltage being
sufficient to saturate a succeeding NOR), and the absence of this voltage (ground) is called a "zero," the
logic conditions of Fig. 3 are fulfilled and this is the basic
NOR operation.
Since all inputs are resistive, and most of the logic
is accomplished in the input network, only a small number of transistors are required as compared to the number of resistors. This is certainly desirable, as most of the
logic is now accomplished by the wiring interconnections and resistors, which can be inexpensive, reliable,
and miniaturized elements.
For extreme reliability, the basic NOR circuit seems
to have a maximum fan-in, fan-out of six, which is

280

1959 PROCEEDINGS OF THE WESTERN JOINT COMPUTER CONFERENCE

Fig. 8-The modified transistor NOR circuit-type A (a 24-volt Zener diode is connected across the transistor; inputs 1-25, outputs N 1-25; supplies: +24 volts, ground, and Vcc = −250 volts).


fairly small when one is considering parallel-parallel
logic. In order to increase the fan-in and fan-out to
about 25, a more suitable number, the basic NOR circuit has been modified so that the output voltage remains at essentially power-supply potential regardless
of output loading. This allows the circuit to drive many
more outputs than is possible in the basic circuit.
One means of modifying the NOR circuit is shown in
Fig. 8. A 24-volt breakdown Zener diode is placed across
the collector and emitter of the transistor such that the output will never fall below 24 volts when the transistor is cut off (output load is restricted to never exceed a condition where the output voltage will fall below this value).
Under this condition the current derived at the high
voltage Vcc acting through the resistor is shunted between the load and the Zener diode. As the load varies
(due to changes in the number of outputs used) the excess current is shunted through the diode so that the
load is continually driven from a 24-volt output signal,
during a "one" output condition.
It is certainly possible to replace the diode with a resistor whose value is chosen such that the output voltage is always that of the fully loaded condition. This reduces the circuit flexibility since a different resistor value is required for every different condition of output loading.
A basic disadvantage of this circuit is its high power dissipation, arising from the high-voltage power supply acting across resistor R_c. This disadvantage has been eliminated by the design of a second type of modified NOR circuit.
This circuit is based on the fact that under certain conditions a transistor makes an excellent constant voltage source. This operation is described in Fig. 9. A constant emitter current source is derived by the constant voltage difference between bias voltages (V_EE and V_CE) acting across emitter resistor R_E. This constrains the maximum collector current to be

    I_cmax = A·(|V_EE| − |V_CE|) / R_E,

where |V_EE| > |V_CE| and A is the current gain of
Fig. 9-(a) Constant voltage circuit; (b) circuit operation plot (milliamperes vs volts, showing the ordinary load line and the constant voltage portion of the characteristic).

the transistor. Below this value a range of constant potential exists. An actual plot of this can be made by observing the meters in the circuit of Fig. 9(a) when the
load resistor RL is varied. This plot is shown in Fig. 9(b)
for a particular case where I_cmax = 20 ma and V_CE = −24
volts. The constant voltage portion is shown, and this is
the operating range utilized.
This circuit is then combined with the NOR circuit to
make a very effective modified NOR circuit. (See Fig.
10.) Transistor Tl is the basic logic switch, and transistor T2 is the constant voltage source. The power dissipation of this circuit is considerably less than that of
Type A circuit.
The silicon diode placed in the forward direction between base and ground prevents the base from being overloaded when a plurality of inputs have "one" signals applied. This connection makes use of the high forward-voltage drop of most silicon diodes. This circuit has actually been constructed using micro-alloy diffused-base transistors. The circuit has a simultaneous fan-in and fan-out of 25 with an average propagation time on the order of 80 mµsec.
In actual testing of circuits in various applications, evidence indicates that the average propagation time is a more significant measure of operating speed than rise, fall, and storage times considered individually. Average propagation time is measured by taking a series string of n logic circuits and applying a pulse to the first stage. The average propagation time is the time delay for the pulse to be propagated from the first to the

Fig. 10-Modified transistor NOR circuit-type B.
Fig. 11-Three-stage high-speed carry circuit.

last stage divided by n, the number of logic circuits. In any particular NOR circuit the actual propagation time varies only a few per cent from the average in the majority of cases. Since rise, fall, and delay times do not add directly to give the actual operating time of compounded circuits, average propagation time seems to be a more significant measure of circuit operating time.
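The measurement just described can be put as a short sketch; the Python below (ours; the 3 per cent spread is an assumption consistent with the "few per cent" variation noted) models a string of n circuits and divides the first-to-last delay by n:

```python
# A small sketch (ours) of the average-propagation-time measurement: a
# pulse traverses n cascaded logic circuits, and the total first-to-last
# delay is divided by n. Per-stage delays vary around the 80-mµsec
# (80-ns) average quoted in the text.

import random

def average_propagation_time(n, mean_delay=80e-9, spread=0.03):
    stage_delays = [mean_delay * (1 + random.uniform(-spread, spread))
                    for _ in range(n)]
    return sum(stage_delays) / n   # total chain delay divided by n
```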


APPLICATION OF PARALLEL-PARALLEL LOGIC TO HIGH-SPEED ADDITION

In order to illustrate the effectiveness of parallel-parallel logic, an application consisting of the circuitry for the carry operation of a high-speed adder will be shown in detail.
One of the major difficulties in designing high-speed adding devices is the drawback of ripple-through carry.1 When making a binary addition, the most significant bit is dependent on the carry signal from the preceding stage, which is in turn dependent on the carry signal of its preceding stage, etc. This means that the most significant bit is dependent on the condition of the least significant and every other bit in order.
It has been shown1,5 that the expression for a carry signal from any particular stage K of a binary adder may be given as
CK = AK·BK + (AK + BK)·CK-1,   (7)

where CK is the carry signal from stage K, CK-1 is the carry signal from the preceding stage, and AK and BK are the addends associated with bit K. If the substitutions

DK = AK·BK,   RK = AK + BK   (8)

are used for convenience, the carry signals for a binary adder of n bits may be expanded. Then

C0 = C0.   (9)

Generally, since there is no carry into the least significant stage, C0 is 0.

C1 = D1 + R1C0   (10)
C2 = D2 + R2D1 + R2R1C0   (11)
C3 = D3 + R3D2 + R3R2D1 + R3R2R1C0   (12)
· · ·
Cn = Dn + RnDn-1 + RnRn-1Dn-2 + · · · + RnRn-1 · · · R2R1C0.   (13)

This shows that the carry for any bit n is immediately
available from a single logic array for each bit. Furthermore, some of the logic expressions that are used in determining the carry of a particular bit are used in all the
succeeding bits, so that much of the circuitry of each
bit is necessarily repeated throughout. Actual circuitry
for a three-bit carry circuit is shown in Fig. 11. The
carry for each bit is arrived at after only two levels of
logic circuitry, regardless of the bit position. Also, the
adder of Fig. 6 may be used as the adder circuit, with
the negation of the carry derived directly from similar
circuitry. Total addition time is then exactly four propagation times, or approximately 320 mµsec, regardless
of the number of bits in the adder.
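As an illustration of Eqs. (10)-(13), the following Python sketch (ours, not the NOR circuitry of Fig. 11) forms every carry directly from the Dk and Rk terms and, for comparison, also computes the same carries by ordinary rippling:

```python
# A sketch (ours) of the expanded carry equations (10)-(13): each carry
# Ck is formed directly from Dk = Ak·Bk and Rk = Ak + Bk, with no ripple
# from stage to stage, mirroring the single logic array per bit.

def lookahead_carries(a_bits, b_bits, c0=0):
    """a_bits, b_bits: lists of 0/1, least significant bit first."""
    d = [ak & bk for ak, bk in zip(a_bits, b_bits)]   # Dk = Ak·Bk
    r = [ak | bk for ak, bk in zip(a_bits, b_bits)]   # Rk = Ak + Bk
    carries = []
    for k in range(len(d)):
        term, prod = d[k], r[k]
        for j in range(k - 1, -1, -1):   # Dk + Rk·Dk-1 + Rk·Rk-1·Dk-2 ...
            term |= prod & d[j]
            prod &= r[j]
        carries.append(term | (prod & c0))   # ... + Rk···R1·C0
    return carries

def ripple_carries(a_bits, b_bits, c0=0):
    """The ripple form Ck = Dk + Rk·Ck-1, for comparison."""
    c, out = c0, []
    for ak, bk in zip(a_bits, b_bits):
        c = (ak & bk) | ((ak | bk) & c)
        out.append(c)
    return out
```

The two functions agree for every pair of addends; the difference is that each element of lookahead_carries corresponds to a fixed two levels of logic, which is the point of the parallel-parallel arrangement.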
For the basic carry circuit the number of logic circuits (L) is

L = Σ (k + 2)   (sum over k = 1, ..., n)
  = n(n + 5)/2 = (1/2)n^2 + (5/2)n,   (14)

where n is the number of bits. The maximum number of inputs required by any logic circuit is n + 1, and the maximum number of outputs is the maximum over k of k(n - k + 1), which is n(n + 2)/4 if n is even, and (n + 1)^2/4 if n is odd, where k is bounded by 1 ≤ k ≤ n. Then for the addition of two 20-bit words (without sign) a logic circuit with a fan-in of 21 and a fan-out of 110 is required. (The fan-out can be kept below 25 by using a logical design other than that given in Fig. 11. See Appendix II.) The carry circuit will require 250 NOR circuits. This compares with 80 logic circuits (fan-in, fan-out of 3) for a particular twenty-bit ripple carry circuit in use in the NORDIC II computer (4 circuits per bit).4


Fig. 12-Reversible binary counter.

Eq. (14) shows that by compromising on depth and
thereby speed, the 20-bit carry circuit can be broken
down into two 10-bit carry circuits, each requiring 75
logic circuits with a fan-in of 11 and a fan-out of 30.
However, the depth is now four, half the speed of the
20-bit carry case. For a depth of eight, a total of 100
logic circuits are required, since four 5-bit carry circuits
may be used. Only 6 inputs and 9 outputs are required
for each level. This demonstrates the flexibility that is
available by compromising on speed. An example of
what can be done with only a limited fan-in and fan-out
is the SEAC high-speed adder.5
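The bookkeeping behind these trade-offs is compact enough to tabulate; the sketch below (ours) evaluates Eq. (14) and the fan-in and fan-out bounds for a carry group of n bits, reproducing the counts quoted above:

```python
# A small sketch (ours) of the counts of Eq. (14) and the text: a carry
# group of n bits needs n(n+5)/2 NOR circuits, a maximum fan-in of n+1,
# and a maximum fan-out of max over k of k(n-k+1).

def carry_group_cost(n):
    circuits = n * (n + 5) // 2
    fan_in = n + 1
    fan_out = max(k * (n - k + 1) for k in range(1, n + 1))
    return circuits, fan_in, fan_out

print(carry_group_cost(20))  # (250, 21, 110): one 20-bit group, depth two
print(carry_group_cost(10))  # (75, 11, 30): two such groups, depth four
print(carry_group_cost(5))   # (25, 6, 9): four such groups, depth eight
```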
APPLICATION OF PARALLEL-PARALLEL LOGIC TO COUNTING CIRCUITS

Some circuits, such as counter chains, do not fall directly into easy application of parallel-parallel logic. It is possible to treat such operation in a similar manner by making all counters and similar devices operate by parallel logical means instead of by direct sequential means. To illustrate this point, an ordinary binary counter has been examined to determine the logic involved in switching each stage by a count pulse. Then, by considering each counter stage as a logical input and determining from this the necessary changes in each counter stage for the next count pulse, we can effect a complete count cycle in only two propagation times, plus the switching time of a counter.
An example of such a binary counter is shown in Fig.
12 for 4 bits. It is reversible in that it can count up or
down. The condition of the counter before a count
pulse determines which counters should be changed
when the count pulse appears. When all the counters
have been changed, the new condition determines the
change for the next succeeding count pulse, etc.
The limit of the size of a binary counter of this nature
depends upon the maximum allowable fan-in, fan-out
of the NOR circuits used. Thus for a 25-input-output
NOR circuit, a 24-bit, simultaneous advance, binary
counter is possible. It is also obvious that the counter
code used is restricted only by the logic used. Therefore,
a counter of any desired code is possible.
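A sketch of the count logic (ours; the NOR realization of Fig. 12 is not reproduced here) makes the parallel advance explicit: the present state alone selects, for every stage at once, whether that stage is to be switched by the next count pulse:

```python
# A sketch (ours) of simultaneous-advance count logic: for an up count,
# bit k toggles iff all lower bits are 1 (bit 0 always toggles); for a
# down count, iff all lower bits are 0. All toggles are decided in
# parallel from the present state and applied together on the pulse.

def next_count(bits, up=True):
    """bits: list of 0/1, least significant bit first."""
    out, ready = [], 1
    for b in bits:
        out.append(b ^ ready)             # toggle where selected
        ready &= b if up else (1 - b)     # are all lower bits 1 (or 0)?
    return out
```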
RACING AND TIMING CONDITIONS

When every input signal is required to travel in paths
of equal length to the output (i.e., all inputs extend
through the same depth), the only conditions of racing
that can occur are due to the variation of the propaga-

tion time of any logic circuit from the average propagation time. This condition is easily bypassed since
parallel-parallel operation is exactly synchronous. This
means that all output signals appear simultaneously
after

Fig. 5-Structure of a subcontrol block. (Blocks shown: Enter System Significant Block; Decisions Based on Timing and System Load; Store Addresses in Normal Returns of Subroutines to be Used; Leave Subcontrol.)

comparison of priority to time available; however, work load could also be a factor. Once a task has been selected, it is necessary to determine how this task should be carried out. There are actually two types of selections which must be made at this point: a selection of method based on the same quantities on which selections of tasks were based, and a selection which is completely independent of timing requirements but based on the state of a given task. The first type of selection is concerned with quantity vs quality. In other words, as demands on time increase, accuracy may be sacrificed in order to increase the number of outputs computed during a computational interval. This type of decision
must be based on parameters computed by the central
control since it is concerned with the strategy being
employed. The second type of decision is concerned
with how much has already been done on a certain task
and on the basis of past results what should be done
next. During initial simulations there will be the need
for the analyst to select arbitrarily a given method he
wishes to evaluate. For this purpose a manual selection switch not under control of the subcontrol is
included.
One final duty of the subcontrol is to place all inputs
and outputs of the system significant block on tape in
the binary mode if they are desired for historical records
and use in future analysis of system operation.
The central control block is only entered at the beginning of each basic timing interval. (See Fig. 6.) At this time, the subcontrol-system significant combinations which are to be used, the order in which they should be used, and the amount of time allotted to each are determined. In order to accomplish this, the central control must be able to assess the work load expected during this and succeeding intervals, and the relative importance of each task to be performed by the system
significant blocks. This type of information would be
based on such quantities as probability of a task obtaining its objective, the size of the waiting line for a given
task, time required by each system significant block, and
necessary data rates allotted to a given task. It would
also be the duty of central control to interpret external
commands which have been supplied since the last interval. This would include information regarding functioning of different sections of the system, changes in

strategy, and instructions concerning the deletion of
certain data.
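The paper does not specify an allotment algorithm, but the flavor of the decision can be suggested by a toy sketch (entirely ours, with assumed task tuples): rank the subcontrol-system significant combinations and allot time in order until the interval is exhausted:

```python
# A toy sketch (entirely ours; no algorithm is given in the paper) of the
# interval planning described above: order tasks by assessed importance
# and allot each as much of the basic timing interval as can be spared.

def plan_interval(tasks, interval):
    """tasks: (name, importance, time_needed) tuples; returns allotments."""
    schedule, remaining = [], interval
    for name, _, need in sorted(tasks, key=lambda t: -t[1]):
        allotted = min(need, remaining)
        if allotted > 0:
            schedule.append((name, allotted))
            remaining -= allotted
    return schedule
```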
Such things as error checks, system performance
checks, and diagnostic programs will be handled by
central control or at least controlled by central control.
Signals indicating errors either in the computers or in
other equipment such as data links will be interpreted
by central control and the necessary action taken. This
action would include such things as bypassing computations which cannot be handled with a given part of the
system nonoperable, the switching-in of spare equipment, the calling-in of diagnostic programs to help locate the cause of an error, and the issuance of corrective
instructions, if possible, where incorrect outputs have
been given. System performance checks would indicate
the way the system is operating and could be used to
determine if an alternative strategy is required. Some
examples of the types of decisions which central control
may have to make are:
1) An external signal indicates that one launch site
cannot be used. What action should be taken? The
block which performs launch site selection would
have to be notified; also, time must be allotted during the next few timing intervals for reassignment
of launch sites for any interception already planned
using the inoperative site.
2) There is a large number of targets requiring prediction and smoothing. How should the next time
interval be used in order to reduce the size of this
waiting line? During the next interval, as much
time as possible would have to be allotted for the
prediction system significant block. Hence, the
other blocks would be allotted a minimum amount
of time or bypassed completely. The parameters sent
to the prediction subcontrol would be such that
all prediction would be done by the shortest
method.
The exact structure of the central control cannot be
specified in detail since the functions it must carry out
are dependent on strategy. However, ideally, its function is to apply a strategy to the present conditions in
order to produce a course of action which is consistent
with immediate operational requirements and results in
the least probability of system saturation.

Fig. 6-Functions of central control. (Inputs: representative results and inputs, size of waiting lines, data rates, error alarms, data from control center, system performance results. Outputs: ordering and time allotments for SS blocks during next interval, parameter changes, external outputs to monitors.)

SUBROUTINE DESIGN

The macro instructions which compose a system significant block have been described as calling sequences to subroutines. (See Fig. 7.) These subroutines are divided into three levels according to their use in the system, and hence the manner in which they must be written. The calling sequences of those in the top level appear only in system significant blocks, and usually in only one block. The function performed by this type of subroutine is usually system dependent; that is, the function could be done in many ways, and simulation and analysis will determine which method is best for a given mode of operation. Calling sequences of subroutines from lower levels could appear within a subroutine at this level. The calling sequences of a second level subroutine could appear in a system significant block or in a top level subroutine. The functions performed by a second level subroutine are explicitly defined mathematical or logical tasks, and their use does not directly depend on the mode of operation of the system. This level of subroutine can only contain calling sequences of subroutines from the lowest level.
All subroutines in the top two levels are written in closed form, and the locations of all inputs and outputs are specified in the calling sequence. Provision is made within each subroutine for storing all inputs and outputs on tape under the control of its calling sequence, so that any data needed for analysis purposes are readily obtained if desired. One of the differences between these two levels of subroutines is the way in which they are written; that is, the second level programs are completely general, whereas top level programs can be more specialized.
The third level of subroutines consists of utility programs, and the calling sequences of this type of program appear only within other subroutines of higher levels. All utility programs are written in a general closed form, and no provision for tape storage of inputs and outputs is provided.
Subroutines from all levels have error returns for overflow, division failure, and nonacceptable inputs.

Level | Use | Form | Location of Calling Sequences | Subroutines Used by a Subroutine | Tape Output Provided | Error Returns
1 | Dependent on mode of system operation | Specialized | SS | Levels 2 or 3 | Yes | Yes
2 | Not dependent on mode of system operation | General | SS or Level 1 | Level 3 | Yes | Yes
3 | Utility programs | General | Levels 1 or 2 | None | No | Yes

Fig. 7-Subroutine levels. (SS = system significant block.)

DATA-STORAGE DESIGN

Another requirement of program design is the specification of the data storage. The data storage for a real-time system must be set up with the specific purposes of the system in mind and be compatible with the program design. Output requirements are one of the important considerations in designing data storage. System significant blocks and subroutines on the top two levels have
provision for placing their inputs and outputs on tape.
The data supplied by individual subroutines are necessary for analysis of the programs involved, whereas
once the system is actually in operation, the output supplied by the subcontrol blocks is of prime importance.
Since, in many cases, there is a minimum of time available for output in an operating system, the data-storage
design should be such that the system significant block
outputs require a minimum amount of computational
time.
Even after a system is operating, it may be necessary to change or modify the strategy being employed.
In order to expedite this type of change, separate storage for control data is advisable. In systems where random-access memory is limited, data not being used
would be transferred to some auxiliary type storage such
as drums. The data storage in this case would have to
be designed with this transference of data in mind.
SIMULATION EXPERIMENTS

The first units of the system to be simulated are the
subroutines. (See Fig. 8.) The purpose of this phase of
simulation is to evaluate the mathematical models used


Phase I: a) Subroutines. b) Groups of subroutines.
Phase II: a) System significant block with all or part of its subcontrol. b) Groups of system significant blocks, each with its subcontrol and part of central control.
Phase III: Complete system including peripheral equipment.

Fig. 8-Ordering of simulation experiments.


over the range of inputs expected. In some cases, this
simulation consists of connecting several subroutines
together. The second phase consists of system significant block simulation accompanied by its subcontrol. During this phase, the different methods available
for performing a function are assessed and the optimum
selected. More than one method per function is kept in
some cases because of time considerations, e.g., time
available vs accuracy needed. Sections of the control program used during this phase become part of the subcontrol of the blocks being simulated. The third phase,
the entire simulation, is reached by successively combining system significant blocks and adding the appropriate sections of the central control. Adequate evaluation of programs at any level is impractical without
realistic inputs which are most easily supplied by other
subcontrol system significant block combinations and

central control; thus complete system simulation seems
necessary. Also, a synthetic environment would have
to be provided by peripheral subsystem simulation of
such things as radar and missile.
CONCLUSION

For the PLATO antimissile system Fire Control Center, a program design was developed which embodies three decision levels: central control, subcontrol, and system significant block. This versatile program design provides for maximum machine utilization and permits a trade-off between quality and quantity of computation, as well as expediting programming, debugging, and simulation.
The general concepts of this program-design approach should be applicable to other real-time computing systems in the business and scientific fields.

Pattern and Character Recognition Systems-Picture
Processing by Nets of Neuron-Like Elements
L. A. KAMENTSKYt
INTRODUCTION

SPATIAL pattern recognition, of which the recognition of alphanumeric characters is a subclass, is
an important and practical problem. More efficient
coding of transmitted pictorial information and more
efficient utilization of humanly produced information
could result from its solution. The problem of pattern
recognition has been stated as the assignment of a meaningful code to a recognizable structure in a set of signals.1 The signals, in this case organized spatially, are
the result of a transformation from a visual picture field
P to an electrical representation of this field. The points
of this signal field S correspond to a characteristic of
points in the picture. In this study, the reflectivity of
given picture-point areas is quantized as black or white
as a basis for a two-state electrical-signal representation
of points of a pattern.
The total information content of this signal field is
certainly less than that of the original picture. Useful
pattern recognition requires that the code assigned to a
pattern have even less information content than the
signal field. For example, a code assigned to a pattern
of a number in the set of 10 decimal-number symbols

t Bell Telephone Labs., Inc., Murray Hill, N. J.
1 L. D. Harmon, "Computer simulation of pattern recognition," presented at Symp. on Pattern Recognition, Ann Arbor, Mich.; October 22, 1957.

indicates the number value but need not reflect the size,
position, nor quality of the original pattern. The pattern-recognition machine is thus a device for performing
an information-destructive transformation on the signal
field to yield an output code assigned to the value of
the pattern. Internally, the machine may perform a
sequence of information-destructive transformations.
Furthermore, the machine may internally produce
transformations yielding parameters of the picture as an
intermediate step, rather than the output code directly.
Such parameters may have a physical meaning. For
example, in useful number recognition the presence of
the pattern parameters, straight lines, openings in
curved lines, or corners is significant for generating the
output code.
This paper describes an approach to the solution of
pattern recognition that may be characterized as
"spatial operations by neuron-like elements." Signal
fields are transformed by a predetermined network of
threshold-responsive elements. These elements have
been called neuron-like in that they have many inputs
and a single all-or-nothing output; they are connected
in spatial arrays with excitation or inhibition gating
between the signal field and the elements, or between
the output and inputs of elements. More correctly, they should be called spatial neurons, since only their spatial properties approach the assumed properties of neurons.

(The coined name "speuron" will be used in this paper.)
As will be shown, spatial gating only has been used, and
none of the complicated temporal properties of neurons
have been simulated. However, speurons have really
been applied for their usefulness in performing many
different information-destructive transformations. They
operate on all signal-field points simultaneously under
the centralized control of a program. A model of a
simplified speuron net was built. Included in this paper
are illustrations, from the output of this model, of useful
pattern filtering (noise reduction, width reduction, etc.)
and pattern-parameter (straight lines, corners, etc.) extraction transformations.

CLASSIFICATION OF METHODS

Solutions of the pattern-recognition problem may be classified by the nature of the required information-destructive transformations. Remembering that the input to a pattern recognizer is the signal field defined in the first paragraph, and that the output is a code assigned to represent a class of patterns having a recognizable structure, let us develop the place of the neuron-like net.

Element Matching

The signal field is resolved into n independent elements, and each input pattern is represented by an n-bit code. Recognition is effected by matching an input code with codes representing the configurations of independent elements of each of the set of recognizable patterns. All possible patterns would be represented by a code table containing ≤ 2^n entries. Logical2 or statistical3 techniques may be used to find the correct or best fit of the input and recognizable pattern fields.

Feature Matching

The individual elements of many patterns, however, are not independent, since recognizable patterns contain constraints on form. Parts of patterns can be classified in terms of independent groups of elements, instead of by individual elements. Some of these groups include the geometrical parameters: straight, curved, closed or opened, and breaks or corners. These may be independent of absolute position, size, noise, and some changes of form. We shall call these "features of the pattern." Recognition of patterns is possible if a sufficient set of relevant features can be extracted from the signal field.

Searching for Features

This group of pattern recognizers effects feature-extracting transformations by searching specific areas of the signal field. Edge tracing4 can be applied to following the contours of a pattern, and changes in direction of a tracing head can be used as the features of the pattern. In recognition of double-dot writing,5 searches are made for written lines in seven specific areas of the signal field. In effect this searching system codes the feature, "Are partially- or fully-closed loops of black area present in this pattern?"

Feature Extraction by Spatial Transformations

The signal field is usually resolved into n independent elements. During a spatial transformation, the state of each element is examined. The state of this element and other elements whose coordinates are specified with respect to the examined one are functionally related by a specific rule to determine the transformed state of the examined element. There are many rules for transforming patterns.6-9 A study of Fig. 1 may help to clarify this. Three such transformations are shown here. Consider neighbors in this case as adjacent horizontal-vertical elements.

Fig. 1-Different transformations of the above pattern. (S is the point under examination and has value one if black; N is the number of its nearest neighbors that are black.)

Although one investigator9 has proposed a parallel transformation system, all experimental studies of spatial transformations have been made using a digital computer.

2 "Electronic Reading Automation," The Solartron Electronics Group, Ltd.
3 H. T. Glantz, "On the recognition of information with a digital computer," J. Assoc. Comp. Mach., vol. 4, pp. 178-188; 1957.
4 J. Loeb, "Communication theory of transmission of simple drawings," in "Communication Theory," Willis Jackson, ed., Butterworth Scientific Publs., London, Eng., pp. 323-325; 1953.
5 T. L. Dimond, "Devices for reading handwritten characters," Proc. EJCC, pp. 232-237; 1957.
6 O. G. Selfridge, "Pattern recognition and modern computers," Proc. WJCC, pp. 91-93; 1955.
7 G. P. Dinneen, "Programming pattern recognition," Proc. WJCC, pp. 94-100; 1955.
8 L. Cahn, R. A. Kirsch, L. C. Ray, and G. Urban, "Experiments in processing pictorial information with a digital computer," Proc. EJCC, pp. 221-229; 1957.
9 S. H. Unger, "A new type of computer oriented toward spatial problems," Proc. WJCC; 1958.


Each point must be examined sequentially until the complete field has been transformed. Although all spatial transformations to be described can be done on a digital computer, a relatively long time is required to relate spatial arrays of signals. As a result, parallel operations have been thought essential to performing useful pattern recognition.1,10


Spatial Computers


Unger 9 has proposed a program-controlled computer
consisting of a spatial array of identical arithmetic
modules which are connected to adjacent modules and
to corresponding signal field points. Each module consists of a one-bit accumulator, about six bits of memory,
and associated logic. The logical arithmetic operations
this system could perform have been discussed by Unger.
The utility of serial computing has been questioned and has led to proposals for spatial computing. It is proposed here that the utility of doing arithmetic operations at all be questioned as an answer to the pattern-recognition problem. Threshold-responsive elements (speurons) are proposed as one other answer, because:

Fig. 2-The basic neuron-like element.

1) They are simple; a model was constructed using a
transistor per speuron.
2) They can be connected in different ways; each element can have many inputs; the associated logic
between connections is simple.
3) They can transform by many different rules with
a given connection by changing a common supply
voltage.
4) They can be used to recognize patterns by successively transforming a signal field under program
control.
It is demonstrated here that certain speuron operations will clean up patterns or extract certain features; that is, an output state or given sequence of states will appear, after a given sequence of transformations, only at points in the signal field where a certain feature is present. It is hoped that speuron subroutines will be found to uniquely extract sufficient sets of features of useful sets of patterns. One subroutine for each feature will be run sequentially on the machine. The presence or absence of a given set of relevant features will be the basis for recognizing patterns, like numbers or letters.
NEURON-LIKE ELEMENTS

The basic element of the neuron-like nets is shown in Fig. 2. The general elements Nj may have any number n of inputs Xij, each taking on the value 0, +1, or -1. All elements are controlled by setting a threshold Z; Z can take on the values 0 to n. The elements have a single output Yj. Yj is either 0 or 1 based on the criteria:

Yj = 1 if Σ Xij ≥ Z,
Yj = 0 if Σ Xij < Z,

with the sums taken over i = 1, ..., n.

10 O. G. Selfridge, "Computers and pattern recognition," presented at meeting of Amer. Assoc. Advance Sci.; December 27, 1956.


In the general case, Xij and Yj are different for each
element, but Z is the same for all elements of a given net.

Input to and Output from the Elements
Other logical elements must be connected to the basic
element to provide for successive transformations and
to increase the number of operations of a speuron net.
A general form for the input and output of speurons is
shown in Fig. 3. The inputs Iij used to control gates can
take on the value 0 or 1. The gated signals Ci, the control inputs to the speurons, can take on the values 0, +1, or -1. C1 is not necessarily equal to C2 or C3, etc.; however, all C1's, C2's, etc., all Z's, and flip-flop reset signals R are connected together. Xij may be related to Ci and Iij by the following table:

Ci | Iij | Xij
+1 | 0 | 0
-1 | 0 | 0
0 | 0 | 0
+1 | 1 | +1
-1 | 1 | -1
0 | 1 | 0
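Put as code, the element and its input gating take only a few lines; this Python sketch (ours) combines the threshold criteria of Fig. 2 with the gating table above:

```python
# A minimal sketch (ours) of one neuron-like element: each gated input
# contributes Xij = Ci when its gate Iij is 1 and 0 otherwise, and the
# output Yj is 1 iff the sum of the Xij reaches the common threshold Z.

def speuron(c_controls, i_gates, z):
    """c_controls[i] in {-1, 0, +1}; i_gates[i] in {0, 1}; returns Yj."""
    total = sum(c * g for c, g in zip(c_controls, i_gates))
    return 1 if total >= z else 0
```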

NET CONNECTIONS

Two sets of net connections have been studied. In
the first, called the nearest neighbor connection, a field
point and neighbors surrounding it are connected to a
corresponding speuron. In the second, called the directed connection, a field point and points along specific
radii emanating from this point are connected to a corresponding speuron.

Fig. 3-General input and output of the elements.
Fig. 4-Nearest neighbor connection.

Nearest Neighbor Connections

Each of nine inputs Iij of a threshold-responsive element Dj is connected to a point in the signal field Sj corresponding to the position of Dj and to one of its eight nearest neighbors. This connectivity may be extended to next-nearest neighbors, etc. In a simplification of this connection, shown in Fig. 4, each of five inputs Iij to speuron Dj is connected to Sj and to one of its four horizontal-vertical nearest neighbors. The latter connection was used in the model to be described. It should be pointed out that neighboring and corresponding input points add or subtract in a manner depending on the values of Ci. To effect successive transformations, the output of each element Oj is connected back to the corresponding input Sj through a gate which is opened for each transformation cycle. It is assumed that an input signal Sj can change to the same state as Oj after each transformation cycle.
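A transformation cycle of this simplified net is easy to simulate; the sketch below (ours, not the model's circuitry) sums the four horizontal-vertical neighbors with weight +1 and the self point with a chosen weight (+1 exciting, 0 ignoring, -1 inhibiting, as in the E, 0, and I settings described later), thresholds at Z, and returns the new field:

```python
# A sketch (ours) of one cycle of the five-input nearest neighbor net of
# Fig. 4 on a binary signal field: each neighbor counts +1, the self
# point counts self_weight (+1, 0, or -1), and each output point is 1
# iff its sum reaches the threshold Z.

def transform(field, z, self_weight):
    rows, cols = len(field), len(field[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            total = self_weight * field[r][c]
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                if 0 <= r + dr < rows and 0 <= c + dc < cols:
                    total += field[r + dr][c + dc]
            out[r][c] = 1 if total >= z else 0
    return out

# Successive transformations feed the output back as the next input
# field, as the model does by copying the lamp pattern onto the toggle
# switches.
```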

Directed Connection

The inputs of an element Dj are connected to the corresponding signal point Sj and to signal points along radii emanating from Sj. A directed connection with horizontal-vertical radii is shown in Fig. 5. A range of influence and the number and the angles of the radii must be defined for each net. The directions of influence on all speurons are controlled by the voltages Ci. There is one common control for each radius and one control for the connection to the corresponding field point. Transformations can be generated which extract the feature, "Does a signal field point lie within a partially- or completely-closed loop of black area, or is it part of a black area?" Settings of Z will produce transformations which depend on the width of pattern lines or on the number of pattern lines in a given direction.
A SIMPLE MODEL

A 30-element model neuron-like net was built to
study the feasibility of this device and to allow the au-

Fig. 5-Directed connection. (Control signals C0-C4 for the self, left, and right elements; threshold Z; reset R; output Oj.)

thor to experiment with different transformations. Although all transformations can be simulated on a computer, the use of the model enabled the transformations to be evaluated faster.
The model's input signal-field toggle switches, neuron-like elements, and lamp outputs were arranged in a six-by-five matrix. The input toggle switches were each connected to four leads with phone-tip ends to allow any input to be connected to the input jacks of any speurons. Each element is a Kirchhoff adder driving a biased transistor. A circuit diagram of five elements and their input-output is shown in Fig. 6. These correspond to the simple Nj elements shown in Fig. 2, except for one input which can be controlled (as in Fig. 3) to be made positive, negative, or zero if the signal point connected to that input is one. This input was always connected to

Fig. 6-Circuit diagram of 5 model elements with nearest neighbor connection.

the signal pattern position corresponding to the neuron-like element itself. This resulted in positive summing inputs from the four neighboring field points and adding,
subtracting, or zero inputs from the corresponding or
self-position. Four Z settings were built into the model.
With the three self-input settings, 12 operations are
possible. A lamp was used to indicate if the output pattern of each element was a one. Either the input or output pattern can be displayed on the lamp array. A photograph of the model is shown in Fig. 7.
Successive transformations were made by noting the
pattern of output lights and setting up this pattern on
the toggle-switch input array. The model was connected
using both nearest neighbor and directed connections.
The model successfully indicated useful nearest neighbor connection transformations. It was found that
there were not enough elements to do anything useful
with the directed connection. Transformations by the
model are illustrated in the next section.

Model's Nearest Neighbor Transformations
Illustrations from the model's display of its 12 operations are shown in Fig. 8. To avoid confusion a different
input pattern was used to illustrate each threshold setting. In the first column are the input patterns. In the
second column are the patterns obtained with the positive self-connection; the third column shows zero selfconnections; the fourth column illustrates negative self-

connections. The four threshold settings of Z = 1, 2, 3,
and 4 are shown on rows 1, 2, 3, and 4. Note that the
transformations in the upper left fill in patterns, while
those in the lower right reduce patterns. If all of the inputs were controlled as in Fig. 3, the Ci controls would
make the transformations directionally selective among
15 alternatives.
Some basic picture-processing operations are shown
in Fig. 9. The results are self-explanatory. The values of
Z are the threshold setting; the values of S are E for an
adding or self-exciting connection, 0 for a self-zero or
ignoring connection, and I for a subtracting or self-inhibiting connection. If additional controls Ci were used, pattern thinning and corner finding could be made directional operations. For example, only lower-left corners could be indicated on the output.
Fig. 10 illustrates operations on horizontal, vertical,
and diagonal lines, vees and tees. Transformations are
possible which reduce vees or tees to a single point. Some
of the other patterns are interesting, especially the ones
showing the invariance of the vee to the self-settings.
The lower four photographs show the behavior of isolated lines to the transformation with Z = 2 and S = I.
Successive transformations with Z = 2 and S = 0 are
shown in Fig. 11. In one case, that of a nonsquare "L,"
successive transformations result in an alternation of
two patterns after a number of transformations, the
number depending on the pattern size. For a square
"L," the resulting pattern is stable and a filled square.
CONCLUSION

A classification of the character-recognition problem
was used to show the distinctions among the various
methods. The neuron-like net method gives a large
class of spatial operations, transforming all input pattern points simultaneously.
The neuron-like elements are relatively simple but
capable of many operations. All speurons are connected alike to elements of an input presentation and
are centrally controlled. Two distinct types of net connection have been studied. The first related neighboring
points of the pattern; the second related points along
given radii in the pattern.
A simple model has been constructed and shown to
yield useful transformations which reduce irregularities
and extract pattern features. Transformations obtainable by other types of elements and net connections
have been indicated. A number-recognition method
using one connection is currently being evaluated. The
neuron-like net will be investigated more extensively by
simulation on the IBM 704 with input patterns produced by a scanner.11
11 W. H. Highleyman and L. A. Kamentsky, "A generalized
scanner for character and pattern recognition studies," this issue,
p.294.

Fig. 7-Photograph of the model.
Fig. 8-The 12 basic operations of the model.
Fig. 9-Picture-processing operations.
Fig. 10-Picture-processing operations.
Fig. 11-Successive transformations of a nonsquare and square "L" with Z = 2, S = 0.

The Social Responsibility of Engineers and Scientists
F. B. WOODt

INTRODUCTION

RECENTLY there has been some interest in the question of the social responsibility of engineers. A series of articles and letters to the editor appeared in the early part of 1958 in Computers and Automation1-6 which dealt first with whether a journal such
as Computers and Automation should publish articles on
the social responsibility of computer scientists. Then
specific topics such as the possibility of the destruction
of civilization due to some component failure in the computer linked to a missile-warning radar network were
treated. A series of viewpoints has been presented ranging from conscientious objection to working on a computer system that might be used for destructive purposes at one end of the scale, to a viewpoint of no concern with the use of one's work at the other end. My
interpretation of these discussions is that people are
arguing about the implied hypothesis: there is a danger
to the existence of our civilization because social institutions have too long a time lag in making adjustments to
utilize the latest technological advances wisely.
This apparently sudden interest in the social responsibility of computer scientists was preceded by a long and
fluctuating development of concern for social responsibility in science and engineering. Meier has reviewed the
status of social consequences of scientific discovery and
has made specific recommendations concerning the social responsibility of administrative scientists. 7 Layton
has studied the history of the idea of social responsibility
in the American engineering profession. 8 Rothstein has
discussed some of the deeper philosophical aspects of
these problems in his book. 9 The Western Joint Com-


t IBM Corp., San Jose, Calif.

puter Conference at Los Angeles, Calif., May 6, 1958,
conducted a panel on "The Social Problems of Automation."10
The various viewpoints appearing in Computers and
Automation present an uncoordinated distribution of differing ideas. The views of the 1958 WJCC panel have a certain amount of coherence. It would be desirable to
find a straightforward way for an individual or engineer
to determine his responsibilities in this area. The ideas
which I am about to develop are hypotheses brought
forward for the purpose of obtaining discussion on this
important subject. At this stage, they represent my own
personal views and are not to be construed as representing a policy of my employer. I would have preferred to
have this paper follow a historical analysis of this problem of the social responsibility of engineers so that I
could be sure that I am not repeating the same mistakes
made in previous periods of interest in the subject. Perhaps by next year we will have a sounder base to operate from in discussing the subject of the social responsibility of engineers.
DISTINCTION BETWEEN THE SOCIAL RESPONSIBILITY OF
CITIZENS IN GENERAL AND THAT OF SPECIALISTS

In a democratic nation such as the United States, all
citizens have a responsibility to keep aware of the major
problems of our country. This is necessary to be prepared to make wise decisions in electing public officials
and in voting on basic policies. Specialists such as engineers and scientists of course share this basic responsibility with all citizens. I maintain that specialists have
an additional responsibility beyond that of the citizen
because of their special knowledge which is not readily
accessible to the layman.

WHAT SOCIAL RESPONSIBILITIES DO ENGINEERS AND SCIENTISTS HAVE?

In the long run, technology is undoubtedly making changes in the organization of our society. We cannot expect engineers and physical scientists to become sociologists. However, we can expect engineers to ask questions and urge that appropriate social scientists study the social problems related to their work. Each scientist or engineer can ask himself where his own specialty fits in the development of devices or new knowledge which may affect social organization. Then he can speculate as to what problems might come up in the future due to the application of his work.

1 Readers and Editor's Forum, "Curse or blessing?" Computers and Automation, vol. 7, pp. 9-10; January, 1958.
2 E. C. Berkeley, "Cooperation in horror," Computers and Automation, vol. 7, p. 3; February, 1958.
3 A. A. Burke (I), W. H. Pickering (II), and Editor (III), "Destruction of civilized existence by automatic computing controls," Computers and Automation, vol. 7, pp. 13-14; March, 1958. L. Sutro, "Comments on 'Destruction of civilized existence by automatic computing controls,'" vol. 7, pp. 6, 31; May, 1958.
4 Editor (I, III) and Readers (II), "The social responsibility of computer scientists," Computers and Automation, vol. 7, pp. 6, 9; April, 1958.
5 "Ballot on discussion of social responsibility of computer scientists," Computers and Automation, vol. 7, p. 6; May, 1958. Later results, vol. 7, p. 6; July, 1958.
6 N. Macdonald, "An attempt to apply logic and common sense to the social responsibility of computer scientists," Computers and Automation, vol. 7, pp. 22-29; May, 1958. Discussion: "Locks for front doors," vol. 7, p. 24; August, 1958.
7 R. L. Meier, "Analysis of the social consequences of scientific discovery," Amer. J. Phys., vol. 25, pp. 609-613; December, 1957.
8 E. Layton, "The American engineering profession and the idea of social responsibility," Ph.D. dissertation, Univ. of Calif. at Los Angeles; December, 1956.
9 J. Rothstein, "Communication, Organization and Science," The Falcon's Wing Press, Indian Hills, Colo.; 1958.
10 H. T. Larson (chairman), H. D. Lasswell, B. J. Shafer, and C. C. Hurd, "The social problems of automation," panel discussion, Proc. WJCC, pp. 7-16; May, 1958. (AIEE Publication T-107.)

This may be a further stage in the development of the
last decade in which it has become popular to use human
factors engineering studies directed by psychologists to
determine if proposed electromechanical devices requiring human reading or manipulation are consistent with
the way human beings function. As our industrial society becomes more complex, it may be necessary to extend this concept to "social factors" studies where the
engineer calls in sociologists to investigate the social
effects of applying his new knowledge or devices.
At this stage the engineer's responsibility may be to
see if there is someone or some group studying these
problems, and if there is not, he can recommend to the
appropriate agency that such a project be undertaken.
In this way the engineer can shorten the time lag between the introduction of a new technology and the appreciation of its social consequences. I have noticed that
even specialists sometimes fail to recognize the division
point between their domain and that of other specialists.
The important thing here is to obtain the advice of the
appropriate specialists, instead of just relying upon our
own ideas and feelings.
A CHECKING CHART TO AID THE ENGINEER IN
DEVELOPING SOCIAL RESPONSIBILITY

Let us construct a chart to outline the factors involved in determining what an engineer's social responsibility should be. Such a chart is shown in Fig. 1. Starting in the lower left-hand corner, there is a box to write
in the "Engineer's Special Work." To the right appears
a box for the "New Knowledge and Devices" which may
result from the work of the engineer.
The next step is more speculative, namely the listing
of "Potential Social Consequences" in the third box.
The next box, "Find Expert Advice," is for statement
of the problems that the potential social consequences
indicate as requiring investigation by social science advisors. A sample of the principal fields of science advisors who might be consulted is listed with boxes for
checking to see if they are needed on this problem. Some
engineers' special work may lead to problems of a biological or medical nature, while the work of others may
require the aid of psychologists or social scientists.
After preliminary contact has been established by the
engineer with the required science advisors, the engineer
must determine how far he will go himself in taking
action. In the box on the right, four different magnitudes
of action are indicated. The engineer may find all he has
to do is to inform the appropriate social scientists about
the problems, and they will pick up the responsibility
from there on. In other cases there may be no funds to
support the social scientists, and the engineer may feel
it is his responsibility to campaign for appropriation of
funds to support social science projects or to convince
industrial management to include social scientists on
their staffs.
I claim that the engineer, who does not have much
spare time because of his basic engineering work and

Fig. 1-A checking chart for analyzing the social responsibility of engineers and scientists. (Chart boxes: Engineer's Special Work; New Knowledge and Devices; Potential Social Consequences; Find Expert Advice; Take Appropriate Action (inform, discuss, propose, campaign). Science advisors listed: social scientists (historian, lawyer, philosopher, political scientist, sociologist); psychological (industrial engineer, psychologist, psychiatrist); biological (biologist, physician).)

his family responsibilities, can find short cuts to understanding the social implications of his work through devices such as the checking chart of Fig. 1. I have faith
that the engineer can fulfill his social responsibility to
help utilize the results of his work in keeping with mankind's highest aspirations.
To fulfill his social responsibility the engineer must
understand that it is a responsibility he shares with
many people both inside and outside his profession. He
may not need to devote a tremendous amount of time
and energy to the social implications of his work. The
key to success lies in developing a fruitful perspective
of the relationship of his work to the society in which
he lives.
A SAMPLE USE OF THE CHECKING CHART

Consider an engineer working on the problems of data
communication in connecting remote stations to a central computer. This is entered in the first block in Fig.
2. A successful solution to the data communication
problem might result in a universal credit system, where
every store, airline, doctor's office, race track, stock exchange, etc., would have terminal sets which would
make transactions when the customer's coded credit
card is inserted in the set. This would eliminate the need
for money for most transactions. This new device is entered in the second block in Fig. 2.
Then we go on to block 3, "List Potential Social Consequences," such as:
1) The elimination of money might mean there would
be no more armed robberies, which would be a step
forward in the development of civilization.

Fig. 2-A sample use of the checking chart. (Engineer's special work: computer-data communication. New device: universal credit system which replaces money. Potential social consequences: (1) no more armed robberies; (2) shorter working week in business and sales; (3) problem of underworld inventing new money system; (4) possible loss of creative minority groups. Advisors checked: lawyer, philosopher, political scientist, sociologist. Problems for expert advice: How can we provide protection for individual freedom? Are there simple legal means of preventing a dictator from controlling society through control of the credit system? Action checked: inform, discuss.)

2) The universal credit system might permit a shorter working week in sales and business administration work, permitting individuals to devote more time to creative hobbies which would enrich our community life.
3) New problems might arise, such as gangsters inventing a new money system to finance illegal activities.
4) Police measures instituted to suppress the underworld gangsters might interfere with groups working on important social problems. For example, some public officials might be violating some of the provisions of the United States Constitution by discriminating against some minority racial or religious group. People in the community involved might feel like contributing a few dollars each to hire a lawyer to look into the case. These people might be afraid to contribute to this important cause when the accounting system would keep a record of each transaction. How do they know whether some future official will be able to distinguish between supporting a legal test case to protect the Constitution, and supporting some subversive activities? In such a situation the existence of this universal accounting system might inhibit people from protecting our constitutional government.
The next step for the engineer is to find expert advice
to evaluate which of the potential social consequences
pose real problems that won't just solve themselves in

the natural course of events. It would be desirable if we
engineers could just refer these questions to some agency
such as the National Science Foundation (NSF) for
consideration. At present the NSF has a limited representation from the social sciences, so the engineer may
have to find appropriate experts wherever he can. To
assist the engineer in finding advice, I have listed some of
the more obvious classifications on the checking chart
under the principal categories of biological, psychological, and social science. At present the potential science
advisors can usually be found on the staffs of nearby
colleges or research institutes. In this case I have
checked the boxes opposite the relevant categories in
this sample case.
Informal discussion with these expert advisors results
in a restatement of the problems as follows:
1) How can we provide protection for individual freedom in a more complex society where new technology such as computer-data communication systems
permit a centralized accounting system covering all financial transactions in the community?
2) Are there simple legal means and technical characteristics of a computer-data communication system which permit safeguards to prevent potential
dictators from seizing control of the system as
means of gaining control of our country?
These questions as now restated are questions of importance to all citizens. The social scientists and the engineers both have additional responsibilities over and
above their basic responsibility as citizens. However, the
citizens at large have the basic responsibility of providing for financial support of such studies. The extent to
which the engineer is responsible for taking action on
these matters depends upon the state of development of
social science research projects.
On the checking chart I have shown four degrees of
action the engineer might take:
1) Inform: If through government agencies or private
foundations there exist social science research projects adequate to study the problems, the engineer
may discharge his social responsibility by simply
informing these social scientists about the potential technological changes that may result from
his work. In some cases an engineering research
organization, in order to protect its proprietary
interests, may prefer to hire social science consultants instead of releasing technological data to
outside institutions.
2) Discuss: If the social science consultants are available and are financed, but do not have sufficient
understanding of the technology involved, the engineer may have to organize discussions with the
social scientist in order to pass enough of his special knowledge on to the people otherwise qualified
to investigate these problems.

3) Propose: If there are insufficient social scientists
available and the funds available are inadequate
to support such research, the engineer may find it
necessary to propose new appropriations and
scholarships through his company, the existing research foundations, or through government agencies.
4) Campaign: If the agencies having the power to
allocate funds for the study of these social problems fail to act, and the engineer is convinced that
the problems will soon be urgent, he may have to
plan stronger action such as campaigning to get
political groups to pick up the problems. He may
have to carry his campaign directly to the people,
if the political leaders are insensitive to his proposals.
MAINTAINING A PERSPECTIVE

In his specialized engineering work the engineer has
acquired through education and experience the portions of basic science that are most useful in his particular engineering assignment. The human needs on his job
assignment usually have been evaluated by other people
so that the human needs have already been translated
into engineering objectives. To fulfill his role as "interpreter of science in terms of human needs," he needs
some more direct contact with both science and with
human needs. He can read such magazines as the Scientific American, which has popular articles on all levels
of phenomena, as a way of keeping abreast of developments in science. To obtain a more direct contact with
human needs, he can participate in a local church social
problems study group. In order to develop a better understanding of the business world in which the results
of his engineering work are used, he can read a magazine
such as Fortune. He can develop a better perception of
the social effects of science on a world scale by following
the activities of the United Nations Educational, Scientific, and Cultural Organization (UNESCO) by reading one of their bulletins such as the quarterly Impact of
Science Upon Society. The technical societies such as the
AIEE and the IRE might eventually develop a monthly
one-page abstract of significant articles relating to the
social consequences of new technology.


CONCLUSIONS

Recent articles and panels on the social problems of
computers and automation are a healthy sign that some
engineers are developing a perspective of how their special field relates to the activities of mankind in general.
Engineers need some kind of a framework to present an
abstract but meaningful view of human activity to
which they can correlate their own work.
A checking chart has been developed to assist the engineer in tracing the potential social consequences of his
own work. A table of major sections of the biological,
psychological, and social sciences is included to assist
the engineer in selecting expert advisors.
In a democracy all citizens have a responsibility to
keep aware of the major problems of our country. I
believe that specialists such as engineers and scientists
have an additional social responsibility because their
knowledge is not readily accessible to the layman.
I believe that the engineer can carry out his social responsibility primarily by being concerned with the question: Are qualified experts investigating the potential
social problems that might result from the engineer's
work? The engineer can use the checking chart developed in this paper to assist in arriving at an answer
to the question and in determining to what level of action his responsibility should extend. He shares with
other specialists the responsibility for seeing that these
problems are being studied and that provisions to inform
the voters are made in our society.
I do not suggest that the engineer should be responsible for solving the social problems related to his work.
The engineer's responsibility is rather that of a coordinator, alerting the people of our country to the status of our coverage of the problems. If the engineer finds that a social
problem relating to his engineering work is not being
adequately investigated, he has a responsibility to refer
questions to management, social scientists, government
agencies, and to the citizens at large to stimulate the
investigation of such problems.
ACKNOWLEDGMENT

I wish to express my appreciation to Dr. M. M. Astrahan for his valuable comments and discussion during the
preparation of this paper.


Emergency Simulation of the Duties of the
President of the United States
LOUIS L. SUTRO†
I. INTRODUCTION

A TECHNICAL problem is arising in our democratic government which engineers and mathematicians are equipped to assist in solving. The
problem is how to approach making the kind of decision
the President is called upon to make if missiles are detected on their way toward the United States. The
number of facts on which a decision should be based
appears to be increasing. The length of time in which
to make the decision appears to be getting shorter.
Dr. Isidor Rabi described the problem in a speech
given in December, 1957. 1
Hydrogen bombs are going to be deployed at bases around the
world under the control of many groups of persons. If an oncoming
ICBM were detected 5000 miles away there might be time to intercept it with weapons not yet developed. But there will not be time to
wake up the President to ask what to do, to call a meeting of the
cabinet.

Facing a question that has not been mentioned before
in the literature of computer engineering, we should give
great consideration to method. I propose that as we approach each part of the problem we first describe it in
the language most appropriate for the topic. Then let us
attempt to translate this statement into computer and
control terminology. Third, let us inquire to what extent
an improved system can be built out of a combination
of human beings and electronic equipment or electronic
equipment alone.
The work of three men is the precedent for the attempt, in this paper, to describe human beings, human
relations, and man-machine relations in terms of computer and control engineering. One is Dr. Warren
McCulloch, a psychiatrist now at M.I.T., who is describing the human nervous system in this manner. One
of his early papers was, "The Brain as a Computing
Machine."2 One of his more recent is on the design of
reliable circuits out of unreliable components,3 giving
one answer to the question of why the brain is as reliable as it is. The second man to supply precedent is
Dr. Karl Deutsch, a political scientist now at Yale. He
came to wide attention with the publication of a book
explaining nationalism in terms of communication engi-

† Instrumentation Lab., Dept. of Aeronautics and Astronautics, Mass. Inst. Tech., Cambridge, Mass.
1 R. K. Plumb, "New weapons peril U. S. life, Rabi says," New
York Times, vol. 107, pp. 1, 10; January 1, 1958.
2 W. S. McCulloch, "The brain as a computing machine," Trans.
AIEE, vol. 6, pp. 492-497; June, 1949.
3 W. S. McCulloch, "Stable, reliable and flexible nets of unreliable
formal neurons," Res. Lab. of Electronics, M.I.T., Cambridge, Mass.,
Quart. Prog. Rep., pp. 118-129; October, 1958.

neering.4 The third is Jay W. Forrester who is now simulating business and economic systems by computer programs. He described his approach in "Industrial Dynamics-A Major Breakthrough for Decision Makers."5 Prior to undertaking this he directed the development
of the SAGE computer. I quote these three men extensively.
This paper was written during evenings, weekends,
and holidays. The opinions expressed are mine or those
whom I quote, and not necessarily those of my employer.
II. THE PROBLEM
We appear to be approaching an era of violence. The
two major powers are manufacturing weapons to kill
millions of people. They can be fired by the push of a
button or by the signal from a computer. Many may
soon be hidden so that they cannot be destroyed by
bombing. As these weapons are built, installed, and connected to remote controls, the probability that one will
be fired will rise rapidly, and the probability of a salvo
to wipe out a nation will also rise, although more slowly.
The problem that engineers need to consider requires
them to design controls that operate within limits. They
must so arm the United States that another country
considering an attack will know that it will receive a
violent attack in return. Such armament is called deterrent power. On the other hand, they need to be concerned that building up deterrent power by the United
States will lead to building up deterrent power by another country. This interaction is regenerative and leads
to a rising probability of destruction of both sides.
The need for deterrent power was presented by Albert
Wohlstetter in an article entitled, "The Delicate Balance of Terror."6 Wohlstetter is an economist for the
Rand Corporation, a private nonprofit research corporation working on aspects of national defense and survival.
He states that:
We must expect a vast increase in the weight of attack which
the Soviets can deliver with little warning, and the growth of a significant Russian capability for an essentially warningless attack ....
What can be said, then, as to whether general war is unlikely?
Would not a general nuclear war mean "extinction" for the aggressor
4 K. W. Deutsch, "Nationalism and Social Communication," John Wiley and Sons, New York, N. Y.; 1953.
5 J. W. Forrester, "Industrial dynamics-a major break-through for decision makers," Harvard Business Rev., vol. 36; July-August, 1958.
6 A. Wohlstetter, "The delicate balance of terror," Foreign Affairs, vol. 37, pp. 217, 222; January, 1959.


Fig. 1-Examples of channels in the man-machine system for making emergency decisions. (Data inputs: radars feed the BMEWS and SAGE computers, whose data go to the Dept. of Defense; reports from overseas go to the Dept. of State; both channels lead to the President, whose output is decision and action.)
as well as the defender? "Extinction" is a state that badly needs analysis. Russian casualties in World War II were more than 20,000,000. Yet Russia recovered extremely well from this catastrophe. There are several quite plausible circumstances in the future when the Russians might be quite confident of being able to limit damage to considerably less than this number-if they make sensible strategic choices and we do not. On the other hand, the risks of not striking might at some juncture appear very great to the Soviets, involving, for example, disastrous defeat in peripheral war, loss of key satellites with danger of war spreading-possibly to Russia itself-or fear of attack by ourselves.6

Wohlstetter concludes that our ability to strike back
in spite of attack should make a foreign country's aggression less likely. This is deterrence. It consists of two
parts: first, the weapons, and second, the ability to reach
a decision to use them.
In arming against Russia, the United States is making
a move which may be followed by more arming on the
part of the Russians. This is positive feedback. It should
be replaced by negative feedback of the kind to be described in the next section.
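The regenerative character of this interaction can be shown with a toy calculation, sketched below in a modern programming language. The linear model and its coefficients are illustrative assumptions, in the spirit of Richardson's arms-race equations, and are not part of the analysis above.

    # Toy Richardson-style arms model (assumed coefficients, for illustration).
    # Each side arms in proportion to the other's armament: positive feedback.
    a, b = 1.0, 1.0        # armament levels of the two powers
    k = 0.3                # reaction coefficient (assumed)
    for year in range(1, 6):
        a, b = a + k * b, b + k * a    # both right sides use last year's levels
        print(year, round(a, 2), round(b, 2))
    # Replacing the update with, e.g., a + k*b - d*a for d > k adds a
    # restraining term: negative feedback that lets the levels settle.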
Let us return now to the problem, namely, how to approach making the kind of decision the President is called upon to make if missiles are detected on their way toward the United States. Dr. Karl Deutsch, who has studied this problem, suggests breaking it down into the following parts:7

1) Broaden the base of facts which lead to a decision.
2) Improve the reliability of the logic and computation used in processing these facts.
3) Shorten the time for making the decision.

Let us apply Dr. Deutsch's analysis to a rough diagram of the man-machine system now used for making emergency decisions. (See Fig. 1.) The upper input illustrates electronic channels; the lower, written reports. The many other inputs have been purposely omitted. Data flow from these inputs through a stage of data processing before they enter the State and Defense Departments. In the executive departments, the new data are correlated with data stored in the files and memories of the personnel. They report to the President and they may recommend action. The President usually chooses between alternatives presented to him. If there is time he will consult with the National Security Council before deciding.

We can plot on this diagram the three improvements recommended by Dr. Deutsch. To broaden the facts on which a decision is based, there needs to be a greater input of data. In addition, there need to be better ways of tapping the facts stored in the executive departments. To improve the reliability of logic and computation requires improved data processors. To shorten the time requires an increase in speed of the entire decision-making system.

Pursuit of these three improvements can take us a long way toward a solution of our problem. To go further requires that we look closely first at the human being who holds the office of President, then at the biological computer which learns, remembers, and makes decisions. Delving into these biological mechanisms will allow us to examine possible simulators of memory, ability to learn, and ability to make decisions.

III. HISTORY OF OUR DECISION-MAKING SYSTEM

We have now described the problem this paper considers, in language appropriate to the problem. We began to convert this description to computer language when we made the simplified diagram of the system (Fig. 1) and observed that this is a man-machine system. To progress further in making a description in computer and control terminology, we need to go back to the origins of this man-machine system.

Perhaps by accident, the history of man-machine systems has never been told as a whole. To read present texts on the subject one might be led to believe that man-machine systems are not much more than a hundred years old. Yet books are a kind of machine. Their parts move with respect to one another. Moreover, as a human being reads words in a book, he is letting these words program the biological computer in his head.

Thus, a society that lives by rules written in books is a man-machine system. It has been evolving for 5000 years, from the days when men first wrote on stones and clay blocks to the present when recorded knowledge fills vast libraries. The evolutionary process has been carried forward by inventive people who created new systems when the need arose for them.
Benjamin Franklin might be called the first engineer
to apply himself to the design of the American system.
We know Franklin for his inventive work in the realms
7 K. W. Deutsch, private communication; February 21, 1959.


of electricity and heat. He discovered the identity of
lightning and electricity and advanced the theory, still
valid, that electricity is of two kinds, "positive" and
"negative." He invented the lightning rod, a heating
system for American homes, and the lending library. In
1754, he started work on the American system of government. 8 The colonies were then threatened by the French
and the Indians. The British government called a congress at Albany in the hope of getting the colonies to
cooperate in raising troops and funds. Franklin, representing Pennsylvania, drafted the plan which the ~on­
gress adopted, although the colonies did not.
Franklin's plan, redrafted twenty years later, became
the Articles of Confederation, which were the system
specifications for the first American government. When
a more elaborate system was required, Franklin participated in the writing of the present Constitution.
James Madison was the leading designer this time.
Unlike Franklin, he had specialized in the design and
operation of governmental systems. He had helped to
set up the state government of Virginia. He had served
in Congress and observed the weaknesses of the Articles
of Confederation. When the prospect arose of writing a
Constitution he wrote out a proposal for it.
Adopted in 1789, the Constitution has grown since
then by amendments and interpretation by courts. Congress has passed laws and administrators have made
rules to carry out the laws. These rules are the programs
which public officials pledge that their internal computers will obey.
The system devised by Franklin, Madison, and the
other founding fathers is diagrammed in Figs. 2-4. Lines
represent information flow. Fig. 2 suggests that each
Congressman is ideally part of several feedback loops.
The people in a congressional district elect him, then
demand action of him. His action may be to participate
in writing a new law or in opposing a proposed law. One
feedback loop consists of reports by newspapers, radio,
and TV. In another loop, the law is carried out by someone appointed by the President. Either the reports
shown in the first loop or the impact of the law itself
on wages, prices, and other interests of people shown in
the second loop, may cause them to change their demand on their Congressman. If he acts to their satisfaction, they usually re-elect him.
The election of the President occurs in another loop
which takes four years to traverse. Formation of the
Constitution occurs in still another loop with the longest
time period of all. Fig. 3 shows the same loops as Fig. 2,
but now all of Congress and the whole electorate are
represented. The whole body of law enacted by Congress
is shown as a block at the center. To it is attached a
small block below it, representing the newly enacted
law.
8 H. C. Hockett, "Political and Social Growth of the United
States, 1492-1852," The Macmillan Co., New York, N. Y., pp. 188,
189, 247, 286; 1935.

A Congressman is also part of feedback loops that
include very much larger groups of people than a congressional district. Such groups might be the automobile
industry, the United States, or mankind. To show these
feedback loops would require a very much more intricate
drawing than Fig. 3. The number of these additional
feedback loops and the quantity of people that they involve are a measure of the breadth of interests and the
statesmanship of a Congressman.
Fig. 4 shows the response that the system was designed to make to an offensive incident or series of incidents by another nation. The incidents bore on the
electorate or on special interest groups among the
electorate who demanded action from Congress and the
President. When a "threshold of tolerance" was crossed,
Congress declared war and the President carried out the
war through his secretaries of War and Navy. In practice, the incidents may have affected the owners and
editors of mass media of communication so that they
demanded action from Congress and the President. Or
the incidents might come more fully to the attention
of the executive than the public and thus the threshold
of tolerance of the President would be crossed before
that of the public and he would press Congress to a
greater degree. This happened from 1939 to 1941.
Germany under Hitler, and Russia both then and today, lack
the free flow of information and feedback controls of the
kind described above. They are less stable in their relations with other nations. For example, a treaty, being a
law, is part of the feedback control system of the United
States. A treaty made by a dictatorship is observed or
not as the dictator sees fit.
A system like that devised for the United States could
be devised for the entire world and provide stability in
that area also. The man part of the system needs to be
educated for its task. The machine part needs to be
capable of greater speed and reliability than the original
system designed for the United States.
IV. CHANGES TO THE DECISION-MAKING SYSTEM, 1950 TO 1959

We have described in computer and control terminology the system that operated to repel an attack up to
1950. Let us now look at the changes that have been
made in the present decade. Steps have been taken in
each of the three directions that we considered desirable
in Section II.
Fig. 1 showed the pattern of response that has been taking shape since 1950. Congress is no longer part of the loop of response. In January, 1955, Congress

handed to the President the power to defend Quemoy and Matsu if he likes, and to use atomic weapons there at his discretion .... The
pattern is now clear; in the Middle East, as in the Far East, Congress
has left it to the President to fight or retreat as he sees fit. 9
9 J. Reston, "War-making power; Quemoy crisis shows how control passed from Congress to President," New York Times, vol. 107, p. 4; September 4, 1958.


Fig. 2-Congressman in feedback loops. (The people in a congressional district elect a Congressman and demand action of him; he passes laws, subject to judicial review; feedback returns through reports by newspapers, radio, and TV, and through the impact on wages, prices, and other interests of people.)

Fig. 3-Simplified block diagram of the United States Government.

This act of Congress formalized the practice begun by
President Truman at the outbreak of the Korean Conflict in 1950. This practice has served to shorten the
time for making the decision, but I question if it has increased the reliability of the decision. The older system
requiring debate in Congress and across the nation
brought more minds to bear on the problem.
The flow of facts into the decision-making system has
been increased and speeded by two unique electronic systems, SAGE (Semi-Automatic Ground Environment)
and BMEWS (Ballistic Missile Early Warning System).
Fig. 5 shows the contents of the upper half of the
diagram of Fig. 1 arranged in pictorial fashion. For
simplicity, it shows only the part of the system where
data are detected and moved at electronic speeds. The
flow of reports from overseas to the State Department
is assumed to be present but not shown. SAGE computers with radars above them are shown in the inner
ring. BMEWS computers with radars above them are

Fig. 4-Response of the United States to an attack, 1789-1950.

Fig. 5-Hypothetical response of the United States to an attack in
the present and near future. (The flow of reports from overseas to
the State Department is assumed but not shown.)


shown in the outer ring. Their signals are shown entering
a hypothetical central computer which organizes them
for presentation to personnel in the Defense Department. These people merge the new data with pertinent
data from their own memories or their files. Then they
make selections from these merged data to compose reports and make recommendations to the President.
The SAGE and BMEWS systems are part of the improvement to the decision-making process that we seek.
They broaden the base of facts on which a decision
would be made. Let us examine those systems in more
detail.
SAGE is a gigantic man-machine system whose radars watch the sky over the United States and feed information into the largest computers so far mass-produced.10-15
At each of about 30 "direction centers" in the United
States, a 75,000-instruction program runs continuously
to process the data and display them on large scopes.
Fig. 6 shows a typical computer center. Fig. 7 shows
this center being fed by information from radars at the
left and giving out information to planes and missiles at
the right. A tie is shown at the top to higher headquarters and at the bottom to an adjacent direction
center.
Fig. 8 shows the point at which the SAGE computer
gives up its data to a man who then makes a decision.
Here an Air Force officer, looking at a displayed map
on which approaching enemy planes are shown, orders
planes or missiles to intercept them. The SAGE computer carries out his order by directing the plane or missile to the target.
In 1960 a system is scheduled to go into operation
which will inspect in a similar fashion the air space between the United States and other countries. This is
the Ballistic Missile Early Warning System (BMEWS)
whose radars have a range of 3000 miles. 16 The radar returns will be interpreted by a computer to discover
whether each object seen moving at high speed is a
meteor, a satellite, or an ICBM. As in the SAGE system,
the conclusions reached could be used to generate a display, send a message, or fire an interceptor missile.
But it can also do more than SAGE. By tracing the
trajectory of a missile, BMEWS can determine where it
10 R. R. Everett, C. A. Zraket, and H. D. Bennington, "SAGE, a data processing system for air defense," Proc. EJCC, pp. 148-155; December, 1957.
11 W. A. Ogletree, H. W. Taylor, E. W. Veitch, and J. Wylon, "AN/FST-2 processing for SAGE," Proc. EJCC, pp. 156-160; December, 1957.
12 R. R. Vance, L. G. Dooley, and C. W. Diss, "Operation of the SAGE duplex computers," Proc. EJCC, pp. 160-163; December, 1957.
13 M. M. Astrahan, B. Housman, J. F. Jacobs, R. P. Mayer, and W. H. Thomas, "The logical design of the digital computer for the SAGE system," IBM J. Res. Dev., vol. 1; January, 1957.
14 H. D. Bennington, "Production of large computer programs," Proc. Symp. on Adv. Prog. Methods for Digital Computers, ONR Symp. Rep. ACR-15; June 2, 1956.
15 D. R. Israel, "Simulation in large digital control systems," presented at the Natl. Simulation Conf., Houston, Texas; April, 1956.
16 "The ICBM's: danger-and deterrents," Newsweek, vol. 52, pp. 56-57; December 22, 1958.

Fig. 6-A SAGE direction center building. (Photograph by Lincoln Laboratory, M.I.T.)

Fig. 7-Inputs to SAGE direction center are from radars at left, weather stations and commercial planes below. Outputs are to planes, missiles, adjacent direction centers, and higher headquarters. (Drawing by Lincoln Laboratory, M.I.T.)

Fig. 8-Air Force officer ordering interception of enemy plane or missile. (Photograph by Lincoln Laboratory, M.I.T.)

came from; then, assuming the missile takes no evasive
action at a later stage, BMEWS can predict where the
missile is likely to go. The prediction may make it
possible to destroy the missile in the air. The estimate of
where the missile came from can be the basis for a decision to retaliate.

Congress' response to the threat of nuclear attack has
been to increase the effectiveness of the President and
at the same time weaken the feedback loops of which
the President is a part. This has reduced the sensitivity
of the control system to public demands and restraints.
It appears that attention should be given to providing
new control loops to replace those that have been
weakened or removed. But first let us give our full attention to the problems of the President.
V. PROBLEMS OF THE PRESIDENT IN AN EMERGENCY

The following are two situations that he might have
to face.
Fig. 9 shows country A (aggressor) launching an attack on country N (nonaggressor) intended to both
destroy it and prevent it from retaliating. Let us assume
that the deterrent power of country N is its ability to
launch missiles. It appears that in the immediate future,
the majority of launching sites are likely to be known,
with the result that a retaliatory attack by missiles can
be made only if it is started before the original attack
arrives. There are several "ifs" here and if they are all
to be satisfied, speed of decision is very important. However, when the retaliatory power is hidden, as we are led
to believe it will be in a few years, great speed will not
necessarily be needed. A reliable decision requiring days
if necessary appears far more important, lest an error
be made.
However, a circumstance that would demand speed of
decision arises when the President's life is threatened by
an approaching missile. He has two alternatives: to order a retaliatory attack on a suspected country, or to
wait, knowing that if he is destroyed someone else may
order the retaliatory attack. Needed is fast processing of
data that give him a reliable basis for decision in the
time he has available.
As we approach closer to an examination of the duties
of the President, let us consider what Dr. Deutsch believes a data-processing system can and cannot do
today:7
1) Compute trade-offs (if I do this, then what?).
a) What might be the effect of each of our actions on the
civilians in this country?
b) What will be the effect of each of our actions on the capabilities of the attacking countries?
c) What will be the effect on third countries?
2) Prepare estimates of the over-all effect of an action.
3) Make recommendations to the President.
No computer today has the learning capacity of an individual,
much less that of a community. Computers should facilitate human
and community learning by evaluating and cross-checking relevant
data. Progress consists of putting more and more of the information-handling burden on the mechanical and electronic equipment and leaving an ever-smaller amount of ever-higher decisions to the human
agent. 7

But suppose the human agent does not respond because he is asleep, as Dr. Rabi suggested, or for some
other reason. It is the obligation of computer engineers
and programmers to inquire what they can do to supplement the President. The American people may not

Fig. 9-Country A launching an attack on country N.

accept what they propose. But proposals should be
made periodically and in greater detail as more techniques become available.
To that end a description will be made first of the
emergency duties of the President, then of the qualities
that led to his selection by the American people. These
two descriptions can be regarded as part of the specifications of a simulator. With these specifications before us
we will then inquire how far engineers have progressed
toward the emergency simulation of the duties of the
President.

VI. THE DUTIES AND THE QUALITIES OF A PRESIDENT

The President's task in the problem we are considering is to order or not to order the military to act. He is
there to make sure that the military are effectors, not
decision points. For example, in an international crisis,
military men get poised, ready to use their weapons.
The President, on the other hand, will act the way his
personality dictates.
All that we ask of a President is that he be his best
self. We mean by this that we ask him to apply to a
major decision the traits that he demonstrated before
taking office. Yet all of us have our ups and downs.
There is always the possibility that a quick decision
will be required when the President is not at his best. A
system to back up the President, therefore, is being considered.
If such a system were to win the acceptance of the
American people it would need some of the qualities of a
President. What are some of these?
To avoid the mental images of actual Presidents, let
us refer to the President for the moment as a system-a
very elaborate biological system. This system is put into
its key position by a process whose first milestone is
nomination at a national convention. It is then tested
for three to four months in a kind of trial presidency during which it is presented with the problems of the President and called upon to declare what decisions it would


make if it were the President. During this same time, the
system is watched by reporters and TV: How does it
treat its wife, its children, its friends? What are its beliefs? Does it get angry easily? During this testing period, an image is built up in the minds of the voters.
The image is one of a predictable system, to the extent
that the voter has made observations. On election day,
at the end of this test period, voters choose between two
or more systems.
Looking more closely at a system, we observe that
what interests the voters most-or what we think should
interest them most-is its information-processing subsystem. This is a network of switching and storage elements. Of the 30 million million cells that comprise a
human system about one tenth make up its information-processing or nervous subsystem.
Dr. McCulloch calls this subsystem a "biological
computer."17 Feeding information into it are the senses
of sight, hearing, touch, taste, smell, and acceleration.
It contains three kinds of memory, a means of learning,
and a means of making decisions. It appears that a system to simulate (see the Appendix) the duties of the
President will require the following properties of biological computers:
1) Memory
2) Ability to learn
3) Ability to make decisions.

In the following three sections we briefly describe
and evaluate the steps that the computer engineering
profession has taken toward simulation of the duties of
the President. These efforts are for other purposes, but
they serve this purpose.

VII. SIMULATION OF HUMAN MEMORY
By computer memory we mean both the static storage
and the continuously running program that up-dates
this storage and presents alternatives for decision. Let
us look at the memories in both SAGE and in Industrial
Dynamics Research programs at M.I.T.
From the data received by its radars, a SAGE computer can predict the course of each aircraft in the airspace which it is monitoring. It can predict the points
at which interception can be made by aircraft taking
off from different airfields. An Air Force officer, watching the two predictions plotted on a scope, can select an
aircraft to make an interception. This action is illustrated in Fig. 8.
Just as the SAGE computer contains a model of moving aircraft, so an Industrial Dynamics program contains the model of a company. In a diagram of a typical
model,18 a solid line represents the flow of goods from the factory to the warehouse, to the retailer, and finally to the customer. Dashed lines represent the flow of information from the customer to the retailer and all the way back to the factory. Numbers in the lines indicate the length of delays. Where a flow of goods and a flow of information touch, a decision is made.
Forrester's diagram represents a more advanced form of analysis than that shown in Figs. 3-5. The analysis itself consists of difference equations.

The following (typical) equation tells how to calculate the level of Unfilled Orders at (the) Retail (end of the business) at time K:
UOR.K = UOR.J + DT(RRR.JK - SSR.JK).
This equation tells us that the unfilled orders at retail at time K are equal to the unfilled orders at retail at the previous time, J, plus the inflow minus the outflow.18

The inflow is the product of a time interval DT and a rate, RRR, that holds from times J to K. The outflow is the product of the same time interval and another rate, SSR. Each equation is evaluated independently, using the results from the previous evaluation of all the equations. (See the Appendix.)
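The evaluation scheme can be sketched in a few lines of a modern programming language. Only the form of the level equation is Forrester's; the rates and values below are invented for illustration.

    # One Industrial Dynamics level equation,
    # UOR.K = UOR.J + DT*(RRR.JK - SSR.JK), evaluated repeatedly.
    DT = 1.0                   # length of the solution interval
    uor = 100.0                # Unfilled Orders at Retail at time J

    def step(uor, rrr_jk, ssr_jk, dt=DT):
        """New level = previous level + inflow - outflow over the interval."""
        return uor + dt * (rrr_jk - ssr_jk)

    for k in range(1, 5):
        rrr = 20.0             # Requisitions Received at Retail, rate over J..K
        ssr = 18.0             # Shipments Sent from Retail, rate over J..K
        uor = step(uor, rrr, ssr)
        print(k, uor)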

While the simulator of the President would require facts and figures bearing on current issues, its memory of environment can be approximate. Industrial dynamics models could serve this purpose. The model described above was intended to bring understanding of one company to its factory manager or corporation executive. Models of the groups of companies that make up an industry would be useful to the simulator of a President. Models of the United States government, its allies, and its adversaries would be necessary.

17 W. S. McCulloch, "Reliability of Biological Computers," lecture, University of Pittsburgh, Pittsburgh, Pa.; May 10, 1957. (Unpublished.)

VIII. ABILITY TO LEARN

The present system for making emergency decisions
is one that learns. The biological computers in the system learn by changing, or increasing, the storage in
their memories. The system as a whole learns in several
ways, one of which is illustrated in Fig. 3. Here trials
and errors are recorded in the memories of human
beings and lead to new rules.
The first method of learning we shall consider for the
simulator is continual reprogramming. Dr. Richard C.
Clippinger suggests:19
It will probably be necessary for the governmental simulator to
operate in parallel with the President for a considerable time in
order to learn. Computer learning is similar to the successive reprogramming of a complicated process by means of more and more
efficient programs, drawing intelligently on more and more past
experience. Probably the longer it has been in operation the more
efficient it will be, that is, the more it can accomplish in a few microseconds.
18 J. W. Forrester, "Formulating Quantitative Models of Dynamic Behavior of Industrial and Economic Systems, Part I," Industrial Dynamics Res., School of Industrial Management, M.I.T., Cambridge, Mass., Memo. D-16, pp. 8, 30, 31; April 5, 1958.
19 Private communication, October 19, 1958.


The SAGE system learns in the manner described by
Dr. Clippinger. A staff of programmers at the System
Development Corporation in Santa Monica, Calif., attends the system and incorporates what is learned in
an improved program. 14 To "get back into" a program
of 75,000 instructions requires careful documentation
augmented by computer methods for changing the program. The need to rework increasingly large programs is an incentive for the second method of computer
learning we are considering here-heuristic programming or "artificial intelligence."
Dr. John McCarthy describes artificial intelligence:20
These programs all use trial-and-error learning. A criterion for
an acceptable solution is known. Then the machine "searches" a
group of potential solutions for one answer that meets the criterion .... Unfortunately the groups or classes of potential solutions of
interesting problems are too large to be examined one at a time by any
conceivable computer.
Therefore, we must devise methods called heuristics for replacing
the search of the class of potential solutions by a number of searches
of much smaller groups. It is in these heuristics that the intelligence,
if any, lies.
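McCarthy's contrast can be made concrete with a toy problem, sketched below; everything in the sketch is invented for illustration. The exhaustive search examines the whole class of potential solutions, while the heuristic searches a much smaller group.

    from itertools import permutations

    # Toy criterion for an acceptable solution: order 1..8 so that
    # adjacent values differ by more than 1.
    def acceptable(order):
        return all(abs(a - b) > 1 for a, b in zip(order, order[1:]))

    items = list(range(1, 9))

    # Trial and error over the whole class of potential solutions.
    exhaustive = next(p for p in permutations(items) if acceptable(p))

    # Heuristic: search only interleavings of the small half with the
    # large half, a much smaller group where solutions are likely to lie.
    small, large = items[:4], items[4:]
    candidates = (sum(zip(s, lg), ()) for s in permutations(small)
                  for lg in permutations(large))
    heuristic = next(c for c in candidates if acceptable(c))
    print(exhaustive)
    print(heuristic)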

Programs written by Newell, Shaw, and Simon have proved theorems of logic21 and played chess, each with increasing skill. A program written by Gelernter and Rochester containing the theorems and heuristics taught in a high-school geometry class has done the homework and taken the examinations of that class.22
But each of these programs handles only a limited
range of problems. To extend the range we need to tie
together a learning system with many storing systems.
Each of us needs only to look in a mirror to see a system
that does all these things and, in addition, makes decisions of the kind described in the next section. Examination of this system is instructive. Its elaborate transducers facilitate learning. These transducers include the
eyes, ears, sense of touch, and inertia-sensitive inner
ears. For each transducer there is a corresponding part
of the biological computer where information is processed before it is stored. Thus the transducers are not
only detectors, they are filters, switching incoming information toward its place of storage. Furthermore,
they are adjustable filters. When you are looking for
something, you have tuned your detectors to find that
thing and ignore other things. Searching for a red ribbon
in your bureau drawer, you tune your eyes to search for
red and need only make a yes or no decision about each
thing you see.
The radars of the SAGE system report only targets
moving at a speed greater than a certain amount. However, the filter here is not adjusted by the computer.
20 J. McCarthy, "Getting closer to machines that think," New York Herald-Tribune, Engineering News Supplement; May 24, 1959.
21 A. Newell, J. C. Shaw, and H. A. Simon, "Empirical explorations of the logic theory machine, a case study in heuristic," Proc. WJCC, pp. 218-230; February, 1957.
22 H. L. Gelernter and N. Rochester, "Intelligent behavior in problem-solving machines," IBM J. Res. Dev., vol. 2, pp. 336-345; October, 1958.


Moreover, radars "see" with very coarse resolution.
Great sums of money have gone into the development
of radar. There has yet to be a comparable effort at
developing a high-resolution system with adjustable
filtering to enable an electronic system to "see" the objects that human beings not only see but think about
most of their waking hours.
In the absence of its own inputs, the simulator will
have to take in the form of punched cards or electric
signals the observations of those who do have these inputs. Lacking a filtering system, it will have to use the
classifications of events made by these observers. The
classification can determine what heuristics and what
part of the memory are to be employed.

IX.

ABILITY

To

MAKE DECISIONS

Decisions can be made by computer programs according to predetermined rules. To run these rules and
memory of the kind developed by Industrial Dynamics Research
and, possibly, a learning routine would make a slow
simulator. Speed can be obtained by imitating the human decision system.
The decision-making apparatus in the human system
is the reticular formation. It is the core of the brain
stem. It is about as big around as a cigarette and about
two inches long. Each of the several thousand large
cells in this formation:
receives signals from almost every source in the human body, coded
in pulse-interval modulation to convey whence the signal came from
and what happened there .... The reticular formation decides what
he ought to do, what he should heed, how vigilant he ought to be and
whether he has time for that idle fancy that inspires his future
action. 23

The method by which the several thousand large cells
of this formation reach a decision is similar to that used
by a battle fleet.
Every ship of any size or consequence receives information from
the others and sweeps the sky for hundreds of miles and the water
for tens of miles with its own sense organs. In war games and in
action, the actual control passes from minute to minute from ship
to ship, according to which knot of communication has then the
crucial information to commit the fleet to action .... It is a redundancy of potential command, wherein knowledge constitutes
authority.

In the reticular formation, each cell is like a ship of
this battle fleet, able to take command when the information it has received is accepted, by all of the several
thousand large cells, as that most requiring attention.
Having spent much of his life mapping the nervous
systems of monkeys and men, Dr. McCulloch is now
studying the nerve connections of the human reticular
formation. Every one of the several thousand large cells in this formation is connected to nearly every other. In addition, every one of these cells receives signals from
23 W. S. McCulloch, "Where is fancy bred," Bi-Centennial Conf.
on Experimental Psychiatry sponsored by the Western Psychiatric
Institute and Clinic, Dept. of Psychiatry, University of Pittsburgh
School of Medicine, Pittsburgh, Pa.; March 5, 1959.


some of the afferent cells of the body and from some of the cells of the cerebral cortex. This much can be determined from dissection. What cannot be determined this way is how each cell influences every other.
Fortunately, much is known about how the reticular formation performs. From this knowledge, McCulloch is considering a possible logical diagram showing how its neurons may affect each other. The resulting design can be implemented by artificial neurons such as those being built by Jerome Lettvin.24
Could the logical design also be implemented by a programmed computer? A small part of it could. Each neuron can be represented by storage registers containing the neuron threshold, the state of the neuron after the last cycle of excitation and inhibition, and the nature of the connections to other neurons. To simulate all of the interconnections in the clock time of the brain would require the processing of at least 1000 (!) instructions in 0.1 second.
An assembly of artificial neurons is called a parallel computer, meaning that all logical operations are occurring at the same time instead of sequentially, as in a programmed computer. For the present, parallel logic is a goal to work towards while using programmed logic.
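The storage-register representation described above can be sketched as follows. The thresholds, states, and connections are invented values, and the synchronous loop merely imitates, in sequence, what an assembly of artificial neurons would do in parallel.

    # Each "neuron": a threshold, a state after the last cycle, and signed
    # connections to the others (+ excitatory, - inhibitory). Values invented.
    thresholds = [1, 2, 1]
    state = [1, 0, 0]                       # states after the last cycle
    weights = [[0, 1, -1],                  # weights[i][j]: connection j -> i
               [1, 0, 1],
               [1, 1, 0]]

    def cycle(state):
        """One cycle of excitation and inhibition: every neuron fires if its
        weighted input from the previous states reaches its threshold."""
        return [int(sum(w * s for w, s in zip(weights[i], state)) >= thresholds[i])
                for i in range(len(state))]

    for _ in range(5):
        state = cycle(state)
        print(state)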

X. WHEN SHOULD THE SIMULATOR BE USED?

A programmed simulator, although slow, can render
a service now by providing an operating model of the
environment of the President, by demonstrating how
new rules may be learned, and by demonstrating how
rules may be applied to make decisions. Starting as a
guide to decision-makers, a simulator could be gradually
improved until it might be able to make decisions on its
own. It would be for Congress, the President, and the
American people to decide if the simulator should be
allowed to do this.
Three measures will be suggested as aids in deciding
when a simulator should be used in this way. One measure is the extent of internal restraint. As Dr. Deutsch
puts it:
For any large ... memory system, the specific content of all combinations that might become dominant ... cannot be predicted. The
possibilities are too numerous as to what combinations might arise
in a human mind, or in any computer ... remotely comparable.
Hence we fear entrusting political control to anyone human mind,
or to any small committee, even though we trust them as being
human personalities ... who share the unspoken and unstated values and inhibitions of our culture and religion.
An electronic machine (at present) can include in its memory,
at best, only those rules of law, morality and religion that have
been stated explicitly in words.... These ... rules a computer
would then apply with terrible literal-mindedness. It might become
the electronic embodiment of the letter that kills, rather than of the
spirit that gives life.
24 J. Y. Lettvin, "Nerve Models," Res. Lab. of Electronics, M.I.T., Cambridge, Mass., pp. 178-179; January 15, 1959. In the diagram, the unlabelled diode at the left is the excitatory input; that at the right, the inhibitory input. The wiper of the potentiometer determines the threshold.

Limitations of computers, when recognized by engineers, appear to stimulate efforts to overcome the limitations. This gives direction to the development of new techniques of memory, ability to learn, ability to make decisions and the additional categories mentioned by Dr. Deutsch.
A further challenge from him should be quoted:

To build into a computer the properties of perceptiveness, tolerance of ambiguity, mercy and spirituality-that is, perceptiveness toward second-order and higher-order patterns of preferences-would require capabilities far in excess of those available at present. So long as such vastly greater capabilities have not been developed, computers can aid human judgment but cannot safely replace it.

The second measure we shall consider is the extent and sensitivity of feedback control such as that in Fig. 3. If we find difficulty in trusting one human mind, we shall have greater difficulty in trusting a simulator. However, a control network is possible consisting of many simulators. Given authority to act, a decision would be made by a majority of those simulators that had not been destroyed by attack or sabotage. Each would simulate the duties of a Congressman or group of Congressmen. As Dr. Clippinger has suggested for a simulator of the President,19 each should be operated in parallel with the one it is simulating so as to:

... (a) learn, (b) demonstrate to Congress and the President that it is worthy of their respect and faith for at least a limited period, (c) provide time to educate and persuade the people of this democratic country that it should be used.

Such a network could have feedback controls as extensive as Congress itself, at least during trial periods.
The third measure of when a simulator should be used
is the measure of the emergency when, if Congress, the
President, and the American people have previously
approved, the simulator would be permitted to act.
Seeking this measure takes us back to the question
raised by Dr. Rabi. In accord with that, two conditions
would make the use of a simulator desirable. One condition is imminence of destruction such as a 90 per cent
probability that 5,000,000 people will be killed, a 9 per
cent probability that 50,000,000 people will be killed, or
any of the equivalent probabilities. The other condition
is the inability of the President to respond.
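The sense in which these probabilities are equivalent is that of equal expectation; this arithmetic is my reading and is not spelled out above. Both stated conditions describe an expected 4,500,000 deaths.

    # Both stated conditions carry the same expected number of deaths.
    print(0.90 * 5_000_000)     # 4500000.0
    print(0.09 * 50_000_000)    # 4500000.0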
Equipment with extraordinary reliability is needed
to determine both of these conditions. The estimate of
probable deaths would need to be made by a computer
that has both information about approaching missiles
and models of population. The President's ability to
respond in a predetermined time could be determined
by interrogating him, by requiring him to report periodically, or by some other method.
The desired reliability should be obtained either by
operating computers in parallel, which is done in the
SAGE system, or by applying the theory of building
reliable circuits out of unreliable components. 3 The

latter requires the kind of parallel logic described in the last section with interconnections and thresholds so selected that the failure or erratic behavior of one or more elements will not affect the output.
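The principle common to both approaches is majority agreement among redundant units, sketched below. The three-unit vote is invented for illustration; it corresponds to the threshold-logic idea rather than to SAGE's duplex arrangement.

    # Majority vote over redundant units: the failure or erratic behavior
    # of one element does not affect the output.
    def majority(outputs):
        """Return the value reported by more than half of the units."""
        return max(set(outputs), key=outputs.count)

    print(majority([1, 1, 0]))  # one faulty unit is outvoted; prints 1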
APPENDIX
DEFINITION OF SIMULATION

The word "simulation" is used in this paper in its modern technical sense:25

... to assume the appearance of, ... without any intention to deceive. I refer to its use in the field of mechanical-electronic computation. Here the procedure is to simulate physical or mental processes in setting up a problem which is then given to a computer to solve.25

The Industrial Dynamics Research program at M.I.T. uses the words "make a model of" in the place of

25 J. C. Warner, "The fine art of simulation," Carnegie Alumnus, Carnegie Inst. of Tech., Pittsburgh, Pa.; 1959.


"simulate." The model in this case is a set of equations.
These M.I.T. people save the word simulate to describe
the evaluation of these equations, one at a time, for a
given set of input conditions. They solve the equations
at time intervals which are short compared to the
shortest delay intervals of the system being modeled.
They are thus simulating simultaneous solution.
In this paper "simulate" is given the meaning of the
first paragraph above. Simulation here is intended to
achieve a "quality" equal to or excelling the performance of the human being to be simulated, for the periods
when it is given his responsibility. The "quality" of performance is a composite of breadth of facts which lead
to a decision, reliability of the logic and computation
used in processing these facts, speed, and human considerations. A simulator might attain acceptable quality by excelling in some of these considerations while
falling short in others.

Can Computers Help Solve Society's Problems?
JEROME ROTHSTEIN†

† Edgerton, Germeshausen and Grier, Inc., Boston 15, Mass.
INTRODUCTION

THE advent of large-scale computers gave new impetus to mechanizing the handling of tremendous
quantities of data. It also indicated the possibility
of carrying out many ventures of social significance
which are now completely impractical. It is hard to see
an important social revolution in the first, per se. Automatic billing of telephone subscribers or mechanization
of clerical activities, for example, is only substitution of
machine for manual activities. This has been going on
continuously since the beginning of the industrial revolution. Exciting prospects emerge when one considers
fields characterized by enormous amounts of data together with complicated intertwining causal relationships buried under statistical blur.
The present paper considers a few of very many possibilities. They were chosen mainly because one might
expect them some day to have enormous impact, both
on the individual and on society. The first group bears
on the weather and on economic planning and policy,
the second on various questions of public health, and
the third on "the proper study of mankind," man himself. They are tentative groupings with no pretense to
completeness or profundity. It is believed, however,
that they make some general statements plausible.
These are that modern computer and data handling
techniques may
1) lead to making our economic system more productive, and to smoothing cycles of inflation and deflation, of employment, and of farm income;
2) revolutionize our ideas of public health, and make
the world a more wholesome dwelling place;
3) revolutionize our knowledge of ourselves, our
abilities, susceptibilities, mental, physical and genetic
constitution, as well as diagnostic and preventive
medicine.
We believe it is the responsibility of the computer
engineer and scientist to point out such potentialities, to
acquaint specialists in many fields with what computers
can do, to collaborate with them in applying computer
techniques to those fields, to keep research foundations
and government agencies aware of areas worthy of support, to keep administrators, policy makers, and legislators informed and advised, and thereby to assist in
the formulation of sound public policies.
In the discussion below, military, industrial, and scientific applications are very largely neglected. This is


not because these fields are unimportant, but rather
because their importance has been so well recognized
that a paper of this length could now contribute little
to them.
WEATHER, COMPUTERS, AND ECONOMIC POLICY

The historic role of weather and climate in human
and economic terms needs no belaboring. Famine,
drought, floods, plenty, epidemics, mass migrations, the
rise and fall of civilizations, wars and their outcome
have often been engendered or decided by long- and
short-range weather and climatic variations. With foreknowledge, as in the biblical story of Joseph and
Pharaoh, countermeasures become possible, with tremendous gain in economic and human terms. Modern
data processing is making it possible to use the tremendous mass of accumulated and current meteorological,
astronomical, and climatic data to make long- and
short-range weather predictions more accurate. When
continuous global cloud surveys by satellite are available, accuracy will advance even more, and even
weather control will doubtless be more effective. The
average citizen will know when to carry an umbrella or
plan a vacation, retail establishments will plan inventories more intelligently, and farming will be less of a
gamble. If one knows a drought is going to occur, for
example, it is silly to plant extensively where there is no
artificial irrigation. If it is known that a good crop is
imminent and that the following year will be one of
drought, the government might well consider making
provisions for crop storage. Knowledge of this sort on a
global scale can go a long way toward eliminating famine in one place and glut in another. The effect of this on
international relations and on the likelihood of revolutions and wars may well be stupendous. More locally,
public works programs in agricultural areas could be
planned to dovetail with an expected decline in demand
for farm labor, lessening the shock to, or recession of, the
whole economy. Production based on agricultural raw
materials can be more intelligently programmed when
yields are forecast even approximately. Inventory policy and servicing depots for farm machinery could be
set up more efficiently. Railroad rolling stock, truck
fleets, and the like can be administered more efficiently
when demand is predicted more accurately. Clearly, the
techniques of operational research, which could make
extensive use of data-handling systems, would become
much more readily applicable to tremendous areas of
planning, allocation, production, servicing, inventory,
and other policy problems. Expert consulting services
for small enterprises, unable to support operational research on their own, would become economically justified, and might well lower failure rates for small business. Fuel, electric power generation, and hydroelectric
policies are also weather dependent, and the economic
value of long-range weather information in these fields is
clearly tremendous. It hardly seems an exaggeration to
say that little of the national and world economy is un-

affected by weather, and that most of it would benefit
by better knowledge of weather trends.
There is no reason to restrict considerations like these
to agriculture. Some control over inflation and deflation
is exerted by the Federal Reserve System and the Treasury by control of discount rates and the sale of government securities, for example. Policies presently established as "curative," often with undesirable lag, may
become "preventive" when economics, aided by largescale computer techniques, has made economic forecasts more accurate. The forecast problem is comparable
in complexity to the global weather problem, and may
well be attacked with similar techniques. Fluctuations
may be one of the prices of freedom, but the instability
and suffering engendered by excessive swings can surely be much reduced by intelligent countermeasures, and
with no real loss of freedom. It would seem almost suicidal not to develop such countermeasures. Computer
techniques can be expected to play a vital role in this
field.
COMPUTERS AND PUBLIC HEALTH

In many fields of public health, detailed cause-and-effect relationships are submerged by multitudinous
accidental factors. Masses of data gathered over long
periods of time must often be digested and tested by
sophisticated statistical techniques before valid statements can be made about them. All statements, on
which action is to be taken, should be treated as testable statistical hypotheses, with action justified when
the hypothesis has reached some preassigned confidence
level. The examples below were chosen at random, and
represent but a small sample of the total one could cite.
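As a minimal sketch of such a test: compare the incidence of a condition in two large populations and act only past a preassigned confidence level. The incidence figures below are invented, and the normal approximation stands in for the sophisticated statistical techniques mentioned above.

    import math

    # Invented counts: cases observed in two populations of equal size.
    n1, cases1 = 100_000, 480       # e.g., unexposed population
    n2, cases2 = 100_000, 560       # e.g., exposed population

    p1, p2 = cases1 / n1, cases2 / n2
    p = (cases1 + cases2) / (n1 + n2)                # pooled incidence
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # standard error
    z = (p2 - p1) / se                               # two-proportion z statistic
    p_value = 0.5 * math.erfc(z / math.sqrt(2))      # one-sided, normal approx.

    print("z =", round(z, 2), " one-sided p =", round(p_value, 4))
    print("act" if p_value < 0.01 else "gather more data")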
Fluoridation of drinking water is a measure backed
by much competent medical opinion and bitterly fought
by a number of lay groups. Proponents of fluoridation
claim that tremendous reductions can be made in
dental caries with no ill effects. Its opponents make dire
predictions about the effect of fluoride on the kidneys,
the nervous system and almost every other organ in the
body. There are millions of people at the present time
who consume fluoridated drinking water. It therefore
seems entirely feasible to amass completely convincing
and compelling data on the correlation between fluoridation of drinking water and every ill to which the flesh
is heir. Of course, even in the face of compelling evidence
one often finds crackpots who refuse to be convinced. If
one has faith in democracy, however, one must believe
that in the long run the majority will recognize and accept the truth. If statistically valid evidence of this sort
is gathered-and computers can certainly play a vital
role in doing this-and it turns out that almost all tooth
decay can be safely and permanently eliminated from
the population, the gain would be incalculable.
A similar situation appears to be involved in estimating the effect of radioactive background and cosmic
rays on stillbirths, cancer, and genetic impairment of
large populations. It has been supposed by some that

there is a certain threshold of danger. Others maintain
that any increment in the radiation background, no
matter how small, will produce additional cases of defective births, mutations, and of cancer. One clearly
cannot make controlled human experiments, but the
city of Denver is a mile higher than New York. The inhabitants of Denver are thus exposed to a higher cosmic-ray background than the people of New York. In addition, uranium ores in Colorado and a number of other
western states must subject the people in those regions
to radioactive background, to uranium materials in
their foods and the like which are absent in many coastal
areas. Some parts of the world, such as Travancore,
India, have high natural backgrounds due to the presence of monazite sands or other radioactive materials.
It thus seems entirely feasible to get statistically convincing data on the incidence of conditions of all sorts in existing populations living under a variety of radiation
backgrounds. With extensive, computer-processed data
one can make a more realistic assessment of the dangers
of atomic testing, for example. One can perhaps find
whether a threshold for radiation damage exists, or
even if there are beneficial effects of radiation in very
small amounts. There is some evidence that normal
individuals have some immunity to cancer. It therefore
does not seem entirely impossible that fall-out in very
small amounts might have no effect on normal individuals but could shorten, by a small amount, the life expectancy of individuals fated to succumb to cancer.
Similarly, data on abnormal births and congenital defects might lead to a better understanding of the genetic
hazards.
A third field is that of poisons, taken in a broad sense.
Coal or oil smoke from the heating plants of private
dwellings, and exhaust fumes from automobiles could
conceivably have subclinical toxic effects. It is known
that carcinogenic substances are produced in oil refineries. Such substances can also be produced by combustion of tobacco and many organic or carbonaceous materials. Some studies indicate a connection between
smoking and the incidence of lung cancer and cardiovascular diseases. It seems possible that "chemical fallout" from daily industrial, heating, smoking, and automobile-riding activities could be a hazard greater than
radioactive fall-out. Chemical agents can also produce
mutations. Among these are colchicine and other complex compounds related to coal tar derivatives and other
substances with great biological activity. A large-scale
statistical survey of the quantities and identities of atmospheric and other contaminants of all sorts, and their
correlations with the incidence of various diseases, appears eminently desirable. If one found, for example,
that small communities in the Rocky Mountains have
far lower incidence of cardiovascular disease than smoggy industrial areas, with suburban residential areas intermediate, then laws requiring the chemical clearance
of smog and precipitation of dust particles might ultimately be in order. Many industrial poisons are known


and in many places measures have been taken to prevent atmospheric contamination, but by and large a
serious and obvious outbreak of some condition seems
to be required before public opinion is aroused enough
to take action. It is therefore quite possible that many
cases of this sort are unnoticed because cause-and-effect relations are buried in a sea of accidental factors.
The possibility that we can live under healthier conditions seems too real for us to overlook these potential
applications of computer processing and analysis of the
tremendous amounts of data required.
COMPUTERS, THE INDIVIDUAL, AND SOCIETY

Every human being is a unique complex universe. As
life insurance companies well know, this does not prevent the drawing of valid statistical inferences about
human populations. The more homogeneous a statistical population, the more accurately can inferences be
made about unobserved individuals on the basis of observations on a sample. In between the heterogeneity of
homo sapiens as a whole, and the homogeneity of identical twins, one senses the possibility of a classification
scheme which would divide mankind into groups of sufficient homogeneity to make it possible to draw medically or otherwise useful inferences about a group. The
scheme would certainly be complex; and there is no a
priori reason to assert, for example, that even a million
categories would enable one to predict that a particular
ten-year-old boy, let us say, will develop angina pectoris
between the ages of forty-five and fifty-five, or that a
healthy young woman will have a cervical cancer before
she is sixty.
The existence of recognizable hereditary characteristics, the small number of clinically distinguishable blood
types, the "tendencies" or "predispositions" known to
practicing physicians, the experience of plant and animal breeders, the development of strains of laboratory
mice like waltzing mice or those who invariably develop
cancer, and research in heredity (e.g., on Drosophila or
Neurospora), and other examples, all suggest that very
useful classifications probably exist whose numbers of
categories are not astronomically large. If a number of
the order of ten thousand categories could adequately
characterize an individual biologically, such a characterization would be of tremendous use in preventive
medicine. Who knows the extent to which cancer and
heart trouble could be anticipated, and perhaps prevented or corrected or ameliorated by proper control of
diet, activity, or environment? The Rh factor which
used to kill babies born to mothers of opposite Rh blood
type is now understood well enough to predict the
couples to whom this would happen. The tragedy is now
avoidable. Who can say what anguish and burdens
could be prevented if a similar understanding of congenital idiocy and other genetically based abnormalities
or inferiorities could be achieved? Who knows but that
the patterns of potential parents of future Einsteins
might become recognizable?


Such knowledge, like all knowledge, would be a
double-edged sword. Unattractive Orwellian prospects
suggest themselves under totalitarian regimes. But
under a form of government in which the sanctity of
human personality is preserved, one sees the possibility
of great good, of healthier individuals, and of a gradually improving human stock. The concept of eugenics
is basically good, and if the individual is free to use eugenic information or not, if he is never coerced, and if
the program is kept free of bias and fanaticism, only
good can come of it.
Setting up such a program would ultimately require a
tremendous amount of experimental data, data processing, analysis, and testing of statistical hypotheses. Years
might pass before results with practical application
could be obtained (though we doubt it), and automation
of hundreds of complex laboratory procedures, many
not now known, might be necessary before people could
be adequately "typed." But if a program developed
capable of coming anywhere near the goals described it
would bring benefits even greater than those discussed.
The old problems of nature vs nurture, or heredity vs
environment as determinants of human capability and
achievement might become better understood. Factors
leading to more valuable and satisfying lives might
stand out in bolder relief if the life patterns of a large
number of biologically similar individuals were studied
with the tools of psychology, sociology, and anthropology. A far deeper understanding of psychosomatic interactions would almost surely result. Cross-cultural
studies, as part of the broad program, might give deeper
insight into the interactions between individual psychology and the cultural milieu, perhaps to enable us to see
how the values and goals of different cultures (which
have much to do with the stresses and motivations of an
individual) generate different statistics of stress syndromes, diseases, antisocial behavior, or mental conditions in biologically similar individuals. Could such
knowledge be obtained, we could begin to develop a
science of society which would permit conscious improvement of our customs, values, and motivations, provision of sanifying influences, and tend to maximum
development of individual talents and satisfaction.
These utopian dreams, distant as they may now seem,
need not be unattainable. With computer techniques,
automation, and the devoted labors of inspired, determined, and creative people, such dreams will come ever
closer to reality.
In these times of international tension, omnipresent
powder kegs and atom bombs, one can seriously maintain that the world cannot afford to neglect these possibilities.
CONCLUSION

There is an old witticism about the difference between a scientist and a philosopher. The former applies

increasingly refined techniques to an increasingly specialized and narrowing aspect of the world. As time goes
on he knows more and more about less and less. The latter continually generalizes, going into increasingly
abstruse abstractions. As time goes on he knows less
and less about more and more. In the end, the story
goes, the scientist knows everything about nothing,
while the philosopher knows nothing about everything.
While neither scientist nor philosopher would admit approaching the limits described, there is enough truth to
the story to show that it might sometimes be good for
the scientist to be a little more philosophical, and for
the philosopher to be a little more scientific.
It is hard, perhaps, for the computer scientist or engineer, wrestling with detailed problems of hardware,
logical design, budgets, deadlines, maintenance, and
operation, to take the philosophic approach very often.
The viewpoint here espoused is that once in a while a
broad philosophic look at things can be valuable. This
paper has therefore studiously avoided technical minutiae and conventional computer applications in favor of
bold (we hope not reckless) extrapolations and generalizations to broad problems. What we dream today we
achieve tomorrow. It is our responsibility not only to
develop day-to-day technology, but also to philosophize
enough to find fields where technology will promote advances measured in broad human terms. Just as we
have learned to dig better with steam shovels than with
our fingers, so must we learn to tackle civilization's problems with the aid of computer techniques rather than
with our bare brains.
We cannot wait until the perfect master plan is
worked out. Not only would this never be done, but
even if some plan were set up as near perfect, it would
probably be obsolete before it could be implemented.
The same kinds of piecemeal attack that characterize
all research would have to be employed. There is much
that can be done immediately and in parallel (some being done already), such as 1) mechanizing public health,
hospital research, and individual physicians' case history data, 2) developing "common language" techniques
to permit easy exchange and consolidation of information gathered by independent agencies, individuals, and
countries, 3) making it possible to consolidate scattered
data on individuals in order to amass birth-to-death
histories, 4) creating techniques of integrated cooperation between individuals, institutions, local, state, federal, and international organizations, 5) encouraging
cross-disciplinary research, 6) sponsoring programs designed to uncover areas to work on immediately, 7) perfecting routines for testing increasingly complex statistical hypotheses, 8) formulating special fields so that
computer techniques can be used on them, and 9) doing
this whenever it becomes feasible. We must press for
the nine parts of action to support the one part of philosophy.


The Measurement of Social Change
RICHARD L. MEIER†

Is it possible to build a synoptic instrument, similar
to a telescope or a radar network, for viewing one's
own society? How may we interpret the myriads of
social activities that are presently undertaken? Preliminary explorations suggest that we need sensing techniques, or transducers, that pick up changes going on
inside the society. External indicators, like air photos,
are too superficial. We are faced with a problem of discovering what operating characteristics a deep-probing instrument should have so that it may be as practical and useful as possible.
Economists judge change in society by modifications
in the makeup of the gross national product and the level
of expenditures; political scientists can analyze elections and polls; but sociologists and anthropologists
have no cumulative sets of accounts or aggregate indexes. They have had hopes, however, similar to those
expressed by Lazarsfeld [3]:
Our economic statistics are today quite well advanced. We know
how much pig iron is produced and how much meat is exported every
year. But we still have very little bookkeeping in cultural matters.
The content of mass media of communication is an important and
readily available source of social data, and it will not be surprising if
this analysis becomes a regular part of our statistical services in the
not too distant future.

These statistics have not yet come into being because
the labor cost was high, the time lags were great, and the
system description was incomplete, so it has been impossible to state how one set of measurements related to another.
Let us take a brief, searching look at the social system.
Society is maintained and changed by the behavior of
its members. Intuitively one feels that the basic unit of
behavior is the act, but acts are not as easily counted
and differentiated as particles, molecules, or organisms.
Satisfactory data can only be obtained when actors are
forced to confine their behavior within certain preset
specifications or codes, which may be called languages,
currencies, habits, or "standard operating procedures."
This behavior must be observed in public spheres, since
the objective, detached observer is missing in private
affairs. The latter will require altogether different instruments and techniques for data accumulation, and will
not be taken up here.
By far the most promising attack upon the problems
of measurability is offered by lumping together small
sets of acts into transactions. A social transaction involves, among other things, the emission of a message

† Univ. of Michigan, Ann Arbor, Mich.

together with evidence of its receipt-apparent to an
observer, but also, through one or another form of feedback, to the agency responsible for emission.
At any given moment, the population of the society
can be divided into senders, receivers, and nonparticipants, much as the economist divides his population
into producers and consumers, and each participant
must play both roles. The message normally contains
some information that is novel to the receiver, more that
is redundant, and some symbols that are quite unintelligible. Some messages are not communicated directly to
receivers, but are stored in libraries, files, and artifacts
where they become a resource embedded in the social
environment. Uncoded information may be gleaned
from the environment through systematic observation.
Scientists, weather observers, diagnosticians, and other
professionals have been trained to reduce these phenomena to coded, communicable form (Fig. 1). The
information flows should be sampled where the wavy
lines occur.
Fig. 1-Communications flows in society. (Observation of the social and physical environment; investment of communications in the social and physical environment. Wavy lines indicate where sampling can best be carried out.)

The term "information" at this point has been used
in its intuitive sense. At a later stage, it will be shown
that the demand for information storage (used now in
its technical sense) in our instrument corresponds
crudely with the volume of these flows in society-about
as well as national income figures represent the combined satisfactions of consumers. The greatest difficulty in the design of our instrument is the conversion
of all of the codes for human communication, oral, written, graphic, gestural, musical, etc., into a single code


which is convenient for machine handling. We may have
to incorporate human operators, whose skills resemble
those of cataloguers in a library, for the more difficult
features of translation. Fortunately, as we shall see
later, the bulk of the information flow in modern society
is in the form of printed language, which seems amenable to automatic sensing, coding, and abstracting
(Luhn [4]).
Given the complexities in social communications,
how would a representative and comprehensive overview of the social transactions be obtained for our instrument? The mass media-television, radio, magazines,
newspapers, books, records, catalogues, direct mail advertising, etc.-could be recorded at the source. Schools,
conferences, committee meetings, shop talk, live performances, etc., would have to be random-sampled. But
on what basis?
Here we are forced to refer to a fundamental property
of modern society. A sender can have many simultaneous receivers, but any given receiver usually accepts messages from but one sender at a given moment; on rare
occasions he may pay attention to two or three, but no
more. The decision as to the completion of a social
transaction depends upon the receiver. He pays for the
message by spending time taking it in. Human time is a
moderately scarce commodity, and cannot be wasted indefinitely. People tend to switch dials and scan the
newspapers and magazines until they find messages that
are interesting to themselves. Message types that gain
few receivers tend to be dropped by senders in favor of
those which get more attention. Therefore, the social
value of broadcast messages may be determined, as a
first approximation, by the amount of time people devote to them.
Thus a comprehensive time budget of the members of
the society-how they allocate time to receiving messages of various kinds, and time to other matters that do
not involve social communications-provides a simple,
additive criterion of value. We could attach the probable
number of receivers, with some estimates of the time and
place of reception, to the records of the messages themselves that are held in storage for our instrument.
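A toy illustration of this additive criterion, in Python; the record fields and figures below are invented for the example, not drawn from the paper.

```python
# Value weight of each stored message = receivers x hours spent receiving it.
records = [
    {"message": "evening newscast",      "receivers": 800_000, "hours_each": 0.5},
    {"message": "trade journal article", "receivers": 4_000,   "hours_each": 1.5},
    {"message": "local church sermon",   "receivers": 1_200,   "hours_each": 1.0},
]
for r in records:
    r["value_weight"] = r["receivers"] * r["hours_each"]   # person-hours
for r in sorted(records, key=lambda r: -r["value_weight"]):
    print(f'{r["message"]:24s} {r["value_weight"]:>12,.0f} person-hours')
```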
The formulation of a society-wide time budget has
already been explored by Meier [5]. A quantitative description of time-use has applications in public affairs
independent of its employment in our system, and the
techniques required for making economical measurements already exist.
Economists have found that macro-analysis is greatly
assisted by subdividing the economy into such sectors
as agriculture, manufacturing, households, etc. The
rules for simplifying the accounting may be different in
each sector. The choice of sectors in social analysis will
depend upon the kinds of reinforcement provided by
other approaches to social measurement, such as public
opinion evaluation, the Census, and historical analysis.
A first guess regarding sectors is provided in Table 1.

TABLE I
COMMUNICATIONS-ORIENTED ALLOCATION OF TIME IN PUBLIC ACTIVITIES

Work                      Travel                    Play
School                    Reading                   Ritual
Radio and TV              Meetings and Parties      Personal Services
Shopping                  Dining and Drinking       Miscellaneous

Possible Subcategories under Work

Factory                   Housework
Office                    Maintenance of Property
Construction and Mining   Services
Agriculture and Forestry  Miscellaneous

Another feature of our instrument must be introduced. If it is to be economically constructed, it should
be decentralized. The headquarters would contain communications which are subject to national distribution,
plus some measurements of exports and imports over
continental boundaries, while branches would exist in
every metropolitan area (Fig. 2).
In the course of proposing a design for this apparatus,
we have piled feature upon feature so that it has by now
become quite elaborate. It is expected to intercept and
store a huge volume of messages, but this is made feasible by eliminating most of the redundancy in social communications, and reducing all the messages to a common code. The instrument attaches weights to these
messages according to the number of persons and the
amount of time spent receiving them; it indicates the
times and places the message is received, and it must
store all of this in a permanent record which can be
scanned quickly and automatically. Fortunately it need
not get every message that is received, but may start
modestly by sampling at, say, a one per million rate. As
the instrument is refined, and a finer-grained representation of social change is required, the
sampling rate may be advanced.
We are now ready to discuss who would use such an
instrument and for what purposes. Planners and administrators who must make decisions for the public regarding parks, playgrounds, schools, traffic patterns, and
various social services should be able to develop criteria
for deciding from studies of trends in social communications and from comparisons with other sources of social
data. Advertisers may be expected to develop their
craft on the basis of the more detailed measurements of
response they would be able to obtain. Politicians should
be able to sense better the distribution of sentiment on
various issues. Educators may assess the impact of
special programs. Changing tastes, the appearance of
new patterns of social interaction, and the passage of
fads should all be registered as factual data-"how
much," "where," and "when." The natural emphasis is
upon local public affairs, mass entertainment, and the
functioning of work, school, and commerce, because
these matters make up the bulk of our communications
activity.

Fig. 2-Organization of social communications as imposed by the location of various activities. (Nationwide network distribution: television, radio, films, magazines, books. Government and business operations: correspondence, reports, directives, catalogs, educational materials. These, with foreign imports, feed the CENTRAL STORAGE; local radio, local TV, newspapers, regional magazines, schools, meetings, churches, shopping, and advertising feed the METROPOLITAN STORAGE POINTS, as do community activities: wholesale markets, public events, conventions, universities, restaurants, bars, sports, visiting.)

A skilled operator would ask his questions in terms of
key words or phrases appearing in the content with a
frequency of 10^-6 to 10^-8. They serve as "tracers" of
message content as it is spread through the population.
Maps and time series can be prepared which show their
buildup and decline. More detailed information about
the changing attitudes of people may be obtained by
reconstructing the contexts within which the key words
appeared.
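In modern terms the tracer technique reduces to counting occurrences of a key word per unit of time in the stored record. A minimal Python sketch, with an invented message stream:

```python
from collections import Counter

def tracer_series(messages, key_word):
    """messages: iterable of (date, text) pairs; returns occurrences per date."""
    series = Counter()
    for date, text in messages:
        series[date] += text.lower().split().count(key_word.lower())
    return dict(sorted(series.items()))

stream = [("1959-03-01", "atom test ban debated"),
          ("1959-03-02", "atom fallout report and atom policy"),
          ("1959-03-03", "city budget hearings")]
print(tracer_series(stream, "atom"))  # buildup and decline of the tracer term
```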
The severest criticism to be made of a representative
record of social communications is that the content of
the messages tends to be superficial. In many, if not
most, social transactions people disguise their true feelings about a subject. An investigator may nevertheless
make many nontrivial observations, and can probe
more deeply, if he desires, by using the "trial balloon"
technique (Fig. 3). An event, closely relevant to the
subject of interest, is purposely created-it may be an
announcement, an incident, or a rumor. The subsequent
wave of "talk" that is stirred up may then be analyzed.
The effect that is triggered off provides a good indication of the sensitivity of the public to that issue at that
time.
These and other small-scale tactical uses in government and commerce should grow rapidly to the point
where large installations may be justified which allow
hundreds of simultaneous operators.
The strategic uses of an instrument of this sort are
still more interesting. The accumulation of socio-cultural "wealth," for example, may be estimated in a
manner analogous to that developed by economists, and
the flows of information through society may also be
estimated. A very brief outline of the steps involved,
and the kinds of conclusions to be obtained, will be presented.

Fig. 3-The "trial balloon" stimulus as revealed by content analysis. (Ordinate: number of times the term appears per unit of time; abscissa: time.) Assume the stimulus contains concepts whose treatment in communications uses terms A, B, and C with high probability.

What is the total of all nonredundant information
that is transmitted in society for a year? The limitation
upon flow is the capacity of the receiver to understand
the messages to which he exposed himself. A receiver has
a limited repertory of terms. Reasonably good statistics
exist only for English vocabulary.
The respective terms that are used in messages can
be mapped according to their probability of occurrence
in social communications, as in Fig. 4. The abscissa is
some arbitrarily defined categorization of meanings,
similar to the Dewey decimal system. When this same
map is put onto polar coordinates, we can show stages
in the development of a receiver as in Fig. 5. The protuberances on the periphery are associated with the
specialties engaged in by the person. The map of transition probabilities between terms would have the same
appearance.
There is a standing rule in society that a sender should
have greater knowledge about the subject of the message than the receiver, if information is to be transmitted.
Thus, on the average, the senders are more informed
and more expert than the receiver, as shown in Fig. 6.
Continued communication would cause the receiver's
repertory of terms to grow in the direction of the
sender's. He would learn something about the subject.
Senders must choose their terms so that they lie on
the periphery of the receiver's map, if they are to save
time and maintain interest.
We are now in a position to estimate the amount of
information flowing that is potentially useful to receivers. Let us define a restricted number of classes of
receivers, say about a hundred, each representing a different segment of society, ranging from illiterates to
various kinds of professionals, but exhaustive of the


Fig. 4-Typical properties of a receiver of social communications. (Ordinate: logarithm of the frequency of terms used in social communications; abscissa: categories of social communications A, B, C, D, E, F, G, H, I, J, K, spaced according to the allocation of the volume of transmitted terms to each of them respectively. The levels obtained can be tested through vocabulary tests, completion tests, etc.; they are excellent indices of intellectual achievement in the respective directions. In American culture A might include terms used in newspapers, magazines, and popular books, B those used in conversation, radio, and television, C those employed in business transactions ... J those used in the fine arts, etc.)

Fig. 6-Relationships between repertories in social communications. (a) The receiver expands his map in the indicated direction as a consequence of communication. (b) Simplified maps showing how the sender chooses terms which lie on the boundary of the vocabulary that is shared, if he wishes to optimize the transfer of information.

Fig. 5-Typical development of the vocabulary map in an individual. The powers of ten shown are levels for the frequencies with which terms appear. If the Zipf distribution (rank order times frequency is a constant) holds, each shell contains the indicated number of terms. Categories or contexts (A, B, C, ... K) are assigned by convention as before.

population. Each would have a distinctive map. The
messages transmitted in society and stored in our social
record must have the receivers who choose to spend
time on them classified according to category. Shannon
[6] has described a method for using typical receivers
for the measurement of redundancy.
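The shell bookkeeping of Fig. 5 can be checked under the stated Zipf rule. A short Python sketch, assuming a Zipf constant of 0.1 (an assumption for illustration, not a figure from the paper):

```python
def shell_count(c, f_lo, f_hi):
    """Terms whose frequency lies in [f_lo, f_hi) occupy ranks c/f_hi .. c/f_lo."""
    return int(c / f_lo - c / f_hi)

C = 0.1   # assumed Zipf constant: the most frequent term has frequency ~0.1
for k in range(2, 7):
    f_hi, f_lo = 10.0 ** (-k), 10.0 ** (-(k + 1))
    print(f"frequencies 10^-{k+1}..10^-{k}: about {shell_count(C, f_lo, f_hi):,} terms")
```

Each lower decade shell holds roughly ten times as many terms as the one above it, which is why the outer shells of the vocabulary map are the crowded ones.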
Interestingly enough, the significant information
flow rate tends to stabilize itself for a given context.
Editing the rough spots out of manuscripts has this effect, and directing has this function for mass media.
The pauses unconsciously inserted into human speech
have recently been shown to work on the same principle
[1]. This property, combined with miscellaneous other
available information about mass public behavior in
American metropolitan areas, enables us to arrive at the
first approximation of information flow (Table II).
TABLE II
INFORMATION TRANSMISSION IN METROPOLITAN SOCIETY*
(POPULATION 5,000,000)

Mode of Reception             Time Allocated    Rate Estimated    Estimated Flow
                              (hours/year)      (bits/minute)     (bits/year)
Reading                       4×10^9            1500              36×10^13
Television                    3×10^9            500               9×10^13
Lecture and Discussion        4×10^9            200               5×10^13
Observation of Environment    3×10^9            100               2×10^13
Radio                         1.5×10^9          300               3×10^13
Films                         1.6×10^8          800               8×10^12
Miscellaneous                 5×10^9            100               3×10^13
                                                Total             6×10^14

per capita average ~10^8 bits/year

* Judged in terms of the probable repertories of receivers, not in the accepted sense used in information theory. Possibly a new term should be coined for information distributed over a population.
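The internal consistency of the table can be verified with a few lines of Python, assuming the columns relate as flow = (hours/year) × 60 × (bits/minute):

```python
modes = {   # hours/year (aggregate), bits/minute -- transcribed from Table II
    "Reading":                    (4e9,   1500),
    "Television":                 (3e9,    500),
    "Lecture and Discussion":     (4e9,    200),
    "Observation of Environment": (3e9,    100),
    "Radio":                      (1.5e9,  300),
    "Films":                      (1.6e8,  800),
    "Miscellaneous":              (5e9,    100),
}
total = 0.0
for name, (hours, rate) in modes.items():
    flow = hours * 60 * rate            # bits/year
    total += flow
    print(f"{name:28s} {flow:10.1e} bits/year")
print(f"{'total':28s} {total:10.1e}; per capita ~{total / 5e6:.1e} bits/year")
```

The recomputed total comes to about 6×10^14 bits/year, or roughly 10^8 bits/year per capita, agreeing with the table.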

Comparisons between the poorest and richest metropolitan areas can also be exceedingly suggestive (Table
III). Observers now agree that socio-cultural growth
parallels economic growth but the introduction of
measurements suggests that social communications must
either precede economic growth or grow more rapidly
than income. Apparently an expansion of socio-cultural
activity is a necessary but not sufficient precursor of
economic development.
TABLE III
INCOME AND INFORMATION FLOW EXTREMES IN URBAN SOCIETY

                                     San Francisco        Addis Ababa or Jakarta
Income                               $3000 capita/year    $150 capita/year
Non-redundant* information receipt   ~10^8 bits/year      ~10^6 bits/year

* Again in terms of probable repertories of receivers. This assumes that 70-80 per cent of residents in the poorer cities are illiterate.

The heavy volume of information transmitted by
reading is highly significant. A society like our own
which is increasingly white collar reads more at work
and at home. There are limits to human ability to receive information, however, which are believed to be in
the neighborhood of 10^9 bits per capita per annum for a
population with the present distribution of mental capacities. At the present estimated rate of gain, this


theoretical saturation level is likely to be reached within
two generations. The prospect is startling enough to
cause us to investigate more closely the stresses associated with communication saturation in human organizations.
My feeling is that our instrument is already technically feasible. Simple calculations show that sampling
social communications at a rate of ten parts per million
presents storage requirements within range of existing
equipment, but the desired degree of access remains
unclear. Message collection in the field is not a problem, but the programming for storage and the cataloguing of nonverbal materials has been inadequately developed. Much experimental work and formal analysis
will be required before a truly comprehensive cross section of social change can be achieved.
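Two of the figures above follow from short arithmetic. A Python sketch; the 4 per cent annual gain is an assumed rate (the text gives only "two generations"):

```python
total_flow = 6e14      # bits/year in a 5,000,000-person metropolis (Table II)
print(f"storage at 10 ppm sampling: ~{total_flow * 10e-6:.0e} bits/year")

flow, ceiling, growth, years = 1e8, 1e9, 0.04, 0
while flow < ceiling:
    flow *= 1 + growth
    years += 1
print(f"~{years} years from 10^8 to 10^9 bits/year at {growth:.0%} annual gain")
```

At that assumed rate the saturation level is reached in about sixty years, which is consistent with the "two generations" quoted above.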
REFERENCES

[1] F. Goldman-Eisler, "Speech production and language statistics," Nature, vol. 180, p. 1497; December 28, 1957.
[2] M. Kochen and M. Levy, "The logical nature of an action scheme," Behavioral Science, vol. 1, pp. 265-289; October, 1956.
[3] P. Lazarsfeld, "Communications research and the sociologist," in "Current Trends in Social Psychology," W. Dennis, ed., University of Pittsburgh Press, Pittsburgh, Pa., pp. 232-233; 1948.
[4] H. P. Luhn, "A statistical approach to mechanical encoding and searching of literary information," IBM J. Res. Dev., vol. 1, pp. 309-313; October, 1957.
[5] R. L. Meier, "Human time allocation: indices for the measurement of social change," J. Amer. Inst. Planners, vol. 25, pp. 27-33; February, 1959.
[6] C. E. Shannon, "Prediction and entropy of printed English," Bell Sys. Tech. J., vol. 30, pp. 50-64; January, 1951.

Simulation of Sampled-Data Systems Using
Analog-to-Digital Converters
MICHAEL S. SHUMATE†

† Consultant for Space Technology Labs., Inc., and California Inst. of Tech., Pasadena, Calif.

INTRODUCTION

UNTIL recently, systems simulation problems
could generally be split into two classes: problems
requiring only an analog computer for solution
and problems requiring only a digital computer for solution. Any general problem has a number of characteristics which adapt it to either one or the other method
of solution.
A systems problem is most readily adapted to analog
computer simulation when it requires a relatively short
solution time on the computer and a relatively inaccurate solution is acceptable; has relatively "high" frequencies; and has nonlinearities such as, for example,

saturation, deadzone, or hysteresis. To be adapted to
digital computer simulation, a systems problem usually
possesses relatively low frequencies and requires a long
solution time; can be adapted to an iterative form of
simulation without introduction of an instability (this
is usually implied by a lack of nonlinearities such as
those mentioned above); and has a range of variable
which exceeds that possessed by an analog computer
solution.
Certain problems involving combinations of both
groups of properties may often be split into two separate problems, one involving high-frequency nonlinear
effects, and one involving low-frequency effects.
An example of such a problem is the simulation of the
flight of a liquid-propelled ballistic missile. The missile's


reactions to external and internal forces can be simulated on an analog computer, and the motion of the
missile along a trajectory can be simulated on a digital
computer.
However, recent developments in control systems,
particularly systems using sampled or discrete data,
have generated a third class of simulation problems;
this class of problem usually involves both sampled and
continuous information, has nonlinearities and no general dividing line between high and low frequencies, and
usually has a range of variable which exceeds that possessed by an analog computer. An example of such a
problem is the simulation of an inertially guided missile.
The guidance and control systems must have rapid response characteristics, and the system may involve a
special-purpose digital computer, whose program makes
it act like a multimode, adaptive controller.

Fig. 1-Two examples of sample-and-hold circuits: (a) a hold amplifier with synchronous switch; (b) a passive hold circuit with synchronous switch and isolation amplifier.
SIMULATION OF SAMPLED DATA SYSTEMS

Large analog simulation facilities are thus faced with
the problem of obtaining reasonable representations of
such systems, and must thus have the capability of at
least sampling continuous information, and hopefully
be able to perform some intelligent and useful operations on such information.
Several straightforward methods of performing sampled-data simulation are presently available. Sampling
may be accomplished by a so-called hold-amplifier [see
Fig. l(a)] or a passive hold circuit [see Fig. l(b)]. These
circuits are not easily adapted to simulation of digital
computer operations, because of difficulty involved in
holding past values of sampled information. This difficulty is partially eliminated by using transfluxors¹ as
hold devices.
The most general method for simulation of sampleddata systems is to use analog-to-digital converters to
connect an analog computer to a general-purpose digital
computer (see Fig. 2). In such an installation, one or
more analog-to-digital converters are used to sample
continuous information and present it to a digital computer. Several more analog-to-digital converters are
used open loop as digital-to-analog converters to present
and hold digital information to the analog computer.
Two major disadvantages of this system are:
1) The difficulty in starting the two computers simultaneously, and
2) The difficulty in scheduling operation time on the
digital computer, since in most installations digital
computing time is a premium quantity.
The first of these difficulties may be remedied by
equipping the digital computer with an "interrupt" feature, and the analog computer with a "start" control.
The interrupt feature allows the digital computer to
¹ J. A. Rajchman and A. W. Lo, "The transfluxor," Proc. IRE, vol. 44, pp. 321-332; March, 1956.

Fig. 2-General-purpose sampled-data simulator: analog-to-digital converters and digital-to-analog converters connecting the analog computer to a general-purpose digital computer.

break its program when a pulse (perhaps a sample
pulse) is received, and jump to a special subroutine
used for external communication purposes. The start
control allows the analog computer hold relays to actuate simultaneously with the same pulse that first interrupted the digital computer.
The second difficulty is easily overcome if one is willing to pay for the digital computer time. It seems a little
ridiculous, however, to use a large general-purpose
digital computer if the digital program involved is relatively simple. For example, in order to use this system
to simulate a sample-and-hold, the digital computer
must be programmed to perform a transfer of a number
from an analog-to-digital converter to a digital-to-analog converter, and would thus sit idle during most of a
sample period.
It therefore becomes evident that such a large, general-purpose sampled-data simulator should only be
used for complex, sophisticated problems, and some
auxiliary equipment should be developed to simulate
simpler sampling operations.

EQUIPMENT

The construction of an auxiliary sampled-data simulator is, of course, dictated by equipment present in a
simulation facility. The facility currently available to
the author consists of a 300 amplifier Electronic Associates Analog Computer, an EPSCO Addaverter conversion system, and a Remington-Rand Univac Scientific
Digital Computer, Model 1103A.
The approach used in the construction of an auxiliary
sampled data simulator was based on the fact that the
facility currently possessed equipment which would
both sample analog voltages and hold analog voltages.
This equipment, the Addaverter (see description below), would, of course, be in use part of the time as a
communication link to the 1103A, but its time schedule
was sufficiently open to warrant thinking of using it for
other purposes.
The Addaverter is used to simulate a group of parallel
sample-and-hold channels. A sample-and-hold is obtained by effectively placing an analog-to-digital converter in series with a digital-to-analog converter. The
analog-to-digital converter is caused to sample its input
voltage, the resulting digital number is then transferred
to the digital-to-analog converter, and converted back
into a voltage, which remains unchanged until the complete cycle is repeated again. Note that delaying of the
conversion of the number back into a voltage is equivalent to delaying the sampled values of the input voltage
and is useful for simulation of transportation lag, etc.
In order to obtain a better description of the implementation of the above concept into a complete sampled
data simulator, some description of the Addaverter is
necessary.
The Addaverter consists of 30 analog-to-digital converters (hereafter abbreviated ADC), 15 used as such,
and the other 15 used open loop, as digital-to-analog
converters (abbreviated DAC).
All 15 ADC's sample² the voltages present at their
inputs simultaneously, under the control of a central
sample controller, which is triggered by a single pulse,
called a sample pulse; the resulting digital numbers remain intact in the memory of each ADC until the next
sample pulse occurs.
The DAC's operate individually, each converting the
digital number previously stored in its memory to an
analog voltage when it receives a pulse, called a present
pulse. The voltage at a DAC's output remains unchanged until a new number has been read into its
memory and a present pulse has been received. A single
present pulse input and 15 pulse inhibit inputs are used,
instead of 15 pulse inputs.
In order to transfer the digital numbers from the
ADC memories to the DAC memories without using the
digital computer, an additional piece of equipment, normally used for Addaverter maintenance purposes, is employed. One particular operational mode of this equipment, when triggered by a single pulse, transfers the
numbers stored in the ADC memories into the DAC
memories in a sequential fashion: the content of the first
ADC memory is transferred into an auxiliary register,
and then into the memory of the first DAC;³ the process
is then repeated for each of the other 14 ADC-DAC
channels. Provision is made to prevent the number
transfer for any individual channel by energizing an inhibit gate associated with that channel.
These three units, the 15 ADC's, the 15 ADC's used
as DAC's, and the number transfer equipment, comprise the basic sampled-data simulator. In order to obtain satisfactory operation, it is necessary to supply a
burst of three pulses each time it is desired for the simulator to sample. Another piece of equipment was constructed to supply the pulse burst necessary to control
the simulator. This equipment consists of necessary control logic for the simulator, and a large quantity of pulse
and dc logic with a prepatch capability, to permit flexible operation of the entire system. A source of timed
pulses is available, to use as system clock; a "start" system is available, to synchronize starting of analog simulations with the clock pulses.
An ADC-DAC channel may thus be used to sample-and-hold an analog voltage with either of two transfer functions:

$$H_1(s) = \frac{1}{s}\left(1 - e^{-sT}\right) \qquad (1)$$

and

$$H_2(s) = \frac{1}{s}\left(1 - e^{-sT}\right)e^{-s\tau}, \qquad \tau \le T, \qquad (2)$$

where $T$ is the sampling period, and $\tau$ a delay time.
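In time-domain terms, (1) is a zero-order hold and (2) is the same hold presented after a delay of τ. A minimal Python sketch of that behavior (an illustration, not the Addaverter circuitry):

```python
import math

def sample_and_hold(signal, t, T, tau=0.0):
    """Output at time t of a period-T sampler whose held value is presented
    tau seconds after each sampling instant (0 <= tau <= T)."""
    n = math.floor((t - tau) / T)   # most recent sample already presented
    return signal(n * T)

# Stair-step a sine wave, without and with a delay of tau = 0.1
print(sample_and_hold(math.sin, 1.0, T=0.25),
      sample_and_hold(math.sin, 1.0, T=0.25, tau=0.1))
```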
No problems are incurred if it is desired to have all 15
ADC-DAC channels operating in the same mode. However, it is sometimes desired to simulate multirate sampled-data systems, or monorate systems that have several samplers each with a different time delay. No difficulties would arise if each ADC-DAC channel operated
independently of every other channel. This is not the
case, however, since all channels sample simultaneously.
The simulator control equipment was so designed to
permit simulation of up to four different sample-and-hold operations; however, their operating frequencies
must be restricted to integral multiples of some frequency.⁴ A block diagram of the simulator control is
shown in Fig. 3. The simulator control has a set of
sample pulse inputs, a set of present pulse inputs, and a
set of "enable" outputs, all located on the prepatch
panel. The inhibit inputs for the present pulse (DAC
² The Addaverter uses the "ripple-down" method for analog-to-digital conversion and requires 180 μsec to complete one conversion.
³ This process requires 40 μsec.
⁴ This restriction is brought about by the simultaneous sampling of the ADC's. It is possible to use two asynchronous frequencies, but some external pulse logic is necessary to prevent double pulsing of the sample control.


Fig. 3-Block diagram of simulator control logic. (Sample pulse to the ADC's; present pulse to the DAC's; group sample inputs, group present inputs, and group enable outputs on the prepatch panel.)

Fig. 4-Example pulse logic diagrams: (a) a group operated as a sample-and-hold, with the start control input; (b) two groups operated at related frequencies, the group II clock obtained by integral frequency division of the group I clock.

inhibit), and inputs for the transfer inhibit are wired in
parallel⁵ and brought up to the prepatch panel. The
four "enable" outputs, one for each operating group,
must be patched to the inhibit inputs of the channels,
as predetermined by the programmer. If a particular
channel is to operate according to the group 1 mode, its
inhibit input must be patched to the group 1 enable.
Several channels may operate from the same group enable.
A particular group's operating mode is completely determined by what pulses are fed to its sample input and
its present input. For example, to operate a group as a
sample-and-hold [see Fig. 4(a)], a clock pulse must be
fed to the group sample input each time it is desired
for a sample to occur. A present pulse must then be
fed to the group present input 815 μsec after each
sample pulse. The sample input pulse causes all group
enable outputs to be reset, then the particular group
enable (whose input was pulsed) is set, and the ADC's
are caused to sample (all 15 sample); 180 μsec later, the
number transfer is initiated. (Only those channels associated with the operating group have numbers transferred.) The present input pulse, which must be delayed
at least 815 μsec to allow the sample and number transfer to be completed, then causes the DAC's connected
to the operating group to present the numbers transferred as voltages.
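The pulse burst just described can be written out as a timeline. A Python sketch; the 180 and 815 μsec figures are from the text, the event wording is paraphrase:

```python
def burst(t0):
    """One simulator cycle; times in microseconds."""
    return [
        (t0,       "sample pulse: reset group enables, set this group's enable, "
                   "all 15 ADC's sample"),
        (t0 + 180, "conversion done; transfer pulse moves ADC numbers to DAC "
                   "memories (inhibited channels skipped)"),
        (t0 + 815, "present pulse: DAC's of the operating group present the "
                   "transferred numbers as voltages"),
    ]

for t, event in burst(0):
    print(f"t = {t:3d} usec  {event}")
```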
Note that the sample-and-hold is actually a sample-and-hold plus a short delay; this delay is not long enough to introduce unwanted effects in the majority of problems simulated. If it is desired to introduce more delay, it is only necessary to increase the delay of the group present input pulse. The maximum delay possible for a group present input pulse is the sampling period for the particular group.

To operate two groups at different, related frequencies, the group running at the higher frequency is operated as described above, but the clock pulses for the lower frequency group must be obtained by dividing the higher frequency clock pulses by some integral number. [See Fig. 4(b).] The start control input is shown in Fig. 4(a) and 4(b).

A great many other modes of operation are possible, but will not be presented here because of space restrictions.

⁵ Logically, if a number is transferred into a DAC, it is done so with the intention of presenting it.
⁶ J. R. Ragazzini and G. F. Franklin, "Sampled-Data Control Systems," McGraw-Hill Book Co., Inc., New York, N. Y.; 1958.

SIMULATION OF SIMPLE DIGITAL COMPUTER OPERATIONS⁶

Simulation of sampled-data systems which incorporate a digital computer may be accomplished by using sample-and-hold channels to simulate the digital computer's transfer function (provided the transfer function is not too complicated). Simple digital filters, second- or third-order difference equations, etc., may be simulated.

Fig. 5-Wiring of sample-and-hold channels to simulate long time delays (a series string, each channel contributing one minimum delay).

If a group of ADC-DAC channels are wired in series [the output of the first wired to the input of the second (see Fig. 5), with the control logic being pulsed by the logic circuitry given in Fig. 4(a) of the example logic diagrams, the variable delay set to 815 μsec, and all channel inhibit inputs wired to the operating group enable

output], then it can be easily shown that successive channel outputs are each delayed from the previous channel output by one sample period.

En* in this treatise will be used to denote the sampled and held value of E; the subscript n refers to the nth sample pulse.

Suppose E is zero, and has been for a considerable length of time. Then En*, En-1*, etc., will all be zero. Now suppose that during the interval between the nth and the (n+1)th samples, E changes to some nonzero variable value. Then, after the present pulse associated with the (n+1)th sample, channel 1's output will change to En+1*. However, during the sampling associated with the (n+1)th sample, channel 1's output was still zero, hence the output of channel 2 will remain at zero after the (n+1)th present pulse.

After time n+2, the output of channel 2 will have the value En+1*, since that value was still at the output of channel 1 when the (n+2)th sampling occurred. The output of channel 3, En*, will still be zero. (This operation is a consequence of the fact that each sample-and-hold channel has a slight transportation lag.)

Therefore, a series string of sample-and-hold channels will act like a delay line which propagates values of E that have been sampled and held. Further consideration will show that this is equivalent to the way a digital computer would remember past values of a variable. Thus

Output of channel 1 = En*
Output of channel 2 = En* e^{-sT}
Output of channel 3 = En* e^{-2sT}.
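The propagation just described is easy to mimic: at each sample pulse every channel takes the previous output of its predecessor. A Python sketch (illustrative only):

```python
def step(chain, new_input):
    """One sample pulse: every channel takes the previous output of the channel
    before it, so held values shift one place down the string."""
    return [new_input] + chain[:-1]

chain = [0.0, 0.0, 0.0]                  # three channels, initially zero
for n, e_n in enumerate([1.0, 2.0, 3.0, 4.0], start=1):
    chain = step(chain, e_n)
    print(f"after sample {n}: {chain}")  # older samples propagate to the right
```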


Using z-transform notation ($z = e^{sT}$; see Ragazzini and Franklin⁶),

Output of channel 1 = En*
Output of channel 2 = En* (1/z)
Output of channel 3 = En* (1/z²).

The first channel has the transfer function

$$G_1 = \frac{1}{s}\left(1 - e^{-sT}\right) \qquad (3)$$

and each succeeding channel has the transfer function

$$G_2 = G_3 = \cdots = \frac{1}{z}. \qquad (4)$$

Suppose it is desired to simulate the transfer function

$$G(z) = \frac{z(a_0 z + a_1)}{z^2 + b_1 z + b_2} = \frac{E_0^*}{E_i^*}. \qquad (5)$$

Solving for $E_0^*$,

$$E_0^* = \frac{\left(a_0 + a_1/z\right)E_i^*}{1 + b_1/z + b_2/z^2}. \qquad (6)$$

The block diagram for this is shown in Fig. 6. The implementation of a second-order difference equation can be accomplished with three sample-and-hold channels instead of four; the diagram in Fig. 6 was chosen because it made the error analysis more expedient.

Accuracy of Simulation

Each Addaverter unit (ADC or DAC) is accurate to within ±0.1 per cent of its nominal input (or output) voltage. Therefore, the voltage out of each sample-and-hold channel is accurate to within 0.2 per cent of the voltage put into the channel. Because of the 0.2 per cent inaccuracy of the Addaverter, some uncertainty in the solution of a difference equation may arise. The following discussion treats the limitations caused by this inaccuracy.

A sample-and-hold channel may be visualized as having its output made up of the sum of a voltage which is identical to the voltage at the input during the last sample and an unknown random voltage. This is graphically explained in Fig. 7.

Using Fig. 7 as a model sample-and-hold channel, the complete diagram for a second-order difference equation is shown in Fig. 8. The expression for the output is

$$E_0^* = G(z)\left(E_i^* + R_{I}^*\right) + \frac{a_1 R_{II}^* - \left(b_1 + b_2/z\right)R_{III}^* - b_2 R_{IV}^*}{1 + b_1/z + b_2/z^2}. \qquad (7)$$

Further analysis would be impossible without some simplifying assumptions about the character of the $R_i$. An $R_i$ is a function of the voltage at the channel input and hence cannot be assumed Gaussian. Observations taken from the Addaverter have shown that the $R_i$ are, to a first approximation, constant offsets. Therefore, let

$$R_i^* = \frac{z}{z-1}\,\epsilon_i, \qquad (8)$$

where $\epsilon_i$ is the constant offset associated with the $i$th channel.

Furthermore, to make analysis expedient, the second-order difference equation is assumed to be that of a damped sinusoid:

$$G(z) = \frac{z\,e^{-aT}\sin aT}{z^2 - 2z\,e^{-aT}\cos aT + e^{-2aT}}. \qquad (9)$$

Fig. 7-Model of a noisy sample-and-hold channel. (A perfect sample-and-hold with input E; its output E* is summed with a random voltage R* to give E* + R*.)

Fig. 6-Block diagram for simulation of second-order difference equation.

Fig. 8-Block diagram of second-order difference equation, including random disturbances.

Substituting (8) and (9) into (7),

$$E_0^* = E_i^*G(z) + \frac{z}{z-1}\,\epsilon_{I}\,G(z) + \frac{z}{z-1}\,\epsilon_{II}\,\frac{z^2 e^{-aT}\sin aT}{z^2 - 2z e^{-aT}\cos aT + e^{-2aT}} + \frac{z}{z-1}\,\epsilon_{III}\,\frac{2z^2 e^{-aT}\cos aT - e^{-2aT}z}{z^2 - 2z e^{-aT}\cos aT + e^{-2aT}} - \frac{z}{z-1}\,\epsilon_{IV}\,\frac{e^{-2aT}z^2}{z^2 - 2z e^{-aT}\cos aT + e^{-2aT}}. \qquad (10)$$

The first term of (10) represents the desired output of
the simulation. The remaining terms represent the error
caused by the offsets in the channel outputs. The error
is split into two parts: a constant term and an oscillatory term. These two parts are evident in the partial
fraction expansion of the error terms of (10).
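A numerical rendering of (5)-(10): the Python sketch below runs the damped-sinusoid difference equation with constant offsets injected at the four channels, as in the Fig. 8 model; the parameter aT and the offset values are arbitrary illustrations.

```python
import math

a_T = 0.2                                  # aT of (9): damping and frequency
a1 = math.exp(-a_T) * math.sin(a_T)        # numerator coefficient (a0 = 0)
b1 = -2.0 * math.exp(-a_T) * math.cos(a_T)
b2 = math.exp(-2.0 * a_T)

def run(eps, steps=60):
    """Difference equation of (5)-(6) with a constant offset eps[i] injected
    at each of the four sample-and-hold channels."""
    y1 = y2 = 0.0                          # E0* delayed one and two samples
    out = []
    for _ in range(steps):
        ei = 1.0                           # unit-step input Ei*
        e0 = (a1 * (ei + eps[0] + eps[1])
              - b1 * (y1 + eps[2]) - b2 * (y2 + eps[3]))
        y1, y2 = e0, y1
        out.append(e0)
    return out

ideal = run([0.0] * 4)
noisy = run([0.002, 0.002, 0.002, -0.002])  # offsets chosen for illustration
print(max(abs(a - b) for a, b in zip(ideal, noisy)))  # worst-case error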

The constant part of the error is

$$\text{Error of } E_0^* = \frac{z}{z-1}\cdot\frac{(e^{-aT}\sin aT)(\epsilon_{I} + \epsilon_{II}) + (2e^{-aT}\cos aT - e^{-2aT})\,\epsilon_{III} - e^{-2aT}\,\epsilon_{IV}}{1 - 2e^{-aT}\cos aT + e^{-2aT}} + \text{oscillatory terms.} \qquad (11)$$

Suppose the offsets have such a polarity that they give the largest error: namely, suppose ε_III = +ψ and ε_IV = -ψ, where ψ is the average channel offset. Furthermore, suppose the total error of simulation is to be no larger than 100ψ (i.e., the error-to-offset ratio is 100). Then substituting into (12),

(13)
Fig. 4-21 × 21 switching array. (X matrix decoder and 5K resistor network; control inputs from the time-division coder; selected outputs to the operational amplifier.)

Fig. 6-Z interpolator. (Transistor switches, low-pass filters, and an operational amplifier summing the selected Z voltages.)
Fig. 5-Sequence chart. (Binary count against lines selected, 0 to 100; pre-interpolator input and output waveforms across squares 1-2 through 20-21.)

increment in the independent variable. See Fig. 5. The counter energizes the matrix decoder, which is a set of gates. These gates energize a different pair of matrix busses for each state of the counter.
Interpolation is performed by time division and summation of the voltages on the A, B, C, and D output wires. Fig. 6 shows the interpolator circuit. Each circle containing a Δx represents a transistor switch which rapidly turns on and off, with its on percentage determined by the position of x between the two x-boundary values of its square, and similarly for y. Fig. 7 shows the time-division coder circuit. The chopped A, B, C, and D voltages are filtered and then summed by the output amplifier to produce z.
The interpolation process in somewhat more detail is
as follows: the x-input voltage is combined with fixed
voltages switched by the A-D converter to produce the
sawtooth pre-interpolator input shown in Fig. 5. The
pre-interpolator converts this to the triangular form
designated pre-interpolator output. This is fed to the

Fig. 7-Time-division multiplier. (Input from the pre-interpolator; switch outputs SW. 1 and SW. 2 at ±5 volts; 10K resistors and a 1 μFd filter feeding the time-division coder.)

time-division multiplier of Fig. 7, whose output is a pair
of pulsing voltages whose on time corresponds to the
magnitude of the pre-interpolator output voltage. The
pulsing voltages are fed to the two transistor switches in
Fig. 6.
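The principle reduces to duty-cycle averaging: a switch chopped between two levels and then low-pass filtered multiplies by the duty ratio. A minimal Python sketch of that idea:

```python
def chopped_average(duty, v_on, v_off, slots=1000):
    """Average recovered by the low-pass filter when a switch spends a fraction
    `duty` of each period at v_on and the rest at v_off."""
    on = round(duty * slots)
    return (on * v_on + (slots - on) * v_off) / slots

print(chopped_average(0.3, 5.0, -5.0))   # ~= 0.3*5 + 0.7*(-5) = -2.0
```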
The interpolation equations governing the system are:

$$z'' = Z_A + \frac{\Delta x}{X}\,(Z_B - Z_A) \qquad (1)$$

$$z' = Z_C + \frac{\Delta x}{X}\,(Z_D - Z_C) \qquad (2)$$

$$z_{xy} = z'' + \frac{\Delta y}{Y}\,(z' - z'') \qquad (3)$$

$$z_{xy} = \left(1 - \frac{\Delta x}{X} - \frac{\Delta y}{Y} + \frac{\Delta x}{X}\frac{\Delta y}{Y}\right) Z_A + \left(\frac{\Delta x}{X} - \frac{\Delta x}{X}\frac{\Delta y}{Y}\right) Z_B + \left(\frac{\Delta y}{Y} - \frac{\Delta x}{X}\frac{\Delta y}{Y}\right) Z_C + \frac{\Delta x}{X}\frac{\Delta y}{Y}\, Z_D. \qquad (4)$$
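Eqs. (1)-(4) translate directly into code. A Python sketch of the same bilinear interpolation, writing fx for Δx/X and fy for Δy/Y:

```python
def interpolate(z_a, z_b, z_c, z_d, fx, fy):
    """fx = dx/X and fy = dy/Y, the fractional position inside one square."""
    z2 = z_a + fx * (z_b - z_a)   # (1): along x at one pair of corners
    z1 = z_c + fx * (z_d - z_c)   # (2): along x at the other pair
    return z2 + fy * (z1 - z2)    # (3): along y between the two results

# Expanding the three steps reproduces the corner weights of (4).
print(interpolate(0.0, 1.0, 1.0, 2.0, fx=0.5, fy=0.5))   # -> 1.0
```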


A Time-Sharing Analog Computer*
JOHN V. REIHING, JR.†
I. INTRODUCTION

THE transient behavior of physical systems is often
studied by the use of electronic analog computers.
If the system considered is characterized by a continuous distribution of properties, the describing system
equations are of the partial differential class. An analogous set of equations, soluble on the analog computer,
can often be formed by the application of finite difference approximations. For example, a physical system,
originally space and time dependent, can sometimes be
sectionalized into a number of space segments and then
described by a set of ordinary differential and/or algebraic equations. Such sectionalization ordinarily results
in a number of equation sets of similar form.
Most often the computational approach to the sectionalized problem is to associate with each section a
block of analog equipment. Each section block is usually composed of identical analog computer components. The total system is then simulated by cascading the section blocks. For multisection systems, as are
required for rapid transients, equipment requirements
increase as n, the number of sections, increases. As a result, some problems cannot be accommodated by the
analog computer facility. Further, the large number of
potentiometers resulting from the sectionalized cascade
solution increases problem setup time and the probability of operator error in the potentiometer-set phase of setup.
A time-sharing analog solution, described in this paper, replaces the cascade of similar circuits by a single circuit whose components are time-shared. In conjunction with the actual computing elements of the time-shared circuit are circuits to provide time delay and timing functions. This sharing reduces equipment requirements, permitting a smaller investment in computing equipment for a given problem size, or an increased problem capacity over that available without time-sharing.


II. DESCRIPTION AND METHOD OF OPERATION

A. A Typical Problem and Method of Solution
The description and method of operation of a time-sharing analog computer designed to solve a typical set of pressurized water, forced convection reactor core heat transfer equations follows. The machine to be described

* This paper is an abstract of a thesis presented at the University of Pittsburgh for the M.S. degree. The author is indebted to Drs. J. F. Calvert, T. W. Sze, and D. J. Ford, all of the University of Pittsburgh, for helpful criticism.
† Bettis Atomic Power Div., Westinghouse Electric Corp., Pittsburgh, Pa. Operated for the U. S. Atomic Energy Commission by the Westinghouse Electric Corp. under Contract AT-11-1-GEN-14.

illustrates the principal features of time-sharing, and the
solution of the sectionalized reactor core equations may
be considered a typical application.
The set of differential-difference equations to be solved is given below and is shown, along with a sketch of the model, in Fig. 1. Coolant flow is assumed constant for the analysis:1

$$\frac{dT_{mk}(t)}{dt} = \alpha_1 \dot{q}_k(t) - \alpha_2\,[T_{mk}(t) - T_{wk}(t)] \qquad (1)$$

$$\frac{dT_{wk}(t)}{dt} = \alpha_3\,[T_{mk}(t) - T_{wk}(t)] - \alpha_4\,[T_{ok}(t) - T_{ik}(t)] \qquad (2)$$

$$T_{ok}(t) = 2T_{wk}(t) - T_{ik}(t) \qquad (3)$$

$$T_{ok}(t) = T_{i,k+1}(t) \qquad (4)$$

The forcing functions for the kth section are the inlet coolant temperature Tik(t) and the heat flux q̇k(t). The output coolant temperature is Tok(t), and the average metal and coolant temperatures are Tmk(t) and Twk(t), respectively.
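The equation set also lends itself to a simple digital cross-check. The Python sketch below Euler-integrates (1) and (2) for a four-section cascade, with (3) and (4) coupling the sections; every coefficient and temperature is an arbitrary illustrative value, not plant data:

    # Euler integration of one section; the outlet of section k feeds
    # section k+1 per (4). Values are illustrative only.
    def step_section(Tm, Tw, Ti, q, a1, a2, a3, a4, h):
        dTm = a1 * q - a2 * (Tm - Tw)          # (1) metal temperature
        To = 2.0 * Tw - Ti                     # (3) outlet temperature
        dTw = a3 * (Tm - Tw) - a4 * (To - Ti)  # (2) coolant temperature
        return Tm + h * dTm, Tw + h * dTw, To

    n, h = 4, 0.01
    Tm, Tw = [500.0] * n, [480.0] * n
    for _ in range(1000):
        Ti = 460.0                             # inlet forcing, section 1
        for k in range(n):
            Tm[k], Tw[k], To = step_section(Tm[k], Tw[k], Ti, 100.0,
                                            0.5, 0.8, 0.6, 0.4, h)
            Ti = To                            # (4): To,k becomes Ti,k+1
    print([round(t, 1) for t in Tw])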
The block diagram of Fig. 2 illustrates the conventional, cascaded method of solution. Each of the n sections is composed of the same computing elements. Fig. 3 indicates, in block diagram form, the solution of the equation set by time-sharing. The substitution of a single time-shared circuit for the tandem string of circuits is evident by comparing these two diagrams.
Fig. 4 shows a four-section analog circuit diagram for a time-sharing machine to solve the set under consideration. This analog circuit can be considered as a combination of two circuits, i.e., an equation solving section and an auxiliary or service section. The equation solving section consists of integrators A and B, summer D, and the gain potentiometers with settings α1', α2', α3', and α4'. Integrator A solves (1) for Tmk(t) given the heat flux and the film drop. Eq. (2) is solved by integrator B for Twk(t) by integrating the film drop and the coolant temperature rise across the kth section. Summer D produces the outlet coolant temperature Tok(t) by solving (3). The auxiliary or service circuit, peculiar to this time-sharing machine, includes six special devices. The devices and their functions are as follows.

1 See Appendix for a list of symbols.


Fig. 1-Reactor core heat transfer equations and model sketch (metal-coolant interface convective heat transfer described by a film coefficient).

Fig. 2-Reactor plant simulator with multisection reactor core tandem section computer.

Fig. 3-Reactor plant simulator with multisection reactor core time-sharing computer.

Fig. 4-Time-sharing computer for the solution of a sectionalized heat transfer problem (reactor core); circuit model for the open loop, constant flow, non-uniform heat flux situation (four-section model).

1) Delay circuits DA, DB, DC, and DD receive, store, and discharge voltages at times determined by control signals. The delay circuits can be classified into two types by considering the nature of the signals upon which they operate. First are those which delay initial condition voltages, e.g., DA, DB. These signals are discrete in nature and the necessary delay may be discontinuous, i.e., discrete sampling, storage, and discharge. Second are those which delay continuous voltages, e.g., DC, DD. This second type is extremely difficult to realize economically, particularly when the time delays are long. For this reason all of the delays designed for this prototype computer are of the first type. The continuous signal delays are approximated by smoothing operations performed on the discontinuous delays.
2) The integrator operation control circuit changes the operational state of the integrators (Reset, Hold, Operate) in response to command signals from a timing circuit.
3) A heat input circuit provides the kth section heat forcing function when the kth section equations are being determined by the equation solving circuit.
4) Smoothing, gain, and phase inverting circuits perform the functions their names imply.
5) Gates A1, A2, A3, A4, A7, A8, and A9 control the flow of signals.
6) A timing circuit controls the sequence of operations of the auxiliary devices and the main and time-sharing computers. Command signals from the timing circuit are shown as heavy lines in Fig. 4.

Prior to the initiation of operation of the time-sharing computer, steady-state calculations are performed to obtain the initial conditions for all n sections. These initial conditions are denoted by Tmk(0) and Twk(0). The voltage analogs of these temperatures are stored in discontinuous delay circuits DA and DB, respectively. These circuits consist of four tandem cells denoted by C1 through C4. The number of cells making up delays DA and DB does not necessarily equal the number of sections being simulated. The requirement is that the number of cells and the stepping rate yield a time delay of nτR + (n - 1)τs, where n is the number of sections, τs the sampling time, and τR the reset time. For the subsequent discussion delay DC is assumed to consist of five cells, permitting five samples in each sampling interval τs, i.e., p = 5. The time delay employed is τs for the first sample and τR + (p - 2)τs/p for the four subsequent samples. If a continuous delay were to be used, a delay of τs + τR would be employed.
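The cell-count arithmetic can be illustrated with a short calculation; the values are arbitrary, and the DC stepping interval follows the reading reconstructed above:

    # Total delay the initial-condition circuits must supply:
    # n*tau_R + (n - 1)*tau_s for n sections.
    def ic_delay(n, tau_s, tau_r):
        return n * tau_r + (n - 1) * tau_s

    n, tau_s, tau_r = 4, 2.0, 0.1        # seconds, illustrative
    print(ic_delay(n, tau_s, tau_r))     # 0.4 + 6.0 = 6.4 s

    # With p samples per interval, delay D_C steps once per tau_s for the
    # first sample and once per tau_R + (p - 2)*tau_s/p thereafter.
    p = 5
    print(tau_r + (p - 2) * tau_s / p)   # 1.3 s between later samples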
The heat input forcing function circuit (shown in the
upper left of Fig. 4) computes {b, fb, t.b, and {b, and these
variables appear at the inputs of gates Al through A 4 ,
respectively. The outputs of these gates are multiplied
to form the qk input line.
Two additional engineering considerations remain. A
sampling period is chosen to establish the rate of com-

343

putation as controlled by the timing circuit. The choice
of a sampling period is governed by the speed of th~
transient to be encountered, the degree of reactivity
feedback via the temperature coefficient, the desired accuracy, the allowable machine running time, and other
considerations. The sampling period is denoted by T
The second consideration is the evaluation of the hot
leg transport time TdT,' This time fixes the number of
tandem cells required in delay circuit DD (output storage, DD, could also be a continuous type delay, e.g.,
tape, if economic considerations permit). With Tdh
established, the timing circuits are adjusted to provide
such a delay.
The operation of the time-sharing analog computer proceeds as follows. The computer integrators are set to Reset, installing initial conditions Tm1(0) and Tw1(0) at the outputs of integrators A and B. With gates A1 and A7 open and all others closed, the main and time-sharing computers are set to the Operate condition. The circuit remains in Operate for τs seconds, during which time the forcing function Ti1(t) flows into the computer. Since gate A1 is open, the heat flux presented to the circuit is q̇1. The output of summer D is, consequently, the analog behavior of To1(t) for the period τs, i.e., the output water temperature transient of the first section of the four-section model during the sampling period τs. This output temperature transient is to become the input forcing function for section two of the model during the subsequent operational period, and so provision is made to store discrete values of To1(t). Such storage is accomplished by stepping the discontinuous delay circuit DC at intervals during the initial τs seconds. Such a stepping action is caused to take place every τs/4 seconds by the timing circuit. The result of this action is the storage of five samples of To1(t) in delay circuit DC at the end of τs seconds. These five voltage analogs, denoted by To1(0), To1(1), To1(2), To1(3), and To1(4), appear in cells C5, C4, C3, C2, and C1 of DC, respectively. At the end of τs seconds the main computer (external to the time-sharing computer) and the time-sharing computer are set to the Hold condition. Shortly thereafter, the initial condition delay circuits DA and DB are stepped, placing conditions Tm2(0) and Tw2(0) at the outputs of integrators A and B. Stepping delays DA and DB also causes the state of integrators A and B (at a time τs after the beginning of the transient) to be stored in C1 of DA and DB. These analog voltages are the initial conditions required for the second complete cycle of computation. They displace, in the delay circuits, the "initial" initial conditions. The notation employed for these conditions is Tmk(4) and Twk(4), where the parenthetical number denotes the state of the variable after a time 4τs/4 seconds. Then, with gates A2 and A8 open and all others closed, the time-sharing computer is placed in Reset and shortly thereafter in the Operate condition.
The input forcing function for the second section is

now the output of the first section as previously computed. This signal is introduced by stepping the DC delay. Such stepping causes the discrete analog voltages to pass out of the delay, through the smoothing, gain, and phase inverting circuit, and into the computational circuit via gate A8. This same stepping of DC causes the output transient of the second section to pass into the delay DC for storage and future use in the next cycle of computation. Again, the output transient is sampled at five points τs/4 seconds apart in time. The output transient from the third section is obtained during the third τs interval of time by the same sequential process employed in solving the section two response, and similarly for section four.
At the end of the fourth τs second interval the contents of delay circuit DC are five voltages representing samples of the outlet water temperature transient during the initial sampling period of the input water temperature forcing function. At this time, with the time-sharing and main computers in the Hold condition, the initial condition delays DA and DB are stepped, placing Tm1(4) and Tw1(4) upon integrators A and B. Further, gate A8 is closed and A7 opened, permitting the second τs interval of the inlet coolant temperature forcing function to drive the first section computation when the computers are set to Operate. Gate A1 is opened and the time-sharing computer is set to Reset. Both computers are now placed in Operate and the second cycle of computation begins. The four section computations proceed as previously described. During the first τs seconds of the second complete cycle gate A9 is open. This open gate permits the output transient from the first cycle to pass into the output storage delay circuit DD as the first section response to the second sampling interval displaces this information in storage device DC.
As the computation proceeds in a cyclic fashion the output storage DD becomes filled with samples of the desired output coolant temperature transient. Each analog sample, spaced τs/4 seconds apart in time, is stored in sequential order in the DD device. The earliest (timewise) voltage appears in the highest order cell and the latest voltage sample in the lowest order cell, i.e., the input cell (C1). As soon as sufficient τs second intervals have elapsed so that the sum of the intervals totals the hot leg transport delay τdh, the output storage delay begins to discharge the sampled output transient. This output information is sent out in spurts, four section computation times apart. Each data spurt consists of five voltage samples spaced τs/4 apart in time. Such discrete data may be smoothed to convert to a continuous analog form.
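The cyclic schedule just described reduces to a simple loop. The Python sketch below is structural only: solve_section merely stands in for integrators A and B and summer D, and all numbers are placeholders:

    # One complete time-sharing cycle: each interval tau_s, one section is
    # solved while its sampled outlet rotates through delay D_C to become
    # the next section's forcing function.
    from collections import deque

    n, p = 4, 5                       # sections, samples per interval
    d_c = deque([0.0] * p)            # recycled forcing-function delay D_C
    output_store = []                 # stands in for output delay D_D

    def solve_section(k, forcing_samples):
        # Placeholder for the analog solution over one tau_s interval.
        return [f + 1.0 for f in forcing_samples]

    inlet = [0.0] * p                 # first-section forcing, main computer
    for k in range(n):
        forcing = inlet if k == 0 else list(d_c)
        outlet = solve_section(k, forcing)
        d_c = deque(outlet)           # store To,k samples for section k+1
    output_store.extend(d_c)          # held in D_D for the delay tau_dh
    print(output_store)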

B. Feedback Considerations
Time-sharing techniques must include provision for
feedback signals such as the temperature feedback loop
signal in the sim ula tion of a reactor plant wi th a nonzero
temperature coefficient of reactivity. The effect to be
simulated can be described as

Or if the temperature coefficient, K TCk , is assumed spatially constant
(6)

where
(7)

An exact summation process, as required by (5) or (7), is not possible with time-sharing techniques. This inherent limitation exists because the instantaneous behavior of the average water temperature in all n sections is known only during the computation of the final or nth section, and then only if all previous Twk(t) transients are stored.
The circuit next described approximates Tave(t) as given by (7). The method proposed is the repeated correction of the average existing at the start of any one complete cycle by the use of the section data as it becomes available. Listed below are equations which describe such a method.
Tave at the start of a four-section cycle is

$$T_{ave}(0) = \frac{1}{4}\sum_{k=1}^{4} T_{wk}(0);$$

during the first section computation

$$T_{ave}(t) = \frac{1}{4}\sum_{k=1}^{4} T_{wk}(0) + \frac{1}{4}\,[T_{w1}(t) - T_{w1}(0)];$$

during the second section

$$T_{ave}(t) = \frac{1}{4}\sum_{k=1}^{4} T_{wk}(0) + \frac{1}{4}\,[T_{w1}(4) - T_{w1}(0)] + \frac{1}{4}\,[T_{w2}(t) - T_{w2}(0)];$$

and the third section

$$T_{ave}(t) = \frac{1}{4}\sum_{k=1}^{4} T_{wk}(0) + \frac{1}{4}\,[T_{w1}(4) - T_{w1}(0)] + \frac{1}{4}\,[T_{w2}(4) - T_{w2}(0)] + \frac{1}{4}\,[T_{w3}(t) - T_{w3}(0)];$$

and finally, during the fourth section computation the average becomes

$$T_{ave}(t) = \frac{1}{4}\sum_{k=1}^{4} T_{wk}(0) + \frac{1}{4}\,[T_{w1}(4) - T_{w1}(0)] + \frac{1}{4}\,[T_{w2}(4) - T_{w2}(0)] + \frac{1}{4}\,[T_{w3}(4) - T_{w3}(0)] + \frac{1}{4}\,[T_{w4}(t) - T_{w4}(0)].$$
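The repeated-correction scheme is easy to verify numerically: folding in each section's change one at a time reproduces the exact end-of-cycle average. A Python sketch with arbitrary sample values:

    n = 4
    Tw0 = [480.0, 485.0, 490.0, 495.0]   # T_wk(0), start-of-cycle values
    Tw4 = [482.0, 486.5, 491.0, 500.0]   # T_wk(4), end-of-section values

    t_ave = sum(Tw0) / n                 # sum stored on capacitor C1
    for k in range(n):
        # During section k the correction tracks T_wk(t); at the end of
        # the section it settles at T_wk(4).
        t_ave += (Tw4[k] - Tw0[k]) / n
    print(t_ave, sum(Tw4) / n)           # identical at the end of a cycle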

At the end of the first complete cycle of computation the Tave signal is

$$T_{ave}(4) = \frac{1}{4}\sum_{k=1}^{4} T_{wk}(4),$$

and so, during the next cycle, the identical process can
be repeated.
A circuit to accomplish this task is shown in Fig. 5.
The expressions, indicated in terms of temperature, are
the analogs of the voltages which, of course, actually occur. The state of the circuit is that which would exist at
the start of the first section computation of the first
cycle.
The operation of the circuit proceeds as follows. Prior to time zero,

$$\frac{1}{4}\sum_{k=1}^{4} T_{wk}(0)$$

is stored on capacitor C1. Relay Tave is de-energized and Tw1(t) = Tw1(0), so that the output of summer P is also $\frac{1}{4}\sum_{k=1}^{4} T_{wk}(0)$. The analog voltage of this term is applied to capacitor C2 through the NC contact on relay Tave. The inputs of summer Q are +Tw1(t) and -Tw1(0), which are obtained from the output of integrator B and the gain and phase inverter following delay DB, respectively. Integrator B and delay DB are shown in Fig. 4. During the first section computation (computers set to Operate) Tw1(t) begins to differ from Tw1(0). This difference is computed by summer Q and added to the original summation stored on C1 by summer P, after being attenuated by 1/n by the input potentiometer shown. This new voltage is applied to capacitor C2.
At the end of the first section computation, relay Tave is energized and maintained up during the second section period. Now capacitor C2 "remembers" the initial voltage, and the new sum, consisting of

$$\frac{1}{4}\sum_{k=1}^{4} T_{wk}(0) + \frac{1}{4}\,[T_{w1}(4) - T_{w1}(0)],$$

is applied to capacitor C2 through the NO contact. During this period, the inputs to summer Q are Tw2(t) and Tw2(0). Relay Tave is thus alternately de-energized and energized until all four sections have been computed. At the end of four periods capacitor C2 has the analog of

$$\frac{1}{4}\sum_{k=1}^{4} T_{wk}(4)$$

stored upon it. Summer R corrects the stored signals for attenuation and dc shift suffered in passing through the cathode-follower read-out circuit. The succeeding cycles proceed as the first.

Fig. 5-Summation circuit to approximate the average coolant temperature.

C. Time and Space Dependent Forcing Functions

Provisions for forcing functions which are both time and space dependent require another novel time-sharing circuit. The heat flux input to the reactor core is a typical example.
By finite differencing techniques the kth section heat flux input can be approximated by

$$\dot{q}_k(t) = m_k \dot{q}_T(t),$$

where mk is constant. Several system variables cause time variations in the reactor core heat flux q̇T(t), e.g., changes in rod position, coolant temperature, and pressure. If the sampling period of the time-sharing computer is chosen so that the core heat flux q̇T(t) changes appreciably during the period, provision must be made to include such variations in the computation.
Fig. 6 is a circuit diagram of a heat flux circuit which provides both a space and time variant forcing function. During the first sampling period relay Q is inoperative, permitting the q̇T(t) signal to flow to the input bus of the mk scaling potentiometers and to the input of discontinuous delay DQ (this delay could also be of the continuous type, e.g., magnetic tape). While the q̇T signal flows, the DQ delay is stepped, causing five (arbitrary number) samples to be stored within the delay. Potentiometer m1 scales q̇T(t), yielding q̇1(t). During this time, rotary switch RQ is in position 1. Thus, q̇1(t) appears at the output of the heat flux circuit. At the start of the second section computation relay Q is operated and switch RQ is stepped to position 2 by pulsing the step RQ lead. During the second interval delay DQ is stepped periodically, causing the initial q̇T(t) signal to reappear on the potentiometer input bus as well as at the input of the delay. Smoothing and gain are applied to the discontinuous signal as indicated. During this period q̇2(t) appears as the output of the flux circuit. This sequence is continued until all n sections have been computed. At the completion of the computational cycle rotary switch RQ is set to position 1 by pulsing the home RQ lead, which actuates the release magnet. Relay Q is released and the circuit is ready for the second sampling period. This circuit performs the functions of gates A1 through A4 of Fig. 4.

Fig. 6-Heat flux forcing function circuit for a time-sharing computer solving the reactor core equations.
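The factorization q̇k(t) = mk q̇T(t) and the sample-replay scheme can be sketched as follows; the mk values and flux samples are hypothetical:

    # Space-and-time-variant forcing: one core flux signal q_T(t), sampled
    # during the first interval, is replayed and rescaled for each section.
    p = 5
    m = [1.2, 1.0, 0.8, 0.6]                     # m_k scaling constants
    qT_samples = [10.0, 10.5, 11.0, 11.2, 11.3]  # q_T(t) samples, interval 1

    for k, mk in enumerate(m, start=1):          # rotary switch R_Q steps
        qk = [mk * q for q in qT_samples]        # delay D_Q replays q_T
        print("section", k, ":", qk)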

D. Time Dependent Forcing Functions
Another class of forcing functions which must be handled by time-sharing are those which are time dependent only. An example of this class is the coolant flow rate through the reactor core. For nonconstant flow, (1) and (2) are amended to read:

$$\frac{dT_{mk}(t)}{dt} = \alpha_1 \dot{q}_k(t) - \alpha_2 f(t)^{0.8}\,[T_{mk}(t) - T_{wk}(t)], \qquad (8)$$

and

$$\frac{dT_{wk}(t)}{dt} = \alpha_3 f(t)^{0.8}\,[T_{mk}(t) - T_{wk}(t)] - \alpha_4 f(t)\,[T_{ok}(t) - T_{ik}(t)]. \qquad (9)$$

Fig. 7-Time variant flow rate forcing function circuit for a time-sharing computer solving the reactor core equations.

Clearly those terms in (8) and (9) with coefficients α2, α3, and α4 are dependent upon flow rate. Fig. 7 is a circuit designed to include the effects of variable flow upon the reactor core analog simulation. The circuit is shown as it would appear when augmenting the time-sharing simulator illustrated in Fig. 4. Only integrators A and B of Fig. 4 are indicated, and all other components are omitted for simplicity. The variable flow portion is set off by the heavy broken line in Fig. 7. At the start of the transient run relay F is de-energized, allowing the flow signal f(t) to pass into the computer as a continuous function. During the initial sampling period, delay DF is stepped, causing discrete samples of f(t) to be stored in the delay (DF could be a continuous delay, e.g., tape). The sample f(t) also passes to multiplier MB, and to the function generator FG1 and hence to multiplier MA. The outputs of these multipliers, MA and MB, are the desired flow-dependent functions, respectively. At the end of the section 1 computational period, relay F is energized by applying voltage to the Operate F gate lead. During the section 2 and succeeding section computation periods the discontinuous delay DF is stepped periodically, causing the initial f(t) sample to pass out of the delay, through the smoothing and gain circuit, and into the computing circuit. Thus, the sample is reused in each section period. At the conclusion of the complete n-section computing cycle, relay F is de-energized and the circuit is prepared to receive the second f(t) sample from the main computer.
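The flow-dependent coefficients enter simply as multiplications by f(t)^0.8 and f(t); a brief illustration with assumed constants:

    f = 0.9                       # fractional flow rate, illustrative
    a2, a3, a4 = 0.8, 0.6, 0.4    # constant-flow coefficients, hypothetical
    print(a2 * f ** 0.8, a3 * f ** 0.8, a4 * f)  # effective gains at this flow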

III. PILOT MODEL

In order to determine the workability and accuracy of the time-sharing computing method, a pilot model was designed and constructed with sufficient capacity to solve a four-section reactor core heat transfer problem with constant coolant flow and uniform axial heat flux.
The necessary delay circuits for the pilot model were designed by an extension of an invention attributed to Janssen2 and later demonstrated by Philbrick.3 A block diagram of a delay circuit is shown in Fig. 8. Buffer amplifiers B1, B2, etc., have the following properties:
1) very high input impedance,
2) very low output impedance,
3) amplification close to unity, and
4) high available output power.

2 J. M. L. Janssen, "Discontinuous low-frequency delay line with continuously variable delay," Nature, vol. 169, p. 148; January, 1952.
3 "A Palimpsest on the Electronic Analog Art," ed. by H. M. Paynter, G. A. Philbrick Researches, Inc., Boston, Mass., p. 163; 1955.


Fig. 8-Block diagram of a discontinuous delay line with continuously variable delay (after J. M. L. Janssen, Royal Dutch/Shell Laboratory, Delft, Netherlands, October 25, 1951).

These devices were obtained by the design of an extra-linear cathode follower. Switches S1, S2, etc., have characteristics as follows:
1) very low forward impedance,
2) very high reverse impedance, and
3) controllable by external command signals.
The switches for the delay circuits of the pilot model were designed for two different applications of the delays:
1) The short time delay circuits, e.g., the recycled forcing function delays (delay DC of Fig. 4), required bilateral electronic switches patterned after the work of Philbrick.3
2) The long time delay circuits, e.g., the initial condition delays (DA and DB of Fig. 4), were designed with fast-acting relay contact switches.
Capacitors C1, C2, etc., are extremely low-leakage components. Output amplifier Aout has:
1) adjustable gain,
2) very high input impedance, and
3) low output impedance.
This component was obtained by cascading a buffer amplifier and a conventional analog computer dc amplifier. This arrangement permitted the required smoothing operation and the gain adjustment to be performed within the output device.
To begin the explanation of the delay circuit it is assumed that all switches are open and all capacitors initially uncharged. At time zero all odd number switches are closed for a sufficient time to cause capacitor C1 to charge to E(0). At time T all even number switches operate, causing C2 to assume voltage E(0). The odd switches are then closed at time 2T, allowing E(2T) to charge C1 and E(0) to pass to C3. At time 3T even switches are closed, moving E(2T) to C2 and E(0) to C4. The process of alternately closing the odd and even number switches is continued with closures every T seconds. Eventually, after (n - 1)T seconds, voltage E(0) appears on Cn and hence becomes the first output sample. Following this voltage, every 2T seconds, are E(2T), E(4T), E(6T), etc. Thus a delay of (n - 1)T is achieved. Since T, the switching period, can be controlled, the objective is achieved.
The timing circuit of the pilot model was synchronized with a master clock. Clock pulses were used to drive bistable multivibrators which performed desired frequency divisions. The resulting subharmonics of the clock pulse train were directed to a logic circuit which generated the necessary control pulses to execute the desired sequential switching plan. The control pulses, after receiving power amplification, actuated relays whose contacts formed a switching network. The signals from the network controlled the operation of the component devices which made up the time-sharing computer.
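In modern terms the alternate-closure delay is a bucket-brigade register. The Python sketch below abstracts each odd/even closure pair into one shift per period and exhibits the (n - 1)T delay; it illustrates the data movement, not the switch electronics:

    # Janssen-style delay line: each closure shifts every stored sample one
    # capacitor to the right, so an input reaches C_n after (n - 1) periods.
    def delay_line(samples, n_caps):
        caps = [None] * n_caps
        out = []
        for e in samples:
            for i in range(n_caps - 1, 0, -1):  # shift toward C_n
                caps[i] = caps[i - 1]
            caps[0] = e                          # capture the new input
            out.append(caps[-1])                 # voltage on C_n
        return out

    sig = [0, 1, 2, 3, 4, 5, 6, 7]
    print(delay_line(sig, n_caps=4))  # delayed by (4 - 1) switching periods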

IV. TEST RESULTS AND CONCLUSIONS

In order to evaluate the performance and accuracy of the pilot model, a series of tests were run in which the reactor core heat transfer equations were simulated with zero heat input and constant flow, i.e., a simple transport delay problem described by:

$$\frac{dT_{wk}}{dt} = \frac{2n}{\tau_{do}}\,T_{ik} - \frac{2n}{\tau_{do}}\,T_{wk}, \qquad (10)$$

$$T_{ok} = 2T_{wk} - T_{ik}, \qquad (11)$$

$$T_{ok} = T_{i,k+1}. \qquad (12)$$

The forcing function was a cosine shaped increase in the inlet coolant temperature. The results of these experiments indicated:
1) Conventional and time-sharing circuits are compatible and reproducible results are obtainable. Switching transients, relay contact "races," and switching synchronism problems are evident, but they can be overcome by proper circuit engineering.
2) Simulation accuracy is a function of the sampling interval employed in the delay circuits and the method of signal smoothing employed. Fig. 9 shows a typical input-output trace. The circuit was forced by a 0.785 rad/second cosine rise in inlet coolant temperature. Delay DC of Fig. 4 was smoothed by a 1/(τcs + 1) filter, in which the optimum τc was found to be 0.03 second. The maximum per cent departure from the ideal delayed transient (also shown in Fig. 9) is 3.1 per cent, occurring 3.6 seconds from the start of the transient.
3) The accuracy of the simulation of systems in which feedback signals dependent upon instan-


Fig. 2.-Portion of ground guidance subsystem: target subsystem generator, target track radar and missile track radar rectangular coordinates, orthogonal transformation and direction cosine matrix, vertical and horizontal plane command computers, missile-target kinematics computer, rigid body missile dynamics, control subsystem, and stochastic inputs (block diagram).

Fig. 4-Missile control system block diagram.

(Block diagram: real time range instrumentation and flight analysis and control device.)
SI