AFIPS
CONFERENCE
PROCEEDINGS
VOLUME 38

1971
SPRING JOINT
COMPUTER
CONFERENCE
May 18-20, 1971
Atlantic City, New Jersey

The ideas and opinions expressed herein are solely those of the authors and are not necessarily representative of or
endorsed by the 1971 Spring Joint Computer Conference Committee or the American Federation of Information
Processing Societies.

Library of Congress Catalog Card Number 55-44701
AFIPS PRESS
210 Summit Avenue
Montvale, New Jersey 07645

©1971 by the American Federation of Information Processing Societies, Montvale, New Jersey 07645. All rights
reserved. This book, or parts thereof, may not be reproduced in any form without permission of the publisher.

Printed in the United States of America

Edited by Dr. Nathaniel Macon, Technical Program Chairman

CONTENTS
COMPUTING MACHINES-MENACE OR MESSIAH?-PANEL SESSION
(No papers in this volume)

IMAGE OF THE INDUSTRY-PANEL SESSION
(No papers in this volume)

THE NEW TECHNOLOGY-HARDWARE DESIGN AND EVALUATION
The DINKIAC I-A pseudo-virtual-memoried mini-For stand-alone interactive use ..... 1
    R. W. Conn
A multi-channel CRC register ..... 11
    A. M. Patel
Features of an advanced front-end CPU ..... 15
    R. B. Hibbs
Interpreting the results of a hardware systems monitor ..... 23
    J. S. Cockrum, E. D. Crockett

LAW ENFORCEMENT AND JUDICIAL ADMINISTRATION-PANEL SESSION
(No papers in this volume)

APPLICATIONS REQUIRING MULTIPROCESSORS
4-way parallel processor partition of an atmospheric primitive-equation prediction model ..... 39
    E. Morenoff, W. Beckett, P. G. Kesel, F. J. Winninghoff, P. M. Wolff
An associative processor for air traffic control ..... 49
    K. J. Thurber

COMPUTER AIDED MANAGEMENT OF EARTH RESOURCES-PANEL SESSION
(No papers in this volume)

RESPONSIVE GOVERNMENT-PANEL SESSION
(No papers in this volume)

COMPUTERS IN TRANSPORT-FOR MANAGEMENT NEEDS OR SUPPLIERS' DELIGHT?
A computer-aided traffic forecasting technique-The trans Hudson model ..... 61
    E. J. Lessieu
Computer graphics for transportation problems ..... 77
    D. Cohen, J. M. McQuillan
Real time considerations for an airline ..... 83
    J. Loo, B. T. O'Donald
A computer simulation model of train operations in CTC territory ..... 93
    I. R. Whiteman, D. Borch

PRESENT AND FUTURE DATA NETWORKS-PANEL SESSION
(No papers in this volume)

TERMINAL ORIENTED DISPLAYS
A general display terminal system ..... 103
    J. H. Botterill, G. F. Heyne
AIDS-Advanced interactive display system ..... 113
    T. R. Stack, S. T. Walker
CRT display system for industrial process ..... 123
    T. Konishe, N. Hamada, I. Yasuda
Computer generated closed circuit TV displays with remote terminal control ..... 131
    S. Winkler, G. W. Price

COMPETITIVE EVALUATION OF INTERACTIVE SYSTEMS-PANEL SESSION
(No papers in this volume)

COMPUTERS IN THE ELECTORAL PROCESS
The theory and practice of bipartisan constitutional computer-aided redistricting ..... 137
    S. S. Nagel
"Second-generation" computer vote count systems-Assuming a professional responsibility ..... 143
    C. H. Springer, M. R. Alkus

MICROPROGRAMMING AND EMULATION
Evaluation of hardware-firmware-software trade-offs with mathematical modeling ..... 151
    H. Barsamian, A. DeCegama
System/370 integrated emulation under OS and DOS ..... 163
    G. R. Allred
A high-level microprogramming language (MPL) ..... 169
    R. H. Eckhouse, Jr.
A firmware APL time-sharing system ..... 179
    R. Zaks, D. Steingart, J. Moore

INTERACTIVE APPLICATIONS AND SYSTEMS
Designing a large scale on-line real-time system ..... 191
    S. Ishizaki
PERT-A computer-aided game ..... 199
    J. Richter-Nielsen
Interactive problem-solving-An experimental study of "lockout" effects ..... 205
    B. W. Boehm, M. J. Seven, R. A. Watson
TYMNET-A terminal-oriented communication network ..... 211
    L. R. Tymes
Implementation of an interactive conference system ..... 217
    T. W. Hall

COMPUTATIONAL COMPLEXITY-PANEL SESSION
(No papers in this volume)

THE EVOLUTION OF COMPUTER ANIMATION-PANEL SESSION
(No papers in this volume)

SERVING USERS IN HIGHER EDUCATION
Who are the users?-An analysis of computer use in a university computer center ..... 231
    E. Hunt, G. Diehr, D. Garnatz

INFORMATION AND DATA MANAGEMENT
An initial operational problem oriented medical record system-For storage, manipulation and retrieval of medical data ..... 239
    J. R. Schultz, S. V. Cantrill, K. G. Morgan
Laboratory verification of patient identity ..... 265
    S. Raymond, L. Chalmers, W. Steuber
The data system environment simulator (DASYS) ..... 271
    L. E. DeCuir, R. W. Garrett
Management information systems-What happens after implementation? ..... 277
    D. E. Thomas, Jr.
A methodology for the design and optimization of information processing systems ..... 283
    J. F. Nunamaker, Jr.

COMPUTER ASSISTED INSTRUCTION
Computer generated repeatable tests ..... 295
    F. Prosser, D. D. Jensen
R2-A natural language question-answering system ..... 303
    K. Biss, R. Chien, F. Stahl

THE NEW TECHNOLOGY-STORAGE
Performance evaluation of direct access storage devices with a fixed head per track ..... 309
    T. Manocha, W. L. Martin, K. W. Stevens
Drum queueing model ..... 319
    G. P. Jain, S. R. Arora
Storage hierarchy systems ..... 325
    H. Katzan, Jr.
Optimal sizing, loading and re-loading in a multi-level memory hierarchy system ..... 337
    S. R. Arora, A. Gallo
The TABLON mass storage network ..... 345
    R. B. Gentile, J. R. Lucas, Jr.

TOPICS IN COMPUTER ARITHMETIC AND IN ARTIFICIAL INTELLIGENCE
A structure for systems that plan abstractly ..... 357
    W. W. Jacobs
Unconventional superspeed computer systems ..... 365
    T. C. Chen
High speed division for binary computers ..... 373
    H. Ling
A unified algorithm for elementary functions ..... 379
    S. Walther
A software system for tracing numerical significance during computer program execution ..... 387
    H. S. Bright, B. A. Colhoun, F. B. Mallory

SOFTWARE LIABILITY AND RESPONSIBILITY-PANEL SESSION
(No papers in this volume)

VENTURE CAPITAL-FINANCING YOUNG COMPANIES-PANEL SESSION
(No papers in this volume)

FROM THE USER'S VIEWPOINT-PANEL SESSION
(No papers in this volume)

PERIPHERAL PROCESSING-PANEL SESSION
(No papers in this volume)

COMPUTER PICTORICS
Automated interpretation and editing of fuzzy line drawings ..... 393
    S. K. Chang
Computer graphics study of array response ..... 401
    G. W. Byram, G. V. Olds, L. P. LaLumiere
Computer manipulation of digitized pictures ..... 407
    N. Macon, M. E. Kiefer

AN INTERNATIONAL VIEW-PANEL SESSION
(No papers in this volume)

SIMULATION OF COMPUTER SYSTEMS
The design of a meta-system ..... 415
    A. S. Noetzel
An interactive simulator generating system for small computers ..... 425
    J. L. Brame, C. V. Ramamoorthy

APPLICATION OF COMPUTERS TO TRAINING-PANEL SESSION
(No papers in this volume)

THE NEW TECHNOLOGY-DIAGNOSTICS AND RECOVERY
Multiband automatic test equipment-A computer controlled check-out system ..... 451
    T. Kuroda, T. C. Bush
Coding techniques for failure recovery in a distributive modular memory organization ..... 459
    S. A. Szygenda, M. J. Flynn
Recovery through programming System/370 ..... 467
    D. L. Droulette
On automatic testing of on-line, real-time systems ..... 477
    J. S. Gould

THE NEW TECHNOLOGY-SYSTEMS SOFTWARE
PORTS-A method for dynamic interprogram communication and job control ..... 485
    R. M. Balzer
Automatic program segmentation based on boolean connectivity ..... 491
    E. W. Ver Hoef
Partial recompilation ..... 497
    R. B. Ayres, R. L. Derrenbacher
PL/C-The design of a high-performance compiler for PL/I ..... 503
    H. L. Morgan, R. A. Wagner
GPL/I-A PL/I extension for computer graphics ..... 511
    D. N. Smith
ETC-An extendible macro-based compiler ..... 529
    B. N. Dickman

THE COMPUTER PROFESSIONAL AND THE CHANGING JOB MARKET-PANEL SESSION
(No papers in this volume)

THE NEW TECHNOLOGY-FILE ORGANIZATION
A file organization method using multiple keys ..... 539
    M. L. O'Connell
Arranging frequency dependent data on sequential memories ..... 545
    C. V. Ramamoorthy, P. R. Blevins
Associative processing of line drawings ..... 557
    N. J. Stillman, C. R. Defiore, P. B. Berra

THE NEW TECHNOLOGY-COMPUTER ARCHITECTURE
The hardware-implemented high-level machine language for SYMBOL ..... 563
    G. D. Chesley, W. R. Smith
SYMBOL-A major departure from classic software dominated von Neumann computing systems ..... 575
    R. Rice, W. R. Smith
The physical attributes and testing aspects of the SYMBOL system ..... 589
    B. E. Cowart, R. Rice, S. F. Lundstrom
SYMBOL-A large experimental system exploring major hardware replacement of software ..... 601
    W. R. Smith, R. Rice, G. D. Chesley, T. A. Laliotis, S. F. Lundstrom, M. A. Calhoun, L. D. Gerould, T. G. Cook

EDUCATIONAL REQUIREMENTS FOR SYSTEMS ANALYSTS
A semi-automatic relevancy generation technique for data processing education and career development ..... 617
    J. D. Benenati
An architectural framework for systems analysis and evaluation ..... 629
    P. Freeman

COMPUTER ACQUISITION-PURCHASE OR LEASE-PANEL SESSION
(No papers in this volume)

COMPUTATION, DECISION MAKING, AND THE ENVIRONMENT-PANEL SESSION
(No papers in this volume)

The DINKIAC I-A pseudo-virtual-memoried mini-For
stand-alone interactive use
by RICHARD W. CONN
University of California
Berkeley, California

INTRODUCTION

The past three years have witnessed the development and sale of a large and unanticipated number of small general purpose digital computers. These machines-the mini-computers-originally intended for real-time use in applications such as production control, now serve many diverse functions, ranging all the way from data buffers to the central processing units of small time-sharing systems. One trade journal even reports a sale to a home hobbyist claiming that initial costs are comparable, and upkeep less, than for other "recreational" equipment such as boats or sports-cars.

Several manufacturers have offered a basic machine with four thousand eight or twelve bit words, and with teletype I/O, for under ten thousand dollars.1,2 Because of keen marketing competition and recent developments in integrated circuit technology these prices are continuously dropping. Memory costs, however, have not kept pace with the decreased logic costs brought about by the new IC's. Before truly spectacular price drops can be made the cost of memory must be reduced.

Memory in the above context evokes images of undelayed random addressability by word, or, more specifically, of magnetic cores. Yet if we consider computing systems generally, core memory represents but a small percentage of a typical installation's total storage. High core fabrication costs have led-in all but the tiniest systems-to the utilization of memory hierarchies. Devices most commonly comprising these hierarchies are, of course, the familiar magnetic cores, drums, disks, and tapes.

The questions to be examined in this study are: How cheaply can a machine adhering to storage hierarchy principles be built? What will it look like? and What good is it? To be in any position for viewing either of the others we must first address ourselves to the question, "What will it look like?" To do this the design of the Dinkiac, a machine meeting the implied constraints, will be summarily described. Explicitly stated these constraints include cheapness, component availability, and completeness in the sense that the user will not be required to purchase additional hardware. Once the Dinkiac design has been outlined, its usefulness can be assessed, its performance and architecture confirmed by simulation; construction details and alternate features may be presented, and its cost ascertained.

THE DINKIAC
Physically, the Dinkiac will appear as a typical
keyboard-cathode-ray-tube display terminal. It will
consist of a typewriter-like, 64 key, keyboard; a small
CRT with a display capability of up to 84 characters
presented in seven rows of twelve characters each; a
row of lamps and switches; a single track low quality
tape cassette recorder; four magnetostrictive delay
lines-all packaged together with the necessary register
and logic components.
With its 16 bit word size the Dinkiac will appear to a
machine language programmer as one of the larger
minis. A word will represent data as either a single
fixed point binary fraction in two's complement form,
or as two eight bit character bytes, the last 6 bits of
each conforming to USASCII standards.
Each instruction will comprise one full word in a fixed format with the first four bits (0-3) for the operation code; bit 4, a possible index register designator; bit 5, an indirect bit; bits 6 and 7, a page (delay line) address; and the last eight bits (8-15), the address within a page of one of 256 sixteen bit words.
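The field layout above can be checked with a small bit-twiddling sketch. It assumes, as the paper's numbering implies, that bit 0 is the most significant bit of the word; decode is a hypothetical helper name, not part of the Dinkiac design.

```python
def decode(word):
    """Split a 16-bit Dinkiac instruction word into its fields.
    Bits are numbered 0 (most significant) through 15, as in the text."""
    assert 0 <= word < 1 << 16
    return {
        "op":       (word >> 12) & 0xF,   # bits 0-3: operation code
        "index":    (word >> 11) & 0x1,   # bit 4: index register designator
        "indirect": (word >> 10) & 0x1,   # bit 5: indirect bit
        "page":     (word >> 8)  & 0x3,   # bits 6-7: page (delay line) address
        "address":   word        & 0xFF,  # bits 8-15: word within the page
    }

# Example: op code 5, indexed, direct, page 2, word 200 of that page.
fields = decode((5 << 12) | (1 << 11) | (2 << 8) | 200)
```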
Main memory will be made up of four magnetostrictive delay lines each storing 4096 bits. These lines
will have a bit rate of two megahertz for a maximum
access of a little over two milliseconds or an average
access of approximately a millisecond. Each of these
lines with a capacity of 256 words will be said to store


a page of information. Processing may take place in any one of these lines concurrent with an exchange of information between secondary storage and some other line, not including the first, or page zero line. (Many readers will challenge the wisdom of choosing delay lines over shift registers. The latter have a speed advantage as well as the greater potential for cost reduction, matching decreases in the other IC's. There are, however, no large cheap shift registers currently available, and since it is our intention to show that a cheap instrument can be immediately constructed from off-the-shelf components, we are forced to choose the moderately priced and readily available delay line.3)
The previously noted cassette recorder will provide
secondary storage; a single tape retaining information
in one of 128 blocks of 256 words each. Bit storage and
retrieval rates will be around three kilohertz fixing
page transfers at around one and a half seconds. The
source and adequacy of these speeds will be discussed
in the simulation section.
As originally conceived, the Dinkiac included hardware for automated page swapping, thus inspiring the
notion-echoed by the paper's title-of a virtual
memory machine; the virtual space being the size of
the tape or more accurately the number of tape blocks
times the number of words in a block, i.e., 32K Dinkiac
words. Memory addressing was to have employed a
page register-associative search scheme which operated
in the following manner: Three (because page zero is
not swappable) seven bit page address registers were
loaded under program control. An instruction pointing
to one of these registers (with the delay line address
bits) referred to the tape block indicated by that register's contents. The instruction's address field indicated
one of 256 words within the page. The requested page
may or may not have been physically present in some
delay line. Three seven bit registers were to compare
their contents with that of the indicated page register
and, if found, switch in the associated line. Because of
the great disparity between word access and logic
switching time the hardware for this associative search
need not have been fast. If a specified page was not in
any of the delay lines it was to have been retrieved from
the cassette and stored in some line according to an
algorithm which first checked sequential delay lines to
find one in which the dirty bit had not been set. (The
dirty bit was set-by the memory store signal-for any
line which had been written into.) If all lines were
dirty one line was selected and written out before the
requested page was fetched. If program execution was
delayed awaiting the fetched page, the program counter
was stored and control transferred to a preset interrupt
location. The described addressing scheme is shown in
Figure 1.
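The abandoned hardware scheme can be restated as a short software sketch. The class and method names below are hypothetical, and the victim choice when every line is dirty (the paper says only that "one line was selected") is arbitrarily the first swappable line.

```python
class PagedMemory:
    """Sketch of the abandoned hardware paging scheme: three swappable
    delay lines (1-3), each tagged with the tape block it currently holds
    and a dirty bit set on any store.  Page zero is never swapped."""

    def __init__(self):
        self.resident = {1: None, 2: None, 3: None}  # line -> tape block held
        self.dirty = {1: False, 2: False, 3: False}
        self.swaps_out = 0   # write-backs forced when all lines were dirty

    def reference(self, block):
        # Associative search: is the requested tape block in some line?
        for line, held in self.resident.items():
            if held == block:
                return line
        # Miss: prefer a clean line; otherwise write one back first.
        for line in (1, 2, 3):
            if not self.dirty[line]:
                victim = line
                break
        else:
            victim = 1               # arbitrary choice among dirty lines
            self.swaps_out += 1      # victim page written to cassette
        self.resident[victim] = block    # fetch requested block from tape
        self.dirty[victim] = False
        return victim

    def store(self, line):
        self.dirty[line] = True      # memory-store signal sets the dirty bit
```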

Unfortunately this automation accounted for more
than 20 percent of the total logic costs. In addition the
primitive page swap algorithm may have proven unsatisfactory and required additional commands or even
a complex sequence initiated from a read-only memory.
In any event, the logic has been reduced to near minimum and any automated page swapping will now be
under software control.
It is assumed that this operating system software
will minimally include a keyboard input and display
program as well as a cassette directory and search
routine. Transfer instructions and busy flags will
facilitate its operation, and attempts to execute instructions from pages in the process of being swapped will
still effect a transfer of control to the interrupt location.
The inclusion of a fixed memory interrupt location is
the primary reason for not swapping delay line zero.
While the cassette's primary function is to provide
intermediate storage it also doubles as a cheap and
convenient source of input/output. Initial input, however, is entered by way of the alphanumeric keyboard.
Depressing a key will enter an encoded character into
an eight bit keyboard buffer, turn off a console lamp,
and set a one bit flag register. This flag may be interrogated by a running program and is reset-along with
the lamp-by transferring the contents of the keyboard
buffer to the accumulator. Striking a key will not enter
a new character into the buffer while the flag is set.
Visual output is direct to a CRT from the first 42
word locations of the zero or non-transferable delay
line. These words are gated sequentially in pairs
(modulo 21) into a 32 bit output buffer on each cycle
through memory. The low order six bits in each of the
four bytes are, in turn, used as an input to a small
read-only memory. This memory in conjunction with
an appropriate counter and shift register provides
serial output for modulating the CRT's "Z" or intensity
input. These components together with a character,
line, and row counter, and two deflection amplifiers
and digital to analog converters, constitute the output
device.

Figure 1

It should be noted that the memory itself provides for display buffering and that all information is
retained in character format. Scan conversion from the
32 bit buffer is performed as needed. Since it is possible
to change characters in the output portion of memory
before they have actually been displayed, it is anticipated that display programming will be handled as a
function of the machine's interactive use. The most
obvious example is provided by the displaying of a keyboard input message.
An operation panel located between the keyboard
and display tube includes 'power on' and 'interrupt' toggles, 'start' and 'go' buttons, a five position rotary
switch, and eighteen display lamps. The start button
clears all registers except the program counter-into
which the start address (64 decimal) is forced-and loads tape
block zero into delay line zero. Once block zero has
been read the machine will begin instruction readout
from the start location. Depending upon the state of the machine, depressing 'go' will either initiate a read
of the next instruction-from the location currently
specified by the program counter-or begin the instruction execution. The interrupt toggle will set or reset an
interrupt step mode flip-flop. When set, this flag will
force a machine halt after each instruction read and
after each instruction execute. If 'go' is depressed while
halted following a read, the machine will proceed to
the execute. If it is depressed after the execute-and
without a reset from the toggle-the program counter
will be stored and the next instruction will be taken
from the interrupt location.
The interrupt arrangement permits program stepping
in one of two ways. For possible machine malfunction
or difficult logical sequences the display lamps may be
used in conjunction with the rotary switch to inspect
the contents of the major processor registers. For more
routine debugging, the user may choose to enter a
subroutine which will convert and store relevant
registers for subsequent display on the CRT. This mode
will allow him to view, for example, the contents of the
accumulator and the program counter-in any format
he has chosen-at every other push of the 'go' button.
Given the above design it should be helpful to briefly
consider a couple of the Dinkiac's unique operational
and programming aspects. First and most obvious is
the procedure imposed by keyboard limited input.
Since all programs must be typed-in, it is probable
that the typical user will be concerned only with conversational routines such as JOSS, FOCAL, or conversational BASIC. These processors should be structured
in such a way that an anticipated routine will be
scheduled into a delay line and ready for use. For
example, an interactive algebraic processor could be
segmented such that routines for matching, scheduling,
and arithmetic operations are seldom or never swapped,


while more complex numerical subroutines are arranged
in a hierarchy of priorities with the most common
(square root, sine, ... ) at the top and those seldom
used (matrix operations, error exceptions and comments, ... ) at the bottom. While the software designer
must try to segment these programs for the minimum
swapping delay, it should be borne in mind that in
conversational systems an occasional delay of several
seconds is no cause for concern.4 Balance between computation and user interaction is the significant factor.
It is hoped that by now the reader-having considered the design overview together with the cursory remarks relating the machine with certain time-sharing concepts-will have acquired sufficient intuition to answer, for himself, the third of our questions, "What good is it?" or more graciously put, "What
market does the Dinkiac serve?" For our part we will
start with the statement that anyone now using a desk
calculator can-for the same price and without sacrifice
of calculator speeds or functions-enjoy the additional
benefits of a completely general purpose digital computer. Additionally, the machine will provide a single
user with a computing experience not unlike one he
would receive at a time-sharing terminal. That is, for
highly interactive work he can expect extremely fast
replies with respect to his own response time. For compute bound requests, such as compilations or iterative
numeric calculations, he should suffer no greater frustration than that engendered by a small well used
time-sharing system. It is accurate to add that for the
same jobs these periods of delay would compare favorably with a mini time-sharing system.
Because the tape cassette secondary storage will
double as a fast I/O device a library of special purpose
application cassettes can also be marketed. Examples
are: BASIC for the schools; 'desk calculator' for small
businesses; and, 'preparing your federal tax return'
for the 'home hobbyist.'

SIMULATION
Our concern with a computer simulation is twofold,
aiming first at determining the Dinkiac's gross architectural configuration, that is, the number and length of
its delay lines, and second, at obtaining some sense of
its overall performance. GPSS/360 (IBM's General
Purpose System Simulator for the 360 series) was chosen
for this task-both for its ease of use and its ready availability.5
For a simulation to serve its intended purpose the
assumptions upon which it rests must be both valid
and appropriate. The assumptions underlying this simulation are of two kinds: the first has to do with


hardware component speeds and may be based on the
price quotes of a number of manufacturers, the second
requires a knowledge of program behavior and is far
more tenuous. An early discussion of equipment characteristics will provide a foundation for the subsequent
consideration of these less structured issues.
Magnetostrictive delay lines are offered in models
with delays of up to 10 milliseconds at the maximum
or 2MHz bit rate. Prices vary only slightly over the
range with the longest lines (in quantity lots) costing
less than 10 dollars more than the shortest. Since
prices are typically constant up to a delay of around
2.5ms, a 4K bit line costs no more than one with half
that capacity. Restricting the choice to sizes which
facilitate binary addressing, these delays and bit rates
imply that lines of up to 16K bits are feasible.
Because the Dinkiac is a single address machine all
non-jump instructions must be taken sequentially,
and if operands are positioned properly those with
fewer than 128 memory fetches will be executed at
delay line speed. (Switching time, even for slow transistor logic, can always be accomplished during the delay
line to register transfers and may therefore be completely ignored.) A straight line program, then, will be
executed at about the product of the line speed times
the number of instructions. For the Dinkiac we have
described-with its 4K bit line-this would amount to
approximately 500 instructions/second while a 2K
line would double the rate and one with 16K bits would
cut it to a low of 125 instructions/second.
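These rates follow from one full line revolution per instruction at the 2 MHz bit rate, an assumption consistent with every figure quoted above; straight_line_rate is a hypothetical helper name.

```python
BIT_RATE = 2_000_000  # delay-line bit rate, bits/second

def straight_line_rate(line_bits):
    """Instructions/second for sequential code, assuming each instruction
    costs one full delay-line revolution (operands positioned properly)."""
    seconds_per_revolution = line_bits / BIT_RATE
    return 1 / seconds_per_revolution

# 4K-bit line: ~488/s (the paper's "approximately 500 instructions/second");
# a 2K line doubles the rate, and a 16K line cuts it to ~122 ("a low of 125").
```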
The tape cassette market is less stable than the
market for delay lines and one may find prices ranging
all the way from under thirty dollars to 100 times that
price. The machines on the low end are intended for
audio use while those at the other are designed for the
reliable high-speed transfer of digital data. Advertised
speeds for the expensive instruments give writing rates
at under 10,000 bits/second with reading rates to
20,000. Experiments indicate that digital (square wave)
recording on cheap audio equipment can be successful
at speeds of two to two and one-half thousand bits
per second. Specifications from a number of manufacturers marketing inexpensive recorders indicate that
for under 100 dollars one can conservatively assume
the following characteristics: (1) Read/write speed of
3.75 ips with a recording density of 800 bpi (bit serial
recording) for a transfer rate of 3000 bps; (2) Search
speed (fast forward and rewind) of 75 ips; (3) Start/
stop time of 60 ms; and (4) Inter-record gap of 1/2 inch.
The properties given above will be used in the simulation, and to reinforce their conservative character,
cassette page transfer times will always include time
for the transfer of a full half inch inter-record gap as

well as the times for both starting and stopping the
tape. This caution also allows for any timing oversight
arising from the recording technique, which we have
assumed will follow teletype signal transmission methods, i.e., asynchronously, with a start pulse followed
by data followed by completion pulses. To time a 16K
block transfer, then, we will assume that 16,384 data
bits plus a 400 bit equivalent inter-record gap are
transferred at a rate of 3000 bps to which 120 ms,
start and stop time, are added. That is, block transfer time = ((line size + 400)/3000) + 0.120 seconds.
Tape search time will be based upon a full tape
capacity of half a million (2^19) information bits. (Later
we will include some results gathered when providing
for 256 blocks of the larger page sizes, i.e., for tapes of 2^20 and 2^21 bits.) Tape length, not including inter-record gaps, is approximately 655 inches-2^19 bits at 800 bpi. Total search time will be determined by
adding-to this length-a half inch for each record
and dividing by the 75 inches/second rate, or, total search time = ((655 + (no. of blocks on tape)/2)/75) seconds.
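Both formulas are easy to sanity-check in a few lines; they reproduce the "one and a half seconds" page transfer quoted earlier for the 4K-bit line. The function names are hypothetical.

```python
def block_transfer_time(line_bits, rate_bps=3000, gap_bits=400, start_stop=0.120):
    """Cassette page transfer: ((line size + 400)/3000) + 0.120 seconds."""
    return (line_bits + gap_bits) / rate_bps + start_stop

def total_search_time(blocks_on_tape, tape_inches=655, search_ips=75):
    """Full-tape search: tape length plus a half-inch gap per record,
    traversed at the 75 inch/second search speed."""
    return (tape_inches + blocks_on_tape / 2) / search_ips

# 4K-bit page: ~1.62 s; 16K-bit page: ~5.71 s; searching a 128-block
# tape end to end: ~9.6 s.
```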
We may now specifically formulate three questions
we wish our simulation to answer: (1) What is the
best page size? (2) How many lines are necessary for
satisfactory performance? and (3) How will the
Dinkiac compare with other machines? Given some
assumption regarding the number of jumps expected
during the execution of a program plus the anticipated
distance of the jumps-i.e., what percentage of jumps
will remain within 10 words of the current address, 20
words, etc.-it is possible to run simulations based
upon the given transfer rates to obtain meaningful
results for the first two of these questions. If, however,
we wish to relate the Dinkiac's performance to that of
other machines we will need some standard.
Fortunately, such a standard exists in terms of
average instruction time. Given anticipated percentages for each instruction type and applying these percentages to the machine's actual instruction execution
times, we can determine the time required for an
'average' instruction. Gibson has provided us with a
set of such percentages by tracing 55 IBM-7090 programs involving 250 million instructions. 6 The traced
programs were comprised of 30 FORTRAN source
programs, 5 machine-language programs, 10 assemblies,
and 10 compilations. Gibson's set of percentages,
called the Gibson mix, has been used in many machine
comparison studies. Because the Dinkiac has no floating
point hardware, approximate averages for subroutine
execution times will be given for the floating point instructions. The same will be done for multiplies and
divides. The Gibson mix programs were scientific and
give a conservative average with respect to a similar

mix projected from the data processing field. Figure 2
is a table of Dinkiac instructions, 'worst' case times,
and the loosely corresponding Gibson percentage.
Execution times are given as delay-line revolutions.
Because subroutines are included, a single Gibson
mix instruction must represent more than one of the
Dinkiac's. Specifically, 87 percent are one to one, 7.7
percent are ten to one, and 5.3 percent are twenty to
one. There are therefore 2.7 Dinkiac instructions to
each of Gibson's and the average execution time for
these 2.7 instructions is 8.6 revolutions. At 2.7 words
for a Gibson instruction, each line of the 256 words/
line machine we presented is capable of 'storing' 94.8
Gibson instructions. Similarly a 128 word line will contain 47.4 instructions, and so on. We haye greatly
simplified the remaining calculations by assuming a
Gibson instruction size of 2.5 words and line lengths
which are integral multiples of that number-forcing
the use of 125 for the 128 word line, 250 for the 256 word
line, etc.
Returning now to the still unspecified assumptions
regarding program behavior, we find the question of
jumps partially resolved by the Gibson mix. The mix
assigns a 16.6 percent likelihood to the 'Test and
Jump' instruction. We will assume that the jump is
taken half this number, or 8.3 percent. To this we must
assign some number of jumps to compensate for those
subroutine loops incurred by our superimposition of
the Gibson instructions over the Dinkiac's. Suppose 100
Gibson (270 Dinkiac) instructions are executed. Of
the 270 Dinkiac instructions, 8 will be for multiply and
divide, 69 for floating add and subtract, and 106 for
floating multiply and divide. Assuming a five instruction loop for the first two instruction types and a ten
word loop for the last, we will arrive at 26 jump instructions or slightly less than 10 percent of the instructions executed. We may further assume that
these subroutines will be retained in the zero delay
line and that return jumps will be back to the lines
from which the subroutines are called. The model
reflects this analysis.
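The expansion and jump-count figures quoted above can be checked directly. The following sketch (in modern Python; the variable names are ours, not part of the original model) reproduces the arithmetic.

```python
# Gibson-to-Dinkiac expansion: 87 percent of Gibson instructions map one
# to one, 7.7 percent expand into 10-word subroutines, and 5.3 percent
# into 20-word subroutines.
one_to_one, ten_to_one, twenty_to_one = 0.87, 0.077, 0.053
dinkiac_per_gibson = one_to_one * 1 + ten_to_one * 10 + twenty_to_one * 20

# A 256-word delay line therefore 'stores' 256 / 2.7 Gibson instructions.
line_capacity = 256 / dinkiac_per_gibson

# Per 100 Gibson (270 Dinkiac) instructions: 8 multiply/divide, 69 floating
# add/subtract, 106 floating multiply/divide, with 5-instruction loops for
# the first two types and a 10-word loop for the last.
jumps = 8 / 5 + 69 / 5 + 106 / 10
```

This yields 2.7 Dinkiac instructions per Gibson instruction, a line capacity of 94.8, and 26 subroutine-loop jumps, slightly under 10 percent of the 270 instructions, matching the text.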
The question of how far each jump goes with respect
to the current program address counter is not easily
answered and is closely allied to the question of how
often must a new page be fetched. Until some study is
made (similar to Gibson's but with just this aim), or
until studies of time-sharing systems provide further
insight into page swapping behavior, no well-grounded
assumption can be made. We will postulate that of the
jumps taken-not including the 10 percent headed for
delay line zero-50 percent will remain in the line
they are at while the remaining half will go to the lines
following with percentages of 50 percent, 37.5 percent,
and 12.5 percent, respectively. Here the delay line


sequencing is considered circular. This jump distance
assumption is, of course, inconsistent with the varying
line size and favors short lines. We will compensate for this advantage by making a near worst case assumption regarding page swapping, namely, that a new page be fetched once for every straight line pass through the memory.
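As a concrete reading of the postulated distribution, a sampler along these lines could drive the model (a Python sketch; the function name and interface are ours, not part of the GPSS program):

```python
import random

def jump_target(current_line, n_lines, rng):
    """Postulated jump distribution: 50 percent of taken jumps stay in
    the current line; the remaining half go to the first, second, or
    third following line (circularly) with probabilities 50, 37.5, and
    12.5 percent of that half."""
    r = rng.random()
    if r < 0.50:
        return current_line
    r = (r - 0.50) / 0.50          # renormalize the remaining half
    if r < 0.500:
        offset = 1
    elif r < 0.875:                # 0.500 + 0.375
        offset = 2
    else:
        offset = 3
    return (current_line + offset) % n_lines   # circular line sequencing
```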
We are now in a position to present details of the
model. Each GPSS 'transaction' will represent either
ten Gibson instructions or a signal to initiate the operation of some given line or tape. Each delay line consists
of a holding 'queue' for the transactions, a memory
'facility' and a 'storage' capable of accommodating the
appropriate number of instructions for a specified line
size. To avoid simulating the simultaneous execution
of instructions in more than one line, only sufficient
transactions to queue up for a single line are generated
at any one time. A transaction entering a facility (one
of the delay lines) from a queue 'seizes' that facility
precluding its use by any other transaction. An appropriate number (25 for 10 Gibson instructions) of instructions is 'entered' into the line storage and the total
storage entries compared with the line capacity. If the
storage is full, it is reset to zero; the facility is released;
a transaction is removed from the queue; and new
transactions are created for the next memory line. If
the storage is not full, 18.3 percent of the transactions go to a jump instruction sequence where the clock is advanced 10 'jump' times and the transaction is entered
into holding buffers according to the previously discussed jump distribution. In the 81.7 percent non-jump
cases, the clock is advanced by the time required for
ten line revolutions times a GPSS 'function' which
randomly chooses (on the basis of a given bias-in this
case the Gibson percentages) the number of revolutions.
The facility is then released to allow for another entry
from the queue; a transaction is removed from the
queue; and ten transactions (instructions) are
terminated.
Except in the case of the zero line, the completion of
each line triggers a set of transactions for the next in a
round-robin fashion with the last line triggering the
first. Thirty percent of the completions from the zero
line may additionally store a transaction in one of the
holding buffers to simulate the subroutine return jumps.
A counter at the end of the last line starts an end-of-job sequence which continues the program for only
those lines which have items in their holding buffers.
Completion of the last line also sends a transaction into
the tape queue. Transactions in the tape queue seize a
tape facility and then randomly 'pre-empt' one of the
swappable delay lines. A pre-empted line is held until
'returned' and is precluded from seizure or use by any other transaction. The tape and pre-empted line times

are advanced by one block transfer time and also, when appropriate, by tape search time. Simulations may be either "non-predictable," in which case a time to search half of the tape, plus or minus any random interval up to that same amount, is always applied, or they may be "predictable." In the predictable (or "75 percent predictable") runs it is assumed that the tape will have been correctly prepositioned in all but 25 percent of the transfers. At the completion of these tape advance times the pre-empted line is returned and the tape released. A general program flow is given in Figure 3.
Two results quickly emerged from the simulations. Most apparent is the ruling out of either very short or very long lines. The second, while less glaring, verifies the adequacy of a four line machine. It is tempting to continue the simulations with a greater number of storage lines, and when shift register prices fall this may prove feasible. Meanwhile, price considerations for this study dictate that the number be kept as small as possible. Upper and lower performance bounds were found by running the simulation with either no, or with complete, tape buffering.
The number of instructions executed during any one simulation varies slightly due to the randomness of the jumps. All runs, however, simulate the execution of close to 12,300 instructions. Execution time varies from a lower bound of two plus minutes (120,391 ms) to an upper bound of almost 12 minutes (707,961 ms). A table showing the total execution time in milliseconds for thirty-one simulations is given in Figure 4. Figures 5 through 8 are graphs of the four general cases: four lines both predictable and non-predictable, and the same for three lines. The dotted lines in Figures 5 and 6 are the results of allowing the number of tape information bits to double once for the 256 word block and twice for the 512 word block, that is, to maintain the tape block count at 256. Each graph includes upper and lower bounds in addition to the simulation's finding for the particular case. The graphs argue convincingly for the 256 word page size, and yield insight into the nature of the balance between instruction execution and page transfer times.

INSTRUCTION                              TIME IN REVOLUTIONS    GIBSON
                                         (worst case for        PERCENTAGE
                                         nonsubroutines)
Load and Store, Add and Subtract,
  Logical                                1.5                    38.9
Multiply and Divide
  (10 word subroutine)                   50.                    .8
Floating Point Mult. and Div.
  (single precision)
  (20 word subroutine)                   100.                   5.3
Floating Add and Sub.
  (single precision)
  (10 word subroutine)                   25.                    6.9
Shifts and Register                      .5                     9.7
Index, Search or Compare                 1.                     21.8
Test and Jump                            2.                     16.6

Figure 2

Figure 3

Spring Joint Computer Conference, 1971

DESIGN SPECIFICS AND OPTIONS

Sufficient detail to familiarize the reader with the Dinkiac's peculiarities was given in an earlier section. Here we will add a few design particulars, as an aid to cost estimation, and present some significant options.
The Dinkiac is designed around a five register bus in a manner typical of the minis. Signals from decoded instructions, together with outputs from a sequencer

&.>TCa QtJ r, the theory of this paper can be applied without
any change, except that the partitioning will then be
applied to the D matrix rather· than to the Tf matrix.
This is obvious since D, in this case, contains Tf as
one of its partitions.

ACKNOWLEDGMENT
The problem was originally suggested by Mr. W. D.
Benedict and Dr. M. Y. Hsiao.


Features of an advanced front-end CPU
by RICHARD BARR HIBBS
The Bunker-Ramo Corporation
New York, New York

INTRODUCTION

A central processing unit to handle data communications chores as a front-end computer has historically been either a maxi-computer, overpowered for the intended job, or a mini-computer, stripped of many instructions and architectural features that now ease the programming burdens of commercial data processing. Front-end CPU's are evolving into general-purpose machines in their own right due to demands for more generalized processing by the front-end, such as code conversion, message text pre-editing, and local (i.e., not performed by the host computer) message switching. Front-end CPU's must be dual-purpose machines: a special-purpose input-output structure to handle communications efficiently, and a general-purpose data handling structure to perform tasks such as those described above.
Certain desirable features of a front-end CPU are described informally in this paper; then the architecture of a proposed front-end CPU which incorporates these features is presented.

DESIRABLE FEATURES OF A COMMUNICATIONS PROCESSOR

The more densely a program can be coded, the more reliable it may be considered to be. That is, if the set of machine operation codes includes null codes, privileged codes, context-sensitive codes, or codes which bypass normal machine operation, then the set of codes is inherently less reliable than a set which does not include such codes.
The use of re-entrant coding techniques has a beneficial side effect: the elimination of program code which modifies other code or can itself be modified, and a reduction in the frequency of instructions which can, in themselves, cycle indefinitely. Program modification is probably the source of many "phantom clobberers" found in any large software system.
In an environment where multiprogramming is the exception rather than the rule, the added hardware complexity required to implement an indexed base-displacement addressing scheme (a la 360) is questionable, but for the frequent use made of virtual tables. Manipulation of data and control information maintained in tabular form is required to implement re-entrant coding techniques. To efficiently access table structures, a variety of addressing schemes must be available to the programmer.
Queueing of data and control information maintained as elements in a linked list is the basic operation of communications tasks. Low overhead queueing operations on several common types of queues will eliminate an often unwieldy set of system subroutines. The addition of coroutines and subcoroutines to the types of program elements handled by the program control structure would add two very useful facilities to any computer coded as independent modules, and would be particularly valuable in a front-end. Extending the control instruction repertoire beyond the familiar "BRANCH ON CONDITION" and "INCREMENT (or DECREMENT) AND TEST FOR ZERO" is needed to make effective use of the new types of program elements.
Rather than settling the question of word versus byte-oriented computer organization, note that by allowing partial-word operation on all data-manipulating instructions, a considerable amount of masking and shifting can be eliminated from operating programs, although only limited flexibility is available without creating difficult instruction coding problems.
As the internal circuitry of a CPU is far more reliable in operation than the attached communications lines, the extra memory cycle time and additional hardware necessary to provide memory parity or error detection and correction is unacceptable. Advances in technology coupled with the use of error-correcting codes may change this point of view, however, in the near future.
The real power of a CPU is not measured by the

instruction cycle time, but rather by the amount of processing performed by each instruction and the number of instructions necessary to perform a given function.

Figure 1-Processor state register (PSR)

Taking advantage of storage technology by buffering
main storage with high-speed local storage and placing
operating programs in read-only storage suggests a
return to Harvard-class computers with separate
addressing spaces for data and instructions. Immediately, the possibility of executing data or operating on
instructions is completely eliminated, thus improving
the reliability of the software. Storage protection is
unimportant for the program store of a Harvard-class
machine, but is still important in protecting constant
areas in data store from accidental destruction. Both
read and write protection of data store are useful, almost
mandatory, and should be provided for.
Conventional input-output channels of large-scale
computers are constructed to provide an efficient,
general scheme for input-output. By specializing the
input-output channels to handle communications only,
and by integrating channel controls within the front-end,
the power of the front-end computer is extended and
directed at communications. Assuming only communications lines and interprocessor channels are attached,
the lack of general facilities for input-output is not a
limitation of the processor.
GENERAL CP ARCHITECTURE
The communications processor (CP) is a multi-accumulator, two's complement, fixed-point binary,
stored program digital computer with separate addressing spaces for programs and data. CP control circuitry
interfaces with operating programs through the processor state register (PSR), a control register which is
the central element of the interrupt mechanism. All
input-output channels and their controls are functionally integrated within the CP itself to expedite
input-output operations by treating each device interface as an addressable extension of CP data memory.
CP program memory consists of up to 65,536 words of
44 bits each, with the first 2,560 words reserved for
interrupt processes. The operand addresses of all program transfer of control, queue manipulation, and input-output instructions refer to locations in program memory.
Program memory is addressed consecutively from 0 to

the highest available address-an invalid address
generates an addressing error interrupt. The format and
interpretation of words contained in program memory is
described in the "Instruction Set" Section.
CP data memory, separately addressed from CP
program memory, contains up to 4,194,304 thirty-six
bit words. CP data memory is under control of a
protection lock assigned to each 2048 word module of
core and a protection key contained in the current PSR.
Storage addresses of data memory run from 0 to the
highest available address with invalid addresses
generating an addressing error interrupt. Locations 0 to
2048 are reserved as control words for input-output
channels.
The PSR consists of PRTY, KEY, T, P, CC, W, R
and INSTR ADDR fields, as shown in Figure 1. Only
the PRTY field may be modified by an operating
program without loading an entire new PSR. The PRTY field specifies the "level" at which the current program is operating: 0 indicates non-interruptible, critical processes and 15 indicates non-critical, completely interruptible processes. With 16 priority levels
at which the CP can operate, the dispatching urgency of
interrupts can be dynamically altered. Every request
for interrupt is at a priority determined sometime before
the request is generated. All "program" interrupts have
a fixed priority of 1, 2, or 3. All input-output interrupts
have a priority specified by the START I/O instruction
which initiated the operation. When a request for
interrupt is presented whose priority is equal to or
greater (less numerically) than that specified by the
PRTY field, the interrupt request is granted, otherwise
the request is stacked by CP control for later servicing.
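A sketch of the granting rule in Python (the helper is our own formulation, not from the paper): priority 0 is the most urgent level, so a request is granted when its number is less than or equal to the current PRTY.

```python
def request_interrupt(current_prty, request_prty, stacked):
    """Grant an interrupt whose priority is equal to or greater than the
    current level (i.e., numerically less or equal, 0 being most urgent);
    otherwise stack the request for later servicing."""
    if request_prty <= current_prty:
        return True                 # interrupt request granted
    stacked.append(request_prty)    # held by CP control
    return False
```

For example, a CP running at PRTY 5 would grant a priority-3 input-output interrupt immediately but stack a priority-9 request.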
The KEY and T fields control access to CP data
storage. Data storage protection is always in effect.
When an instruction requires access to CP data storage,
the KEY is matched against the lock associated with
the memory module containing the desired address, with
access granted according to the match-up.
The CC field controls conditional transfer of control
instruction execution. The meanings of each bit are
defined according to the preceding instruction executed.
The CC field is reset following execution of all but
conditional transfer of control instructions. All conditional transfer of control instructions interrogate the CC
field according to the mask given by the R1 field of the
instruction in order to determine whether or not a
branch will be taken. Matching one bits in any bit
position causes the branch to be taken.
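The mask test reduces to a bitwise AND; a one-function Python sketch (our own formulation of the rule just stated):

```python
def branch_taken(cc, r1_mask):
    """A conditional branch is taken when any one bit of the 4-bit R1
    mask coincides with a one bit of the CC field."""
    return (cc & r1_mask & 0xF) != 0
```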
The INSTR ADDR field contains the address of the next instruction to be executed. It is updated sequentially after execution of each instruction until a transfer of control instruction takes a branch, when the branch address becomes the next instruction address.


The W and R bits define the operating states of the CP. If W is set, the CP is in the wait state and no instructions are executed. Input-output does not stop, however, when in the wait state. The R bit specifies
which set of general registers is used by each instruction.
Two interchangeable sets of general purpose registers
exist within the CP. Each register is 36 bits wide and is
designated by a number from 0 to 15. Fifteen of the 16
registers may be used as accumulators, index registers,
or base registers. Register zero may be used only as an
accumulator.
The P field describes the current type of program
element being executed. A main program or subroutine
is indicated by 00 or 01, the distinction between them
being almost impossible to determine as a return branch
from a subroutine takes exactly the same form as an
indirect branch within any program element. A
coroutine is indicated by 10 and a subcoroutine is indicated by 11. If a program transfer of control instruction is executed which is invalid for the current type of element, a program linkage interrupt is
generated.
When an interrupt request is granted, an address
presented along with the request is used to specify the
address in program memory from which to fetch a new
PSR. The old PSR, having been saved in a pushdown
stack, may be made the current PSR in order to
re-enter the interrupted routine at the end of the
interrupt service by executing an UNCHAIN
instruction.
As each interrupt is identified with a distinctive new
PSR, no interrogation of devices, i.e., polling of interrupts, is required during interrupt service.

INSTRUCTION SET
Each CP instruction occupies one 44-bit word of CP
program memory in one of the formats shown in
Figure 2. Instructions operate between registers,
between storage and registers, or between storage
locations. In the register to register and storage to
register formats, three addresses are specified-two for
operands and one for the result. The result and one
operand are specified by the contents of the indicated
register. The other operand address refers either to CP
memory (storage to register format) or a general
register (register to register format). In the storage to
storage instruction format, a source and destination
address and a length code are specified. All references to
CP memory refer to CP data memory unless the
instruction is a control instruction (e.g., BRANCH ON
CONDITION, LOAD PSR, or PUSH).
Memory addresses of storage to register format

instructions are formed and interpreted according to the addressing mode (AM) field of the CP instruction.

Figure 2-Communications processor (CP) instruction formats

If AM is 00, the modifier (M), index (X), base (B), and
displacement (DISPL) are taken as an absolute 16 or
22-bit address and the addressing mode is called
DIRECT. If either program or data memory contains
fewer words than addressable by the full 16 or 22-bit
value, any reference to an address lying outside the
addressing space will generate an addressing interrupt.
If AM is 01, the addressing mode is called INDEXED
and the address is formed from two sums. The DISPL
field is added to the contents of the register specified by
the B field, unless the B field is zero. If the B field is
zero, the value zero is used in forming the first sum. To
the first sum, the contents of the register specified by the
X field are added, unless the X field is zero. If the X
field is zero, the value zero is used in forming the
second sum. The second sum is used as the data or
program memory address. The register specified by the
B field is considered to contain a signed, 35-bit integer,
even though the resulting sum will be truncated to a 16
or 22-bit address. The register specified by the X field is
considered to contain a signed, 17-bit integer. Addresses
are formed in a 36-bit register in the CP control section,
then truncated to the appropriate precision.
The high-order 18 bits of the register specified by the
X field are taken as a signed, 17-bit modifier of the
actual index, contained in the low-order 18 bits of the
register, according to the M field of the instruction. If M
is 00, the index is not modified. If M is 01, the modifier
is added to the index and the sum becomes the new
index. If M is 10, the modifier is subtracted from the
index and the difference becomes the new index. If M is

18

Spring Joint Computer Conference, 1971

11, the index is multiplied by the modifier and the product becomes the new index. All index modification is performed after current instruction execution, before the next instruction is executed.

Figure 3-Communications processor (CP) addressing structure

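The modifier/index split of the X register and the four M-field cases can be sketched like this (Python; the exact wrap-around behavior of the 36-bit register is our assumption):

```python
MASK18 = (1 << 18) - 1

def modify_index(x_reg, m):
    """Post-execution index update: the high-order 18 bits of the X
    register hold a signed 17-bit modifier, the low-order 18 bits the
    index. M=00 leaves the index alone; M=01 adds the modifier; M=10
    subtracts it; M=11 multiplies the index by it."""
    modifier = x_reg >> 18
    if modifier >= 1 << 17:         # interpret the high half as signed
        modifier -= 1 << 18
    index = x_reg & MASK18
    if m == 0b01:
        index += modifier
    elif m == 0b10:
        index -= modifier
    elif m == 0b11:
        index *= modifier
    return (x_reg & ~MASK18 & ((1 << 36) - 1)) | (index & MASK18)
```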
If AM is 10, the addressing mode is called INDIRECT and the address is formed as described for
the INDEXED mode but is not the address of data or a
new program address, but the address of an indirect
word. The indirect (I) bit of the indirect word specifies
whether the address pointed to by the indirect word is
the address of another indirect word or the address of
data. If I is 0, the next word is data (or next instruction
address). If I is 1, the next word is another indirect word.
The M and X fields are interpreted for the indirect
word just as they are interpreted for an instruction,
with the contents of the register specified by the X field
added to the ADDRESS portion of the indirect word.
Multi-level indirection and indexing are thus possible.
If AM is 11, the addressing mode is LOCATIVE,
a combination of INDEXED and INDIRECT modes.
The address is formed as described for the INDEXED
mode, but is the address of a locative rather than that
of data or a new program address. A locative is
distinguishable as data, the address of data (beginning
of an indirect chain), the address of another locative, or
the address of the address of another locative (beginning
of an indirect chain ending with a locative). For a
transfer of program control instruction, LOCATIVE
mode has the same interpretation as INDIRECT mode.
The L field determines the interpretation of the locative.
If L is 10, the locative contains the address of another
locative. If L is 11, the locative begins an indirect chain
terminated by another locative. The M and X fields are
interpreted just as for the indirect word.
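The indirect chain amounts to a loop over I bits. In this Python sketch each word is reduced to an (I, address) pair and the per-level indexing is omitted for brevity (both simplifications are ours):

```python
def resolve_indirect(memory, addr):
    """Follow a chain of indirect words: while the I bit is 1 the word's
    address names another indirect word; when I is 0 it names the final
    data (or next-instruction) address."""
    i_bit, target = memory[addr]
    while i_bit == 1:
        i_bit, target = memory[target]
    return target
```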
If the resulting address addresses CP data memory,
the protection KEY of the current PSR is compared to
the storage lock for that segment of data memory. If
they match, access is granted according to the tag bits
which match between the PSR and the storage lock. One
tag bit allows read access, and the other tag bit allows
write access. When the key does not match the lock a
protection interrupt is generated. The general CP
addressing structure is illustrated by Figure 3.
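A sketch of the key/lock/tag check in Python (the particular bit positions chosen for the read and write tags are illustrative assumptions, not taken from the paper):

```python
READ, WRITE = 0b10, 0b01   # illustrative tag-bit positions

def access_allowed(psr_key, psr_tags, lock_key, lock_tags, want):
    """Data-store protection: the PSR KEY must match the module's lock,
    and the requested access must be enabled by a tag bit present in
    both the PSR and the storage lock; otherwise a protection interrupt
    results."""
    if psr_key != lock_key:
        return False
    return (psr_tags & lock_tags & want) != 0
```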
Storage-to-storage format instructions have fewer, but similar, fields than storage-to-register format instructions. These are two address instructions, with the source address given by the sum of the D2 field and the contents of the register specified by B2 (unless zero) and the destination address given by the sum of the D1 field and the contents of the register specified by B1 (unless zero). The length in 9-bit characters of the fields involved in an operation is contained in the register specified by R1.
Program elements can be one of four types: main
program or open subroutine, closed subroutine, coroutine, or subcoroutine. Program control is passed to
the main program or passed within any of the elements
by means of a BRANCH ON CONDITION or
BRANCH ON INDEX instruction. The BRANCH ON
CONDITION instruction substitutes a four bit mask

Features of an Advanced Front-End CPU

for the R1 field which is compared to the four bit CC
field of the current PSR. A match in any bit position
causes the branch to be taken to the address in program
memory given by the B, X, and DISPL fields in the
mode specified by the AM field. Note that the R2 and
PWD fields do not enter into the instruction execution.
The BRANCH ON INDEX instruction affects the
register specified by the X field. The X and M fields are
not used in determining the branch address. The R1 field specifies a register containing the value to be compared to the index portion of the register specified by the X field. Only the low-order 18 bits of the R1
register are used in the comparison. The R2 field
specifies a register containing a signed increment in bits
0-18 which modifies the index register according to the
M field when the branch is not taken. The M field is
interpreted as before. The branch is taken whenever
the specified test condition is met. The test conditions
are: index high, index equal, and index low. Transfer of
program control to a subroutine is made with a
BRANCH AND LINK instruction. The register
specified by the R1 field is taken as a four bit mask for
comparison against the CC field of the current PSR,
just as for the BRANCH ON CONDITION instruction.
If any bits in the mask match the CC, the register
specified by the R2 field is first loaded with the address
of the next sequential instruction, then the branch is
taken just as for the BRANCH ON CONDITION
instruction.
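The comparison half of BRANCH ON INDEX can be sketched as follows (Python; the string condition codes stand in for the instruction's three test conditions):

```python
MASK18 = (1 << 18) - 1

def index_branch_taken(r1_reg, x_reg, condition):
    """Compare the low-order 18 bits of R1 against the index half of the
    X register; the branch is taken when the selected test condition
    (index high, equal, or low) is met."""
    index = x_reg & MASK18
    value = r1_reg & MASK18
    if condition == "high":
        return index > value
    if condition == "equal":
        return index == value
    return index < value            # "low"
```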
A coroutine is a program element defined by its entry
locator, which specifies the address of the first instruction to be executed upon entry to the coroutine. An
INITIALIZE or LEAVE group instruction defines the
contents of the entry locator, and an ENTER group
instruction performs the co-transfer into the coroutine.
INITIALIZE is a storage to storage format instruction
that sets the entry locator for the named coroutine to
the given address in CP program memory. LEAVE and
ENTER group instructions are also storage to storage
instructions which set the entry locator for the current
routine to the address of the next sequential instruction
then transfer indirect through the entry locator of the
target routine (if a coroutine or subcoroutine) or branch
to the target address (if a main program or subroutine).
A subcoroutine, like a coroutine, is defined by its entry locator, and also by an exit locator. The entry locator may be defined by an INITIALIZE or a RETURN group instruction. Entry to a subcoroutine is made by executing a CALL group instruction, which sets the entry locator of the current module (if a coroutine or subcoroutine) to the next sequential
instruction address and transfers control indirectly into
the target subcoroutine through its entry locator, then
sets the exit locator to the address of the next sequential

instruction following the CALL in the calling routine.

Figure 4-Coroutine and subcoroutine structure

When a RETURN instruction is executed, the entry
locator of the current module is set to the next sequential
instruction address and a branch is taken to the return
point, indirectly through the exit locator. The entry and
exit locators occupy contiguous locations of CP program
memory. Figure 4 shows coroutine and subcoroutine
structure.
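The entry-locator discipline can be mimicked in a few lines of Python (the function name and the dictionary representation are ours): each ENTER-style co-transfer saves the caller's resume point, then continues at the target's saved point.

```python
def enter(entry_locators, current, target, next_addr):
    """Co-transfer: set the current routine's entry locator to the
    address of its next sequential instruction, then branch indirectly
    through the target routine's entry locator."""
    entry_locators[current] = next_addr
    return entry_locators[target]   # address where execution resumes
```

Alternating transfers between two routines then resume each one exactly where it last left off, which is the coroutine behavior described above.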
To provide low-overhead queueing operations, three
types of queues are maintained by hardware: pushdown
stacks, normal FIFO or head-tail queues, and double-ended head-tail queues. A pushdown stack is defined as
a contiguous area of data memory with a locator word
in program memory. The locator word consists of
length, count, and pointer fields. The count is decremented by one for each element pushed down into the
stack. The pointer initially contains the address of the
first available element, and is incremented by the length
field for each element added. Likewise, for each element
removed, the pointer is decremented by the length field.
If the count is zero when an attempt is made to add an item, an overflow condition exists and a stack overflow interrupt is generated.
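The locator-word discipline might look like this in Python (class and exception names are ours; the exception stands in for the stack overflow interrupt):

```python
class StackOverflow(Exception):
    """Stands in for the CP's stack overflow interrupt."""

class PushdownStack:
    """Stack governed by a locator word: length (element size in words),
    count (free elements remaining), and pointer (next free address)."""
    def __init__(self, base, length, capacity):
        self.length = length
        self.count = capacity       # decremented for each element pushed
        self.pointer = base         # address of the first available element

    def push(self):
        if self.count == 0:         # no room left: overflow condition
            raise StackOverflow
        self.count -= 1
        self.pointer += self.length
        return self.pointer - self.length   # address given to the element

    def pop(self):
        self.count += 1
        self.pointer -= self.length
        return self.pointer
```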
Normal and double-ended head-tail queues are defined
as unbounded linked lists. Any head-tail queue may be
either normal or double-ended, defined by the needs of
the moment by the instruction executed to manipulate
an element. All instructions affect the queue pointer
words and the link word of the cell being manipulated.


" Ioop.u. AID
v Ioop.u. oa

001na1d..a..
() Conttnt. ot
@

HI
~

IJI)

Ift~TO Jddr...

Int..ohaco

SJBTIAOf

(R1)- (M) ... (Rt)
(R..i.~_ (R.1.)- (M)

IIMIlU

(Ri,R.1+1) ....... (~).(It'2.)

DIV'Im

(R.J,.)- (R'l,~ R1.+i)/(M) j (It~H~ _~w.u.J\:)E.~

A1fD

(~J.)- (W\)"(~"2.')

..

(Rl)....... 01\, v (lt2.)

OODCIafCl

(ru.)- (W\)~"(~1..)

LOAD JIUL!IPLI

(R. t) ... )R..1.)_ (M) ... ') ~tRl.-1t.l'HJ)

SlOB JIIl4'IPLI

(M 1 • .. ) ~~-U~l.J ) _('1<1.) '" ,t.'t)

1ftaCIU.1UI

(tv\) ~ (R..!.)

LOAD &D1ItDS

(tll.)- M

OOJIPAII IUIID

[(~)\J(~L)1:

WAD PSI

(P~) -

(ti)

(M)

Figure 5-Representative CP instructions

To put a cell on the tail of the queue, a PUT instruction
is executed. A GET instruction removes a cell from the
head of the queue. FETCH removes from the tail, and
STORE adds to the head. Complete freedom in
intermixing these instructions would allow, for example,
a processing routine to break off processing a message
cell for a low-priority message by placing its pointer
back on the head of the processing queue in order to
respond to a request for processing of a high-priority
message cell.
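The four queue instructions map directly onto the two ends of a double-ended queue; in Python terms (an analogy of ours, using the standard library deque in place of linked cells):

```python
from collections import deque

def put(q, cell):        # PUT: add a cell to the tail
    q.append(cell)

def get(q):              # GET: remove a cell from the head
    return q.popleft()

def fetch(q):            # FETCH: remove a cell from the tail
    return q.pop()

def store(q, cell):      # STORE: add a cell back onto the head
    q.appendleft(cell)
```

The interrupted-low-priority example in the text is then store(q, cell) followed later by get(q), which yields the same cell back first.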
Figure 5 describes the operation of representative
instructions of the CP instruction set in shorthand
notation.

INPUT-OUTPUT
Input-output is the reason for being of a front-end
computer. A high-powered processor such as has been
described here must be capable of sustaining high data
transfer rates in comparison with processing speed or it
will be severely mismatched to its task. In the CP
input-output proceeds simultaneously with processing
by allowing multiple access to data memory according
to priority of input-output (determined by a START
I/O instruction). The processor always has lower
priority for data memory access than does input-output.
By providing one memory port for each 2048 word
module of data memory, the overall memory bandwidth

is increased; unless, of course, the operating programs
attempt to place all buffers in the same (few) modules.
All control information for each device attached to
the CP is contained in a line control word (LCW)
maintained on a per-device basis in the first 2048 word
module of CP data memory. When a START I/O
instruction is executed, the control words are activated
and a buffer address in data memory is provided, along
with the appropriate protection key. Input-output then
proceeds until an error occurs, the last data character
is transferred into memory, or a HALT I/O instruction
is executed for the device. Control information may be
modified successfully whenever the control words are
not active, thus allowing the operating programs to
dynamically reconfigure the input-output to meet
shifting demands.
Each communications interface consists of receive and
transmit circuitry sufficiently general to allow selection
of functional characteristics by signals on the I/O
control bus. In order to dynamically reconfigure the
attached communications network, each unit must be
capable of handling several code structures and
transmission speeds.
For a small number of combinations of code and
speed, the communications interface for asynchronous
transmissions is not overly complex. The advantage of
being able to dynamically change the operating characteristics of a line is apparent for a time-sharing service,
which could serve several types of terminals with only
as many line terminations as there are simultaneous
users, rather than apportioning facilities according to
projected loads from the different types of terminals,
which usually results in several unused lines when the
system is heavily loaded.
Synchronous transmissions, if they involve any kind
of line discipline for transmission, are code-dependent;
so the only kind of dynamic reconfiguration would be
to change clock speeds for the line. This trick is now
being used by at least one terminal manufacturer to
overcome a noisy line-if too many errors occur at, say,
4800 baud, the unit switches to a 2400 baud clock to
improve transmission and reception.
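That fallback behavior can be sketched in a few lines; the rates, error threshold and function below are illustrative assumptions, not the terminal manufacturer's actual mechanism:

```python
# Error-driven clock fallback: if too many errors accumulate at the
# current rate, drop to the next slower rate. All values are assumed.
RATES = [4800, 2400, 1200]   # candidate clock speeds (baud), fastest first
ERROR_LIMIT = 8              # errors tolerated before downshifting

def next_rate(current, errors):
    """Return the clock rate to use after observing `errors` at `current`."""
    if errors < ERROR_LIMIT:
        return current
    i = RATES.index(current)
    return RATES[min(i + 1, len(RATES) - 1)]  # already slowest: stay put

print(next_rate(4800, 3))    # 4800  (line clean enough, keep the speed)
print(next_rate(4800, 12))   # 2400  (noisy line: fall back)
print(next_rate(1200, 20))   # 1200  (no slower rate available)
```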
Characters are assembled/disassembled at individual
line termination units (LTU's), buffered, then stored a
word at a time in CP data memory. Memory words are
left-justified with zero pad. Common controls direct the
LTU to select a particular code structure/transmission
speed, collect assembled words according to dispatching
priority (specified by a START I/O instruction), and
direct transfer between CP data memory and the
LTU's.
A complete interface is provided for a modem by
each LTU, and provision is made for the addition of an
Auto-Call Unit. All status-bearing and control leads
present at the modem interface are represented by bits
in the LCW, allowing an operating program having the
proper protection key to control the LTU at
the interface level. Additionally, for asynchronous lines,
the state of the stop bit is reported and controllable, allowing detection and generation of open-line conditions.
See Figure 6 for the layout of the LCW.
Synchronous LTU's would be constructed according to
the requirements of the user, in order to provide
hardware for handling of line discipline. Redundancy
checking and message retransmission are handled more
easily by hardware than software.
As data can be manipulated easily in the CP as
nine-bit bytes, character-by-character line service is not
infeasible for applications requiring intimate inspection
of message traffic. On the other hand, a complete
message could be assembled in data memory before the
LTU acknowledges reception. The ability to select
between these methods of line service within the same
unit indicates some of the power of the CP.
First-generation computers had all input-output
control integrated within the CPU for simplicity of
construction. Later generations used special-purpose
controllers to handle all input-output devices. Now, the
newest round of computer announcements shows a
return to integrated peripheral controllers as a cost
saving and to upgrade performance of essential devices.
A dedicated communications controller would utilize
integrated device control for both reasons. Similar
communications input-output controllers have been
built by at least one firm on an experimental basis, but
initial efforts provided a costly design. It is not out of
the question to expand such a controller to the capacity
indicated here, nor to integrate it with the logic of a
central processing unit.
CONCLUSION
A rather high-powered communications subsystem has
been described in varying degrees of detail. Certain
combinations of architectural features are unique in any
digital computer, especially so in a front-end which is
usually thought of as a small computer system. Many
advanced programmers probably make use of data
structures closely resembling locatives without realizing
that a name exists for such a structure, just as many

[Figure 6-Line control word (LCW) layout: interface status and control bits plus the address of the data locator]

utilize subroutines without knowing it. The revival of
some first-generation architectural features coupled with
the combination of more modern ones yields a machine
whose applications could be greatly expanded with only
small conceptual changes. Still, the primary purpose in
attempting the design of a novel central processor was
to look for new ways of handling familiar problems;
and the design has fulfilled the author's intentions,
raised many questions to be answered, and provided new
techniques for discussion. The evolution of computer
systems seems to indicate a collection of peripheral
processors for a new configuration, the CP being such a
proposed component.
REFERENCES
1 L L CONSTANTINE
Integral hardware-software design
Parts 8 and 9 Modern Data November 1968 and February
1969
2 R W COOK M J FLYNN
System design of a dynamic microprocessor
IEEE Transactions on Computers March 1970
3 C J WALTER A B WALTER M J BOHL
Impact of fourth generation software on hardware design
Computer Group News March 1969
4 E C JOSEPH
Evolving digital computer system architecture
Computer Group News March 1969
5 H W LAWSON JR
Programming-language-oriented instruction stream
IEEE Transactions on Computers May 1969
6 G D HORNBUCKLE E A ANCONA
The LX-1 microprocessor and its application to real-time
signal processing
IEEE Transactions on Computers August 1970
7 E A HAUCK B A DENT
Burroughs B6500/B7500 stack mechanism
Proceedings Spring Joint Computer Conference 1968
8 J G CLEARY
Process handling on Burroughs B6500
Proceedings Fourth Australian Computer Conference 1969

Interpreting the results of a hardware systems monitor
by J. S. COCKRUM and E. D. CROCKETT
Memorex Corporation
Santa Clara, California

INTRODUCTION
Hardware monitors are widely used to enable the data
processing manager to effect cost reductions and improve the efficiency of his installation. Several papers 1-7
have presented hardware monitors and system measurement, but have given relatively little information
regarding the interpretation of the monitoring results.
A brief overview of hardware monitors and the necessity of system measurement is presented. A section
deals with determining and measuring events, significant occurrences to a unit of work being processed by
the system. A "performance optimization cycle" is developed and actual results of a monitoring run are
shown.
The body of this report treats the heretofore neglected area of interpretation of the results. The stress
is on providing quantitative measures to assure that an
economic return on the computer system is obtained.
The system performance profile is presented and the
basic indicators in interpreting the profile are developed.
Methods are given for corrective actions of system reconfiguration, program change, data set reorganization, job scheduling and operator procedures. Predictive
methods are developed whereby reconfigurations can
be evaluated prior to their implementations.

DESCRIPTION OF MONITOR

A hardware monitor consists of sensors, control logic,
accumulators, and a recording unit. The sensors are
attached to the back panels of the computer system
components (CPU, channels, disks, etc.). The signals
from registers, indicators and activity lines picked up
by the sensors can be logically combined, or entire
registers can be compared, at the control panel and then
be routed to the accumulators for counting or timing
events, e.g., CPU active, any channel busy, disk seek
counts and times, etc. Typically, the contents of the
accumulators are written periodically to a magnetic
tape unit. The magnetic tape is batch processed by a
data analysis program to produce a series of analysis,
summary and graphic reports. Figure 1 shows a hardware monitoring system.

[Figure 1-Hardware monitor system: sensors on the CPU, devices and channels feed compare logic and control; count and time accumulators are recorded to tape, which a data reduction program on a separate computer system turns into analysis, summary and graph reports]

NECESSITY OF MONITORING

The complexity of present computing systems
has made monitoring a necessity for effective management. Effective management means optimizing the
computer system performance for increased throughput, turn-around time, or a reduction in expenses, and
predicting future computer system needs. A hardware
monitor provides a tool to efficiently attain these
management goals. It is easy to install and use, and
measures simultaneously occurring events of hardware
and software operations without any interference to the
system being monitored.
Computer system performance optimization

A computer system can be optimized according to
different strategies. The most common strategy is to
optimize the system's throughput, i.e., the rate at which
the workload can be handled by the system. Other
strategies include optimizing turnaround time (delay
between the presentation of the input to the system
and the receipt of the output), availability (percentage
of time during which the system is operating properly),
job time (length of time the system takes to perform a
single application), cost (the costs of the computer system used in processing the workload), etc. Often a combined optimization strategy will be followed, e.g., maximize throughput for a given cost. Although a hardware
monitor can be used for any optimization strategy, the
concern in this paper will be for throughput and cost,
since it is felt that these are the principal measures of
economic return on a system. To this end, consideration
will be given to system configuration including reconfiguration and additions/deletions of components, programs, routines to be made resident/non-resident, data
set allocation, job scheduling and operating procedures.
The performance optimization cycle which will be
developed consists of computer system measurement,
evaluation, improvement, and returning to new measurements to start a new cycle.

Prediction of future needs

From the records obtained in the performance optimization cycle, not only is the current performance
known, but also a historical base is constructed for
predicting future needs. Based upon actual system
measurements it is possible to predict and evaluate
major changes in the capability of the system before
their implementation. System changes of reconfiguration, addition/deletion of new devices and model
changes can be simulated and analyzed. Accurate prediction of future needs dictates a continuing monitoring
program.

MONITORING EVENTS

An event in a computer system is an occurrence of
significance to a unit of work processed by the system.
A hardware monitor can count or time the duration of
events or combinations of events.
It is necessary to identify the events upon which the
system performance depends and quantitatively determine their interdependency. The basic events monitored
are components active, time spent performing various
operations, storage utilized, resource contention, system resource overlapping, etc.

Single sensor events

The types of events which can be monitored in a
computer system using a single sensor for each are such
occurrences as:

CENTRAL PROCESSOR UNIT
1. CPU STOP or MANUAL
2. CPU WAIT
3. CPU RUN
4. MULTIPLEX CHANNEL BUSY
5. SELECTOR CHANNEL BUSY
6. PROGRAM CHECK INTERRUPT
7. I/O INTERRUPT
8. ALLOW INTERRUPT CHANNEL
9. PROBLEM STATE
10. SUPERVISOR STATE
11. INSTRUCTION FETCH
12. EXTERNAL INTERRUPT
13. CONSOLE BUSY
14. STATUS OF THE SYSTEM IN RELATION
TO A PARTICULAR PROGRAM (IBM PSW
PROTECTION KEYS)

DIRECT ACCESS STORAGE DEVICE
1. CONTROL UNIT BUSY
2. NUMBER OF SEEKS
3. INTERRUPT PENDING
4. READ BUSY
5. WRITE BUSY
6. DATA BUSY BY MODULE
7. TOTAL SEEK TIME BY MODULE

CONTROL UNITS
1. DEVICE BUSY
2. REWINDING TAPES
3. DATA TRANSFER
4. POLL OF TERMINALS

UNIT RECORD EQUIPMENT
1. LINES PRINTED
2. CARDS READ
3. DEVICE BUSY
4. CARDS PUNCHED


Multiple sensor events and comparators

Events which require multiple sensors and comparators are:

1. INSTRUCTION ADDRESSES
2. REGISTER CONTENTS
3. INTERFACE DATA
4. PARTITION BOUNDARIES
5. DATA SET BOUNDARIES

Combination events

Any number of combination events can be constructed using the monitor and the data reduction
program. Examples of combination events are:

1. CPU ACTIVE
(CPU RUN ∧ NOT CPU WAIT ∧ NOT CPU MANUAL)
2. ANY CHANNEL BUSY
(CHANNEL 1 BUSY ∨ CHANNEL 2 BUSY ∨ ... ∨ CHANNEL N BUSY)
3. ANY CHANNEL BUSY ONLY
(ANY CHANNEL BUSY ∧ CPU WAIT)
4. TOTAL SYSTEM TIME
(CPU ACTIVE + SYSTEM WAIT)
5. CPU-CHANNEL OVERLAP
(CPU ACTIVE ∧ ANY CHANNEL BUSY)
6. CHANNEL OVERLAP
(CHANNEL 1 BUSY ∧ CHANNEL 2 BUSY ∧ ... ∧ CHANNEL N BUSY)
7. SEEK ONLY
(CPU WAIT ∧ NOT ANY CHANNEL BUSY ∧ SUM
OF SEEKS IN PROGRESS ON ALL
MODULES)
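Each combination event is a Boolean function of the single-sensor signals. A sketch, with assumed signal names, of how a data reduction pass might derive several of the listed events from one sample of the sensor lines:

```python
def combination_events(cpu_run, cpu_wait, cpu_manual, channels, seeks_in_progress):
    """Derive combination events from one sample of single-sensor signals.
    `channels` is a list of per-channel BUSY booleans; `seeks_in_progress`
    counts seeks under way on all modules. Names are illustrative."""
    cpu_active = cpu_run and not cpu_wait and not cpu_manual
    any_channel_busy = any(channels)
    return {
        "CPU ACTIVE": cpu_active,
        "ANY CHANNEL BUSY": any_channel_busy,
        "ANY CHANNEL BUSY ONLY": any_channel_busy and cpu_wait,
        "CPU-CHANNEL OVERLAP": cpu_active and any_channel_busy,
        "CHANNEL OVERLAP": all(channels) if channels else False,
        "SEEK ONLY": cpu_wait and not any_channel_busy and seeks_in_progress > 0,
    }

sample = combination_events(cpu_run=True, cpu_wait=False, cpu_manual=False,
                            channels=[True, False], seeks_in_progress=0)
print(sample["CPU ACTIVE"], sample["CPU-CHANNEL OVERLAP"])  # True True
```

In the actual monitor this logic is wired at the control panel and routed to timing accumulators rather than evaluated in software; the sketch only shows the Boolean relationships.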

[Figure 2-Description of events monitored and combined: system utilization monitor (SUM) data reduction listing of the 26 counters, their timed/counted modes, descriptions and combination logic for the System/360 Model 40 and 50 analysis]


[Figure 3-Interval summary of event activity: counter values and percentages for a single 300-second recording interval of the System/360 Model 40 and 50 analysis, measured 10/6/70]

[Figure 4-Final summary of event activity: cumulative counter values and percentages over the full measurement period]

[Figure 5-Statistical summary for 12 observations: minimum, maximum, mean and standard deviation of the absolute increase in each counter value since the previous observation]
Figures 3 and 4 show interval and final summaries of the event activity. Figure 5 shows a statistical
summary of the events. Graphic results are easily correlated with various activities and give a clear picture
of the sequence of events. Figures 6-10 show the histograms of CPU ACTIVE, SELECTOR CHANNEL 1
BUSY, SELECTOR CHANNEL 2 BUSY, ANY
CHANNEL BUSY ONLY, and CPU WAIT.
The next section deals with the interpretation of these
results.
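The statistical summary of Figure 5 reduces successive cumulative counter readings to per-interval statistics; a small sketch of that reduction with hypothetical readings:

```python
import math

def interval_stats(cumulative):
    """Min, max, mean and (population) standard deviation of the
    increases between successive cumulative counter samples."""
    deltas = [b - a for a, b in zip(cumulative, cumulative[1:])]
    n = len(deltas)
    mean = sum(deltas) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in deltas) / n)
    return min(deltas), max(deltas), mean, sd

# Hypothetical cumulative CPU ACTIVE readings (seconds) at five
# recording points; each observation is the increase since the last.
lo, hi, mean, sd = interval_stats([0.0, 70.0, 155.0, 210.0, 300.0])
print(lo, hi, mean)   # 55.0 90.0 75.0
```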

INTERPRETATION
The areas which can be investigated to optimize
throughput or cost of a computing system are system
configuration, programs, routines resident/non-resident,
data set allocation, job scheduling and operation methods. In a multiprogramming environment, throughput is a
measure of the time required to process a fixed amount
of work, or simply stated, the number of jobs per day. A
good assumption is that the CPU processing time is
constant whether the jobs are run sequentially or multijobbed in an interleaving process. The improved
throughput from multijobbing should come by overlapping system resources, e.g., CPU activity on one job
overlapped with I/O activity on another job. This increases the percentage of time the CPU is active to
yield better system utilization. Unfortunately, the desired positive effects of multiprogramming are not
always obtained, e.g., competition may exist between
two different jobs for the same direct access device. It

[Figure 6-Histogram of CPU active: percent of each 90-second interval the Model 50 CPU was active, plotted against time of day]

is crucial in any performance optimizing cycle that the
system resources be overlapped for the job stream and
that competition for resources be eliminated. This can
have far greater effects on throughput and cost than
increasing the speed of the system components. Indeed, it is often possible to increase throughput while
at the same time returning or delaying purchase of system components, or going to slower components.
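The overlap accounting rests on the identity TOTAL SYSTEM TIME = CPU ACTIVE ONLY + CPU-CHANNEL OVERLAP + CHANNEL BUSY ONLY + WAIT ONLY. A sketch of the decomposition with hypothetical counter values, not those of the measured system:

```python
def performance_profile(total, cpu_active, any_channel_busy, overlap):
    """Decompose elapsed time into the four disjoint profile states,
    as percentages. `overlap` is the measured time CPU ACTIVE and
    ANY CHANNEL BUSY held simultaneously."""
    cpu_only = cpu_active - overlap
    channel_only = any_channel_busy - overlap
    wait_only = total - cpu_only - channel_only - overlap
    pct = lambda t: round(100.0 * t / total, 1)
    return {"CPU ONLY": pct(cpu_only), "OVERLAP": pct(overlap),
            "CHANNEL ONLY": pct(channel_only), "WAIT ONLY": pct(wait_only)}

# Hypothetical 300-second interval: 75 s CPU active, 125 s any channel
# busy, 45 s of that concurrent. A large WAIT ONLY share is the signal
# that overlap, not component speed, is the limiting factor.
print(performance_profile(total=300.0, cpu_active=75.0,
                          any_channel_busy=125.0, overlap=45.0))
```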

System performance profile

In order to look at the overlapping of system resources, a system performance profile, which shows the
activity of the CPU, the channels and the amount of overlapping between them, is constructed as shown in Figure
11. The system performance profile shows the overall
system utilization. It may be immediately apparent
that some of the system components are essentially
unused.

Basic indicators

Some of the basic indicators to look for in interpreting
a system performance profile are:
SMALL CHANNEL OVERLAP,
CHANNEL IMBALANCE,
HIGH CHANNEL UTILIZATION,
LARGE WAIT ONLY, and
LARGE CPU ACTIVE ONLY.

1. SMALL CHANNEL OVERLAP
The probable cause for small channel overlap
when the channel utilization is high is poor device
placement on the channels, resulting in sequential
operations. The control units and devices should
be monitored and the job stream examined in
order to determine the data sets used. The device

[Figure 7-Histogram of selector channel 1 busy: percent of each 90-second interval selector channel 1 (Model 50) was busy, plotted against time of day]

and data set information will provide sufficient
data so that a rearrangement of devices and data
sets to produce better balance can be achieved. A
new system performance profile should be constructed to verify the change.
A small channel overlap when the channel utilization is low means that all the work can be put on
one channel with little effect on the job stream
processing time.

2. CHANNEL IMBALANCE
If the channel utilization is high, but the channel
load is not balanced, the device activity needs to
be measured to determine device rearranging. A
new system performance profile should be constructed to
verify the results.
Low channel utilization would indicate that all the
work can be placed on one channel with little
effect on the system throughput.

3. HIGH CHANNEL UTILIZATION
The system data sets should be examined. There
may be a problem as to which routines are resident/
non-resident. A measurement should be made to
determine transfer time of system routines relative
to total device active time. If the transfer time is
high, make all system routines non-resident and
measure their activity to determine which routines
to make resident/non-resident.
Another possible cause is record blocking in data
sets on direct access devices. Measure the I/O device utilizations and examine the data sets on each
device to locate ones in which a larger number of
records could be placed in each block to increase
the efficiency of access.

Spring Joint Computer Conference, 1971

[Histograms from the SYSTEM/360 Model 40 and Model 50 (Case G) job analysis: selector channel 2 busy, and channel busy and CPU wait, plotted as percent busy against time of day.]
Interpreting Results of Hardware Systems Monitor

order to calculate the effect of increasing the speed
of a component, a measurement must be made
which isolates the nonoverlapped portion of that
component. Thus, to calculate the effect of increasing the CPU speed one must make a measurement that will isolate the CPU ACTIVE ONLY
TIME, and for increased device speed, the measurement must isolate ANY CHANNEL BUSY
ONLY TIME. Improvement factors will be applied to these nonoverlapped times and then added
to the other times which make up the system time
to calculate a new system time.


[Figure 12 shows horizontal bars for TOTAL SYSTEM, CPU ACTIVE, CPU WAIT, ANY CHANNEL BUSY, CPU ACTIVE ONLY, SEEK ONLY, and WAIT ONLY plotted against time.]

Figure 12-System profile including I/O measurement

Configuration change equations

1. INCREASED CPU SPEED
The CPU ACTIVE ONLY TIME, ANY CHANNEL BUSY TIME and WAIT ONLY TIME are
measured. Then an improvement factor for the
increase in the CPU speed is used to modify the
system time equation. For a CPU which is twice
as fast, the improvement factor is 2. The new
system time is NEW SYSTEM TIME = CPU
ACTIVE ONLY TIME/IMPROVEMENT FACTOR + ANY CHANNEL BUSY TIME + CPU
WAIT ONLY TIME.
This new system time is the time to process
the job stream that was measured by the monitor.

2. INCREASED DEVICE SPEED
The CPU ACTIVE TIME, ANY CHANNEL
BUSY ONLY TIME and CPU WAIT ONLY
TIME are measured. Then an improvement factor
for the increased device speed is used, e.g., for
direct access devices use the ratio of the new average rotational delay divided by the old average
rotational delay. Next calculate a new system time
by the following equation: NEW SYSTEM
TIME = CPU ACTIVE TIME + ANY CHANNEL BUSY ONLY TIME/IMPROVEMENT
FACTOR + CPU WAIT ONLY TIME.

3. INCREASED/DECREASED SEEK SPEEDS
The CPU ACTIVE ONLY TIME, ANY CHANNEL BUSY TIME, SEEK ONLY TIME, and
WAIT ONLY TIME are measured. Figure 12
shows the measured times. An improvement factor
for the change in seek speed is used, e.g., the ratio
of the new average seek time divided by the old
average seek time. The new system time is given
by the equation: NEW SYSTEM TIME = CPU
ACTIVE ONLY TIME + ANY CHANNEL
BUSY TIME + SEEK ONLY TIME/IMPROVEMENT FACTOR + CPU WAIT ONLY TIME.

4. SLOWER CPU
The CPU ACTIVE TIME, ANY CHANNEL
BUSY ONLY TIME and WAIT ONLY TIME
are measured. An improvement factor is used for
the decrease in CPU speed and the new system
time is calculated by the equation: NEW SYSTEM
TIME = CPU ACTIVE TIME/IMPROVEMENT FACTOR + ANY CHANNEL BUSY
ONLY TIME + WAIT ONLY TIME.
Notice that this equation is not the same as for
calculating the effect of substituting a faster CPU.
In the case of the slower CPU, the assumption is
made that the overlap between the CPU and the
channels remains constant, instead of decreasing as
in the faster CPU case.

5. SLOWER I/O DEVICES
The CPU ACTIVE ONLY TIME, ANY CHANNEL BUSY TIME and WAIT ONLY TIME are
measured. An improvement factor for the decrease
in device speed is used for calculating the new
system time: NEW SYSTEM TIME = CPU
ACTIVE ONLY TIME + ANY CHANNEL
BUSY TIME/IMPROVEMENT FACTOR +
CPU WAIT ONLY TIME.

This equation is not the same as for substituting
faster I/O devices. For slower I/O devices, the
overlap between the CPU and channels is assumed
to remain constant instead of decreasing as in the
faster I/O device case.

6. ALL WORK ON ONE CHANNEL
The CPU ACTIVE ONLY TIME, CHANNEL 1
BUSY TIME, CHANNEL 2 BUSY TIME, ...,
CHANNEL N BUSY TIME, and CPU WAIT
ONLY TIME are measured. The system profile is
shown in Figure 13. The equation for the new
system time using only one channel is: NEW
SYSTEM TIME = CPU ACTIVE ONLY
TIME + sum of CHANNEL BUSY TIMES +
CPU WAIT ONLY TIME.

[Figure 13 shows horizontal bars for TOTAL SYSTEM, CPU ACTIVE ONLY, CHANNEL 1 BUSY, CHANNEL 2 BUSY, CHANNEL OVERLAP (CHANNEL 1 BUSY AND CHANNEL 2 BUSY), and CPU WAIT ONLY plotted against time.]

Figure 13-System profile with channels measured separately

7. CHANNEL BALANCING
This calculation is composed of two parts:
Part 1: Measure CPU ACTIVE TIME,
DEVICE DATA BUSY TIMES,
CHANNEL BUSY TIMES and CPU
WAIT ONLY TIME.
Part 2: Examine the DEVICE DATA BUSY
TIMES and specify a new device allocation on the channels so that a better
balance of the channels is achieved.
Calculate a new ANY CHANNEL
BUSY ONLY TIME. The new system time is given by: NEW SYSTEM
TIME = CPU ACTIVE TIME +
NEW ANY CHANNEL BUSY
ONLY TIME + CPU WAIT ONLY
TIME.
An example of channel balancing is
given in the next section, Example 3.

8. SEVERAL CHANGES IN A SINGLE RUN
The new system time is equal to the sum of the
nonoverlapped times plus the largest values of the
overlapped times. It is necessary to very carefully
consider the overlapped areas and determine which
area is least affected by the speed change. Using
the system profile shown in Figure 14, what is the
effect of increasing the speed of the CPU by 2,
I/O devices by 1.5, and seek times on the new
devices by 2.5?

[Figure 14 shows horizontal bars for TOTAL SYSTEM; CPU ACTIVE, subdivided into AREA 1 (CPU ACTIVE ONLY) and AREA 2 (CPU ACTIVE AND ANY CHANNEL BUSY); AREA 3 (SEEK ONLY); AREA 4 (ANY CHANNEL BUSY ONLY); and CPU WAIT, containing AREA 5 (CPU WAIT ONLY), plotted against time.]

Figure 14-System profile for multiple system changes

Each area of the system profile is examined to
determine which values to use for the new system
time.
AREA 1 The CPU ACTIVE ONLY TIME becomes CPU ACTIVE ONLY TIME/2.
AREA 2 Since the CPU ACTIVE TIME will be
decreased more than the ANY CHANNEL BUSY TIME, AREA 2 is changed
to (CPU ACTIVE TIME ∧ ANY CHANNEL BUSY TIME)/1.5.
AREA 3 The new SEEK ONLY TIME is SEEK
ONLY TIME/2.5.
AREA 4 The ANY CHANNEL BUSY ONLY
TIME becomes ANY CHANNEL BUSY
ONLY TIME/1.5.
AREA 5 The CPU WAIT ONLY TIME is unchanged.

Thus, NEW SYSTEM TIME = CPU ACTIVE
ONLY TIME/2 + (CPU ACTIVE TIME ∧ ANY
CHANNEL BUSY TIME)/1.5 + SEEK ONLY
TIME/2.5 + ANY CHANNEL BUSY ONLY
TIME/1.5 + CPU WAIT ONLY TIME.
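Restated as arithmetic, the configuration change equations above all reduce to scaling the nonoverlapped component of interest and summing the remainder. The sketch below is an illustrative paraphrase; the function and variable names are ours, not the paper's:

```python
# Illustrative paraphrase of configuration change equations 1-6.
# All arguments are times in seconds taken from the monitor profile.

def faster_cpu(cpu_active_only, any_channel_busy, cpu_wait_only, factor):
    # Equation 1: only the nonoverlapped CPU time shrinks.
    return cpu_active_only / factor + any_channel_busy + cpu_wait_only

def faster_devices(cpu_active, any_channel_busy_only, cpu_wait_only, factor):
    # Equation 2: only the nonoverlapped channel time shrinks.
    return cpu_active + any_channel_busy_only / factor + cpu_wait_only

def changed_seeks(cpu_active_only, any_channel_busy, seek_only,
                  cpu_wait_only, factor):
    # Equation 3: only the nonoverlapped seek time is scaled.
    return (cpu_active_only + any_channel_busy
            + seek_only / factor + cpu_wait_only)

def slower_cpu(cpu_active, any_channel_busy_only, wait_only, factor):
    # Equation 4: the whole CPU time is scaled; the CPU/channel overlap
    # is assumed to stay constant (factor < 1 models a slower CPU).
    return cpu_active / factor + any_channel_busy_only + wait_only

def one_channel(cpu_active_only, channel_busy_times, cpu_wait_only):
    # Equation 6: with a single channel, the channel overlap disappears.
    return cpu_active_only + sum(channel_busy_times) + cpu_wait_only
```

Applied to the counters of Examples 1 and 2 below, `one_channel(885.36, [900.00, 60.00], 1769.64)` reproduces the 3615.00-second figure and `faster_cpu(885.36, 945.00, 1769.64, 2)` the 3157.32-second figure.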

EXAMPLES
All examples will use the same configuration and be run for the same period of time (3600 seconds).
Configuration
1-CPU
2-CHANNELS
4-DIRECT ACCESS DEVICES ON EACH CHANNEL
(D1, D2, D3, D4) on channel 1
(D5, D6, D7, D8) on channel 2

EXAMPLE 1
Putting all of the channel work on one channel

COUNTER   DESCRIPTION                      SECONDS    PERCENT
C0        CPU ACTIVE                       1200.00     33.33
C1        CHANNEL 1 BUSY                    900.00     25.00
C2        CHANNEL 2 BUSY                     60.00      1.66
C3        ANY CHANNEL BUSY                  945.00     26.25
C4        ANY CHANNEL BUSY ∧ WAIT           630.00     17.50
C16       ELAPSED TIME                     3600.00    100.00
C17       C0-(C3-C4) CPU ONLY               885.36     24.59
C18       C16-C0-C4 WAIT ONLY              1769.64     49.15

The NEW ANY CHANNEL BUSY is the sum of the channel activity:

C19       C1+C2 NEW ANY CHANNEL BUSY        960.00

The NEW SYSTEM TIME is CPU ONLY + ANY CHANNEL BUSY + WAIT ONLY:

C20       C17+C19+C18                      3615.00
C21       C20-C16 SYSTEM SLOW DOWN           15.00       .41

The above equation shows that there would be an increase in running time of 15 seconds by putting all work on one
channel. The 15 seconds is exactly the amount of overlap that occurred when the work was on both channels.
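The counter arithmetic of Example 1 can be checked mechanically. In the sketch below (variable names ours), note that the printed C17 and C18 carry fractions of a second from the raw measurement that the rounded input counters cannot reproduce exactly; the rounding errors cancel in the final sums:

```python
# Example 1 counters, in seconds, from the table above.
c0, c1, c2 = 1200.00, 900.00, 60.00    # CPU ACTIVE, CHANNEL 1 and 2 BUSY
c3, c4, c16 = 945.00, 630.00, 3600.00  # ANY CH BUSY, BUSY AND WAIT, ELAPSED

c17 = c0 - (c3 - c4)   # CPU ONLY (table: 885.36; rounded inputs give 885.00)
c18 = c16 - c0 - c4    # WAIT ONLY (table: 1769.64; rounded inputs give 1770.00)
c19 = c1 + c2          # NEW ANY CHANNEL BUSY: one channel, no overlap
c20 = c17 + c19 + c18  # NEW SYSTEM TIME
c21 = c20 - c16        # SYSTEM SLOW DOWN: the 15 s of lost channel overlap
```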


EXAMPLE 2
Substituting a CPU that is twice as fast as the old CPU

COUNTER   DESCRIPTION                      SECONDS    PERCENT
C0        CPU ACTIVE                       1200.00     33.33
C1        CHANNEL 1 BUSY                    900.00     25.00
C2        CHANNEL 2 BUSY                     60.00      1.66
C3        ANY CHANNEL BUSY                  945.00     26.25
C4        ANY CHANNEL BUSY ∧ WAIT           630.00     17.50
C16       ELAPSED TIME                     3600.00    100.00
C17       C0-(C3-C4) CPU ONLY               885.36     24.59
C18       C16-C0-C4 WAIT ONLY              1769.64     49.15

The new system time is CPU ONLY/IMPROVEMENT FACTOR + ANY CHANNEL BUSY + WAIT ONLY:

C19       C17/2+C3+C18                             3157.32
C20       C16-C19 SYSTEM IMPROVEMENT TIME           442.68    12.29

The above equation shows that there will be a decrease in running time of 442.68 seconds.

COUNTER   DESCRIPTION                              SECONDS    PERCENT
C21       C16/C19 RST                                 1.14
C22       (C0-C17)/2 NEW CPU ∧ CHANNEL OVERLAP      157.32      4.98
C23       C19-C0/2 NEW WAIT TIME                   2557.32     80.99
C24       C0/2 NEW CPU ACTIVE                       600.00     19.00
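Example 2's derived counters can likewise be recomputed from C17 and C18 as printed (variable names ours):

```python
# Example 2: printed counter values, in seconds.
c0, c3, c16 = 1200.00, 945.00, 3600.00
c17, c18 = 885.36, 1769.64   # CPU ONLY and WAIT ONLY from the table

c19 = c17 / 2 + c3 + c18     # NEW SYSTEM TIME with a CPU twice as fast
c20 = c16 - c19              # SYSTEM IMPROVEMENT TIME
c22 = (c0 - c17) / 2         # NEW CPU AND CHANNEL OVERLAP
c23 = c19 - c0 / 2           # NEW WAIT TIME
c24 = c0 / 2                 # NEW CPU ACTIVE
```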

EXAMPLE 3
Balancing Channels

COUNTER   DESCRIPTION                SECONDS    PERCENT
C0        CPU ACTIVE                 1200.00     33.33
C1        CHANNEL 1 BUSY              900.00     25.00
C2        CHANNEL 2 BUSY               60.00      1.66
C3        ANY CHANNEL BUSY            945.00     26.25
C4        ANY CHANNEL ∧ WAIT          630.00     17.50
C5        DEVICE 1 DATA BUSY          150.00      4.16
C6        DEVICE 2 DATA BUSY          100.00      2.77
C7        DEVICE 3 DATA BUSY          350.00      9.72
C8        DEVICE 4 DATA BUSY          300.00      8.33
C9        DEVICE 5 DATA BUSY           10.00       .27
C10       DEVICE 6 DATA BUSY           30.00       .83
C11       DEVICE 7 DATA BUSY           15.00       .41
C12       DEVICE 8 DATA BUSY            5.00       .13
C16       ELAPSED TIME               3600.00    100.00

This measurement shows that channels 1 and 2 are not balanced with respect to utilization. An examination of the
device utilizations shows that a better device allocation is:
Channel 1 should have devices 2, 3, 5, 7 for a channel utilization of 475 seconds.
Channel 2 should have devices 1, 4, 6, 8 for a channel utilization of 486 seconds.


Next a new system time is computed:

COUNTER   DESCRIPTION                          SECONDS    PERCENT
C17       C16-C0-C4 WAIT ONLY                  1769.64     49.15
C18       C6+C7+C9+C11 NEW CHANNEL 1 BUSY       475.00     13.19
C19       C5+C8+C10+C12 NEW CHANNEL 2 BUSY      486.00     13.74

Since there does not exist an ANY CHANNEL BUSY for the new device arrangement, it will be estimated by using
probability theory.

C20       C18 ∨ C19                             869.04     24.89

The new ANY CHANNEL BUSY ONLY is assumed to be proportional to the old ANY CHANNEL BUSY ONLY
TIME:

C21       (C20/C3)*C4                           597.64

The NEW SYSTEM TIME is CPU ACTIVE + ANY CHANNEL BUSY ONLY + WAIT ONLY:

C22       C0+C21+C17 NEW SYSTEM TIME           3567.68
C23       C16-C22 SYSTEM IMPROVEMENT TIME        32.32       .89

The above equation shows that there will be a decrease in system time of 32.32 seconds.

EXAMPLE 4
Calculating a new system time when different types of devices are on the same channel.
The basic configuration will be expanded to include tape as well as disks on channel 2. Then a new system time will
be calculated for tapes that are twice as fast. Since tapes and disks exist on the same channel, the measurement
should isolate the time that only the tape is operating so that the unit improvement factors can be applied.
The following measurement is made:
COUNTER   DESCRIPTION                              SECONDS    PERCENT
C0        CPU ACTIVE                               1200.00     33.33
C1        CHANNEL 1 BUSY                            900.00     25.00
C2        CHANNEL 2 BUSY                            700.00     19.44
C3        ANY CHANNEL BUSY                         1424.88     39.58
C4        ANY CHANNEL BUSY ∧ WAIT                   949.68     26.38
C5        TAPES BUSY ∧ DISK ON CHANNEL 2 NOT BUSY   640.00     17.77
C6        TAPES ONLY ∧ WAIT                         426.24     11.84
C16       ELAPSED TIME                             3600.00    100.00
C17       C16-C0-C4 WAIT ONLY TIME                 1450.32     40.28

The NEW ANY CHANNEL BUSY ONLY is equal to ANY CHANNEL BUSY ONLY - TAPE ONLY/IMPROVEMENT FACTOR. The NEW SYSTEM TIME is equal to CPU ACTIVE + NEW ANY CHANNEL BUSY
ONLY + WAIT ONLY.

C18       C0+C4-C6/2+C17 NEW SYSTEM TIME           3386.88
C19       C16-C18 SYSTEM IMPROVEMENT TIME           213.12      5.92

Equation 19 shows that substituting a tape twice as fast will reduce the system time by 213.12 seconds.
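Example 4's result follows directly from the printed counters; the check below (variable names ours) confirms the 213.12-second improvement:

```python
# Example 4: printed counters, in seconds.
c0 = 1200.00   # CPU ACTIVE
c4 = 949.68    # ANY CHANNEL BUSY AND WAIT (channel-busy-only time)
c6 = 426.24    # TAPES ONLY AND WAIT (tape is the sole active unit)
c16 = 3600.00  # ELAPSED TIME
c17 = 1450.32  # WAIT ONLY TIME

# Halving the tape-only time models tapes that are twice as fast.
c18 = c0 + c4 - c6 / 2 + c17   # NEW SYSTEM TIME
c19 = c16 - c18                # SYSTEM IMPROVEMENT TIME
```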

SUMMARY
The paper has presented a description of hardware
monitors, effective methods of optimizing installation
throughput and costs, provision for a historical base
for predicting future system needs, and significant
events to measure. The interpretation of the monitoring
results is discussed in detail. Consideration is given
to system configuration, programs, resident/non-resident
routines, data set allocation, job scheduling and
operation methods. Stress is placed on predicting improvements based on actual systems measurements in
order to optimize the system with the actual job stream.
The performance optimization cycles and the interpretation of the system performance profile were developed.
REFERENCES
1 C T APPLE
The program monitor-A device for program performance
measurement
ACM 20th Nat Conf 1965 pp 66-75
2 P CALINGAERT
System performance evaluation: Survey and appraisal
Comm ACM 10 January 1967 pp 12-18
3 G ESTRIN D HOPKINS B COGGAN
S D CROCKER
Snuper computer-A computer in instrumentation
automation
FJCC 1967 pp 645-656
4 F D SCHULMAN
Hardware measurement device for IBM System/360 time
sharing evaluation
Proc ACM Nat Meeting 1967 pp 103-109
5 D J ROEK W C EMERSON
A hardware instrumentation approach to evaluation of a
large scale system
ACM Nat Conf 1969 pp 351-367
6 A J BONNER
Using system monitor output to improve performance
IBM Systems Journal 8 1969 pp 290-298
7 How to find bottlenecks in computer traffic
Computer Decisions April 1970

4-way parallel processor partition of an atmospheric
primitive-equation prediction model
by E. MORENOFF
Ocean Data Systems, Inc.
Rockville, Maryland
and

W. BECKETT, P. G. KESEL, F. J. WINNINGHOFF and P. M. WOLFF
Fleet Numerical Weather Central
Monterey, California

INTRODUCTION

A principal mission of the Fleet Numerical Weather
Central is to provide, on an operational basis, numerical meteorological and oceanographic products peculiar
to the needs of the Navy. Toward this end the FNWC
is also charged with the development and test of numerical techniques applicable to Navy environmental forecasting problems. A recent achievement of this development program has been the design, development,
and beginning in September 1970, operational use of
the FNWC five-layer, baroclinic, atmospheric prediction model, based on the so-called "primitive-equations," and herein defined as the Primitive Equation
Model (PEM).
The PEM was initially written as a single-processor
version to be executed in one of the two FNWC computer systems. In this form the PEM was exercised as
a research and development tool subject to improvement and revision to enhance the meteorological forecasts being generated.
The development reached a point in early 1970 where
the PEM was skillfully simulating the essential three-dimensional, hemispheric distribution of the atmospheric-state parameters (winds, pressure, temperature,
moisture, and precipitation). Its ability to predict the
generation of new storms, moreover, was particularly
encouraging. The FORTRAN coded program, however,
required just over three hours to compute a set of 36
hour predictions. To be of operational utility, it was
clear that several types of speed-ups were in order.
The principal effort in the development of the operational version of the PEM was directed at partitioning
the model to take advantage of all possible computational parallelism to exploit the four powerful central
processing units available in the FNWC computer installation. Additional speed-ups involved machine language coding for routines in which the physics were
considered firm, and the substitution of table look-up
operations for manufacturer supplied algorithms. The
resultant four-processor version of the PEM was considered ready for final testing in August 1970, four
months after work was initiated.
The one-processor version of the PEM required 184
minutes of elapsed time for the generation of 36-hour
prognoses. The four-processor version, on the other
hand, requires only one hour of elapsed time to produce
the same results.
This paper summarizes the principal factors involved
in the successful operation of the 4-processor version of
the PEM. Operating System modifications needed to
establish 4-way inter-processor communications
through Extended Core Storage (ECS) are described in
the second section. The PEM structure is described in
the third section. The partitions into which the PEM
is divided are examined in the fourth section. The fifth
section is devoted to the methods employed for synchronizing the execution of the partitions in each of the
multiple processors and the model's mode of operation.
The results of the PEM development and reduction to
operational use are summarized in the last section.

FNWC COMPUTER SYSTEMS
COMMUNICATIONS
The Fleet N umerical Weather Central operates two
large-scale and two medium-scale computer systems as
Figure 1-FNWC computer system configuration

shown in Figure 1. The two CDC 3200 computer systems communicate with each other through a random
access drum. One of the CDC 3200 computers is linked
to one of the CDC 6500 computers by a manufacturer-supplied satellite coupler. The two dual-processor CDC
6500 computer systems are linked with each other
through the one million words of Extended Core Storage (ECS).
Normally, the ECS is operated in such a manner
that 500,000 words are assigned to each of the two
CDC 6500 computer systems with no inter-communication permitted. A mechanism was developed by the
FNWC technical staff allowing authorized programs in
each of the four central processors of the two CDC
computer systems to communicate with each other and,
at the same time, be provided with software protection
from interference by non-authorized programs.
There are three classifications of ECS access: normal,
master, and slave, designated for each job in the system by an appropriate ECS access code and a pass key.
For normal ECS access these fields are zero. If the ECS
access code field designates a job as a master, then the
associated pass key will be interpreted as the name of
the ECS block storage assigned to that job. A slave has no
ECS assigned to it but is able to refer to the ECS block
named by its pass key.
A master job in one of the CDC 6500's may have
slave jobs in the other CDC 6500. A communication
mechanism called ISI was established between the
operating systems by FNWC technical staff to facilitate implementation of the master-slave ECS access
classification. ISI is a pair of bounce PP routines (one
in each machine) which provide a software, full duplex
block multiplexing channel between the machines via
ECS. Messages and/or blocks of data may be sent over
this channel so that ISI may be used to call PP programs in the other machine or to pass data such as
tables or files between the machines.
Obtaining a master/slave ECS access code is accomplished by two PP programs: ECS and IEC. A job
wishing to establish itself as a master first requests a
block of ECS storage in the same manner as a normal
access job. Once obtained, the labeling of this block of
ECS storage is requested by calling the PP program
ECS with the argument specifying the desired pass key
and the access code for a master. The program ECS
searches the resident control point exchange areas
(CPEA) for a master with the same pass key. If one is
found the requesting job is aborted; if not, the program
ECS uses ISI to call IEC in the other machine. IEC
will perform a similar search of the CPEA in its own
machine and return its findings to the program ECS
via ISI. If the other machine is down, or if no matching
key can be found, the label is established; otherwise the
requesting job is aborted. Before returning control to
the requesting job, the program ECS increments the
ECS parity error flag in the monitor via a special monitor function developed at FNWC. A non-zero value of
this flag has the effect of preventing ECS storage moves
in the half of ECS assigned to the particular machine.
Similarly, a job wishing to establish itself as a slave
calls the PP program ECS with the appropriate pass
key and access code. ECS searches its own machine's
CPEA for a master with a matching key. If none is
found, IEC is called on the other machine via ISI and
the search is repeated in the other CPEA. If still none
is found, this fact is indicated to the requesting job. If
a match should exist in either machine, the requesting
job will have its original ECS address (ECRA) and field
links (ECFL) saved in its CPEA and will be given the
ECRA and ECFL of the matching master.
Modifications made to the ECS storage move program allow ECS storage moves in a machine with no
master present. Modifications to the end of job processor reset the ECRA and ECFL of slaves to their
original values and decrement the ECS parity error flag in the
monitor when a master terminates.
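The registration logic described above can be paraphrased in a toy model. Everything below (the class, the dictionary standing in for a CPEA, the method names) is our illustration, not the actual CDC 6500 PP code:

```python
# Toy model of the FNWC master/slave ECS access protocol.
# Each "machine" holds a CPEA: a mapping of pass keys to master jobs,
# and can search its peer's CPEA "via ISI" when the peer is up.

class Machine:
    def __init__(self, name):
        self.name = name
        self.cpea = {}     # pass key -> master job name
        self.other = None  # peer machine, reachable via the ISI channel
        self.up = True

    def register_master(self, job, pass_key):
        """A job labels its ECS block as master under pass_key.
        Returns False (the job is aborted) if the key is in use anywhere."""
        if pass_key in self.cpea:
            return False                     # duplicate in own machine
        if self.other is not None and self.other.up:
            if pass_key in self.other.cpea:  # IEC-style search in peer CPEA
                return False
        self.cpea[pass_key] = job            # label established
        return True

    def attach_slave(self, pass_key):
        """A slave asks to share the ECS block named by pass_key.
        Returns the owning master's name, or None if no match exists."""
        if pass_key in self.cpea:
            return self.cpea[pass_key]
        if self.other is not None and self.other.up:
            return self.other.cpea.get(pass_key)
        return None
```

As in the paper, a duplicate pass key anywhere aborts the would-be master, a slave can find a master in either machine, and a label can still be established when the other machine is down.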

ATMOSPHERIC PREDICTION MODEL
STRUCTURE
Several developmental variations of a five-layer baroclinic atmospheric prediction model, based on integrations of the so-called primitive equations, were designed
and developed by Kesel and Winninghoff1 in the 1969-1970 period at FNWC Monterey.
The governing equations are written in flux form in a


manner similar to Smagorinsky et al.,2 and Arakawa.3
The corresponding difference equations are based on
the Arakawa technique. This type of scheme precludes
nonlinear computational instability by requiring that
the flux terms conserve the square of an advected parameter, assuming continuous time derivatives. Total
energy is conserved because of requirements placed
upon the vertical differencing; specifically, the special
form of the hydrostatic equation. Total mass is conserved, when integrated over the entire domain. Linear
instability is avoided by meeting the Courant-Friedrichs-Lewy criterion.
The Phillips4 sigma coordinate system is employed
in which pressure, P, is normalized with the underlying
terrain pressure, π. At levels where sigma equals 0.9,
0.7, 0.5, 0.3, and 0.1, the horizontal wind components,
u and v, the temperature, T, and the height, Z, are
carried. The moisture variable, q, is carried at the
lowest three of these levels. Vertical velocity, ω ≡ -dσ/dt,
is carried at the layer interfaces, and calculated diagnostically from the continuity equation. See Figure 2.
The Clarke-Berkovsky mountains are used in conjunction with a Kurihara5 form of the pressure-force
terms in the momentum equations to reduce stationary
"noise" patterns over high, irregular terrain.
The Richtmyer centered time-differencing method is
used with a ten-minute time step, but integrations are
recycled every six hours with a Matsuno (Euler backward) step to greatly reduce solution separation. The
mesh length of the grid is 381 kilometers at 60 North.
The earth is mapped onto a polar stereographic projection for the Northern Hemisphere. In the calculation
of map factor and the Coriolis parameter, the sine of
the latitude is not permitted to take on values less than
that value corresponding to 23 degrees North.
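The time-differencing scheme, centered (leapfrog) steps recycled every six hours (36 ten-minute steps) with a Matsuno (Euler backward) step, can be illustrated on the toy equation dy/dt = -y. The sketch below (ours) shows the scheme, not the meteorology:

```python
import math

def integrate(y0, dt, nsteps, recycle_every=36):
    """Centered (leapfrog) time stepping for dy/dt = -y, recycled every
    `recycle_every` steps with a Matsuno (Euler backward) step, which
    suppresses the separation of even and odd time levels."""
    f = lambda y: -y
    y_prev, y = None, y0
    for n in range(nsteps):
        if y_prev is None or n % recycle_every == 0:
            # Matsuno step: forward predictor, then backward corrector.
            y_star = y + dt * f(y)
            y_new = y + dt * f(y_star)
        else:
            # Centered step uses the value from two time levels back.
            y_new = y_prev + 2 * dt * f(y)
        y_prev, y = y, y_new
    return y

# With dt = 0.01 over 100 steps the result stays close to exp(-1).
```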
Lateral diffusion is applied at all levels (sparingly)
in order to redistribute high frequency components in
the mass and motion fields. Surface stress is computed
at the lowest layer only.
A considerable part of the heating "package" is
fashioned after Mintz and Arakawa, 6 as described by
Langlois and K wok. 7 The albedo is determined as a
function of the mean monthly temperature at the
earth's surface. A Smagorinsky parameterization of
cloudiness is used at one layer (sigma equals 0.7), but
based on the relative humidity for the layer between
0.7 and 0.4. Dry convective adjustment precludes hydrostatic instability. Moisture and heat are redistributed in the lowest three layers by use of an ArakawaMintz
small-scale
convection
parameterization
technique. Small-scale convective precipitation occurs
in two of thf;f three types of convection so simulated.
Figure 2-Diagram of levels and variables

Evaporation and large-scale condensation are the main
source-sink terms in the moisture conservation equation. Evaporation over land is based on a Bowen ratio,
using data from Budyko.
In the computation of sensible heat flux over water,
the FNWC-produced sea surface temperature distribution is held constant in time. Over land, the required
surface temperature is obtained from a heat balance
equation. Both long- and short-wave radiative fluxes
are computed for two gross layers (sigma = 1.0 to 0.6
and from 0.6 to 0.2). The rates for the upper gross
layer are assigned to the upper three computational
levels. Those rates for the lower gross layer are assigned
to the lower two computational levels.
The type of lateral boundary conditions which led to
the over-all best results is a constant-flux restoration
technique devised by Kesel and Winninghoff, and implemented in January 1970.
The technique was designed to accomplish the following objectives:
a. To eliminate the necessity of altering the initial
mass structure of the tropical-subtropical atmosphere as is the case when cyclic continuity is
used.
b. To eliminate the problems associated with the
imposition of rigid, slippery, insulated-wall
boundary conditions; particularly those concerning the false reflection of the computational
mode at outflow boundaries.
c. To preserve the perturbation component in the

42

Spring Joint Computer Conference, 1971

aforementioned areas in the prognostic period
(although no dynamic prediction is attempted
south of 4 North the output is much more
meteorological than fields which have been flattened as required by cyclic continuity).
The procedure is as follows: All of the distributions
of temperature, moisture, wind, and terrain pressure
are preserved at initial time. A field of restoration coefficients which vary continuously from unity at and
south of 4 North to zero at and north of 17 North is
computed. At the end of each ten minute integration
step the new values of the state variables are restored
back toward their initial values (in the area south of
17 North) according to the amount specified by the
field of restoration coefficients. The net effect of this
procedure is to produce a fully dynamic forecast north
of 17 North, a persistence forecast south of 4 North,
and a blend in between. The mathematical-physical
effect is that the region acts as an energy sponge for
externally (outwardly) propagating inertio-gravity
oscillations.
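The restoration step amounts to a pointwise blend of the forecast with the initial state. In the sketch below, the linear ramp between 4 North and 17 North is our assumption; the paper says only that the coefficients vary continuously between those latitudes:

```python
def restoration_coefficient(lat_deg):
    # 1 at and south of 4N (pure persistence), 0 at and north of 17N
    # (fully dynamic); assumed linear in between.
    if lat_deg <= 4.0:
        return 1.0
    if lat_deg >= 17.0:
        return 0.0
    return (17.0 - lat_deg) / 13.0

def restore(predicted, initial, lat_deg):
    # Applied to each state variable after every ten-minute step:
    # pull the new value back toward its initial value by r.
    r = restoration_coefficient(lat_deg)
    return (1.0 - r) * predicted + r * initial
```

South of 4 North the forecast reduces to persistence, north of 17 North it is untouched, and in between the region acts as the energy sponge described above.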
The basic inputs associated with the initialization
procedure are the virtual temperature analyses for the
Northern Hemisphere at 12 constant pressure levels
distributed from 1000 MBS to 50 MBS, height analyses
at seven of these pressure levels, moisture analyses at
four levels from the surface up to 500 MBS. In addition,
the terrain height, sea level pressure and sea surface
temperature analyses are used.
Several types of wind initialization have been tried:
geostrophic winds (using constant Coriolis parameter);
linear balance winds; full balance winds; winds obtained
by use of an iterative technique. Aside from geostrophic
winds the quickest to compute is the set of non-divergent winds derived from solution of the so-called linear
balance equation. These are entirely satisfactory for
short-range forecasts (up to three days).
The degree of prediction skill currently being observed from the tests is very gratifying. It is clear that
little or nothing is known about the initial specification
of these parameters over large areas of the Northern
Hemisphere, particularly over oceans and at high
altitudes.
As noted at the start of the section, the equations
are written in flux form and an Arakawa-type conservative differencing scheme is employed. No attempt will
be made to exhibit herein a complete set of the corresponding difference equations, since it is well beyond
the scope of this paper to do so. Rather, it will suffice
to show the main continuous equation forms (using
only symbols such as H, Q, and F, to denote all of the
diabatic heating effects, moisture source and sink terms,
and surface stress, respectively).

There are five prognostic equations, one of which
must be integrated prior to parallel integration of the
remaining four. These are the continuity equation, the
east-west momentum equation, the north-south momentum equation, the thermodynamic
energy equation, and the moisture conservation equation. Heights (geopotentials) are computed diagnostically from the hydrostatic equation (the scaled
vertical equation of motion). Vertical velocities are
calculated from a form of the continuity equation. The
pressure-force terms are shown in their original forms.
[The pressure surfaces are actually synthesized "locally"
about each point, by means of the hypsometric conversion of pressure changes to geopotential changes;
and geopotential differences are computed on these
pressure surfaces.] This Kurihara-type modification
tends to reduce inconsistent truncation error when
differencing the terrain pressure (which remains fixed
in any column) and geopotentials of sigma surfaces
(the "smoothness" of which varies with height).
A. East-West Momentum Equation

a7rU = _m.J~(UU7r)+~(UV7r)}+~
at
lax m ay m
au

B. North-South Momentum Equation

$$\frac{\partial(\pi V)}{\partial t} = -m^2\left[\frac{\partial}{\partial x}\left(\frac{UV\pi}{m}\right)+\frac{\partial}{\partial y}\left(\frac{VV\pi}{m}\right)\right]-\frac{\partial}{\partial\sigma}(\pi\dot\sigma V)-f\pi U-m\pi\frac{\partial\phi}{\partial y}-mRT\frac{\partial\pi}{\partial y}+F_y$$

C. Thermodynamic Energy Equation

$$\frac{\partial(\pi T)}{\partial t} = -m^2\left[\frac{\partial}{\partial x}\left(\frac{\pi UT}{m}\right)+\frac{\partial}{\partial y}\left(\frac{\pi VT}{m}\right)\right]-\frac{\partial}{\partial\sigma}(\pi\dot\sigma T)+\frac{RT}{c_p\sigma}\left\{\dot\sigma\pi+\sigma\left[\frac{\partial\pi}{\partial t}+m\left(U\frac{\partial\pi}{\partial x}+V\frac{\partial\pi}{\partial y}\right)\right]\right\}+\pi H$$

D. Moisture Conservation Equation

$$\frac{\partial(\pi q)}{\partial t} = -m^2\left[\frac{\partial}{\partial x}\left(\frac{\pi Uq}{m}\right)+\frac{\partial}{\partial y}\left(\frac{\pi Vq}{m}\right)\right]-\frac{\partial}{\partial\sigma}(\pi\dot\sigma q)+\pi Q$$

where Q = moisture source/sink term

4-Way Parallel Processor Partition

E. Continuity Equation
$$\frac{\partial\pi}{\partial t} = -m^2\left[\frac{\partial}{\partial x}\left(\frac{U\pi}{m}\right)+\frac{\partial}{\partial y}\left(\frac{V\pi}{m}\right)\right]-\pi\frac{\partial\dot\sigma}{\partial\sigma}$$

F. Hydrostatic Equation
$$\frac{\partial\phi}{\partial\sigma} = -\frac{RT}{\sigma}$$

PARTITIONING THE MODEL
The PEM may be considered in three distinct sections: the data input and initialization section; the
integration section repeated in each forecast time step;
and the output section. Each sixth time step, the basic
integration section is modified to take into consideration the effects of diabatic heating. This includes incoming solar radiation, outgoing terrestrial radiation,
sensible heat exchange at the air-earth interface, and
evaporation. Condensation processes, in contrast, are
considered every time-step. Each thirty-sixth time
step, the results of the preceding forecast hours are
output and the integrations reiterated.
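The cadence just described can be sketched as a simple driver loop; the step counts (every step, every sixth, every thirty-sixth) come from the text, while the function names and the logging are illustrative placeholders:

```python
def run_forecast(n_steps, basic_step, heating_step, output_step):
    """Drive the PEM time-step cadence described in the text.

    basic_step, heating_step and output_step are caller-supplied
    callables (placeholders, not names from the paper). Condensation
    is part of every basic step; every sixth step adds the diabatic
    heating effects; every thirty-sixth step (six forecast hours)
    triggers the output section before integration continues.
    """
    log = []
    for step in range(1, n_steps + 1):
        basic_step(step)          # includes condensation processes
        log.append(("basic", step))
        if step % 6 == 0:         # diabatic heating effects
            heating_step(step)
            log.append(("heating", step))
        if step % 36 == 0:        # six forecast hours: output section
            output_step(step)
            log.append(("output", step))
    return log
```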

Figure 3-Overall model partition structure

The basic structure of the PEM, as represented by
the governing set of difference equations and the
method of their solution, is naturally suited for partitioning for parallel operation and concurrent execution
in multiple processors. The particular partitioning implemented was selected in order to insure approximately
equal elapsed time for the execution of concurrently
operating partitions. Four-way partitions were principally employed, although both three-way and two-way
partitions were introduced where appropriate.
The basic partition of the model was based on the
observation that during each time step in the forecast
process the momentum equations in the east-west and
north-south directions, the thermodynamic energy
equation, and the moisture equation could each be executed concurrently in each of four different processors.
By virtue of the centered time-differencing method, the
forcing functions to be evaluated in the solution of
each of these equations require data generated during
the preceding time step and accessed on a read only
basis during the current time step. Hence parallel processing could be achieved by providing separate temporarylocations for storage of intermediate results
during execution of a time step by each processor and
by providing a mechanism to insure that each processor
is at the same time step in the solution of its assigned
equation and, where required, at the same level within
that time step.
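Because each equation reads previous-step data on a read-only basis and writes only its own temporaries, the four integrations are mutually independent; a minimal sketch of this double-buffered step, with Python threads standing in for the four processors and all field names illustrative:

```python
import concurrent.futures

def advance_step(prev, update_fns):
    """One time step of the four-way partition, sketched with threads.

    prev maps field names to previous-step values and is treated as
    read-only; update_fns maps field names to functions of prev.
    Because every update reads only prev, the four integrations may
    run concurrently; results land in separate temporaries and are
    committed together at the end of the step.
    """
    temp = {}
    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
        futures = {name: pool.submit(fn, prev)
                   for name, fn in update_fns.items()}
        for name, fut in futures.items():
            temp[name] = fut.result()   # implicit join: the step barrier
    return temp  # transferred from temporary to permanent storage
```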
With this four-way partitioning within the basic time
step as a starting point, additional possibilities for
simultaneity in the model's operation were observed
and further partitions developed. For example, prior
to the execution of the four-way partitioning within
each time step a three-way partition was implemented
which allowed the continuity equation to be solved for
the interface vertical velocities and the local change of
lower boundary pressure at the same time that geopotential-field correction terms are generated. The
model's initialization section was similarly partitioned
three ways and the output section two ways. Finally,
the heating effects computations were implemented as
a three-way partition.
The four-way, three-way and two-way partitions
were packaged and compiled as four separate programs,
one for each of the four FNWC processors. The overall
structure of the partitioned model is illustrated in
Figure 3. Following completion of the output section
at time step (36), the integration sequence is recycled
from time step (1) as shown.
Spring Joint Computer Conference, 1971

Figure 4-Typical time step partition structure

Processor 1 is designated as the "master" processor and Processors 2, 3, and 4 as the "slave" processors, both in the sense described in the inter-computer communications section and in the sense that each time step is initiated by command from Processor 1 and terminated by Processor 1 acknowledgment of a "complete" signal emanating from each of Processors 2, 3, and 4. At the completion of each step, results from the computations of that time step are transferred from temporary to permanent locations in storage and the next time step initiated. Once again, the transfer is initiated by command from Processor 1 and terminated by Processor 1 acknowledgment of a transfer-complete signal received from Processors 2, 3, and 4.
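The command/acknowledgment exchange just described can be sketched with per-slave events; the class and method names are illustrative, not from the paper:

```python
import threading

class StepController:
    """Master/slave time-step handshake, sketched with events.

    The master raises a 'go' event to start the step in each slave,
    then waits for every slave's 'complete' event, mirroring the
    command and acknowledgment exchange described in the text.
    """
    def __init__(self, n_slaves):
        self.go = [threading.Event() for _ in range(n_slaves)]
        self.done = [threading.Event() for _ in range(n_slaves)]

    def master_step(self):
        for ev in self.go:          # initiate the time step
            ev.set()
        for ev in self.done:        # acknowledge each "complete" signal
            ev.wait()
            ev.clear()

    def slave_step(self, i, work):
        self.go[i].wait()           # hold until commanded
        self.go[i].clear()
        work()
        self.done[i].set()          # signal completion to the master
```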
The structure of a typical time step partition is
illustrated in Figure 4. At the start of the time step a
three-way split is initiated by Processor 1 during which
time Processor 1 integrates the continuity equation to obtain vertical velocities and Processors 2 and 3 compute the ten pressure-force-term geopotential correction fields in the east-west and north-south directions, respectively. At this time Processor 4 is not executing a portion of the model and may either be idling or operating on an independent program in a multi-programmed mode. The completion of the assigned tasks by Processors 2 and 3 is signaled to Processor 1, which then
initiates the basic four-way split. The variables u, v, T
and q represent the new values of the variables obtained

through integration of the east-west and north-south
oriented momentum equations, the thermodynamic
energy equation and the moisture conservation equation, respectively. The variable L represents the computation of the effects of the large scale condensation
process.
Once the computations of the Ui, Vi and Ti (i= 1, 2,
3, 4, 5) are initiated in Processors 1, 2, and 3, respectively, they proceed independently of one another to
the end of the time step. Each "i" value represents
another layer in the five-layer atmospheric model.
An added consideration is introduced into the computations of Processor 4, however. Before the effects of
the large scale condensation process can be computed
for a layer, both the Thermodynamic Energy equation
and the Moisture Conservation equation must be solved
at that layer. Hence, a level of control is required to
synchronize the execution of Processor 4 with Processor
3 within the individual time step computations. Further, the Dry Convective Adjustment computation in
Processor 4 requires the completion of all five layers
of the Thermodynamic Energy equation before it can
be initiated so that a second level of intra-step control
is required. At the conclusion of the Dry Convective
Adjustment computation, the Hydrostatic equation is
integrated in Processor 4 to obtain the new geopotential
fields. The time step is concluded with the transfer of
intermediate time step results from temporary to
permanent storage.
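The two levels of intra-step control imposed on Processor 4 can be sketched as follows, with events standing in for the Buffer File signals; the function and tuple names are illustrative:

```python
import threading

N_LAYERS = 5

def processor4_step(t_done, q_done):
    """One iterative-integration step as seen from Processor 4.

    t_done[i] and q_done[i] are events set elsewhere when the
    thermodynamic energy and moisture conservation equations finish
    layer i. Large-scale condensation at a layer waits on both
    (first control level); the dry convective adjustment waits on
    all five thermodynamic layers (second control level); the
    hydrostatic equation is then integrated to conclude the step.
    """
    log = []
    for i in range(N_LAYERS):
        t_done[i].wait()            # first control level: per layer
        q_done[i].wait()
        log.append(("condensation", i + 1))
    for ev in t_done:               # second control level: all layers
        ev.wait()
    log.append(("dry_convective_adjustment",))
    log.append(("hydrostatic_integration",))
    return log
```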
The basic time-step partition structure is modified
each sixth time step to include the effects of diabatic
heating. The heating section was implemented as a
three-way partition illustrated in Figure 5. Additional
intra-step level control is required to synchronize the
execution of each of the partitions as shown in the
figure. Note that the heating partition in Processor 3 is
itself divided to allow as great a degree of simultaneity
as possible with the execution of partitions in Processors
1 and 2.
The output section, executed each thirty-sixth time
step (at the completion of six forecast hours), is partitioned as shown in Figure 6. The output section partitions were placed in Processors 3 and 7 4 principally for
central memory space considerations, more central
memory being available in these processors than in
Processors 1 and 2. The basic function of each output
partition is co-ordinate transformation of the forecast
variables and conversion to forms suitable for the user
community.
Each output partition is initiated by command from
Processor 1. Processor 4 may immediately begin processing of the east-west and north-south momentum
equation variables but must wait on the transformation
of the Phi fields until Processor 3 has completed the


Preprocessor program. A three-way partition was not
implemented since the Preprocessor must be completed
prior to the transformation of the Thermodynamic
energy equation and moisture conservation equation
variables.
To increase total system reliability a checkpoint-restart procedure was designed and coded. At each output step (6, 12, 18, ..., 72 hours) all of those data fields
required to restart the PEM are duplicated from their
permanent ECS locations onto a magnetic tape by
Processor 1, at the same time that Processors 3 and 4
are processing the output forecast fields. The essential
difference between these two data sets is that the restart fields contain the variables on sigma surfaces as
opposed to the pressure surface distributions required
by the consumers.
The "restart" procedure itself requires less than a
minute. If the prediction model run is terminated for
any type of failure (hardware, software, electric power,
bad input data, etc.), the restart capability ensures
that the real time loss will be less than ten minutes.
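A minimal sketch of such a checkpoint/restart pair, with a file standing in for the magnetic tape and a Python dictionary standing in for the restart fields (the names and format are illustrative, not the FNWC ones):

```python
import pickle

def checkpoint(fields, path):
    """Duplicate the restart fields (the sigma-surface variables) to
    a file, standing in for the tape copy made by Processor 1 while
    Processors 3 and 4 process the output forecast fields."""
    with open(path, "wb") as f:
        pickle.dump(fields, f)

def restart(path):
    """Reload the fields needed to resume the forecast after a
    hardware, software, power, or input-data failure."""
    with open(path, "rb") as f:
        return pickle.load(f)
```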
In addition to the four processor version of the Atmospheric Prediction Model a two-processor version
was also implemented. The primary motivation for the
second implementation was to provide a back-up capability with graceful degradation which could be operated in the event one or two of the central processing
units were down for extended periods. The two-processor version will also be used as the vehicle for further research and development efforts to improve the meteorological and numerical aspects of the model, and the quality (skill) of the resultant forecasts.

Figure 5-Influence of heating on time step computation

Figure 6-Output partition structure
PARTITION SYNCHRONIZATION AND
EXECUTION
The parallel execution of the multiple partitions is
realizable because it is possible to postulate a mechanism by which the operation of each partition in each
of the multiple processors can be exactly synchronized.
This mechanism is an adaptation to the requirements
of the PEM and the characteristics of the FNWC
computer installation of a general program linkage
mechanism known as the Buffer File Mode of Operation.8,9,10
Implicit in the Buffer File Mode of Operation is the
concentration of all inter-program communications
through Buffer Files. A Buffer File is a set of fixed
length blocks organized in a ring structure and placed
in each data path from one program to another. The
program generating the data to be passed places the
data into the Buffer File once its operations on that
data have been completed. The program to receive the
data finds the data to be operated on in the Buffer
File.
The flow of data through the Buffer File is unidirectional; that is, one program may only write data to the
Buffer File and the other may only read data from the


Buffer File. Pointers are maintained which indicate
which blocks in the Buffer File have last been written
into and read from by the two programs involved in
the data transfer. The Buffer File Mode of Operation
can be used to synchronize the operation of otherwise
asynchronously operating programs in the same or
different processors by either of two methods.
In the first instance, program synchronization is
effected by regulating the streaming of data through
the Buffer File from one program to another. The program writing data to a Buffer File cannot proceed beyond the point in its execution when it is necessary to
place data into the Buffer File and there is no room for
additional data in the Buffer File. Similarly, a program
reading data from the Buffer File cannot proceed beyond the point in its execution when it requires data
from the Buffer File and there is no additional data in
the Buffer File. The execution of a program, either
waiting for additional data in its input Buffer File or
for additional space in its output Buffer File, is temporarily delayed, and thereby brought into synchronization with the execution of the other program.
In the second instance, program synchronization is
effected by conveying "change of state" or "condition"
information from one program to the other. The Buffer
File block size is chosen on the basis of the quantity of
information to be passed between programs. The internal state change of a program is noted as a block of
data in that program's output Buffer File. The fact
that there has been a change in state of the program
can readily be sensed by the other program which then
can read the block of data from the Buffer File. The
second program can determine the nature of the change
in state of the first program by examination of the data
in the block it has read from the Buffer File.
The bi-directional transfer of the program state information is realized by the introduction of Buffer File
pairs. The first Buffer File can only be read from by the
first program and written to by the second program,
while the second can only be read from by the second
program and written to by the first program. This
method of exchanging state information between
programs not only provides a mechanism for synchronizing the execution of two otherwise asynchronously
executed programs, but also eliminates the internal
program housekeeping which would normally be needed
to coordinate the accesses and the sequences of such
accesses of the programs to the program state information.
The PEM synchronization mechanism, referred to
herein as the Partition Synchronization Mechanism
(PSM), is based on the latter alternative. The application of the PSM to the multi-processor FNWC computer environment requires the Buffer Files to reside
in some random access storage device jointly accessible
by each of the processors. The device which satisfies
this requirement is the ECS, operated in the manner
previously described.
A pair of Buffer Files is assigned between each two
partitions for which bi-directional transfer of state information is required. Hence in the typical time step
partition structure illustrated in Figure 4 and amplified
in Figure 5, Buffer File pairs are assigned between
partitions resident in Processors 1 and 2, 1 and 3, 1 and
4, ·2 and 4, and 3 and 4.
The nature of the change of state information to be
passed between any pair of partitions in the PEM is
whether or not one partition has reached a point in its
execution where sufficient data has been developed to
allow the other partition to initiate or continue its own
execution. This can be represented as a single "GO-NO
GO" flag to be sensed by the second partition. Hence,
in the PEM the Buffer File recirculating ring structure is reduced to a single one-word block maintained in ECS.
Referring to Figure 3, it can be seen that the issuance
of a "GO-NO GO" signal by a partition is equivalent to
either a command to "split" the straight line execution
of the model into multiple partitions or to "join" the
execution of the multiple partitions into a lesser number of partitions. A five character Buffer File naming
convention was established to facilitate identification
of which process was involved.
The first two characters of the name serve to identify
whether the Buffer File is associated with an inter-step
or inter-level signal; the former is designated by the
characters "IS" and the latter by the characters "IL". The third character specifies whether a split ("S") or a join ("J") is being signaled. The fourth and fifth characters specify the Processors in which the partitions writing and reading the Buffer File are located, respectively. Hence Buffer File ISS12 is used by the partition resident in Processor 1 to split its operation by initiating execution in Processor 2 in going from one time
step to another.
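The naming convention can be captured in a small helper; the validation bounds are assumptions based on the four-processor configuration described in the text:

```python
def buffer_file_name(scope, action, writer, reader):
    """Compose the five-character Buffer File name from the text:
    'IS'/'IL' for an inter-step or inter-level signal, 'S'/'J' for
    split or join, then the numbers of the processors whose
    partitions write and read the Buffer File, respectively."""
    assert scope in ("IS", "IL"), "inter-step or inter-level only"
    assert action in ("S", "J"), "split or join only"
    assert 1 <= writer <= 4 and 1 <= reader <= 4  # four FNWC processors
    return f"{scope}{action}{writer}{reader}"
```

For example, the Buffer File used by Processor 1 to split execution into Processor 2 at a step boundary gets the name ISS12, matching the example in the text.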
When the PEM is to be executed the four programs
of which it is comprised are loaded, one into each of
the four Processors. The programs in Processors 2, 3,
and 4 are immediately halted upon initiation and manually delayed until the program in Processor 1, the
master Processor, has been assigned the necessary ECS
for the model's execution and has initialized all Buffer
Files to reflect a NO GO condition. Processors 2, 3 and
4 are then permitted to enter a programmed loop in
which each periodically tests a Buffer File to determine
when it may initiate processing of its first partition.


While in this programmed loop the slave Processors
may either be engaged in the execution of unrelated
programs or simply remain in a local counting loop.
Upon completion of the data input phase of its operation, Processor 1 removes the hold on the execution of
Processors 2 and 3 which then proceed with the initialization phase while Processor 4 remains at the hold
condition. At the completion of its portion of the initialization phase, Processor 1 holds until receipt of a GO
signal from Processors 2 and 3, signifying the completion of their assigned partitions. Processors 2 and 3
again enter a hold status after providing the GO signal
to Processor 1. Finally, Processor 1 initiates the iterative integration section by signaling the GO condition
for Processors 2, 3 and 4. At the completion of the execution of the partitions in Processors 2, 3 and 4 the
master Processor is notified via the appropriate Buffer
Files and each once more enters the hold condition and
remains there until Processor 1, having verified that
each partition has been completed, signals the transfer
of the time step results from temporary to permanent
storage. This process then continues to repeat itself,
modified as previously described in each sixth and
thirty-sixth time step.
Inter-level holds and go's are generally implemented
in the same manner as the inter-step holds and go's
described in the preceding paragraph. There is one exception, however. In the partition executed in Processor
4 in the iterative integration section, a separate Buffer
File is provided to control the initiation of the execution
of the large scale condensation effects computation at
each of levels 1, 2 and 3. The separate Buffer File at
each level is predicated on the need to allow the partition in Processor 3 to proceed on with its execution
after signaling the start of execution of Processor 4 at
each level without waiting for an acknowledgment of
completion of that level by Processor 4.
This emphasizes a particularly important aspect of
the operation of the PEM. The execution of the partitions in the different processors cannot get out of synchronization with one another. Each is always working
on the same time step at the same time. If the partition in one of the Processors is delayed, for example,
while that Processor solves a higher priority problem,
then all the Processors at the completion of the processing of their partitions will hold until the delayed Processor "catches up." The execution of the partitions will
not fall out of synchronization.

CONCLUSIONS

The Atmospheric Prediction Model developed at FNWC was partitioned to be operated in a 4-Processor and a 2-Processor configuration, in addition to the 1-Processor configuration for which it was initially designed. The 4-Processor version is currently in operational use at FNWC, while the 2-Processor version provides a back-up capability in the event of equipment malfunction and a new research and development tool.

A Partition Synchronization Mechanism was developed for purposes of synchronizing the execution of the partitions being executed in each of the multiple processors. The nature of the PSM is such as to insure that each partition is always operating on data in the same time step. The ability to guarantee this synchronization implies it is possible to allow other independent jobs to co-exist and share what computer resources are available with the partitioned Atmospheric Prediction Model.

The PSM fully utilizes modifications to the operating systems of each of the two CDC 6500 dual processor computers to allow programs in each of the four processors to communicate with each other using ECS. In addition to the intercomputer communications, the FNWC operating system modifications insure software protection from interference by non-authorized programs.

As a consequence of employing the 4-Processor version of the Atmospheric Prediction Model, the same meteorological products were generated in 60 minutes rather than the 184 minutes required of the 1-Processor version. This reduction in time allowed the incorporation of a new and more powerful output section and the extension of the basic forecast period from 36 hours to 72 hours. The 72 hour forecast is produced in an elapsed time of 2 hours.

The next step in the evolution of the FNWC PEM involves expanding grid size from 63 x 63 points to 89 x 89 points. To accommodate the additional central memory and processing requirements of such a shift in grid size, partitioning of the horizontal domain rather than the computational burden is under consideration. It is estimated that partitioning the horizontal domain will reduce overall central memory requirements by one-half and allow the 72 hour forecast on the expanded grid to be performed in only four hours as opposed to the five and one-third hours required by the current partitioning method. The results of these new efforts will be reported on in a later paper.

REFERENCES
1 P G KESEL F J WINNINGHOFF
Development of a multi-processor primitive equation
atmospheric prediction model


Fleet Numerical Weather Central Monterey California
Unpublished manuscript 1970
2 J SMAGORINSKY S MANABE
L L HOLLOWAY JR
Numerical results from a 9-level general circulation model of
the atmosphere
Monthly Weather Review Vol 93 No 12 pp 727-768 1965
3 A ARAKAWA
Computational design for long term numerical integration of
the equations of fluid motion: Two dimensional incompressible
flow
Journal of Computational Physics Vol 1 pp 119-143 1966
4 N A PHILLIPS
A coordinate system having some special advantages for
numerical forecasting
Journal of Meteorology Vol 14 1957
5 Y KURIHARA
Note on finite difference expression for the hydrostatic relation
and pressure gradient force
Monthly Weather Review Vol 96 No 9 1968
6 A ARAKAWA A KATAYAMA Y MINTZ
Numerical simulation of the general circulation of the atmosphere
Proceedings of the WMO/IUGG Symposium on NWP Tokyo 1968
7 W E LANGLOIS H C W KWOK
Description of the Mintz-Arakawa numerical general circulation model
UCLA Dept of Meteorology Technical Report No 3 1969
8 E MORENOFF J B McLEAN
Job linkages and program strings
Rome Air Development Center Technical Report TR-66-71 1966
9 E MORENOFF J B McLEAN
Inter-program communications, program string structures and buffer files
Proceedings of the AFIPS Spring Joint Computer Conference Thompson Books pp 175-183 1967
10 E MORENOFF
The table driven augmented programming environment: A general purpose user-oriented program for extending the capabilities of operating systems
Rome Air Development Center Technical Report TR-69-108 1969

An associative processor for air traffic control
by KENNETH JAMES THURBER
Honeywell Systems and Research Center
St. Paul, Minnesota

INTRODUCTION

In recent years associative memories have been receiving an increasing amount of attention.1-3 At the same time multiprocessor and parallel processing systems have been under study to solve very large problems.4-5 An associative processor is one form of parallel processor that seems able to provide a cost-effective solution to many problems such as the air traffic control (ATC) problem.

In general, an associative processor (AP) consists of an associative memory (AM) with arithmetic capability on a per-word basis. Usually, the arithmetic logic is a serial adder, and the associative processor can thus perform arithmetic operations on the data stored in it on a bit-serial basis in parallel over all words.

The two main types of associative processors are the distributed-logic type and the bit-slice type (non-distributed logic). The most significant difference between the two types is that the distributed-logic associative processor has logic at every bit position, while the bit-slice associative processor has logic only on a per-word basis. The differences in features of these two approaches are summarized in Table I.

The distributed-logic associative processor has significant speed advantages for the equality search and read/write operations, since these operations are performed simultaneously over all bits of every word. On the other hand, the bit-slice processor may have a speed advantage for processing operations because it will usually be able to perform bit-slice read and write operations faster than the distributed-logic processor. Thus for a specific problem, the faster of the two approaches will depend on the mix of operations required.

The design of an associative processor that combines the best features of the above approaches and can be applied effectively to problems such as air traffic control is given in this paper. This system has the flexi-

TABLE I-Summary of the method of operation of distributed and bit-slice associative processors

    Operation                 Distributed Logic   Bit-Slice
    Equality Search           Parallel-by-Bit     Serial-by-Bit
    Other Search Operations   Serial-by-Bit       Serial-by-Bit
    Arithmetic Operations     Serial-by-Bit       Serial-by-Bit
    Word Write                Parallel-by-Bit     Serial-by-Bit
    Word Read                 Parallel-by-Bit     Serial-by-Bit

Figure 1(a)-Block diagram of the overall computing system (HOST general-purpose sequential computer, I/O interface and controller, I/O interface, and parallel (associative) processor)
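The serial-by-bit arithmetic of Table I, performed by one serial adder (and carry flip-flop) per word, can be illustrated with a sketch of word-parallel, bit-serial addition; the field layout and names are illustrative:

```python
def add_fields(memory, a_bits, b_bits, s_bits):
    """Bit-serial addition over all words at once, as an associative
    processor with a serial adder per word would perform it.

    memory is a list of words, each a list of 0/1 bits; a_bits,
    b_bits and s_bits list the bit positions (least significant
    first) of the two operand fields and the sum field. Each pass
    over a bit position touches every word, so the cost is serial in
    bits but parallel over words.
    """
    carries = [0] * len(memory)          # one carry flip-flop per word
    for a, b, s in zip(a_bits, b_bits, s_bits):
        for w, word in enumerate(memory):
            total = word[a] + word[b] + carries[w]
            word[s] = total & 1          # sum bit written back in place
            carries[w] = total >> 1
    return memory
```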


bility to solve the problems that associative processors can solve and do it in a more effective manner than any other processor using the same operation speeds.

SYSTEM DESCRIPTION

There is a large class of problems to which a parallel processor can be applied. However, even this class of problems requires both types of processing; i.e., sequential and parallel. Figure 1(a) shows a general block diagram of a parallel processing system. (For the purposes of this paper, the parallel processor is an associative processor.) The system consists of a host (sequential computer), a control unit for the associative processor (and interface between the controller and host), the associative processor, and the interface between the associative processor and its controller. The interface unit to the associative processor is there because generally the associative processor and host are incompatible. For example, in a bit-slice type associative processor I/O is accomplished bit-serially, whereas in the host sequential computer I/O is usually accomplished in word-parallel form. This represents a basic limitation to the overall system! This paper presents a design of an associative processor that does not have this limitation and which has the interface unit built into the system as an integral part of the associative processor.

The overall system is shown in the block diagram in Figure 1(b). The system consists of the following parts:

1. A hybrid associative processor (AP),
2. A microprogrammed controller, and
3. The input/output interface.

The input/output interface is designed to interface the processor with the host computer. The interface contains registers and gating (such as shown in Figure 2) that perform the following functions: voltage level translations, acceptance of a word from the host processor, routing the word to its appropriate destination (controller or associative processor), and acceptance of a word from the distributed logic portion of the associative processor and transmission of desired portions of this data to the host processor, a host word at a time.

Figure 1(b)-Block diagram of the associative processing system (I/O interface and controller to the general purpose computer; associative memory Part A holding words 1 through N; data flip-flop used as data register for the bit-slice processor; register and word select register, one bit per word)

Figure 2-Block diagram of the I/O interface (paths from the HOST and from the AP or controller, path to the HOST, input and output indicator flip-flops, toggle register with manual controls, indicator register, and gating to the controller or AP; AP means associative processor)
The microprogrammed controller (Figure 3) accepts
instructions from the HOST and then performs the functions called for by the HOST. The controller's memory
consists of ROM and RAM. The section of ROM
stores the microinstructions for less-than-search, etc.,
and the remaining ROM stores constants and other
necessary fixed data for the system. The controller also
has read/write memory for storing the programs that
can be called by the HOST. These programs are written
with instructions that are either microinstructions or
machine instructions such as equality search, etc. The
instructions the HOST sends to the controller activate the programs. This arrangement enables easy design of the software, since the micro-programs, the programs, and HOST/AP interaction software can be written
almost independently after they have been defined.
The block diagram of the controller is shown in Figure 3.
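The two-level control organization just described (HOST instructions activate stored programs, whose steps are resolved through the microinstruction store) can be sketched as a dispatch through two tables; all names and operations here are illustrative, not from the paper:

```python
# Two-level control-store dispatch, sketched in Python. ROM stands in
# for the fixed microinstruction routines; RAM stands in for the
# read/write program store whose programs the HOST can activate.
ROM = {
    "equality_search": lambda state: state.append("eq-search"),
    "less_than_search": lambda state: state.append("lt-search"),
}

RAM = {
    # A stored program is a sequence of instruction names, each
    # resolved through the ROM microinstruction table.
    "find_track": ["equality_search", "less_than_search"],
}

def execute(program_name, state):
    """Run a HOST-activated program by dispatching each of its
    instructions through the microinstruction table."""
    for instr in RAM[program_name]:
        ROM[instr](state)
    return state
```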
The associative processor is shown in Figure 4. It
consists of two different parts which share the adders
and results. registers. One part is a distributed logic
associative memory. The memory has the advantage
of being able to read and write words in word-parallel
form thus eliminating the input/output bottleneck that
will occur if only serial-by-bit read/write capabilities
are present. The other portion of the processor is a
RAM oriented in such a manner that it can do a bit-slice read and write. With the addition of the per-word arithmetic hardware this memory has the fast bit-slice capabilities that we desire for arithmetic operations. This combination gives us the advantages of both

Figure 4-Block diagram of the associative processor (Part A: distributed-logic associative memory, each bit containing both storage and tag capability, with bit slices addressed by means of the mask register; Part B: RAM, each bit containing storage only; data and instruction inputs, data outputs, and the word select register)

types of associative processors; i.e., all-parallel equality
search and read/write features of the distributed logic
approach along with the high speed arithmetic capabilities of a non-distributed logic approach.
The operation of the processor requires that the
RAM operate as follows. The RAM can be thought of
as being rotated 90° from its normal position. When an
address is placed on the input lines to the decoder a
"RAM" word is selected, but because of the orientation of the RAM this "RAM" word is a bit slice (a
single bit of all data words) to the AP. This bit slice
can then be read out into the registers or adders. In
addition this bit slice can be gated by the word select
register if a subset of words is to be selected. (See the
Appendix for a description of an associative memory
and its associated registers.) To perform an associative
search is very simple. If the bit slice is being compared
to a one, it is just read out. If the equality search is on
zero, the bit slice is read and every bit complemented.
This procedure then yields a 1 in the search results
register in every matching bit position. This method
allows the bit slice portion of the AP to be implemented
using standard off the shelf RAM and conventional
IC logic.
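The bit-serial equality search just described can be sketched directly: read a bit slice, complement it when the comparand bit is zero, and AND it into the search results register. Word contents and widths here are illustrative:

```python
def equality_search(memory, argument, mask):
    """Bit-serial maskable equality search, as performed by the
    bit-slice portion of the processor described above.

    memory is a list of equal-length bit strings ('0'/'1');
    argument is the comparand; mask selects which bit positions
    participate. Returns the search results register: one bit per
    word, 1 where the word matches the argument in every unmasked
    position.
    """
    results = [1] * len(memory)         # initially every word matches
    for bit in range(len(argument)):
        if mask[bit] == "0":            # masked-out slice: skip it
            continue
        # Read the bit slice: one bit from every word at this position.
        bit_slice = [int(word[bit]) for word in memory]
        if argument[bit] == "0":        # searching on zero:
            bit_slice = [b ^ 1 for b in bit_slice]  # complement slice
        results = [r & b for r, b in zip(results, bit_slice)]
    return results
```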
PROCESSOR CAPABILITIES

Figure 3-Block diagram of the controller for the associative processor (search results register status, memory control signals, and memory mask and argument registers)

Table II is a comparison of typical associative processor speeds available. The distributed logic system
speeds are based upon the Honeywell semiconductor
associative memory. A description of the Honeywell
associative memory can be found in Reference 1. The
bit slice (non-distributed) processor speeds are based
upon a bipolar RAM implementation and are what
can be achieved with current TTL technology.6 The
bit slice processor uses the decoder as a mask register

52

Spring Joint Computer Conference, 1971

TABLE II-Typical operation speeds for the distributed, bit-slice, and hybrid associative processors. The operations compared are bit slice read and write (100-200 ns), parallel maskable equality search (100 ns/bit), equality search (300 ns, or 100 ns/bit), parallel word read (100 ns/word where available, otherwise 100 ns/bit/word), add bit slice to bit slice and store in a bit slice (400 ns), and multiple match resolve (100 ns).

and a single flip flop to hold data since it operates in a
bit serial fashion. Data will be shifted into the flip flop
serially while the decoder address is changed. The
speed of the parts is shown in Table II.
The hybrid processor has certain features that can
be used to advantage. The two parts of the system have
complementary properties.
High speed I/O can be obtained from the hybrid.
No data has to be taken from the bit slice part in a bit
serial manner. For example, if 50 words of 20 bits were
to be read from the processor, the output time is 5 μs (11 μs) if the words were in the distributed logic portion (bit slice portion). (The extra 6 μs are consumed
by reading 20 bit slices from the RAM and storing in
the AM portion.) Compare this to a bit slice processor
that requires 100 μs (20 × 50 × 100 ns) for the same
I/O. For most applications this processor has been
found to have an I/O rate 10 times that of a bit slice
processor and ½ that of a distributed logic processor.
In addition, consider the speed of arithmetic multiplication. Multiplication (assume 20 bit operands and 40
bit result) in the distributed logic processor requires
about 360 μs ((20)²(0.9), or n² bit-slice addition operations) compared to 160 μs for a bit slice processor. The
worst case in the hybrid processor would be when both
20 bit operands were in the AM and the result was to
be stored in the AM. A worst case algorithm would
read 40 bits into the RAM, multiply, and store the 40
bits in the AM. This would require 188 μs.
The arithmetic multiply is nearly twice as fast as the
distributed logic processor and about the same speed
as the bit slice processor.
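The I/O and multiplication estimates above reduce to a few lines of arithmetic. The constants below come from the timings quoted in the text; the 300 ns figure for moving one bit slice from the RAM to the AM is inferred from the stated 6 μs overhead for 20 slices.

```python
# Back-of-the-envelope check of the I/O and multiply estimates quoted
# in the text (not the original sizing calculation).

NS = 1e-9

# I/O: reading 50 words of 20 bits each
words, bits = 50, 20
t_distributed = words * 100 * NS                 # parallel word read, 100 ns/word
t_bit_slice   = words * bits * 100 * NS          # bit-serial, 100 ns/bit/word
t_hybrid      = t_distributed + bits * 300 * NS  # plus moving 20 slices RAM -> AM

print(t_distributed * 1e6, t_bit_slice * 1e6, t_hybrid * 1e6)  # microseconds

# Multiplication: 20-bit operands, n^2 bit-slice additions at 0.9 us each
n = 20
t_dist_mult = n * n * 0.9e-6
print(t_dist_mult * 1e6)
```

The figures reproduce the 5 μs, 100 μs, and 11 μs I/O times and the 360 μs distributed-logic multiply quoted above.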
Arithmetic addition speeds are not significantly enhanced by this processor and are the same as for a bit
slice or distributed logic processor, depending upon
where the operands are stored and the result is to be
stored.
Table III summarizes the results of comparing the three processor types. The characteristics of
the hybrid processor may be described as faster and
more flexible than either of the two standard implementations of associative processors. The values in
Table III are for typical operations for problems that
have been studied. The hybrid processor combines the
best of both standard processing approaches, and this
can be seen in the table. For most associative processing
applications, the hybrid approach should be far superior
when compared to either of the other two approaches.
DESCRIPTION OF THE AIR TRAFFIC
CONTROL PROBLEM
Three areas of the air traffic control problem will be
discussed in this paper. These are tracking, conflict
detection, and display processing. For the ATC application the AP size will be 512 words of 104 bits of distributed logic memory and 128 bits of a bit slice type
memory. One track will be assigned to each 232 bit
word. The controller will require about 2000 words of
read/write memory, 500 words of ROM for microprograms, and 2500 words of ROM for system constants.
This processor has been sized to accommodate 512
tracks in the terminal area (64 mile radius). In the
terminal area the general purpose computer to which
the processor interfaces would probably be the ARTS
III (HOST).
Tracking

The tracking function has three main subfunctions
that it must perform. These are: correlation of target
reports; positional correction of correlated tracks (correction); and positional prediction (prediction) for all
tracks.
The correlation function includes the following
operations:
1. Obtaining target reports from the HOST.
2. Range and azimuth correlation of target reports
against all tracks stored in the associative processor (AP).
3. Tagging the target report for prediction and/or
correction.
4. Storing the target report in the track file.
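The range and azimuth correlation of step 2 can be illustrated with a small sketch. This is a plain sequential rendering in Python; on the AP, the window comparison runs as one parallel search over all stored tracks, and the window constants here are invented for illustration only.

```python
# Illustrative, sequential rendering of the correlation step; on the AP
# the window comparison is a parallel search over all tracks at once.
# The correlation window sizes below are hypothetical.

RANGE_WINDOW = 0.5      # miles (assumed for illustration)
AZIMUTH_WINDOW = 2.0    # degrees (assumed for illustration)

def correlate(report, tracks):
    """Return indices of tracks whose predicted position falls within
    the range/azimuth window of the target report."""
    hits = []
    for i, t in enumerate(tracks):
        if (abs(t["rho_p"] - report["rho"]) <= RANGE_WINDOW and
                abs(t["theta_p"] - report["theta"]) <= AZIMUTH_WINDOW):
            hits.append(i)
    return hits

tracks = [{"rho_p": 10.1, "theta_p": 45.0},
          {"rho_p": 30.0, "theta_p": 200.0}]
report = {"rho": 10.3, "theta": 44.5}
print(correlate(report, tracks))   # a unique hit tags the track for update
```

A unique hit follows the path of Figure 5(a); no hit establishes a new track, and multiple hits lead to the bin-resizing and turning-track paths of Figures 5(c) and 5(d).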
TABLE III-Summary of the computational capabilities of the distributed, bit-slice, and hybrid associative processors. The distributed logic processor leads on equality searches; for arithmetic and for bit slice processing, the distributed logic processor rates 1 unit/second, the bit slice processor 3 units/second, and the hybrid 2-3 units/second.

Associative Processor for Air Traffic Control

53

Figure 5(a)-Path taken by a target report that correlates uniquely with a track in the associative processor (track file)

Figure 5(b)-Path taken to establish a new track or a turning track

Figure 5 is a flow chart for this portion (correlation) of the air traffic control function. The correlation function will be done as target reports are available from
the HOST. The correlation function will be performed
once for each target report; i.e., once for each track in
the system. Therefore, for 256 tracks, the function will
be called 256 times every four seconds (one radar scan),
etc.

Figure 5(c)-Path taken by a target that correlates with more than one track

Figure 5(d)-Path that establishes turning tracks for target reports that correlated with more than one track

When actually performing the functions on all tracks,
the tracks to be corrected will be corrected, and then
all tracks will be predicted to their next position.
All tracks correlated during the last ⅛ second will be
updated; therefore, groups of tracks will be updated
eight times per second or thirty-two times per scan (4
seconds for a complete radar scan). To correct the
track's position, four equations must be solved.
These are:

(Xc)n = (Xp)n + α(XR - Xp)n    (1)

(Yc)n = (Yp)n + α(YR - Yp)n    (2)

(Ẋc)n = (Ẋc)n-1 + (β/t)(XR - Xp)n    (3)

(Ẏc)n = (Ẏc)n-1 + (β/t)(YR - Yp)n    (4)

where Xc means X corrected; XR means X reported
by radar return; Xp means X predicted; and α and β
are constants determined by the track's past history or
firmness.

The prediction equations are as follows:

(Xp)n = (Xm)n-1 + (Ẋc)n-1 T    (1)

(Yp)n = (Ym)n-1 + (Ẏc)n-1 T    (2)

where

(Xm)n-1 = (Xc)n-1 or (Xp)n-1

and

(Ym)n-1 = (Yc)n-1 or (Yp)n-1

After the prediction calculation, the turning tracks will be calculated. The turning track equation is expressed in terms of N (time), V (velocity), and R (rate of turn); a similar equation can be derived for the Y values.

Conflict detection

Figure 6 is a flow chart for a conflict detection scheme.
This algorithm uses X, Y oriented rectangles to do gross
filtering of the data. The remaining tracks that are
potential conflicts are then subjected to a detailed calculation involving the law of cosines to determine if the
circular shapes overlap. Any conflicts are then outputted to the HOST for conflict resolution and false
alarm checking.
Figure 7(a) shows the ideal conflict detection areas.
The circle around the airplane is an area of immediate
danger. The larger area is an area of potential future
danger. In order to effectively process a conflict algorithm, a search is made over rectangular areas surrounding the shapes. This is shown in Figure 7(b). Figure
7(c) shows the basic philosophy behind the conflict
equation. It is desired to know if any circles overlap;
however, this is a very hard search to accomplish. Refinements and approximations to this criterion designed

to shape the search areas more like those shown in
Figure 7(a) have been considered but are beyond the
scope of this paper.
Figure 7(c) shows the equation that is derived from
the conflict detection function. This equation is just
the Law of Cosines applied to the conflict detection
problem to determine if the circular shapes overlap. To
avoid a possible conflict the following must be true:

[ρ² + ρi² - 2ρρi cos(θ - θi)] - (R + Ri)² > 0.

This equation must be true for all aircraft being compared to the
aircraft being processed.

Figure 7(b)-Conflict areas for the conflict detection algorithm ("Large Square" and "Small Circle")

Figure 6-Overall conflict detection algorithm. *It is assumed that the HOST contains a Conflict Resolution Routine and all the AP must do is identify conflicts and pass them to the HOST.
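In present-day terms, the law-of-cosines test could be written as follows. This is an illustrative check only: ρ and θ are polar coordinates about the radar site, R is the protected radius, and the separation s between two aircraft satisfies s² = ρ² + ρi² - 2ρρi cos(θ - θi).

```python
import math

def in_conflict(rho, theta, R, rho_i, theta_i, R_i):
    """Law-of-cosines conflict test: True if the two protected circles
    overlap, i.e. if rho^2 + rho_i^2 - 2*rho*rho_i*cos(theta - theta_i)
    - (R + R_i)^2 <= 0. Angles are in radians."""
    s_sq = rho**2 + rho_i**2 - 2 * rho * rho_i * math.cos(theta - theta_i)
    return s_sq - (R + R_i)**2 <= 0

# Two aircraft 10 and 10.5 miles out on nearly the same bearing:
print(in_conflict(10.0, 0.00, 3.0, 10.5, 0.02, 3.0))   # True: circles overlap
print(in_conflict(10.0, 0.00, 3.0, 40.0, 2.00, 3.0))   # False: well separated
```

On the AP, the rectangular gross filter of Figure 6 prunes the candidate set first, so this detailed test runs only on the few tracks that survive the XYZ filtering.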
Figure 7(a)-The privileged airspace around an aircraft

Display processing

The display processing function will send the display
data to the HOST after filtering the data. It is assumed
that there is reserved storage in the HOST that contains the detailed filter information for each display
(i.e., for each display there is a node of data containing
the X, Y, Z limits that the display is controlling) and
information to assemble the display data in the HOST
refresh memory.
It is assumed that the display data are read out to the
HOST, the display data assembled, and the display
data entered into the refresh memory. This is done
twice a second; i.e., the complete display routine is
processed twice a second. The HOST will receive the

56

Spring Joint Computer Conference, 1971

GPID field of the word, X position, Y position, and
altitude, and thus can assemble the full and partial data
blocks, along with the tabular list information, as each
track is sent to the appropriate display refresh memory.
In order to perform this function the HOST needs a
table of information on each track stored in its memory.
This function is performed quite fast since it is all
searches and reading. All of the detailed display
information that does not change very fast is kept in the
HOST and can be identified by use of the GPID field
that is sent over with the positional information. The
algorithm for this function is given in Figure 8.

Figure 7(c)-Derivation of the conflict detection equation and one of its possible refinements

Figure 8-Display filtering algorithm (hand off, read out, controlled by, accepted by, and tabular functions; this algorithm requires that 10 bit slices be temporarily available)

Definition and allocation of memory fields

The Associative Processor contains the following
fields:
• Xc-corrected X position
• Yc-corrected Y position
• Z-altitude if the plane has a beacon transponder, otherwise zero
• BC-beacon transponder code, otherwise zero
• GPID-a code that identifies this track uniquely; allows the HOST to identify each track
• Firm-the firmness of the track (essentially a measure of the consistency of correlation of the track)
• α-a tracking coefficient for the X positional values derived from a least squares fit tracking algorithm
• β-a tracking coefficient for the Y positional values derived from a least squares fit tracking algorithm
• Temp-temporary storage fields
• CB-controlled by field. The number in this field designates the display that is controlling this track
• AB-accepted by field. The number in this field designates the display that has accepted the track if it was being handed off from one display to another
• HO-hand off to. This field designates the display the track is being handed off to
• RO-read out by. This field designates the number of the display reading out tracks other than those under its control
• TAB-tabular. This field designates the number of the display on whose tabular list this track appears
• Update Flag-designates tracks that correlated but have to have their positions corrected and predicted
• Conflict Detection Flag-designates tracks that need a conflict detection check
• Xp-predicted X position
• Yp-predicted Y position
• θp-predicted azimuth
• ρp-predicted ρ position
• Ẋc-corrected X velocity
• Ẏc-corrected Y velocity
Figure 9 shows the manner in which the fields of
each word have been allocated. When the system is
first initialized, there will be a momentary bottleneck
because a lot of data will have to be put into the RAM;
however, this bottleneck should be less severe than for a bit slice
processor. After this initialization has been accomplished, the number of changes in the information in the
RAM will be small. The fields were distributed between
the AM and the RAM in order to minimize output
from the RAM. None of the fields in the RAM portion
of the system are read out and sent to the HOST. They
are either fields that change slowly and have to be
written into the memory from data received from the
HOST (CB, AB, HO, RO, TAB) or fields that are calculated and never have to be read out to be sent to the
HOST (Flags, X p, Y p, Op, Pp, Xc, Yc).
In the associative memory, we have the data which
require that a quick output capability be available.
Data fields that we would like to be able to read out in
word parallel, such as Xc, Yc, Z, BC and GPID, have
been included in the AM. Also, data fields that need a
word parallel read and write capability, such as Firmness Factor (FIRM), α, and β, have been included in
the AM. This organization gives us the best speed solution to the input problem by overcoming the bit serial
input output problems of the non-distributed logic approach and the slow bit slice read of the distributed
logic approach.
Timing and I/O Data Estimates
The following timing and I/O estimates were made
assuming that (1) 512 tracks are contained in the associative processor; (2) 128 tracks must be correlated
per second (512 tracks per radar scan); (3) the updating
routine is processed eight times a second; (4) the conflict detection algorithm is processed on each track two
times per scan (that is 256 conflict detection checks are
made every second); (5) all display information is updated twice per second; and (6) a software organization as discussed in the next section is used.
In order to time the conflict detection algorithm it
was assumed that the maximum number of responses
to the XYZ filtering was 6 for the small square and 45
for the large square. Because of the accuracy required,
it was decided that the trigonometric functions would
be done fastest by table lookup from the ROM in the
controller.
Under the above assumptions it is estimated that,
with the speeds in Table II, the performance of the air
traffic control problem for 512 tracks will utilize 50
percent of the processor's capability. The 50 percent
use includes all overhead and bookkeeping functions.
Comparable estimates were made for a bit slice processor and a distributed logic processor. Using the speeds
given in Table II, both of these processors require approximately 65 percent of the processor's capabilities.
Therefore, the hybrid processor can handle approximately 30 percent more processing than either of the
other two processors.
Input to the AP and its controller is estimated at
1200 words per second. Output to the ARTS III is
estimated at 30,000 words per second. Therefore an
approximate total of 31,000 words of I/O per second
are anticipated for worst case operation. In the air
traffic control problem as formulated here, I/O does not
seem to present a major problem.
Software organization
A very simple organization of the AP and interface
is assumed. The HOST can transmit only data or one
of four instructions to the AP. The instructions are as
follows:
• Correlate (number of tracks)
• Update
• Conflict Detection Probe (number of tracks)
• Display
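A minimal rendering of this four-instruction interface is sketched below. The handler names and return strings are invented placeholders; the original programs were macroinstruction sequences held in the controller's read/write memory.

```python
# Sketch of the HOST-to-AP instruction interface described above.
# Handler bodies are placeholders, not the original macroinstructions.

def correlate(n_tracks): return f"correlated {n_tracks} reports"
def update(): return "tracks corrected and predicted"
def conflict_probe(n_tracks): return f"probed {n_tracks} tracks"
def display(): return "display data filtered and sent"

DISPATCH = {
    "CORRELATE": correlate,
    "UPDATE": update,
    "CONFLICT_DETECTION_PROBE": conflict_probe,
    "DISPLAY": display,
}

def ap_execute(instruction, *args):
    """The HOST transmits only data or one of the four instructions."""
    return DISPATCH[instruction](*args)

print(ap_execute("CORRELATE", 128))
```

Each handler would in turn expand into macroinstructions (multiply, add, less-than search) executed as microprograms from the controller's ROM.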

Figure 9-Allocation of fields in the memory word. The 104 bit AM word holds the Xc, Yc, Z, BC, GPID, Firm, α, β, and Temp fields; the 128 bit bipolar RAM word (one bit of 128 different RAM words used as a 128 bit Associative Processor word) holds the CB, AB, HO, RO, and TAB fields, the update and conflict detection flags, and the Xp, Yp, θp, ρp, Ẋc, and Ẏc fields.

The AP only transmits the results of the performance
of the above functions to the HOST. The programs for
the instructions are stored in the 2000 word read/write
memory in the AP controller. These programs are
written in terms of the AP's macroinstructions (multiply, add, less than search, etc.), which in turn call for the
appropriate microprogram to be executed from the controller's ROM. The microprograms
are written in terms of the basic machine instructions
such as equality search, bit slice read, word read, etc.

CONCLUSION

A new type of associative processor has been designed. This processor combines the best properties of
the bit slice and distributed logic associative processors.
The processor provides the flexibility that will enable
it to outperform either of the other two processors on
most applications. In typical applications, the processor
can handle 30 percent more processing than either of the
other two types of processors.
In general, the processor has the I/O and equality
search capabilities of a distributed logic associative
processor combined with the bit slice and arithmetic
processing capabilities of a bit slice processor, thus
making it more effective than any other associative
processor. This processor overcomes the main drawback of current associative processors, i.e., I/O
problems.
The processor was applied to the air traffic problem.
It was sized for 512 tracks, which corresponds to a 1975
traffic load for most terminal areas with a 64 mile
radius. A microprogrammed controller was used to
provide future flexibility. The processor was about 50
percent loaded (for 512 tracks) considering overhead
functions. This processor can provide a viable solution
for the ATC problem.
The air traffic control system used memory speeds
that are available from current MOS associative memories and off the shelf bipolar RAM's. The processor is
built from a combination of a distributed logic associative memory and a bipolar RAM. The processor has
one word per track (104 bits AM and 128 bits RAM
per word) or 512 words of memory. Each word has a
serial adder plus associated registers. Several tables
(ROM) are needed because certain functions will be
performed by table lookup.

ACKNOWLEDGMENT
The author wishes to thank L. D. Wald and D. C.
Gunderson for their assistance and patience in helping
the author gain an understanding of associative techniques. Thanks are also given to L. D. Wald for the details of the control unit for the associative processor.
The author wishes to thank the following personnel of
the FAA for their help in understanding the air
traffic control problem: Lawrence Shoemaker, James
Dugan, John Harrocks, and Jack Buck.

REFERENCES
1 L D WALD
MOS associative memories
The Electronic Engineer August 1970 pp 54-56
2 L D WALD
An associative memory using large scale integration
National Aerospace Electronics Conference Dayton Ohio
May 1970
3 A G HANLON
Content-addressable and associative memory systems-A
survey
IEEE Transactions on Electronic Computers Volume EC-15
No 4 1966 pp 509-521
4 J A GITHENS
A fully parallel computer for radar data processing
National Aerospace Electronics Conference Dayton Ohio
May 1970
5 J C MURTHA
Parallel processing techniques in avionics
National Aerospace Electronics Conference Dayton Ohio
May 1970
6 J W BREMER
A survey of mainframe semiconductor memories
Computer Design May 1970 pp 63-73
7 R E LYONS
The application of associative processing to air traffic control
1er Symposium International sur la Régulation du Trafic
Aérien Versailles June 1970 pp 6A-31 to 6A-40
8 J A RUDOLPH et al
With associative memory, speed is no barrier
Electronics June 22 1970
9 N A BLAKE J C NELSON
A projection of future ATC data processing requirements
Proceedings of the IEEE March 1970

APPENDIX-DESCRIPTION OF AN
ASSOCIATIVE MEMORY
An associative memory (AM) is a device that combines logic at each bit position along with storage
capacity. An n-word AM with p bits per word can store
n binary words of p bits. In addition, certain logic
operations can be performed on the words stored in the
AM. In particular, search operations can be performed
simultaneously, over all words. These operations can
identify words in the memory that are related to the
externally supplied test word. For this reason AM's
are sometimes referred to as content addressable memories (CAM). The types of operations that can be performed are:
• Fully parallel maskable equality search
• Bit serial inequality searches
• Bit serial incrementation of fields
• Bit serial maximum (minimum) search (identifies the maximum or minimum stored word)

A brief example is given to illustrate the use of an
associative memory. An eight-word associative memory,
with four three-bit fields, is shown in Figure 10. In
addition to the memory that stores the words, an AM
must have a search register for storage of the word to
be compared with the stored words, a mask register to
designate which of the bit positions of the search word
are to be included in the search operation, a results
register for storing the results of the search, and a word
select register to select the words to be searched over.
For the example, word seven has not been selected as
shown by the contents of the word select register in
Figure 10. In Figure 10, the contents of the mask register show that only the first field of the search register
is to be included in the search. An equality search
operation in the above associative memory will result
in the simultaneous comparison of the contents of the
first field of the search register with the contents of the
corresponding field of all stored words. It can be noted
that only stored words three and six satisfy the search
and are therefore identified by 1's in the results register
after the search. Word seven would have satisfied the
search; however, it was not in the set of words designated for performance of the search by the word select
register.

Data Register:  010 111 000 000
Mask Register:  111 000 000 000

          Stored fields       Select  Results
WORD 1    110 111 101 110       1       0
WORD 2    011 111 101 110       1       0
WORD 3    010 110 101 111       1       1
WORD 4    101 110 101 101       1       0
WORD 5    110 000 001 001       1       0
WORD 6    010 110 000 010       1       1
WORD 7    010 110 010 110       0       0
WORD 8    111 111 011 110       1       0

Figure 10-An associative memory

In many associative memory applications, such a
search operation would normally be followed by a readout operation (whereby the identified words are sequentially read out) or another search operation (in
which case the search results register would be transferred into the word select register). One notes that a
series of searches can be performed and the results
ANDed together if the results in the search results
register are used as new contents of the word select
register.
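The example can be replayed in a few lines. This simulation is illustrative only: it applies the masked comparison word by word, whereas the hardware performs all comparisons simultaneously.

```python
# Simulation of the masked equality search of the Figure 10 example:
# the mask selects the first 3-bit field, and word 7 is excluded by the
# word select register. The hardware compares all words at once; this
# list comprehension is sequential.

def masked_search(words, search, mask, select):
    """One search-results bit per word: 1 where the word matches the
    search register on every masked bit and the word is selected."""
    return [int(sel and (w & mask) == (search & mask))
            for w, sel in zip(words, select)]

words  = [0b110111101110, 0b011111101110, 0b010110101111, 0b101110101101,
          0b110000001001, 0b010110000010, 0b010110010110, 0b111111011110]
search = 0b010111000000          # first field = 010
mask   = 0b111000000000          # only the first 3-bit field participates
select = [1, 1, 1, 1, 1, 1, 0, 1]   # word 7 deselected

print(masked_search(words, search, mask, select))  # 1's at words three and six
```

Feeding the results back in as the new `select` list gives the ANDed series of searches described above.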
A multiple match resolver (MMR) is also an integral
part of the memory. This is indicated by the arrow in
Figure 10. The MMR indicates the "first match" in
the memory if there were any matches.

A computer aided traffic forecasting technique-The trans-Hudson model
by EUGENE J. LESSIEU
The Port of New York Authority
New York, New York

INTRODUCTION

The transportation problems of the New York Metropolitan region are many and diverse, and there are
several major governmental agencies concerned with
and working towards solutions to these problems.
Among these problems is that of planning, providing
and maintaining transportation facilities across the
Hudson River between the states of New York and
New Jersey. Although the trips across the river constitute only a small part of the total regional travel,
they amount to over one million trips a day.
Over the past years, the Port Authority has collected
and analyzed much data on the volume of traffic
crossing the river by all modes. It has also conducted
origin and destination surveys to study many of the
characteristics of this trans-Hudson traffic. Through
the years, the region has grown, the data have become
more voluminous, and the analysis more complex. It
was becoming more difficult to do comprehensive
research and analysis with the data, and it was apparent that some sort of formalized information
system was necessary to research the trend changes
in the volume and the pattern changes in the O and D.
The purpose of traffic research is primarily forecasting. If one understands the reasons for traffic
changes as they occur, then one can more reliably predict future traffic changes based on these reasons.
Traffic may shift to a new facility because it makes
travel faster. Traffic may grow at one facility and not
at another because it is a lower cost facility, or because
rapid development is taking place in the market area
of one facility and none in the other.
With this as background, the Port Authority
embarked on the development of a system of traffic
data handling aimed at researching traffic patterns and
their influences, and forecasting traffic based on this
research program. Because of the large amount of data
available and complex research techniques applicable
only to computer solution, the use of a high speed
computer as a tool was mandatory.

REVIEW OF AVAILABLE TOOLS
There have been many urban transportation studies
in the past decade. Techniques have varied but in
most cases new methods are built upon old ones. When
the Port Authority decided to embark on the transHudson study, it was natural to review all existing
processes. It was discovered that the focus of most
other studies was generally to depict and forecast all
traffic patterns in an entire region with perhaps a
special focus on the Central Business Districts (CBD).
With the Port Authority's major focus being on only
the Hudson River Crossings, it was decided that much
of the theory and many of the techniques applied were
inappropriate for our problem.
There are many theories of movement: gravity model,
intervening opportunity, etc. Most of these, however,
are strongest in describing the phenomenon that most
trips are short trips; as distances increase between
zones, fewer trips occur. In the traffic that crosses the
river, there are few short trips. Most of the trips across
the river are major trips, not simply going down the
street for a quick shopping trip (which, by the way,
makes up a considerable part of the region's total
travel). The feeling was that if we were to isolate these
major trips from total trips we would have to have additional theories and a different system from that used
by others.
With regard to techniques, most of the other studies'
end product was a traffic assignment on each link


(representing a transportation system segment) of a
multi-link network (sometimes thousands of links). With only
the few links that cross the Hudson River of interest
to us, many of the efficiencies of the existing techniques
would be wasted on our problem.
There was, however, one technique which we considered indispensable, and that was the general network tracing and least-path calculating process. There
were several programs available that used Moore's
algorithm, or some adaptation thereof, that we could
count on using.
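Moore's algorithm is, in modern terms, a label-correcting shortest-path scan: relax every arc out of a node and re-queue any node whose label improves. The following compact Python sketch is illustrative; the network, node names, and travel times are invented and are not the Port Authority's program or data.

```python
from collections import deque

# Label-correcting least-path calculation in the spirit of Moore's
# algorithm. The network below is invented for illustration.

def least_paths(arcs, origin):
    """arcs: {node: [(neighbor, travel_time), ...]}. Returns the minimum
    travel time from origin to every reachable node."""
    label = {origin: 0}
    queue = deque([origin])
    while queue:
        u = queue.popleft()
        for v, t in arcs.get(u, []):
            if label[u] + t < label.get(v, float("inf")):
                label[v] = label[u] + t
                queue.append(v)          # re-examine v's successors
    return label

network = {"NJ": [("GWB", 10), ("Lincoln", 12)],
           "GWB": [("Manhattan", 8)],
           "Lincoln": [("Manhattan", 4)]}
print(least_paths(network, "NJ"))
```

With only a handful of river-crossing links of interest, such a tracing pass over the network yields the travel times and paths needed to describe proximity between zones.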
There was available in house a computer, magnetic
tapes containing all the trip data from our O and D
surveys, and the rudiments of a computerized data
bank. This bank had a data matrix of 180 X 180 cells
with space for 50 pieces of information in each cell.
There were programs available to put the O and D
data and other data into the bank. There were programs
to modify the data once in the bank, and there were
programs to extract and manipulate the data so that
they could be fed into other standard analytical programs. We decided to use this data bank and modify
it to our needs.
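The cell arrangement of such a data bank can be sketched in a few lines. The slot meanings and values below are hypothetical; the actual bank held 50 items per cell in a 180 × 180 matrix, stored on tape.

```python
# Sketch of a zone-indexed data bank: an i-j cell matrix with a fixed
# number of data items per cell. Slot meanings are hypothetical.

N_I, N_J, N_ITEMS = 180, 180, 50    # zone matrix size, items per cell

bank = [[[0.0] * N_ITEMS for _ in range(N_J)] for _ in range(N_I)]

def put(i, j, item, value):
    bank[i][j][item] = value

def get(i, j, item):
    return bank[i][j][item]

put(12, 34, 0, 1500.0)   # e.g. slot 0 = peak-period person trips (assumed)
print(get(12, 34, 0))
```

Extraction programs then read selected slots across a range of i-j cells and hand them to standard analytical routines such as the regression program mentioned below.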
Of equal importance to us was our finding that an
existing multiple regression program was available
that would accept our data in both size and format,
had the flexibility to manipulate the data easily and
produced printed results sufficient for analysis.
It seemed, then, that we had sufficient tools to put a
whole system together and that we could start, get
results, and improve the system as we went along.
Some interesting comments regarding this assumption
are related later in the paper.

system by a three-stage process: (1) trip generation
or interchange; (2) modal split; and (3) traffic assignment. Trip interchange concerns the total number of
person trips between zones. Modal split describes the
process of determining what share of the total trips
will be made by each mode. Traffic assignment is the
term used to describe which route or which specific
transportation facility will be used once the mode of
travel is chosen.
As mentioned earlier, we had collected a great deal
of origin-destination data on the various modes of
transportation across the river. After a long study of
trip data it was decided that, in order to get an explainable group of trips, the trips should be segregated
into several sets. First, peak period travel and off peak
travel were known to exhibit entirely different characteristics, particularly with regard to modal choice,
but also with regard to associating travel times and
costs to the trips, since congestion is greater in the peak.
The second separation deemed necessary was a classification by residence, since trips from A to B would

DESIGN OF THE RESEARCH AND
FORECASTING SYSTEM
Knowing the tools available, the system was designed
around them. It was necessary, of course, to review
and organize the input data and to specify the output
requirements. Further demands on the system were
that there had to be a complementary flow of data
through the system for research and testing and forecasting, and the system should be designed to provide
for continuous use and change as later data become
available.
Our output requirements were specified, of course, by
the job we set out to do-forecast trips across the
Hudson River by facility. Exactly which process to
use to get down to the level of facility traffic forecasting
was considered in depth. Standard metropolitan
transportation studies had usually developed the

Figure 1-Hudson River crossings

Computer Aided Traffic Forecasting Technique

have different modal choice and different trip generating characteristics depending on whether the home
based end were A or B.
To describe mode and facility classification of these
trips, a little geography of this region is necessary. The
map shows the river crossings available. There were
seven vehicular crossings: The Tappan Zee Bridge,
George Washington Bridge, Lincoln Tunnel, Holland
Tunnel and three Staten Island bridges. There were
three rail facilities: PATH downtown (to Hudson
Terminal), PATH uptown and the Pennsylvania Rail
tunnel. There were two railroad passenger ferries and
there were two locations where major flows of interstate buses occurred.
Because of space limitations in the data banks and
because analysis of the system revealed that specific
definition of some crossings was unnecessary for our
future forecasting requirements, it was decided to
collapse the crossings into the following mode and
facility groups:
Auto mode-Tappan Zee Bridge
George Washington Bridge
Lincoln Tunnel
Holland Tunnel
three Staten Island Bridges
Bus mode-P. A. Bus Terminal (at Lincoln Tunnel)
George Washington Bridge Bus Station
Rail mode-Penn Station
PATH downtown (Hudson Terminal)
PATH uptown
CNJ ferry

Traffic using the other ferry (Erie Lackawanna Rail passenger ferry) was included with the PATH
downtown traffic because the two crossings were parallel and served an identical market, and it
was known that the ferry service was soon to be eliminated.

Up to now, we have eleven facilities within three modes and four classifications of trips (peak,
off-peak, residence east, residence west). In order to get data to explain why trips might be made
over one facility or another, or one mode versus another, travel network characteristics data had
to be collected. The items of data we felt would be important, could be collected, and could be
forecasted were travel time, travel cost and number of transfers.

The trip interchange part of the forecasting problem is probably the most difficult in deciding
what information is needed to study trip interchange characteristics and attempt to forecast
future trip volumes. Many variables can be included in the study part of it in developing
relationships that explain differences in trip making. But it must be remembered that only those
explanatory variables that themselves can be forecast can be used to explain trips if one wishes
to forecast as well as explain. With these restrictions we chose population, employment and area
(so that densities could be used) and some description of proximity. It was the latter item that
established the basis for the construction of the master program that ties together the entire
forecasting system.

The data needed to cover the complete range of studies and models planned had to be placed in a
data bank so that it was readily accessible for both developing the models and using them for
forecasting. A data bank is nothing more than an arrangement of information stored (on tape) in
some meaningful indexed form. In the system developed, the index was geographical zones.

The data bank programs that had already been developed had space for a 180 × 180 zone
classification, but we used only part of this. We classified 100 zones west of the Hudson as "i"
zones and 80 zones east of the river as "j" zones. The reference index then contained 8,000 i-j
cells that could be referenced by an i-j number. A map of these zones is shown in Figure 2.
Within each of the cells we had space for 50 different data items. With reference to the earlier
description of data it can be seen that we had eleven facilities and four classifications of
trips (residence east, residence west, peak and off-peak). We also had time, cost and transfer
data for each of the facilities, and population, employment and area data for each of the zones.
Summing these up:

5 auto facilities × 2 network variables       = 10
6 transit facilities × 3 network variables    = 18
11 facilities × trips for 2 residence classes = 22
4 demographic variables                       =  4
space for new facilities in forecast years    = 10
                                        Total = 64
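The cell arrangement described above can be sketched as a three-dimensional array; the dimensions
follow the text (100 i zones, 80 j zones, 50 item slots), while the stored values and the item
numbering below are purely illustrative:

```python
import numpy as np

# Illustrative sketch of the data bank layout: 100 "i" zones west of the
# Hudson by 80 "j" zones east of it, each i-j cell holding 50 item slots.
N_I, N_J, N_ITEMS = 100, 80, 50
bank = np.zeros((N_I, N_J, N_ITEMS))

def put(bank, i, j, item, value):
    """Store one value; zones and items are 1-indexed, as in the text."""
    bank[i - 1, j - 1, item - 1] = value

def get(bank, i, j, item):
    return bank[i - 1, j - 1, item - 1]

# e.g. an auto travel time (minutes) via one facility, for one i-j pair:
put(bank, i=12, j=3, item=1, value=38.5)
```

The 100 × 80 index gives exactly the 8,000 i-j cells mentioned in the text.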

It can be seen that the 50 data item spaces of a single bank were easily exceeded, and it was
necessary to devise a method to utilize more than one bank. Such a method was developed which, in
essence, simply keyed to the fact that a single bank was only critical when using it as input to
the forecasting system. In the model development stages, separate banks could be used for each of
the models: assignment, modal split and trip interchange. A listing of the data in the various
banks developed is shown in Figure 3.
Considering the data bank limitations and the fact that peak and off-peak traffic differ in many
respects, it was naturally decided to approach the peak and off-peak as two separate and distinct
efforts, and to forecast each time period independently. A set of banks of similar format but with
entirely different data is used for the off-peak. The only similarities between the peak and
off-peak are the process used and the demographic data.

From inspection of the data bank listing it can be seen that the first Data Bank (1964 I) contains
the most fine-grained data: facility network data and facility trips. This bank is used for
developing the assignment model. A secondary Data Bank (1964 II) was developed (it could have been
placed in the same bank except for lack of space) by collapsing the facility network data to mode
network data through the concept of a weighted average of the facility network data, using the
existing trips as the weighting factor. The second bank also contains weighted average total
network data developed by a similar concept of using the existing modal trips as weighting
factors. This bank is used for developing the modal split and trip interchange models and
therefore also contains population, employment and area data.

For testing the models a data bank similar to Data Bank I is used, since the fine-grain facility
detail is necessary. Modal total trips are also in this bank (1964 M) so that the assignment and
modal split models can be tested on some existing base total. Forecasting is done with a similar
bank (1985 M) which contains estimated future network characteristics and zone populations and
employments. The process is more fully explained under the discussion of the master program.

FLOW OF INFORMATION THROUGH THE
SYSTEM

Figure 4 shows how the basic data is gathered and made use of within the system. The source data
has been discussed in general earlier.

Spring Joint Computer Conference, 1971

Figure 2-Port Authority analysis zones

Figure 3-Data bank formats
[Figure 3 lists the 50 item slots of each of the four banks: 1964 I (facility-level times, costs,
volumes and transfers), 1964 II (trip-weighted mode averages plus demographic data), 1964 M
(facility network data plus modal total trips) and 1985 M (forecast network data, with slots for
new facilities and space reserved for output). Legend: t = travel time, c = travel cost,
V = trip volume, F = transfers, P = population, E = employment, PC = parking cost; A = auto,
B = bus, R = rail; AA, AB, AR = weighted averages; i = zones west of the river, j = zones east of
the river; Ea = residence east, We = residence west.]
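The collapse of facility network data into mode network data by trip-weighted averaging, as
described for Data Bank 1964 II, can be sketched as follows; the facility times and trip counts
below are invented for illustration:

```python
# Sketch of collapsing facility data to mode data: average the facility
# values, using the existing trips over each facility as the weights.
def weighted_average(values, weights):
    """Trip-weighted average of per-facility values."""
    total = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total

# Auto travel times (minutes) via five crossings for one i-j pair,
# weighted by the existing trips over each crossing (made-up numbers):
facility_times = [32.0, 28.0, 35.0, 40.0, 50.0]
facility_trips = [500, 1200, 300, 100, 50]
mode_time = weighted_average(facility_times, facility_trips)
```

Heavily used crossings dominate the average, so the mode-level time stays close to the times
travelers actually experience.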


The determination of the proper values to be placed
in the data bank merits some attention. Auto trip data
were taken from the continuous sample origin-destination surveys taken at the Port Authority's facilities
and the Tappan Zee Bridge. Bus trip data were based
on origin-destination surveys taken at the two bus
terminals. Rail trip data, including the PATH system,
were synthesized from a PATH origin-destination
survey, origin-destination surveys of those rail lines
involved in the Aldene Plan (Central Railroad of New
Jersey; Pennsylvania Railroad-Shore Branch), from
various railroad conductor counts, and from the Manhattan Journey-to-Work Surveys taken in 1961-1962.
For auto times and costs, it was necessary to build
peak and off-peak link and node networks. Travel time
for each facility was calculated along the minimum
time path with all of the other trans-Hudson facilities
removed from the system. Travel distances were found
by skimming over those paths. Costs were based on
over-the-road costs of 2.8¢ per passenger mile, plus
tolls and average parking costs. The 2.8¢ figure was
developed by an independent study and was based on
out-of-pocket vehicle costs divided by average vehicle
occupancy.
The bus and rail time, cost, and transfer matrices
were developed by adding rows and columns for what
might be called a common point network. Travel times
were determined from each zone west of the Hudson
to a Manhattan terminal (Penn Station for example).
Then travel times were determined from that terminal
to each zone east of the Hudson. The same was done
for costs and transfers. This depicted, quite naturally,
how a bus or rail trip is made, and it was necessary only
to add the rows and columns to determine the full i - j
matrix of all time, cost, and transfer data.
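The row-and-column addition described above amounts to an outer sum of two vectors; a minimal
sketch with made-up times:

```python
import numpy as np

# Common-point construction: times from each west zone to a Manhattan
# terminal, plus times from that terminal to each east zone, added
# row-and-column fashion to give the full i-j matrix. Values are invented.
west_to_terminal = np.array([25.0, 40.0, 55.0])  # minutes, one per i zone
terminal_to_east = np.array([10.0, 20.0])        # minutes, one per j zone

# Outer sum: entry (i, j) = west_to_terminal[i] + terminal_to_east[j]
full_matrix = west_to_terminal[:, None] + terminal_to_east[None, :]
```

The same outer sum produces the cost and transfer matrices from their own row and column vectors.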
Population, employment and area data were extracted and updated from Federal and State Census
Data.
It is interesting, from an information handling aspect,
to describe some of the trials and tribulations of
manipulating the various pieces of input data from
their original form to a finally completed data bank.
First, a guiding decision was made in the development stages that all attempts would be made to use
"in house" computer services. The Port Authority
had for internal use both an IBM 7070 and an IBM
360-40. A data bank had already been built for some
similar work using the 7070. We had also done some
earlier work on the auto networks using the Control
Data Corporation's Tran-Plan Programs and some
further work using the Bureau of Public Roads Transportation Planning programs run on an IBM 7094.
To make a long story short, we decided to utilize
as many of the existing programs as possible to build


Figure 4-Model development process

the data bank. The auto network data was run on the CDC 3600 and converted on the 3600 to input
compatible with the 7070 programs. The bus and rail network data was constructed from punch cards
on the IBM 7094 with the BPR program and converted to the input format with that same CDC 3600
program.
The trip data was transferred to the desired input
format from its original state by an IBM 360 program
and converted to proper input format for the data
bank, with the IBM 7070. The population, employment
and area data was directly input to the bank from
punched cards.
The data extract program was written originally for
the 7070 and we used it in the early stages to extract
data compatible for input to a known regression program run on an IBM 7094.
In the course of the work all new programs were
written for use on the IBM 360. Further, all the programs originally written for the 7070 were rewritten
for the IBM 360 and made more flexible in the process.
We also hunted up regression programs and altered

them for our needs, so that regressions too could be
run on the IBM 360. During the progress of the work,
the Port Authority in-house computer was changed
to an IBM 360-75 and all programs were modified where
necessary to operate on it.
As is the case with all information handling systems,
errors can be expected in processing data from one
stage to another. It is necessary then to have some
means of correcting the errors. A flexible data bank
update program was developed to accomplish this.
Since the amount of data in the bank is so large, the
program not only provides for correcting individual
pieces of data, but also provides for massive changes
with a few simple instructions. If, for example, it was
discovered that the bus travel time on the west side
of the river was five minutes too short from one zone
it would mean that travel time from that zone to all
zones on the other side would be wrong. Correcting
this can be done with three punched cards. This update program is also used extensively to create
forecast year network data and to change this data to describe many different alternate
transportation systems for
study.
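A minimal sketch of the kind of mass correction described (raising every bus time out of one zone
by five minutes); the data structure and values are illustrative, not the actual bank format:

```python
# Sketch of a mass update: if bus time from one west-of-Hudson zone was
# five minutes too short, every time from that zone to every east-of-Hudson
# zone must be raised together.
def mass_correct(times, from_zone, delta):
    """Add `delta` to every entry in one row of a times table."""
    for j in times[from_zone]:
        times[from_zone][j] += delta
    return times

# Bus times (minutes) from zone 7 to three east-side zones (made up):
times = {7: {1: 40.0, 2: 55.0, 3: 62.0}}
mass_correct(times, from_zone=7, delta=5.0)
```

One instruction touches a whole row at once, which is the economy the three-punched-card
correction in the text achieves.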
Once the data is in the data banks and verified as
correct, it is necessary to extract this data, in certain
pre-determined groups and in certain formats in order
to perform the regressions to develop the models. For
each of the models, the data is extracted in a different
form.
For the assignment model, we used a rating system
for each of the facilities and a rating had to be calculated and placed on the extract tape along with the time,
cost and transfer difference for each of the facilities.
The modal split being done in two stages required two
separate extracts. Further, a classification system was
used in the Bus vs. Rail modal split and each class
required a separate data extract. The trip interchange
also used a classification system that required many
separate extracts to properly group the necessary data.
Since multiple regression programs vary greatly as
to their capacity and flexibility, it is necessary to
formulate the data in the extract stage so that it can
be easily used by the regression program. We have
used a number of regression programs, none of which
we can get to work on our own computers with the
flexibility we would like. Both the extracting of data and the preparation of the instructions to
the regression program are tedious jobs. They require rigorous attention to detail. Ratios cannot
be calculated by dividing by zero; natural logs cannot be taken of negative
numbers, etc.
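Guards of the kind the text describes can be sketched as below; the data rows and thresholds are
illustrative:

```python
import math

# Guards needed when formatting extract data for a regression program:
# ratios cannot be formed with a zero denominator, and logs cannot be
# taken of zero or negative values.
def safe_ratio(num, den):
    return None if den == 0 else num / den

def safe_ln(x):
    return None if x <= 0 else math.log(x)

# Made-up (trips, employment) rows; rows that cannot be transformed
# are dropped before they reach the regression program.
rows = [(120.0, 0.0), (80.0, 16.0)]
cleaned = [(t, safe_ln(e)) for t, e in rows if safe_ln(e) is not None]
```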
Model development also is not a simple undertaking. If a hypothesis is stated such that
trips = k + a(population) + b(employment) + c(travel time), and the regression shows that this is
not a linear relationship, then other
forms of the variables might be tried such as logs, ratios,
powers, etc. If this shows a poor fit then data might
be reclassified so that different groups will be included.
If this proves negative, then new variables have to be
sought to try to explain the variation in trips. The latter
approach also presents problems for, as explained
earlier, every explanatory variable used for forecasting
must be forecastable itself. This means that each
search for a new variable must be thorough to the
point of satisfying this condition.
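The search over alternative variable forms (logs, powers, and so on) can be sketched as below;
the synthetic data and the candidate forms are assumptions for illustration, not the authors'
procedure:

```python
import numpy as np

# Try several forms of one explanatory variable and keep the best fit.
# The data are synthetic, generated with a log relationship by design.
rng = np.random.default_rng(0)
pop = rng.uniform(10, 250, 40)                       # population (000)
trips = 44.3 * np.log(pop) + rng.normal(0, 3, 40)    # log-shaped trips

best = None
for name, x in [("linear", pop), ("log", np.log(pop)), ("square", pop**2)]:
    A = np.column_stack([x, np.ones_like(x)])        # slope + intercept
    coef, res, _, _ = np.linalg.lstsq(A, trips, rcond=None)
    sse = float(res[0]) if res.size else 0.0         # sum of squared errors
    if best is None or sse < best[1]:
        best = (name, sse)
```

Each candidate form costs one more regression run, which is why the text stresses that the
extract and instruction preparation must be made routine.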
A brief description of the models that have been
developed to date and the theory behind their development is contained in the following section.

Assignment models

The assignment technique employed is based on the
concept that each crossing facility within a mode of
travel competes with all others for the trips to be
made within that mode between each origin-destination pair. While it is true that there is also competition
between modes, as well as between facilities within a
mode, considerable literature is in existence that indicates there are different factors that govern choice of
mode. These factors might not easily be handled in an
allocation method that does not specifically identify
the mode. The technique considered for allocation
within mode does not necessarily identify the facility,
per se, in its concept.
The assignment model is based on a rating system
first introduced by Cherniack. The concept assumes
that the traveler compares the travel time, travel cost,
and, in the case of bus and rail, the numbers of transfers
for the alternatives he can choose from. In evaluating
the alternatives, the traveler perceives the fastest
facility and compares that time to the times of the
other facilities; he perceives the least expensive facility
and compares that cost to the costs of the other facilities; he perceives the most convenient alternative and
compares it to the others; or, more realistically, he
perceives some combination of all factors. He then
rates the alternate facilities and gives the highest
rating to the one that he perceives to have the best
combination of time, cost, and convenience and a lesser rating to those he perceives to not have
these advantages. Conversely, if the use of each facility is based on
the cumulative rating of all users, then each facility
could be given a rating based on its traffic volume
compared with the traffic volume of all other competing
facilities. The facility with the highest volume gets the
highest rating and others, comparatively lower ratings.
Using multiple regression techniques the relationship

Computer Aided Traffic Forecasting Technique

between these three factors and the comparative usage
of the facilities was explored for each mode. The function considered can be expressed as follows:

R1 = T1/TH = f(t1 - ts, c1 - ce, F1 - Ff), where

R1 = rating of facility 1; the ratio of trips via facility 1, T1, to trips via the facility most
heavily used, TH,
t1 = door-to-door travel time via facility 1,
ts = door-to-door travel time via the fastest facility,
c1 = travel cost via facility 1,
ce = travel cost via the least expensive facility,
F1 = number of transfers via facility 1,
Ff = number of transfers via the facility with the fewest transfers.

The R value or rating will equal 1.0 if the facility in question is the most heavily used and will
be less than 1.0 for all lesser-used facilities. Also, the differences will equal zero if the
facility in question is the best for the particular transportation variable. The ratings and the
differences (to be known as Δt, Δc and ΔF for time, cost, and transfer differences, respectively)
Figure 6-Bus assignment models
(Panels: peak CBD, peak non-CBD, off-peak CBD, and peak combined CBD & non-CBD; rating R plotted
against Δt in minutes and Δc in cents.)

are calculated for each facility within each origin-destination pair for each mode. Thus, for the automobile allocation model, where five auto crossings are
considered, each origin-destination pair can theoretically contribute five data points. In this study, each
origin-destination pair contributed fewer since only
those facilities that were within twenty minutes of the
fastest were deemed worth considering. Needless to
say, few if any trips were found in that excluded
category.
When using the model to forecast facility usage, it is
not necessary to find the most heavily used facility.
The rating for each facility, being the dependent
variable, is determined by the time, cost, and transfer
differences. The share of the total traffic for each facility
is the ratio of its rating to the sum of all the ratings.
Graphs of the models are shown in Figures 5, 6 and 7.
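Forecasting shares from ratings, as described above, can be sketched as follows; the rating
coefficients here are invented placeholders, not the fitted values shown in the figures:

```python
# Share traffic among competing facilities: predict each facility's rating
# from its time and cost differences against the best facility, then take
# its rating over the sum of all ratings. Coefficients are illustrative.
def share_traffic(deltas, a=-0.03, b=-0.01, k=1.0):
    """deltas: list of (dt_minutes, dc_cents) vs. the best facility."""
    ratings = [max(k + a * dt + b * dc, 0.0) for dt, dc in deltas]
    total = sum(ratings)
    return [r / total for r in ratings]

# Three competing crossings; the first is fastest and cheapest (dt = dc = 0),
# so it takes the largest share.
shares = share_traffic([(0, 0), (5, 10), (12, 25)])
```

Note that no facility need be singled out as "most heavily used" when forecasting: the differences
alone determine each rating, and normalizing by the sum of ratings yields the shares.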

Figure 5-Auto assignment models

Modal split models

Having studied many approaches to modal split as well as having tried a few ourselves, it was
decided to

Figure 7-Rail assignment models

attempt to develop the modal split models in two stages: first would be a split between the two
forms of public transport, bus and rail; then a split between public transport and auto. The
reason this approach was taken was that the motivations for using auto or public transport seemed
to us entirely different from a choice between two different means of public transport. The
regression runs at least partially proved us correct.

The bus versus rail model was derived by regressions using travel time, travel costs and number of
transfers. Early attempts at deriving meaningful models indicated that we were not explaining
nearly enough of the variation with just those variables. It was then decided to investigate some
sort of service index. Considering the problems of forecasting an exact service index, we chose
instead to classify areas according to a frequency-of-service ratio. In that way, we could be
reasonably sure we could approximate this classification for forecasting purposes. The bus vs.
rail models were subsequently grouped into four groups and modeled separately. These groups were
bus predominant, competitive, rail predominant and PATH areas. Each zone was classified into one
of these groups based on the ratio of service frequency of bus and rail. The latter group was
separated out because PATH was a rail service that had a frequency of service more like a bus
service, and did not fit within the definition of the classification index.

Trial runs of the early bus vs. rail models indicated that certain zones on the trip destination
end, particularly the Manhattan CBD, were being systematically over- or under-estimated. A search
for reasons indicated that the Lower Manhattan area, the focus of most of the NJ rail service,
exhibited entirely different characteristics than the remainder of the CBD. When we separated
this area and ran separate models, the explanation of the variation was much higher and conversely
the reactions to the remaining variables were much lower.

The following were the equations finally derived for the bus versus rail modal split:

Bus zones:
B/(B+R) = .80 + .00373 (tr - tb) + .337 (Cr - Cb)                      (R = .55)
Competitive zones, downtown:
B/(B+R) = .231 + .0064 (tr - tb) + .522 (Cr - Cb)                      (R = .41)
Competitive zones, other CBD:
B/(B+R) = .436 + .0139 (tr - tb) + .65 (Cr - Cb)                       (R = .73)
Rail zones, downtown:
B/(B+R) = .261 + .1705 (tr/tb) + .1868 (Cr/Cb)                         (R = .36)
Rail zones, other:
B/(B+R) = .928 + .745 (tr/tb) + .4829 (Cr/Cb) + .038 (Fr - Fb)         (R = .65)
PATH zones:
B/(B+R) = .220 + .0055 (tr - tb) + 1.01 (Cr - Cb) + .097 (Fr - Fb)     (R = .94)

where
B/(B+R) = ratio of bus trips to total transit trips,
tr = rail travel time (minutes),
tb = bus travel time,
Cr = rail travel cost (dollars),
Cb = bus travel cost,
Fr = number of rail transfers,
Fb = number of bus transfers,
R = multiple correlation coefficient.
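As a check on how such an equation is applied, here is the PATH-zone equation evaluated with
illustrative inputs (times in minutes, costs in dollars; the input values are made up):

```python
# PATH-zone bus vs. rail split, coefficients as printed in the text.
def bus_share_path(tr, tb, cr, cb, fr, fb):
    """Bus share of transit trips, B/(B+R), for PATH-classified zones."""
    return .220 + .0055 * (tr - tb) + 1.01 * (cr - cb) + .097 * (fr - fb)

# Rail 10 minutes slower and 30 cents more expensive than bus, with one
# extra transfer; the model then favors bus.
share = bus_share_path(tr=45, tb=35, cr=1.50, cb=1.20, fr=1, fb=0)
```

With these inputs the predicted bus share is .675, i.e. about two-thirds of the transit trips.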


The second stage of the modal split process was the
auto versus public transit split. In the earliest attempts
at deriving this set of models, we had assumed that
since the percentage auto usage to the CBD was much
lower than to the remaining areas east of the river,
we would try to derive a separate model for the CBD.
The variables included in the trials were employment
density east of the river, population density west of
the river, travel time difference or ratios, travel cost
differences or ratios and parking costs.
The resultant regression equations explained very
little of the variation in the percent auto, however, the
analysis of the results compared to existing trip patterns
showed that where bus was the predominant public
transit mode, there was a larger percentage of auto
trips than where rail was the predominant transit mode.
This finding indicated that while the choice between
bus and rail might be a different one from the choice

between auto and public transit, there also appears to
be a difference in the choice of auto versus bus and
auto versus rail. Rather than establish three sets of
equations (auto vs. bus, auto vs. rail and bus vs. rail),
which would have to be normalized to sum to 100 percent, it was decided to try the percent bus of total
public transit as a variable in the auto vs. public transit
model and still depict the difference in choice between
auto and the two public transit modes. When the percent bus was entered as a variable, the regression
equation proved to be dominated by this variable, but
still did not explain enough of the variation in percent
auto in the CBD. Similar trials with non-CBD traffic
only had even less explanation.
Observation of the range of values of the independent variables led us to discover that while the
range of values of many of the variables did not explain the large difference between percent auto to
CBD and to the non-CBD, two of the variables, employment density and parking cost, did seem to be
highly correlated with the percent auto if the CBD
and non-CBD trips were combined. A regression run
using all the observations showed a relatively high
degree of explanation. It did not seem to go far enough
towards explaining differences between percent auto
within the CBD and those within the non-CBD. In
order to correct this, the finding that the percent bus
variable was highly explanatory, in the CBD, was combined with the general equation derived for both CBD
and non-CBD observations, and the model proved
reasonably successful in depicting the general pattern
of auto as a percent of total traffic.
The model as finally established was:

A/(A+B+R) = .65 - .091 ln(Ej/Aj) - .033 ln(Pi/Ai) - .0068 ln(PC)
            + .1279 ln(tp/ta) + .175 (B/(B+R))

Multiple correlation coefficient = .76

where
Ej/Aj = employment density of east-of-Hudson zones,
Pi/Ai = population density of west-of-Hudson zones,
PC = parking cost,
tp/ta = ratio of transit travel time to auto travel time,
B/(B+R) = ratio of bus trips to total transit trips,
ln = natural log.
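Evaluating the model with illustrative inputs (all values below are made up) gives a feel for its
behavior:

```python
import math

# Auto vs. public transit split, coefficients as printed in the text.
def auto_share(ej_aj, pi_ai, pc, tp_ta, bus_ratio):
    """Auto share A/(A+B+R) of total trans-Hudson trips."""
    return (.65 - .091 * math.log(ej_aj) - .033 * math.log(pi_ai)
            - .0068 * math.log(pc) + .1279 * math.log(tp_ta)
            + .175 * bus_ratio)

# Dense CBD destination (300,000 jobs/sq. mi. = 300 in the model's units),
# moderate origin density, $3 parking, transit 20% slower than auto,
# 40% of transit trips by bus:
share = auto_share(ej_aj=300, pi_ai=10, pc=3.0, tp_ta=1.2, bus_ratio=0.40)
```

With these inputs the predicted auto share is about 14 percent, in line with the low auto shares
the text reports for high-employment-density destinations.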

Figure 8-Bus vs rail modal split, CBD destinations
(Bus share plotted against bus time minus rail time, minutes, from -10 to +30.)

A graph of the model is shown in Figure 9. It was developed using the (Ej/Aj) variable as the
basic variable and shows the effect of the other variables as they extend to their maximum value
range. For example, if a zone interchange were between a zone with (Ej/Aj) of 300,000 jobs per
square mile and one of the highest (Pi/Ai) zones, it would have 13 - 11 = 2 percent auto.
Including the remaining variables would drop it below 0 percent if it had the highest parking
cost, raise it back up by 4 percent if it had a high (tp/ta) ratio, and raise it an additional
17.5 percent if all the transit trips were by bus. That zonal interchange would then be predicted
to have about 20 percent of its trips by auto.

Figure 9-Auto vs public transit modal split
(Percent auto vs. east-of-Hudson employment density, Ej/Aj, in 1,000 jobs per square mile, for CBD
and non-CBD zones; the range of Ej/Aj is indicated.)

Also shown on the graph is the range of values for (Ej/Aj). It can be seen why that variable
explains so well the percent auto, since the CBD employment density is so much higher than that of
the other areas.

Trip interchange models

In this model we were attempting to develop relationships that describe total trips between a zone on one
side of the river and a zone on the other side. Previous
studies in this subject have concentrated more on a
concept of trip generation and trip attraction where
the trips from the sending zones are estimated separately from the trips to the receiving zones and then a
balancing of trip interchanges is made. This process

lends itself to a gravity concept that postulates that
trips are generated by population, attracted by employment and vary by some function of the distance
between zones. The function is usually a decay function
and short trips predominate in the model description
of trip patterns-which is quite true to life.
The trans-Hudson trip market is only a very small
portion of all trips taken in the region and it is a portion
that includes mostly longer trips. Therefore, for our
approach we tried to go directly to trip interchange
and we developed a method that is based on segregation
of geographic areas on each side of the river that, by
earlier study, exhibited different trip patterns.
The general theory behind this approach to trip
interchange is that communities change as they age,
and there are several directions of change that they
can take. This can best be explained by discussing the
types of areas we used for our classification systems.
There were five separate area types west of the Hudson:

(2) urban core: old, densely developed areas near the river;
(4) urban self-sufficient: also older areas, but further from the river, with more or less their
own economic base;
(6) stable suburban: old areas originally developed as bedroom communities with little economic
base of their own;
(5) mid-suburban: newer areas, fast-growing in recent years, with a mixed orientation;
(1) emerging suburban: sparsely settled areas now, with growth expected in the future.

Further, there were three separate area types east of the Hudson: (3) the Manhattan CBD;
(7) urban areas, which include most other New York City zones; (8) suburban areas, the New York
suburban counties.
From our past studies it has been shown that each
of these interchange groups exhibited different characteristics as to trips interchanged with zones on the
other side of the river as well as different socio-economic
and demographic characteristics.
Further analysis of the characteristics of these areas
indicated that they could be classified into separate
groups by study of economic and demographic data
relating to each of the zones. The classification of these
areas is presently done with a non-rigorous method of
observation, but we have just begun using the statistical technique of discriminant analysis for
a more precise classification system, and it seems to be working well. It has proved our original
classification to be accurate in most cases and has given us further insight into troublesome
zones.
While the models we have developed so far still have
many shortcomings, they appear to verify the general
theory and the variables used are logical ones. It can
be seen from the list of equations that are now being
used that the effect of the predominant trip-producing
variable (population) varies considerably between area


type groupings. Further, the gravity theory of trips
varying with distance between zones (represented by
travel time) is maintained with the inclusion of the
time variable. The CBD models are shown in a graph
form in Figure 10. The scales of the graphs should be
noted since the lines plotted indicate the range of the
population and trips within each of the area types. The
slope of the lines indicates the effect of population on
trips and the brackets at the end of each line indicate
the range of the effect of employment on trips. It can
be seen that Area Type 4 zones produce a small amount
of trips per capita and do not react very much to employment attractions on the east side of the river.
These are the "self contained areas." Area Type 6
zones, on the other hand, have higher trips per capita

Figure 10-Trip interchange models
(Panels: trip interchange models for area types 1, 5, 6 to CBD and for area types 2, 4 to CBD;
trips plotted against population in thousands, with the range of the employment effect bracketed
at the end of each line.)

Table I-Trip Interchange Models

CBD Zones (Area Types)
2-3  Tij = -125 + 2.36 Pi + 3.16 Ej - .487 tij                             (R = .65)
4-3  Tij = 5 + .307 Pi + .514 Ej - .059 tij                                (R = .71)
5-3  Tij = -159 + 44.3 ln Pi + .596 Ej - .059 tij - 25 (Ei/Pi)             (R = .71)
6-3  Tij = -60 + .197 Pi + 1.509 Ej - .131 tij                             (R = .75)
1-3  Tij = -3 + .682 Pi + .338 Ej - .037 tij                               (R = .69)

Non-CBD Zones
2-7  Tij = -10.4 + .124 Pi + .108 Ej + 1.12 (Ej/Aj)                        (R = .66)
4-7  Tij = 5.0 + .118 Pi + .076 Ej + .48 (Ej/Aj) - .028 Atij               (R = .74)
1-7  Tij = 191 + 6.66 ln Pi + 3.60 ln Ej - 0.50 ln (Ej/Aj) - 33.57 ln Atij (R = .74)
5-7  Tij = 167 + 8.79 ln Pi + 7.93 ln (Ej/Aj) - 3.59 ln (Ei/Ai)
           - 32.9 ln Atij                                                  (R = .69)
6-7  Tij = 250 + 13.24 ln Pi + 8.17 ln (Ej/Aj) - 48.2 ln Atij              (R = .76)
2-8  Tij = 33 + 4.22 ln Pi - 6.5 ln tij                                    (R = .46)
4-8  Tij = 15 + 3.82 ln Pi + 1.25 ln Ej + 1.87 ln (Ej/Aj) - 4.74 ln tij    (R = .58)
1-8  Tij = 13 + 1.9 ln Pi + .4 ln Ej - 2.66 ln tij                         (R = .44)
5-8  Tij = 67 - 6.11 ln (Ei/Ai) + 7.53 ln (Ej/Pj) - 8.2 ln tij             (R = .60)
6-8  Tij = 2.3 + 16.2 ln Pi + 11.2 ln (Pi/Ai) + 30.7 ln (Ej/Pj)
           - 6.4 ln tij                                                    (R = .63)

where
Tij = total trips between i zone and j zone,
Pi = population (000) in i zone,
Ei = employment (000) in i zone,
Ai = area (square miles) of i zone,
Pj = population (000) in j zone,
Ej = employment (000) in j zone,
Aj = area (square miles) of j zone,
tij = weighted average travel time from i to j,
Atij = weighted average auto time from i to j,
ln = natural logarithm,
R = multiple correlation coefficient.

and much higher attraction to employment. These are
the stable suburban, or bedroom communities. The
highest trip producers are the Area Type 2 zones which
are closest to the river. The trip values for the graphs
were calculated using the average travel time for each
area type.
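The Table I equations are simple linear forms, so evaluating one is straightforward. As an illustration, here is a minimal sketch (in modern Python rather than the FORTRAN of the original system) of the Area Type 6-to-CBD model, with coefficients taken from Table I; the zone values supplied are arbitrary examples, not data from the study:

```python
def trips_6_to_cbd(p_i, e_j, t_ij):
    """Area Type 6 to CBD model from Table I:
    Tij = -60 + .197*Pi + 1.509*Ej - .131*tij

    p_i  : population of zone i (thousands)
    e_j  : employment of zone j (thousands)
    t_ij : weighted average travel time from i to j
    """
    return -60 + 0.197 * p_i + 1.509 * e_j - 0.131 * t_ij

# Arbitrary illustrative zone values:
print(trips_6_to_cbd(100.0, 50.0, 30.0))  # about 31.2
```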
Improvements in the models are currently being
sought through investigations of additional explanatory
variables such as competitive employment opportunities on the same side of the river and employment
classification. Further investigations are also being
made with different forms of the dependent variable.
With models being developed for the purpose of
forecasting, some method must be utilized to judge
their forecasting quality. One method is to carefully
inspect the way that the models perform in reproducing
the basic traffic patterns from which they were developed. The process we developed for this is best
described by discussing the master program.

72

Spring Joint Computer Conference, 1971

[Figure 11-Trans-Hudson Model "Motherhood" program: block diagram of the information flow. The program calculates facility volumes, then average time and cost (auto, bus, rail) and average transfers (bus, rail), east and west, using existing mode volumes.]

MASTER DATA PROGRAM-"MOTHERHOOD"
A system designed for forecasting must have some
method of utilizing the developed models to produce a
listing of the forecasted trips in the desired detail. The
design of the forecasting program was, of course, controlled by the concept of the entire system.
The program developed has been named "Motherhood." This is an acronym representing "Model of
Trans-Hudson Empirical Relationships Having
Origins or Destinations." It was dreamed up for us by
one of those fellows who has nothing better to do with
his spare time than dream up acronyms. It has
done well by us since who, in the long run, can criticize
Motherhood? It has also created some interesting dialogue with the programmers who run the system, since
many of them are young girls.
Figure 11 is a block diagram of the information flow
in the "Motherhood" program. It is indexed by the
numbers in circles for easy reference.

The program is designed to accommodate each of
the several model stages (assignment, modal split and
trip interchange) separately or combined. The thread
of continuity in the combined operation is the network
data. The program takes the basic facility network data
from the data banks, uses it for the assignment model
and then reduces the facility data to mode and total
travel data for the modal split and trip interchange
models in much the same fashion as was done to build
Data Bank II for development of the models.
Starting with the input in the data bank where travel
times and costs are referenced to a specific facility the
program calculates (box 2) the percent of each mode's
total traffic that will be assigned to each facility within
that mode. A separate equation (model) is used to
calculate the percentages within each mode. The program then uses these percentages to calculate (box 5)
the weighted average time and cost of each mode. It is
these mode averages that are then considered as input
to the modal split models. The modal split models (box
6) operate in two stages. First, the percent split between bus and rail is calculated from a model (or set of
models) then the weighted average time and cost of
public transport is calculated and a percent split is
calculated between public transport and auto. The last
step is then to calculate a single weighted average time
and cost (box 9) so that it can be used as input to the
trip interchange model (box 10). It can be seen that
the eleven times and costs in the data bank (one for
each of the facilities) are now transferred to a single
time and cost (representing a weighted average of all
facilities) by flow through the program. The trip interchange model using this averaged value of time or cost
then calculates total trips between zones. The program,
having previously calculated and saved the percent
split values on modes and facilities, simply uses these
percentages to distribute the total trip volume into
trip volumes by mode and facility.

[Figure 12-Print out example: "This run is a complete report with facility print-outs. Results are based on 6 auto, 3 bus, and 3 rail facilities."]

[Figure 13-Print out example]
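The flow just described can be sketched in outline. The following is a hedged reconstruction in modern Python (the actual Motherhood program was written in FORTRAN); the facility data and all the model functions passed in are hypothetical stand-ins for the regression equations of the real system:

```python
# Hypothetical sketch of the Motherhood three-stage flow for one zone pair:
# assignment shares -> weighted mode averages -> two-stage modal split ->
# trip interchange, then distribution back down using the saved percentages.

def motherhood(facilities, assign_share, bus_rail_split, transit_auto_split, trip_model):
    """facilities: {mode: [(time, cost), ...]} per-facility data for one zone pair.
    assign_share(facs) returns each facility's share of its mode's traffic (box 2);
    the *_split functions are the modal split models (box 6); trip_model is the
    trip interchange model (box 10). All are illustrative stand-ins."""
    # Boxes 2/5: assignment shares and weighted average time/cost per mode.
    shares, mode_avg = {}, {}
    for mode, facs in facilities.items():
        s = assign_share(facs)
        shares[mode] = s
        mode_avg[mode] = (sum(p * t for p, (t, c) in zip(s, facs)),
                          sum(p * c for p, (t, c) in zip(s, facs)))
    # Box 6: first split bus vs. rail, then public transport vs. auto.
    p_rail = bus_rail_split(mode_avg["bus"], mode_avg["rail"])
    pt_avg = tuple(p_rail * r + (1 - p_rail) * b
                   for r, b in zip(mode_avg["rail"], mode_avg["bus"]))
    p_transit = transit_auto_split(pt_avg, mode_avg["auto"])
    # Box 9: a single weighted average time and cost over all facilities.
    avg = tuple(p_transit * p + (1 - p_transit) * a
                for p, a in zip(pt_avg, mode_avg["auto"]))
    # Box 10: total trips, then distribute by the saved percentages.
    total = trip_model(*avg)
    by_mode = {"auto": total * (1 - p_transit),
               "rail": total * p_transit * p_rail,
               "bus": total * p_transit * (1 - p_rail)}
    by_facility = {m: [t_m * s for s in shares[m]] for m, t_m in by_mode.items()}
    return total, by_mode, by_facility

# Toy stand-in models and data (purely illustrative):
total, by_mode, by_fac = motherhood(
    {"auto": [(30, 2), (40, 1)], "bus": [(45, 1)], "rail": [(35, 1.5)]},
    assign_share=lambda facs: [1.0 / len(facs)] * len(facs),
    bus_rail_split=lambda bus, rail: 0.5,
    transit_auto_split=lambda pt, auto: 0.4,
    trip_model=lambda time, cost: 1000 - 10 * time - 5 * cost,
)
print(total)  # about 623 trips for this toy input
```

The structural point, as in the real program, is that the shares computed on the way up are saved and reused on the way down to distribute total trips back to modes and facilities.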
You can note also on the diagram that there are
optional routes through the program. These are for
model testing purposes so that each of the models can
be tested and calibrated separately. A test of the assignment model can be made by running the program with
option 1 on (box 3) where we can take the given number
of trips on each mode and simply assign them to facilities. In order to test the modal split model, the program
must run both the assignment model and the modal
split models, so that the weighted averages of the
times and costs can be supplied as input data to the modal
split models. And lastly, a test of the trip interchange
model must be made with all three models (box 2, box
6 and box 10) in the program, so that the average times

and costs can be calculated by the two previous models
as input to the trip interchange model.
There are several print options in the program. The
major option is a facility print or mode print where
residence east trips and residence west trips are separated (see Figures 12, 13 and 14). These options are
used so that the analyst may review the forecast in
the manner most appropriate to his investigation. The
residence split is most important in the trip interchange review, and the facility print is most important in
the assignment and in the review of performance of the
overall system.
Additional print options are available that collapse
and sum the trip forecasts. The basic print is in an i - j
format, i.e., each cell is printed. One summation is by
county groups, another sum is by rail corridor groups,
and the last sum is by area type groups.

[Figure 14-Print out example: peak base T9 volumes east and west by mode (auto, bus, and rail)]

An additional feature is built into the program that converts auto
passenger trips to auto trips so that analysis can be
made on a vehicular basis.
Another feature of the master program is that it can
be used to read and print out the data within any of
the data banks. This is a necessary procedure when one
considers the massive amount of data in the banks and
the necessity of maintaining accuracy when changes
are made.
Perhaps the most important feature of the master
program is that the model sections (boxes 2, 6 and 10)
are completely flexible. They are branches within the
master program and can be changed simply by rewriting the FORTRAN statements within each branch.
This allows the user to change the entire model or

parts of it. It allows us to use the master program for
the peak and off-peak models with equal ease simply by
replacing the model sections of the program. Of great
importance is the fact that this master program can be
used to test models developed at a later time based on
new data and new research.
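The swappable-branch idea translates naturally into replaceable functions. Here is a minimal sketch (in Python, not the original FORTRAN, and with made-up one-variable model forms) of holding the three model sections as plug-in slots so that peak and off-peak variants can be exchanged without touching the master flow:

```python
# Hypothetical sketch: the three model sections (boxes 2, 6 and 10) held as
# replaceable slots; swapping one is the analogue of rewriting the FORTRAN
# statements within a single branch of the master program.

class MasterProgram:
    def __init__(self, assignment, modal_split, trip_interchange):
        self.models = {"assignment": assignment,
                       "modal_split": modal_split,
                       "trip_interchange": trip_interchange}

    def replace(self, stage, model):
        # Change one model section; the surrounding flow is untouched.
        self.models[stage] = model

    def run(self, avg_time):
        # Grossly simplified: each stage here is a function of one number.
        x = self.models["assignment"](avg_time)
        x = self.models["modal_split"](x)
        return self.models["trip_interchange"](x)

peak = MasterProgram(lambda t: t * 0.9, lambda t: t + 1, lambda t: 1000 - 10 * t)
print(peak.run(30.0))                                      # peak-model result
peak.replace("trip_interchange", lambda t: 800 - 8 * t)    # off-peak variant
print(peak.run(30.0))                                      # same flow, new model
```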

CURRENT USE OF THE SYSTEM
The peak period models have already been developed
to a point where we think they can be used for forecasting. The basic models were derived from 1964 data
and they were tested by running them through the
"Motherhood" program. Because a multiple regression
program does not provide one with a perfect fit of the

Computer Aided Traffic Forecasting Technique

data, the test runs provide for analysis of the output.
These analyses point to areas where additional variables could be used, where network errors might have
been made, and where certain phenomena simply cannot be explained adequately. Correcting for these items
is called calibration.
The 1964 models were calibrated in several stages.
First, the assignment models were run and calibrated.
Then the assignment models were used along with trial
modal split models and the two stage runs were calibrated. Finally, the calibrated assignment and modal
split models were run with the trip interchange models
in the full three stage run. This way, the entire set of
models could be considered calibrated to reproduce the
1964 data by the same process that the 1985 forecast
runs would be made.
Many data summaries were made to make sure that
traffic patterns derived from the models were reasonably
in line with those found in the original trip data. Areas
that were checked were those where specific transportation improvements were to be made or where alternate
systems were to be tested. The following table shows
some of the calibrated data summaries made on the 1964
trial runs.
For the forecast year 1985 an initial transportation
network had to be constructed. The network data was
derived by an analysis of the plans and programs of all
the transportation agencies in the region. The time, cost
and transfer effect of new or improved facilities that
would be in place and operating by 1985 were coded
into the networks to depict a base network. These included new highways, improved rail service and some
improved bus service resulting from new highways.

TABLE II-1964 Peak Period Trans-Hudson Model Calibration
Facility Comparisons

                          Actual    Assignment   Assignment    Assignment, Modal
                                    Model Only   and Modal     Split and Trip
                                                 Split Models  Interchange Models
Grand Total              159 438    159 398      159 394       158 659
Auto                      45 909     45 909       46 199        46 230
  GWB                     23 560     23 675       22 847        23 091
  LT                      11 159     11 677       12 618        12 466
  HT                       3 214      3 269        3 474         3 586
  SIB                      2 770      2 540        2 534         2 672
  TZ                       5 205      4 749        4 726         4 416
Bus                       58 845     58 833       58 142        57 195
  GWB                     11 268     10 270       10 860        10 765
  PABT                    47 577     48 563       47 282        46 430
Rail                      54 684     54 658       55 056        55 235
  PS                       7 593      7 449        7 543         7 504
  HT                      26 060     25 652       25 282        25 199
  PUP                     13 153     12 885       13 521        13 954
  CNJ                      7 878      8 671        8 712         8 578
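The facility comparisons of this kind reduce to percent differences between modeled and actual volumes. A small sketch, using the bus totals from Table II (actual 58,845 vs. 58,833 from the assignment-only run):

```python
def percent_error(modeled, actual):
    """Signed percent difference of a modeled volume from the survey value."""
    return 100.0 * (modeled - actual) / actual

# Bus totals from Table II: actual vs. assignment-model-only run.
err = percent_error(58833, 58845)
print(f"{err:+.3f}%")  # about -0.020%
```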
Additional data necessary to run a 1985 trial forecast
was the forecasted independent variable data: population, employment, and area types for the trip interchange
model, and service classification for the rail vs. bus part
of the modal split model.
Trial forecasts for a basic assumed transportation
system have been run and are being analyzed. Several
alternate systems have also been coded into the network
and run and we are analyzing the reaction to the changes
caused by the various alternates. Further work in this
area includes reassessing the 1985 population and employment forecasts based on later census data.
The off-peak models based on 1964 data are still
under development. Because off-peak traffic exhibits a
great amount of variation, it is difficult to "zero in" on
what causes the variation. First, the market is split,
being part work oriented travel and part non-work.
Another problem is that 21 hours of traffic are included
in the data, some hours of which may differ from others.
Still another is that public transport usage is extremely
variegated in this off-peak market, perhaps being influenced by income in some areas or by entertainment
centers in other areas, etc. Unfortunately, few of these
types of influences can be accurately forecast.
While we are working on the off peak models, we continue to search for more usable explanatory variables
for the peak models. In this assignment model, we had
difficulty duplicating representative patterns for users
of the two bus terminals and we are working on a
variable that will attempt to define more clearly the
difference in access to the different bus routes on the
New Jersey side. This same variable, and one like it for
access to the rail service might help explain more of
the variation in the bus vs. rail Modal Split models.
We are also experimenting with variables that might
better describe the differences in distribution on the
New York side, particularly in the CBD. We are now
using travel time, cost and number of transfers involved
in the entire trip, and it seems as though these variables
do not focus strongly enough on the attractiveness of a
trip that requires no additional mode for distribution
from the terminal to the final destination.
In the trip interchange model we are currently using
total trips as the dependent variable but we are trying
other forms such as trips per capita. We are also introducing new variables such as competing employment
opportunities within 20 minutes of the origin zone and
managerial and professional employment. We must be
careful with the latter since it may be difficult to
forecast.
Earlier in the paper it was stated that one of the aims
of the system was to allow for updating and using new


data. In meeting this requirement, we are now building
a new data bank to represent the base year 1968.
The trip data for that year is available from numerous
O and D surveys. Auto data comes from the continuous
O and D sample, and O and D surveys were taken that
year at the two bus terminals, on the Penn Central
Railroad and on the PATH system. Only minor adjustments to the original trip data programs are necessary
to format these data for use in the model systems.
Population data for 1968 will be estimated from the
1970 census of population and employment data will be
extracted from continuous sources of employment data
available from New York and New Jersey State
agencies.
The travel time, costs, transfers and other network

variables will be updated from sources of permanent
records available such as service schedules, fare tariffs
and sample travel time runs. With the utility available
from the battery of programs that have been written
and from the design of the systems we now have a
capability long sought after in urban transportation
studies. We can update on a short time cycle so that
we can integrate time series analysis with the new forecasting system. We can now derive new models with a
1968 base and compare them with those derived from
the 1964 base. We can attempt to forecast 1968 from
the 1964 models and analyze the differences from actual
performance. With this technique we hope to have
greater insight to the causes of traffic pattern changes
both for short run analysis and long range forecasting.

Computer graphics for transportation problems*
by DAN COHEN and JOHN M. McQUILLAN
Harvard University
Cambridge, Massachusetts

INTRODUCTION
The central problem in designing transportation systems
and networks is determining the optimal control techniques for given transportation facilities. For example,
it is essential to find the best strategy for handling the
traffic in a given airspace, in a given highway system,
or in a network of city streets. The other side of the
problem is to determine for given or predicted traffic
conditions, the optimal transportation facilities. Urban
planners must solve these problems when designing new
developments; similarly it is important to determine
how many airways and airports will be required to
handle the air traffic in the 80's. From the answers to
such questions one can decide how to allocate funds,
for example, to improve the radar systems to allow
smaller separation or to put more navigation aids in
order to increase the number of airways.
It is not just a mere coincidence that in many languages the words "see" and "understand" are synonyms.
In many cases to see is to understand, and this is what
computer graphics is all about.
Computer graphics is used mainly as an interface
between the man and the machine. Problems which
inherently require display of output or have graphically
oriented input are the clearest beneficiaries of computer
graphics. Graphical output gives the ability to display
arbitrary shapes quickly. Graphical input provides the
ability to define shapes and the ability to identify
things naturally by pointing to them. Transportation
systems are often best represented graphically. For
these reasons we have found that the application of
computer graphics techniques to the solution of transportation problems is most fruitful.
In this paper we discuss the philosophy behind our
approach, and illustrate it with examples taken from
specific programs. A ten minute film will be shown to
demonstrate the application of interactive computer
graphics to urban traffic problems.

COMPUTER APPROACHES FOR
TRANSPORTATION SYSTEMS

It is often the case that practical problems deal with
system behavior, rather than behavior of a single
particle or a single element. Describing and dealing
with systems is manyfold more complex than working
with a single element. Often one can describe very
precisely the exact mathematics which govern the behavior of a single element. However, it is very seldom
that one can find equations which describe a system
completely, and still be consistent with the behavior of
each of its elements.
Simulation of urban traffic, or of air traffic, is an example
of this difficulty. One can describe very precisely the
motion of a single car or of a single airplane. If the
motion of the car is unrestricted, then its behavior is
simple to explain. When more than one element is
introduced into the system, the interaction between
them adds a new dimension to the problem. The complexity of the interactions might grow as the square of
the number of objects in the interaction. In general,
one can solve situations where few vehicles are involved.
However, any practical problem involves too many
objects for a human being to solve without a computer.
In many transportation problems, there is a system
of many particles moving concurrently in the same
space, obeying some interaction restrictions. These restrictions are usually in the form 'of separation criteria
(for cars, airplanes, ships, etc.), staying in some corridors (like highwayst airways, etc.) sharing some navigation facilities and so on. Such system problems lend
themselves very well to computer use. In order to solve
these transportation systems on a computer, one can

* This research was supported in part by ARPA Contract
F 19628-68-C-0379.


use simulation techniques, rather than integrating equations into system-behavior. A computer can perform
the tedious job of simulation particle by particle and
make local decisions about each of the particles. In
some systems these decisions are based only on local
information, as observed by each particle. In other
systems these decisions may depend on global information about the state of the system.
If the behavior of each individual particle is nondeterministic and some distribution and probabilities
are involved in the description of each particle, then
the behavior of the entire system is non-deterministic;
in order to simulate it properly one has to simulate the
distributions. These non-deterministic simulations have
to be repeated many times in order to average the
behavior and the distribution to get meaningful results.
Clearly, it is appropriate to use a computer for such
simulations.
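The repetition-and-averaging the authors describe is what we would now call Monte Carlo simulation. A toy sketch (the delay model here is entirely hypothetical, not from the paper) of averaging a non-deterministic run:

```python
import random

def one_run(rng, n_cars=100):
    """One non-deterministic simulation: each car's delay is drawn from a
    distribution, and the run's result is the total delay (toy model)."""
    return sum(rng.uniform(0.0, 2.0) for _ in range(n_cars))

def average_over_runs(n_runs=1000, seed=1):
    """Repeat the run many times and average, so the simulated distribution
    yields a meaningful result rather than a single random outcome."""
    rng = random.Random(seed)
    return sum(one_run(rng) for _ in range(n_runs)) / n_runs

print(average_over_runs())  # close to the expected 100 cars * 1.0 mean delay
```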
Computer graphics lends itself very well to this kind
of simulation. After every updating cycle, one can
display the state of the system. For example, if one
simulates the air traffic in a given space based on some
known rules, one would like to observe the dynamics of
the system changes. The visual display of this information, at a rate meaningful for the viewer, might
introduce new understanding of the behavior of the
system. In the case of traffic simulation, whether it is
urban traffic or air traffic, one can learn a great deal
by viewing the intermediate steps through which the
system is going.
For example, one might observe that due to some
latency in traffic lights, some cars happen to jam an
intersection, which in turn might cause a total breakdown of the traffic flow. If the conditions of cause and
effect are not known in advance, global measures are
not enough to explain this kind of behavior. The only
way to understand the system is by viewing it, and
recognizing its behavior patterns. Detecting these patterns, which
are not known before they are observed, relies very
heavily on the intelligence of the human being and his
ability to recognize patterns. If the behavior patterns
are already known, one might assign the computer to
look for them, measure them, and report them. This
can be done off-line (batch processing, for example)
and interactive graphics is not needed for it. However,
in many cases the internal behavior patterns are not
known and one has no idea what to look for, and
cannot assign the computer to search for and measure
them. The dynamic graphic simulation allows one to
see and recognize behavior patterns which he never
expected to find, and watch them develop. This recognition leads to an improved understanding of the
system.

INTERACTIVE COMPUTER GRAPHICS FOR
SIMULATION PROBLEMS
Under many circumstances, the best use of a computer simulation is an interactive one. There may be
so many variables that the only way to understand
their interdependence is to study the problem in realtime simulation, seeing how it is affected by various
changes. There may be such uncertainty in the model
itself that the parameters should be altered as the
simulation proceeds. Using this approach, one can
quickly gain a good understanding of the model's
strengths and weaknesses, its suitability to certain situations, and its sensitivity to incremental changes of
many kinds. Such an intuitive appraisal of a model is
frequently more valuable than extensive numerical
evaluations. The conditions of a smoothly flowing
dialogue are decidedly more conducive to thought than
the use of a computer merely as a calculating machine.
We felt that an interactive system would be desirable
in view of the nature of problems in traffic flow. They
are infinite in variety, yet they can be formulated
intuitively. We have daily experience with many of
these problems, and we know how traffic behaves
under many conditions. Since these perceptions are
often difficult to include in a precise model, it is to our
advantage to exercise a model in an interactive way,
and supply it with our reactions as the simulation takes
place. We are then employing the computer where it
is most useful in the problem-solving process.
Interaction with a computer simulation becomes
much easier for a man, as well as a more valuable
technique, when results are supplied quickly and clearly
in picture form. Pictures carry immediate meaning;
details and patterns can be recognized easily, and
factors of cause and effect are evident. When he changes
the conditions of the problem, he gets meaningful results right away and is therefore in an excellent position
for further interaction. He may continuously change
the parameters and see the sensitivity of the system to
these incremental differences. This kind of continuous
dialogue, uninterrupted by technical details, is a powerful and valuable method of investigation. A man is
thus able, with computer assistance for computation
and communication, to solve many problems beyond
the scope of a man alone or an off-line computer
program.
Just as graphical output is the natural form for the
machine-to-man communication, graphical input is the
natural form for the man-to-machine communication.
Many transportation problems require a specification
of a map and associated parameters. This can be done
initially in a digital form; however, it is much more

Computer Graphics for Transportation Problems

convenient and natural to input this information graphically, by using stylus-like devices. Furthermore, during
run-time the need often arises to identify particular
objects, which may be in motion. It is most natural
and efficient for a man to do so simply by pointing at
them with his stylus.
It is most important to provide the transportation engineer with natural means for communicating with
the computer. He should be able to concentrate solely
on solving the specific transportation problem, rather
than concerning himself with the details of computer
operations.

[Interactive map display; input audit data listings: city/station allocations, co-terminal data (city names, co-terminal stations, overhead and duty times), and station data (GMT conversion factors, pre-clearance times, customs times)]
Figure 2-Input audit listing

more favorable fashion. The effects cannot be determined without trial. He can pursue this action time
and time again. After some succession of such attempts
the equipment specialist may feel that the overall combination has been exhausted. He may elect to start the
allocation from scratch if he feels that there is little to
be gained through minor perturbations of the originating allocation.
The equivalences of all' of these actions can be seen
in the flow chart. Prepass indicates the formation of
an allocation from scratch. Postpass indicates the
reduction of the previous allocation in accordance with
prescribed selection criteria; All pairing is based upon
random selection from the remaining unassigned
flights.

As each allocation is generated, its Figure Of Merit
is compared with those of the allocations previously
generated. If it is not considered to be a good candidate,
it is discarded. If it is considered to be a good candidate,
it is saved and the worst of those previously saved is
discarded. Throughout the processing some fixed
number of allocations is preserved representing the
best of the allocations generated. Final selection of the
best allocation is made from that reservoir.
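The reservoir scheme described above (a fixed number of the best allocations is preserved, and the worst saved one is discarded whenever a better candidate appears) can be sketched as follows. This is a minimal illustration in Python; the class, names, and numeric figures of merit are our assumptions, not the CREATION program's actual code:

```python
import heapq

class AllocationReservoir:
    """Preserve a fixed number of the best allocations generated so far.

    A min-heap ordered by figure of merit keeps the worst saved
    allocation at the root, so when a better candidate arrives the
    worst one is the one discarded (here, higher merit is better).
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self._heap = []        # (figure_of_merit, tie_breaker, allocation)
        self._count = 0        # tie-breaker keeps heap entries comparable

    def offer(self, figure_of_merit, allocation):
        entry = (figure_of_merit, self._count, allocation)
        self._count += 1
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, entry)     # room left: save it
        elif figure_of_merit > self._heap[0][0]:
            heapq.heapreplace(self._heap, entry)  # save it, drop the worst saved
        # otherwise the candidate is not a good one and is discarded

    def best(self):
        """Final selection of the best allocation from the reservoir."""
        return max(self._heap)[2]

reservoir = AllocationReservoir(capacity=3)
for merit, alloc in [(5, "A"), (9, "B"), (2, "C"), (7, "D"), (8, "E")]:
    reservoir.offer(merit, alloc)
print(reservoir.best())   # B
```

The min-heap makes each comparison against "the worst of those previously saved" a constant-time lookup at the heap root.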

The outputs
There are a number of different outputs available
from the CREATION program. The basic output is,

Real Time Considerations for an Airline

[Listing of control-record inputs: record codes with the number of prepasses and postpasses, breakup threshold, duty and debrief times, first-duty start time, effective and discontinue dates, and print controls, followed by the accepted flight segment records (station pairs, flight numbers, flight times, and local departure minutes).]

Figure 4-A listing of accepted segment records

THE CONTRIBUTIONS AND TIME RESPONSES
OF THE CREATION SYSTEM
There are a number of distinct types of requests
that must be satisfied by the CREATION program
from the time a flight schedule is initiated until the
time that the flight schedule is finally implemented.
The detail and accuracy of the information required, and the time in which the response must be made, continually change over the course of the development of the flight schedule. These changes are
dictated by the degree of interaction between the participating groups concerned with formulation of the
flight schedule.
During the course of development there are four
primary groups concerned with production of the ultimate schedule. Fundamental responsibility falls within
the Flight Schedule group; theirs is the need of formulating the best schedule from the viewpoint of the
traveling public and the operations of the airlines.
And for the flights to be flown there must be an airplane
available for every scheduled flight; these considera-

Figure 5-Creation flow chart

[Allocation output listing: equipment, flight, and sequence numbers with scheduled and actual flying times, routes (e.g., JFK-BOL-SJU-MIA-SJU-EWR), float dates, and duty and tour credits.]
[Diagram: TRAIN 1 and TRAIN 2 approaching a meet between SIDING A and SIDING B.

Delay to TRAIN 1 if meet takes place at siding A = 15 minutes.
Delay to TRAIN 2 if meet takes place at siding B = 18 minutes.]

Exhibit II

mileage, the gross weight of the train, the horsepower, and the schedule or standard running time
of the train over the territory being simulated,
(b) contains the scheduled,
or work stop, delay information, such as mileage
and duration of the delay, any change in the consist
of the train (adding or deleting cars and diesel
locomotives), and whether the delay is clearing
or non-clearing.
The inputs for the Track Profile and the Signal Block
Records are obtained from Engineering drawings. The
input for the Train Data Records is obtained from
dispatcher train sheets, existing train schedules, and
Transportation planning personnel.
Appendix I contains the generalized logic diagram of
the simulation. After the Track Profile and Signal Block
Records have been edited and stored in the computer
memory, a reservoir of the first sixteen trains, in
chronological order, is created. Throughout the simulation, this reservoir of trains is scanned, and the train
with the lowest clock time is selected and progressed
through the system to its next event or decision point.
As the trains move through the system, the information
describing each train is actually moved from signal block
to signal block in the computer memory. The simulator
treats each signal location as a decision or event point,
where the relative position of each train can be evaluated
in order to determine whether they should stop,
proceed, slow down, or enter a siding.
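The scan-and-select cycle above is, in effect, an event-driven loop over the reservoir of trains. A schematic sketch in Python, with a hypothetical `step()` standing in for progressing a train to its next event or decision point (this is an illustration, not the simulator's actual code):

```python
import heapq

class Train:
    """Toy stand-in for a simulated train; step() is assumed to move the
    train to its next signal location and return the elapsed minutes."""
    def __init__(self, step_minutes):
        self.step_minutes = step_minutes
        self.events = 0
    def step(self):
        self.events += 1
        return self.step_minutes

def simulate(trains, horizon):
    """Repeatedly select the train with the lowest clock time from the
    reservoir and progress it to its next event point."""
    reservoir = [(clock, i, t) for i, (clock, t) in enumerate(trains)]
    heapq.heapify(reservoir)                  # lowest clock time at the root
    while reservoir and reservoir[0][0] < horizon:
        clock, i, train = heapq.heappop(reservoir)
        clock += train.step()                 # advance to the next event
        heapq.heappush(reservoir, (clock, i, train))

fast, slow = Train(5), Train(12)
simulate([(0, fast), (0, slow)], horizon=60)
print(fast.events, slow.events)   # the faster-moving train is handled more often: 12 5
```

Ordering the reservoir by clock time guarantees that events are handled in chronological order even though the trains move at different rates.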
If the selected train is originating, or returning from a
clearing work stop, the simulator must decide whether

to allow the train to enter the system. In real life, a train
dispatcher must scan the track in the vicinity of the
train to determine if the track is clear before deciding
whether to permit the train to enter the system. This is
precisely what the simulator does. If the track is clear,
the simulator places the train in the signal block and
calculates the time it will take the train to move to the
next signal location. This time is then added to the train
clock time and the train returned to the reservoir. If the
track is not clear, the train is not placed in the signal
block. One minute is added to the train clock time before
the train is returned to the reservoir.
When the selected train is one that has arrived at its
terminating point, a statistical record is created for that
train and the train is removed from the system. The
signals for that signal block are reset to clear. A new
train is then added to the reservoir.
When the selected train is one that has arrived at a
scheduled or work stop delay point, the duration of the
train's delay is added to the train clock time. A work
stop may be of two types-clearing and non-clearing.
A clearing work stop is one in which the train leaves the
CTC system, whereas a non-clearing work stop is one in
which the train remains in the CTC system. Therefore,
if the train has a clearing work stop, the simulator
removes the train from the CTC system, cancels all
meets and passes for that train, resets to clear the
signals for the signal block the train is in, and returns
the train to the reservoir. If the train has a non-clearing
work stop, the simulator returns the train to the
reservoir.
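The work-stop handling just described can be condensed into a short sketch. The class and method names below are illustrative stand-ins, not the simulator's actual routines:

```python
class WorkStop:
    def __init__(self, duration, clearing):
        self.duration = duration      # minutes of delay at the stop
        self.clearing = clearing      # True if the train leaves the CTC system

class CTC:
    def __init__(self, trains):
        self.trains = set(trains)
        self.cleared_blocks = []      # signal blocks reset to clear
        self.cancelled = []           # trains whose meets/passes were cancelled
    def remove(self, train):
        self.trains.discard(train.name)
    def cancel_meets_and_passes(self, train):
        self.cancelled.append(train.name)
    def clear_signals(self, block):
        self.cleared_blocks.append(block)

class Train:
    def __init__(self, name, clock, block, work_stop):
        self.name, self.clock = name, clock
        self.block, self.work_stop = block, work_stop

def handle_work_stop(train, ctc, reservoir):
    """Serve the stop's delay; a clearing stop also takes the train out
    of the CTC system, cancels its meets and passes, and resets its
    signal block; either way the train goes back to the reservoir."""
    train.clock += train.work_stop.duration
    if train.work_stop.clearing:
        ctc.remove(train)
        ctc.cancel_meets_and_passes(train)
        ctc.clear_signals(train.block)
    reservoir.append(train)

ctc = CTC(["X902"])
reservoir = []
t = Train("X902", clock=600, block=7, work_stop=WorkStop(45, clearing=True))
handle_work_stop(t, ctc, reservoir)
print(t.clock, ctc.cleared_blocks, len(reservoir))   # 645 [7] 1
```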
Before moving a train to the end of a signal block, the

Computer Simulation Model

[Time-distance graph of train movements over the simulated territory.]

Exhibit III

simulator must first determine if the train terminates in
this signal block or has a work stop in the signal block.
If either of the above is the case, the time for the train to
move to the terminating or work stop point from its
present location is calculated. Otherwise, the time for
the train to move to the next signal location from its
present location is calculated. Appendix II is the logic
diagram for the routine which calculates the train
running time over a specified distance. The dynamic
characteristics of the train are developed by the
Newtonian force equation F = MA, where F, the net
force available for acceleration, and M, the mass of the
train, are known and A, the acceleration, is calculated.
The net force is the algebraic sum of the forces acting on

the train, that is, the 'pulling' force or 'tractive effort' of
,the diesel locomotives minus the 'retarding' forces or the
train resistance due to grade, curvature, flange and
bearing friction, and still air resistance. The train is
progressed in small increments of distance of between
100 and 260 feet until the train has travelled the
required distance. The times for the train to move over
all the small distance increments are summed and
added to the train clock time. The train is then returned
to the reservoir.
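As a worked illustration of this incremental calculation, the sketch below applies F = MA over small distance steps and sums the per-step times. The constant-acceleration kinematics within each step and the `net_force` callback are our simplifications, not the routine's actual force model:

```python
def running_time(distance_ft, mass, net_force, v0=0.0, step_ft=150.0):
    """Sum per-increment times for a train moving `distance_ft` feet.

    A = F / M within each small distance increment (the simulator uses
    increments of between 100 and 260 feet); net_force(v) is tractive
    effort minus train resistance at speed v, in consistent units.
    """
    t, v, travelled = 0.0, v0, 0.0
    while travelled < distance_ft:
        d = min(step_ft, distance_ft - travelled)
        a = net_force(v) / mass                    # Newtonian F = M * A
        # constant acceleration across the increment:
        #   v_end^2 = v^2 + 2*a*d,  d = (v + v_end)/2 * dt
        v_end = max(v * v + 2.0 * a * d, 0.0) ** 0.5
        dt = 2.0 * d / (v + v_end)                 # time for this increment
        t, v, travelled = t + dt, v_end, travelled + d
    return t, v

# From rest with a constant net force giving a = 2 ft/s^2 over 400 ft,
# the closed-form answer is t = sqrt(2*400/2) = 20 s and v = 40 ft/s.
t, v = running_time(400.0, mass=1.0, net_force=lambda v: 2.0)
print(round(t, 2), round(v, 2))   # 20.0 40.0
```

Because the net force in practice varies with speed, grade, and curvature, the small increments keep the constant-acceleration assumption accurate over each step.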
When the train has reached the end of a signal block,
and hence, is about to enter the next signal block, the
simulator must first evaluate the position of the train
relative to other trains in the vicinity of this train. All

96

Spring Joint Computer Conference, 1971

EXHIBIT IVa

[CTC simulation statistics, Canadian Pacific, Office of the Chief Engineer: 38 trains per day plus 2 local, period Jan 1 to Jan 3, 1971. For each train the table gives departure time, horsepower, number of cars, tonnage, elapsed and scheduled running times, delay due to work stops and conflicts, number of conflicts, and average speed, with overall averages.]

[Performance table for 38 trains per day plus 2 local, with per-train averages.]

[Listing of the free-time intervals during which each signal block and siding is unoccupied.]

Exhibit Va

3. reports listing and summarizing the intervals
that each signal block is unoccupied.
Exhibits III, IV, and V are samples of these outputs.
On the time-distance graph, time is represented on
the horizontal axis, and distance on the vertical axis.
The sidings are represented as the horizontal lines on
the graph. The trace of the train is indicated as it
traverses the territory, showing the location of any work
stops, meets and passes that the train had en route.
A similar graph is produced by the CTC system, which
makes the simulation graph easily understood by
operating personnel in the evaluation of the simulation
output.
The statistical reports are comprised of three tables.

The first (Exhibit IVa) tabulates the performance of
each individual train in terms of its elapsed running
time, which is compared to the standard or scheduled
running time, the amount of delay due to work stops,
meets, and passes, the number of conflicts (meets and
passes), and the average speed. These results are then
averaged to give general performance indicators for the
simulation. The second table (Exhibit IVb) gives the
same general information as the first table, but averaged
for various classes of trains. This report is useful in
determining the performance of each class of train. The
third table (Exhibit IVc) summarizes the performance
of each siding in terms of the delay due to work stops
and conflicts, the number of conflicts at each siding, and
the average delay per conflict. This report is very useful
when evaluating alternative siding configurations.
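The per-siding summary in the third table reduces to a simple aggregation. As an illustration (with hypothetical field names and data, not the report program itself):

```python
from collections import defaultdict

def siding_summary(conflicts):
    """Summarize delay and conflict counts per siding.

    `conflicts` is a list of (siding, delay_minutes) records; the result
    maps each siding to (total delay, number of conflicts, average delay
    per conflict), mirroring the third statistical table.
    """
    totals = defaultdict(lambda: [0, 0])
    for siding, delay in conflicts:
        totals[siding][0] += delay    # accumulate delay minutes
        totals[siding][1] += 1        # count the conflict
    return {s: (d, n, d / n) for s, (d, n) in totals.items()}

summary = siding_summary([("A", 15), ("A", 9), ("B", 18)])
print(summary["A"])   # (24, 2, 12.0)
```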
The third type of output is the report listing and

[Exhibit Vb contents: CTC simulation summary of free-time intervals. For each stretch of track (mileage from/to, measured from the east terminal, with sidings identified), the table gives the total free time and the number and duration of free-time intervals in classes: less than 1 hr, 1-1.5 hrs, 1.5-2 hrs, 2-2.5 hrs, 2.5-3 hrs, 3-3.5 hrs, 3.5-4 hrs, and 4 hrs and over.]

Exhibit Vb

summarizing the intervals of time that each signal block
is unoccupied. These reports are used to estimate the
amount of track maintenance that could be performed
with various train densities and schedules.
Initially, the simulation was used mainly to evaluate various siding configurations for proposed CTC installations. More recently, the simulation was used extensively to determine the effects of large increases in the density of trains through an existing CTC system, and the effects that capital improvements would have. The time-distance graph was very useful in helping operating personnel visualize the projected train operations up to ten years in the future.
In terms of future requirements for simulation models
of this type, computer aided dispatching and dispatcher
training simulation models are only two of the possibilities. In both these cases, visual display systems will
play a key role.


[Appendix I (and its continuation): generalized logic diagram of the simulation, a flow chart of the signal-setting decisions: clearing the signal for a train, restricting speed to 30 or 15 m.p.h., preventing a train from entering the next signal block, stopping a train at the end of double track, scanning the track ahead to the first or third siding, determining where trains will meet or pass and which train takes the siding, and cancelling meets or passes.]

Appendix I

[Appendix II: logic diagram of the routine which calculates the train running time over a specified distance.]

Appendix II

A general display terminal system

by J. H. BOTTERILL and G. F. HEYNE
IBM General Systems Division
Rochester, Minnesota

INTRODUCTION

OBJECTIVES

Large multiprocessing computer systems using large
capacity, high-speed direct-access storage have become
very common. Likewise, high speed display terminals
are becoming increasingly popular, especially in special
applications such as airline reservations systems. Display terminals offer an even greater potential in making
available the resources of the large computer system
and its data base to the majority of computer users in
their office area. A generalized display capability of
this type is superior to a terminal system tied to the
relatively slow typewriter terminals, cards, or to a
data base which is incompatible with non-terminal
facilities.
Multiple Terminal Monitor Task (MTMT) described
in this paper is a display terminal system which extends to the individual users in their local areas the
advantages of display access to a centralized computing
system with its common pool of direct-access storage
and high computing power. The terminal user is provided immediate access to his source programs, data,
and job-control statements resident on direct-access
storage. Records may be viewed, changed, added, or
deleted freely and rapidly. In addition, the user may
set up and submit background jobs or request commonly needed data management services. This system
was designed and written to run under OS/360 MVT
(Multiprogramming with a Variable Number of Tasks).
A further description of the individual services is
given in the Appendix.
This paper presents the design objectives for MTMT
and how we attempted to meet them. First, we describe
the overall system design and control, and secondly,
the approach taken to several of the most important
services. These objectives could be used for any display
terminal system design.

The way programmers, engineers, and administrative personnel used our computing facilities indicated
that all three would greatly benefit from a terminal
system. The programmer needed to conveniently
modify his programs and execute them so that he
could debug them more rapidly. The engineer needed
to execute application programs, like Electronic Circuit
Analysis Programs (ECAP), to solve his problems
without having to learn programming or to commute
to the computing center. The administrative personnel
needed rapid, easy-to-use data retrieval and update
capabilities on a real-time basis. Consequently, we
decided to provide data retrieval, data updating, and
job submission services along with the ability to add
application-oriented support. These services had to be
compatible with standard data sets or files for ease of
conversion to the system and compatibility with currently provided programs.
The terminal system could not be a dedicated system
since it was necessary to satisfy large production requirements on the same computer. We could not
tolerate frequent interruptions in the production
throughput caused by terminal system failures, and
terminal down time would severely reduce the terminal
system's value as an immediate data base access
facility. Therefore, the terminal system had to be
reliable and resilient, and had to possess a recovery
capability that would protect the user from loss of
data, loss of changes to data, or loss of other previously
accomplished work. The system also needed to provide
a convenient hard-copy capability so that users could
print or punch copies of their data sets for backup or
record purposes. All the above objectives had to be met without modification to the Operating System; otherwise the future of the system would be in jeopardy at each Operating System release change, and the cost of release-change modifications would be prohibitive.

Finally, the terminal selected had to be a low-cost unit which could handle large quantities of information. To provide the data transfer speed desired, yet not require extensive user training and user familiarity with the system to make it functional, we used the IBM 2260 Display Station (Figure 1). Cables allowed placing the terminals within ⅝ mile of the computer center and avoided the reliability and speed problems of telephone lines. With this display, 960 characters of 12 lines by 80 characters could be displayed quietly and quickly. The 12 lines permitted a display of an area of data to be changed, a group of records describing a machine part, a logical group of instructions (Figure 2), or an option list from which the user could select his next operation (Figure 3). Thus, option displays and self-tutorial information could be used to make all services conveniently available and avoid an extensive command language.

With the objectives set and the type of display terminal selected, we found it necessary to design and implement our own system, since our needs were not met by any available system. The following sections describe the system developed to meet these objectives.

Figure 1-IBM 2260 Display Station

INTERNAL PHILOSOPHY

System

Tasking structure

[Figure 2 screen image: PL/I source records displayed with sequence numbers.]
The needed reliability and fail-soft capability were provided by using the multi-tasking facilities of the Operating System. A separate task was created to
control each terminal. This controlling task is referred
to as the "terminal driver." All services requested from
a terminal are attached, using OS/360 multitasking
capabilities, as sub-tasks of that terminal's terminal
driver.
If a service fails, the terminal driver can detect the
failure, notify the user, and allow him to continue his
session. Each terminal driver is attached by the system
driver, whose job is to monitor all terminals attached
to the system. If a failure occurs at the terminal driver
task level, the system driver is in the position to detect
the abnormal termination and reattach the terminal
driver, thus putting the terminal back in operation at
the sign-on display.
At initialization, the system driver attaches a terminal driver for each terminal and control is turned over
to these terminal drivers as shown in Figure 4. Thereafter, the system driver has only two functions: (1) to
reattach a terminal driver if it abnormally terminates,
and (2) to respond to system operator requests (for
example, specify number of active terminals, halt job
submission, or halt a particular terminal).
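The driver hierarchy described above (a system driver that attaches terminal drivers and reattaches any that terminate abnormally) can be sketched schematically. This is an illustrative Python analogue, not the OS/360 multitasking code; the function names and the injected failure are our inventions:

```python
def system_driver(terminals, run_terminal_driver):
    """Attach a terminal driver per terminal; on an abnormal
    termination, detect it and reattach the driver, putting the
    terminal back in operation (schematic only, not OS/360 tasking)."""
    log = []
    for term in terminals:
        while True:
            try:
                run_terminal_driver(term)
                log.append((term, "normal end"))
                break
            except RuntimeError:                  # the driver task "abended"
                log.append((term, "reattached"))  # reattach and carry on

    return log

# A driver that abends once on terminal T01, then runs cleanly.
failures = {"T01": 1}
def driver(term):
    if failures.get(term, 0):
        failures[term] -= 1
        raise RuntimeError("driver abend")

log = system_driver(["T01", "T02"], driver)
print(log)   # [('T01', 'reattached'), ('T01', 'normal end'), ('T02', 'normal end')]
```

The essential point is that a failure is contained at the level of the task that failed: the user loses one interaction at most, not the whole terminal system.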

Figure 2-Sample MTMT edit display

[Figure 3 screen: MULTIPLE TERMINAL MONITOR TASK OPTION MENU, inviting the user to select by number one of the options below or supply a program name as appropriate: LOGOFF; BACKGROUND JOB STATUS OPTIONS; ADDITIONAL TERMINAL SERVICES; REQUEST HARD-COPY OUTPUT; DISPLAY SEQUENTIAL DATA; EDIT SEQUENTIAL CARD IMAGES; DATA DEFINITION AND RESOURCE OPTIONS; INPUT SEQUENTIAL CARD IMAGES; INITIATE A BACKGROUND JOB; with fields for the option or program chosen and additional specifications.]

Figure 3-MTMT option menu


The terminal drivers, meanwhile, completely control
the activity on their respective terminals by controlling
user sign-on, displaying menus of services, and then
attaching service modules. When a service module is
attached by the terminal driver as the result of a
request from the option menu, the service module is
given control of all displays for that terminal until the
user requests the service to complete. At this time the
terminal driver has the option menu displayed.
Overall system status flags and information are kept
in a common area in the system driver. The address of
this common area is passed to each terminal driver.
Each terminal driver has a common area which contains all the information pertinent to the terminal and
the address of the system driver common area. The
address of the terminal's common area is passed to all
the modules that are attached or linked by the terminal
driver. Therefore, MTMT is built around two basic
common areas-a system common area and a terminal
common area-each of which is accessible to each
module in the system.
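The two common areas can be modeled directly. In this sketch (our invention; Python references stand in for the passed addresses, and all field names are hypothetical) a service module receives only the terminal common area and reaches system-wide state through it:

```python
from dataclasses import dataclass

@dataclass
class SystemCommon:
    # System-wide status flags, kept once in the system driver's region.
    active_terminals: int = 0
    halt_submission: bool = False

@dataclass
class TerminalCommon:
    # Everything pertinent to one terminal, plus the address of (here, a
    # reference to) the system common area.
    term_id: int
    system: SystemCommon
    current_service: str = ""

def service_module(term: TerminalCommon) -> str:
    # An attached or linked module is passed the terminal common area and,
    # through it, can reach the system common area.
    return f"terminal {term.term_id}: {term.system.active_terminals} active"
```

Because every module sees the same two areas, a change made in the system common area is immediately visible to all terminal drivers and their sub-tasks.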

Interface with operating system

Multiple Terminal Monitor Task is a long-running
job which does not require modification of the Operating
System code. However, MTMT needs two capabilities
not normally granted to background jobs. First, it
needs to be able to dynamically change the unit address
field in the Task Input/Output Table (TIOT) for a
given Data Definition (DD) statement. This allows
DD statements to be swapped to point to any data set
on any permanently-mounted, direct-access volume.
Thus, data sets on any permanently-mounted volume
may be viewed or updated.
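The effect of the first capability can be pictured with a toy model. Here the TIOT is reduced to a table mapping DD names to unit addresses, and the privileged operation amounts to rewriting one entry so an existing DD statement reaches a data set on a different permanently-mounted volume; the table layout and names are our invention, not the real OS/360 control block.

```python
class Tiot:
    """Toy model of the Task Input/Output Table: ddname -> unit address."""
    def __init__(self, entries):
        self.entries = dict(entries)

    def swap_unit(self, ddname, new_unit):
        # Dynamically change the unit address field for one DD statement,
        # returning the previous address so it can later be restored.
        if ddname not in self.entries:
            raise KeyError(f"no DD statement named {ddname}")
        previous = self.entries[ddname]
        self.entries[ddname] = new_unit
        return previous
```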
Secondly, MTMT must be able to issue operator
commands to the Operating System such as start
reader (S RDR). Both of these additional capabilities
are provided by an MTMT Supervisor Call Routine
(SVC).
A second SVC (which is optional) will allow the user
to run background jobs that communicate back to the
terminal from which they are submitted. Each such
background job formats its own displays and accepts
input conforming to its own standards. This second
SVC is not needed unless the batch conversation
capability is desired.
Since MTMT does not depend on modification of
the Operating System, it is very release independent.
The only areas where MTMT is subject to release dependence are in the format of the OS internal tables it
reads, and the input parameters for macros and SVCs.
Operating System changes in these areas seldom affect

Figure 4-MTMT tasking structure

MTMT because such changes usually pertain to new
fields or parameters, rather than to existing ones.
Core use philosophy

MTMT is designed to require only a small region of
core. This allows normal batch processing capability on
the same machine. These batch processing regions
are then used by MTMT to execute compilers and
assemblers for the terminal user, as well as other types
of OS/360 jobs. By eliminating the need to execute
compilers and assemblers in the MTMT region, the
terminal system can better manage its region and require a smaller amount of core, but still provide the
full services of OS/360. This control is gained by having
all modules that run in the region hold to certain core
use conventions. In general, modules are kept smaller
than 4K bytes. Modules are made reenterable so that
no more than one copy of a given program is in core
at a time. Reenterability requires each load module to
obtain a work area (dynamic area) each time it is
called. All request dependent pointers, parameter lists,
I/O buffers, and tables are kept in this work area and
the module itself remains unchanged.
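The reenterability convention might be rendered as follows (a hypothetical Python sketch; `getmain` stands in for OS/360 GETMAIN, and the module body is invented for illustration):

```python
def edit_module(request, getmain=dict):
    # On entry the module obtains a work area (its 'dynamic area'); all
    # request-dependent pointers, buffers, and tables live there.
    work = getmain()
    work["request"] = request
    work["buffer"] = [ch.upper() for ch in request]
    # Because no state outlives the call and the module code itself is
    # never altered, one copy of this code can serve every terminal.
    return "".join(work["buffer"])
```

Two requests processed concurrently each get a private work area, so neither can disturb the other.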
Dynamic areas from lK to 4K are acquired from a
pool of 4K pages in subpool 0 of the MTMT region.
This pool of 4K pages is reserved during initialization
and is managed by paging routines which keep only
currently-active pages in core. When an MTMT
service needs a page owned by a module which is
waiting for a reply from the terminal, the page is
written (rolled) to disk or drum and assigned to the
requesting module. When the rolled page is again
needed, it is rolled back into the same area of core for
use by the proper routine. A page will not be rolled
unless its area is required by another routine. In general, the modules supporting a terminal need only one
4K page. No more than two pages are ever needed for
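A minimal sketch of this page discipline follows: a fixed set of core frames, a backing store standing in for disk or drum, and a roll-out rule that only victimizes pages whose owners are waiting on a terminal reply. All names are ours, not MTMT's, and `acquire` doubles as roll-in by restoring any contents previously saved for the owner.

```python
class PagePool:
    def __init__(self, n_frames):
        self.frames = [None] * n_frames   # frame -> owning module (or None)
        self.core = {}                    # owner -> page contents in core
        self.rolled = {}                  # owner -> contents saved on disk

    def acquire(self, owner, is_waiting=lambda holder: False):
        # Prefer a free frame; restore the owner's rolled page if one exists.
        for f, holder in enumerate(self.frames):
            if holder is None:
                self.frames[f] = owner
                self.core[owner] = self.rolled.pop(owner, {})
                return f
        # No free frame: roll out a page whose owner awaits a terminal reply.
        for f, holder in enumerate(self.frames):
            if is_waiting(holder):
                self.rolled[holder] = self.core.pop(holder)  # write to disk
                self.frames[f] = owner
                self.core[owner] = self.rolled.pop(owner, {})
                return f
        raise MemoryError("no free or rollable page")
```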


Spring Joint Computer Conference, 1971

Total Responses    89,156    38,379    147,802    19,947    12,607    1,440
Percentage          28.82     12.41      47.78      6.45      4.08     0.46

Figure 7-Response time

CONCLUSIONS
MTMT was run in 200K of core with an average of
15 of the 27 local IBM 2260 terminals active at any
given time. Note that the MTMT processing time for
an interrupt is the same for both local and remote
terminals. If remote terminals were used, only the teleprocessing I/O time would increase. The bar graph in
Figure 7 shows the number of responses during 132
hours of prime shift operation. The average response
time for all terminals at any one time with the above
interaction of jobs was 2.46 sec. Of the total responses,
approximately 90 percent were less than 5 sec.
The number of terminals active at any one time
affects the user response time. The following listing
illustrates this point:
Number of terminals active     1-4    5-9    10-14    15-19    20-24
Average response (seconds)     1.11   1.39   1.96     2.55     3.02

Measurements taken on our system showed that
MTMT used 2.0 percent of the time the processor
was active, and 1.5 percent of the time the multiplexor
channel was busy transferring data or instructions. This
amounts to 1.3 percent of the total processor capacity,
and 0.05 percent of the total multiplexor capacity.
These figures substantiate our experience and indicate
that MTMT does not degrade the performance of the
entire system to any noticeable extent.

The initial objectives for developing a general purpose
display system have been exceeded in the implementation of MTMT. Through the use of the multitasking
facilities of OS MVT, each display terminal operates
independently and reliably. The services of OS/360
are provided in a convenient form in the local work
areas.
Because standard data sets are used and accessed,
MTMT is completely compatible with other Operating
System facilities. Data bases and programs can be
easily updated through MTMT. Programs can then
be submitted to the OS through the complete remote
job entry capability. We provided the use of standard
compilers and assemblers without requiring a large
terminal-dedicated region. This allows running normal
production programs concurrent with supporting the
terminals. By using an SVC to allow the dynamic
specification of the volume to be accessed, we avoided
modifying the Operating System. This bought Operating System release independence and ease of installation and maintenance. The convenience and practicality
of the terminal system is reflected by the fact that over
60 percent of the jobs at our installation are set up and
initiated from MTMT terminals. All the programming
for the IBM System/3 Operating System was performed
using the services of MTMT. In addition, engineers
found MTMT useful for running their circuit analysis,
part design, and computational programs.


The option menu concept, along with the standardized procedures throughout the services, have made
the system easy to learn. The modularity of services
facilitates adding special applications which use the
submodules of MTMT. The success of MTMT, in the
three years of operation, has proven that displays are
not only the way of the future, but also the way of the
present.
ACKNOWLEDGMENTS
The authors express their appreciation to W 0 Evans,
R J Hedger, and D H McNeil, IBM GSD Rochester
Development Laboratory, for their help in designing
and implementing MTMT; and to R F Godfrey, IBM
Poughkeepsie, for the inter-region communication SVC.

REFERENCE
1 J H BOTTERILL W 0 EVANS G F HEYNE
D H McNEIL
Multiple Terminal Monitor Task (MTMT)
IBM Type III Program 360D.05.1.013 for the IBM 2260
Display Station Rochester Minnesota March 10 1969
172 pp

APPENDIX-MTMT SERVICES
MTMT supplies the services of OS/360 MVT to
users at local and remote IBM 2260 Display Stations.
Up to thirty-two terminals may be active simultaneously while normal batch job processing runs concurrently. The MTMT system options may be summarized as follows:
1. Display sequential data-A data set containing
EBCDIC records of fixed, variable, or undefined type may be viewed 10 records at a time.
Scanning for desired text and multiple paging,
both forward and backward, are options available to the user. These data sets may contain
job output or a terminal-maintained data base.

2. Edit sequential card images-A data set containing EBCDIC records having fixed format,
80-byte logical records (blocked or unblocked)
may be viewed 10 records at a time. Card images
in this data set may be changed or deleted and
new card images may be added. The user can
scan for a sequence of characters and page forward and backward through his data set.

3. Remote job entry-The user may submit a
background job from the terminal by specifying
a user defined or standard cataloged procedure.
A hardcopy (print or punch) option for any
sequential or partitioned data set on-line is
available.
4. Key entry into OS card image data set-The
Edit intermediate file is blanked and card
images can be entered directly. The options of
Edit are available to the user as he enters his
data or program.
5. Data management services
Allocation of data sets-Direct-access data sets
may be allocated, deleted, cataloged, or uncataloged. The unused space on a direct-access
volume may be determined.
Move or copy OS data sets-This option provides
the ability to move or copy sequential data sets
or members of partitioned data sets on direct-access storage to either sequential or partitioned
data sets.
Data sets status and attributes-The attributes
and allocation characteristics of a sequential or
partitioned data set on direct-access storage may
be obtained.
6. Console operator services
Display active-The job name, subtask count,
core location in decimal, region size, and initiator
in control of job are shown for each job in the
system. Free areas are also displayed.
Display queues-All input and output queues in
the system can be displayed. The job name,
priority, and position in queue are shown for
each job.
Display job control statements-For each job in
the queue, the job control and system statements
can be scanned.
Enter system commands-Jobs can be cancelled,
moved to another queue, or have their priorities
changed by the system operator.
Terminal status-This is a display of all terminals
on-line and the service in use at each terminal.
7. Background job conversation-A background
job submitted from the terminal may exchange
display loads of data with the user at the 2260
through inter-region communication.
8. Execution of well-behaved user written programs
in the MTlYIT region-Subsystems using MTMT
modules have been written to perform special
types of data retrieval and updating. The subsystems run under the control of MTMT; therefore, the user has all of the MTMT services
available to him.
MTMT requires the current release of OS/360 MVT
on a System/360 Model 40 or larger. MTMT core
requirements depend upon the number of terminals
active and the modules permanently resident in the
Link Pack Area. A general guideline for the core requirements of an MTMT region capable of supporting
local 2260 display stations is:
2 terminals- 64K
8 terminals-110K
16 terminals-160K
24 terminals-200K
32 terminals-240K

AIDS-Advanced Interactive Display System
by T. R. STACK and S. T. WALKER
National Security Agency
Fort George G. Meade, Maryland

INTRODUCTION

Faced with the problem of developing a multiterminal interactive graphics display system, analysis of past experience1 led to three specific problem areas which must be addressed in order to build a workable system. Not necessarily in order of importance or complexity, these areas are (1) the difficulty in specifying interaction between the user and the computer, (2) the complexity of handling large quantities of graphic data and its interrelationship with the display hardware, and (3) the need to get away from assembly (or assembly type) languages so that one need not be a senior systems programmer in order to write a graphics program.

In past systems, the primary concern has been the handling of display images. The specification of interaction between the user and the computer has appeared of secondary importance if it was addressed at all. Unfortunately, a human being sitting in front of an array of keys, buttons, knobs, pens (both light and tablets) along with sets of lights, noise makers and the display itself cannot be programmed quite as cleanly as a card reader or magnetic tape handler. Experience has shown that attempts to control all the interactive devices available on most display systems lead to several months of "call-back debugging" to handle those unforeseen situations where "that key was supposed to be disabled."

The problem of handling graphical data has led many people to the conclusion that some form of data structure is required to facilitate efficient utilization of the display.2 Closely related to this problem is the development of a true compiler-based language to support graphics efforts. To get valid applications for graphics systems, people who understand the application must write the programs. As long as systems programmers must be employed to write application programs, graphics will remain in the experimental, almost useful state. Current languages, either assembly or pseudo-compiler, are either so complex or unnatural that applications programmers are driven away.

A detailed analysis of these problems evolved into a design for a multiterminal interactive graphics display system supported by a compiler level language. This design brings together a number of concepts which have been used before and several new proposals. It is not intended to be the answer to everyone's problem, but from our experience it is a significant advance over what is generally available.

The system environment consists of an SEL 840MP processing unit with 64K of 24 bit core memory, a movable head disk, multiple magnetic tape units, a card reader, a line printer and three modified SEL 816A graphics consoles. Interaction devices provided at each console are a light pen, three shaft encoders, a bank of function switches with independently programmable lamps, and an ASCII compatible keyboard. In addition to the user controlled interaction devices the system provides the application programmer with two special interaction aids. A clock pulse occurring at one second intervals is available to any interactive program. The application programmer should treat this interaction aid as an asynchronous "interrupt" since his program response is subject to system loading at execution time. The precise time of day is available should exact time be necessary to the application. The second special interaction aid is a program-to-program communication package. In essence one program may transmit a message to another, provided the latter has been enabled to accept the message. We feel this notify interaction capability is extremely powerful in a command and control environment.

To the system, jobs are classified as either batch or interactive. Further, interactive jobs may be either graphics or non-graphics in nature. Batch jobs are the type most commonly supported by standard operating systems. An interactive job is one generated by the AIDS compiler and must contain at least one interaction


specification. A non-graphics interactive job is one
which employs either the clock or notify capability.
A number of non-graphics interactive jobs concurrent
in a system provide considerable enrichment. Allocation
of memory and files as well as execution priority are
based on job classification.
Before describing the fundamental concepts employed
by AIDS, the distinction between users, programmers,
and systems development types must be pointed out.
A user is the ultimate consumer of a graphics system and
is expected to know the application being run on the
display but understand nothing of the workings of the
display itself. A programmer is a person well versed in
the application and possessing reasonable knowledge
about the graphics system and language. He should
know only the compiler language (to require him to
learn assembly language would reduce his effectiveness
in his application specialty) and cannot be expected to
understand or perform "tricks" with the display.
A systems type is one who has very limited knowledge
of the applications being developed but is thoroughly
familiar with the details of the display system.
It is the user's responsibility to utilize the display
system effectively. The programmer must develop
programs which allow the user to concentrate on his
application and to develop confidence that the machine
is helping rather than opposing him. Ideally, after
sitting at the console and logging on, the user should
forget that he is directing a computer and be led in a
natural way through his application with all his concentration and effort being directed at that application.
The systems developer must devise a reliable operating
system offering in "a reasonable manner" all of the
capabilities of the hardware being used. To the extent
that he succeeds, the programmer and user will be able
to perform their jobs better.
This short digression probably has validity in many
areas, but when considered with the present state of
graphics development it has particular truth. The
complexities of graphic systems have caused the
systems developers to concentrate on devising reliable,
capable operating systems which unfortunately don't
offer these capabilities in what programmers consider
"a reasonable manner." Thus, most programmers are
reluctant (at best) to undertake graphics programs and
those that do are so enmeshed in the system that they
lose their close contact with the application they're
programming. Ultimately the user suffers from lack of
good application programs and develops the idea that
graphics is just a cute toy.
AIDS LANGUAGE AND SYSTEM
DESCRIPTION
The AIDS system deals with each of the three
problem areas mentioned above. The Interactive

Operating System provides a simple means of organizing
and specifying user interaction at the terminal. The
Graphic Structure Commands allow orderly development of complex display structures. The AIDS
Pre-Compiler removes the programmer from assembly
language problems while providing valuable bookkeeping and error detection functions.

The interactive specification problem

The major contribution of the AIDS development is a
simplified means of specifying user-computer interaction. With the WHEN Interactive Specification
Statement, the programmer details precisely what
interactive devices should be enabled, when, and what is
to be done if one of these is activated. The system3 was
devised from analysis of the way a programmer designs
the proposed user interaction with the computer.
Consider a user at a graphics console. Before any
pictures appear on the screen or lights flash etc., the
program passes through an initialization phase. A
picture is presented, certain of the interactive devices
are enabled for the user's choice, and the program then
pauses waiting for the user to decide what to do.
Selecting one of the enabled devices causes a burst of
computer activity, perhaps changing a picture, perhaps
enabling a different set of devices, but eventually
resulting in another pause where the user must make a
selection. Looking at the interactive program as a
whole, the process appears as a series of interrelated
pauses and bursts of activity which can be described in
a state table type notation. In the Appendix, Figure 4,
states are represented by circles and conditions active in
a state are represented by lines proceeding from one
state to another. The conditions are written in quotes
along with the responses to each condition (see
example, Figure 4). This state table development is the
process which a programmer, consciously or not, goes
through in designing his application program. The
extent to which the transitions between states are made
in a natural way determines the effectiveness of the
program to the user.
In the past, the specification and processing of all
interactive devices was handled entirely by the
programmer, allowing uncertainty to develop as to
which devices are active. AIDS uses the WHEN
statement as an extension of the state table to specify
interaction. The operating system then performs all
enabling and disabling functions. For each condition in
each state, a WHEN statement is written detailing the
state, condition (interactive device), and the responses
(what to do if that device is activated by the user). The
program which results consists of a main program
handling initialization of the necessary pictures, tables,


etc., followed by a series of statements derived from the
state table and detailing what the programmer wishes
to do.
Given this unusual program form, the AIDS precompiler maps each WHEN statement into a form
usable by the operating system. In this way the system
developer has provided a buffer between the complexities of his system and the desired simplicity which
the programmer demands. The programmer has an
easily analyzed and well organized specification of his
thoughts which can readily be revised or expanded and
the user has a far better chance of developing confidence
in the equipment since there are fewer chances that it
will fail to do what he commands.

Description of WHEN statements
The following is a summary of interactive control
statements associated with the AIDS Operating
System.
WHEN IN STATE n, IF condition a,
THEN ... response ...
This is the fundamental statement which is written
directly from the programmer's state table. The
pre-compiler offers considerable flexibility in that the
programmer can make his statements as wordy or
concise as he wishes, offering a self-documenting, easily
readable program or a tight, easily coded form for
quick preparation of test routines. The simplest form
of the statement is:
WHEN n, condition a, ... response ...
Where . . . "n" is a positive integer corresponding to
the arbitrary number assigned to each state in the state
table. If a condition is active in a number of states,
a list of the states can be given in place of the single
state number. If a condition is active in all states (an
emergency panic button, for example), it can be declared
to be in state "0" and will be active at all times.
... a condition may be any of the interactive devices
available on the terminal being used, or the clock or
notify as mentioned earlier. Here again a long form of
the condition is available for the finished documented
program, but a short, easy to use form will also be
recognized.
. .. a response can consist of any single AIDS or
Fortran statement or any sequence of statements
excluding another WHEN statement. Since any statement can be included in a response, translating the
programmer's state table into AIDS code is quite
simple. Anything that can be done in the main program
can also be done in a response.
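The dispatch table a WHEN statement might compile into can be sketched as follows. This is our illustration, not the actual AIDS table format: entries map (state, condition) pairs to responses, and state 0 means "active in every state."

```python
class WhenTable:
    def __init__(self):
        self.entries = {}            # (state, condition) -> response
        self.state = None

    def when(self, states, condition, response):
        # WHEN IN STATE n, IF condition, THEN ... response ...
        if isinstance(states, int):
            states = [states]        # a list of states may be given
        for s in states:
            self.entries[(s, condition)] = response

    def enable(self, state):         # ENABLE STATE n
        self.state = state

    def fire(self, condition):
        # The user activated a device: run its response if the device is
        # enabled in the current state (or declared in state 0).
        for key in ((self.state, condition), (0, condition)):
            if key in self.entries:
                return self.entries[key]()
        return None                  # device was not enabled: ignore it
```

Because the operating system consults the table, a device that is not listed for the current state is simply never delivered to the program, which is exactly the guarantee the WHEN scheme provides.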


ENABLE STATE n
This statement causes a particular state to be
activated (i.e., the conditions specified in WHEN
statements containing this state will now be searched
for by the operating system). No further action is
required by the programmer to activate interactive
devices. Whenever he wished to enable a new set of
devices, he enables another state.
WAIT
This statement needs background explanation. All of
the routines which we developed for this system are
written as reentrant code so that one copy will suffice
for all terminals. However, the Fortran library is not
reentrant and the code produced by the compiler has the
same flaw. A serious problem develops if one allows the
main program to be interrupted. If the response calls a
library routine which the main program was in the
process of executing, the main program's return will be
destroyed along with any temporary storage locations.
The WAIT statement was instituted to relieve this
problem.
The ENABLE statement activates the conditions
associated with a state but the main program will not
be interrupted by satisfaction of a condition until a
WAIT statement has been encountered. The effect of the
WAIT command is to indicate that unless the user
satisfies an enabled condition, the program needs no
more CPU time. At the beginning of each time slice
normally allocated to this program a check of the interactive devices is made; if no conditions were met, the
time is allocated to another program. The program is
reactivated as soon as an enabled condition is satisfied.
ENDWAIT
This statement is available in case the programmer
would like to reactivate the main program to do more
than its initialization role. An ENDWAIT statement
executed in a response will cause the main program to
restart following the WAIT statement at which it is
currently stopped. In our experience this feature is a
valuable one for developing parallel processes.
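The WAIT-ENDWAIT pair can be pictured with a modern condition variable (our analogy only; the SEL 840MP system implemented this inside its own scheduler, and the names here are invented):

```python
import threading

class InteractionGate:
    def __init__(self):
        self._cv = threading.Condition()
        self._satisfied = False

    def wait(self):
        # WAIT: the main program needs no more CPU time until an enabled
        # condition is satisfied; it blocks here instead of spinning.
        with self._cv:
            while not self._satisfied:
                self._cv.wait()
            self._satisfied = False

    def endwait(self):
        # ENDWAIT (issued from a response): restart the main program just
        # after the WAIT at which it is currently stopped.
        with self._cv:
            self._satisfied = True
            self._cv.notify()
```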
CHECK
The combination of WAIT-ENDWAIT doesn't quite
complete the picture. WAIT causes the program to
relinquish control and needs an ENDWAIT to restart
it. There are times in a long calculation process where
the programmer would like to see if any conditions have been satisfied without stopping the main program. CHECK provides this capability. Upon executing a CHECK the system will determine if any responses remain to be satisfied. If any exist they will be executed and then the main program continued; if not, the main program is continued immediately.

Graphical data management problem

In deciding to provide a means of graphical structure manipulation for AIDS, several considerations were involved. First, the programmer had to be relieved of the tedious problems of constructing display instruction files, and yet any capability which he had before must be available in the new system. Next, something more than the display subroutine capability offered by many hardware systems and echoed without improvement by software systems must be provided. Some reasonable means of building and organizing files in a logical and concise manner was needed. Third, a capability to link non-graphical data to the display structure must be available so that, for example, by selecting a circuit element on the screen with a light pen the programmer could easily determine the notation, component value, and other non-graphical data for that element.

The basic form of the graphical data structure which we chose is a variation on the GRIN Graphical Structure as described by Christensen.4 The elements of the structure described there satisfy most of the requirements mentioned above and offer the most natural, easy to use yet powerful structure which we had encountered. The basic elements which will be described here are directly related to the GRIN system and many of the operations which our system provides are similar to GRIN commands; however, we made no attempt to duplicate the elaborate dual processor (GE635 and PDP-9) program execution or memory management schemes which are part of the GRIN design. In addition, a major difference between the systems is the executable data structure which we implemented, as opposed to the interpretive structure developed for GRIN. Their design has a PDP-9 dedicated to each display which they calculate is idle much of the time and therefore should be used if possible to speed display execution. The PDP-9 is used to interpret everything except the actual display code. In our design the main CPU is time sliced between three displays and several "background" jobs and isn't available for file interpretation, forcing us to devise an executable data structure.

Figure 1

Description of AIDS graphical data structure

The basic elements of the graphical data structure are
the SET, INSTANCE, IMAGE, and LABEL blocks.
An IMAGE is a collection of points, lines, and/or
characters which is considered to be the most basic form
of display entity. It is the only element which contains
displayable code. An INSTANCE defines the occurrence
of an IMAGE. Each time an image is displayed on the
screen it is specified and positioned by means of an
INSTANCE. A SET is a collection of INSTANCES
whose occurrence can again be defined by another
INSTANCE. An elaborate "tree-like" structure of
these basic elements can be developed describing many
applications in an organized manner and allowing quick
retrieval of information at any level of the structure.
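The four block types admit a compact rendering in Python (our illustration of the GRIN-like structure, not the 24-bit executable format AIDS actually built; the traversal function is likewise invented):

```python
class Image:
    """Points, lines, and/or characters; the only displayable element."""
    def __init__(self, name, primitives=()):
        self.name = name
        self.primitives = list(primitives)

class Set:
    """A collection of INSTANCEs, itself placeable by another INSTANCE."""
    def __init__(self, name):
        self.name = name
        self.instances = []

class Instance:
    """One occurrence of an IMAGE or SET, with position and optional LABEL."""
    def __init__(self, target, x=0, y=0, label=None):
        self.target, self.x, self.y = target, x, y
        self.label = label            # free-form non-graphical data

def count_occurrences(node, image_name):
    # Walk the tree and count displayed occurrences of a named IMAGE --
    # the kind of retrieval the multi-level structure is meant to allow.
    if isinstance(node, Image):
        return int(node.name == image_name)
    return sum(count_occurrences(i.target, image_name) for i in node.instances)
```

A house built from a WALL set holding two WINDOW instances reports two windows from a single traversal, and each window's INSTANCE can carry its own LABEL (here, hypothetical sash dimensions).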
A trivial example of the structure and a convenient
way of diagramming it (as specified by Christensen) is
given here. Consider the picture of a house in Figure 1.
The typical non-structured display file for this picture
might describe each element in a sequence of instructions
or subroutines containing instructions. Figure 2 shows a
possible AIDS data structure representation of this
picture which is not significantly improved over the
subroutine description method. However, Figure 3
illustrates a much more complicated representation of
the picture organized by the programmer according to
his particular need to retrieve specific information at the
various data structure levels. Note on the diagrams the
representation of IMAGES as rectangles containing a
drawing of what the IMAGE will produce on the screen,
INSTANCES as lines connecting SETS with IMAGES
or other SETS (an INSTANCE defines the occurrence
of a SET or IMAGE) and SETS as circles which collect
together INSTANCES at various levels. The illustrated

Figure 2


structure shown in Figure 3 is exaggerated but for many
complex applications this type of multi-level structure
is vital.
The LABEL block is used to store non-graphical data
associated with any graphical element. It could be
attached to each of the INSTANCES defining occurrences of the IMAGE WINDOW in Figure 3, stating
the sash dimensions of each window. The organization
of data within a LABEL block is entirely up to the
programmer; the system provides a means of entering
and retrieving data and of associating the block with
any graphical element. As a follow-on to the AIDS
system, a list processing capability could be implemented to improve the handling of these blocks.
Examples of AIDS Structure Manipulation Statements
are given in the next section.
Compiler level language requirement

The pre-compiler idea has been used on a number of
systems and we feel it is a valuable tool in providing the
flexibility which an interactive graphics programming
system requires. An alternative is to write a complete
compiler and unless significant resources are available
this approach should be undertaken with considerable
caution. Fortran is a good computational language but
is not well suited to either interactive specification or
graphical manipulation; therefore we decided to maintain the algebraic qualities of Fortran and let the
pre-compiler handle all interactive and graphic statements. Examples of the AIDS statements as given here
and in the Appendix exhibit little similarity to Fortran
and in particular are designed to be readable by a
non-programmer.
We have already discussed the Interactive Specification Statement. The pre-compiler extracts the necessary
information from these commands and passes it to the
operating system. The Fortran compiler was not


modified to handle interactive capabilities. The following examples show how the AIDS graphic structure
manipulation statements are constructed. First the SET,
INSTANCE, and IMAGE associations are specified by:

SET ALPHA .CONTAINS. INSTANCE BETA
INSTANCE BETA .DEFINES. IMAGE GAMMA

where ALPHA, BETA, and GAMMA are graphic
elements declared at the beginning of the program. The
descriptors SET, INSTANCE, and IMAGE are
optional; the verbs .CONTAINS. and .DEFINES.
determine the nature of the association, and the
pre-compiler further checks that the elements on
each side of the verbs are of the proper type. This is a
valuable service which the pre-compiler furnishes, since
a type error undetected here can cause multiple errors
later. A picture can be specified as a series of individual
statements or as a concatenation in one single statement.
Thus the structure in Figure 2 can be specified as:

SET PICTURE .CONTAINS. INSTANCE A
                       .DEFINES. IMAGE TREE
            .AND.      INSTANCE B
                       .DEFINES. IMAGE HOUSE
            .AND.      C(1)
                       .DEFINES. WINDOW
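The type check the pre-compiler applies to such statements can be sketched as follows (an illustrative Python fragment, not the actual pre-compiler; the rule table covers only the two verbs shown above):

```python
# Sketch of the pre-compiler's type check: each verb admits exactly one
# (left-type, right-type) pair, so a mistyped association is caught at
# compile time instead of causing multiple errors later.

VERB_RULES = {
    ".CONTAINS.": ("SET", "INSTANCE"),      # SET .CONTAINS. INSTANCE
    ".DEFINES.":  ("INSTANCE", "IMAGE"),    # INSTANCE .DEFINES. IMAGE
}

def check_association(left_type, verb, right_type):
    """Return 'ok' or a compile-time diagnostic for one association."""
    expected = VERB_RULES.get(verb)
    if expected is None:
        return f"unknown verb {verb}"
    if (left_type, right_type) != expected:
        return (f"type error: {verb} requires {expected[0]} on the left "
                f"and {expected[1]} on the right")
    return "ok"

assert check_association("SET", ".CONTAINS.", "INSTANCE") == "ok"
assert check_association("INSTANCE", ".DEFINES.", "IMAGE") == "ok"
assert check_association("SET", ".DEFINES.", "IMAGE").startswith("type error")
```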

Building graphical data within an IMAGE is done
with the INSERT command. The following are typical
examples:

INSERT INTO IMAGE PICTURE: A LINE FROM 100, 200, TO X, Y
INSERT INTO PICTURE: TEXT *THIS IS AN EXAMPLE*
INSERT: FORMAT 100 AT IX, IY/(ARRAY(I), I = 1, 25)
Other capabilities include:

Figure 3

Detaching any element from another
Clearing or copying an IMAGE
Positioning or determining the position of an
INSTANCE
Entering and Fetching Data from a LABEL Block
and associating it with any graphic element
Showing (causing to be displayed) any SET and all
structure below that SET
Destroying any element not currently needed to
conserve core


Spring Joint Computer Conference, 1971

Many of the functions of the AIDS pre-compiler
could have been bypassed by allowing simple calls to be
added to the Fortran input deck. However, the interactive specification tables, the structure generation
statements, the compile-time diagnostics, and the ease
of compiler maintenance make the pre-compiler concept
attractive.

CONCLUSIONS
The principal programmer complaint at the present time
is the slow response time. Some of this can be attributed
to the generality of the system functions and to the
currently untuned status of the overall system. Another
complaint concerns the lack of editing facilities at the
image level. Sufficient "handles" have been designed
into the system to address this problem, but as yet no
effort has been made to specify the functions. A possible
weakness is that the system provides no
queuing of interrupts with associated user control of
priorities.5
On the positive side, a number of programmers with
limited experience have been writing reasonably complex interactive graphics applications in a matter of
days rather than the weeks previously required.

8 R L WIGINGTON
Graphics and speech computer input and output for
communications with humans
Computer Graphics Utility/Production/Art
Thompson Book Company

APPENDIX
As an example of an AIDS program, we have
selected a problem that everyone is familiar with: (1)
draw an object on the screen, (2) position it wherever
desired, and (3) be able to delete it from the screen.
Figure 4 illustrates the State Diagram of the action
which this program is to perform. Table I lists the
Interaction Requirements which are derived from the
State Diagram. With the help of these figures, the
WHEN Interactive statements needed for this program
are easily written. Figure 5 illustrates the display
element structure which is developed by the program.
Initially all display data elements which are going to
be used in the program must be declared. There will be
one SET, four individual INSTANCES and one array
of 20 INSTANCES, and a similar number of IMAGES.
In addition certain Fortran variables are declared Type
INTEGER. The first relational statement creates an
occurrence (INSTANCE CROSS) of the IMAGE
TRACK. The next two INSERT commands create a


REFERENCES
1 N A BALL  H Q FOSTER  W H LONG  I E SUTHERLAND  R L WIGINGTON
A shared memory computer display system
IEEE Transactions on Electronic Computers Vol EC-15 No 5 October 1966
2 I E SUTHERLAND
Sketchpad: A man-machine graphical communication system
Proceedings of the 1963 Spring Joint Computer Conference
3 S T WALKER
A study of a graphic display computer-time-sharing link
Masters Thesis Electrical Engineering Department
University of Maryland College Park June 1968
4 C CHRISTENSEN  E N PINSON
Multi-function graphics for a large computer system
Proceedings of the Fall Joint Computer Conference 1967
5 R G LOOMIS
A design study on graphics support in a Fortran environment
Proceedings of the Third Annual SHARE Design Automation
Workshop New Orleans La May 1966
6 W M NEWMAN
A system for interactive graphical programming
Proceedings of the 1968 Spring Joint Computer Conference
7 W M NEWMAN
A high-level programming system for a remote time-shared graphics terminal
Pertinent Concepts in Computer Graphics Univ of Illinois Press 1969

"LPH"
Position CROSS
at INSTANCE
Penned

current curve

Figure 4-State diagram of the actions the program is to perform

Figure 5-Display element structure developed by the program


TABLE I-Interaction Requirements for the Example

STATE  CONDITION                 RESPONSE

1      DRAW Light Button         Add an INSTANCE and IMAGE to the display. Position the
                                 Tracking Cross at 500,500 on the screen and enable State 2.
1      MOVE Light Button         Enable Light Pen for selection of object to move and enable
                                 State 3.
1      ERASE Light Button        Enable the Light Pen for selection of object to be erased.*
                                 Enable State 4.

Draw Function
2      Button 1 L                Get starting point of curve to be drawn from KNOB 1 and
                                 KNOB 2; OLDX=KNOB1; OLDY=KNOB2.
2      KNOB1 or KNOB2            Position Tracking Cross according to current counts of KNOB1
                                 and KNOB2; CURX=KNOB1; CURY=KNOB2.
2      Button 2 L                Add a line segment to the current IMAGE from OLDX, OLDY
                                 to CURX, CURY; OLDX=CURX, OLDY=CURY.
2      Button 3 L                Curve drawing is complete. Remove Tracking Cross from screen.
                                 Set up for next IMAGE. Enable State 1.

Move Function
3      KNOB 1 or KNOB 2          Same as KNOB1 or KNOB2 in State 2.
3      LPWLB (Light Pen Hit      Determine position of object penned. Position tracking cross at
       with Light Buttons)       this location. Set KNOB1, KNOB2 to same X and Y values.
3      Button 4 L                Place object to be moved at current X, Y position of cursor.
                                 Remove cursor from screen and enable State 1.

Erase Function
4      NLB                       Erase object selected.

0      Button 24 L               End wait mode. Job is complete.

* Note: The user may not erase the Light Buttons 'MOVE,' 'DRAW,' or 'ERASE'; movement of these
Light Buttons is, however, allowed.
cross in IMAGE TRACK, the occurrence of which is
positioned at 500,500 by the POSITION command.
The next six commands similarly create occurrences of
the words DRAW, MOVE and ERASE. The Fortran
variable I, used to control allocation of images created
by the DRAW function, is set to 1. The next command
causes SET PICTURE to be displayed. Note that
nothing will appear on the screen, since nothing is
yet attached to PICTURE and SETs contain no
display 'ink.' ENABLE STATE 1 causes the system to
activate those interactive devices defined in State 1. All
other devices are inactive and will not burden the
system with wasted interrupt processing. The WAIT
command notifies the system that the main program no
longer needs the CPU. As soon as a condition is met for
the current state, the program will be given CPU time
and execution will begin at the first command of the
response corresponding to the condition met.
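The enable/WAIT discipline described above amounts to a small state-driven dispatch table. A hedged sketch follows (illustrative Python; a plain event list stands in for device interrupts, and all names are hypothetical), including the State 0 rule used later by the panic button:

```python
# Sketch of the WHEN/ENABLE STATE/WAIT model: only conditions of the
# currently enabled state are examined (state 0 conditions are active in
# every state), and a matching event runs the registered response.

responses = {}                      # (state, condition) -> response function
current_state = 1

def when(state, condition):
    """Register a response for a condition, active only in `state`."""
    def register(fn):
        responses[(state, condition)] = fn
        return fn
    return register

def wait(events):
    """Stand-in for WAIT: dispatch each event against the current state."""
    global current_state
    for ev in events:
        fn = responses.get((current_state, ev)) or responses.get((0, ev))
        if fn:
            fn()

@when(1, "LB DRAW")
def on_draw():
    global current_state
    current_state = 2               # ENABLE STATE 2

@when(2, "BUTTON 3")
def on_done():
    global current_state
    current_state = 1               # curve complete, back to State 1

wait(["LB DRAW", "BUTTON 3"])
assert current_state == 1
```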
There are three active conditions in State 1: LB
DRAW, LB MOVE, LB ERASE. LB is an AIDS
mnemonic for Light Button. LB DRAW notifies the
system that INSTANCE 'DRAW' is a light button and
that whenever State 1 is enabled, the system should
display this Light Button. Thus when State 1 is enabled
the words DRAW, MOVE, and ERASE will appear in
the upper right corner of the screen. Subsequent selection
of one of these Light Buttons causes the corresponding response to be executed.
Selecting Light Button 'DRAW' causes INSTANCE
CROSS to appear on the screen and INSTANCE
BRANCH(I) to be attached to PICTURE. Only the
cross will be seen, since no ink has been inserted into
LEAF(I). Next, State 2 is enabled, the KNOB counts
for KNOB 1 and KNOB 2 are initialized to 500,500,
and the response is complete. Selecting Light Button
'MOVE' turns on the Light Pen for SET PICTURE and
enables State 3.
In States 2 and 3, rotation of either KNOB 1 or
KNOB 2 will update CURX and CURY with the
counts of KNOB 1 and KNOB 2 respectively.
Conditions of the form "if button n L" cause the
system to turn on the lamp associated with button n


TABLE II-DRAW-MOVE-ERASE Program

      PROGRAM DRAW
C     THIS IS AN AIDS PROGRAM WHICH ALLOWS ONE TO DRAW FIGURES,
C     MOVE THEM, AND DELETE THEM FROM THE SCREEN
C     DECLARATION STATEMENTS
      SET PICTURE
      INSTANCE CROSS, DRAW, MOVE, ERASE, BRANCH (20)
      IMAGE TRACK, DRAWL, MOVEL, ERASEL, LEAF (20)
      INTEGER OLDX, OLDY, CURX, CURY
C     INITIALIZATION
      INSTANCE CROSS .DEFINES. IMAGE TRACK
      INSERT INTO IMAGE TRACK: A LINE FROM -10,0, TO +10,0
      INSERT: LINE 0,-10, 0,+10
      POSITION INSTANCE CROSS, AT 500,500
      INSERT DRAWL: TEXT *DRAW*, 900,900
      INSERT MOVEL: TEXT /MOVE/, 900,850
      INSERT ERASEL: TEXT XERASEX, 900,800
      INSTANCE DRAW .DEFINES. IMAGE DRAWL
      MOVE .DEFINES. MOVEL
      ERASE .DEFINES. ERASEL
      I=1
      SHOW SET PICTURE
      ENABLE STATE 1
C     ALL CONDITIONS IN STATE 1 ARE NOW ENABLED
      WAIT
      STOP
C     INTERACTION STATEMENTS
C     DRAW FUNCTION
      WHEN IN STATE 1, IF LB DRAW, THEN
      SET PICTURE .CONTAINS. INSTANCE CROSS
      .AND. INSTANCE BRANCH (I)
      .DEFINES. LEAF (I)
      ENABLE STATE 2
      PUT 500,500 IN KNBCNT
      ENDRESPONSE
C     UPDATE TRACKING CROSS
      WHEN IN STATES 2,3, IF KNOB1/KNOB2, THEN
      GET KNBCNT INTO CURX, CURY
      POSITION INSTANCE CROSS AT CURX, CURY
      ENDRESPONSE
C     SETUP STARTING POINT OF LINE
      WHEN IN STATE 2, IF BUTTON 1 L, THEN
      GET KNBCNT INTO OLDX, OLDY;
C     ADD A LINE SEGMENT FROM LAST ENDPOINT
      WHEN IN STATE 2, IF BUTTON 2 L, THEN
      INSERT INTO IMAGE LEAF (I): A LINE FROM OLDX, OLDY, TO CURX, CURY
      OLDX=CURX
      OLDY=CURY;
C     CURVE COMPLETE, GO BACK TO INITIAL STATE
      WHEN IN STATE 2, IF BUTTON 3 L, THEN
      DETACH CROSS
      I=I+1
      ENABLE STATE 1;


TABLE II-(Continued)
C     MOVE FUNCTION
      WHEN IN STATE 1, IF LB MOVE, THEN
      SETUP SET PICTURE: PENON
      ENABLE STATE 3;
C     LIGHT PEN ENABLED, NOW SELECT OBJECT TO MOVE
      WHEN 3, IF LPH,
      GET INSPEN INTO DUMMY
      GET INSPOS OF INSTANCE DUMMY INTO X, Y
      PUT X, Y INTO KNBCNT
      POSITION CROSS, AT X, Y
      SET PICTURE .CONTAINS. INSTANCE CROSS;
C     CROSS MOVED TO DESIRED LOCATION, NOW MOVE OBJECT
      WHEN 3, IF BUTTON 4 L,
      POSITION DUMMY AT CURX, CURY
      ENABLE STATE 1
      DETACH CROSS;
C     ERASE FUNCTION
      WHEN 1, LB ERASE, SETUP SET PICTURE: PENON
      ENABLE STATE 4;
C     CHOOSE OBJECT TO BE DELETED
      WHEN 4, NLB,
      GET INSPEN INTO DUMMY
      DESTROY DUMMY
      GET IMGPEN INTO DUMMY
      DESTROY DUMMY
      ENABLE STATE 1;
C     PANIC BUTTON
      WHEN IN STATE 0, IF BUTTON 24 L, THEN ENDWAIT;
      END

when that state is enabled, as well as specify an interaction requirement.
"DETACH CROSS" removes the cursor from the
screen.
Condition LPH in State 3 specifies that Light Buttons
should be treated as normal display entities; that is, the
Light Buttons as well as the constructed display objects
may be moved with the move function.
Condition NLB in State 4 specifies that the light pen
is enabled but Light Buttons are not to be displayed, so
they cannot inadvertently be erased.
DESTROY DUMMY frees the memory occupied by
object DUMMY.
Finally, the State 0 response specifies that the main
program is to continue; the main program will then execute
the Fortran STOP statement. A condition in State 0 is
active in all states, so button 24 defines our panic-mode
exit.

CRT display system for industrial process
by T. KONISHI and N. HAMADA
Hitachi Research Laboratory of Hitachi, Ltd.
Hitachi, Ibaraki, Japan
and

I. YASUDA
Ohmika Works of Hitachi, Ltd.
Hitachi, Ibaraki, Japan

INTRODUCTION

Recently, in such industrial fields as steel mills, power
stations, and chemical plants, the use of computer control
systems is being promoted more and more to improve
productivity. For the efficient use of a computer
control system, the various information generated in the
form of characters, graphs, etc., must be accurately
communicated, with fast response, between man and
machine.
As for data input to the computer, punched cards
or punched tapes prepared on a keypunch are transferred
to the computer's memory through a card reader or
tape reader. In this case, however, it is usually necessary
to verify mispunches on another keypunch or to detect
rejects, and it is always difficult to change the information on a punched card or tape. Thus this method
expends additional man-hours.
Various output information from the control computer, such as process conditions and operating programs,
is usually displayed on a panel which incorporates
lamps or numeric display tubes. These methods,
unfortunately, allow little flexibility for changing the
displayed format or contents, and as the system is
scaled up it becomes difficult to display everything in a
limited area.
For these reasons, the development of a new
display system that can simplify the input/output of
information needed for controlling industrial plants has
become essential. Many kinds of CRT (cathode-ray tube)
displays1-3 have already been developed as peripherals
for business computers that
satisfy such requirements to some extent. These
displays, however, are not always suitable for
industrial applications with demanding ambient-temperature
or reliability requirements.
In some of these CRT displays a special deflection
method or a special CRT4-6 is used, but these in turn
raise the cost of the deflection circuits and add
complexity to the control circuits or their maintenance.
Further, although they may serve an individual purpose,
they are not always suitable for incorporation in a wide
range of computer control application systems.
In consideration of surveyed user requirements
and of the shortcomings of conventional CRT displays
as experienced in actual industrial processes, we have
developed a unique industrial-process CRT display
system using standard television equipment. This
compact display system can be produced at low cost,
promising higher flexibility, easy maintainability, and
higher reliability.
This device offers easy man-machine communication
in computer control systems over a wide field of
applications.7 Such applications include indicating each
function of a mill line in steelmaking, displaying the
operating state of the many tracks in a railway
marshalling yard, displaying circuit-breaker operation
at a power-system substation, and so on.
This report describes the construction, performance,
operational principle, special features, and model test
results of this new CRT display system.


on plural displays, and for communicating with the
computer, file memories, and others.

(a) Optional units for displaying trend graphs or special
patterns. The trend graph unit is used to display a
physical quantity that changes with time, just like
a pen recorder. The special pattern unit is used for the
display of histograms, work programs, skeletons of
power distribution systems, etc.

(b) Interface units for the viewer.

Figure 1-CRT display system
CONSTRUCTION AND PERFORMANCE OF
THE CRT DISPLAY SYSTEM

A block diagram of the CRT display system is shown
in Figure 1. It consists of a common basic unit and of
optional units that can be selected according to the
application.

Basic unit

A basic unit consists of a keyboard, basic control
circuits, and a viewer using a monochrome or color CRT.
The keyboard includes character keys composed of
alphanumeric and special letters, cursor keys which
control the cursor position, special keys used to write
simple diagrams by combining special patterns, color
keys used to select a color from among seven, and control
keys used to select such modes as "transmit," "receive,"
"write," or "print." For example, when an operator
wishes to display characters on the viewer, he sets
the mode control key to "write"; after positioning the
cursor on the viewer, with the cursor keys, at the desired
initial position from which the display should begin, he
selects a desired color with a color key; then he may begin
depressing the character keys.
The basic control unit consists of a display control
circuit, a character control circuit, an interface circuit,
etc., and it handles the signals for displaying on the
viewer, as explained later. As the viewer, a commercially
produced monochrome or color TV set is employed.

When the same contents are displayed on several sets
of viewers, a multiplexer specified for monochrome or
color TV is used. When the viewer is located
remotely, within a maximum distance of 2 km,
a line buffer unit is used.

(c) Interface units for input/output optional units.

Interface units are used when the previously described
basic unit or optional units are connected to other
input/output optional parts.
The CPU (central processing unit) interface, used
when the display device is connected with a control
computer (e.g., the HITAC-7250, HIDIC-500, HIDIC-100
or HIDIC-50, cassette controller), operates as a
peripheral device. A cassette tape recorder, disk
memory, or core memory is used as a file memory. As
shown in Table I, the cassette tape recorder has the
largest memory capacity and the core memory the
highest operating speed. An optimum selection among
them is made according to the application field.
The communication controller is used when the
display device is connected with a control computer
through a communications network such as a telephone
line. As an example of a communications controller,
a DC communication system of 2,400 baud has been
Optional units

Several optional units are available for displaying
characters other than those on the standard keyboard,
patterns or marks, or for displaying the same contents

TABLE I-Characteristics of Memories

Device                   Use Condition   Memory Capacity      Speed              Price Ratio
Cassette Tape Recorder   On line         200 Frames/1 Tape    1-10 min/1 Frame   1 - 0.1
Disc Memory              On line         50 Frames/1 Disc     20 ms/1 Frame
Core Memory              On line         10 Frames/4K words   1 ms/1 Frame

Figure 4-Example of TV drive signals

Figure 2-Block diagram of display principle

developed and is in actual use in a management
information system at Hitachi Works.
The printer control unit is used when a hard copy
of the characters displayed on the viewer is required.

OPERATIONAL PRINCIPLE

Since a commercially produced standard TV receiver
is used as the viewer, the operational principle of the
display is almost the same as that of the USA and
Japanese standard television systems. Thus a character is
displayed as a set of bright (5 x 7) dots on the CRT
surface by applying brightness control to the raster-scanning
electron beam, which has a horizontal synchronous
frequency of 15.75 kHz and a vertical synchronous
frequency of 60 Hz.

Display of characters and special patterns

Figure 2 shows a block diagram of the display
principle. When the alphanumeric letter keys A and B and a
special pattern key are pushed down in that order, the
keyboard will generate 8-bit code signals (the alphanumeric code is ASCII; the special pattern code is
prepared exclusively for this device) corresponding
to each character. These codes are stored
in a memory in the basic controller; the memory
capacity therefore equals the maximum number of
characters displayed on the viewer, and the storing
position is designated by an underline (a cursor)
displayed on the CRT viewer. The cursor
position can be changed easily and arbitrarily with the
cursor control keys.
If a character key is pushed, the character is
displayed above the cursor and the cursor automatically
moves to the next position. The
memory has the capacity to store as many codes as
there are displayed character positions on a viewer
field (e.g., a storing capacity of 40 characters x 13
lines = 520 characters).
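The cursor-addressed refresh memory just described can be sketched as follows (illustrative Python; only the 40 x 13 dimensions come from the text, the class itself is hypothetical):

```python
# Sketch of the refresh memory: a 40-character x 13-line store (520
# codes); writing a character at the cursor advances the cursor to the
# next position automatically.

COLS, LINES = 40, 13

class RefreshMemory:
    def __init__(self):
        self.codes = [0x20] * (COLS * LINES)   # initialize to ASCII space
        self.cursor = 0

    def set_cursor(self, col, line):
        """Move the cursor to an arbitrary character position."""
        self.cursor = line * COLS + col

    def write_char(self, code):
        """Store a code at the cursor; the cursor then advances."""
        self.codes[self.cursor] = code
        self.cursor = (self.cursor + 1) % (COLS * LINES)

mem = RefreshMemory()
mem.set_cursor(0, 0)
for ch in "AB":
    mem.write_char(ord(ch))
assert mem.codes[0:2] == [ord("A"), ord("B")]
assert mem.cursor == 2
```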
When the electron beam of the viewer (TV) is
scanned under the timing control circuit's signals, the refresh
memory sends the stored codes out to the
character control circuit.
From these transmitted codes, and from the signals giving
the position of the scanning line from the timing
control circuit, the character control circuit generates the
brightness control signal necessary for forming the
characters or patterns.
As shown in Figure 3(a), the unit size of one character
or special pattern is determined by 8 dot times in
the horizontal direction and 14 scanning lines in the
vertical direction. Thus, the unit size of a character is

Figure 3-Example of characters and special pattern display,
and their brightness control signals

TABLE II-Special Features of Color Code Storage Method

Memory Construction       Same as monochrome TV
Color Control Circuit     An 8-bit color decoder and an encoder for colors R, G and B are necessary
Interface to CPU          Same as monochrome TV
Interface to Software     Same as monochrome TV
Character Construction    For the color code, a character space occurs at every color change
Data Making               Data making is facilitated by using one byte per character


character is made from a color code occupying one character
space at the head of the character codes. In this way
the memory stores the color code and the character codes
in series. This color code storing method offers the
special features shown in Table II.

Trend graph display

Figure 5-Special patterns for graph display

displayed by controlling the brightness at fixed positions
of an 8 x 14 dot matrix. A special pattern may use
all 8 x 14 dots of the matrix, but a character is limited
to a 5 x 7 dot area. For this reason, when A, B and the
special pattern are displayed, the control signal for each
raster becomes as shown in Figure 3(b).
The brightness control signal and the horizontal and
vertical synchronous signals are mixed at the driving unit
to form the video signal of a standard TV system, and
the viewer's CRT is driven by this video signal. Figure 4
is an example of such a driving signal.
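The per-raster brightness signal generation can be sketched like this (illustrative Python; the one-glyph font table is made up, while the 8-dot cell, 14 rasters per line, and 5 x 7 glyph area come from the text):

```python
# Sketch: one scan line of stored character codes becomes a per-raster
# dot pattern. Each character cell is 8 dots wide and 14 rasters tall;
# the glyph itself occupies a 5 x 7 sub-area of the cell.

DOTS_PER_CHAR, RASTERS_PER_LINE = 8, 14
GLYPH_ROWS, TOP_MARGIN = 7, 2      # 5x7 glyph placed inside the cell

FONT = {                           # 7 rows of 5 bits per glyph (invented)
    "I": [0b11111, 0b00100, 0b00100, 0b00100, 0b00100, 0b00100, 0b11111],
}

def raster_bits(codes, raster):
    """Brightness control bits for one raster of a character line."""
    bits = []
    row = raster - TOP_MARGIN
    for ch in codes:
        if 0 <= row < GLYPH_ROWS and ch in FONT:
            glyph = FONT[ch][row]
            # 5 glyph dots (MSB first) plus 3 blank dots of spacing
            cell = [(glyph >> (4 - i)) & 1 for i in range(5)] + [0, 0, 0]
        else:
            cell = [0] * DOTS_PER_CHAR   # blank raster (margins, spacing)
        bits.extend(cell)
    return bits

line = raster_bits(["I"], 2)         # first glyph row of "I"
assert line[:5] == [1, 1, 1, 1, 1] and len(line) == DOTS_PER_CHAR
```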
Color display

For displaying colored characters, the specified color
code must be stored in the memory. In this case, a color
code storage method is adopted in which no extension of the
memory format is necessary.
In this storage method, the codes left over after
assigning characters and special patterns are
allocated to the colors, for example red, green, blue
and so on, totaling 7 colors, and color identification of

Figure 6-Piecewise linear approximation of a curve (dt_i = t_i - t_(i-1) = dt, constant; i = 0, 1, 2, ..., 25)

The deflection method of a CRT electron beam falls
into two main classes: random positioning
and raster scanning. With raster scanning, since a
commercial standard TV set can be used, the cost of its CRT
control circuit is much lower than in the case
of random positioning; but it has been very difficult to
display a continuous curve, and the programming was
very troublesome. We have, however, succeeded in
solving these problems and developed a new graphic
display method by raster scanning, as described
subsequently.

(a) Principle of trend graph display.

The display of graphs by raster scanning can be
divided into two categories: a curve approximation
method and a piecewise linear approximation
method. In the former, the error of approximation can
be made as small as the viewer's resolution.
However, it is not advantageous from the viewpoint
of cost, as it needs a large memory capacity as the
number of line elements increases. Therefore the
piecewise linear approximation is selected.
The piecewise linear approximation method is further
classified into a dotted pattern method and a brightness line
method. In the dotted pattern method a curve is
displayed by combining several special patterns, as
shown in Figure 5. The principle of operation of this
method is identical to that of the special patterns
explained under character displaying. Although this
method is superior for histogram display etc., its
drawback is that it requires a large number of patterns
to display a curve.
In the brightness line methods, when displaying a
curve designated as (a) in Figure 6 by a broken line


Figure 8-Block diagram of character and graph display of the model group

The comparison of methods obtained under these assumptions is shown in
Table III. As a result, it was found that the digital
method was most advantageous for its lower cost, and
it was adopted. This method, however, has some limits
in displaying the curve (i.e., the origin must be in the
upper left corner, the vertical direction must be
selected as the time axis, and the total number of piecewise
lines must be under 26). However, a unit length of
each axis can be selected or changed by the software.
Thus far, the operational principles of the character
display, the color display and the trend graph
display have been described; further details in these
regards are omitted.
OUTLINE OF MODEL SET

The block diagram of a model device is shown in
Figure 8. This device is able to display characters, trend
graphs, and their scales on the viewer of a monochrome
or color TV receiver. In this chapter the control circuit
of the trend graph display is explained further.
As illustrated in Figure 6 in the
previous chapter, when a curve is approximated by
broken lines, the vertical (time) axis is equally
divided in units of p rasters, and a unit time ti+1 - ti, or
section, is assigned to each piecewise line. In a section,
the horizontal component of a piecewise line (for example
q1 - q0) is equally divided by p, so that a
line element of constant length (q1 - q0)/p is displayed
while each raster moves along the line (b) in
Figure 6 in the former section. In the model device, the
number of rasters p is selected as p = 7 in order to limit
the memory capacity. A memory capacity accommodating
the same number of words as the number of divisions of the time
axis is necessary; in this case the memory capacity is
26 words. The data format of a word is constructed as
shown in Figure 9, and the length of the line component


Figure 11-Example of graph display

controller that generates the brightness control signal
from the coincidence detectors; and (7) a timing
controller that generates the timing and control signals.
Next, the function of the trend graph display
circuit will be explained. In the example of a trend graph
display shown in Figure 11, data with line increment
lengths dq = dqi, dq' = dq'i (i = 1, 2, ..., 7) and
so on are fed to the graph memory. Under the control of the
timing circuit, the contents of the memory are read out,
the calculations q1 = q0 + dq, q2 = q1 + dq,
and so on are carried out in the arithmetic gate, and
the results are stored in the start and stop registers
respectively. As soon as the brightened spot of raster r1
enters the graph display area, the increment counter
starts counting; when the contents of the counter
coincide with the start register (q0), the brightness
control signal is generated by the brightness controller.
This signal output is maintained until the
contents of the counter coincide with the stop register
(q0 + dq). Just before the scanning spot moves to the
next raster, r2, the contents of the start register are replaced
by q1, and the contents of the stop register are replaced by
q1 + dq, by transfer control between the two registers (as
shown in the upper row of Table IV).
When the slope of the broken line is negative
with respect to the time axis, that is, of reverse
slope like dq'i in Figure 11, information is
transferred between the two registers as shown in the
lower row of Table IV.
Since the model device uses 6 bits to specify
the length of the increment lines, the slope of the

Figure 9-Data format of graph display

is indicated by the INC part. This memory is termed the
"graph memory."
The circuit construction of the trend graph display is
shown in Figure 10. As is clear from the figure, the
hardware consists of several circuits: (1) a graph
memory with a capacity of 26 words, in which the
length of the divided line elements of every piecewise line is
stored; (2) start and stop registers that set the start and
stop points of the brightened line on each raster; (3) an
arithmetic gate that calculates the traveling distance of the
brightened spot for every raster; (4) an increment
counter that digitally encodes the position of the brightened
spot; (5) a coincidence circuit that detects
coincidence between the data of the start and stop
registers and of the increment counter; (6) a brightness

Figure 10-Circuit construction of graph display

TABLE IV-Transfer Control Between Registers According to Slope of Increment

Slope of Increment   Contents of Start Register       Contents of Stop Register
Positive             (Stop Register)                  (Stop Register) + (Increment)
Negative             (Start Register) - (Increment)   (Start Register)

piecewise line can be selected from 0 to ±63 with
respect to the vertical axis.
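For a positive-slope section, the register traffic of Table IV reduces to a short loop. The following sketch (illustrative Python with hypothetical names, not the hardware design) reproduces the per-raster start/stop spans for one piecewise line of p = 7 rasters:

```python
# Sketch of the start/stop register mechanism for a positive-slope
# section: on each raster the brightened span runs from the start
# register to the stop register; between rasters the registers are
# advanced by the increment, per the upper row of Table IV.

def section_spans(q0, inc, rasters=7):
    """Brightened (start, stop) span per raster for one piecewise line."""
    start, stop = q0, q0 + inc       # registers loaded from the arithmetic gate
    spans = []
    for _ in range(rasters):
        spans.append((start, stop))
        # transfer control between registers (positive slope):
        # start <- (stop register), stop <- (stop register) + (increment)
        start, stop = stop, stop + inc
    return spans

spans = section_spans(q0=0, inc=3, rasters=7)
assert spans[0] == (0, 3)
assert spans[1] == (3, 6)
assert spans[-1] == (18, 21)
```

With a constant increment per raster, the spans advance by the same amount each scan line, which is exactly what makes the broken line appear straight within a section.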
TESTED RESULTS

Some examples of character display are shown in
Figure 12(A), and a trend graph display is shown in
Figure 12(B). The test model performed reliably at
ambient temperatures from 0 to 50°C with a source
voltage of 100 V, +10 percent, -15 percent.
SPECIAL FEATURES OF THE DISPLAY
DEVICE
The newly developed display system offers the
following special features:
(1) Higher reliability and simpler maintenance

Figure 12 (B)-Trend graph display

Because the component elements are carefully selected, this device can be used under severe conditions
such as those commonly encountered by electronic devices for
industrial use; it operates normally even at
ambient temperatures from 0 to 50°C and with a voltage
variance of 100 V, +10 percent, -15 percent.
ICs and LSIs are used in the logic elements, in the
character generator and in the main memory, and an
all-solid-state transistor TV receiver is used as the
viewer.

as blocks and are incorporated into each standard unit
of specific performance. These are easily expandable to
meet various application requirements such as changes
or additions of the characters, colors, trend graph
performances, and such as enlargements of the display
units and the printer units or such as an installation of
long-distance transmission system between the control
unit and the viewer unit, and so on.

(2) Higher flexibility

(3) Lower cost

+

The basic units such as keyboard, control unit,
display unit, source unit, etc., of the device are designed

As the result of the development of a new control
method, a commercially produced low-priced TV set
can be used for the viewer. The use of IC or LSI has
enabled the reduction of the component parts, as well as
the easy manufacture and inspection since the control
circuits were assembled into each unit of its specific
performance. By optimum selection of options from
many available here, a display system with a high
performance per cost can be achieved.
EXAMPLES OF APPLICATION
As shown in Table V, this display system is enjoying
many practical applications, and in the future, we
anticipate its use in every industrial field.
Figure 12(A)-Character display

CONCLUSION

This report has described a newly developed industrial
process display system, and explains the construction

130

Spring Joint Computer Conference, 1971

and performance of the system as well as the operational
principle and special features of the character display, color
display, and trend graph display.
From the test results of the prototype model, it has
been proved that this device can operate normally as
expected and demonstrates each of its unique performance
features. Presently, this type of display system is produced in
quantity and is employed in various industrial fields.

TABLE V-Examples of Industrial Applications

Application (and system construction) | Objective | Performance (optional units)

Management information system of wide flange mill
  Objectives: 1. Monitoring of process; 2. Display of operation procedure; 3. Inquiry system; 4. Data setting or adjustment
  Optional units: (1) Trend graph; (2) Continuous color; (3) Multiplexer for color; (4) Blink; (5) Line buffer; (6) Control electronics

Economic load dispatch and monitoring of automatic dispatch system (H-8400 x 2 + H-500 CRT controller + CRTs)
  Objectives: 1. Table display of predictive load; 2. Data setting or adjustment
  Optional units: (7) Double display; (8) Special pattern

Miniaturization of monitor panel for concentrated control of substations (telemeter + data exchange controller + H-7250; monitoring use)
  Objectives: 1. Constant monitoring of substation system; 2. Flicker on condition change of circuit breakers
  Optional units: (2), (3), (4), (6), (7), (8)

Supervisory control of terminal substation system
  Objectives: 1. Monitoring of distributed system; 2. Display of trouble
  Optional units: (3), (6), (8)

I/O use of DDC system (process + H-100 + CRT controller + CRT)
  Objectives: 1. Data display; 2. Diagram (flow chart, block diagram, graph) display
  Optional units: (3), (4), (6); (9) Printer; (10) Alarm

Man/machine interface of management information system
  Objectives: 1. Information retrieval; 2. Control of work progress; 3. Management of personnel inventory
  Optional units: (8); (11) Communication

REFERENCES

1 The computer display review
  Adams Associates Vol 1 August 1966
2 H S CORBIN
  A survey of CRT display consoles
  Control Engineering Vol 12 No 12 pp 77-83 December 1965
3 D J THEIS L C HOBBS
  Low-cost remote CRT terminals
  Datamation Vol 14 No 6 pp 22-29 June 1968
4 S H BOYD
  Digital-to-visible character generator
  Electrotechnology Vol 75 No 1 pp 77-84 January 1965
5 H L MORGAN
  An inexpensive character generator
  Electronic Design Vol 15 No 17 pp 242-244 August 1967
6 F W KIME A H SMITH
  Data display system works in microseconds
  Electronics Vol 36 No 48 pp 26-29 November 1963
7 R L ARONSON
  CRT terminals make versatile control computer interface
  Control Engineering Vol 17 No 4 pp 66-69 April 1970

Computer generated closed circuit TV displays with remote
terminal control
by STANLEY WINKLER* and GEORGE W. PRICE
Executive Office of the President, Office of Emergency Preparedness
Washington, D.C.

INTRODUCTION

In a large interactive computer system, most users with
remote terminals will have a primary interest in their
own specific queries. These same users may also have a
common interest in or requirement for general information which is being concurrently processed. While they
can individually query the computer from their own
terminals, this procedure can be time consuming and
inefficient, particularly if the time of availability of the
general information is not known in advance. The
suggestion to explore the use of closed circuit TV was a
natural one, since a closed circuit TV system was
available.
The use of closed circuit TV offered a number of
advantages. A TV display is easy to use, and each
monitor can be viewed by a group rather than by only a
single individual. The inherent familiarity with TV
made user acceptance almost automatic. There were
cost advantages not only because the system was
available but also because TV monitors and their
maintenance are relatively inexpensive.
There were two disadvantages which initially caused
some concern. The first was the read-only nature of
closed circuit TV. This was overcome pragmatically by
allowing TV viewers to interact with the system by
telephoning any questions to the Display Control
Operator, who could obtain and input the replies to these
queries for subsequent display, and also by co-locating
the TV displays with a remote interactive terminal
wherever possible. The second disadvantage was the
possible exclusion of remote users who did not have
access to the closed circuit TV. This generated the
requirement that general information, such as that
displayed on the TV monitors, should be able to be
directed to any, or all, remote terminals.
The system described in this paper was developed
within the constraints imposed by the equipment
available for the terminals and by the characteristics of
the available computer. The design was intended to
satisfy an operational need, but the resulting system
contains considerable generality and flexibility. In the
second section, we state the design philosophy underlying the development; in the third section, the system is
described; in the fourth section, the software, a display
sub-program callable from any main program, is
discussed; in the fifth section, the performance of the
system is briefly outlined; and in the last section, brief
mention is made of possible improvements and future
developments.
Subsequent to the completion of the work described
in this paper, our attention was called to the work of
Bond, et al., at Carnegie-Mellon University.1 They
describe an interactive graphic monitor program for use
in a batch processing computer system with remote
entry. Their system, although considerably different
from our system, is nonetheless of general interest.

DESIGN PHILOSOPHY
The system was developed over a four month period
to meet a specific operational need. This schedule
dictated that only currently available equipment would
be used, since hardware development or modification
was not feasible in the time available. We wanted to
display, on an available closed circuit TV system, a
combination of pre-stored data, results of computations
and data inserted from a remote terminal.
The system had to be simple to operate and
convenient to use. A permanent record of each display
was required, as well as the ability to review each display
prior to placing it on the closed circuit TV. The length
of time during which a display remained visual was to be
under the control of the Display Control Operator who

* Now with the Systems Development Division, IBM Corporation, Gaithersburg, Maryland.
131

132


was to have the option of aborting any display, either
prior to its appearance on the TV circuit or at any time
after it was displayed.
It was important that the desired information be
displayed in a timely fashion and that the design of the
system should not limit the number of closed circuit TV
monitors. Desirable features of the system included the
capability to replace only a portion of a display as well
as the capability to display a fixed pattern or table with
changeable data. Some form of emphasis at the option
of the Display Control Operator was considered useful.
It was desired to write the handler for the displays in
such a manner that any user program written in
FORTRAN, COBOL or any other higher-level language,
could supply results of computations, data and information for the display. Finally, it seemed desirable to
write the program in a modular fashion in order to
facilitate the introduction of additional features or
improvements suggested by experience in actual
operation.
SYSTEM DESCRIPTION
The system was developed for implementation on the
Office of Emergency Preparedness' UNIVAC 1108
digital computer. The computer has four banks (262,000
words) of main high-speed core storage. The operating
system used on the 1108 is EXEC 8 which has the
capability of operating in real time and demand modes
as well as in remote batch and local batch modes. The
OEP computer system currently handles up to 15 or
20 low-speed remote terminals (e.g., teletype) in an
interactive demand mode as well as a number of
high-speed terminals (operating at 2400 or 4800 baud).
The low-speed remote terminals are connected to the
computer by voice telephone links interfaced with
acoustically coupled modems and the high-speed terminals use standard data sets.
The teletype was selected as the display control
terminal because it combined the advantages of low cost
and ready availability, and met the requirement of
providing a hard copy of each display shown on the
closed circuit TV monitors. The teletype was connected
to the computer in the usual way through a telephone
line. Although the operator at the Display Control
teletype would manage the information to be displayed,
the main program, which generates the displayable
material, could be initiated from any remote terminal,
low speed or high speed.
The video signal necessary to drive the closed circuit
TV system is tapped off the Computer Communication
Inc. CC30 cathode ray tube display terminal. This
terminal consists of a Sony TV set, a keyboard, a buffer
memory, and associated power supplies. The CCI

terminal is directly coupled to the computer through a
1600 baud line. The signal is transmitted to the TV
studio where the video distribution to the closed circuit
TV monitors is made.
The actual distance from the CCI terminal to the TV
studio was about 75 feet, but transmission over
distances of several hundred feet is feasible using
RG 59/U coaxial cable. Four lines connect the CCI
terminal to the studio. The first line carries the video
signal, already mentioned; the second and third lines
are for the vertical and horizontal drive pulses; and the
fourth line, which is optional, is a standard twisted pair
of wires for voice communication. With this voice
circuit, the operator at the Display Control terminal
can provide "live" voice narration or pre-recorded audio
tape messages for any display. A diagram of the system
is shown in Figure 1.
The fact that the CCI terminal contained a TV
receiver as its display element, made it relatively simple
to obtain the useful video signals required. The display
consisted entirely of alphanumeric information. The
quality of the display was very satisfactory and the
picture obtained on the closed circuit TV monitors was
also of very good quality. Figure 2a is a photograph of
the CCI terminal equipment including a display on the
cathode-ray tube, and Figure 2b is a photograph of the
same display on the closed circuit TV monitor. The
picture of the TV monitor screen was taken using a
4 second shutter speed. The normal TV set jitter is
clearly visible, but has no effect on the legibility of the
display.
Although the keyboard of the CCI terminal could
have been used to introduce data into the closed circuit
TV display, it was not actually used in our system since

Figure 1-The display system

Computer Generated Closed Circuit TV Displays

133

Figure 2(a)-CCI terminal equipment with computer generated
display on TV set

Figure 3-Circuit to bypass data set (pin connections and
functions: Data Terminal Ready supplies the proper level to
pins 5, 6 and 8; a transmit pin at one terminal becomes a
receive pin at the other; the remaining pins carry Clear To
Send, Data Set Ready, Signal Ground and Data Carrier Detector)

it would have bypassed the Display Control teletype
and not have furnished the required hard copy. The last
two lines on the face of the cathode-ray tube of the CCI
terminal were not used for the computer generated
display in order to allow for variances in the picture size
on the TV monitors. Probably one line would be
sufficient protection, and the other line could be used
for special one-line messages.
The only significant difficulty encountered in developing this system was in achieving hardware compatibility.
Some experimentation and adjustment was required to

obtain proper synchronization of signals between the
CCI terminal and the TV studio. The direct coupling of
the CCI terminal to the 1108 computer required
bypassing the normal data set interface. This was
accomplished by designing a data set bypass circuit.
The circuit diagram is shown in Figure 3.
A minor, but important, problem was the format of
the data to be displayed. Any of the programs written
for the 1108 computer can be used to provide data for
display on the CCI terminal. Most of these programs
are written to furnish an edited output of 120 to 132
characters per line. The line length is 72 characters on a
teletype and 40 characters on the CCI terminal. For a
suitable display on the TV monitors available to us, the
data had to be reformatted to fit on 20 lines of 36
characters each. The number of lines and the line length
will vary somewhat among closed circuit TV systems.
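The reformatting step can be sketched as follows. This is a minimal illustration of our own, assuming only the page geometry stated above; the function name and the use of Python's `textwrap` are not from the paper:

```python
import textwrap

# Hedged sketch: reflow 120-132 character printer lines into the
# 20-line-by-36-character screens suitable for the TV monitors.
LINES_PER_PAGE = 20
CHARS_PER_LINE = 36

def to_tv_pages(printer_lines):
    """Wrap long printer lines and group them into display pages."""
    rows = []
    for line in printer_lines:
        # Each long printer line becomes one or more 36-character rows.
        rows.extend(textwrap.wrap(line, CHARS_PER_LINE) or [""])
    # Group the rows into 20-line screens, padding the last page.
    pages = [rows[i:i + LINES_PER_PAGE]
             for i in range(0, len(rows), LINES_PER_PAGE)]
    for page in pages:
        page += [""] * (LINES_PER_PAGE - len(page))
    return pages
```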
Figure 2(b)-Photo of computer generated display as seen on
closed circuit TV monitor

THE DISPLAY SUB-PROGRAM

The sub-program is written in assembler language
and contains approximately 500 lines of code. The
system requires that a main program, before it may call
the display sub-program, initialize the data or information to be displayed. This output data is stored in a
buffer containing a maximum of 720 characters, formatted in 20 lines of 36 characters each, and must be in
FIELDATA computer code (octal). Since the CCI
terminal requires ASCII code, the sub-program contains
a FIELDATA to ASCII conversion table. The main
program is any user program modified by adding an
option to permit bypassing the old output instructions
and substituting for them the display sub-program
output instructions. Except for formatting, these output

134

instructions are essentially the same for all main
programs. A flow diagram of the display sub-program is
shown in Figure 4.

Figure 4-Flow diagram of display sub-program

In the sub-program there are a number of external
calls or functions which enable the operator to control
the system and exercise the various options available
within the system. The first of these calls is OPEN,
which performs the initialization of the CCI terminal
display and the initialization of the auxiliary teletype
(aux TTY), if this feature is included. The OPEN call
also establishes the entry of the system into the real
time mode in the computer. The call, CLOSE, takes the
system out of the real time mode and terminates the
CCI terminal as well as the auxiliary teletype. Thus the
CCI terminal, and hence the closed-circuit TV displays,
operate in the real time mode. The control teletype
operates in the customary demand mode.
The OUT, or output, call is the one which determines
whether the information on the control teletype will be
transmitted to the closed circuit TV monitors or not.
If the output is requested (approved) by the Display
Control Operator, then the information is displayed on
the CCI terminal display, sent to the auxiliary
teletype, if required, and the appropriate video signals
are transmitted to the TV master control for distribution
to the monitors.
The call, BLINK, provides the means to cause each
new display to turn on and off for a fixed number of
blinks when the display first appears. This method is
used to call attention to the fact that a new set of data
or information is being displayed. Blinking can also be
omitted at the discretion of the operator. The TIMER
call permits the automatic sequencing of a series of
displays with a predetermined time delay in milliseconds
between each individual display. It is also possible to
retain each display on the TV screens until the next
display is available, or, after a pre-set time, to remove
the display and allow the screens to remain dark until
the next display.
Another function, currently available, enables the
operator to add information to an existing display or to
remove a portion of the data on a display. This feature
was developed to permit the display of tabulated data
with the capability of updating data while retaining the
tabular format.
The simple and modular character of the sub-program
allows easy modification and straightforward addition
of desired features. The approach adopted here was to
retain an essential simplicity and to add only such
features as experience in operational use clearly showed
to be desirable.

PERFORMANCE

The system operates interactively permitting data or
information displays to be prepared and selected for
viewing on closed circuit TV as requested and is under
the full control of an operator. A main program which
either contains or generates the data to be displayed
must be initiated and accessible to either a teletype
(control TTY) or a remote batch terminal. The main
program retains complete control over the operation of
the system and performs "all data handling and computations. Any data verification or modification must
occur within the main program. The display subprogram only activates the display on command. In the
operational system we developed, control is exercised
with a display control teletype. This teletype could be
located anywhere, and is acoustically coupled to the
computer by a regular telephone. The data and control
flow within the system is shown schematically in
Figure 5.
The operation of the system is straightforward.
A request for data from the main program is made at
the display control teletype. The main program then
enters the requested data, which must be a single screen
or page of data, into the output buffer. At the same
time, the data is transmitted to the control teletype
where the data can be reviewed. At the control teletype,
the operator can make a decision either to display or
reject the data sent for review. A "go" decision is made
by typing G and a "no-go" decision by" typing X.
A "go" instruction from the teletype transfers control
of the data to the display sub-program which then
clears out the previous display and initiates the real
time mode display of the data on the CCI terminal.


Simultaneously, the video signal from the Sony TV set
in the CCI terminal is transmitted to the TV studio and
from there distribution is made to the TV monitors in
the closed circuit TV system. Television broadcast of
the data could be accomplished in a similar way.
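The operating cycle just described, with review at the control teletype followed by a go/no-go decision and the OUT, BLINK and TIMER calls, can be sketched as follows. Only the call names come from the paper; the Python wrapper and its parameters are our own hypothetical model of the flow:

```python
# Hedged sketch of the operator's go/no-go display cycle.  `decide`
# models the operator typing G ("go") or X ("no-go") at the control
# teletype; `out`, `blink` and `timer` model the OUT, BLINK and TIMER
# calls described in the paper.
def review_and_display(screens, decide, out, blink=None,
                       delay_ms=None, timer=None):
    """Send each approved screen of data to the CCI terminal / TV."""
    shown = []
    for screen in screens:
        if decide(screen) != "G":      # operator rejected the data
            continue
        out(screen)                    # OUT: transmit to CCI terminal / TV
        if blink:
            blink(screen)              # BLINK: flash the new display
        if timer and delay_ms:
            timer(delay_ms)            # TIMER: hold before the next display
        shown.append(screen)
    return shown
```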
In addition to accepting or rejecting, for display, the
data from the main program, the operator at the control
teletype can insert an addition to a display or originate
an entire message for display. If auxiliary teletypes are
permitted access to the system, the request (an OPEN
call) from the control teletype (for data) also activates
the auxiliary teletypes. The display sub-program then
continues to poll the auxiliary teletypes for an input or
request and acts on any inputs or requests received. The
polling continues until a CLOSE call terminates the
connection to the auxiliary teletypes. The auxiliary
teletypes may request transmission of the current and
subsequent displays, or request the discontinuance of
the displays. They may also transmit messages for
display.
The system described here was tested operationally
over a 12 hour period in a dynamic situation during
which information was continuously received and data
values were rapidly changing. Ten TV monitors and
four auxiliary teletypes were used. A fixed schedule for
the display of general information was established, in
this case, the first ten minutes of each hour. Each of the
ten groups (one for each monitor) had a five minute
time slice during which their special data requests were
displayed. Interruptions to display messages were made
at the discretion of the Display Control Operator. Each
group had direct telephone access to either the Display
Control Operator or an auxiliary teletype and could
easily submit a request for data or information update.
The test successfully demonstrated the capabilities of
the system, satisfying the requirements of the user
groups.

135

Figure 5-How the system works (data and control flow: the main
program answers a request for data by filling the output buffer;
the data goes to the control TTY for review and a go/no-go
decision; the display sub-program responds to control and
auxiliary TTY inputs and drives the CCI terminal, which feeds
the closed circuit TV over a 4-wire connection)

FUTURE DEVELOPMENTS AND
IMPROVEMENTS
A number of improvements to the system have been
considered and, should the need arise, would be incorporated into the system at a future date. Among the more
significant new developments are the ability to include
graphic material in addition to the alphanumeric, the
use of overlays which can be selectively introduced and
removed, and the introduction of special messages from
any telephone. It also seems worthwhile to have the
capability to queue a series of displays and then be able
to select any of them in an arbitrary order.
The addition of a graphic capability would permit
the introduction of maps and the presentation of output
data in the form of curves, and would remove much of
the present limitations on the type of information to be
displayed. It would also exploit the ability of a TV
monitor to present actual pictures together with data
and text. It seems easy enough to introduce pictures or
even live action using a standard closed circuit TV
camera. However, formatting the data and text to
appear properly with the picture has not yet been
worked out.
ACKNOWLEDGMENTS

We want to express our appreciation to all of our
colleagues who assisted and encouraged this effort. In
particular we want to mention Richard Vaughan, whose
help was essential in developing the connection to the
main program, and James Johnson for his help with
the equipment.
REFERENCE
1 A H BOND J RIGHTNOUR L SCOLES
An interactive graphical display monitor in a batch-processing
environment with remote entry
Comm ACM Vol 12 No 11 pp 595-603, 607 1969

The theory and practice of bipartisan constitutional
computer-aided redistricting
by STUART S. NAGEL*
Yale Law School
New Haven, Connecticut

THE THEORY

The theory behind bipartisan constitutional computer-aided redistricting essentially consists of three
normative criteria which any systematic legislative redistricting scheme probably ought to seek to achieve.
First, the redistricting system ought to be feasible, such
that it keeps computer time and other costs down to a
minimum. Second, the system ought to provide legislative districts that will be approximately equal in
population per representative, so as to satisfy the
Supreme Court's equality criterion, and the districts
should be so shaped as to satisfy other legal requirements that relate to contiguity. Third, the system
should consider the impact of the resulting districting
on incumbents and on party balance, so as to minimize
the unhappiness which redistricting might otherwise
produce for political leaders and for diverse political
viewpoints.
These three criteria are summarized in the title of
this paper. The adjective "computer-aided" refers to
computer feasibility. It also emphasizes that the computer aids in redistricting like an elaborate desk calculator and does not do the redistricting itself. The
adjective "constitutional" refers to the federal and
state legal requirements with regard to equality and
contiguity. The adjective "bipartisan" refers to promoting mutual party interests rather than ignoring the
partisan impact that all redistricting inevitably has.1
Computers can usefully supplement traditional hand
methods of redistricting. This is so because the computer, when adequately instructed, has great (1) accuracy, (2) speed, (3) versatility to satisfy many criteria
simultaneously, including legal and political criteria,
(4) ability to break deadlocks by facilitating political
compromises, (5) ability to minimize disruption to
incumbents, (6) inexpensiveness relative to the quality
of the results, and (7) flexibility to allow for local variations and special considerations. The key questions in
this paper relate to what criteria the computer
instructions should seek to satisfy and to what extent
various redistricting programs have addressed themselves to
those criteria.

Satisfying computer feasibility
A logical approach to computer-aided redistricting,
if one had unlimited computer time and funds available,
might be to (1) establish an overall criterion of goodness; (2) have the computer generate every possible
combination of precincts or census tracts into a given
number of districts in the area to be redistricted; and
then (3) apply the optimizing criterion to each of the
districting patterns in order to determine which districting pattern maximizes the criterion.
Assuming agreement could be obtained among lawyers and politicians on a composite optimizing criterion,
such an approach would lack computer feasibility.
With any realistic number of precincts or census tracts
larger than 40, the number of different districting
patterns into which they could be made quickly becomes astronomical and infeasible to handle.2
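The combinatorial explosion is easy to check. Even ignoring contiguity, the number of ways to partition n precincts into k non-empty districts is the Stirling number of the second kind; the sketch below is our own illustration, not from the paper:

```python
from functools import lru_cache

# Hedged sketch: count the partitions of n precincts into k non-empty
# districts (Stirling numbers of the second kind), ignoring contiguity.
@lru_cache(maxsize=None)
def stirling2(n, k):
    if k == 0:
        return 1 if n == 0 else 0
    if n == 0:
        return 0
    # Each new precinct either joins one of the k districts already
    # under construction or founds the last remaining district.
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)
```

Even at the 40-unit threshold the text mentions, stirling2(40, 5) already exceeds 10**24 patterns, far beyond exhaustive evaluation.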
A simple alternative to trying all possible combinations is to (1) start with the prevailing districting
pattern; (2) move each precinct or census tract from
the district that it is in into every other district;
(3) each time a move is made, check to see if the districting has been improved in light of the optimizing
criterion; and (4) each time an improvement is made,
use that districting pattern as the one to be improved
upon, until no further improvements can be made.3
This method can avoid making a high percentage of
moves that would be made in the all-combinations
method by inserting into the computer program certain
* On leave from University of Illinois
137

138


prerequisites that must be met before a move can even
be checked against the optimizing criterion. For example, no move of a precinct from its present district
to another district will be made if the move will cause
either the district to which the precinct is moved or
the district from which the precinct is moved to become
non-contiguous, such that one could not go from any
point in the district to any other point in the district
without leaving the district. Likewise, no precinct will
be moved from its present district if it is the only
precinct or unit within the present district, thereby
destroying the district and decreasing the number of
districts. In addition, the district from which a precinct
is moved must be different from the district to which
the precinct is moved.
The above system of moving each precinct from its
present district to every other district in order to obtain
successive improvements can also be supplemented by
simultaneously trading a precinct from one district for
a precinct from another district. Every pair of precincts
gets an opportunity to be involved in such a trade
provided the above-mentioned prerequisites with regard
to contiguity, multiple-precinct districts, and diverse
districts are met. When a trade is attempted, the
resulting redistricting combination is checked against
the legal and political optimizing criterion to see if an
improvement has been made just as in the single precinct moving approach.
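The moving step, with the prerequisites above built in, can be sketched as follows. The assignment map, scoring function and contiguity test are placeholders of our own; they are not the actual redistricting programs discussed here:

```python
# Hedged sketch of single-precinct hill climbing: move one precinct at
# a time to another district, keeping a move only when the optimizing
# criterion improves, until a local optimum is reached.
def improve_by_moves(assignment, districts, score, contiguous):
    """assignment maps precinct -> district; score and contiguous are
    caller-supplied stand-ins for the optimizing criterion and the
    contiguity prerequisite."""
    best = score(assignment)
    improved = True
    while improved:
        improved = False
        for precinct in list(assignment):
            home = assignment[precinct]
            for target in districts:
                if target == home:
                    continue  # prerequisite: must be a different district
                if sum(1 for d in assignment.values() if d == home) == 1:
                    break     # sole precinct of its district: moving it
                              # would destroy the district
                assignment[precinct] = target  # trial move
                if contiguous(assignment) and score(assignment) > best:
                    best = score(assignment)   # accept the improvement
                    home = target
                    improved = True
                else:
                    assignment[precinct] = home  # reject: undo the move
    return assignment
```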
Alternatives to the moving-and-trading approach
other than the all-combinations approach include such
techniques as the pie-slices approach of Myron Hale,4
the diminishing-halves approach of Edward Forrest,5
and the transportation algorithm of Weaver and Hess.6
A comparative analysis by Michael Strumwasser emphasizing the computer feasibility aspects of these
alternative approaches concluded: "The generalized
swapping algorithms (moving-and-trading), following
closely the human approach to such a problem, offer
a better solution than either geometric allocation (the
pie-slices and diminishing-halves approach) or mathematical programming (the transportation algorithm).
While the latter two approaches are aesthetically satisfying, the simplifying assumptions are violated in
practice."7
One additional aspect of computer feasibility relates
to the use of optical input and output. Edward Forrest
has advocated the use of optical scanners to read
maps as input into the computer,8 but this clearly
seems to be less economically feasible than relying on
the Census Bureau tapes which provide (for each
census tract or enumeration district) information on
population, longitude, latitude, and other miscellaneous
information. Additional clerical work, however, is
needed (1) to show what precincts touch each other

and, if desired, (2) to convert census tract
boundaries and information into political precincts.
Forrest also recommends maps as output, but the cost
would be far higher than an output which says District
1 consists of Precincts A, B, and C, and District 2
consists of Precincts D and E, and so on. From that
verbal information, one can easily draw district lines
on a precinct map showing what precincts are joined
together in the same district. 9

Satisfying the legal requirements
The legal requirements which any computer-aided
redistricting scheme should satisfy consist of equal
population per district and generally contiguity within
each of the districts.
Equality of population can be measured in a variety
of ways. The crudest way, although sometimes quite
dramatic, is to present the ratio between the most
populous single-member district and the least populous
single-member district in the area being redistricted.
This simple approach obviously ignores all the information available about the population of the nonextreme districts. At the most complex end of a continuum of equality measures would be such esoteric
figures as the squared geometric mean10 or the inverse
coefficient of variation.11 These complex measures have
no legal standing in that they have never been cited
as appropriate for measuring equality in a published
court decision or a statute.12
The most favorably cited measure of equality in the
literature is to (1) divide the total population of the
state or area to be districted by the number of districts
or seats in the legislature in order to determine the ideal
population per district or per representative13 and
(2) determine by what percentage the population of
each actual district deviates from this ideal population.
If the percentage deviation from ideal for any district
is more than a few percentage points, then the districting probably represents a violation of the equal
protection clause of the Constitution and the democratic notion of one man, one vote. Thus in the most
recent Supreme Court case dealing with the equality
standard, Missouri's congressional districting was declared unconstitutional even though no district deviated
from the ideal by more than 3.13 percent. Justice
Brennan delivering the opinion of the Court stated
that the "standard requires that the State make a
good-faith effort to achieve precise mathematical
equality."14
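Steps (1) and (2) of this measure can be sketched as follows; the district names and populations are hypothetical:

```python
def deviation_report(district_populations):
    """Step (1): divide total population by the number of districts to
    get the ideal; step (2): express each district's population as a
    percentage deviation from that ideal."""
    total = sum(district_populations.values())
    ideal = total / len(district_populations)
    return {name: 100.0 * (pop - ideal) / ideal
            for name, pop in district_populations.items()}

# Hypothetical three-district example
report = deviation_report({"District 1": 103000,
                           "District 2": 100000,
                           "District 3": 97000})
```

Here the ideal population is 100,000, and the three districts deviate by +3.0, 0.0, and -3.0 percent respectively.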
Bipartisan Constitutional Computer-Aided Redistricting

It should be noted that no matter how low the
average deviation is, if there is even one district that
has a substantial percentage deviation from the ideal,
the whole districting will probably be held unconstitutional.
This must be recognized in writing the optimizing
criterion even though mathematicians find it
more aesthetic to minimize or maximize averages.15
Contiguity of districts is the second legal requirement.
It is usually stated as a requirement in state
constitutions or state statutes, or prevails as a matter
of custom with minor exceptions, although it is not a
U.S. Supreme Court requirement. A district is contiguous
if one can go from any point in the district to
any other point without leaving the district.16 Contiguity
is sought for many purposes including the purpose
of (1) simplifying redistricting by eliminating
many alternative combinations of precincts, (2) enabling
legislators to have easier access to their constituents,
(3) decreasing partisan gerrymandering, (4)
encouraging people of similar interests to be together
in the same district, and (5) making districting patterns
more understandable and more aesthetically appealing.
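Under this definition, contiguity can be checked mechanically by treating precincts as nodes of a graph, shared borders as edges, and testing connectivity. A minimal sketch, with a hypothetical adjacency list:

```python
from collections import deque

def is_contiguous(district, touches):
    """Return True if every precinct in the district can be reached
    from any other by crossing shared borders (breadth-first search
    over the adjacency list `touches`)."""
    district = set(district)
    if not district:
        return True
    start = next(iter(district))
    seen, queue = {start}, deque([start])
    while queue:
        for q in touches.get(queue.popleft(), ()):
            if q in district and q not in seen:
                seen.add(q)
                queue.append(q)
    return seen == district

# Hypothetical adjacency list: A-B-C form a chain; D touches none of them
adjacency = {"A": ["B"], "B": ["A", "C"], "C": ["B"], "D": []}
```

A district made of A, B, and C passes the check; one made of A, C, and D fails, since B lies outside it and D is isolated.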
A district can be contiguous and not compact, and
likewise it can be compact and not contiguous. Compactness
can either mean being geographically like a
circle or a square,17 or it can mean having its people
clustered close together regardless of the shape of the
perimeter of the district.18 Compactness is not a legal
requirement. In fact the Supreme Court recently said
"A State's preference for pleasingly shaped districts
can hardly justify population variances."19 Likewise the
Court disparaged the value of population compactness
as well as geographical compactness by saying "to
accept population variances, large or small, in order to
create districts with specific interest orientations is
antithetical to the basic premise of the constitutional
command to provide equal representation for equal
numbers of people."20
In spite of the importance the Supreme Court has
given to equality and the non-importance it has given
to compactness and in spite of the importance the
states have given to contiguity, a number of redistricting programs heavily emphasize compactness at
the expense of equality and do not at all guarantee
contiguity. 21

Satisfying the political requirements
It can be demonstrated that if the precincts, census
tracts, or other building blocks out of which districts
are made are small enough, then virtually perfect
equality can be provided and still allow room for taking
political interests into consideration. Given this leeway,
it seems reasonable to expect the politicians to want
computer-aided redistricting to (1) minimize disruption
to incumbents, and to (2) facilitate political compromises.
Disruption to incumbents can be legally minimized
by the following techniques: (1) the prevailing districting plan can be used as a starting point rather than
starting from an undistricted map of the state; (2) districts that already come sufficiently close to the ideal
population can be removed from the redistricting;
(3) one can select building blocks from which districts
are built with the knowledge that these units will not
be broken into smaller pieces; (4) no move will be
consummated if the new districting is merely equal in
value to the previous one rather than an improvement
as measured by the optimizing criterion; (5) redistricting can be done to satisfy the equality requirement and
then district lines can be drawn· so as to make the
number of districts dominated by the Democrats or
Republicans as equal as possible to the number before
the redistricting.
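Technique (4) above amounts to a hill-climbing rule: a tentative move is undone unless the optimizing criterion strictly improves. A minimal sketch, assuming a score function in which larger values are better; all names and figures are hypothetical:

```python
def try_move(precinct, from_d, to_d, assignment, score):
    """Tentatively move a precinct to another district; keep the move
    only if the optimizing criterion strictly improves. A move merely
    equal in value to the status quo is rejected (technique 4)."""
    before = score(assignment)
    assignment[precinct] = to_d
    if score(assignment) <= before:
        assignment[precinct] = from_d   # no strict improvement: undo
        return False
    return True

# Hypothetical precinct populations and a two-district assignment
populations = {"A": 40, "B": 30, "C": 30}

def score(assign):
    """Higher is better: penalize the population gap between districts."""
    sizes = {1: 0, 2: 0}
    for p, d in assign.items():
        sizes[d] += populations[p]
    return -abs(sizes[1] - sizes[2])

assignment = {"A": 1, "B": 1, "C": 2}               # populations 70 vs 30
accepted = try_move("B", 1, 2, assignment, score)   # 40 vs 60: kept
rejected = try_move("A", 1, 2, assignment, score)   # 0 vs 100: undone
```

The first move narrows the population gap and is kept; the second would empty district 1 entirely and is rolled back.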
Political compromises can be facilitated in various
ways if the computer redistricting scheme inputs information on the number of Democrats and Republicans
in each precinct or building block. For example, the
computer can quickly show the Democrats and Republicans
what is the maximum number of districts
which they could each dominate given the Court's
equality requirements and the partisan information.
From these outermost positions, both sides can work in
toward a compromise. Once the equality requirement
has been met, the computer simply shifts from an
equality optimizing criterion to a Democratic or a
Republican optimizing criterion in order to reveal those
outermost positions. 22
Some compromises might require that districts in
certain sections of the state be drawn to favor the
Republicans up to a specified point and that districts
in other sections be drawn to favor the Democrats up
to a point while providing court-required equality. The
computer can aid in this kind of politically-oriented
redistricting, but only if it has been programmed to
provide for such a political option.
An option can also be exercised within the computer
program to make the percent of districts dominated by
the Democrats (or Republicans) as close as possible to
the percent of Democrats (or Republicans) in the state.
Doing so provides a kind of proportional representation
without the complicated voting procedures which are
usually associated with proportional representation. 23
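This proportionality option reduces to comparing two percentages. A sketch of the comparison, with hypothetical figures:

```python
def representation_gap(district_winners, statewide_percent):
    """Percent of districts a party dominates minus that party's
    statewide percent; the proportionality option would try to drive
    this figure toward zero."""
    dominated = sum(1 for w in district_winners if w == "D")
    return 100.0 * dominated / len(district_winners) - statewide_percent

# Hypothetical: Democrats dominate 4 of 18 districts while making up
# 40 percent of the state's voters
gap = representation_gap(["D"] * 4 + ["R"] * 14, 40.0)
```

Here the Democrats dominate about 22 percent of the districts against a 40 percent statewide share, a gap of roughly -18 percentage points that the option would try to close.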
Finally, the computer also facilitates political compromises by quickly providing information on the
partisan composition of the districts in various tentative
redistricting plans. 23a
In the most recent relevant Supreme Court decision
with regard to political considerations, the Court said,


"Problems created by partisan politics cannot justify
an apportionment which does not otherwise pass constitutional muster."24 The implication is that if the reapportionment otherwise passes constitutional muster,
then problems created by partisan politics and legislative interplay can be legitimately considered. 25
In some states or areas, an additional political
requirement for redistricting might relate to minimizing
the negative reaction of minority ethnic groups like
blacks or Spanish-speaking Americans. Just as the
computer can attempt to provide proportional representation to the Democrats and Republicans, it can
also attempt to provide proportional representation to
minority ethnic groups by seeking to have the percentage of districts which they dominate equal to their
percentage of the population within the state.
William Below and Michael Strumwasser have prepared computer programs that do seek to minimize
disruption to incumbents and facilitate political compromises. 26 Other programs, however, have lacked any
attempt to consider their partisan effects although some
have been labeled non-partisan. Labeling a redistricting
program non-partisan does not make it non-partisan
if the results change the partisan balance of power as
they are likely to do. The non-partisan label has merely
meant that the computer program so labeled ignores
the partisan effects of its work, and thus cannot facilitate
political compromises or minimize political disruption.27
Robert Dixon in his comparative analysis of
alternative computer approaches particularly emphasizes the importance of political sophistication and
understanding the political impact of computer-aided
redistricting. 28

THE PRACTICE
A computer program that was written in 1964 to
satisfy the requirements of computer feasibility, constitutionality, and bipartisanship has thus far had some
limited applications which might be worth reporting.
Further applications are anticipated after the 1971
state legislatures convene and decide on the general
procedures they intend to follow in redistricting the 50
states for congressional and state legislative purposes.
The first application of the bipartisan constitutional
program consisted of experimental runs made to convert
90 downstate Illinois counties from 21 districts down
to 18 districts. 29 Using counties rather than precincts
or census tracts as the building blocks out of which to
make districts greatly limited the flexibility to maneuver. Under current constitutional standards units
smaller than counties would be a court-ordered requirement.30
Nevertheless the redistricting was able to convert
the original 21 districts, in which 8 violated the
Illinois constitutional requirement of no more than 20
percent deviation, into 18 contiguous districts in which
none violated the Illinois constitutional requirement.
This conversion took only 81 seconds of computer
running time.
After meeting the Illinois constitutional requirement,
the program generated various politically-oriented districting patterns. They ranged from a pattern in which
the Democrats obtained a majority in only 22 percent
of the 18 districts, up to a pattern in which the Democrats obtained a majority in 39 percent of the districts.
This ability to provide alternative political patterns
could have facilitated the Republicans making some
concessions in the downstate area in return for related
concessions by the Democrats in the Chicago area.
Instead both parties moved so slowly trying to develop
political compromises that the constitutional deadline
passed and an at-large election had to be held to choose
the state legislature.
The next application of the bipartisan constitutional
program was by William Below working for the California Assembly Committee on Elections and Apportionment in 1965. 31 According to his report, "The
program was applied to Assembly districts in Los
Angeles, Orange, San Francisco, and Santa Clara
counties. In San Francisco, the use of the program
served only to verify that a particular set of goals was
not obtainable. In each of the other counties, plans
were produced which the committee staff considered
good enough to submit to committee members and the
affected incumbents. Three out of the thirty-one districts in Los Angeles (those which underwent the
greatest change), were included in the assembly bill
almost exactly as the program produced them. The
plans for Orange and Santa Clara Counties were slightly
changed on the advice of the incumbents."32
Below also reports that "Members of the committee
staff with no data processing experience became proficient
at specifying the initial plans, weights, and
desired proportions necessary to use the program."33
Below's version of the program added increased flexibility by (1) allowing different political goals for each
district rather than just having an overall political
goal for the area to be redistricted, (2) interspersing
moving and trading rather than doing all the trading
after completing all the moving, (3) developing techniques for translating census areas into political areas,
(4) simplifying the information input to preserve
contiguity, and by (5) translating the program into the
Fortran programming language.
The third application of the bipartisan constitutional
program was by C-E-I-R, Inc., for the Illinois Republican Party in 1965. Norman Larsen, who handled


the application for C-E-I-R, reported that the politicians were more interested in being quickly and accurately provided with useful information on the characteristics of the districts in a variety of tentative
redistricting plans than they were in having the computer produce an optimum output. 34 The Republican
Party was the minority party in the Illinois legislature
at that time, and its districting patterns were less
influential on the final result than the Democratic
districting patterns. Nevertheless the Republican Party
leaders did buy $31,000 of computer redistricting consulting services, and they appeared to be satisfied with
what they obtained.
Like Below in California, Larsen in Illinois made
various changes in the program to take into consideration the fact that thousands of townships and other
units were used to create the districts rather than a
mere 90 counties as in the original example. The
contiguity checks in particular were streamlined. Continuous intermediate output was generated to allow a
monitoring of convergence toward an optimum. Time
saving conditions were also introduced to eliminate
various kinds of moves that were not likely to lead to
an improvement.
The experience received in applying the program in
California for a bipartisan state legislative committee
and the experience in Illinois for the minority political
party showed that the bipartisan constitutional computer-aided redistricting approach is feasible from the
three viewpoints of computer technology, law, and
political realism. The approach would clearly be less
meaningful if it failed to satisfy fully these three essential criteria. It is anticipated that other versions of
this basic program and approach will be developed and
applied to the 1971 redistricting which is about to get
under way across the country.

REFERENCES
1 A computer program that seeks to achieve all three of these
criteria and their sub-criteria simultaneously is described in:

NAGEL
Simplified bipartisan computer redistricting
17 Stanford Law Review 863 1965
2 This was the finding of Garfinkel and Nemhauser as
described in the masters thesis of:
M STRUMWASSER
A quantitative analysis of political redistricting
UCLA School of Business Administration 1970
3 The essence of this precinct moving system was first
developed by:
H KAISER
An objective method for establishing legislative districts
10 Midwest Journal of Political Science 200 1966
4 M HALE
Representation and reapportionment


Dept of Political Science Ohio State University 1965
See also:
M HALE
Computer methods of districting
In Reapportioning Legislatures H Hamilton ed
Charles Merrill 1966
5 E FORREST
Apportionment by computer
4 American Behavioral Scientist 23 1964
6 J B WEAVER S W HESS
A procedure for nonpartisan districting: Development of
computer techniques
73 Yale Law Journal 288 1963
A revised version of their approach is described in:
CROND INC
REDIST: Program description and user manual
National Municipal League 1967
7 M STRUMWASSER
Op cit Ref 2 at p 19
See Also:
RISE INC
Proposal for the reapportionment of the California Assembly
March 1970
Available from 417 South Hill Street Los Angeles California
Because the Hess and Weaver approach begins from many
starting points rather than from the existing districting
pattern, their approach can produce many equally equal
plans which further decreases the feasibility of their
approach.
8 E FORREST
Electronic reapportionment mapping
Data Processing Magazine July 1965
9 Providing a map as output does not seem worth the extra
cost over providing words as output. Along related lines,
however, providing a system whereby one can move
population units through a computerized typewriter and
receive immediate feedback may be quite useful to politicians who want to do their own districting, but who want
the computer to provide information on the alternatives
they suggest rather than have the computer suggest
alternatives. See:
C STEVENS
On the screen: Computer aided districting
Conflicts Among Possible Criteria for Rational Districting
40-49 National Municipal League 1969
10 H F KAISER
A measure of the population quality of legislative
apportionment
62 American Political Science Review p 208 1968
11 G SCHUBERT C PRESS
Measuring malapportionment
58 American Political Science Review p 302 1964
12 G BAKER
Implementing one man, one vote: Population equality and
other evolving standards in lower courts
Conflicts Among Possible Criteria for Rational Districting
p 24-39 32 National Municipal League 1969
13 Where there are multi-member districts involving different
numbers of representatives in a state, the courts talk in
terms of population per representative rather than population per district. Someday, however, the courts may recognize that one voter who is a member of the majority
interest group in a district with two representatives and 2000
people has more political power than one voter in a district
with one representative and 1000 people, since the
first voter can determine who two representatives will be.
J BANZHAF III
Multi-member electoral districts-Do they violate the "One
man, One vote" principle
75 Yale Law Journal p 1309 1966
14 KIRKPATRICK v PREISLER
394 U S 526 530 1969
Although Kirkpatrick dealt with congressional districting,
its standards would probably equally apply to state
legislative districting. In fact the equal protection clause
which applies to state districts, more specifically requires
equality than Article I of the Constitution which applies
to congressional districts.
15 The programs developed by Bill Below and Henry Kaiser
only optimize averages rather than force outliers under a
maximum cut-off.
W BELOW
The computer as an aid to legislative reapportionment
An ALI-ABA Course of Study on Computers in Redistricting
American Law Institute 1965
H KAISER
Op Cit Ref 3
16 For two intra-district units to touch or be contiguous they
must share part of a common line no matter how small, not
merely a common point. Legal boundaries of land areas
adjacent to bodies of water normally extend over a portion
of the water at least for the purpose of court and police
jurisdiction if not for the purpose of ownership, and these
extended boundaries should be used in determining
contiguity, not the shoreline.
17 E C REOCK
Measuring compactness as a requirement of legislative
apportionment
5 Midwest Journal of Political Science 70 1961
18 WEAVER HESS
Op Cit Ref 6
19 KIRKPATRICK v PREISLER
394 U S 526 536 1969
20 Ibid p 533
21 WEAVER HESS
Op Cit Ref 6
FORREST
Op Cit Ref 5
The Kaiser program also lacks a guarantee of contiguity:
KAISER
Op Cit Ref 3
22 For a discussion of the computer programming that is
involved in both the equality optimizing criterion and the
various political optimizing criteria, see:
NAGEL
Op Cit Ref 1
23 James Weaver impliedly defines "non-partisan" as providing
proportional representation by stating, "A procedure
blind to politics should provide a random opportunity for
changes in the party in power, hopefully approximating the
partisan ratio in the area."
J WEAVER
Fair and Equal districts: A how-to-do-it manual on computer
use
p 3 National Municipal League 1970
The Weaver-Hess program, however, makes no attempt to
provide proportional representation (i.e., to approximate
the partisan ratio) other than by blind hope.
23a An additional option can easily be added to the program
whereby the computer seeks to maximize the number of
districts in which neither the Democrats nor the
Republicans have more than 53 percent of the two-party
vote. This would please political scientists who feel
competitive districts make for more responsible representatives, but might displease politicians who prefer
greater margins of safety.
24 KIRKPATRICK v PREISLER
394 U S 526 533 1969
25 Thus far, the Supreme Court has refused to declare political
line drawing unconstitutional where equality was provided,
WMCA v Lomenzo, 382 U.S. 4 (1965). Someday, however,
the Supreme Court might say that the equal protection
clause is prima facie violated if the districting plan gives
the minority party a substantially lower percentage of the
districts than the percentage of minority party members
in the state. This would be the case, for example, if the
minority party constitutes 40 percent of the people in the
state, but the lines are drawn so that the minority party
dominates only 15 percent of the districts.
26 W BELOW
Op Cit Ref 1
M STRUMWASSER
Op Cit Ref 2
Both of these programs are based on the Nagel program,
Op Cit Ref 1
27 This is true of the programs of Weaver-Hess, Op Cit Ref 6;
Forrest, Op Cit Ref 5; Hale, Op Cit Ref 4; and Kaiser,
Op Cit Ref 3. It is also true of the more obscure programs of
C HARRIS
A scientific method of districting
9 Behavioral Science p 219 1964
J THORESON J LIITTSCHWAGER
Computers in behavioral science: Legislative districting by
computer simulation
12 Behavioral Science p 237 1967
28 R DIXON
Democratic representation: Reapportionment in law and
politics
pp 527-35 Oxford University Press 1968
29 NAGEL
Op Cit Ref 1
30 KIRKPATRICK v PREISLER
The court said "we do not find legally acceptable the
argument that variances are justified if they necessarily
result from a State's attempt to avoid fragmenting political
subdivisions by drawing congressional district lines along
existing county, municipal, or other political subdivision
boundaries." 394 U.S. 526, 533, 1969.
31 BELOW
Op Cit Ref 1
Information on the California application also comes from
1965 correspondence between William Below and this writer
32 Ibid p 7
33 Ibid
34 In a report from Norman Larsen to Jack Moshman of
C-E-I-R, Inc. dated Nov 1 1965

"Second generation" computer vote count systems-Assuming a professional responsibility
by COLBY H. SPRINGER and MICHAEL R. ALKUS
Systems Research Inc.
Los Angeles, California

In recent years, the costs of holding an election have
increased substantially. Not only are more voters being
added to the registration rolls, but rising numbers of
candidates and issues have bloated ballot sizes in city,
county, and state elections. Not surprisingly, the most
promising solution to the growing vote tabulation
problem has seemed to lie in the application of automatic
data processing technology. Few would disagree that
computers should be able to perform the vote counting
task more economically and efficiently than any of the
other systems which have been designed for that
purpose. Because vote counting appeared to be a
relatively simple tabulation process, it seemed to be an
ideal example of the kind of job computers do best:
counting quickly, cheaply, and accurately.
In terms of the number of elections successfully
completed, computer vote counting systems have turned
in a satisfactory record. Both major punched card voting
equipment manufacturers have "run" hundreds of
elections. The exceptions, however, have been notable.
Fresno, California, has experienced consistently late
counts; Los Angeles suffered nationwide publicity with
its own tardy returns; and Detroit endured two late
counts after a history of successful elections with
lever-type voting machines. It has been the latter
cases-the only two large-scale applications of computer
vote counting systems-which have resulted in widespread
questioning of the concept of computerized
elections by the public.
There appear to be three reasons for the failures.
First, punched card ballots are themselves susceptible to
damage from handling. Second, centralized elections of
any kind-particularly those involving the introduction
of a new technology-require precise planning and good
management. Third, the manufacturers' computer
software systems have contained major design weaknesses.
Although any of these problem areas might
result in delays in any election, the duration of such a

delay is magnified in a large election because of the
sheer volume of the processing task.
Several election delays have resulted from punched
card damage. In the Flint, Michigan, 1970 general
election, cards soaked in heavy rains failed to feed--even
after being baked in the high school oven. Similarly,

Figure 1-A typical punched card voting machine in use.
The stylus is used to punch through a mask and
template to the ballot inserted below


complete understanding of the underlying data processing system. In one case-Detroit-contractor
personnel were precluded by legal interpretation from
being present at the counting sites. On the other hand,
counties with strong data processing management, a
history of successful data processing applications, and
close coordination between data processing and election
officials have had good elections. One of many good
examples is California's Santa Clara County.
In the Los Angeles example, a massive planning and
management effort (mounted after long delays had been
experienced in the June, 1970 primary) produced a
general election count without significant delays. 2
Previous planning oversights were avoided by transferring major responsibilities to the County Administrator's
Office supplemented by a significant amount of outside
consulting talent. According to one report, the extra
effort raised the cost of the November election to more
than $2.00 per ballot.
But another basic contributor to the major election
system failures has been the computer software design
itself. This paper focuses on such system weaknesses and
suggests major design changes. It identifies the need and
suggests improvements for design and programming
standards, auditing procedures, and support
documentation.
Figure 2-The punched card is inserted in a ballot envelope.
The attached stub is then removed and given the
voter as a receipt

punched card ballots in Detroit, after being stored in an
uncontrolled environment for several rainy days before
the election, failed to feed because of edge softness. The
Los Angeles County audit team suggested consideration
of alternate media, but concluded that the economy of
punched cards makes them the only medium currently
practical. 1
Reports from both Los Angeles County and the City
of Detroit 1970 primary elections pointed to specific
planning and management weaknesses as one cause of
election failures. These experiences indicated that
problems in transportation, ballot protection during
transit, inspection, and the performance of inexperienced
personnel were related to inadequate preparation for the
massive demands posed by the sheer logistics of these
operations. Further, management can be hampered by
election laws written for paper ballots. Although most
punched card voting machine manufacturers include
some management services as part of their support
package, legal requirements demand that election
officials remain in control of election night operations.
This fact alone can result in situations where decisions
must be made by election officials who do not have a

CURRENT DESIGN
It was the issue of security that first brought attention
to the weaknesses of computer vote count systems. Our
original research efforts focused on the possibility of
deliberate program modification as a potential security
threat. * This preliminary research, including the construction and analysis of a model vote count system,
uncovered several potentially disastrous weaknesses.
The November, 1969 Intellectron report3 identified the
following:
• The manufacturer-supplied operating system was
vulnerable to modification and would permit
changes without requiring access to the user-developed vote tabulation programs.
• Detection of a vote bias routine-one which would
change the ballot image-would be difficult during
production without destroying the election results.
Furthermore, such a routine could be written

* This and subsequent investigations were directed toward the
IBM Votomatic System as implemented in Los Angeles County.
All commercially marketed punched card voting systems currently
in use are forms of Votomatic. Although IBM has not released a
list of the firms now marketing these systems, seven have been
identified.


without altering linkage editor totals or core-image
length.
• A valid "logic and accuracy test" would require
either a sophisticated computer program or prohibitive amounts of computer time-perhaps
several times as much as that required for the actual
vote count itself.
• Many techniques of computer vote fraud require
the access of only one person and, at most, an
operator and a programmer.
• None of these techniques would be detected by a
casual observer, even if he had an extensive
background in data processing.
These findings indicated that the possibilities for
system failures-not only due to fraud, but to unintentional error as well-had been seriously underestimated by their designers.4 The subsequent elections
proved the prediction to be correct. 5,6,7
The analysis of these problems in the Votomatic
system may, however, prove valuable in constructing
"second generation" vote count systems. While such a
detailed analysis of current systems has not been made,
it is now possible to identify some of the factors which
should be included in future systems.
At least part of the problem derives from a basic
misconception on the part of some of the users and
systems developers themselves. Although the job may
seem to be relatively uncomplicated, in fact, the
challenges offered to a conscientious designer are
anything but trivial. First, the accuracy attained by
such a system must be absolute and unquestionable. If
the slightest doubt as to the reliability of the results is
tolerated, one of the prime goals of the system-maintenance of public confidence in the sanctity of the
ballot-is forfeit.
Second, the system must operate under the most
exacting timing requirements. In the short period
between the determination of candidates and issues and
election night, ballots must be designed, printed, and
distributed; workers must be recruited, trained,
organized; and program modifications must be completed, tested, and certified. The system must deliver
results promptly and reliably on election night-and it
must do so the first time.
Third, the system must be extraordinarily secure.
Computerization means centralization: where counting
was once conducted by precinct workers scattered
throughout many locations, a computerized system
necessitates consolidation of election night activities,
and a corresponding threat to security. Centralization
of counting processes reduces the obstacles to a
would-be vote embezzler-where previously he would


have had to bribe or coerce large numbers of election
workers, under the new systems he need only alter the
central counting mechanism. For this reason, that
mechanism must contain adequate multiple safeguards.
Last, because of the high stakes involved in election
tabulation, the entire vote counting process must be
reconstructible. Although extensive audit procedures
are not normally provided in non-financial systems,
audit trails are needed here on several grounds. One
clear reason is that the threat of deliberate tampering is
reduced when audit procedures increase the likelihood
of such activity being discovered. Another is that
accidental error is less likely to escape unnoticed, and
the program fault which allowed it to go uncorrected,
when control totals are automatically compared during
the course of tabulation. Perhaps most importantly,
thorough audit procedures can answer challenges to the
validity of results, and thus act to maintain public
credibility.
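The automatic comparison of control totals described above can be sketched as a simple reconciliation pass; the precinct identifiers and ballot counts are hypothetical:

```python
def audit_control_totals(precinct_counts, running_totals):
    """Reconcile independently kept ballot counts per precinct against
    the tabulation's running totals; any mismatch flags a precinct for
    investigation (accidental error or tampering)."""
    return sorted(p for p in precinct_counts
                  if precinct_counts[p] != running_totals.get(p))

# Hypothetical counts: precinct 12 disagrees between the two records
mismatches = audit_control_totals(
    {"10": 412, "11": 388, "12": 401},
    {"10": 412, "11": 388, "12": 399},
)
```

A two-ballot discrepancy in precinct 12 is surfaced immediately rather than being absorbed silently into the running count, which is the point of maintaining the audit trail.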
Clearly, the system which fulfills all of these requirements performs far more than a simple tabulation task.
And yet, in spite of the obvious need for the safeguards
and redundancies required by any high-security
system, few of those now used in vote counting contain
even one of the performance characteristics described
above. The system outlined in this paper is designed to
meet the special demands which the past several years
have shown to be made of such systems. The technology
is now present to implement the system described for
small- and medium-sized elections; and it is reasonable
to believe that a rigorous research effort could yield one
capable of handling very large elections, as well.

DESIGN CONCEPTS
A vote tabulation system must be designed to satisfy
three criteria: assurance of ballot security; performance
of tabulation functions at minimum cost; and prompt
issuance of election results.
Of these criteria, security is clearly the most critical.
An electronic vote tabulation system must provide
special safeguards against both accidental and intentional alteration of the results. These safeguards can be
provided by internal audit procedures, by full operational testing of all peripheral (as well as the main
counting) programs, and by external inspectors and
observers. A system can be designed to take advantage
of the high speeds of third generation computers to
provide more thorough reports, more complete auditing
than is currently available, and to allow independent
inspection of any of its procedures or results.
A system properly designed to reduce the cost of
ballot preparation and vote tabulation can achieve

146

Spring Joint Computer Conference, 1971

economies unavailable through any other tabulation
method. A well-planned system can reduce the costs of
tabulation until they become a small fraction of the
overall expense of conducting an election, and without
compromising security. Similarly, the introduction of
computers to the ballot preparation stage can offer
significant savings and increase the reliability of
pre-election activities.
SYSTEM CHARACTERISTICS
A vote tabulation system includes the computer
tabulation and auditing programs and documentation,
and a planning guide. A specific "second generation"
system described below would offer a series of computer
programs for the preparation, tabulation, and audit of
an election. It is significantly more extensive than
current systems, since ballot design and auditing aids
are included-although both features have been recommended to improve security and reduce potential errors,
neither has yet been incorporated in an operational
system. The introduction of computer techniques early
in the process would offer significant economies and a
higher degree of security than is possible when computers
are used only in the tabulation process.*
A specific system design best illustrates the features
requisite to this type of computer program. The design
suggested below is the result of a preliminary research
project conducted to provide such an illustration.
The programs have not been tested.
Ballot preparation

Recent experiences have shown that the preparation
of ballots is a crucial step which, if performed improperly, can prevent the rest of the system from
functioning as intended. Further, the communication of
ballot formats to the tabulation program has often been
difficult, creating errors in the actual vote count. These
difficulties can be avoided by using the computer to
prepare the ballot assemblies, sample ballots and other
reports, and having the ballot preparation program
itself communicate format to the tabulation program.
In the proposed system, the ballot assembly program
receives as input appropriate precinct data, the political
units in whose borders each precinct lies, the candidates,
the offices for which the candidates are running, and the
rules by which candidates' names are rotated on the

* This specific change has been strongly recommended by the
audit teams for both the Detroit and Los Angeles elections. See
References 1 and 2.

ballot. The program then produces the design of the
actual pages to be placed in the voting machine as well
as the ballot assemblies for all precincts.
The same program can produce the sample ballots to
be mailed to voters, and proof sheets for comparison
with ballot assemblies distributed to the precinct.8
Other reports generated by the program include ballot
layout summaries for candidates, parties, election
officials, and impartial observers. This innovation is
included to assure candidates that the precinct components of their vote have been correctly identified for
their office.
Internal checks in this program assure completeness
and consistency of the output.
The ballot preparation program also produces
machine-readable tables representing the ballot patterns
for each precinct. These tables are read directly into the
computer on election night by the ballot tabulation
program.
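A minimal sketch of such a pattern table follows; the structure and all names are invented for illustration. For one precinct, each punch position on the card maps to an (office, candidate) pair, with positions differing from precinct to precinct because candidate names are rotated:

```python
# Hypothetical machine-readable pattern table for one precinct.
PRECINCT_12 = {
    1: ("Mayor", "Adams"),
    2: ("Mayor", "Baker"),
    3: ("Council", "Clark"),
    4: ("Council", "Davis"),
}

def interpret(punched_positions, pattern):
    """Translate a ballot's punched positions into (office, candidate) votes."""
    return [pattern[p] for p in punched_positions if p in pattern]

print(interpret([1, 4], PRECINCT_12))
# -> [('Mayor', 'Adams'), ('Council', 'Davis')]
```

Because the same program both lays out the ballot and emits this table, the tabulation program cannot misread a format that was communicated by hand.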

Ballot tabulation

Ballot tabulation programs and procedures are
designed to provide speedy reporting of election results
with a complete audit trail and documented reconstruction of the count.
The first phase in ballot tabulation is media conversion. The ballots, having been delivered to the computation center, are unpacked, scanned for physical damage
(e.g., hanging chad) and read into a computer which
copies them onto magnetic tape. While the ballots could
be read directly into the ballot tabulation program in
card form, media conversion permits preliminary audit
checks and produces a magnetic tape which can be used
by later programs for generating various reports. The
program includes a restart capability so that a core
dump can be taken during the counting process.
Furthermore, special provisions are made for recovering
from card jams-a problem which often plagues such
systems. The special cards used by most voting systems
are particularly susceptible to card jams in high speed
card readers. Because of equipment design characteristics, it is often difficult to determine which card caused
the jam, and to identify the last card to be read.
Consequently, votes may be counted twice-or not
counted at all-during recovery from a card jam. To
prevent such occurrences, the media conversion program
is designed to restart without duplication every time a
card jam occurs. Restart points are frequently printed
out on the console, so that, following any card jam, the
data stream may be reinitiated from the last ballot
whose proper reading is confirmed.
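The restart discipline described above can be sketched as follows; the class and its details are invented for illustration. The conversion program confirms each ballot only after its write to tape succeeds, so after a card jam the deck is re-fed from the last confirmed restart point, with no ballot copied twice and none skipped:

```python
class MediaConversion:
    """Card-to-tape conversion with a restart point (illustrative sketch)."""

    def __init__(self):
        self.tape = []           # stands in for the output magnetic tape
        self.last_confirmed = 0  # restart point, printed on the console

    def convert(self, deck, start=0):
        # Process the deck from `start`, confirming each ballot only after
        # a successful write, so a jam never double-counts or drops a card.
        for seq in range(start, len(deck)):
            self.tape.append(deck[seq])
            self.last_confirmed = seq + 1

deck = ["ballot-1", "ballot-2", "ballot-3"]
mc = MediaConversion()
mc.convert(deck[:2])                       # a jam interrupts after two cards
mc.convert(deck, start=mc.last_confirmed)  # resume from the restart point
assert mc.tape == deck                     # no duplicates, nothing skipped
```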
"Second Generation" Computer Vote Count Systems

147

The second phase of the tabulation process is the actual vote counting and report generation. The vote tabulation program receives the magnetic tape containing the card images of the ballots and the pattern tables indicating ballot formats as generated by the ballot preparation program. The tabulation program reads the ballots and totals them by precinct and by office. Intermediate totals are maintained (1) on a direct access disk file, (2) on a sequential access tape file, and (3) in an intermediate totals report produced on the printer. The program prints out restart points so that an observer may call for the count to be stopped while a core dump is taken. This dump can be compared with one taken before the start of processing to verify the security of the vote count program itself. Once the dump has been taken, the program can be restarted at the point where counting was interrupted. In addition to displaying the contents of core memory, all data on disk files and intermediate counters can also be printed out.

In addition to the reports listing the results of the election by office and by precinct, the election night vote tabulation program leaves numerous independent audit trails. These trails--on the disk pack, on the tapes produced by the vote count program, and on independent tapes produced by the media conversion program--are all retained and used in the audit programs after the election count.

Figure 3--A common difficulty with the punched card as a ballot medium is the incomplete punch. The resultant hanging chad must be removed by inspectors on election night.

Audit

Additional programs are provided to form an audit of the election. This audit insures that all ballots (including those held at supply centers as well as those actually distributed) have been accounted for, and that all intermediate totals produced in the various stages of the tabulation process balance. In addition, quality control totals of undervotes, overvotes, and mutilations are maintained to insure that results satisfy certain minimal reasonability requirements. The audit programs also provide a report noting all disqualified and unmarked ballots in each race.

Recount system

In addition to the audit prepared and run by election
officials, the system makes provision for recounts
performed for or by candidates, parties, or impartial
observers. Three kinds of checks may be performed at
the option of the party requesting the recount.
A manual recount simply allows the parties involved
or election officials to retabulate the votes by counting
holes on the cards themselves or on their printed images.
A machine card recount provides for recounting from
the cards themselves (rather than tape-resident card
images) by computer program.
A tape recount is done by computer on the actual tape
used in the tabulation program.
A recount using a distributed copy of the card image
tape can be run on any computer using any program
chosen by the challenger.
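A tape recount re-runs tabulation over the retained card images. A minimal sketch of that tally, with all structures invented for illustration, reads each record's precinct and punched positions and accumulates totals by precinct and by office:

```python
from collections import defaultdict

def tabulate(card_images, patterns):
    """Tally retained card images by precinct and office (illustrative)."""
    totals = defaultdict(int)
    for precinct, punches in card_images:
        for pos in punches:
            office, candidate = patterns[precinct][pos]
            totals[(precinct, office, candidate)] += 1
    return totals

patterns = {"P1": {1: ("Mayor", "Adams"), 2: ("Mayor", "Baker")}}
images = [("P1", [1]), ("P1", [1]), ("P1", [2])]
assert tabulate(images, patterns)[("P1", "Mayor", "Adams")] == 2
```

Because the challenger may supply both the program and the machine, agreement with the official totals is evidence independent of the election-night installation.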
HARDWARE AND SOFTWARE
CHARACTERISTICS
The technical design of the system represents a
departure from previous electronic vote tabulation
systems. Previous systems have employed low-level,
machine-oriented programming languages in an effort to
maximize computational efficiency. Leaving aside the
judgment about the wisdom of that decision for large
municipalities, it appears clear that for smaller elections,
computational efficiency is not of prime importance.
Clarity, security, and ease of modification are much
more vital to the design of such a vote tabulation
system. Consequently, all programming should be done
in a high-level, user-oriented programming language,
such as COBOL or, preferably, PL/I. While these
languages do not generate the most efficient machine
code, a knowledgeable programmer with no previous
familiarity with the program would be able to understand the functioning of an intelligently-organized and
well-commented PL/I vote tabulation program.
This choice offers several advantages. First, the
program could receive wide distribution to all interested
parties who could then verify its accuracy to their own
satisfaction without actually running the program.
Second, modifications to the program to accommodate
different equipment, different recording requirements,
or different organizational features of an election
would be made much easier using a high-level language
and modular program construction.

SOFTWARE
The system design identified in preliminary research
which would fill the requirements of such a system
included eleven specific programs. They were:
• Ballot Assembly Preparation Program
• Assembly Audit Report Program
• Media Conversion Program (card to tape)
• Logic and Accuracy Test Program
• Vote Tabulation Program
• Inspector Audit Program A (for media conversion program)
• Inspector Audit Program B (for vote tabulation program)
• Ballot Inventory Reconciliation Report Program
• Ballot Image Print Program
• Core Dump Comparison Program
• Election Results Analysis Report Program

The system, unlike present systems, is heavily oriented
toward audit and proof programs designed to check each
operation through an independent program. The system
uses the computer throughout the election process from
ballot design through a quality control analysis of the
counted ballots.
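The core dump comparison idea can be shown in miniature; the digest function and data here are invented, and the modern hash stands in for whatever byte-by-byte comparison the Core Dump Comparison Program would perform. A reference image of the tabulation program is captured before counting begins, and any dump taken mid-count must match it, showing that the code in memory was not altered while votes were processed:

```python
import hashlib

def fingerprint(core_image: bytes) -> str:
    """Digest of a core image; any change to the program alters it."""
    return hashlib.sha256(core_image).hexdigest()

before = fingerprint(b"tabulation program as loaded")
during = fingerprint(b"tabulation program as loaded")  # dump taken mid-count
assert during == before   # the counting program was not altered
```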

CONCLUSION

Regardless of what system is used, computers will continue to be employed in the tabulation of election returns. If insecure, poorly-documented and hastily-modified programs are used, the scare headlines, public investigations, and official indignation we have already witnessed will continue to be aimed at the computer profession, and we need only await the first major election scandal for the threat of government regulation to become a reality.*

At issue is the larger question of professional responsibility. At present, no mechanism exists for the self-regulation of the industry, and no universal code of 'good practices' has been defined. In a highly sensitive, highly visible system like vote counting, this oversight stands as an invitation to disaster. Even if we do not, the public will hold the profession accountable for its responsibility to society. If members of the profession are to undertake responsible assignments, those who assume these assignments must be held accountable for their work.

The vote count system described here is offered as an example of the kind of design approach which assumes social responsibility as a basic design requirement. It may not perform the vote counting task more quickly or inexpensively than present systems do, but it does address the larger--and potentially vital--issues of security and credibility.

It is hoped that the kinds of problems it treats will be met by future vote counting system designs, and that the same kinds of issues will be recognized wherever they occur in the computer industry.

* Eleven states have already forbidden the use of the computer in election tabulation. The California legislature is investigating the licensing of computer programmers and operators.

REFERENCES

1 H H ISAACS ET AL
Final report: County of Los Angeles votomatic computer system audit
Isaacs Associates Inc Los Angeles California Volume II p 10 1970

2 ECONOMICS RESEARCH ASSOCIATES
Evaluation of planning and performance for the November 1970 general election
Economics Research Associates Los Angeles California pp II-1, V-1-2

3 J FARMER, C H SPRINGER, R STANTON, M J STRUMWASSER
Vulnerabilities of present computer vote count systems to computer fraud
Intellectron International Inc Van Nuys California November 1969

4 R L PATRICK, A DAHL
Voting systems
Datamation May 1970

5 And the winner is...? Computer is the loser in Michigan election
Wall Street Journal August 6 1970

6 Primary highlighted by discrepancies, irregularities
Computerworld October 7 1970

7 State, local probes ordered into ballot mixups, delays
Los Angeles Times June 4 1970

8 ECONOMICS RESEARCH ASSOCIATES
Report to the Los Angeles County election security committee
Economics Research Associates Los Angeles California August 1970

Evaluation of hardware-firmware-software trade-offs with
mathematical modeling
by H. BARSAMIAN and A. DeCEGAMA
The National Cash Register Company
Hawthorne, California

INTRODUCTION

The 70's are expected to mark the beginning of the era of intellectual maturity of information processing systems. Until now the main challenge was the adaptation of problems to the computers. The new challenge is the adaptation of computers to the problems.

Presently the engineering know-how and the state-of-the-art of the technology have reached a stage where the design of a "good" computer is no longer a mystery. However, the understanding and the efficient utilization of computers in terms of cost/performance/user need is far from satisfactory. The deficiencies in present computing systems are due mainly to the neglect of comprehensive disciplines and formal techniques for the integral design, analysis and evaluation of computer systems and user tasks. As a result, the computer user usually pays more than he should and/or gets less than he needs. To bridge the gap between designing a "good" computer and its efficient utilization, a major reorientation in the design and evaluation philosophy of information processing systems should take place. With optimum cost/performance indices as the ultimate goal, the main design guideline must be systems adaptation to user problems through tailored computer hardware and software.

From the user's standpoint, this is the ideal approach. For the computer manufacturer, however, this approach has constituted a cascade of technological, design, manufacturing and maintenance problems which have compounded to produce almost prohibitive systems costs. These costs impact negatively on potential users, frequently surpassing the "psychological barrier" of the buyers, particularly in the commercial market. Recent technological advances, specifically semiconductor LSI technology, seem to promise a positive solution to most of the manufacturer's problems indicated above. LSI technology has the potential of bringing the practical capabilities of the computer manufacturer closer to the needs of the user. However, the satisfaction of the user's needs in terms of cost/performance efficiency of their computing installations continues to be a fundamental and complicated task. The specifics of user applications, as well as the functional and control characteristics of the entire computing system, are highly interrelated. They need to be analyzed systematically and evaluated quantitatively as characteristics of a consolidated information processing system.

This paper emphasizes the necessity of employing the integral hardware-software approach for design and evaluation of computing systems. Three relevant topics are discussed:

1. Formulation of the task and the strategy for
designing more efficient computing systems
through hardware-firmware-software trade-offs.
2. The analytical tool applied for quantitative
evaluation and optimization of these systems,
namely the mathematical modeling techniques.
3. An example of the implementation of the proposed methodology.

HARDWARE-FIRMWARE-SOFTWARE
TRADE-OFFS

Task formulation
The analysis of contemporary computing systems of
conventional architecture (the overwhelming majority
of present general purpose computers) reveals that
performance inefficiencies are caused mainly by three
principal factors:
1. Excessively complex, costly and often poorly
designed software--caused by the neglect of
comprehensive scientific disciplines in the design
of both systems and applications software. As a
result, the quality of the computer software
depends heavily upon the subjective human
factors of the designer.
2. The logical primitiveness of the computer
hardware. A typical "general purpose" computer
has only the adder as its sole decision-making
mechanism. All other system components, e.g.,
memories, transmission channels and peripherals,
do not have any intellectual capabilities. It is
the burden, then, of the software to schedule and
control the functions of these components, as
well as perform all kinds of data manipulations
in the CPU (in fact, in the adder).
3. The large gap between machine and programming languages. Software interpreters or compilers perform the enormous task of language
translation and program conversion from the procedure oriented source languages to the object
machine codes. As a result, a significant portion of
computer power in terms of time and space (as
much as 70 percent) may be spent for program
compilation. The user, whose only interest is
the solution of his problems, pays for this
wasted computer power.
Thus, the epicenter of problems seems to be related
to the hardware-software ratio used for performing
processing functions within the computing system.
Computing systems with improved cost/performance
efficiency can be designed by redistributing the processing functions performed in the hardware and
software. Such a redistribution must be balanced and
evaluated quantitatively within the range of the
optimum values of two target functions: cost and
throughput (response time for on-line or real-time
systems) for the whole computing system. Accordingly,
an overall system design and evaluation strategy can
be postulated:
1. Delegate more computational procedures and
algorithmic functions to the computer hardware
(both in the main-frame and the peripherals) by
hardware-firmware-software trade-offs (HFS).
These trade-offs must be aimed toward the
following goals:
(a) Simplify the structure and the design of the
software systems.

(b) Allow more space and time for the executive
and user programs within the given system
configuration.
(c) Narrow the gap between the machine and
source programming languages by incorporating more procedural contents and
functional complexity into the primitives of
the machine language; thus, allowing the
computers to be driven primarily by input
data rather than by programs.
2. Apply analytical techniques for evaluation and
optimization of proposed system architectures.
A mathematical model with acceptable accuracy,
rather than a simulation model, appears to be a
suitable and efficient tool for the fulfillment
of these tasks. Two peculiarities of the mathematical model must be underlined:
(a) The mathematical model considers the computing system as a complex stochastic
model of queueing, resource sharing, scheduling and allocation processes.
(b) The parameters of the computing system as
well as the characteristics of the processing
environment and the user requirements are
considered as interrelated control and/or
state variables of the model.

The strategy

Actually the HFS trade-offs are an economic
problem facing the computer architect. Theoretically,
any computable problem can be solved on a single bit
adder operating with an infinitely long serial memory
(Turing-type machine). However, any realistic computer architecture of mini-, midi-, or super-scale is the
result of a compromise made between the computational
speed (or the throughput) and the practicality of the
systems cost. The criteria for this compromise are:
(1) the satisfaction within a reasonable elapsed time of
the computational needs of a certain class of users,
(2) suitable man-machine communication features,
i.e., programming and operating aids, and (3) an
acceptable price/performance ratio. Thus, the speed-cost trade-off is the creed of the computer designer.
Contemporary computing systems can be considered
as a superposition of physical and logical media. The
physical medium (the hardware) is the host environment where the logic (the software) resides and carries
out the information processing task. Although completely different by their physical and technological
nature, the computer hardware and the software
constitute together an indivisible architectural assembly
designated to solve user problems. Thus, the computer
system is an integral hardware-software system and
speed-cost trade-offs must be optimized for the integral
system, not for the hardware and the software separately. Although this may seem to be a trivial definition,
its implications constitute a fundamental impact on the
design and evaluation efficiency of computer systems.
In the process of computer design, the distribution
of processing functions to be performed in the hardware
and the software as constituents of an integral system
architecture is subject to the same speed-cost trade-off
rules. Traditionally the computer architect solved this
dilemma based on engineering intuition, experience and
some superficial figures of cost and performance,
assessing mainly isolated physical characteristics such
as the CPU hardware cost, the add time and/or
memory cycle time. Applying these and like criteria for
the evaluation of computer systems performance is
analogous to applying Newton's classical theory to
quantum mechanics, i.e., adopting a static view of a
multidimensional and highly dynamic process. The
parameters mentioned above are necessary but no
longer adequate in the performance evaluation of
present (and future) computing systems for the
following reasons:
1. The CPU hardware cost, specifically in this era of
batch fabricated components technology, constitutes a very small percent of the total system
cost (approximately 3 percent).
2. The add time, as any other machine instruction
execution time, cannot by itself indicate system
performance since most programming is done
in higher level languages.
3. Faster memories (even if program codes are well
optimized) do not directly benefit the user
because they store more than just the user
programs. Moreover, if the computing system
is I/O bound, often the case in business applications, then the fast memory becomes an expensive luxury for the user.
The inadequacies of other static parameters--memory capacity, I/O transfer rate, machine word
length, etc.--as system performance criteria are also
obvious.
Program execution times or the number of programs
executed per unit time, however, do reflect all the
architectural and combined functional characteristics
of the computer system. Thus, during the computer's
architectural design period, the ideal speed-cost trade-offs should be evaluated and optimized on the executable program level rather than on the machine instruction level. In the case of general purpose computers, however, this is an enormously difficult if not
impossible task. Such an approach seems to be more
practicable on the next lower level of functional
complexity i.e., on the level of independent processing
sequences, algorithmic functions, routines or macros,
which represent functional building blocks for both the
user and executive programs (compilers, operating
systems, utilities). Search, insertion and file retrieval,
sorting, table reference and inversion, scanning (in
compilers), stack manipulations, floating point and
variable field arithmetic, trigonometric and transcendental functions, I/O interface control, links, error
recovery, interrupt handling and reentry are representative sources of such functions. It is suggested
that the procedure of dividing the computer's processing
tasks into independent or semi-independent functional
building blocks may be called ALGORITHMIC
PARTITIONING.
The execution path of an algorithmic partition
utilizes more of the various system resources and
exercises their functional capabilities to a greater
extent. Therefore, more optimized speed-cost trade-offs
are achievable at this level of functional complexity than
at the traditionally considered machine instruction
level. Through algorithmic partitioning a noticeable
software-hardware interface can be drawn, which is the
necessary condition and the basic platform for efficient
HFS trade-offs.
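The kind of quantitative weighing this enables can be sketched as follows; all figures and the selection rule are invented for illustration, not drawn from the paper. Each candidate algorithmic partition has an execution frequency, a software execution time, a faster firmware time, and a control-store cost; moving into firmware first the partitions that save the most CPU time per control-store bit maximizes the throughput gain within a store budget:

```python
partitions = [
    # (name, executions/sec, software us, firmware us, control-store bits)
    ("floating point arithmetic", 4000, 50.0, 8.0, 2048),
    ("table search",              1500, 30.0, 6.0, 1024),
    ("compiler scan",              800, 40.0, 12.0, 4096),
]

def choose_firmware(parts, budget_bits):
    """Greedily pick partitions with the best time saved per store bit."""
    ranked = sorted(parts, key=lambda p: p[1] * (p[2] - p[3]) / p[4],
                    reverse=True)
    chosen, used = [], 0
    for name, freq, t_sw, t_fw, bits in ranked:
        if used + bits <= budget_bits:
            chosen.append(name)
            used += bits
    return chosen

print(choose_firmware(partitions, 3072))
# -> ['floating point arithmetic', 'table search']
```

A real evaluation, as the next sections argue, must instead solve for whole-system throughput and cost, since the partitions interact through shared resources.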
Microprogramming: the vehicle for implementation of HFS trade-offs

Microprogramming was first introduced in the
early 50's as the "best way to design a computer."5
Microprogramming is, in essence, a control discipline
which allows the design of instruction execution flows
and the functional sequencing of digital processors in a
more systematic way. Microprogrammed interpreters
can be designed for algorithmic partitions performing
orderly structured processing functions. These interpreters, composed of micro-instructions and stored in
special read-only (ROM) or read-write (RWM) control
stores, are called firmware.
In choosing an approach for the implementation of
HFS trade-offs, the following factors must be taken
into consideration:
a. The importance of computer emulation, which
requires the availability of additional (non-predesigned) primitive functions at the lowest
possible logical level of processor's hardware.
b. The trend toward increasing the intellectual
capabilities of computer hardware, which necessitates the control of more complex data
manipulation schemes and increased number of
information flow paths.


Microprogramming is the most efficient technique
for achieving such capabilities. Also, microprogramming
offers more economical design of modern processors due
to the availability of inexpensive and fast LSI semiconductor control memories for storing the coded
control sequences (microprograms), and to the suitability of ordered and repetitive memory structures
(rather than non-standard logic networks) to LSI
implementation. These factors suggest that through
microprogramming, optimum cost/performance and
functionally flexible HFS trade-offs can be attained.
Although the implementation details of microprogrammed processors are outside of the scope of this
paper, two aspects of control store organization,
pertinent to HFS trade-offs, must be discussed: static
versus dynamic and centralized versus distributed
microprogramming.
One of the major benefits of using microprogramming as a control discipline is the possibility of tailoring
a great number of primitive function combinations to
facilitate the execution of programs and to allow more
efficient emulation of different machines or languages.
The use of a ROM of limited capacity for the control
store, static microprogramming, considerably diminishes the functional flexibility of the microprocessor.
Any microprogram modification after the original
design and the microcoding is completed can be done
only in an off-line fashion. The alternative is to provide
a very large capacity of ROM which can accommodate
all the possible combinations of primitive functions and
allow microprogram switching. The estimates show
that, in this case, an astronomically high number of
bits are required for the control store and obviously
such an approach is not practical. The use of a RWM
for the control store will allow dynamic microprogram
alterations under supervisory control. The microprograms now can be treated as user programs, subject
to online editing and modifications, reloading and swapping, thus allowing some kind of multi-microprogramming capabilities. This appears to be a highly desirable
feature for multi-purpose emulation. Dynamic microprogramming is beneficial to the computer designer and
the user. It permits accomplishment of a more flexible,
more error free and more easily maintainable logic
design, while the user gets a potentially more powerful
processor within the given physical constraints and
costs.
In a conventional microprogrammed processor a
single control store is designated for all microprograms.
The use of multiple control stores designated for
separate microroutines or group of microroutines is
another way to achieve optimum design and performance. Such an approach is feasible, specifically, with the
LSI technology since the cost-capacity relationship of

semiconductor memories is a nearly linear function.
An efficient decentralization of the control medium can
be accomplished with microroutines that are function
and time exclusive. Additional factors to be considered are interface characteristics, registers, and other
architectural features of the processor. The main advantages of distributed microprogramming are the
achievement of a more compressed microcode, and its
suitability for parallel execution or pipelining of
different microroutines stored in physically separate
control stores. Also, distributed microprogramming
permits a higher degree of firmware maintainability
and diagnosability.
HFS trade-offs for algorithmic partitions implemented with dynamic and distributed microprogramming positively impact the overall computer system
architecture by providing the following characteristics:
a. The realization of modularized and simplified
design, maintenance and debugging of major
software systems, e.g., compilers and operating
systems.
b. Machine language which is more procedure
oriented and less sensitive to the computer's
hardware design specifics (it appears as a machine-independent intermediate-type language).
c. Better utilization of the available hardware and
a more faceless logical structure due to the
dynamic alterability of microinstructions. Thus,
versatile and efficient emulation can be performed.
The justification of HFS trade-offs from the computer engineering and manufacturing standpoint is
rationalized not only by the prospect of lowering
production costs (mainly due to the advances of LSI
technology) but also by substantially increasing systems
throughput. The use of more complex machine instruction sets and increased numbers of processing and
algorithmic functions performed in hardware-firmware
will result in increased availability of memory space,
fewer references to the slower main memories, and
diminished systems interrupt overhead-hence, faster
program generation and execution speeds.
These facts lead to a rather interesting conclusion,
which at first seems paradoxical. All other conditions
being equal, the HFS trade-offs allow the attainment
of required systems throughput by using slower, and
consequently cheaper, logic circuits, and main memories.
Thus, "the faster, the better" approach for selecting the
logic circuits appears not to be the axiomatic guideline
for achieving increased performance of a computing
system. The present circuit speeds have nearly reached
their physical limits. Therefore, harmony between the

Evaluation of Hardware-Firmware-Software Trade-Offs

execution speeds of decentralized microprocessors (the
result of HFS trade-offs) and the speeds of their
communications interfaces must be the prime consideration for improved performance in computing
systems.
Thus, it seems apparent that through HFS trade-offs
optimum integral hardware-software systems can be
designed. Using firmware with its algorithmic contents
and two logical interfaces-an interpretive interface
with the software and an executive interface with the
hardware-as a buffering medium, the ratio of hardware and software functions can be balanced. As
mentioned earlier, this ratio has an overwhelming
impact on the cost/performance index of a computing
system; therefore, its quantitative evaluation and
optimization are tasks of paramount importance.
THE MATHEMATICAL MODEL
The Mathematical Model (MAMO) used for the
evaluation of HFS trade-offs constitutes a recent
development in the evaluation and optimization of
computer systems. 1 It can predict and optimize the
performance of priority-partitioned multi-processing
multi-programming systems by taking into account
diverse environmental conditions, hardware configurations and characteristics as well as system control
policies. The MAMO solves a system of non-linear
equations correlating these parameters and obtains the
desired quantitative results in a relatively short time.
In contrasting the MAMO with simulation models,
factors such as complexity and cost of development,
computer time and space requirements, accuracy and
efficiency in obtaining the desired results must be
considered. The development of a realistic MAMO is a
difficult and costly undertaking. The difficulty stems
from the fact that the available mathematical techniques cannot provide a sufficiently accurate solution
to the complex probabilistic problems characterizing
computer systems. Therefore, in order to formulate a
MAMO useful for practical purposes, numerous approximations must be made. These approximations require
extensive testing by simulation models as well as a
thorough understanding of the stochastic processes
involved. The high development cost of a MAMO is
due to the fact that its formulation and verification
require the development, maintenance and application
of highly sophisticated simulation models which would
reflect real computing systems as closely as possible.
However, once a satisfactory MAMO is developed and
verified, the higher degree of operational capabilities
and efficiency afforded by MAMO, as compared to a
simulation model, compensates for the efforts and
resources invested.

155

For the studies described in Reference 1, an experimental task was formulated for a hypothetical multiprogramming system. The simulation model of this
task run on an IBM 360/65 required five to six hours of
computer time and approximately 250K words of
space; whereas, the mathematical model of the same
task processed on a UNIVAC 1108 system required
only 10 seconds of machine time and 32K words of
space.
The superiority of MAMO is more apparent in
performance optimization tasks. A simulation model
produces optimized results by analyzing the values of
control variables and target functions accumulated by
numerous simulation runs. Consequently, the validity
and the accuracy of the results are simply proportional
to the number of simulation runs for the given task.
By virtue of its nature, the MAMO solves the optimization task through a set of non-linear equations, by
manipulation of the state and control variables characterizing the computer system being studied. The
optimization process consists of the search for a set of
nominal control variables on the multi-dimensional
performance space which provides an extremum value
(maximum or minimum) for the selected target function. In a multi-programming system having three
partitions of programs, the optimum performance
will be influenced by more than 30 variables (see Appendix). The number of simulation runs, then, would be
over 2³⁰ and would require thousands of hours of
machine time (as estimated for a 360/65 system). On a
UNIVAC 1108 system, the MAMO used less than one
hour of computer time to solve such an optimization
task.
Thus, it seems that simulation models should be
used mainly as tools for developing and verifying
mathematical models of computer systems. However,
for complicated service disciplines and for some of the
processes in computing systems which cannot be
properly formulated mathematically, simulation techniques can be used advantageously by incorporating
them into mathematical models. In general, studies of
computer system performance evaluation and optimization are more economical and practical when conducted by mathematical models.
Figure 1 is the simplified block diagram of the
MAMO.1 The computer system's performance and
cost are functions of three groups of variables:

1. Environmental parameters defining the computer
system under study, mainly the physical characteristics of programs and equipment;
2. system control variables which can be varied by
the MAMO to obtain optimum cost/performance
balance (memory partition sizes, CPU quantum
size, page sizes, etc.), and
3. state variables (CPU utilization/priority, probabilities of interrupts/priority, probabilities of
compilation interrupts/priority) which influence
the two main evaluation criteria-cost and
performance.

Figure 1-Mathematical model block diagram (system
control, environment and state variables determining
performance and cost)
The optimization procedure, shown by the feedback
path in Figure 1, uses a direct search technique to
define the optimum set of control variables. This
means that the MAMO uses the interim solutions of
each step to choose the direction of the next step
toward the optimum. As a result of this optimization
process, the MAMO can calculate the maximum values
of the following target functions:
• Throughput for a given cost with or without
constraints on response time,
• quality of service (minimum average response time)
for a given throughput and cost,
• uniformity of service (minimum variance of
response time) for a given cost, throughput and
average response time,
and, also, the minimum value of the cost to achieve a
given throughput and quality of service.
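The direct search used by the optimization feedback loop can be sketched as follows. The quadratic target function and the two "partition size" variables below are stand-ins for illustration only, not the MAMO's actual equations:

```python
# A minimal direct (pattern) search: each step probes neighboring settings
# of the control variables and moves in the direction that improves the
# target function, using each interim solution to choose the next step.

def direct_search_maximize(target, x, step=1.0, min_step=1e-3):
    best = target(x)
    while step > min_step:
        improved = False
        for i in range(len(x)):
            for delta in (+step, -step):
                trial = list(x)
                trial[i] += delta
                value = target(trial)
                if value > best:          # keep the interim solution ...
                    x, best = trial, value
                    improved = True       # ... and search onward from it
        if not improved:
            step /= 2                     # no gain: refine the probe size
    return x, best

# Stand-in target: "throughput" peaks at partition sizes (40, 60).
throughput = lambda v: 100 - (v[0] - 40) ** 2 - (v[1] - 60) ** 2
x_opt, t_max = direct_search_maximize(throughput, [0.0, 0.0])
print(x_opt, t_max)   # converges to [40.0, 60.0] with target 100.0
```

A search of this kind evaluates the target only along a path toward the extremum, which is why it needs orders of magnitude fewer evaluations than exhaustive enumeration of the control-variable space.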

The formulation of the MAMO

The formulation of the MAMO requires an in-depth
analysis of a complex queueing system in which several
service processes are taking place simultaneously. The
study of this queueing system is essential to the
determination of the service times required by each
program as a function of CPU time and I/O service
requirements. Thus, two queueing systems must be considered: the queues of programs attempting to receive
service from the CPU and the main memory, and the
queues of I/O requests made by the programs. The
I/O request queues must be analyzed based on the
existing queues formed at each of the controllers of the
file system, e.g., drum, disk, tape. The MAMO is a set
of non-linear equations which constitutes the mathematical expression of these two interrelated queueing
systems. The solution of this system of equations must
satisfy the condition that the utilization factor of every
system component-CPU, main memory, I/O controllers and devices-remains less than one.
The design of the MAMO is based on the assumption
that the time periods between consecutive CPU
interrupts follow an exponential density function rather closely. The occurrence of CPU interrupts
in a multi-programming system can be considered the
result of the superposition of many random and independent events (paging, I/O file accesses, ends of
programs, new arrivals, quantizing, etc.). Then the
Pooled Output Theorem would seem to indicate that
the distribution of inter-interrupt times is asymptotically exponential. The only question is whether known
results of queueing theory involving exactly exponential
density functions of times between events can be
applied to the different service systems constituted by
the I/O devices, CPU's and memories.
In order to apply known queueing theory results to
the design of the MAMO, two factors must be verified:
1. Is the assumption on the distribution of inter-interrupt times valid?
2. Is the deviation from the exact exponential
distribution tolerable?
The detailed simulation of many different systems
indicates that, indeed, the results of queueing theory
were applicable to the design of the MAMO. It is
believed that this conclusion opens new avenues in the
development of practically applicable mathematical
models for complex computing systems.
The detailed description of the MAMO is outside the
scope of this paper and is provided in Reference 1.

EVALUATION OF THE FIRMWARE SORT
PROCESSOR WITH THE MATHEMATICAL
MODEL

To obtain a reasonable justification for any HFS
trade-off, a conclusive cost-performance evaluation


must be carried out. The evaluation of the HFS
trade-offs and their effect on the performance of
the whole computing system is a task of a dynamic
nature. The characteristics and the performance of the
routines or algorithmic partitions subject to HFS
trade-offs must be analyzed and evaluated under the
actual conditions of the computational environment,
taking into account the following major factors:


1. The computer time spent to perform the object

routine in software,
2. the memory space occupied by the object
software routine,
3. the user's processing environment characterized
by well-defined (measured and/or calculated)
input, state and control parameters.
(Detailed listings of these parameters are provided in the Appendix.)
The objectives of the task are to determine and
evaluate the dynamic values of the increased availability of computer time and memory space due to the
hardware-firmware implementation of the object software routine.
The MAMO described briefly in the previous section
was applied to the performance evaluation of a firmware Sort Processor (SP).2 The SP is a functionally
autonomous microprogrammed processor dedicated
exclusively to sorting, thereby relieving the computer's
CPU of this function. It can easily be integrated into
any general purpose computer system and will behave
as an intelligent peripheral controller (Figure 2). As
such, the SP represents a typical example of an HFS
trade-off. (See Reference 2 for detailed description of
the SP.)
The evaluation of the SP is conducted by comparing
the performance of the host computer system executing
a sort with conventional software routines against its
performance of the same function with the SP. The
host computer systems studied represent third-generation, business-oriented small and medium class systems
with multi-programming capabilities. They are assumed
to operate in a typical business environment with the
following average values of major characteristics:
• program size (code): 20,000 bytes
• defined data space for execution: 30,000 bytes
• CPU time required by a program: 60 seconds
• CPU time between data file references: 1.25 seconds
• CPU time between jump instructions: 50 microseconds
• CPU time between data references: 13 microseconds
• percentage of CPU time in execution of sort
routine: S = 20 percent

Figure 2-Computer system configuration with sort
processor (central processor, sort processor, main memory
and mass memory; start/interrupt signals between the
central and sort processors)

• percentage of CPU time in execution of library
subroutines: L = 30 percent
• probability of executing the sort routine from the
library of subroutines: Ps = 67 percent
• probability of requiring I/O access per subroutine
call: 40 percent
(Note: S = L × Ps.) The sort routine has been
assumed to occupy 6 KB of main memory space.
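The noted relation S = L × Ps can be checked against the assumed workload figures:

```python
# Consistency check of the stated business-environment figures (S = L × Ps).
L_lib = 0.30    # fraction of CPU time spent in library subroutines (L)
P_s   = 0.67    # probability that a library call is the sort routine (Ps)
S     = L_lib * P_s
print(round(S, 2))   # 0.2, i.e., the 20 percent sort share assumed above
```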
The influence on maximum system throughput of the
amount of processing done by the SP is shown in Figure
3 for one system studied and two different memory
sizes. The limitation in performance that can be
observed from the leveling off of the curves is due to a
main memory bottleneck for the smaller memory
system and to a disk system bottleneck for the larger
memory system. However, in both cases, the throughput of the computer system with the firmware SP
was improved. (In Figure 3, zero percent indicates no
hardware-firmware implementation of the sort routine.)
Table I summarizes the results obtained for different
types and classes of host computers operating in batch
mode. Main memory sizes (256 KB) and the sorting
workload (20 percent) are the same for all four systems.


Spring Joint Computer Conference, 1971

TABLE I-Computer Performance Evaluation with the Sort
Processor in a Business Environment of Batch Programs

SYSTEM                              B1     C1     B2     C2
Throughput increase
(percent)                          13.2    20     9.85   14.4
Cost/Program decrease
(percent)                          11.6    15     8.5    11.8
Average service time/program
decrease (percent)                 15.5     5    10.3     9.9

NOTE: Main memory size = 256 kilobytes, percentage of sort
routine usage = 20 percent

Throughput in Table I indicates the maximum program
arrival rate that the system can handle; this rate is
also equal to the maximum service rate that can be
sustained by the system. The service rate is given by
the ratio Npm/Tel, where Npm is the average number
of programs in memory and Tel is the average service
time/program. In evaluating the influence of the SP
on throughput it should be noted that Tel = Tcl + Til,
where Tcl is the average CPU time required/program
and Til is the average interrupt time/program.
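These relations can be illustrated numerically. The figures below are invented for the sketch (the paper's actual systems differ); only the formulas, service rate = Npm/Tel with Tel = Tcl + Til, come from the text:

```python
# Illustrative arithmetic for the service-rate relation above.

def service_rate(npm, tcl, til):
    """Programs served per second: avg programs in memory / avg service time."""
    return npm / (tcl + til)

# Without the SP: 4 resident programs, 60 s CPU + 40 s interrupt time each
# (invented numbers).
base = service_rate(npm=4.0, tcl=60.0, til=40.0)

# With the SP: slightly more programs fit (freed memory raises Npm) and
# fewer I/O and paging interrupts are taken (Til shrinks).
with_sp = service_rate(npm=4.2, tcl=60.0, til=30.0)

print(f"throughput gain: {100 * (with_sp / base - 1):.1f}%")   # 16.7%
```

The sketch shows why both effects of the SP, a larger Npm and a smaller Til, push the sustainable service rate (and hence maximum throughput) upward.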
The SP will affect Npm and Tel. Npm will be slightly
greater with the SP due to the increased availability of
memory space, while Til will decrease due to the smaller
number of I/O interrupts required per program. The
use of the SP diminishes the occurrence of two types
of interrupts: paging activity interrupts and, more
importantly, the I/O interrupts needed to transfer
the sort routine into the main memory. Thus, the
efficiency of the SP depends on the value of Q defined
for a system without SP as follows:

Q = number of sort routine calls / total number of interrupts

A higher value of this ratio yields more throughput
increase for a host system having an SP.
The improvement in cost per program (C) is defined
as C = (Eo/Po) - (Es/Ps), where Eo and Es are the
computer system costs per unit of time without and
with an SP, respectively. Po and Ps are the corresponding
throughput values.
For further analysis and evaluation of the SP, the
following characteristics of the host computer systems
studied must be stated:
a. The main memory of system C2 is the fastest,
followed by systems C1, B1 and B2, respectively.
b. Systems C1 and B1 have similar secondary
storage configurations, as do systems C2 and
B2, the latter systems being more powerful.

Figure 3-Throughput versus the percentage of sort usage
(Batch programs; programs/hour versus sort usage, curves
for main memories of 256KB and 512KB)

Accordingly, based on the results of Table I, the
following conclusions are valid, provided that all
environmental conditions are equal: the performance
improvement with SP is greater for,
• systems having faster main memories with similar
secondary storage characteristics. (This conclusion
is confirmed also by the results shown in Figure 4.)
• systems having slower secondary storages with
similar main memory characteristics.
The increased availability of main memories for
user programs-due to the release of the space traditionally required by the sort routine-results in less
paging activity for all systems with SP. Therefore, the
percentage of performance improvement is greater for
computer systems with slower secondary storage due
to the decreased utilization of slower I/O devices for
paging.
Thus, Table I indicates that substantial cost/
performance improvements, i.e., increased throughput,


decreased service time, and lower costs per program, are
achievable for various computer system configurations
with SP. The utilization of the computer equipment,
i.e., CPUs, main memories, I/O controllers and secondary storage units, is also higher.
Similar results have been obtained for systems with
on-line programs only and for systems with on-line
and background batch programs (see Appendix for
on-line program characteristics). The effect of main
memory size on the performance of an on-line system
having the program characteristics listed in the Appendix is shown in Figure 5. It appears that the particular system studied was I/O bound, and increasing
the size of the main memory more than a certain
value does not result in improved performance. However, whether the system is I/O bound or not, the
addition of the SP results in a marked improvement of
computer system throughput.
Finally, the results of the study as shown in the
preceding table and figures demonstrate,

1. the capabilities of the MAMO in producing
quantitative results about system performance,
2. the usefulness of the MAMO for making decisions
on new system architectures and HFS trade-offs,
3. the power of the Sort Processor for substantially
improving the cost/performance index of computing systems.

It is clear that through the combined application of
mathematical modeling and HFS techniques new and
better computer systems can be built.

Figure 4-Throughput versus main memory cycle time
(Batch programs); main memory size = 256 kilobytes,
sort usage = 10 percent

Figure 5-Throughput versus main memory size
(On-line programs); sorting with SP versus sorting
with software

SUMMARY
In the era of LSI and increased computer usage, the
importance of designing new computer architectures
with optimum hardware/software balance has been
discussed. Algorithmic partitioning is suggested as the
proper vehicle to accomplish the hardware-firmware-software trade-offs. Carefully selected algorithmic
partitions, traditionally performed by software routines,
are implemented by microprogrammed hardware interpreters stored in distributed writable control stores.
Selection and cost/performance evaluations of algorithmic partitions for hardware-firmware-software
trade-offs are accomplished by applying a realistic
Mathematical Model of multiprogramming and multiprocessing computer systems. The possible computer


architectures are evaluated under diverse environmental conditions.
As an example of this system design philosophy, a
recently developed Mathematical Model (Reference 1) has been
applied to the cost/performance evaluation of a
firmware Sort Processor (Reference 2). The results obtained indicate
that more efficient computer systems can be designed
by means of increasing the logical complexity of the
hardware and by extensive use of microprogramming
techniques. The results also show the great analytical
power and practical utility of mathematical models
for performance evaluation and optimization of
computer systems. It is believed that the system design
philosophy set forth in this paper will be prevalent
in the years to come.

ACKNOWLEDGMENTS
The cooperation and encouragement offered by the
NCR-DPD Research Department has been instrumental in the preparation of this paper.
The authors wish to express their appreciation to
Mr. A. G. Hanlon and Mrs. V. J. Miller for their
valuable suggestions and editorial efforts.
Mmes. A. Peralta's and E. Mackay's contributions
in typing this paper deserve the authors' wholehearted
thanks.

REFERENCES
1 A DeCEGAMA
Performance optimization of multiprogramming systems
Doctoral Dissertation Computer Science Department
Carnegie-Mellon University Pittsburgh Pa April 1970
2 H BARSAMIAN
Firmware sort processor with LSI components
AFIPS Conference Proceedings Spring Joint Computer
Conference Volume 36 1970
3 C F WOOD
Recent developments in direct search techniques
Research Report 62-159-522-R1 Westinghouse Research
Laboratories Pittsburgh Pa
4 A L SCHERR
Analysis of storage performance and dynamic relocation
techniques
IBM Research Report TR-00-1494 September 1966
5 M V WILKES
The best way to design an automatic calculating machine
Manchester University Computer Inaugural Conference
Proceedings p 16 1951
6 R F ROSIN
Contemporary concepts of microprogramming and emulation
Computing Surveys December 1969
7 A OPLER
Fourth generation software
Datamation January 1967

8 L L CONSTANTINE
Integral hardware/software design
Modern Data April 1968-February 1969
9 R W COOK M J FLYNN
System design of a dynamic microprocessor
IEEE Transactions on Computers March 1970
10 H W LAWSON
Programming language-oriented instruction streams
IEEE Transactions C-17 pp 476-485 1968

APPENDIX
Parameters defining machine space requirements

Operating system space
Number of compilers
Number of compiler code segments/compiler
Sizes of compiler segments
Number of data segments/compiler
Sizes of data segments/compiler
Number of library subroutines
Sizes of library subroutines
Distributions of program sizes
Distributions of data space/program
Parameters defining machine time requirements

Distributions of CPU time/program
Distributions of input times (card-reader, teletype,
console, etc.)
Distributions of output times (printer, teletype,
console, etc.)
Compilation/execution time ratios
Probabilities of a source card error
Probabilities of a compilation error
Distribution of CPU times between file accesses
during compilation
Distribution of operator times to make files accessible
to the programs
Probabilities of execution error
Probabilities of using data files by programs
Distribution of CPU times between data file accesses
Probabilities of using serial files by programs
Probabilities of serial file accesses versus random
file accesses
Probabilities of disk random file accesses versus
drum random file accesses
Probabilities of accessing print files (spooling)
versus data files
Distribution of interrupt overhead times
Parameters that influence both time and space requirements

Distribution of inter-arrival times
Distribution of amounts of source data

Evaluation of Hardware-Firmware-Software Trade-Offs

Distribution of number of interactions for conversational programs
Peak number of conversational users
Distribution of CPU times between jump instructions
(compilers and programs)
Probabilities of jumping to a new compiler segment
during compilation
Probabilities that a jump instruction is a library call
Distribution of CPU times between data references
(compilers and programs)
Probabilities of data segment swap per I/O access
Distribution of jump distances
Distribution of data access distances
Main memory address generation rate by CPU
(code and data)
Distribution of program file sizes
Distribution of data per file access
Distribution of dump data
Tape start-up time
Tape transfer time
Disk rotational speed
Disk seek time
Disk transfer time
Drum rotational speed
Drum seek time
Drum transfer rate
Size of the routine or program subject to HFS trade
off
Frequency of usage of the routine or program subject
to HFS trade off
Hardware/software execution speed ratio of the
routine subject to HFS trade off
Access times of the different memory hierarchies
Amount of data/access for the different memory
hierarchies
CPU speed
Hardware costs
Control variables influencing optimum system performance

Number of CPU's
Cache capacity
Total amount of memory (main memory and bulk
memory) for non-partitioned systems

161

Partition sizes (main memory and bulk memory) for
a partitioned system
Number of disk and drum controllers
Number of spindles/controller
Number of tape controllers
Number of tapes/controller
Proportion of data files kept directly accessible to the
programs as opposed to the files requiring manual
intervention by the operator
Maximum data segment size (for each priority i)
Maximum program segment size (for each priority i)
Maximum amount of main memory space allowed
for data (for each priority i)
Maximum amount of main memory space allowed
for instructions (for each priority i)
Input buffer sizes (for each priority i)
Output buffer sizes (for each priority i)
Compiler buffer sizes (for each priority i)
Print buffer sizes (for each priority i)
Probabilities of finding a system routine or a user
data segment in each level of secondary storage
CPU quantum size (for each priority i)
Number of printers for priority i
Number of card readers for priority i
Major characteristics of programs in on-line mode
(average values)

Program size (code) = 30,000 bytes
Defined data space for execution = 10,000 bytes
CPU time required by a program = 60 milliseconds
CPU time between data file references = 4 milliseconds
CPU time between jump instructions = 50 microseconds
CPU time between data references = 10 microseconds
Percentage of CPU time spent in execution of the
sort routine = 12 percent
Percentage of CPU time spent in execution of
library subroutines = 30 percent
Probability of execution of the sort routine = 40
percent
Probability of requiring I/O access/subroutine call =
40 percent

System/370 integrated emulation under OS and DOS
by GARY R. ALLRED
International Business Machines Corporation
Kingston, New York

INTRODUCTION

The purpose of this paper is to discuss the design and
development of integrated emulators for the IBM
System/370; specifically, emulation of IBM 1400 and
7000 series systems on the IBM System/370 Models
145, 155 and 165, integrated under an Operating System. While the author acknowledges the development
and presence of emulation outside of IBM, it is not the
intent of this article to conduct a comparative survey
of all emulator products. Rather, the discussion will be
restricted to the design and development considerations
involved in producing the System/370 integrated
emulators.

EMULATOR HISTORY

The System/370 integrated emulators are evolutionary products of earlier IBM emulators on System/
360. Before discussing the design and functional characteristics of the System/370 emulators, a review of
emulation as it existed prior to System/370 is presented
in order to form a base of comparison for the new
system.
Three methods are employed by the emulators to
effect the execution of prior systems programs on the
new system and are referred to throughout this article:
1. Interpretation/execution via hardware (referred
to as emulation).
2. Interpretation/execution via software routine
(referred to as simulation).
3. A combination of the above (referred to as
emulation).
In System/360, emulation was composed of three
distinct design types:
1. Hardware: Some emulators, such as the IBM
1401 emulator on System/360 Model 30, were
exclusively implemented by hardware. The Model
30 became, in effect, a 1401, with the appropriate
registers, addressing, I- and E-time execution,
etc., handled by the hardware. While performance and operating characteristics were quite
good, the system was dedicated to a specific
mode of operation, i.e., either emulation or
"native" mode. The resultant loading and reloading necessary to attain the desired mode of
operation imposed unproductive overhead on the
user.
2. Hardware/Software: Other emulators, such as
the IBM 7000 series emulators on System/360
Model 65, were comprised of both hardware and
software. By adding software, the emulator
offered more flexibility in device support and
operational characteristics, while at the same
time retaining the desirable performance attributes of hardware execution. (Pure software
implementation would become total simulation,
with the obvious degradation of performance.)
These emulators required total dedication of
the system to a specific operating mode-emulation or native, with the unproductive overhead
of loading and reloading. This overhead was
more noticeable when the native mode operation
involved an operating system. Additionally,
terminal applications were not possible because
of the necessity to "shut down" the operating
system for emulator loading.
3. Hardware/Software/Operating System: Two
programs, Compatibility Operating System
(COS)/30 and COS/40, were developed by IBM
which integrated the 1401 emulator on System/
360 Models 30 and 40 under DOS. At first considered to be interim programs, these programs, because of their wide acceptance and usage, were
subsequently upgraded through hardware and
software refinements and renamed Compatibility System (CS)/30 and CS/40. For the first
time, 1401 jobs and System/360 native-mode
jobs could be run concurrently in a limited multiprogramming environment. (Limited multiprogramming in the sense that there were certain
restrictions on the Foreground/Background allocation of jobs under DOS.) Single job stream
input was also possible. Overall system throughput
was significantly improved by eliminating the
need to reload the system between emulator and
System/360 jobs.
In addition to the CS emulators, there were other
applications such as Hypervisors and "hook loaders,"
which, to a lesser degree, provided a single operating
environment by eliminating the need to re-IPL between emulator and System/360 jobs. Hypervisors enabled two emulators to run concurrently or, an emulator to run with a System/360 job.
The Hypervisor concept was relatively simple. It consisted of an addendum to the emulator program and a
hardware modification on a Model 65 having a compatibility feature. The hardware modification divided
the Model 65 into two partitions, each addressable from
0 to n. The program addendum, having overlaid the system Program Status Words (PSW) with its own, became the interrupt handler for the entire system. After
determining which partition had initiated the event
causing the interrupt, control was transferred accordingly. The Hypervisor required dedicated I/O devices
for each partition and, because of this, the I/O configurations were usually quite large, and, therefore,
prohibitive to the majority of users.
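The Hypervisor's dispatching role can be sketched schematically. This toy model only mirrors the control flow described above, routing each interrupt to the partition that initiated it; the class and method names are invented, and the real addendum worked by manipulating PSWs on a modified Model 65, not by anything resembling Python dispatch:

```python
# Toy model of Hypervisor-style interrupt routing between two partitions.

class Partition:
    """One of the two O-to-n address spaces carved out of the Model 65."""
    def __init__(self, name):
        self.name = name

    def handle(self, interrupt):
        return f"{self.name} handles {interrupt}"

class Hypervisor:
    """Owns the system PSWs: every interrupt arrives here first."""
    def __init__(self, emulator, native):
        self.partitions = {"emulator": emulator, "native": native}

    def on_interrupt(self, origin, interrupt):
        # Determine which partition initiated the event, then transfer control.
        return self.partitions[origin].handle(interrupt)

hv = Hypervisor(Partition("7094 emulator"), Partition("System/360 job"))
print(hv.on_interrupt("emulator", "I/O completion"))
print(hv.on_interrupt("native", "timer"))
```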
Hook loaders, developed by individual installations,
effected a "roll-in/roll-out" of the emulator or System/
360 job. The decision to swap operating modes could
be interrupt driven or initiated by the operator. The
basic attribute of this application was to eliminate the
need for IPL when changing operating modes.

DESIGN CONSIDERATIONS AND
OBJECTIVES
At the time they were initially released, the System/
360 emulators were considered to be short term programs. They were intended to provide the user with
the facility to grow from a second generation system
to the improved facilities of System/360 with little or
no reprogramming. To this end, they served their purpose very well. Their predicted demise however, did
not take place as expected. Emulation usage continued
at a high rate, with installation resources directed at

new applications rather than conversion of existing
applications.
Clearly, as system and customer applications became
more complex, the need for expanded emulator support
became more evident. Early in the planning cycle of
System/370, IBM began a design study to determine
the most efficient architecture for emulators on System/370. Based on an analysis of existing and projected
future operating environments, feedback from user
groups, and the experience gained to date with emulation, the following key design points were established
as objectives for System/370 emulators:
1. Emulators must be fully integrated with the
operating system and run as a problem program.
2. Complete multiprogramming facilities must be
available including multiprogramming of
emulators.
3. Device independence, with all device allocation
performed by the operating system.
4. Data compatibility with the operating system.
5. A single jobstream environment.
6. A common, modular architecture for improved
maintenance and portability.
7. An improved hardware feature design with emulator mode restrictions eliminated and all feature
operations interruptible.

MODELING
While the COS/CS emulators had proved the basic
feasibility of integrating an emulator as a problem program under an operating system, in this case DOS,
extending this feasibility to include a large scale, complex system with the full multiprogramming facilities
of OS/360 remained to be proven. Therefore, it was
decided that a model should be built which would integrate a large scale system into OS/360.
The system selected was the 7094 Emulator on System/360 Model 65. The 7094 and the 7094 Operating
System (IBSYS) represented the most complex and
sophisticated second generation system available. If
this system could be successfully integrated with OS/
360, the design and technology could certainly be applied to smaller, less complex systems.
The OS/360 option selected was MFT II. This system, with its fixed partition requirement, could be more
easily adapted to the 7094 Emulator design which also
included fixed locations and addressing.
This particular feasibility study proved to be an excellent subject for modeling. The goals were well defined, the emulator itself was relatively self-contained,
and the design alternatives were varied enough to make

System/370 Integrated Emulation

multiple design evaluations necessary. Modeling was
primarily concerned with the assessment of four major
areas: input/output techniques, operation under an
operating system, hardware design/requirements, and
operating system interfaces. There were a number of
key recommendations and resolutions achieved in these
areas as the result of modeling.

Input/output techniques
• To provide the most efficient means of I/O simulation, an emulator access method with standard interfaces to the operating system was developed. OS/360 Basic Sequential Access Method (BSAM) was used for tape operations and Queued Sequential Access Method (QSAM) for support of Unit Record devices. Basic Direct Access Method (BDAM) support was later added for those systems that support disk. This access method was subsequently expanded to be usable by any System/370 emulator, regardless of the emulated system. This access method is currently used by the 1400 and 7000 series emulators on System/370.
• To solve the problem of prohibitively long tape records (32K maximum) and of some file formats which were unacceptable to OS/360, a tape preprocessor program was developed to format second-generation tapes into a spanned variable-length record format. A post-processor was also developed to convert these tapes back to their original format, if desired.
• To enable selective processing for Operating System error recovery procedures, parity-switching and density-switching modifications were made to the data management facilities of OS/360.
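The preprocessor's record-spanning step can be sketched as follows. The segment layout (first/last flags on plain chunks) is invented for illustration and is not the actual OS/360 spanned variable-length record format.

```python
# Sketch of splitting an over-long tape record into spanned segments
# (illustrative layout, not the real OS/360 spanned-record format).
MAX_SEG = 32_755  # largest record body acceptable to the system

def span_record(record: bytes, max_seg: int = MAX_SEG):
    """Split one logical record into tagged segments."""
    segments = []
    for i in range(0, len(record), max_seg):
        segments.append({"first": i == 0,
                         "last": i + max_seg >= len(record),
                         "data": record[i:i + max_seg]})
    return segments

def unspan(segments):
    """Post-processor direction: reassemble the original record."""
    assert segments[0]["first"] and segments[-1]["last"]
    return b"".join(s["data"] for s in segments)

rec = bytes(70_000)            # longer than any single acceptable record
segs = span_record(rec)
assert len(segs) == 3
assert unspan(segs) == rec
```

The round trip is what matters: a spanned tape can always be post-processed back to one logical record per block if the original format is needed again.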

Operation under an operating system
• Whereas the stand-alone emulators had used privileged instructions at will, this could not be done if the emulator was to run as a problem program under the operating system. Those routines requiring privileged Op-Codes were either replaced by operating system routines or redesigned to use only standard Op-Codes.
• To achieve a common, portable architecture, emulator routines were standardized as emulator-dependent and operating-system-dependent modules.

Hardware design/requirements
• The need to operate in emulator mode should be eliminated. The emulator program should be transparent to the operating system.
• There should be no fixed addresses, and the emulator, including the target memory, should be relocatable.
• Emulator Op-Codes should be standardized.
• Emulator Op-Codes should be interruptible and capable of retry. (In emulation, it is possible to remain in E-time simulation for an unusually long period of time, relative to normal System/370 E-time. Therefore, the hardware feature must be fully interruptible if functions requiring the immediate dispatch of asynchronous interrupts are to be supported.)
• Hardware/Software communication should be done via General Purpose Registers and Floating Point Registers rather than through special hardware registers and/or fixed tables. This is required if emulators are to be multiprogrammed.

Operating system interfaces
• Three standard interfaces were defined. These interfaces are emulator and operating system dependent.
1. An interface was established between the compatibility feature and the emulator modules which performed CPU simulation, I/O simulation, and Operator Services.
2. A second interface was established between the CPU, I/O, and Operator Service modules and the emulator access method.
3. A third interface was established between the emulator access method and the operating system.
• By implementing the emulator to these standard defined interfaces, the goal of a common, modular design with the inherent facility of portability was realized.

In summary, the modeling effort successfully demonstrated the feasibility of large-scale integrated emulation while at the same time meeting all of the design and performance objectives. The architecture which evolved from the model was used by the OS/M85/7094 emulator and was released in early 1970. This architecture, with further refinements, is used by all of the System/370 emulators:

Models 145 and 155: DOS/1401-1440-1460; DOS/1410-7010; OS/1401-1440-1460; OS/1410-7010
Model 165: OS/7074; OS/7080; OS/7094

Spring Joint Computer Conference, 1971

These systems represent the most advanced emulators
ever offered in the IBM product line, combining the
powerful new System/370, its high performance I/O
devices, the multiprogramming facilities of Operating System (OS)/360 and Disk Operating System (DOS)/360, and an improved technology in emulator design.
SYSTEM REQUIREMENTS AND FEATURES
On the Model 155 there are four emulator combinations available: the 1401/1440/1460 Emulator under both DOS and OS, and the 1410/7010 Emulator under both DOS and OS. These are four separate programs, each with an individual program number. The compatibility feature on the System/370 Model 155 is an integrated feature which provides the facility to emulate the 1401/1440/1460 and 1410/7010. These emulators can be multiprogrammed in any combination.
On the Model 165, 7074, 7080, and 7094 emulators are provided. These emulators run under OS/360 and can be multiprogrammed. Each emulator consists of a compatibility feature and a corresponding emulator program that has a unique feature and program number. Only one feature can be installed in the system at one time.
The System/370 emulators have a number of requirements, considerations and support functions in
common:
Minimum Requirements
• Compatibility Feature.
• A sufficient number of System/370 I/O devices to correspond to the devices on the system being emulated, plus the devices required by the Operating System.
• Sufficient System/370 processor storage for: (1) the version of the operating system being used (MFT, MVT or DOS), (2) the emulator functions needed for the system being emulated, and (3) the program being executed.

Additional Features
• Two tape formatting programs are provided: (1) to convert 1400/7000 series tape files to Operating System (spanned variable-length) format for more efficient data handling by the emulator, and (2) to convert output records in spanned variable-length format to the original 1400/7000 series format.
• A disk formatting program is provided to assist in converting 1400/7010 disk files to the standard Operating System format.

Data File Restrictions
• 1400/7000 series tape files must be converted if record lengths exceed 32,755 bytes or if data is in mixed densities.
• All 1400/7010 disk files must be converted.

COMPATIBILITY FEATURES

The Compatibility Features on System/370 Models 155 and 165 are under microprogram control. The feature on the Model 155 is an installed resident feature, whereas on the Model 165 it is loaded into "Writable Control Storage" via the console file.

The compatibility feature is, in effect, a number of special instructions added to the base System/370. These special instructions are used by the emulator program to emulate target machine operations. The selection of operations to be performed by the special instructions is based on an analysis of the target machine operations relative to complexity and frequency of use.

The most significant special instruction (since it is used once for each target machine instruction executed) is called DO INTERPRETIVE LOOP or, simply, DIL (Figure 1).

[Figure 1-Overview of emulator instruction execution: DIL trap test, S/370 instruction fetch and execution, operator services, I/O completion, interrupt generation on unusual conditions, the emulator access method, and OS data management]

The DIL instruction replaces with a single instruction the subroutine that a pure software simulation would use to:
1. Access the simulated instruction counter (IC).
2. Convert the IC to a System/370 address in the simulated target machine storage which contains the instruction to be interpreted.
3. Fetch the instruction.
4. Update and restore the simulated IC.
5. Perform any indexing required for the subject
instruction.
6. Convert the effective address obtained to the
System/370 address in the simulated target
machine storage which contains the subject
operand.
7. Interpret the instruction Op-Code and branch
to the appropriate simulator routine which will
simulate the instruction.
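The seven steps above are exactly the body of a conventional software interpreter loop. A minimal sketch follows; the one-address (opcode, address, index) instruction format and the handler table are invented for illustration, not the real 7094 formats.

```python
# Sketch of the interpretive loop that DIL collapses into one instruction.
# Instruction format is invented: each word is an (opcode, address, index) triple.
def interpret(target_mem, regs, handlers, base=0):
    while True:
        ic = regs["IC"]                        # 1. access simulated IC
        host_addr = base + ic                  # 2. map IC into host storage
        op, addr, idx = target_mem[host_addr]  # 3. fetch the instruction
        regs["IC"] = ic + 1                    # 4. update and restore the IC
        ea = addr + regs.get(idx, 0)           # 5. apply any indexing
        operand_addr = base + ea               # 6. map the effective address
        if op == "HALT":
            return regs
        handlers[op](regs, target_mem, operand_addr)  # 7. branch to simulator

def add(regs, mem, addr):
    # Simulator routine for an invented one-address ADD.
    regs["ACC"] += mem[addr]

mem = {0: ("ADD", 2, None), 1: ("HALT", 0, None), 2: 5}
out = interpret(mem, {"IC": 0, "ACC": 1}, {"ADD": add})
assert out["ACC"] == 6
```

In hardware, DIL performs steps 1 through 7 in one microcoded operation, which is why it dominates emulator performance: it runs once per emulated instruction.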
INPUT/OUTPUT DEVICE CORRESPONDENCE
Expanded support of I/O devices is provided with
the System/370 integrated emulators. The OS Emulators employ the QSAM, BSAM and BDAM facilities
of OS/360 Data Management, and offer device independence within the support capabilities of these access
methods. The DOS emulators provide device independence only for Unit Record devices.
DISTRIBUTION
DOS
The DOS emulators for 1400/7010 are distributed as
components of DOS. Standard DOS system generation
procedures are followed in generating an emulator
system.
OS
The OS emulators for System/370 Models 155 and
165 are distributed independently of OS/360. Independent distribution was chosen inasmuch as the emulator modules would be superfluous to System/360 users
and take up unnecessary space on the distributed system libraries.
SUPPLEMENTAL PROGRAMS
Tape formatting programs
Two tape formatting programs are distributed with the emulator program. The Preprocessor program converts tapes in original 1400/7000 series format to spanned variable-length record format. Any 1400/7000 series tape containing records longer than 32,755 characters must be preprocessed. Preprocessing of other tapes is optional, although greater buffering efficiency can be obtained because the emulator is designed to operate normally with a spanned variable-length format.
The post-processor program converts tape data sets
from spanned variable-length format to 1400/7000
series format. The programs support tapes at 200, 556,
800 and 1600 BPI density and handle mixed density
tapes. The programs support even, odd and mixed
parity tapes.
Disk formatting program
A disk formatting program is provided to assist in
converting 1400 disk files to a format acceptable to the
emulator program. The disk formatting program runs
as a problem program under the operating system. The
program creates a data set composed of control information and of blank records whose size and number
are determined by the device being emulated.
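The formatting step above reduces to generating control information plus a device-dependent number of blank records. A sketch, with invented device names and geometries (not actual 1400-series values):

```python
# Sketch of the disk-formatting idea: build a data set of control information
# plus blank records whose size and count depend on the emulated device.
# Device geometries here are invented placeholders, not actual 1400 values.
DEVICE_GEOMETRY = {"1311-like": (100, 1000), "1405-like": (200, 2000)}

def format_disk(device: str):
    record_size, record_count = DEVICE_GEOMETRY[device]
    control = {"device": device, "recsize": record_size, "count": record_count}
    blanks = [b" " * record_size for _ in range(record_count)]
    return control, blanks

ctl, recs = format_disk("1311-like")
assert ctl["recsize"] == 100 and len(recs) == 1000
assert all(len(r) == 100 for r in recs)
```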
COMMUNICATING WITH THE EMULATOR
PROGRAM
A full range of operator services are provided for
operator communication with the emulator program.
1400/7000 series console operations are simulated
through commands entered by the operator.
In an integrated, multiprogramming environment,
the operating characteristics are expected to initially be
more difficult for the operator. However, every effort
has been made to ease the transition from stand-alone
to integrated operation. Messages from the emulator
program are identified by a unique message ID, including a sequentially-incremented message number
and the job name of the program being emulated. The
user has the option of including multiple console support and directing emulation messages to the second
console.
SUMMARY
The System/370 integrated emulators have significantly extended the technology of emulation. They
bring to the user an improved, more efficient operating
environment for emulator and native mode System/370
jobs, while at the same time providing a nondisruptive
growth path for today's System/360 user.

REFERENCE MATERIAL
System/370 Emulators-SRL Publications
Emulating the IBM 1401, 1440, 1460 on the IBM System/370 Models 145 and 155 Using DOS/360, #GC33-2004
Emulating the IBM 1410 and 7010 on the IBM System/370 Models 145 and 155 Using DOS/360, #GC33-2005
Emulating the IBM 1401, 1440, 1460 on the IBM System/370 Models 145 and 155 Using OS/360, #GC27-6945
Emulating the IBM 1410 and 7010 on the IBM System/370 Models 145 and 155 Using OS/360, #GC27-6946
Emulating the IBM 7070/7074 on the IBM System/370 Model 165 Using OS/360, #GC27-6948
Emulating the IBM 7080 on the IBM System/370 Model 165 Using OS/360, #GC27-6952
Emulating the IBM 709, 7090, 7094, 7094 II on the IBM System/370 Model 165 Using OS/360, #GC27-6951
Hypervisor Documentation
Hypervisor for Running 7074 Emulation as an OS/360 Task, #360D-05.2.005
Double 7074 Emulation on a System/360 Model 65, #360D-05.2.008
Hypervisor RPQ's
Shared Storage RPQ for a System/360 Model 65, #E880801
Shared Storage RPQ for a System/360 Model 50, #E56222

A high-level microprogramming language (MPL)
by R. H. ECKHOUSE, JR.
S.U.N.Y. at Buffalo
Amherst, New York

INTRODUCTION

As late as 1967, a prominent researcher reported to his organization1 that he believed a successful higher-level microprogramming language seemed unlikely. At the same time, other members of the same organization were describing what they termed "A Microprogram Compiler".2 Meanwhile, other hardware and software designers, equally oblivious of each other, were generating useful and powerful higher-level languages to assist them in their work. As the reader will see, the stage had been set for the development of a higher-level, machine-independent language to be used for the task of writing microprograms.

The research here reported describes a microprogramming language, called MPL, and includes several aspects of the development and use of such a language. The objectives for the language, the advantages and disadvantages, the work which has preceded the development, and the importance and relevance of developing such a language are considered. Finally, we shall consider this current research, showing some preliminary but very promising results.

HIGHER-LEVEL LANGUAGES FOR MICROPROGRAMMING

The area of microprogramming has opened new possibilities for both software and hardware designers because microprogramming has, to a certain extent, blurred the once clear separation of level between the two. In microprogrammable machines we find hardware circuits that incorporate read-only or read/write memory which can determine both the computer's actions and its language. It is therefore beneficial to consider the needs of both the hardware designer and the software designer in the development of a microprogramming language.

Objectives

The hardware designer (the traditional microprogrammer) needs the ability to express the relevant behavior and structural properties of the system. The software designer needs the flexibility of a programming system which allows him to describe the procedures by which a machine can execute a desired function. In combining both of these needs, as microprogramming does, we find that a suitable microprogramming language must be one that is high-level, procedural, descriptive, flexible, and possibly machine-independent.

Advantages

A high-level microprogramming language will free the users from such non-essential considerations as table layouts, register assignments, and trivial bookkeeping details. The language will have the obvious benefits of improved programmer productivity, greater program flexibility, better documentation, and more transferability. By providing the necessary tools for the hardware designer, the software designer, and the machine user, this language can be part of a larger system which is viable for all phases of system design: description, simulation, interpretation, and code generation.

Disadvantages

The seemingly obvious (and traditional) disadvantages of utilizing a higher-level language for microprogramming are loss of efficiency, inflexibility, and high cost. The critics cite the need for a high degree of machine usage, tight code, and maximum utilization of every bit in a microinstruction as the major factors for ruling out the use of such a language. "Basically, a compiler would generally be forced to compete with a microprogrammer who can justifiably spend many
hours trying to squeeze a cycle out of his code and who may make changes in the data path to do so".1
In light of the larger aspects of microprogramming,
the above criticisms seem much less tenable. First, the
current users of machines which can be microprogrammed are not only their hardware designers. These
users do not wish to exercise the microprocessor to its
fullest extent if this leads to "tricky" code, or code that
is difficult to write, debug, and test. Instead, these
users wish to be able to write their own emulation or
application software and be able to use higher-level
languages with all of their benefits.
The second point is that there exist scant measurement standards for determining efficiency, flexibility,
and cost of the presently used methods. Manufacturers,
when asked questions concerning hardware utilization,
concurrency, and efficiency, tend to state that "the
total core size of the microprogram is only X", or "our
machine has achieved the desired speed Y", leaving us
puzzled and unenlightened.
The real costs and savings inherent in a microprogrammable machine should not be measured in terms
of raw speed or core size alone but must be concerned
with the unique flexibility that such a machine offers.
When we discover bugs in the virtual system, we know
that it is clearly less costly to write and implement new
microroutines in a microprogrammable machine than
to rewire a non-microprogrammable machine. And
when we desire to add new features such as virtual
instructions or hardware I/O options, it is again less
costly to do so to a microprogrammable machine. Thus
it would seem that if a higher-level language can aid
this process by further reducing the cost of writing
microroutines, then clearly such an approach is viable
and well worthwhile. The reader is referred to the work
of Corbato3 for additional discussion of the approach.

A suitable language

In developing a suitable language for writing microprograms, the language developer should ask himself whether his language would be newer, better, more enlightened, or more useful than some existing, well-known language. Instead of adding to the proliferation of languages, it would be well worthwhile to utilize some existing language, with extensions if necessary, as the basis for the development. Fortunately, an appropriate higher-level language does exist and can be used not only to write microprograms, but also to describe, simulate, and design hardware.
A small dialect of PL/I, akin to XPL, represents a
suitably modified language amenable to microprogramming. This paper will report on the on-going effort of

the design and use of this dialect, the author's higher-level language for writing microprograms, called MPL. Another dialect of PL/I, described in CASD,4 has already been used to aid in the design of computers (both microprogrammable and not). Thus, the use of a higher-level language has already been demonstrated to be a viable technique for the design, description, and simulation of computer systems, and need not be treated
further in this paper.
Importance and relevance

In the process of developing a microprogramming
language such as MPL we must be concerned with the
relevance of the language. We must find out how effectively the language may be used, and how capable it is
in meeting the needs of the user. Thus, performance is
a criterion of acceptance and we must be able to demonstrate the ease of producing a meaningful high-level
program which can be suitably translated into efficient
microcode.
BACKGROUND
Previous work in developing higher-level languages
for software and hardware designers is rather extensive.
Unfortunately neither side has been concerned with the
other, and we find few attempts to reconcile the two.
APL is one exception, and its proponents have categorized it as a universal, systematic language which is satisfactory for all areas of application.5 Papers have
been written to show how APL can be used to describe
hardware, to formulate problems, and to design systems. Another exception, previously discussed, is
CASD.4 However, the CASD project has since been disbanded, and no attempt has been made to write the
microcode translator discussed in the report of that
project.
Systems programming languages

For all of its contributions and contributors, APL
does not adequately describe systems programming or
microprogramming problems (e.g., timing, asynchronicity, and multiprogramming) without additional explanations in English. In addition, only a subset of
APL has been implemented and the whole language
remains significant but unimplemented.
Other contributions to higher-level, systems programming languages have included EPL, ESPOL,
SYMPL, and IMP. These languages possess block
structure, compound statements, and logical connectives which make the job of system design much easier.
The MULTICS project3 and the development of the
B5500, with its unconventional "machine language,"
have demonstrated the successful application of higher-level languages to operating systems design.
Hardware design languages

Recent papers by hardware designers seem to indicate a strong trend toward the use of higher-level,
machine-independent languages for hardware design.
The objectives of these papers appear to be:
(1) To describe digital machines

(a) Their logic
(b) Their timing and sequencing
(2) To simulate digital machines
(a) Verify new designs
(b) Verify new features
(3) To have machine translatable, formal, hardware
description languages
(a) Supporting (1) and (2) above
(b) To simplify machine design
The objectives have been met to various degrees as
evidenced by the work of Metze and Seshu,6 Chu,7,8 Darringer,9 Schlaeppi,10 Schorr,11 Proctor,12 Gorman and Anderson,13 and Zucker.14 Much of their work
seems amenable in its application to microprogramming,
and all of it represents the application of an existing
higher-level language structure (FORTRAN or
ALGOL) to the hardware specification and design
problem.
Microprogramming languages

The first evidence of a language structure for writing
microprograms appears to be in the work of Husson et al.2 The authors present their views on the more
general concepts for designing a microprogram compiler but they do not have the experience of a working
compiler. They discuss a compiler-language which is
high-level, procedural, descriptive, and machine-independent. They suggest that such a language will require
an intermediary language (some form of an UNCOL)
which will allow for the successful generation of a simulator and a machine-dependent interpretation of the
microcode.
The authors go on to suggest that there should be
compatibility between adjacent, architecturally similar processors or classes of machines. Thus, the compiler-language must permit hierarchical descriptions of
the particular machine class.

Universal languages

Many of the problems encountered by the hardware
and software designers which concern machine-independence are discussed in papers on SLANG15 and
UNCOL.16,17 The SLANG system is concerned with the
basic question, "Is it possible to describe in a machine-independent language processes which in themselves are
machine-dependent?" In the papers addressing the
UNCOL concept, we find the discussion on whether or
not there exists some intermediate language(s) between
any problem-oriented language and any machine language, and whether or not the separation of machine-independent aspects of a procedure-oriented language
from the machine-dependent aspect is feasible.
THE LANGUAGE AND ITS TRANSLATOR
The choice of a higher-level, machine-independent
language for this research required consideration of
several aspects in its development. Some of these aspects, and the conclusions to which their consideration led, included:
(1) A survey of a representative sample of microprogrammable machines, i.e., how machine-independent or widely applicable is the proposed language?
(2) What is the syntax of the language? What are its syntactic elements and how do they relate to microprogramming?
(3) What is the objective of the language? Is it ease of translation into efficient microcode, or is it ease of describing application problems which can be converted into microcode?
(4) How is translation into microcode performed? If the language is machine-independent, at what stage in its translation is machine-dependence introduced? At what stage do we tailor the code toward the particular microprogrammable machine?
(5) How do we evaluate the code produced? How do we know it is correct or good? Is it "concise"?

What follows are answers to these questions, and an
analysis of the effects these answers had in dictating
the ultimate results.
Microprogrammable hardware

The objective in surveying current hardware was to
attempt to classify the similarities and differences in
the various microprogrammable machines. As expected, the architectural differences are not overwhelming, and
in many cases are manifest more in terms of the "state
of the art" technology, than in differences in type of
instruction set, testable conditions, types of addressing,
etc. Indeed, all of the machines can be classified as
classical, Von Neumann in nature with only minor
perturbations.

Syntax

The literature abounds with various languages for
writing systems programs (MOL-360, BCPL, PL/360,
etc.) and for describing and simulating hardware
(LOTIS, CASD, Computer Compiler, etc.). In all
cases, the syntax is simple and easily translatable into
hardware implementable semantics. Such an approach
was taken in specifying the PL/I-like syntax of MPL.

Procedures and declarations

As in PL/I, the basic building block of MPL is the
procedure. The concepts of local and global, scope of
names, etc., have been preserved and represent the
block structure of the language.
Declarations of the various data items (including
registers, central memory, and events) give the attributes of the items. By use of the PL/I "DEFINE"
syntax, register data items are subdivided into their
principal parts (i.e., we may declare a virtual 2n-bit
register and then define its true constituent n-bit
parts).

Data items

There are basically six types of data items. First,
there are the machine registers, both true and virtual.
Second, there is central and micro memory. Third, there
is both local and auxiliary storage which can be similar
to the register data type or the central memory data
type, depending on its implementation in the actual
microprogrammable machine. Fourth, there are
"events," unlike events in PL/I, which correspond to
testable machine conditions (carry, overflow, etc.).
Fifth, there are constants of the type decimal (e.g., 2),
binary (e.g., 1011B), and hexadecimal (e.g., 0FX). The
traditional enclosing quotes around binary and hexadecimal constants may be dropped as long as the constant begins with a digit and ends with a B or X. There
are also label constants and symbol constants (or literal
constants a la XPL). Finally, there are variables which will take on constant values.

Statements

Assignment statements have been modified in MPL to allow concatenated items to appear on either side of the equal sign. Thus, the concatenation of two registers R1 and R2 becomes R1//R2. This newly defined, double-length register can be used logically as if it actually existed, as in:
R1//R2 = R1//R2 + 2;
Additional binary and logical operators have been
added or modified in MPL and include:
a .RSH. b    Shift a right b places
a .LSH. b    Shift a left b places
a ∧ b        a and b
a ∨ b        a or b
a ⊕ b        a exclusive-or b

Finally, the IF statement is able to test an EVENT
previously declared. Thus, a convenient means exists
for a transfer on carry, overflow, etc.
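What a concatenated assignment such as R1//R2 = R1//R2 + 2 reduces to on an n-bit host can be sketched as follows; the register width and helper names are invented for illustration.

```python
# Sketch of MPL's register concatenation: R1//R2 used as one 2n-bit register,
# implemented over two n-bit host registers (n = 16 chosen arbitrarily).
N = 16
MASK = (1 << N) - 1

def concat(r1, r2):
    # R1//R2 as a single 2n-bit value
    return (r1 << N) | r2

def split(value):
    # Recover the two n-bit constituents, discarding overflow beyond 2n bits.
    return (value >> N) & MASK, value & MASK

r1, r2 = 0x0001, 0xFFFF
wide = concat(r1, r2) + 2            # R1//R2 = R1//R2 + 2;
r1, r2 = split(wide)
assert (r1, r2) == (0x0002, 0x0001)  # carry propagates from R2 into R1
```

The point of doing this at the MPL level is that the carry propagation between the two host registers is handled by the translator, not written out by hand in each microroutine.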

Objectives of the language

With microprogrammable machines, two emulation
objectives are commonly identified. First, the hardware may be used to emulate a particular system (S/360
on the IBM 2025, 1130 on the Meta IV). Second, the
microprogrammable hardware may be used to emulate
a particular environment (SNOBOL4, a banking system, etc.). Traditionally, the former objective requires
tight microcoding with efficiency of the produced microcode the end goal. The latter objective requires a good
run-time environment which can support, through
emulated primitives, those features peculiar to the application environment.
The first objective generally requires an efficient
translator, with various techniques of optimization,
including the use of an intermediate language.18 The
second generally does not require an intermediate language, since in most cases the primary language can be
directly translated into the emulated primitives implemented on the host machine.

Translation procedure

In this research the use of an intermediate language
called SML has greatly facilitated the translation process from a higher-level machine-independent language
into microcode. The basis for this intermediate language
can be found in an early paper by Melvin Conway on

MPL

173

Preliminary evaluation and future work

MPL-to-SML
Dictionary Produced

SML-to-Virtual
Object Code

Virtual Object Code
to Object Code

Figure I-Organization of the translator

the use of an UNCOL.19 SML-to-microcode translators
have been written for the INTERDATA 3, and are
capable of producing "compact" code (see Appendix
A). In addition, the translation algorithm for converting the MPL code into microcode allows for multiple
precision data manipulations, a feature very common
to emulator programs. The result is that the process of
emulating a 2n-bit word machine on an n-bit microprogrammable machine is easily done at the highest
level (MPL level) in a most natural fashion.
The general organization of the translator is shown
in Figure 1. Source code is initially translated into
SML in phase 1. At the same time, a dictionary is
constructed for later use in phase 3. Items entered into
the dictionary include real and virtual registers, testable
conditions, literals, and other items DECLAREd. Although the SML produced is machine-independent, the
dictionary is not in that it relates virtual data items to
their real equivalents.
Phase 2 of the translator produces virtual object
code from the SML input. The code is virtual in the
sense that the operands of machine instructions may be
virtual data items (concatenated registers, multiple
precision data items, literals, etc.) and need not be of
equal widths.
The conversion from virtual object code to object
code is resolved in phase 3. Operands of unequal or
virtual nature must be converted to true machine instructions. Literals, immediate operands, virtual and
concatenated registers must be looked up in the dictionary in order that their virtual representations may be
replaced by their object representations.
In general, phase 3 will cause additional lines of
object code to be generated. This code results from the
conversion of virtual operands into true operands.
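The phase-3 substitution can be pictured with a toy dictionary: a virtual operand such as a concatenated register pair expands into its real equivalents, generating the extra object lines just described. Everything below (dictionary contents, mnemonics) is invented for illustration, not the actual translator tables:

```python
# Sketch of the phase-3 idea: virtual operands are looked up in the
# dictionary built during phase 1 and replaced by their real
# equivalents, which may expand one virtual line into several
# object lines. Dictionary contents here are hypothetical.

DICTIONARY = {
    "R0//R1": ["R0", "R1"],    # concatenated pair -> two real registers
    "MDR":    ["MDH", "MDL"],  # 16-bit virtual register -> high/low halves
}

def expand(op, dest, src):
    """Expand one virtual instruction into true object instructions."""
    dests = DICTIONARY.get(dest, [dest])
    srcs = DICTIONARY.get(src, [src])
    # pairing the halves produces the additional object lines
    return [f"{op} {d},{s}" for d, s in zip(dests, srcs)]

assert expand("L", "R0//R1", "MDR") == ["L R0,MDH", "L R1,MDL"]
```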

Preliminary evaluation and future work

The current translators from SML to microcode are
written in SNOBOL4. They are capable of producing
microcoded routines from SML which closely resemble
the same code supplied by the manufacturer (see Appendix A). The whole process of coding the emulation
routines has been made considerably easier by using
MPL, and removes much of the busy work required
in writing microprograms from the programmer's
shoulders.
There are certain drawbacks, however, in using any
systems language such as MPL. In particular, when
one allows the use of every facility at the highest level,
conflicts may arise (such as that which can occur in any
high-level language where assembler code may be generated in-line), and indiscriminate use of those facilities
may lead to reasonable but unexpected results. This
seems a small price to pay in terms of the original objectives set forth in the section on microprogramming
languages.
The original objectives of a high-level, procedural,
descriptive, flexible, and possibly machine-independent
language for writing microprograms have been met so
far. The entire process from the higher-level language
to the microcode has been considered, and the feasibility seems clear. As Husson points out in his book,2
the value of this project is in its use for:
(1) Designing
(2) Debugging
(3) Translating

In each case, the programs must be organized, written,
tested, and debugged. At each level, the organization
and flexibility provided are enough to justify the existence of MPL.
Future work will concern itself with further refinements to the language, and with its application to a
multiple data-path machine such as the IBM 2050.
Techniques for both local and global optimization of
the code produced will also be considered.

APPENDIX A
Figure 2 is a portion of the INTERDATA 3 emulator
written in MPL. The emulated environment is that of
a simplified 360, and Figure 2 shows part of the initializing routine (to fetch the PSW) and the instruction
fetch and decode routines.
[Figure 2-An example using the PL/I-like syntax: part of the INTERDATA 3 emulator written in MPL]

[Figure 3-PL/I-like code translated into SML]

[Figure 4-Second phase of the translation]

[Figure 5-Third phase of the translation]

In the outer procedure "INITIAL", we find the expected sequence:
(1) Put address into memory address register
(2) Read central memory
(3) Copy data out of memory data register

This sequence is common to emulator programs. The concatenation of
R0 and R1 occurs because central memory is actually
16 bits wide, and reads and writes require double-length
registers for both addressing and data handling.


The inner procedure "PHASE 1" represents the instruction fetch, location counter update, and op-code
format recognizer routines. In it can be found two features not normally found in PL/I. First, the binary
operators .RSH. and .LSH. have been added to represent right-shift and left-shift respectively. Additional
operators such as exclusive-or have also been included
in the syntax since they occur frequently in the instruction sets of microprogrammable machines.
Second, the occurrence of the "events" CARRY,
SNGL, CATN, TRUE, etc., in the IF statements are
taken to imply that special conditions within the machine can be tested directly. The actual relationship of
the event to the physical hardware is specified in the
DECLARE statement.
Figure 3 shows the same code as Figure 2, but the
translation into SML can be found interspersed on the
right-hand side of the figure. Operations are denoted
by an "X" followed by parentheses enclosing the name
of the operation. Arguments needed for operations
must first be loaded into argument or A-registers. Results of operations are placed in result or R-registers.
Temporary or T-registers are available for intermediate
results. Finally, literals are indicated by preceding
their names (values) by an asterisk.
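Those conventions are easy to mimic. The toy evaluator below reads a list of SML-like lines: names and asterisk-prefixed literals load the argument registers, and an X(...) line applies the operation, leaving the result in an R-register. The instruction set and register model are a drastic simplification of ours, not actual SML:

```python
# Toy reading of the SML conventions described above: arguments are
# loaded into A-registers, X(op) applies the operation, and the result
# lands in an R-register. Literals carry a leading asterisk.

def run_sml(lines, env):
    a_regs, result = [], None
    for line in lines:
        if line.startswith("*"):            # literal: *value
            a_regs.append(int(line[1:]))
        elif line.startswith("X("):         # operation: X(name)
            op = line[2:-1]
            if op == "ADD":
                result = a_regs[0] + a_regs[1]
            elif op == "AND":
                result = a_regs[0] & a_regs[1]
            a_regs = []                     # arguments consumed
        else:                               # named register or variable
            a_regs.append(env[line])
    return result

# multiply-by-3 style computation: LOCCNT + literal 3
assert run_sml(["*3", "LOCCNT", "X(ADD)"], {"LOCCNT": 7}) == 10
```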
Figure 4 represents the output of phase two of the
translator. This is the traditional and more difficult
code emission phase of the translation process. The results are not true INTERDATA code, however, and
must go through another phase to relate the actual
facilities and data-path widths to the virtual facilities
and data paths which the programmer assumes.
Figure 5 is the output of the third phase of the
translator. The code produced here is actual INTERDATA code in assembler format. The output has required a dictionary to relate the virtual and physical
registers, data-paths, etc., to each other. Construction
of such a dictionary is accomplished in the MPL to
SML phase of the translation, and is facilitated by the
declarations in the MPL code.
APPENDIX B
The INTERDATA 3 is a very fast, simple and uncomplicated machine. Control instructions reside in a
Read-Only-Memory (ROM) that is 16-bits wide. Thus,
the microinstructions of the machine are 16-bits long.
However, the data paths are only 8-bits wide.
Microinstructions for the INTERDATA are somewhat similar to the instructions for a two address
(register-to-register) machine with an accumulator
(AR). The instruction types include:
L  Load
A  Add
S  Subtract
N  And
O  Or
X  Exclusive Or
T  Test
C  Command
D  Do
B  Branch On Condition

The assembler allows literals to be specified as hexadecimal constants and labels. Since labels may be 16-bit
values, their high and low parts are specified by prefixing the literal by an H or L respectively.

The four formats for the ten instructions of the machine
are:

op | destination | source | modifiers
    (for op codes A, S, N, O, X, L)

op | destination | data
    (for immediate instruction forms as above)

op | test or command
    (for test or command instructions)

op | CVGL | address
    (for branch instructions, where C = carry, V = overflow,
    G = greater than zero, L = less than zero)
Thus an add instruction might look like:

A   MAH,R1

where the contents of the source register R1 are added
to the contents of the accumulator AR and the result
is stored in the destination register MAH.
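The register-to-register format above can be pictured as bit fields packed into the 16-bit microinstruction word. The field widths chosen here are hypothetical; the text gives the fields, not their sizes:

```python
# Hedged sketch: packing op/destination/source/modifiers into a 16-bit
# word. The 4/5/5/2 split below is our assumption for illustration only.

def encode_rr(op, dest, src, mods):
    """Pack the four fields of a register-to-register microinstruction."""
    assert op < 16 and dest < 32 and src < 32 and mods < 4
    word = (op << 12) | (dest << 7) | (src << 2) | mods
    assert word < (1 << 16)        # must fit the 16-bit ROM word
    return word

assert encode_rr(op=1, dest=5, src=3, mods=2) == 4750
```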
Modifiers for the various instruction types include:
NA  AR is not added to the source register.
SR  Shift the contents of the source register right one bit
    and then perform the operation.
SL  Shift left as above.
CI  If the carry flip-flop is on, add a one to this
    instruction.
CO  Set the carry flip-flop if a carry is generated out
    of the most significant bit.
NC  No carry.

REFERENCES
1 S G TUCKER
Microprogram control
IBM Systems Journal Volume 6 pp 222-241 1967
2 S S HUSSON
Microprogramming: Principles and practices
Prentice-Hall Englewood Cliffs New Jersey pp 125-143 1970
3 F J CORBATO
PL/I as a tool for systems programming
MIT Project MAC Memorandum M-378 1968
4 E D CROCKETT et al
Computer-aided system design
AFIPS Conference Proceedings Fall Joint Computer Conference 1970
5 K E IVERSON
Programming notation in systems design
IBM Systems Journal Volume 2 pp 117-128 1963
6 G METZE S SESHU
A proposal for a computer compiler
AFIPS Conference Proceedings Spring Joint Computer Conference pp 253-263 1966
7 Y CHU
An ALGOL-like computer design language
Communications of the ACM Volume 8 pp 607-615 1965
8 Y CHU
A higher-order language for describing microprogrammed computers
University of Maryland Computer Science Center Technical Report 68-78 College Park Maryland 1968
9 J A DARRINGER
The description, simulation, and automatic implementation of digital computer processors
Ph D Thesis Carnegie-Mellon University Pittsburgh Pennsylvania 1969
10 H P SCHLAEPPI
A formal language for describing machine logic, timing, and sequencing (LOTIS)
IEEE Transactions on Electronic Computers Volume EC-13 pp 439-448 1964
11 H SCHORR
Computer-aided digital system design and analysis using a register transfer language
IEEE Transactions on Electronic Computers Volume EC-13 pp 730-737 1964
12 R M PROCTOR
A logic design translator experiment demonstrating relationships of languages to systems and logic design
IEEE Transactions on Electronic Computers Volume EC-13 pp 422-430 1964
13 D F GORMAN J P ANDERSON
A logic design translator
AFIPS Conference Proceedings Fall Joint Computer Conference pp 251-261 1962
14 M S ZUCKER
LOCS: An EDP machine logic and control simulator
IEEE Transactions on Electronic Computers Volume EC-14 pp 403-416 1965
15 R A SIBLEY
The SLANG system
Communications of the ACM Volume 4 pp 75-84 1961
16 P R BAGLEY
Principles and problems of a universal computer-oriented language
Computer Journal Volume 4 pp 305-312 1962
17 T B STEEL
A first version of UNCOL
Proceedings of the Western Joint Computer Conference pp 371-378 1961
18 D J FRAILEY
Expression optimization using unary complement operators
SIGPLAN Notices Volume 5 pp 67-85 1970
19 M E CONWAY
Proposal for an UNCOL
Communications of the ACM Volume 1 pp 5-8 1958

A firmware APL time-sharing system
by RODNAY ZAKS,* DAVID STEINGART,* and JEFFREY MOORE**
University of California
Berkeley, California

INTRODUCTION

Incremental advances in technological design often result in qualitative advances in utilization of technology.
The recent introduction of low-cost, microprogrammed
computers makes it feasible to dedicate highly sophisticated and powerful computation systems where previously the needed performance could not be economically
justified. Historically, the contribution made by the
computing sciences to the behavioral sciences has been
limited largely to statistical analysis, precisely because
sufficiently sophisticated computing equipment was
available only outside the experimental situation. Inexpensive time-sharing systems have recently made it
possible to integrate the computer in a novel way as a
tool for conducting experiments to measure human behavior in laboratory situations. A detailed presentation
of computerized control of social science experimentation is presented later. However, many aspects of the
system are of general interest because they exploit the
possibilities of a newly available computer generation.
Iverson's APL language has been found to be very
effective in complex decision-making simulations, and
the source language for the interpreter to be described
is a home-grown dialect of APL. It is in the nature of
interpreters that relatively complex operations are performed by single operators, thus making the ratio of
primitive executions to operand fetches higher than in
any other mode of computation. This is especially true
in APL, where most bookkeeping is internal to particular operators, and a single APL operator may replace
a FOR block in ALGOL, for example. This high ratio
places a premium on the ability of microprogrammed
processors to execute highly complex instruction sequences, drawing instructions from a very fast control
memory instead of from much slower core memory.
In the new generation of microprogrammable computers the microinstructions are powerful enough and
the control memories large and fast enough to permit
an on-line interpreter and monitor to be implemented
in microcode. If a sufficient number of fast hardware
registers is available, core memory serves only for
storage of source strings and operands. The speed advantages of such a mode of execution are obvious.

SYSTEM ARCHITECTURE***
META-APL is a multi-processor system for APL
time-sharing. One processor ("the language processor")
is microprogrammed to interpret APL statements and
provide some monitor functions, while a second one
(the "Interface processor") handles all input-output,
scheduling and provides preprocessing capabilities:
formatting, file editing, conversion from external APL
to internal APL. Editing capabilities are also provided
offline by the display stations. In the language processor's control memory reside the APL interpreter and
the executive. In the Interface processor's reside the
monitor and the translator.
An APL statement is thus typed and edited at a display station, then shipped to the Interface processor
which translates and normalizes this external APL
string to "internal APL," a string of tokens whose left
part is a descriptor and right part an alphanumeric
value or i.d. number corresponding to an entry in the
user tables (see appendix A). External APL may be
reconstructed directly from internal APL and internal
tables, so that the external string is discarded and only
its internal form is stored. This internal APL string is
shipped to the language processor's memory: the APL
processor will now execute this string at the user's request.

*** The concepts presented here are being implemented under
the auspices of the Center for Research in Management Science.
Research on the final configuration is continuing at the Center
and the views expressed in this paper represent the ideas of the
authors.

* Department of Computer Science
** School of Business Administration

[Figure 1-The Meta-APL time-sharing system (projected), serving 16 display stations]

The variables' i.d. numbers ("OC#") are assigned by
the system on a first-come-first-served basis and are
used for all further internal references to the variable.
This OC# is the index which will be used from now on
to reference the variable within the OAT (Operand
Address Table) of the language processor. The set of
internal APL strings constitutes the "program strings."
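First-come-first-served assignment of i.d. numbers is simple to sketch: the first reference to a name claims the next free index into the OAT, and every later reference reuses it. A minimal Python analogue (class and method names are ours, not the system's):

```python
# Sketch of first-come-first-served OC# assignment: the first reference
# to a variable claims the next free index; later references reuse it.

class SymbolTable:
    def __init__(self):
        self.oc = {}                         # name -> OC#

    def oc_number(self, name):
        """Return the variable's OC#, assigning the next one on first use."""
        if name not in self.oc:
            self.oc[name] = len(self.oc)     # next index into the OAT
        return self.oc[name]

t = SymbolTable()
assert t.oc_number("X") == 0
assert t.oc_number("Y") == 1
assert t.oc_number("X") == 0                 # later references reuse the OC#
```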
Microprogramming encourages complex interpretation because the time spent interpreting a given bit or
string of bits is negligible. We have taken advantage of
this ability to allow short descriptors to replace "dead
data" wherever possible so as to minimize the inert-data
flow and maximize the active-data flow. All external
program symbols are translated to tokens internally;
however, as we have previously mentioned, the grammar and semantics of the internal notation are isomorphic to the external symbolic.

APL PROCESSOR: HARDWARE
The laboratory requirements called for a very fast
APL processor capable of executing at least sixteen
independent or intercommunicating experimental programs, each program responding in real time to textual
and analog input from the subject of the experiment.
Once the possibilities of a microprogrammed interpreter became apparent, the search for a machine
centered on those with extensive microprogramming
facilities. Of these the Digital Scientific Meta-4 was
chosen by the Center for Research in Management
Science for its fast instruction cycle, extensive register
complement, and capable instruction set.
The processor fetches 32-bit instructions from a read-only memory on an average of every 90 nsec. Instructions fetch operands from thirty-one 16-bit registers
through two buses and deposit results through a third
into any register. Most instructions may thus address
three registers independently; there are no accumulators as such. Up to 65K of 750-nsec-cycle core may be
accessed through two of the registers on the buses,
I/O through another pair, and sixty-four words of 40
nsec scratch-pad memory through yet another pair.
These registers are addressed as any others and the
external communications are initiated by appropriate
bits present in all microinstructions.
Triple addressing and a well-rationalized register
structure promote compact coding. The entire APL
interpreter and executive reside in under 2,000 words
of read-only memory.
Although special multiply and divide step microinstructions are implemented in the hardware of the
Meta-4, the arithmetic capability of the processor is not
on a par with the parsing, stack management, and other
nonarithmetic capabilities of the interpreter. Adding
a pair of 32-bit floating operands takes about 5 μsec,
a very respectable figure for a processor of this size and
more than adequate for the laboratory environment.
A floating multiply or divide takes 20-25 μsec.
On the other hand, a pass through the decision tree
of the parser takes 1-2 μsec, and as will be seen from the
descriptor codes this tree is fairly deep. This speed is a
consequence of the facility to test any bit or byte in any
register and execute a branch in 120 nsec, or mask a
register in less than 90 nsec.

[Figure 2-The APL processor]

APL PROCESSOR: MEMORY
The experimental situation demands that response
time of the computer system to external communication be imperceptible. We were forced by this consideration to make all active programs resident in core, and
in order to maximize the utilization of the available
address space of 65K, several designs evolved.

1. Through a hardware map, the virtual address
space of 65K is mapped into 256K core.
2. Since many of the APL operators are implemented in core, and since the experimental
situation normally requires many identical environments (programs with respect to the computer), all program strings are accessible concurrently to all processes or users.
3. Through dynamic mapping of the available
physical memory space, individual processes may
be allocated pages as needed, and pages are
released to free space when no longer occupied
by a process. Optimal use is made of the waxing
and waning of individual processes.

The entire virtual memory space is divided into three
contiguous areas: system tables; system and user program strings; processes work space. Within the processes
work space, memory is allocated to the stack and
management table (MT) for each process. The stack
and MT start in separate pages, the stack from the
bottom and the MT from the top, and these two are
mapped as the bottom and top pages of the virtual
work space, regardless of their physical location. As the
process grows during execution, pages are allocated to
it from free space within the process work space and
are mapped as contiguous memory to the existing stack
and MT.
The specifics of the memory and map design were
constrained primarily by available hardware. The computer used has a 16-bit address field; 65K is the maximum direct address space but not adequate for 40-plus
processes. Mapping 256K into 65K by hardware eliminates the need for carrying two-word addresses inside
the computer. Pages are 512 words long: 128 pages in
the 65K virtual space, 512 pages in the real space,
keeping fragmentation to a minimum.
The map is a hardware unit built integrally with the
memory interface. The core cycle is 750 nsec; the map
adds 80-100 nsec to this time.
The map incorporates a 128-word, 12-bit, 40 nsec
random access memory which is loaded every time a
user is activated. The data comprising the user page
map are obtained from a linear scan of the general
system memory map.

[Figure 3-The hardware map]

Each word in the map contains three fields.
• In the n-th word in the hardware map, the right-hand seven bits contain the physical address of
the page whose virtual address is n.
• The two bits adjacent to this field (bits 7, 8) map
the 65K space into 256K (i.e., bank select).
• The three remaining have control functions and
are returned in about 100 nsec to the status register
associated with the memory address register. These
bits are thus interpreted by the microprogram and
any actions necessary are taken before the memory
returns data 350 nsec later, 450 nsec after the
initiation of the read (or write).
• The first bit, when set, causes an automatic write
abort, thereby providing read-only protection of a
given page.
• The second bit, when set, indicates a page fault.
When detected in the status register, a special
routine is executed which allocates a new page to
the user.
• The third bit indicates that a page is under the
control of the interface processor and prevents the
APL processor from modifying or reading that
page.
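The translation path can be sketched as a lookup in the 128-entry map followed by the three control-bit checks. The representation below (a list of dicts, named flags) is our simplification; the real map packs these into a 12-bit word and the fault handling runs in microcode:

```python
# Hedged sketch of the map's behavior: 7 bits of physical page number
# plus write-abort, page-fault, and interface-lock control bits.

PAGE = 512  # words per page

def translate(map_ram, virtual_addr, write=False):
    """Translate a virtual address through the 128-entry page map."""
    entry = map_ram[virtual_addr // PAGE]
    if entry["fault"]:
        return ("PAGE_FAULT", None)      # a new page is allocated to the user
    if write and entry["read_only"]:
        return ("WRITE_ABORT", None)     # read-only protection of the page
    return ("OK", entry["page"] * PAGE + virtual_addr % PAGE)

# virtual page 0 is mapped read-only to physical page 9; the rest fault
ram = [{"page": 9, "read_only": True, "fault": False}] + \
      [{"page": 0, "read_only": False, "fault": True}] * 127
assert translate(ram, 5) == ("OK", 9 * PAGE + 5)
assert translate(ram, 5, write=True) == ("WRITE_ABORT", None)
```

A reference outside the unprotected area thus surfaces as a fault, which the system treats as a request for additional storage.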
In general, the program string area is protected by
the read-only bit; it may be modified only by the interface processor. All free storage in the processes work
space, and all virtual pages not allocated to the process
activated at a given time are protected by the page
fault bit. Thus, when a process references outside its
unprotected area, the request is interpreted as a request
for additional storage. When the interface processor is


modifying either program strings or input-output buffers, those areas are protected by the read-write abort
bit.
The map may be bypassed by setting the appropriate
bits in the map access register. This is to permit loading
of the map to proceed simultaneously with core fetches,
while the map's memory settles down, and to avoid the
bootstrapping necessary if the map always intervened
in the addressing of core.
The system memory map, stored in the top section
of the user's virtual storage, establishes the system-wide
mapping from physical to virtual pages. Each of the
128 entries, one per physical block, contains the owner's
ID number (or zero) and the corresponding virtual
location within its storage. Free pages are chained with
terminal pointers in CPU registers. The overhead incurred in a page fault or page release is thus minimal
(3 μsec).
APL PROCESSOR: INTERPRETER SOURCE
The APL interpreter accepts strings of tokens resident in the program strings area of core. The translation from symbolic source to token source is performed
externally to the APL processor by an interface processor. The translation process is a one-pass assembly
with fix-ups at the end of the pass for forward-defined
symbols. The translation preserves the isomorphism
between internal and external source and is naturally
a bidirectional procedure-external source may be regenerated from internal source and a symbol table.
Meta-APL closely resembles standard APL with
some restrictions and extensions. The only major restriction is that arrays may have at most two dimensions-a concession toward terseness of the indexing
routines. There are two significant extensions.
Functions

Functions may have parameters, which are called
by name, in addition to arguments. This is to facilitate
the development of a "procedure library" akin to that
available in ALGOL or FORTRAN. Parameters eliminate the naming problem inherent in shared code.
The BNF of a Meta-APL function call:

⟨FUNCTION CALL⟩ ::= {⟨ARGUMENT⟩} ⟨FUNCTION NAME⟩
                    {(⟨PARAMETER LIST⟩)} {⟨ARGUMENT⟩}
⟨PARAMETER LIST⟩ ::= ⟨VARIABLE NAME⟩ |
                    ⟨PARAMETER LIST⟩, ⟨VARIABLE NAME⟩

The variables specified as parameters may either be
local to the calling function or global. The mechanics of
the function call will be described later, as this is one
of the aspects of this implementation which is particularly smooth.
Processes

The other significant extension in Meta-APL is the
facility of one program to create and communicate
with concurrently executing daughter programs, all of
which are called processes. Briefly, a process is each
executing sequence represented by a stack and management table in the "processes work space." Any process
can create a subprocess and communicate with it
through a parameter list, although only from global
level to global level. The latter restriction is necessary
because processes are asynchronous and the only level
of a program guaranteed to exist and be accessible at
any time is the global level (of both mother and daughter processes).
The activation of a new program,

$NUprog{P1, P2, P3, ..., Pn} PROGRAM NAME

establishes a communication mechanism, the "umbilical
cord," between calling program A and called program
AA. AA constitutes a new process and will run in
parallel with all other processes of the system. The
cord, however, establishes a special relationship between
AA and A:
-the cord may be severed by either A or AA, causing
the death of the tree of processes (if any) whose
root is AA.
-the parameter list of the $NUprog command establishes the communication channel for transmitting
values between A and AA. All these parameters
may thus be referenced by either process A or
process AA and will cause the appropriate changes
in the other process. To prevent critical races, two
commands have been introduced.
$WA (WAITFOR) which dismisses the program
until some condition holds true.
$CH (CHECK) which returns 1 if the variable has
already been assigned a value by the program, 0 otherwise. It expects a logical argument. Thus $WA ($CH
V1 ∨ $CH V2) will hang program A until either V1 or
V2 has been evaluated by program AA on the other
side of the umbilical cord. It will then resume processing.
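The $CH/$WA pair amounts to asking "has this shared variable been assigned yet?" and blocking on a condition built from such checks. A single-threaded Python sketch of the semantics only (real processes run asynchronously; names here are ours):

```python
# Sketch of $CH / $WA semantics: $CH tests whether a shared variable
# has been assigned; $WA's condition is built from such checks.

UNASSIGNED = object()

def ch(env, name):
    """$CH: 1 if the variable has been assigned a value, 0 otherwise."""
    return 0 if env.get(name, UNASSIGNED) is UNASSIGNED else 1

def wa_ready(env, names):
    """Condition for $WA ($CH V1 v $CH V2): resume when either is set."""
    return any(ch(env, n) for n in names)

shared = {}
assert wa_ready(shared, ["V1", "V2"]) is False   # program A stays dismissed
shared["V2"] = 42                                # daughter AA assigns V2
assert wa_ready(shared, ["V1", "V2"]) is True    # A resumes processing
```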
Among the parameters that may be passed are I/O
device descriptors. Hence, a mother process can temporarily assign to daughters any I/O device assigned to
her. This is to facilitate use of simple reentrant I/O
communications routines to control multi-terminal interactive experiments under the control of one mother
process. The mother may identify daughters executing
the same program string by assigning them distinct
variables as parameters.
The usual characteristics of well ordering apply to
process tree structures.
The BNF of Meta-APL is included as an appendix.


THE DESCRIPTOR MECHANISM
The formats of the internal tokens are as follows:
numerical scalar quantities are all represented in floating point and fixed only for internal use. The left-most
one bit identifies the two words of a floating operand:
the 1-bit descriptor allows maximum data transit.


Operand calls


Variables are identified by descriptors called operand
calls (after Burroughs). The i.d. field of the OC locates
an entry in the Operand Address Table (OAT) which
gives the current relative address of the variable in the
stack, or specifies that a given variable is a parameter or
undefined.
Another bit specifies the domain of the variable,
local or global. Unlike ALGOL, there is one global
block in Meta-APL. The possible addressing is indicated graphically.
When a process is created or a function is called, a
block of storage is allocated to the Management Table
to store the stack addresses of all the variables of that
block-the block known as the Operand Address Table
(OAT). The i.d. field of the OC is an index to the OAT.
When an OC is encountered as the argument of an
operator, the address of the variable is obtained in or
through the OAT. If the variable is local to the current
block and defined, the current stack address is found
in the appropriate location of the local OAT. If the
variable is global, as specified by the domain bit, the
global level OAT is accessed. In either case, if the
variable is undefined, a zero will be found in the OAT
entry for that variable, and an error message will result. If the variable is a parameter, an operand call to
either the calling or global level will be found in the
OAT. In the case of a function, this OC points to either
the calling block OAT or the global OAT and the address/zero/parameter OC will be found there. If the
OC was found in the global OAT, it is a parameter from
the mother process as described above.
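The lookup chain just described can be sketched directly: the OC's i.d. field indexes an OAT; a zero entry means undefined, an address is the current stack location, and a nested OC forwards to the calling or global OAT. The representation (dicts for OCs, lists for OATs) is invented for illustration:

```python
# Sketch of operand-call resolution through the OAT chain. A zero entry
# means undefined; an integer is a stack address; a nested OC is a
# parameter forwarded to the calling or global level.

def resolve(oc, local_oat, caller_oat, global_oat):
    oat = global_oat if oc["global"] else local_oat
    entry = oat[oc["id"]]
    if entry == 0:
        raise NameError("undefined variable")    # error message results
    if isinstance(entry, dict):                  # parameter: follow the OC
        return resolve(entry, caller_oat, global_oat, global_oat)
    return entry                                 # current stack address

glob = [0, 100]                                  # global OAT
caller = [200, 0]                                # calling block's OAT
local = [{"id": 0, "global": False}, 300]        # slot 0 is a parameter
assert resolve({"id": 1, "global": False}, local, caller, glob) == 300
assert resolve({"id": 0, "global": False}, local, caller, glob) == 200
```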

[Figure 4-Stack addressing mechanism: operand calls resolved through the global and local OATs into the stack]

Obviously, parameters may be linked through every
level of process and function.

Operators
Operators are represented by such word descriptors
containing tag bits, identification bits, and some redundant function bits specifying particular operations
to the interpreter (marking the stack, initiating execution, terminating execution).
During parsing, operators are placed on an Operator
Push Down List created for every block immediately
below the OAT for that block. During the execution
phase, operators are popped off the OPDL and decoded
first for executive action (special bits) and number of
arguments. The addresses of the actual operands are
calculated as explained under Variables and those addresses passed to the operator front end. This routine
analyzes the operands for conformability, moves them
in some cases, and calls the operator routine to calculate results, either once for scalars, or many times as it
indexes through vector operands.
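The front-end dispatch can be paraphrased in a few lines: scalars call the operator routine once, while vector operands are checked for conformability and then indexed element by element. This is our reading of the behavior described, not the microcode:

```python
# Paraphrase of the operator front end: conformability check, scalar
# extension along a vector, then either one call or an indexed loop.

def apply_operator(op, a, b):
    a_vec, b_vec = isinstance(a, list), isinstance(b, list)
    if a_vec and b_vec and len(a) != len(b):
        raise ValueError("operands not conformable")
    if not a_vec and not b_vec:
        return op(a, b)                      # scalars: call the routine once
    a = a if a_vec else [a] * len(b)         # extend a scalar along a vector
    b = b if b_vec else [b] * len(a)
    return [op(x, y) for x, y in zip(a, b)]  # index through vector operands

add = lambda x, y: x + y
assert apply_operator(add, 2, 3) == 5
assert apply_operator(add, [1, 2, 3], 10) == [11, 12, 13]
```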


The operator front end represents most of the complexity of the execution phase since the variety of APL
operands is so great.

Function call

The mechanism of function call uses the OPDL. If a
function descriptor is encountered during the parse, it
is pushed onto the OPDL and three zeroed words are
left after it for storage of the dynamic history pointers.
The specifications of the function are looked up in the
function table and one additional zeroed word is left
for each variable which appears in the function header
before the function name, i.e., A←B FOO ... would result in two spaces zeroed, one for A and one for B.
Then, as the parse continues, if a left brace is encountered (as in A←B FOO{P1, ..., Pn}C), parameter OCs P1 through Pn are pushed onto the OPDL until the right brace is encountered. The number of parameters (n) is entered, the function descriptor duplicated, and parsing proceeds on its merry way.
Figure 5-Function call: (a) external APL, A FUNCTN {P1,P2,P3} B; (b) internal APL (program string); (c) the OPDL after parse of the function call; (d) the function as stored, with its function table entry and function code (internal APL)

Figure 6-Management table after activation of C←A FOO (P1, P2, ..., Pn) B: function descriptor, function heading, PP (n), SA (n), MT (n), C-explicit result address, A-argument stack address, parameters P1 ... Pn, B-argument stack address, and the local variables of the function, between OAT (n)/OPDL (n) and OAT (n + 1)

During execution the last entered function descriptor
is popped. This initiates the function call. First, the number n above is compared with the number of parameters specified in the function header. Then the addresses for arguments B and C are entered (these are the current top two elements of the stack). The current stack pointer, MT pointer, and program string pointer are saved in the appropriate locations and x words after the C argument are zeroed to accommodate the x new variables to be defined in the new function (x was obtained from the function string header).
At this point, control is passed to the function program string with the new OAT already formatted.
The purpose of the preceding description is to indicate the kind of manipulation which is cheap in time and instructions in a microprogrammed interpreter. The function call routine takes under fifty instructions and about 10 μsec to execute (plus 0.9x μsec to zero x locations).
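The activation sequence above can be sketched roughly as follows, with all names and the frame layout assumed for illustration:

```python
# Sketch of function-call activation: check the parameter count
# against the function header, record the argument addresses (the top
# two stack elements), save the three dynamic-history pointers, and
# zero x words for the new function's local variables.
def activate(mt, stack, header, n_params, sp, mt_ptr, ps_ptr):
    if n_params != header["n_params"]:
        raise ValueError("wrong number of parameters")
    frame = {
        "B": stack[-1],                 # current top two stack elements
        "C": stack[-2],
        "saved": (sp, mt_ptr, ps_ptr),  # dynamic history pointers
        "locals": [0] * header["x"],    # x words zeroed for new locals
    }
    mt.append(frame)
    return header["code"]               # control passes to the program string
```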
Other descriptor types:

Firmware APL Time-Sharing System

185

Figure 7-The descriptors: bit layouts (binary and octal) for each descriptor type, with field notes such as: Exponent = <1-7>, 2's complement; Mantissa = <8-31>, first bit is sign (0 = +, 1 = -); Domain: 1 = global, 0 = local; Trace: 1 = on, 0 = off; Variable I.D. = <6-15>; Y length up to 11 bits; X*Y must be < 16 bits, followed by X elements and Y elements; L = 0 = double words, 1 = half-words; TR = trace; M = mark stack; D = dyadic = 1, monadic = 0; E = execution delimiter; DD = 00 = function call (bits 6-15 = I.D.), 01 = empty marker (NOOP for parser), 10 = program call, 11 = beginning of line + line number; FF = 0 = output, 1 = input. The descriptor types shown are scalar, vector, operand call, phantom, operator, segment operator (goes to OPDL), I/O descriptor (goes to stack), and end of function definition (RETURN).
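From the scalar field notes (a 7-bit two's-complement exponent in bits 1-7 and a 24-bit mantissa in bits 8-31 whose first bit is the sign), one plausible pack/unpack reading is the following sketch; the exact bit positions within the word are our assumption:

```python
# Sketch of a scalar descriptor's numeric fields: 7-bit two's-complement
# exponent, then a 24-bit sign-magnitude mantissa (sign bit: 0 = +, 1 = -).
def pack(exponent, mantissa):
    exp7 = exponent & 0x7F                 # 7-bit two's complement
    sign = 1 if mantissa < 0 else 0
    mag = abs(mantissa) & 0x7FFFFF         # 23 magnitude bits
    return (exp7 << 24) | (sign << 23) | mag

def unpack(word):
    exp7 = (word >> 24) & 0x7F
    exponent = exp7 - 0x80 if exp7 & 0x40 else exp7
    sign = -1 if (word >> 23) & 1 else 1
    return exponent, sign * (word & 0x7FFFFF)
```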

A TIME-SHARING SYSTEM FOR BEHAVIORAL
SCIENCE EXPERIMENTATION
Historically, the contribution made by the computing
sciences in the behavioral-science area has been limited
almost exclusively to the utilization of computational
resources for statistical analysis of social-science data
and simulation studies. The advantages offered by the
computer technology in other areas have until very
recently been lost to the behavioral sciences. The advent
of reliable and economical time-sharing systems has
opened new vistas to the research horizons of a social-science experimenter. The use of time-sharing systems in programmed learning for teaching and other educational purposes has been well documented. The objective of this paper is to outline a scheme whereby the
process-control technology combined with time-sharing
can be used in a novel way as a tool for conducting
experiments to measure human behavior in laboratory
situations. Traditionally, attempts to monitor human
behavior in decision-making situations have had less
than desirable results, due primarily to the extreme
difficulty in maintaining control over the course of the
experiment. That is, subjects of a behavioral-science
experiment often do not behave in a manner which is
conducive to exercise of experimental control so that

certain variables can be measured in a controlled environment. To meet the challenge of properly controlling experiments in the social sciences, a laboratory
for that purpose was created by a grant from the
National Science Foundation in the early 1960s. * The
intent was to utilize computerized time-sharing with
special-purpose hardware, combined with a suitable
cubicle arrangement so that subjects participating in,
for example, economic-gaming situations could have
their decisions monitored and recorded by the time-shared computer. The idea was to have the experimenter model his economic game by writing his mathematical model as a computer program. At run-time, the
resulting program serves as the executive controlling
subject input from the time-sharing terminal. In this
fashion, the computer serves two functions: to provide
the medium whereby the experimenter may mathematically express his experimental design and to serve
as the data-collection and process-control device during
a time-shared experiment in which subjects at the
terminals are communicating with the computer.
The requirements placed upon a time-sharing system

* A. C. HOGGATT, J. ESHERICK, J. T. WHEELER, "A Laboratory to Facilitate Computer Controlled Behavioral Experiments," Administrative Science Quarterly, Vol. 14, No. 2, June 1969.


when it is utilized for computer-controlled experiments
differ markedly from those placed upon a conventional
time-sharing system. The actual implementation of the
model itself requires a general-purpose computational
capability combined with the usual string-handling
capabilities found on any general-purpose time-sharing
system; and hence, these are a minimal requirement of
any experimental-control computer. The features which
most notably distinguish a time-shared computer system when used for experiments are as follows: (1) Response of the processor to input from the remote terminals must be virtually instantaneous, that is, in experimental situations the usual delay of one or more seconds
by the time-shared processor after user input is prohibitively long. In some measurement situations, in
order not to introduce an additional and uncontrolled
variation in his behavior, such as might be caused by
even minor frustration with the responsiveness of the
time-sharing system, feedback of a response to a subject's input on a time-shared terminal must be less than
approximately five hundred milliseconds. In other less
rigorous experimental situations in which rapid feedback is important, the relatively lengthy response time
of most time-sharing systems has also introduced significant variation in subject behavior. (2) The measurement of human behavior is a costly and time-consuming
process, and hence the successful completion of an
economic-gaming experiment requires the utmost in
reliability of the time-sharing system. Even minor system fall-downs are usually intolerable to the experimental design; for they, at the very least, introduce
possible lost data, i.e., lost observations on subject behavior or time delays in the operation of the experiment.
A system crash normally causes the experimenter to
terminate the experiment and dismiss the subjects and
can even force cancellation or modification of an entire
sequence of experiments if the system fall-down occurred at a particularly crucial point in the experimental
design. For these reasons, the existence of an on-site
time-sharing system is crucial to providing reliable
service to experimenters. Only through on-site installations can control be exercised over the reliability of the
hardware and of the system software. (3) The necessity
of an on-site installation, combined with the meagre
finances of most researchers in the behavioral sciences,
requires that concessions be made in the design of hardware and software to provide economical service. Historically, these concessions have been the development
of a language tailored to the needs of those programming experimental-gaming situations and the development of a single-language time-sharing system for that
purpose. Further, the cost of extremely reliable mass-storage devices has prohibited their use thus far. (4) In
addition to meeting the above constraints of fast time-

shared operation on a small computing system, the
language utilized by the system must (a) have provision
for the usual computational requirements of a behavioral-science experiment. For example, the experimental
program normally modeled in gaming situations requires
that the language have facilities for matrix manipulations and elementary string operations. (b) The language
must be relatively easy for novice programmers to
learn and use; that is, behavioral scientists with little
or no background in the computing sciences must
readily comprehend the language. (c) The program
should be self-documenting, i.e., the language in which
the model is programmed must be general enough so
that the code is virtually machine independent. (d) The
language must allow a limited real-time report of subject performance to the experimenter. The experimenter
must be able to sample a subject's performance while
the experiment is in progress; further, he must do so
without the subject's recognizing that his performance
is being monitored. (e) The language must enable the
experimenter easily to exercise control over segments
of an in-progress experiment. Very frequently, in the
course of an experiment, the need arises for the experimenter to modify the nature of the experiment itself
or to communicate with the subject by sending messages
to him. This requirement and the previous one translate
into the necessity of allowing a controlled amount of
interaction between the time-shared terminals used by
the subjects and the terminal used by the experimenter.
(f) The language must permit a controlled amount of
subject-to-subject interaction for bargaining and other
small-group experiments. Again, this translates into a
need for some degree of interaction among the users of
the time-sharing system. (g) The system must store all
relevant information on subject behavior in machine-readable form for subsequent data analysis. Data on
subject behavior is usually analyzed statistically at the
conclusion of the experiment, often on another computer, and the need for any keypunching is eliminated if all information can be recorded on a peripheral device in machine-readable form. (h) The language must interface the experiment to a variety of special-purpose input-output peripherals, such as galvanic skin response and other analogue equipment, video displays, sense switches and pulse relays for slide projectors, reward machines and the like. (i) A final requirement of the experimental-control language is the need for reentrancy. Reentrant coding permits the use of shared functions among users of the time-sharing system, thereby conserving core.

Figure 8-A time-shared array of language processors

AN ARRAY OF LANGUAGE PROCESSORS (ALPS)

Figure 9-Memory mappings: virtual, processor, and physical views of a word within a module

The Meta-APL Time-Sharing system which has been described represents a conceptual prototype for a time-shared array of language processors (see Figure 8). The ALPS consists of an array of independent dedicated language processors which communicate with the outside world and/or each other exclusively through core memory. These processors are completely independent, and could indeed be different machines. The physical memory consists of core memory plus secondary storage and is divided into 4K blocks allocated to the various processors through the system map. These blocks appear to each processor through the system map as a continuous memory which is in turn paged via the processor maps.
Figure 10-General address computation

Let us consider the successive transformations of an
address issued by a language processor (Figure 9). The
logical page number field of the address is used to access
a location in the processor map whose contents represent the physical page number and are substituted in
the page field of the address. The reconstituted address
is then interpreted by the system map in a different
way: an initial field of shorter length than the page
number represents the virtual module number and is
used to access a location within the system map, which automatically substitutes its contents (the physical module number) for the original field.
This physical module number is then used to access
a memory module while the rest of the address specifies
a word within the module.
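The two substitutions can be sketched as follows; the field widths are assumed for illustration only, not taken from the actual hardware:

```python
# Sketch of the two-level address translation: the processor map
# substitutes a physical page for the logical page field, then the
# system map substitutes a physical module number for the leading
# virtual-module field of the reconstituted address.
PAGE_BITS, MODULE_BITS, WORD_BITS = 6, 3, 10   # assumed widths

def translate(addr, processor_map, system_map):
    offset = addr & ((1 << WORD_BITS) - 1)
    logical_page = addr >> WORD_BITS
    physical_page = processor_map[logical_page]        # first substitution
    reconstituted = (physical_page << WORD_BITS) | offset
    rest_bits = PAGE_BITS + WORD_BITS - MODULE_BITS    # word-within-module field
    virtual_module = reconstituted >> rest_bits
    physical_module = system_map[virtual_module]       # second substitution
    return physical_module, reconstituted & ((1 << rest_bits) - 1)
```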
Note that if no processor is expected to monopolize
all of core memory, there will be many more physical
memory modules than virtual ones for each processor.
The physical module number field will then be much
larger than the original virtual module number so that
the size of physical memory which can be accessed by
any one processor over a period of time can be much larger than its maximum addressing capability, as defined by the length of its instruction address field.
A user logging in on one of the ALPS terminals obtains the attention of the corresponding I/O processor
and communicates with it via the system-wide command language. Once input has been completed a language processor is flagged by having the user's string


assigned to its portion of the system map. Switching
between languages is handled as a transfer within the
system's virtual memory and therefore implemented as
a mere system map modification. For this reason, all
map handling routines are common to all processors,
including I/O processors. Map protection is provided
by hardware lock-out.
Finally, the modularity of the system provides a high
degree of reliability. Core modules can be added or removed by merely marking their locations within the
system map as full or empty. The same holds for the language processors; to each of them corresponds a system map block containing one word per core module that may be allocated to it, up to the maximum size of the processor's storage compatible with its addressing capabilities. Furthermore, each language processor, say #n-1, might have access to an interpreter for language n written in language n-1 (mod the number of language processors), so that, should processor n be removed, processor n-1 could still interpret language n, the penalty being in this case a lower speed in execution.
Similarly, the number of I/O processors can be adjusted to the needs for input-output choices, terminals,
or secondary storage choices.
It should be noted that although the system is asynchronous and modular, the modules, memory as well as
processors need not be identical. In fact, it seems highly
desirable to use different processor architectures to
interpret the various languages.
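The marking scheme for adding and removing core modules might be sketched like this (structure and names assumed):

```python
# Sketch of map-based modularity: a processor's system-map block holds
# one word per core module; adding or removing a module is just
# marking a slot as full or empty, bounded by addressing capability.
EMPTY = None

class SystemMapBlock:
    def __init__(self, max_modules):
        self.slots = [EMPTY] * max_modules      # one word per core module

    def add_module(self, physical_module):
        for i, slot in enumerate(self.slots):
            if slot is EMPTY:
                self.slots[i] = physical_module
                return i                         # virtual module number
        raise MemoryError("addressing capability exhausted")

    def remove_module(self, virtual_module):
        self.slots[virtual_module] = EMPTY
```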

In summary, the essential features of the ALPS
time-sharing system are:
-automatic address translation via a multilevel
hardware mapping system; each user and each
processor operates in its own virtual storage.
-the type, architecture, and characteristics of each
processor are optimized for the computer language
that it interprets, allowing for maximum technological efficiency for the language considered.
This is essentially a low-cost system since each processor has to worry about a single language, and the
overhead for language swapping is reduced to merely
switching the user to a different processor, allowing a
smaller low-cost processor to operate efficiently in this
environment.
ACKNOWLEDGMENTS
The authors are indebted to Messrs. A. C. Hoggatt and
F. E. Balderston, who encouraged the development of
the APL system. Appreciation is due to Mr. M. B.
Garman for editorial suggestions and stimulating discussions, to Messrs. E. A. Stohr and Simcha Sadan for
their work on APL operators, and to Mr. R. Rawson
for his effort on the temporary 10 processor. We are
also grateful to Mrs. I. Workman for her very careful
drawings and typing.

APPENDIX-SIMPLIFIED BNF META-APL EXTERNAL SYNTAX
Notes: (1)
(2)
(3)
(4)

{ ... } denotes 0 or 1 times; ( ) are symbols of Meta-APL.
Lower-case letters are used for comments to avoid lengthy repetitions.
cr denotes a carriage-return.
PROGRAM in BNF is equivalent to PROCESS in the text.

Group 1

(FUNCTION BLOCK)
(PROGRAM BLOCK)
(PROGRAM DEFINITION)
(PROGRAM NAME)
(PROGRAM)
(STATEMENT)
(STATEMENT LINE)
(SYSTEM COMMAND)
(BRANCH)
(IMMEDIATE)
(SPECIFICATION)

::= (FUNCTION DEFINITION) (STATEMENTS) V
::= {(PROGRAM DEFINITION)} (PROGRAM)
::= (PROGRAM NAME) {«PARAMETER LIST»}
::= (NAME)
::= (STATEMENT) I (PROGRAM) (STATEMENT)
I (PROGRAM) (FUNCTION BLOCK)
::= {(LABEL)} (STATEMENT LINE) cr
::=(BRANCH) I (SPECIFICATION) I (SYSTEM COMMAND)
I (IMMEDIATE)
:: = (see system commands)
::= → (EXPRESSION)
::= (EXPRESSION) cr I (EXPRESSION); (IMMEDIATE)
::= (VARIABLE) ← (EXPRESSION) cr I (OUTPUT SYMBOL)
← (EXPRESSION)


Group 2

(NUMBER)
(DECIMAL FORM)
(INTEGER)
(DIGIT)
(EXPONENTIAL FORM)
(VECTOR)

::= (DECIMAL FORM) I (EXPONENTIAL FORM)
::= {(INTEGER)} . {(INTEGER)} I (INTEGER)
::= (DIGIT) I (INTEGER) (DIGIT)
::= 0 I 1 I 2 I 3 I 4 I 5 I 6 I 7 I 8 I 9
::= (DECIMAL FORM) E (INTEGER)
::= (SCALAR VECTOR) I (CHARACTER VECTOR)

(SCALAR VECTOR)
(SPACES)
(SPACE)
(EMPTY VECTOR)
(CHARACTER VECTOR)
(CHARACTER STRING)
(CHARACTER)
(NAME)
(ALPHANUMERIC STRING)

::= (NUMBER) I (SCALAR VECTOR) (SPACES) (NUMBER)
::= (SPACE) I (SPACES) (SPACE)
::= (one blank)
::=" I 1.0 I p (SCALAR)

I (EMPTY VECTOR)

(ALPHANUMERIC)
(NUMERICAL TYPE)
(LOGICAL)
(LABEL)
(IOSYMBOL)
(OUTPUT SYMBOL)
(INPUT SYMBOL)
(DEVICE ID)

:: =' (CHARACTER STRING )'
:: =

(CHARACTER)

I (CHARACTER STRING)

(CHARACTER)

::= (LETTER) I (DIGIT) I (SYMBOL)
,
::= (LETTER) I (LETTER) (ALPHANUMERIC STRING)
::= (ALPHANUMERIC) I (ALPHANUMERIC STRING)

(ALPHANUMERIC)
(LETTER) I (DIGIT)
::= (NUMBER) / (VECTOR)
::=011
::= (NAME):
::= (INPUT SYMBOL) I (OUTPUT SYMBOL)
::=0 I (DEVICE ID)
:: = 0 I quote-quad
:: = (undefined as yet)
::=

Group 3

(SCALAR OPERATOR)
(MONADIC OPERATOR)
(DYADIC OPERATOR)
(MONADIC SCALOP)
(DYADIC SCALOP)

::= (MONADIC SCALOP)
::= (MONADIC SCALOP)

I (DYADIC SCALOP)
I (MONADIC MIXEDOP)

I (MONADIC EXTENDED SCALOP)
I (DYADIC MIXEDOP)
I (DYADIC EXTENDED SCALOP)
:: = + I - I X / + I r I L I * /log I I I ! I ? / 0 I I'-'
::= (MONADIC SCALOP) /A I V I nand / nor I < I :::; I
I ~ I $> I $~
::= (DYADIC SCALOP)

(EXTENDED SCALAR OPERATOR)

::= (MONADIC EXTENDED SCALOP)

(MONADIC EXTENDED SCALOP)

::= (SCALAR OPERATOR) /

I (DYADIC EXTENDED SCALOP)
(DYADIC EXTENDED SCALOP)
(COORD)
(MIXED OPERATOR)
(MONADIC MIXEDOP)
(DYADIC MIXEDOP)

I (SCALAR OPERATOR) /[COORD]
I (DYADIC SCALOP). (DYADIC SCALOP)
10. (DYADIC SCALOP)
::=112
::= (DYADIC MIXEDOP) (MONADIC MIXEDOP)
:: = pi, II'-' I q, I transpose/grade-up/grade-down I V I d
: : = pi, I (. I q, I transpose I / I \ I i I ! I e I ! I T I

Group 4

I (VARIABLE) I (INPUT SYMBOL)
I (MONADIC EXPRESSION) I (DYADIC EXPRESSION)
I (MONADIC EXPRESSION) I (0-ARG FUNCTION)

(EXPRESSION)

::= (NUMERICAL TYPE)

(MONADIC EXPRESSION)

::= (MONADIC OPERATOR) (EXPRESSION)

(DYADIC EXPRESSION)

::= (EXPRESSION) (DYADIC OPERATOR) (EXPRESSION)

I (l-ARG FUNCTION)
I (ALPHANUMERIC STRING)

(RELOP)
(ALPHANUMERIC STRING)
I (LOGICAL) (RELOP) (LOGICAL) I (2-ARG FUNCTION)

(RELOP)
(FUNCTION NAME)
(0-ARG FUNCTION)
(l-ARG FUNCTION)

::= < I ≤ I = I ≥ I > I ≠
::= (NAME)
::= (FUNCTION NAME) {({PARAMETER LIST»}
::= (FUNCTION NAME) {({PARAMETER LIST»}

(2-ARG FUNCTION)

::= (EXPRESSION) (FUNCTION NAME)

(EXPRESSION)

(PARAMETER NAME)
(LETTER)

{( (PARAMETER LIST»} (EXPRESSION)
::=V{ {VARIABLE NAME)}f-{ (VARIABLE NAME)}
(FUNCTION NAME) {( (PARAMETER LIST»}
(VARIABLE NAME) {(LOCAL VARIABLES)}
I V{ (VARIABLE NAME)}f-{FUNCTION NAME)
{«PARAMETER LIST»} {(VARIABLE NAME)}
{ (LOCAL VARIABLES)}
::= (NAME)
:: = (VARIABLE NAME) I (INDEXED VARIABLE)
::= (NAME) [(EXPRESSION) {;{EXPRESSION)}]
::= (PARAMETER NAME) I (PARAMETER LIST)
, (PARAMETER NAME)
::= ; (VARIABLE NAME)
I (LOCAL VARIABLE) ; (VARIABLE NAME)
::= (VARIABLE NAME)
::= A I B I C I D I E I F I G I H I I I J I K I L I M

(SYMBOL)

::=]

(FUNCTION DEFINITION)

(VARIABLE NAME)
(VARIABLE)
(INDEXED VARIABLE)
(PARAMETER LIST)
(LOCAL VARIABLES)

I N I O I P I Q I R I S I T I U I V I W I X I Y I Z

I [I f-I ~ I + I X 1/ I \ I, I· I .. I-I <
I> 1=/= I::; 121 = 1)1(1 V 1/\ lei :111T
I-I t I i 1,1"-'lol?ILI rl-I*lpIU
I n I a I c I ~ I + I w I 0-1 V I A I' \ ( I)
0

\

Designing a large-scale on-line real-time system
by SUMIO ISHIZAKI
The Fuji Bank, Limited
Tokyo, Japan

OBJECTIVES AND BACKGROUND OF TOTAL
BANKING SYSTEM


The Fuji Bank, Ltd., now employs a computerized
total banking system. The objective of the system and
the main fields of application are described below. *

• Estimation of market share
• Selection of branch sites
• Study of personnel requirements
• Financial analysis of corporate customers

Automated customer services

• Services to correspondent banks
• Billing services for professionals, credit cards,
electricity, rent for living quarters, pension
funds, hospital fees, etc.
• Collection of tuition fees
• Scoring of school entrance examinations
• Repayments to scholarship funds
• Inventory analysis and control, etc.

Major business activities covered by the system
Data processing

All kinds of deposit accounts, loans, domestic remittances, foreign exchange, stock transfer, management of portfolio, calculation of depreciation on
furnishings and equipment, payroll, etc. **

Objectives of total banking system

Management information system

Fuji Bank has invested over $30 million in computerization for three major objectives.

(a) Organization of Various Data Filing and Retrieval
Systems.
• Customers information files
individual customers
corporate customers
• Management reports file
• Corporate accounts file
• Personnel information file
(b) Management Science
• Forecasting macro-economic activities
• Forecasting deposits

Cost saving

Cost reduction is the first objective of computerization. Most important is the saving in personnel expenses. The imbalance in Japan's labor market has
been getting worse from year to year. Last spring, the
number of high school graduates intending to take up
jobs was only 657,000 against 4,701,000 openings, so
that only 14 percent of demand could be met.
The increase in the volume of business would have
required the addition of 2,000 new employees to the
staff of Fuji Bank over the next six years if computerization had remained within the limits of an off-line
system. In view of the conditions in the labor market,
it would have been nearly impossible to recruit 2,000
new employees in addition to the 1,500 needed each
year for filling the vacancies created by retirement.

* Fuji Bank has 3 million ordinary deposit accounts for which
passbooks are used and on which interest is paid. In addition to
ordinary deposits and withdrawals, these accounts are used for
the payment of all kinds of bills (telephone, electricity, gas and
water), for the settlement of credit card balances and other
automatic transfers. Ordinary deposits account for 53 percent of
all individual deposits.
** In Japan, instead of mailing checks, remittances are usually
sent by teletype or by a computer message switching system.


Figure 1-Layout of multi-processor system (computer side and teller terminals)

By installing an on-line real-time system for all major
business operations, even the growing workload can be
handled with the present number of employees in the
next six years, thus saving not only personnel costs
but also the additional office space which would have
been required.
Better management

The following advantages are gained by large-scale
computerization:
(a) Greater accuracy in office work
Prevention of errors and unauthorized payments
(b) Greater speed in office work
Increase in labor productivity
(c) Better management reports
Management reports whose preparation by hand
would simply be impossible can be compiled
quickly and exactly.
(d) New personnel management
Computerization relieves the staff of monotonous
or "mechanical" routine work. An on-line system
fitted with various subordinate checking systems
significantly decreases the errors in clerical work;
even inexperienced operators quickly become experienced and the burden on the supervisory
staff is reduced.

Customer services

(a) The principle of accurate and fast data processing
can be applied to customer services.
An on-line real-time system not only reduces the
possibility of faulty or unauthorized operations


but also reduces the customers' waiting time because all major banking operations, including
ledger retrieval, can be computerized.
(b) Through computerization, central control of all
deposit ledgers is achieved so that customers
can be allowed to use anyone of the 206 branches
of Fuji Bank in Japan for unlimited deposits and
withdrawals.
(c) Customers of the 206 branches can use any
branch for making remittances to customers of
other branches and remittances can be effected
within seconds.
(d) Organization of new customer services: The
growth of banking activities involves new bank
services such as consumer credits, payroll deposits, the credit card business and automatic
debiting of public charges (telephone, electricity,
gas and water) whose increasing volume could
hardly be handled without computers. As a
matter of fact, many of the new services have
been developed in response to computerization.
In this sense, the automated business procedures
described above can be regarded as computerrelated customer services.

OUTLINE OF TOTAL BANKING SERVICES
The following discussion will focus on the role of
the on-line real-time system within the total banking
system.
Nationwide network

The Fuji Bank System, the largest private system in operation, consists of a network of about 1,000 on-line teller terminals in the Bank's 206 branches linked
by a data communication network comprising 516 lines
with a capacity of 1,200 or 200 bits per second.
System configuration

The main computer of the system consists of four
UNIVAC 1108 multi-processors installed at the computer centers located in Tokyo and Osaka, two units
at each center (Figure 1). The system further includes
two units IBM 360, three units UNIVAC 418, two
units NCR Century, three units UNIVAC 9300 and
others for batch processing.
Configuration of the computers

Various I/O units such as 23 FH 1782 magnetic
drums, 5 Fastrand II drums, 9 FH 432 drums, 17

standard communication subsystems and 28 VIII C magnetic tape units are connected with the UNIVAC
1108 multi-processors and moreover connected with
processors of different systems acting as back-ups in
case of failure.


instantaneously: Osaka branch (deposit section) → Osaka Computer (deposit section → remittance section) → Tokyo Computer (remittance section → deposit section). This case illustrates the assimilation of deposits and remittances.

CHARACTERISTICS OF SYSTEMS DESIGN
The objective of the systems design was to determine
among the possible alternatives a systems capability
which would meet the requirements of maximum performance and lowest cost.
Diversified applications and voluminous data processing (high traffic rate) had to be balanced, by compromise, against systems economy and efficiency (quick response).

Variety of applications

Scope of application

The present system is used for all major banking
operations including all kinds of deposits (checking
accounts, ordinary deposits, time deposits, deposits at
notice, savings accounts, special deposits for tax payment), domestic remittances and preparation of balance
sheets for all branch offices.
Combination of inquiry-and-answer system and message switching system

In the actual use of the installation, an organic
combination of several systems with different modes or
operations, e.g., the inquiry-and-answer system and
the message switching system, will be necessary. Below
are a few examples of such combinations:
(a) For funds paid in through the remittance system
from a distant location, the computer retrieves
the account of the payee and automatically
makes the entry into the customer's ledger.
(b) Another example is the so-called network service
which allows deposits and withdrawals at any of
the Bank's 206 branch offices. This system relies
on separate data files at each of the two computer centers in Tokyo and Osaka. If a customer
of a branch in Tokyo withdraws money at a
branch in Osaka, the Osaka computer must also
retrieve and update the data file of the Tokyo
Center. The procedure involves the following
steps which are taken automatically and almost

Simultaneous real-time processing and batch processing

Another feature of the system is the possibility of
providing access to the computer file used for real-time
processing also for batch processing. This has improved
the performance of the system and opened the way to
multi-programming including both real-time and batch
programs. Practically, the system allows the automatic
posting of salaries or stock dividends in customers' accounts, debiting customers with electricity, gas, water
and telephone charges and credit card purchases,
debiting of large batches of checks returned from the
clearinghouse, and dispatch of accumulated items to a
branch office which had been closed on account of a
local holiday. This can be done not only before or after
business hours but also at other times.

Handling of complex office work

Although used for a great variety of applications, the
operation of the terminals has been standardized as
much as possible. As mentioned above, the applications
include the handling of several types of deposits, such
as checking accounts, ordinary and time deposits,
transfer from one account to another, remote processing
of files and other on-line as well as off-line operations.
Efficiency and economy through order-made system

Efficiency and economy have been the ultimate objectives in having the entire system, from the system
design to the terminals, made to the Bank's specifications. In this way, the system can handle greatly
diversified operations (in addition to all kinds of deposits and domestic remittances, foreign exchange and
loans) all involving a high traffic rate.
Terminals

In view of the variety of applications and the large
volume of transactions, two types of terminals have
been adopted. The first is the deposit terminal designed
for processing fixed-length messages with the emphasis


Spring Joint Computer Conference, 1971

on efficiency. The other is the remittance terminal built
for processing variable-length messages with the emphasis on flexibility. But the remittance terminal can
also be used for handling deposits and, by shifting the
connector, can serve as a transmitter as well as a receiver, depending on the conditions at a particular time.
The total number of terminals required for the system
amounts to about 1,000 units so that a reduction in the
unit costs of the terminals results in substantial savings.
The deposit terminal, called Fujisaver, costs $5,000, the
remittance terminal, named Fujityper, $3,000. The
Fujisaver, in particular, possesses several noteworthy
features. It simultaneously imprints the passbook, the
journal and a slip; its panel of indicator lamps (Figure
2) shows the condition of the system and, if necessary,
the machine locks until the required corrective action
is taken.

Supervisory program

Incorporating a supervisory program into the system
has reduced the overhead load of the operating system
(EXEC 8) and lessened the programming burden of the
user, thus improving performance (Figures 3 and 4).

Project management

UNIVAC undertook the development of the computers and Oki Electric Co. the development of the
terminals while Fuji Bank assumed responsibility for
the user program. Fuji Bank was also in charge of the
overall management of the project whose completion
required about six years, due to the size of the project
and the necessity of developing entirely new banking
terminals.

Peak workload processing techniques

Load distribution related to terminals

In order to avoid too frequent interruptions of the
computers ordinarily doing batch processing, both
deposit and remittance terminals have been equipped
with buffer memories (576 characters) so that transmission
is in block units.
Development of supervisory program

In addition to the standard operating system, the
above-mentioned supervisory program has been developed,
which exercises activity control over the real-time
program as well as input/output control.
Reduction of access frequency to random
access file

Most of the deposit ledgers are recorded on the 23
high-speed magnetic drums with an average access time
of 17 ms, but in view of the extremely high traffic rates,
the access frequency to the random access file should
be reduced as much as possible. Each magnetic drum
contains the ledgers of the customers of about 11
branches. To retrieve an account, the drum unit is
found from the code number of the branch and a table
on core giving the corresponding drum unit. The code
number of the account is then divided by the number of
blocks per drum, and the account is located by transferring
the resulting "nth" block to core. This procedure
speeds up the operation (Figure 5).
For time deposits, the drum address is identical
with the number of the deposit certificate, which makes
direct addressing possible and speeds up the operation.
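In modern notation, the two-step lookup just described can be pictured roughly as follows. This is an invented sketch, not the Bank's actual program: the branch-table entries are hypothetical, and the figure of 3,157 blocks per drum is taken from the master-file description later in this paper.

```python
# Hypothetical sketch of the two-step drum lookup described above.
# Branch table contents are invented for illustration.

BLOCKS_PER_DRUM = 3157  # blocks on one FH 1782 master-file drum

# Core-resident table: branch code number -> drum unit number
branch_to_drum = {101: 0, 102: 0, 112: 1}

def locate_account(branch_code, account_code):
    """Return (drum unit, block number) for a deposit account."""
    drum = branch_to_drum[branch_code]        # table lookup on core
    block = account_code % BLOCKS_PER_DRUM    # remainder "n" selects the block
    return drum, block

# The "nth" block is then transferred from the drum to core and
# searched there for the account record.
```

Only one drum transfer is needed per retrieval, which is the point of the scheme: the division replaces any search of the drum itself.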
Balance between real-time and batch
processing

In order to achieve the twin objectives of lessening
the load at peak hours and, at the same time, increasing
efficiency, the proportion of real-time to batch
processing has been fixed with great care for each
specific operation.

Computer load distribution

Since the system was to cover about 6 million accounts
and an average of 650,000 transactions a day
was foreseen, the load was distributed between two
computer centers, Tokyo and Osaka, each equipped
with a UNIVAC 1108 multi-processor system. A further
reason for this arrangement was the high cost in Japan
of long-distance communication lines used exclusively
for data transmission.

[Figure 2-Fujisaver-Indicator lamp arrangement. Lamps: Test, Off Line, On Line, Hold Account, Account # Error, X Total, Overdraft, Misoperation, Turn Page, Excessive Amount, Passbook Set?, Journal, Reentry, Ready, Busy, No Response]

Designing a Large-Scale On-line Real-Time System

[Figure 3-Program configuration: the operating system (EXEC 8), the supervisory program (annotated "higher performance") and the user program, with core allocations of 130 KW, 4 KW and 65 KW shown]

[Figure 5-Master data retrieval sequence: the branch # indexes a table on core giving the drum unit #; (item # x 2 x 10^8 + account #) is divided by the # of blocks, and the remainder "n" selects the "nth" data block, which is transferred from the master data file]

Programming by Assembler Language

A compiler is more advantageous for programming
and program maintenance, but for the most efficient
handling of a large random access file by "bit" units
and for increasing throughput at peak hours, an assembler
proved much more efficient. Hence, the entire
program, amounting to over 50,000 steps, has been
coded in assembler.
Traffic simulation

The results of a traffic simulation, for which GPSS
II was used, are shown in Figure 6. The simulation
proved that, despite a heavy future increase in the data
volume, the life cycle of the system can be prolonged
considerably by adding more magnetic drums, such as
the FH 1782 or FASTRAND II, or their control units.

[Figure 4-Block diagram of supervisory program, showing the communication unit interface and output control of deposit accounts]

Techniques for economizing memory capacity
Main memory unit

In order to reduce the demands on the main memory
unit and simplify program maintenance, the common
parts of different items have been consolidated as
much as possible and brought together in subroutines.
Within the limits of the queuing time resulting from
the traffic volume, program overlay from the FH 432
magnetic drums, with an average access time of 4.25 ms,
has been attempted.
Random access file

Similar to the main memory unit, a master file system
consolidating the common items of the random access
file has been adopted. All items subject to frequent
change with regard to overflow, item length and data
numbers are recorded in the slave file. A continuous
chaining of the two files is possible through the link
address. All items are recorded and processed by "bit"
units.
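The master/slave chaining can be pictured with a small sketch. The data structures below are invented for illustration (the real files live on drum, not in core, and carry bit-packed items): fixed, common fields sit in the master record, and variable-length overflow items are reached by following the link address through the slave file.

```python
# Illustrative sketch (invented structures) of master/slave file
# chaining through a link address. Link 0 means "no further record."

master_file = {  # master record per account: common items + link address
    4711: {"balance": 1200, "link": 0},
    4712: {"balance": 300,  "link": 2},
}
slave_file = {   # variable-length overflow items, chained via "link"
    2: {"items": ["time deposit #17"], "link": 5},
    5: {"items": ["credit card"], "link": 0},
}

def all_items(account):
    """Follow the link-address chain from master to slave records."""
    items, link = [], master_file[account]["link"]
    while link:                        # continuous chaining of the two files
        rec = slave_file[link]
        items += rec["items"]
        link = rec["link"]
    return items
```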
For the master file, one FH 1782 magnetic drum
contains the records of 11 branch offices and up to
299,915 accounts. For retrieval, each magnetic drum
has been divided into 3,157 blocks, a number corresponding
to the number of items and accounts divided
by a multiple of 11. Each block consists of 95 accounts,
a number fixed by taking into consideration the buffer
capacity of the core memory and the frequency of
access to the drum. The starting position of block No. 0
has been staggered by 287 blocks for each branch so as
to average the number of accounts in each block.

[Figure 6-Work volume and rate of use per I/O: GPSS II simulation results plotting work volume against the rate of use per I/O (10-100 percent) for the FH 1782 drums, the FH 432 exchange and dummy-write drums, the FASTRAND II transaction and regular-master drums, and the VIIIC deposit and exchange magnetic tapes]

Prevention of failure

In order to reduce as much as possible loss of time
from failure and to expedite recovery, arrangements
have been made in three fields, namely, hardware,
software and business procedures.

Hardware

The central processors of the multi-processor system
back each other up so that even in case the I/O
controller breaks down, either Processor No. 1 or
Processor No. 2 can step into the breach. One of the
four memory banks of the main memory is usually
assigned to batch processing, but if one of the other
three banks fails, the batch processing can be suspended
at once and the bank switched to the operating system
or the real-time program. The standard communication
subsystems, and the equipment ordinarily used for
batch processing, such as various magnetic drums,
magnetic tapes and printers, are also connected with
different systems and can be switched immediately to
real-time processing if an accident occurs.

The data communication network comprises different
systems for deposits and remittances, which lessens the
probability that both operations will be interrupted at
the same time. In case the equipment at a branch office
gets out of order, a neighboring branch office can take
over transmission and reception of messages according
to a prearranged plan.

Similar precautions have been taken for the terminals.
By shoring up the functions of off-line processing,
passbook entries can be made and operational errors or
over-payments prevented even if a failure occurs in the
computer or the transmission lines.

Software

Quick recovery

Since the system handles more than half the Bank's
daily business, it is very important to keep interruptions
due to breakdowns to a minimum. No matter when a
failure occurs, a comprehensive and instantaneous check
must be possible to ascertain exactly how far the
operation had proceeded and at what point the process
came to an end, so that the trouble can be corrected
not only as quickly as possible but also without missing
a single input or repeating the same input. To this end,
all transactions are recorded in strict chronological
sequence and full detail on both magnetic drums and
magnetic tape. If a failure occurs, the drum master file
is checked against the transaction file and, in case of
discrepancies, the master file can be corrected immediately.
In most cases, the system can be repaired
and restarted in 10-20 minutes.
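The reconciliation step can be sketched as follows. This is a hedged illustration with invented data structures, not the Bank's code: the real transaction file carries full detail of each transaction, not just amounts, and the master file resides on drum.

```python
# Illustrative sketch (invented structures) of checking the drum master
# file against the chronological transaction file after a failure.

def recover(master, last_applied_seq, journal):
    """Reapply journalled transactions missing from the master file.

    master           : dict account -> balance (the drum master file)
    last_applied_seq : sequence number of the last posting known good
    journal          : list of (seq, account, amount) in strict
                       chronological order (the transaction file)
    Returns the set of accounts that had to be corrected.
    """
    corrected = set()
    for seq, account, amount in journal:
        if seq > last_applied_seq:         # missing from the master file
            master[account] = master.get(account, 0) + amount
            corrected.add(account)
    return corrected
```

Because the journal is strictly chronological, no input is missed and none is applied twice, which is exactly the property the text demands.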
Prevention of complete breakdown by partial failure

Because so many different applications are linked to
the same nationwide network, care has been taken lest
a failure in one part of the system affect the entire
network.

(a) Recovery during continuing real-time processing.
Repairs can be made while real-time processing
continues, so that the entire operation need not be
shut down on account of the breakdown of a single
unit. If, e.g., one of the magnetic drums gets out of
order, the files of the branches recorded on this
particular drum will be transferred to a spare drum,
reproducing
the balances brought forward from the previous day
and all transactions from the beginning of the day until
the time of the failure from the magnetic tape. This
makes it unnecessary to wait for the physical repair
of the faulty drum. The arrangement that all transactions
are recorded both on magnetic tape and on
magnetic drums, which forms part of the supervisory
program, makes this recovery procedure possible.

[Figure 7-Layout of deposit master file: 36-bit words (bits 35-0) with parity. Word 0: branch no., item, account no., and valid/all-branch flags; word 1: balance; word 2: accumulated interest; word 3: unposted interest at the closing of accounts; word 4: link address or columns for various codes, and date of transfer (year, month, day)]
(b) Since different applications are handled by real-time
processing, care must be taken in the programming
that trouble in one business routine will not
adversely affect others. In and by themselves, the
different business activities are independent of each
other, but actually there is a great deal of interaction.
For instance, a remittance sent through the remittance
system is automatically debited to the deposit account
of the payor and credited to the account of the payee,
so that the transaction necessarily involves the deposit
files. Special techniques, therefore, are required to
prevent a breakdown in one sector from shutting
down others.
(c) Since remittances and the so-called network
service necessitate a constant exchange of messages between the Tokyo and Osaka computer
centers, the failure of one computer should not
influence the other. If money is transferred from
a branch within the limits of the Tokyo Center
to a branch belonging to the Osaka Center, a
breakdown of the Osaka computer should not
cause the transfer to go astray or the same sum
to be transferred twice. The prevention of such
accidents must be planned in the program.
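One common safeguard for this kind of inter-center traffic is to tag each message with a sequence number and have the receiving center ignore duplicates after a restart. The sketch below is our illustration of that idea, not necessarily the mechanism actually programmed for the Tokyo-Osaka link; the class and its names are invented.

```python
# Hypothetical sketch: crediting a remittance exactly once per message,
# so that a retransmission after a computer failure cannot cause the
# same sum to be transferred twice.

class TransferReceiver:
    def __init__(self):
        self.seen = set()        # sequence numbers already credited
        self.credits = {}        # account -> balance

    def receive(self, seq, account, amount):
        """Credit the payee exactly once per message sequence number."""
        if seq in self.seen:     # duplicate retransmission: ignore
            return False
        self.seen.add(seq)
        self.credits[account] = self.credits.get(account, 0) + amount
        return True
```

With this discipline the sending center may safely retransmit any message whose acknowledgment was lost, which also keeps a transfer from going astray.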
Assimilation of ordinary operations and recovery
process
There is much to be gained from making the recovery
process for correcting breakdowns as similar as possible
to the ordinary processing procedure. First, such an
arrangement will economize the capacity of the expensive main memory and facilitate program maintenance
in case the system or the program is modified. Secondly,
it will make the recovery process easier for the computer
operators as well as the operators of the branch terminals. It is of particular value for the Osaka Center
which has no programmers.

Office routine

Care has been taken to prevent the disruption of the
everyday office work even for the short time it takes
to correct a failure. For ordinary deposits, a list of
unposted items is prepared during the night (compilation
of the list takes about four hours) and distributed to
all tellers before nine o'clock the next morning. A similar
list is prepared for current deposits showing outstanding
balances after debiting public charges or checks returned
during the night from the clearinghouse. In this
way, the tellers can comply with demands for payment
even if the computer or the communication lines are
out of order.

Automation of related operations

For the best performance of the on-line system, it is
also necessary to improve the functions of related
operations. The real-time system has reduced to nearly
zero the time needed for retrieving the ledger from the
files, but it is only in the case of deposits that no further
search is required. For withdrawals, it is also necessary
to check the signature file, and this prolongs the time
the customer must be kept waiting. In 1963, therefore,
Fuji Bank, in cooperation with Canon Inc., began the
development of a signature verification system. The
signature inscribed in the passbook is covered with a
black vinyl seal so that it is invisible to the naked eye
and can only be read through a newly developed
verifying machine. Thus, the customer's signature can
be verified at the same time that he presents his
passbook (Figure 8).

[Figure 8-Signature verification system: the verifier reads the sealed signature in the passbook, invisible to the eye, for checking against the withdrawal slip (S. Ishizaki)]

FUTURE DEVELOPMENTS

Total banking system and MIS

By transforming daily operations into a real-time
system, an accurate and up-to-date customers information
file can be prepared. But the information obtained
through on-line processing consists mainly of statistical
data related to deposits and withdrawals. For a really
useful customers information file, a consolidated file
would have to be prepared which would also include
descriptive data such as the extent to which the customer uses automatic debiting of public charges, consumer loans, rental safes or credit cards, information
on the customer's occupation, family status, income,
relations with other banks, codes of business clients
and code of the bank officer in charge of the account.
The system would also have to provide for fast and
simple retrieval. But the number of deposit accounts
alone covered by the present system amounts to six
million and the addition of even a single item would
require an enormous investment for input and maintenance. The inputs, therefore, have to be selected with
great care after examining repeatedly how often the
information will be used. At present, the number of
accounts covered by the customers information file is
gradually being expanded. In the future, when the file
also covers potential customers, it may play an
important role in assessing market potential.
By organizing various data banks and combining
them with IR or management science techniques, the
system could be expanded into a complete MIS, our
goal after the installation of the on-line system.

Common Data Transmission System for All Banks
The second phase in the development of the on-line
systems of individual banks would be the organization
of a common data transmission system linking all 87
Japanese banks. Such a system would require the installation of huge computers able to process 700,000 to 1,350,000 transactions a day. The system would have
to be able to effect an exchange between input/output
messages of different formats. The year 1973 has been
set as target date for the implementation of this system
which, if completed, would represent one of the world's
most extensive data communication systems.

PERT-A computer-aided game
by J. A. RICHTER-NIELSEN
Technical University of Denmark
DK 2800 Lyngby, Denmark

INTRODUCTION

PERT networks in the education of electric power
engineers are only one tool among a host of more
important tools. Thus, the time allotted for the training
of the students must be cut down to an absolute
minimum. To introduce experience in an effective
manner and to arouse the interest of the students, a
computer-aided PERT game seems to be the ideal
educational approach. The purpose of this game is to
give the students an introduction to the subject area
so that later they can, on their own, use the more
advanced literature and specialize in that subject. In
this paper an educational tool will be described which,
through its effective layout and direct appeal to the
special knowledge already acquired by the students,
promotes the wanted motivation and engagement.

Accordingly, the educational tool is presented in the
form of a realistic project with aspects of competition.
It was initiated by a Master's thesis1 at the Technical
University of Denmark in 1968 and has in a developed
form been used in the past years as a part of the
education. The administration of the game is completely
documented in a User's Manual,2 an Operator's Guide,3
and the original thesis.

Preceded by four lectures about the theories of modern
network planning and control methods, the students
plan and follow up a constructional problem or project
with a time consumption of two periods of four hours
each. The constructional problem or project presented
in this paper often occurs in the electric power
industry, and it is simulated on the IBM 1800 computer
installed in the department.

Two major advantages are fulfilled by the tool:
it is designed to handle projects so complicated that
it becomes impossible for the students to get a
comprehensive view, thereby giving them a feeling of
being in the world of reality. On the other hand, the
project is made so simple that the instructor can control
the learning process. Experience has shown that
projects with a complexity corresponding to the use of
about 70 activities are relevant.

The computer program used by the tool is so designed
that it allows a very high degree of flexibility. Any
instructor with special industrial knowledge can develop
a new data set for the program within 2-3 days and,
thereby, interchange the whole project with another
of a similar realistic nature. Such an interchange does
not call for special knowledge of programming.

WHY A GAME?

The purpose of the computer-aided PERT game is,
through the planning and organization of a project, to
train the participants in the possibilities of varying the
construction time and project composition. Alongside
these possibilities there are constraints expressing
the causal relationships of the project and the resources
available for its completion. They must be incorporated
in order to obtain, within a fixed finishing date, the
minimum constructional cost.

The aspect of competition is obtained by having up
to 9 teams, each trying to build up a network in its
optimal configuration, to prevent unforeseen occurrences,
and to minimize the total cost. All this has to
be done under the constraints on time and resources.
In addition, it is calculated, depending upon the
total duration of the project, whether a penalty shall
come into action or not.

The fact that we can accept it as a game rests on
the following two points:

1. The instructor can, directly by inspection of the
total cost for each group, point out the winner.
2. The unforeseen occurrences happen for each
group as a function of the delayed time with
respect to the fixed finishing date. This is quite

similar to the traditional aspect of throwing a
die. But here we have weighted the die in such
a manner that the unforeseen occurrences depend
on the size of the negative slack, i.e., upon
the skill of the players.

[Figure 1-The starting situation of the project: a CAPERTSIM final-decision report (report date 31/8/1971, decision period 2, sorted by slack key) listing, for each activity, the starting and ending events, activity code, actual, lower-limit and upper-limit times, actual cost, ΔCOST/ΔTIME, and the expected, latest and schedule dates with slack]
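The slack that drives both the report's sorting and the weighted "die" comes from the standard PERT forward and backward passes. The sketch below is a minimal generic illustration in modern notation, not the game's actual program (which runs on the IBM 1800); it assumes the activity list is given in topological order.

```python
# Generic PERT slack computation: forward pass for earliest event
# times, backward pass for latest event times, slack = latest - earliest.
# Assumes activities are listed in topological order.

def event_slack(activities, finish_date):
    """activities: list of (start_event, end_event, duration)."""
    events = {e for a in activities for e in a[:2]}
    earliest = {e: 0 for e in events}
    for s, t, d in activities:               # forward pass
        earliest[t] = max(earliest[t], earliest[s] + d)
    latest = {e: finish_date for e in events}
    for s, t, d in reversed(activities):     # backward pass
        latest[s] = min(latest[s], latest[t] - d)
    return {e: latest[e] - earliest[e] for e in events}
```

A negative slack, as in the report above, simply means the fixed finishing date lies before the earliest achievable completion.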

HOW TO PLAY THE GAME

None of the participants needs to have special knowledge
of programming. They are divided into groups, each
having a User's Manual containing the start information.
This includes a description of the assumptions for
the start of the project (e.g., an approval to build
the station from the board of preservation of natural
beauty), a list of activities (e.g., fitting up a 60 kV
outdoor plant), and a listing (Figure 1) of the
starting situation of the project, in which the activities
shown on the list are sorted by increasing slack.
Furthermore, the cost data (Figure 2), which in matrices
and graphs show the marginal cost of each activity as a
function of time, are included. Finally, the rules for
punching the data cards as well as a list of error
messages are given.

The two periods of four hours fit the game well.
The first period is used to construct the network and to
get it tested, as indicated by the control routine
mentioned in a later section. The second period of four
hours is used to carry out the decisions for the execution
of the project, the consequences of which are simulated
by the computer.

Two types of decisions are possible during the
execution.

[Figure 2 (fragment)-Total cost graph]

(5 SYMBOLS)? twh
ALL YOUR CODES SHOULD BEGIN WITH THE LETTER A
YOUR CODE IS AUTOMATICALLY ATWH
TYPE CODES FOR OTHER PARTICIPANTS.
WHEN ASKED 'VERIFIED?', HIT ONLY RETURN KEY IF OK, ANY SYMBOL IF NOT.
WHEN ASKED 'CODE:?' AND YOU HAVE NO MORE CODES, HIT JUST THE RETURN KEY.
BUT FIRST, SUPPLY INFORMATION FOR YOURSELF:
WHAT CLASSIFICATION CODE? 11
LIST PERMISSIONS: VOTE(1), ADD ITEM(2), ADD MESSAGE(3):? 1,2,3
HIT RETURN IF FULL EXPLANATION IS TO BE GIVEN WHEN THIS
CODE FIRST LOGS IN. TYPE ANY SYMBOL IF NOT.? no
TYPE ANY TWO LINES OF INFO. FOR FUTURE REFERENCE:

? this is alphabetic information to identify this respondent.
?
VERIFY: CODE= ATWH  PERMISSIONS: 1 2 3  CLASSIFICATION= 11
THIS IS ALPHABETIC INFORMATION TO IDENTIFY THIS RESPONDENT.
VERIFIED?
CODE:?
IS THE 'WAIT' CHOICE ALLOWED. RETURN=YES, ANY SYMBOL=NO? no
THE SYSTEM WILL KEEP A BACKUP TAPE ASSOCIATED WITH THIS CONFERENCE.
SUPPLY A TAPE REEL NAME TO BE USED FOR THIS PURPOSE. IF YOU DO NOT
HAVE A TAPE ALREADY, HIT JUST THE RETURN KEY AND THIS SYSTEM
WILL REQUEST ONE FROM THE OPERATOR. TAPE NUMBER:? none
TYPE FOUR LINES OF SUBJECT DEFINITION.
FOR THE FIFTH LINE, TYPE YOUR NAME AND HOW TO REACH YOU.

? this is a demonstration conference.
? it has no real subject.
?
?
? this would be the monitor's name and phone number.
?
THIS IS A DEMONSTRATION CONFERENCE.
IT HAS NO REAL SUBJECT.
THIS WOULD BE THE MONITOR'S NAME AND PHONE NUMBER.
VERIFIED?
DO YOU WANT A 'MONITOR MESSAGE' NOW.
HIT RETURN IF YES, ANY SYMBOL IF NO.? no
CONFERENCE SUCCESSFULLY CREATED.
MONITOR CHOICE:? +

Implementation of Interactive Conference System

DELPHI CONFERENCE SYSTEM AT 145603 ON 123171
PLEASE TYPE YOUR CODE? atwh
DO YOU WISH:
     AN EXPLANATION        (1)
     SUBJECT DEFINITION    (2)
     LONG FORM             (3)
     SHORT FORM            (4)
CHOICE? 3

DO YOU WISH TO:
     VIEW SUMMARIES            (1)
     VIEW ITEMS                (2)
     VIEW MESSAGES             (3)
     VIEW VOTES                (4)
     VIEW AND VOTE ON ITEMS    (5)
     VOTE                      (6)
     ADD AN ITEM OR MESSAGE    (7)
     MODIFY AN ITEM            (8)
     WAIT                      (9)
MODE CHOICE:? 7
ITEM OR MESSAGE MUST FIT IN SIX LINES OF 61 CHARACTERS.
HIT RETURN KEY WHEN EACH LINE IS COMPLETED AND WAIT
FOR THE QUESTION MARK BEFORE BEGINNING NEW LINES.
YOU MUST SUPPLY SIX LINES EVEN IF THEY ARE PUT IN
BLANK BY HITTING THE RETURN KEY AT THE BEGINNING OF THE LINE.
ADD ITEM OR MESSAGE IN NEXT SIX LINES. END OF LINE IS HERE *

? this is a test item for demonstration purposes only.
? treat this item as an estimate. when asked for your
? estimate, supply an arbitrary number within the given
? limits.
?
?
THIS IS A TEST ITEM FOR DEMONSTRATION PURPOSES ONLY.
TREAT THIS ITEM AS AN ESTIMATE. WHEN ASKED FOR YOUR
ESTIMATE, SUPPLY AN ARBITRARY NUMBER WITHIN THE GIVEN
LIMITS.
IS THE ABOVE CORRECT AS STATED:
YES(1), NO(2), OR NULLIFY(+).
CONTINUE CHOICE:? yes
INDICATE ASSOCIATION WITH EXISTING ITEM (NOT
MESSAGE) BY SUPPLYING EXISTING ITEM NUMBER.
IF NO ASSOCIATION, ENTER ZERO(0).
ASSOCIATION CHOICE? 0
INDICATE TYPE:
     PROPOSAL    (1)
     COMMENT     (2)
     FACT        (3)
     ESTIMATE    (4)
     MESSAGE     (5)
TYPE CHOICE:? 4


WHAT IS THE LOWER BOUND FOR ESTIMATES? 0
WHAT IS THE UPPER BOUND? 100
INDICATE SECONDARY EVALUATION SCALE:
     NO SECOND SCALE    (0)
     CONFIDENCE         (1)
     AGREEMENT          (2)
     FEASIBILITY        (3)
     IMPACT             (4)
     PROBABILITY        (5)
     ARBITRARY          (6)
SCALE CHOICE:? 0
TYPE: E  SEC. SCALE: NONE  ITEM 3
LB: 0  UB: 100
ARE YOU SATISFIED WITH THE ABOVE CHOICES:
YES(1), NO(2), OR NULLIFY(+).
CONTINUE CHOICE:? 1
ITEM OR MESSAGE ENTERED: 123170 AT 151122

3E:
THIS IS A TEST ITEM FOR DEMONSTRATION PURPOSES ONLY.
TREAT THIS ITEM AS AN ESTIMATE. WHEN ASKED FOR YOUR
ESTIMATE, SUPPLY AN ARBITRARY NUMBER WITHIN THE GIVEN
LIMITS.
0/100
DO YOU WISH TO ADD OR MODIFY ANOTHER ITEM OR MESSAGE:
YES(1) OR NO(2).

CHOICE:? 2
DO YOU WISH TO:
     VIEW SUMMARIES            (1)
     VIEW ITEMS                (2)
     VIEW MESSAGES             (3)
     VIEW VOTES                (4)
     VIEW AND VOTE ON ITEMS    (5)
     VOTE                      (6)
     ADD AN ITEM OR MESSAGE    (7)
     MODIFY AN ITEM            (8)
     WAIT                      (9)
MODE CHOICE:? 5
DO YOU WISH ITEMS PRESENTED BY:
     LIST ORDER               (1)
     SINGLY BY NUMBER         (2)
     ASSOCIATIONS             (3)
     THOSE NEW OR MODIFIED    (4)
     THOSE ACCEPTED           (5)
     THOSE SIGNIFICANT        (6)
     THOSE PENDING            (7)
     THOSE INSIGNIFICANT      (8)
     THOSE REJECTED           (9)
ORDER CHOICE:? 2
ITEM:? 3


3E:
THIS IS A TEST ITEM FOR DEMONSTRATION PURPOSES ONLY.
TREAT THIS ITEM AS AN ESTIMATE. WHEN ASKED FOR YOUR
ESTIMATE, SUPPLY AN ARBITRARY NUMBER WITHIN THE GIVEN
LIMITS.
0/100
YOU HAVE NOT YET VOTED ON THIS ITEM.
PER: LAST VOTE: 0  PRESENT CHOICE:? 3
ESTIMATE BETWEEN 0 AND 100
LAST CHOICE: 0  PRESENT CHOICE:? 63
ITEM:? 3

3E:
THIS IS A TEST ITEM FOR DEMONSTRATION PURPOSES ONLY.
TREAT THIS ITEM AS AN ESTIMATE. WHEN ASKED FOR YOUR
ESTIMATE, SUPPLY AN ARBITRARY NUMBER WITHIN THE GIVEN
LIMITS.
0/100
CODE:  (1) (2) (3) (4) (5) (6)  AVE
PER:    0   0   1   0   0   0   3.00
N: 1  A: 63.00  AA: 63.00  SD: .11  LE: 63.00  HE: 63.00
PER: LAST VOTE: 3  PRESENT CHOICE:? 1
ESTIMATE BETWEEN 0 AND 100
LAST CHOICE: 63  PRESENT CHOICE:? 65
(NOTE: THAT VOTE CHANGES THE ITEM'S STATUS.)
ITEM:? +
DO YOU WISH TO:
     VIEW SUMMARIES            (1)
     VIEW ITEMS                (2)
     VIEW MESSAGES             (3)
     VIEW VOTES                (4)
     VIEW AND VOTE ON ITEMS    (5)
     VOTE                      (6)
     ADD AN ITEM OR MESSAGE    (7)
     MODIFY AN ITEM            (8)
     WAIT                      (9)
MODE CHOICE:? 1

ACTIVITY SUMMARY
THERE ARE NO MESSAGES.
ITEMS: 1 TO 3
TYPES:  P: 0  C: 1  F: 0  E: 2
STATUS: A: 1  S: 0  P: 2  I: 0  R: 0  PURGED: 0
ACTIVE VOTERS                    1
ACTIVE VIEWERS                   0
TOTAL LOGINS                     7
TOTAL VOTES                      1
TOTAL VOTE CHANGES               2
VOTE CHANGES FROM NO JUDGEMENT   0
NO JUDGEMENTS                    0
DO YOU WISH:
     ASSOCIATION MAP             (1)
     ITEM SUMMARY                (2)
     YOUR VOTE SUMMARY           (3)
     RETURN TO MODE CHOICE       (+)
SUMMARY CHOICE:? -


DELPHI CONFERENCE SYSTEM AT 151230 ON 123170
PLEASE TYPE YOUR CODE? monitor

DO YOU WISH TO:
     SET UP A CONFERENCE      (1)
     DELETE A CONFERENCE      (2)
     MODIFY A CONFERENCE      (3)
     ANALYZE A CONFERENCE     (4)
     RETURN TO MAIN SYSTEM    (+)
     TERMINATE                (-)
MONITOR CHOICE:? 4
WHAT IS YOUR MONITOR CODE? atwh

ANALYSIS OF DELPHI CONFERENCE NUMBER 1
DO YOU WISH INFORMATION ABOUT EACH RESPONDENT TO BE PRINTED.
YES(1) OR NO(2):? 1
WHAT ARE THE LIMITS ON FIRST DIGIT OF CLASSIFICATION CODE:
TYPE LB,UB:? 1,9
WHAT ARE THE LIMITS FOR THE SECOND DIGIT:? 1,9

CODE: ATWH -- VOTING RESPONDENT
INFORMATION SUPPLIED BY MONITOR:
THIS IS ALPHABETIC INFORMATION TO IDENTIFY THIS RESPONDENT.
NUMBER OF LOGINS =  7
PERMISSIONS =  7 (PACKED FORMAT)
ITEM NUMBER 1 VOTING RECORD:
HAS NOT VOTED ON THIS ITEM.

ITEM NUMBER 2 VOTING RECORD:
HAS NOT VOTED ON THIS ITEM.
ITEM NUMBER 3 VOTING RECORD:
LAST VOTE ON SCALE 1 =        1
LAST VOTE ON SCALE 2 =        0
NO. VOTE CHANGES ON PES =     2
NO. VOTE CHANGES ON SES =     0
NUMBER OF CHANGES FROM NJ =   0
LAST ESTIMATE =              65

TOTAL AVERAGES FOR THIS GROUP:
NUMBER OF CODES COUNTED =     1
TOTAL LOGINS =                7
VOTE CHANGES ON SES =         2
VOTE CHANGES ON PES =         0
VOTE CHANGES FROM NJ =        0

ITEM 3   AVE. ON PES =  1.00   AVE. ON SES =  0

******** END OF SUMMARY   TIME: 1.522 **********

Implementation of Interactive Conference System

ACKNOWLEDGMENTS
This paper is an outgrowth of work done on the subject
of conference systems by Dr. Murray Turoff of the
Systems Evaluation Division, Office of Emergency
Preparedness, Washington, D.C. The author worked
with Dr. Turoff in this area during 1970.
The design of the user's interaction with the conference
system cited in the paper is due to Dr. Turoff; the
expansion of the design into a complete conference
system and its implementation were the author's
concern.
The cooperation of the author's associates at Language
and Systems Development, Inc., should also be
acknowledged; their ideas on systems programming
were, as always, useful.

Who are the users?-An analysis of computer use in a
university computer center*
by EARL HUNT, GEORGE DIEHR and DAVID GARNATZ
University of Washington
Seattle, Washington

INTRODUCTION

This is a study of how the users of the University of
Washington computing center exercise its machinery.
Our hope is to make an undramatic but useful contribution
to knowledge. In a simpler day the distinction
was made between "scientific" and "business" computing.
Undoubtedly this contrast is still useful for
many purposes, but finer distinctions are needed. We
shall present statistics showing that, within a community
which contains not a single "business" user,
there are distinct groups with quite different machine
requirements. Of course, nobody who is aware of modern
computing would seriously dispute this. Our contribution
is to provide statistics on the relative size of
the different groups. We also offer this report as an
example of methodology. The usefulness of our numbers
to another center will depend upon the extent to which
the other center is like ours. The ways in which we
acquired and analyzed our statistics would be useful
more generally.
From the viewpoint of the Computer Center, a
knowledge of user characteristics is important in planning.
In the particular center we studied, and others
like it, there will probably be no major change in the
types of computing done over the next five years (unless
qualitatively different equipment capabilities are
provided), but there will be a steady increase in the
number of users. The characteristics of this increasing
population must be known in order to anticipate bottlenecks
and to plan for orderly expansion. Users also
need to know something about themselves. Time is
expensive, so computer use must be estimated as accurately
as possible in budget preparation. In the days
before multiprogramming, one simply rented the entire
computer configuration for a few seconds, even if only
half of it was used. Today charges are based upon use
of memory size, processor time, and peripherals. To
make accurate estimates of his needs, the user must
ask "What resources do people like me actually utilize?"
Consider the problem of the instructor trying to estimate
the cost of a course in programming. What he
knows is that he will have n students, k problems, and
that the problems will take an average of m runs to
solve. These runs vary greatly as the students progress
from incompetence with control cards to an ability to
write infinite loops. To estimate the cost of computing,
the instructor needs statistics about how student jobs
perform. The research scientist who has not yet settled
on his batch of production programs (and who may
never find them) is in a similar situation. He knows
how many people he has on his project and knows how
often they submit programs. He also knows that the
programs vary greatly as he and his associates go
through cycles of planning, debugging, production
modification, and reprogramming. To estimate his
budget he needs applicable averages.
As our final justification, we point to an application
of user statistics within Computer Science. The use of
models to predict system performance has become increasingly
popular in system evaluation. Basically, the
idea is to view a computing configuration as a job shop
servicing jobs drawn at random from a population of
users, and then to analyze a model of such a service.
In order to make the model anything more than an
exercise in mathematics, however, one must show a
correspondence between it and reality. Here we present
some statistics which can be appealed to in justifying
a model of the user.
THE UNIVERSITY AND THE CENTER
Some words about the setting of our study are in
order. The University of Washington is a large state

* This research was supported by the Institutional Research
Fund of the University of Washington, Seattle, Washington.
university with about 33,000 students, 20 percent of
them in graduate or professional schools, and a faculty
of roughly 2,500. The University computer center provides general support for this community. Specialized
research computing capabilities needed for process control or real time applications are provided by dedicated
installations scattered throughout the campus. We did
not study these. The University's administrative data
processing is done on a dedicated Burroughs B5500
computer, and hence is also not included in this study.
The center's "scientific" computer is a Control Data
6400 system with 65,000 sixty-bit words. It is used in
batch mode under control of the SCOPE 3 operating
system. Systems and library applications programs
reside on a 132 M character CDC 6638 disk, which is
also available for user temporary files. Every time that
a job requires service from the operating system an
appropriate message is recorded on the DAYFILE, a
log maintained by the SCOPE system. To obtain our
data we sampled several copies of the DAYFILE, recording the following information.
1. Job identification: The code used in job identification distinguishes between graduates, undergraduates, and faculty, and between jobs associated with classwork and jobs associated with
research projects. The technique of financial
control in the system discourages the use of class
numbers for research jobs and vice versa.
2. Central processor time used
3. Peripheral processor time used
4. Priority of job at time it is run (O-low priority
to 7-high priority)
5. Number of tape drives charged for
6. Charges assigned
7. Whether the job is a FORTRAN or non-FORTRAN job
8. Number of lines printed
9. Number of cards read
10. Amount of central memory used, expressed as a
percentage of 32 K words.
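For concreteness, the information sampled from the DAYFILE can be pictured as a record type. The sketch below uses modern notation and is not part of the original SCOPE tooling; the field names, and the convention that research job numbers begin with "R", are ours.

```python
from dataclasses import dataclass

@dataclass
class JobRecord:
    """One sampled DAYFILE job, carrying the ten fields listed above.
    The DAYFILE itself is a free-form log; this is only a sketch."""
    job_id: str          # encodes user class and class/research origin
    cp_time: float       # central processor seconds
    pp_time: float       # peripheral processor seconds
    priority: int        # 0 (low) to 7 (high)
    tape_drives: int     # drives charged for
    charges: float       # dollars assigned
    fortran: bool        # FORTRAN vs. non-FORTRAN job
    lines_printed: int
    cards_read: int
    memory_pct: float    # central memory, as a percentage of 32K words

    @property
    def is_research(self) -> bool:
        # Hypothetical convention: research job numbers start with "R"
        return self.job_id.startswith("R")

# A record shaped like the research-job means of Table II:
job = JobRecord("R1234", 26.0, 22.0, 1, 1, 3.40, True, 1430, 490, 66.0)
```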
Three different statistical techniques were used. One
was a simple summary of the statistical characteristics
of each of the nine measurements of the aggregate
sample, obtained by plotting a histogram and preparing
tables of measures of central tendency and dispersion, using the BMD01D program to do this.
The correlations between the different measures were
computed using the BMD03M program (Dixon, 1965).
This program provided correlation matrices and a factor
analysis using the variance maximization criterion
(Harman, 1960) to define orthogonal factors. Finally,
a cluster analysis (Diehr, 1969) was performed to see

TABLE I-Descriptive Statistics for 1588 Jobs Submitted to CDC 6400

                                          Standard
Measure                        Mean       Deviation
Cards read                      224         495
Lines printed                   760        1260
CPU time (sec.)                11.0          41
PPU time (sec.)                11.9          35
Central memory                 55.8        25.4
Tape drives charged             .28         .55
Cost to user                   1.44        4.10
Percent jobs using Fortran      .54         .50

if the jobs analyzed fell into groups of similar jobs.
The cluster analysis algorithm used grouped the observations into a fixed number of groups called clusters,
such that the sum of squared distances from observations to their cluster means was minimized. The algorithm will be described in detail in a moment.
DESCRIPTIVE STATISTICS RESULTS
Table I presents descriptive statistics for 1588 jobs
selected from first shifts.* We shall discuss this sample
extensively. Similar analysis of second shift data and
data from a different time of the year produced very
similar results. Therefore, virtually all of our remarks
will be concerned with an analysis of these jobs.
Whether Table I presents a true or false picture of
the user community depends on the purpose for which
the examination is conducted. It shows what sort of
use is made of computing by the "average" user. This
hypothetical individual submits what most people intuitively familiar with the center would consider a
medium-sized job, reading about 200 cards, printing
700 to 800 lines (about eight pages plus system output),
and using around eleven seconds each of cpu and ppu
time. Slightly more than half of the jobs execute a
Fortran compilation. Like the man with 2.4 children,
the average user is not the typical user! Frequency
plots of the variables CARDS READ, LINES
PRINTED, CP TIME, PP TIME, and COST showed
that the distributions were positively skewed with
means in regions of very low density, suggesting that
(a) mean values were not good descriptors of the population
and (b) that the observations were exponentially
distributed. If the second conclusion had been correct,
logarithmic transformations of the indicated variables
would have produced symmetric distributions. In fact,
they did not. This is illustrated in Figure 1, which is a
frequency histogram for the logarithm of CP time. (The
other four variables listed above were similarly distributed,
while MEMORY USE was symmetric originally
and TAPE DRIVES CHARGED and FORTRAN use
are discrete.) Both the mean value of the
transformed CP time and the logarithm of the
mean of the untransformed time are shown. It can be
seen that neither figure is an accurate descriptor. The
frequency distributions were positively skewed even
after the transformation and, in some cases, appeared
to be bimodal. This strongly suggests that instead of
regarding jobs as being generated by a single process,
the jobs should be thought of as a mixture of two
or more populations which individually might be satisfactorily
characterized by standard descriptors of central
tendency and dispersion.

* In obtaining the 1588 teaching and research jobs we also encountered
on DAYFILE logs a record of 364 miscellaneous jobs.
These included jobs generated by the computer center itself,
administrative work for some reason not done on the B5500,
and an occasional commercial user. Because this group of jobs
was so heterogeneous it was not further included in the analysis.
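The reasoning above can be replayed on synthetic data: a logarithmic transformation removes most of the skew of a heavy-tailed sample, but a mixture of two sub-populations remains non-normal (bimodal) even after the transform. The two lognormal components below are invented for illustration and are not the DAYFILE data.

```python
import math
import random

def skewness(xs):
    """Sample skewness: E[(x - mean)^3] / sd^3."""
    n = len(xs)
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / n)
    return sum((x - mean) ** 3 for x in xs) / (n * sd ** 3)

random.seed(1)
# Two hypothetical sub-populations of CP times (seconds):
# many small classroom jobs and fewer large research jobs.
small = [random.lognormvariate(0.0, 0.8) for _ in range(1000)]
large = [random.lognormvariate(3.0, 0.8) for _ in range(500)]
jobs = small + large

logged = [math.log10(t) for t in jobs]
# Raw times are strongly positively skewed; the logged mixture is far
# less skewed, yet still bimodal rather than normal.
print(round(skewness(jobs), 2), round(skewness(logged), 2))
```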


TABLE II-Mean Values of Each Measurement, for Total Sample,
Research, and Instructional Job Numbers

                              Overall    Research    Instructional
Variable                      Mean       Job Mean    Job Mean
Cards read                     224         490           95
Lines printed                  760        1430          442
CPU time                      11.0          26          3.8
PPU time                      11.9          22          7.1
Central memory                55.8        66.0         51.0
Tape drives                    .28          .4          .22
Cost                          1.44        3.40          .48
Percent jobs using Fortran     .54         .73          .44
Priority of run               .016         .04         .004
Number of jobs                1588         527         1061

To investigate this hypothesis, we first divided the
sample into two groups, jobs associated with research
projects and jobs associated with instruction. It was
immediately clear that this was, indeed, a reasonable
distinction. Table II shows the means on each measure
for the sample as a whole and by subgroups. On the
average, the difference between subgroup means exceeds one standard deviation about the sample mean,
thus clearly supporting the hypothesis that there are
two distinct subgroups.
One is tempted to say, "Of course, why bother to
measure such an obvious thing?" We would expect to
find differences between instructional and research
work, although our intuition is not very good at predicting the fine detail of these differences. We also
found, however, that this simple division is not
enough-averages do not describe the typical research
or instructional job either! Examination of the histograms within classes based on the research-instruction
distinction again showed distributions similar to Figure
1. We therefore eschewed our intuition and turned to
an "automatic" method of dividing jobs into homogeneous groups, using cluster analysis.
Figure 1-Frequency histogram of log10 CP time. Pt A marks the
mean of the raw CP time; Pt B the mean of log CP time; the
abscissa gives the lower bound of log10 CP time.

CLUSTER ANALYSIS RESULTS

The purpose of a cluster analysis is to group observations into k subclasses such that, in some sense, the
differences between members of the same class is small
relative to the differences between members of different
classes. The particular cluster analysis technique we
used regards each observation as a point in n dimensional Euclidean space. Observations are assigned to a
predetermined number of groups (clusters) in such a
way that the sum of squares of the distances of points
to their cluster mean point is minimized. Thus the
cluster analysis is bound to produce groups for which


TABLE III-Mean Values for Measures-Two Clusters
Compared to Research and Instructional Jobs

Variable               Cluster 1  Instruction  Cluster 2  Research
Log cards read            3.9        3.8          5.4        5.6
Log lines printed         5.6        5.5          6.3        6.5
Log CP time              -.09        -.5          1.9        2.3
Log PP time               1.6        1.5          2.4        2.5
Memory use                 51         48           67         70
Tape drives               .22        .19          .40        .44
Log cost                 -2.0       -2.1          .53        .50
Percent Fortran use       .44        .44          .74        .72

central tendency measures are reasonable descriptors,
while the standard deviation within a cluster indicates
how much variation there is about the mean point. The
algorithm begins with all observations in a single cluster
around the population mean point. A second cluster is
initiated whose first member is the observation furthest
away from the center point. An iteration phase follows,
in which each observation is assigned to one or the
other cluster by making the choice which minimizes
the sum of squares about cluster points. The cluster
mean point is adjusted as the observation is grouped.
The iteration is continued until no further adjustments
are made. A new cluster is then initiated by choosing
as its first member that observation which is furthest
from the mean point of the cluster to which it is now
assigned. The iteration is then repeated. The entire
process is continued until the predetermined number
of clusters is obtained.
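The procedure just described can be sketched as follows. This is a minimal illustration in modern notation, not the authors' implementation; the reassignment step is approximated by moving each observation to its nearest cluster mean, which minimizes the sum of squares when means are then recomputed.

```python
def cluster(points, k):
    """Sequential minimum-sum-of-squares clustering: start with one
    cluster, repeatedly seed a new cluster with the observation
    furthest from its own cluster mean, then reassign observations
    until the partition is stable."""
    assign = [0] * len(points)                 # all points start in cluster 0
    means = [_centroid(points, assign, 0, list(points[0]))]
    while len(means) < k:
        # Seed: the observation furthest from its current cluster mean.
        far = max(range(len(points)),
                  key=lambda i: _dist2(points[i], means[assign[i]]))
        means.append(list(points[far]))
        assign[far] = len(means) - 1
        # Iterate: move each point to its nearest cluster mean,
        # recomputing means, until no assignment changes.
        changed = True
        while changed:
            changed = False
            for i, p in enumerate(points):
                best = min(range(len(means)),
                           key=lambda c: _dist2(p, means[c]))
                if best != assign[i]:
                    assign[i] = best
                    changed = True
            means = [_centroid(points, assign, c, means[c])
                     for c in range(len(means))]
    return assign, means

def _dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def _centroid(points, assign, c, fallback):
    members = [p for p, a in zip(points, assign) if a == c]
    if not members:
        return fallback                        # keep old mean if cluster empties
    return [sum(col) / len(members) for col in zip(*members)]

# Two well-separated groups of two-dimensional observations:
pts = [[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]]
labels, centers = cluster(pts, 2)
```

As the text notes, a stable partition found this way is a local, not necessarily global, minimum of the sum-of-squares criterion.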
While a stable partition represents a local minimum
by the sum of squares criterion, it is not necessarily a
global minimum. Extensive experimentation with this
algorithm in comparison to several other clustering
methods has indicated that it consistently finds good
clusters (Diehr, 1969). Our only reservation is that because a minimum variance criterion is being used, one
wants to avoid situations in which the means and variances of the partitions are correlated. Fortunately, this
can be achieved by using logarithmic transformations
of highly skewed variables (in this case CARDS READ,
LINES PRINTED, CP TIME, PP TIME, and
COST). Accordingly these variables were included after
a logarithmic transformation. The variables MEMORY
USE, TAPE USE, and FORTRAN USE were included
but not transformed.
If the research-instructional distinction is a valid one,
then a clustering into two classes should recreate it.
This is, indeed, what happens. Table III shows the
means and standard deviations for two clusters, compared
to the breakdown of jobs by research or instructional
sources. Table IV shows a cross classification of
jobs both by their origin and the cluster into which
they fall. Almost 90 percent of the instructional jobs
fall into the first cluster, while about 75 percent of the
research jobs fall into the second cluster.
While this confirms our faith in the research-instruction distinction, it still leaves us with too gross an
analysis. Clusterings into two through six groups provided a significant insight into the data. Let us describe
the results of these successive clusterings briefly.
Three groups: The data was partitioned into small,
medium, and large resource use groups. The small job
group is largely classwork jobs, the large usage group
largely research jobs, and the medium usage group
made up of half research-half classwork jobs. There is
no indication of sub-populations which have heavy I/O
use but light processor use (i.e., no "scientific business"
breakdown).
Four groups: The data was partitioned into two
groups of jobs with small resource use; differentiated
only by use or non-use of the FORTRAN compiler.
The other two groups were jobs with medium to large
system resource use and "aborted" jobs. The medium-large usage group is similar to the medium-large usage
group found for two clusters. The group of aborted
jobs tends to be small in terms of I/O requirements, and
had virtually no CP use.
Five groups: This clustering separated a group of
large jobs using tape drives from the four groups described above.
Six groups: This is perhaps the most interesting clustering. Two levels of system resource use were uncovered, with three types of jobs within each level.
There were three types of small jobs: 408 FORTRAN
jobs, 472 non-FORTRAN jobs, and 89 aborted jobs.
The small job groups were primarily instructional, and
included jobs using a BASIC interpreter. The aborted
jobs were almost all terminated due to control card errors. It is interesting to note that such errors apparently
occur on about 5 percent of the jobs submitted.
The medium to large job groups included 181 medium-sized jobs using tape drives, 293 medium to large jobs
which did not use tapes, and 148 very large jobs.
TABLE IV-Cross Classification of Jobs by Cluster and
Administrative Source

             Administrative Source
Cluster      Instruction    Research
   1             884           144
   2             187           373


We feel that the most interesting contrasts are between (a) the population statistics, (b) the statistics for
the two-cluster (research-instruction) partition, and
(c) the finer data of the six group clustering. Figure 2 is
a graphic summary of what one sees if jobs are regarded
as coming from one, two, or six populations. In this
figure each cluster is represented as a rectangle. The
following information is coded in the figure:
1. The area of the rectangle drawn for the group is
proportional to the number of jobs within it.
2. The shading indicates the number of research
jobs-i.e., a completely shaded rectangle would
represent a group containing only research jobs,
while an unshaded rectangle would represent a
group of instructional jobs.
3. The horizontal axis shows the average number
of standard deviations between a group mean
and the population mean on each of the resource
variables. Thus the "partition" consisting of all
1588 jobs has its rectangle centered at 0.0 on the
horizontal axis, while clusters containing large
resource use jobs are centered to the right of this
point, and those containing small jobs are centered to the left.
4. The vertical axis indicates the number of groups
(1, 2, and 6) on which the partition is based and,
within the region for a given number of groups,
the fraction of FORTRAN jobs. Thus one can
determine that the 1028 "small" jobs in the two
groups clustering contained approximately 45
percent FORTRAN jobs, while the "medium-large" jobs were 75 percent FORTRAN by examining the vertical position of the appropriate
rectangles.

5. The length of the rectangle indicates the average
variation on the system use variables, with 0.8
std. dev. used as a basis. Thus, it is evident that
for six groups the "small-non-FORTRAN" jobs
had a slightly greater variation on the average
than the "small-FORTRAN" jobs. The length
of the rectangles also shows that the "med-non-tape"
jobs are better defined than either the
"med-tape" jobs or the "large" jobs.

Figure 2-Graphic summary of six cluster result-see text for
explanation of code (abscissa: Ave. Resource Use)

TABLE V-Correlations Between Variables Based on 1588 Cases

Variable               1     2     3     4     5     6     7
1. Log cards read    1.00   .42   .51   .39   .36   .02   .62
2. Log lines printed       1.00   .46   .39   .22   .10   .50
3. Log CP time                   1.00   .53   .46   .16   .75
4. Log PP time                         1.00   .18   .41   .71
5. Memory use                                1.00   .10   .43
6. Tape drives                                     1.00   .31
7. Log cost                                              1.00

CORRELATION ANALYSIS
The cluster and descriptive analyses dealt with the
relations between jobs. Another way to analyze our
statistics is to look at the relationship between variables. The table of correlation coefficients for all variables was computed and factor analyzed. The analysis
was performed separately for the different classes of
user and for all cases together. Since there was no substantial difference in either the correlation or factor
matrixes, only the overall picture will be discussed.
Before performing the correlation analysis a certain
amount of data editing was done. The distinction between FORTRAN and non-FORTRAN jobs and the
priority measures were dropped, and a logarithmic
transformation was performed on all other variables.
The logarithmic transformation was used because all
variables were either exponentially distributed or had
a number of cases with extreme values. High or low
correlation coefficients based on untransformed data,
then, might be produced by only a few cases. The use of
the logarithmic transformation greatly reduces the
chance of this occurring.
The correlations between the variables are shown in
Table V. The table of untransformed variables presents
substantially the same appearance except that the extreme values are somewhat higher. The picture of correlations is not immediately clear. It becomes so, however, when one looks at Table VI, which shows the


TABLE VI-Factor Loadings for Variables on First 3 Factors

                              Factor
Variable                  1      2      3
Log cards read           .72    .35    .12
Log lines printed        .65    .14    .45
Log CP time              .84    .13   -.07
Log PP time              .76   -.40    .18
Memory use               .55    .34   -.71
Tape drives              .35   -.83   -.26
Log cost                 .92   -.05    .02
Cumulative % variance     50     66     77

factor loadings for each of the variables on each possible
factor*. Only the first three factors will be discussed.
These account for better than 75 percent of the total
variance. The first, and by far the largest (50 percent)
factor can be thought of as a "standard job" factor. It
accounts for half or more of the variance in cards read,
lines printed, and central and peripheral processor time.
Our interpretation is that this factor is produced by the
correlated variation in the measures used by most jobs.
The second factor is essentially a "tape request" factor
(note the high loading of "requests"), and reflects a
difference between jobs that do or do not use tapes.
The third factor has its heaviest loading on memory
use. It reflects variation in memory use by some jobs
that lie outside of the normal spectrum of computing
(i.e., outside the range covered by factor 1). This is
probably caused by (a) jobs that have control card
errors and hence use little memory and (b) a few research jobs that utilize memory heavily.

* Since factor analysis may not be familiar to all readers, we
shall explain a way to interpret its results. For further details
see Harman (1960) or Morrison (1965).
Suppose each job were plotted as a point in 7 dimensional space.
Since most measures are exponential, conversion to a logarithmic
scale ensures that the swarm of points will be roughly a hyperellipse. The factors can be thought of as the axes of the hyperellipse. The first factor is the major axis, the second factor the
next longest axis, etc. The 'percent variance extracted' by each
factor is the percent of variance in distances from the centroid
of the ellipse associated with projections on the factor in question.
The loading of a variable on a factor can be interpreted in the
following way. Suppose each point is plotted on a chart of variables against factor. Note that these will not generally be orthogonal axes. The square of the loading of the variable on the factor
is the fraction of variance in the variable associated with variance
in the factor. Alternately, the loading can be thought of as the
correlation between the variable and a hypothetical pure test
of the factor.

In general, the factor analysis supports the other
statistics we have gathered. An interesting point is the
low loading for memory use on factor 1, which indicates
that most jobs have a uniform memory requirement.
This could be quite important in designing memory
allocation algorithms in multi-programming systems.
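The relation between loadings, eigenvectors, and variance extracted can be illustrated on a small correlation matrix. This is a principal-factor sketch via power iteration, not the BMD03M varimax procedure; the matrix entries are illustrative values taken from the upper-left corner of Table V.

```python
import math

# Hypothetical 3x3 correlation matrix (illustrative values only).
R = [[1.00, 0.42, 0.51],
     [0.42, 1.00, 0.46],
     [0.51, 0.46, 1.00]]

def first_factor_loadings(R, iters=200):
    """Loadings on the first factor: the dominant eigenvector of R
    scaled by the square root of its eigenvalue. The squared loading
    is then the fraction of each variable's variance carried by the
    factor, as described in the footnote above."""
    n = len(R)
    v = [1.0] * n
    for _ in range(iters):                      # power iteration
        w = [sum(R[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    eigval = sum(v[i] * sum(R[i][j] * v[j] for j in range(n))
                 for i in range(n))
    return [vi * math.sqrt(eigval) for vi in v]

loadings = first_factor_loadings(R)
# Percent of total variance extracted by the first factor:
variance_extracted = sum(l * l for l in loadings) / len(R)
```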
SPECIFIC QUESTIONS
The statistical analysis raised a number of nonstatistical questions about jobs, and particularly about
jobs that were not typical of their administrative category. To answer these a special cross tabulation program was written, modeled after a more extensive information retrieval system designed by Finke (1970)
and used to sort jobs in various ways. Some of the
specific questions and their answers were as follows:
Q.1. How does the use of FORTRAN or BASIC affect instructional jobs?
A.
BASIC jobs use less processor time than FORTRAN but, on the average, much more memory
than the average for instructional jobs. Research
jobs virtually never use BASIC except for relatively small jobs.
Q.2. What percent of memory is used by the "average" job?
A.
Better than half the teaching jobs use less than
16K words. The comparable "break even" point
for research jobs is 24K. Twelve percent of the
research jobs use more than 32K words, while
less than two percent of instructional jobs do.
Furthermore, most of the long instructional jobs
are generated by a few individuals (i.e., are
multiple jobs with the same user I.D.).
Q.3. How many runs are compiler runs of any sort?
What compilers were used?
A.
About two-thirds of the jobs call for at least one
compilation. In 1588 runs the FORTRAN compiler was called 849 times, BASIC 249 times,
SNOBOL once, SIMSCRIPT 13 times, the
COMPASS assembler twice (by the same job
number) and COBOL and ALGOL never. (Excellent COBOL and ALGOL systems are available on the University's B5500, so this may be
misleading.) Only three center-supported "packages" were used: the BMD statistical programs,
the SMIS package, and a SORT-MERGE system, for a total of 69 runs. One wonders two
things: how much effort are users devoting to
duplicating library programs and how much effort
should a computer center devote to maintaining
such programs?
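A cross tabulation of the kind used to answer these questions can be sketched as follows. The job records below are hypothetical, and this is not Finke's retrieval system, only the shape of the sorting it supported.

```python
from collections import Counter

# Hypothetical DAYFILE-derived records: (source, compiler, memory in K words)
jobs = [
    ("instruction", "FORTRAN", 12),
    ("instruction", "BASIC",   20),
    ("instruction", None,       8),
    ("research",    "FORTRAN", 40),
    ("research",    None,      24),
]

# Compiler usage counts, in the spirit of Q.3:
compiler_counts = Counter(j[1] for j in jobs if j[1] is not None)

# Fraction of a source class exceeding a memory threshold, as in Q.2:
def fraction_over(jobs, source, threshold):
    group = [j for j in jobs if j[0] == source]
    return sum(1 for j in group if j[2] > threshold) / len(group)

print(compiler_counts["FORTRAN"])           # 2
print(fraction_over(jobs, "research", 32))  # 0.5
```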


CONCLUSIONS
Our specific conclusions are that the University of
Washington Computer Center users create jobs that
fall into four groups, some with important subgroups,
producing the six groups graphed in Figure 2. The four
major groups are aborted jobs, small jobs (with subgroups FORTRAN and non-FORTRAN), medium-sized jobs (with subgroups tape and non-tape jobs), and
large jobs. Small jobs are primarily due to classroom
work, while medium and large jobs are associated with research work. The principal ways in which jobs differ
from each other are in the amount of processor time used
and the amount of input. These statistics, which are
not terribly startling, are of direct use to the University
of Washington and are of indirect use to any institution
which is willing to assume it is like Washington.
Should our analysis be used generally even though
our particular results are not general? To answer this,
we will point out two courses of action which are available to the University of Washington now that it has
these statistics, but which might not have been available (or at least, would have been available only by
trusting the Computer Center Director's intuition)
without the analysis.
At most universities computer use for education is
supported by intramural funds, while a substantial part
of the research computing support is extramural. Understandably, granting agencies (notably the United
States Government) insist that the same algorithm be
used to allot charges to all users. The argument is that
the cost of a computation should be determined by the
nature of the computation and not by who makes it.
While seemingly fair, this can be frustrating to an institution which wishes to encourage educational
use of computing, but needs to capture all funds that
are available for research computing. More generally,
there are a number of situations in which a computer
center may wish to encourage or discourage certain
classes of user, while still retaining the principle that
the same charge will be levied for the same service.
The solution proposed is to establish a charging algorithm which is sensitive to the varying characteristics
of jobs from different user sources. For example, if the
University of Washington were to place a very low
charge for the first 200 cards read and the first 10 seconds each of CP and PP time, and charge considerably
for system utilization beyond these limits, then the
educational users would pay proportionately less and
the research users proportionately more of the total bill.
Charges would still be non-discriminatory in the sense
that identical jobs receive identical bills. Note also, how
our statistical analysis dictates the type of charging
algorithm. From the correlational analysis we know


that the only way of differentially affecting user charges
is to manipulate the number of cards read and the
processor time charges. From the descriptive statistics
and the cluster analysis we can predict how a given
manipulation will affect different sections of the user
community.
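The proposed two-tier scheme can be made concrete. The 200-card and 10-second break points are those suggested above; all rates are invented for the example.

```python
def charge(cards_read, cp_sec, pp_sec,
           low_card_rate=0.001, high_card_rate=0.01,
           low_proc_rate=0.02, high_proc_rate=0.10):
    """Charge a low rate up to the thresholds typical of small
    instructional jobs and a higher rate beyond them, so that
    identical jobs still receive identical bills."""
    cost = min(cards_read, 200) * low_card_rate
    cost += max(cards_read - 200, 0) * high_card_rate
    for t in (cp_sec, pp_sec):                 # CP and PP time, each tiered
        cost += min(t, 10) * low_proc_rate
        cost += max(t - 10, 0) * high_proc_rate
    return round(cost, 2)

# A typical instructional job vs. a typical research job, using the
# Table II means (95 cards, 3.8 s CP, 7.1 s PP; 490 cards, 26 s CP,
# 22 s PP):
small = charge(95, 3.8, 7.1)
large = charge(490, 26, 22)
```

Under these (invented) rates the research job's bill exceeds the instructional job's by a larger factor than the roughly 7-to-1 cost ratio of Table II, which is the intended redistribution.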
Very much the same reasoning can be used in planning for new equipment acquisition. Obviously equipment additions aid in computing either because they
facilitate the running of all jobs equally (in which case
the aim is to increase throughput uniformly) or because they aid in processing of certain types of jobs.
The computer center director rightly looks at equipment in terms of how it affects bottlenecks in his
throughput or in his capability to do certain types of
computation. From the University administrators'
view, however, money put into the computing center is a
means toward the end of achieving some educational or
scholarly goal, such as increased production of engineering B.S.'s or support of a Geophysics research program.
We can use a statistical analysis of user characteristics
to reconcile these points of view. Taking an obvious
example from our data, if the University of Washington
decides to put x dollars into support of computing for
education, the money should not be spent buying tape
drives. To take a more subtle case, suppose we were
faced with a choice of obtaining a medium-sized computer or expanding the CDC 6400 system to a CDC
6500 or CDC 6600 computer. The appropriate course
of action might be determined by the purpose for which
the money is intended, to facilitate educational or research use. Without these statistics, we do not see how
the management goals of the institution and the technical goals of the Computer Center can be coordinated.
Our results also are of interest to two groups of people
outside of our own institution; those interested in research on computing systems and those involved in
selling computers to universities. We feel that we have
clearly shown that a simple model of a single process
for generating statistical characteristics of user jobs-such as those assumed to estimate the performance of
system algorithms-is not appropriate. A model of a
university computing community must be based on
sub-models for quite different populations. The business-scientific distinction is decidedly not appropriate.
We close with some remarks on the methods we have
used. Two of our techniques, descriptive statistics and
correlational analysis, are conventional statistical methods. Indeed, the programs we used, BMD03M and
BMDOID, are part of the most widely supported applications package in programming! There is no reason
why everyone with a computer of any size could not
perform these analyses on his job stream. We feel, however, that the clearest picture of our users was obtained

238

Spring Joint Computer Conference, 1971

by the less conventional cluster analysis. We recommend that this technique be used more widely to
analyze computer use. We hope it will aid in identifying
the characteristics of existential, rather than postulated, computer users.
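The kind of cluster analysis recommended here can be illustrated with a small sketch. The job features (CPU seconds, core size, tape mounts) and the job stream below are hypothetical, and the plain k-means routine is ours, not the BMD programs the authors used:

```python
# A minimal sketch of clustering accounting-log job records into distinct
# user populations.  All job data and feature choices are invented.
import random

def kmeans(points, k, iters=20, seed=1):
    """Plain k-means on tuples of (cpu_seconds, core_kwords, tape_mounts)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each job to the nearest center (squared distance)
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # recompute each center as the mean of its cluster
        centers = [tuple(sum(x) / len(cl) for x in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# Hypothetical job stream: small student jobs vs. long research jobs.
jobs = [(1, 10, 0), (2, 12, 0), (1, 9, 0),
        (300, 40, 2), (280, 45, 3), (310, 38, 2)]
centers, clusters = kmeans(jobs, k=2)
print(sorted(len(c) for c in clusters))
```

On this toy stream the two populations separate cleanly into two clusters of three jobs each.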

REFERENCES
1 W DIXON ed
Biomedical computer programs
Health Sciences Computing Facility UCLA 1965
2 H HARMAN
Modern factor analysis
U Chicago Press 1960
3 D F MORRISON
Multivariate statistical methods
McGraw-Hill New York 1965
4 G DIEHR
An investigation of computational algorithms for aggregation problems
Western Management Science Institute UCLA Working Paper 155 1969
5 J FINKE
A users guide to the operation of an information storage and retrieval system on the B5500 computer
Technical Report No 70-1-3 Computer Science Univ of Washington

An initial operational problem oriented medical record
system-For storage, manipulation and retrieval of medical
data*
by JAN R. SCHULTZ, STEPHEN V. CANTRILL and KEITH G. MORGAN
University of Vermont
Burlington, Vermont

INTRODUCTION

The ultimate role of the computer in the delivery of
health services has yet to be defined. There may be profound implications in terms of quality of medical care,
efficiency, economics of care, and medical research.
Final judgments as to advisability and economic feasibility await the implementation of prototype total
medical information systems and further technical developments directed toward lowering the high cost of
currently developing systems. Development of less expensive hardware and real-time application of the
present hardware and software must go on in parallel.
We have been involved in the latter, and an experimental time-shared medical information system has
been developed for storing and retrieving the total
medical record, including both the narrative and the
numeric data. This development has integrated the
Problem Oriented Medical Record, a means of organizing medical data around a patient's problems, with a
touch sensitive cathode ray tube terminal that allows
structured input (with additional keyboard entry capability) by directly interfaced medical users (in particular the physician and the nurse).
A total of 85 general medical patient records have
been kept on the system as of December, 1970. The
system handles all aspects of medical record keeping,
from the Past Medical History and Systems Review
collected directly from the patient to complete Progress
Notes and flowsheets, all recorded in a problem-oriented
manner. It allows the direct inputting of data by
the information originator and the retrieval of data in
various medically relevant forms.

THE PHILOSOPHICAL BASIS FOR THE SYSTEM
The system is based on a medical philosophy requiring data in the medical record to be problem oriented
and not source oriented.1,2 Data are collected, filed and
identified in a problem oriented record with respect to
a given problem and not to the source of the data as in
the traditional source oriented record.
The problem oriented medical record requires a systematic approach to treatment of the patient. This
systematic approach is defined by the four phases of
medical action: data base collection, problem formulation, plan definition and follow-up. A brief explanation
of the four phases will outline the basic requirements
for this system (see Figure 1):

Figure 1-The four phases of medical action: I. Establishment of a data base (history, physical exam, admission lab work); II. Formulation of all problems (problem list); III. Plans for each problem (collection of further data, treatment, education of patient); IV. Follow-up on each problem (progress notes titled and numbered by problem)

* This research was done under a research grant from the Department of Health, Education, and Welfare, Health Services and Mental Health Administration, National Center for Health Services Research and Development, PHS 1 R18 HS 00175-01, entitled "The Automation of a Problem Oriented Medical Record," Co-Principal Investigators Lawrence L. Weed, M.D. and Jan R. Schultz.


During the first phase of medical action the patient's
complete data base is collected. This includes a branching questionnaire Past Medical History and Systems
Review taken directly by the patient, a Physical Examination entered by the physician and other medical
personnel, the Present Illness structured from choices
(to be discussed later) and entered by the physician,
and certain admission laboratory orders generated by
the physician.
After a complete data base is collected the physician
studies it and formulates a list of all the patient's
problems. This is the second phase of medical action.
The problem list includes medical, social, psychiatric
and demographic problems. Each problem is defined
at the level the physician understands it. A problem can
be a "diagnostic entity," a physiologic finding, a
physical finding, an abnormal laboratory finding, or a
symptom. The problem list is a dynamic index to all
the patient's plans and progress notes since it can be
used to follow the course of the problem(s).
After a complete problem list is formulated, the
physician must define an initial plan for each problem.
This is the third phase of medical action. The plans are
divided into plans for more information, plans for treatment and contingency plans. With plans for more information it is possible to: (1) rule out different problems by ordering certain tests or procedures, (2) get
laboratory tests for management, and (3) get more
data base information. Under plans for treatment: a
drug, diet, activity, procedure, or patient education
can be prescribed. Contingency plans are possible future
plans to be carried out if certain contingencies are
satisfied.
The fourth phase of medical action is writing progress
notes for each problem. The progress notes for each
problem are divided into: Symptomatic, Objective
(laboratory, x-ray and other reports), Treatment Given,
Assessment and Follow-up Plans (similar in content to
the Initial Plans) sections. The progress notes allow
medical personnel to act as a guidance system and
follow the course of each problem, collecting more data
base, reformulating and updating problems and respecifying the plans, each action dependent upon the
course of the patient's problems.
The Problem Oriented Medical Record has been
used in paper form for the past fourteen years. It has
proved a working record system on paper and was
demonstrated practical long before computerization
was ever considered. Its dynamic structure, non-source
orientation and medically relevant labeling of all data,
however, facilitate computerization. Computerization
augments its medical capabilities by making it possible to retrieve all data on one problem in sequence
and by allowing data to be organized separately from its

source in the record. This ability is recognized as having
significant medical implications,3,4 for it allows the
physician to follow the course of a problem in parallel
with the patient's other problems or as a separate
problem (i.e., retrieving information chronologically vs.
retrieving all data on one problem). The computer enables rapid audit of all the patients with similar problems as well as the ability to audit a physician's logic
and thoroughness on one specific patient, and will allow
the development of research files.
As previously written:
"We should not assess a physician's effectiveness by the amount of time he spends with patients or the sophistication of his specialized
techniques. Rather we should judge him on the
completeness and accuracy of the data base he
creates as he starts his work, the speed and economy with which he obtains patient data, the adequacy of his formulation of all the problems, the
intelligence he demonstrates as he carefully treats
and follows each problem, and the total quantity
of acceptable care he is able to deliver."1

Our experience with this system indicates that computerization does facilitate such an assessment.
At the time this project began, and the system was
specified, other operating systems (of which we had
knowledge) were few.5,6 After an analysis of these systems our group decided that we would try to build the
necessary medical application programs using as a basis
system software developed by another group. To quote
R. W. Hamming:
"Indeed, one of my major complaints about the
computer field is that whereas Newton could say,
'If I have seen a little further than others it is because I have stood on the shoulders of giants,' I
am forced to say, 'Today we stand on each other's
feet.' Perhaps the central problem we face in all of
computer science is how we are to get to the situation where we build on top of the work of others
rather than redoing so much of it in a trivially
different way. Science is supposed to be cumulative, not almost endless duplication of the same
kind of things."7

This would serve two purposes: It would divide the
total work task naturally into more manageable units,
and it would force our group to learn completely the
system we were to build upon, thus allowing the basic
system software to be utilized as a tool in the accomplishment of our medical goals. We built upon the system being developed by the Medical Systems Research
Laboratory of Control Data Corporation. The hardware
developments of Dr. Robert Masters (the Digiscribe)
and the software developments of Mr. Harlan Fretheim
(the Executive, the Human Interface Program and
SETRAN) have been fundamental to our own progress.


THE BASIC SYSTEM SOFTWARE DEVELOPED
BY MEDICAL SYSTEMS RESEARCH
LABORATORY OF CONTROL DATA
CORPORATION
Directly interfacing busy medical personnel to the
computer system required the development of an effective, facile interface. The Digiscribe terminal with its
associated software is such an interface. Displayed on
the cathode ray tube is an array of choices from which
the user can make a selection by touching the screen
with his finger. The user's selection is input to the
computer system in the form of a character so that for
each of twenty different positions on the cathode ray
tube screen a different character is generated. A system
program8 accepts as input the user selection and on the
basis of it appropriate branching takes place and new
information is displayed. Included in the information
used by the program that interprets the selection (the
Human Interface Program) is a push-down list of frame
numbers waiting to be displayed, branching information and certain internal parameters (not seen by the
user at the terminal) which can be associated with each
choice displayed on the screen. In addition to text,
branching information and internal parameters, a program call9 can be associated with any selection. This
allows a certain amount of open-endedness and provides the means for calling and executing application
programs.
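The dispatch just described can be sketched as follows. This is a toy reconstruction, not CDC's actual Human Interface Program; the frame numbers echo Figure 2, but the data layout and function names are our assumptions:

```python
# Toy sketch of the selection dispatch: each touch position yields a
# character, and each choice carries text, branching information, optional
# internal parameters, and an optional program call.

# One frame: choice character -> (text, next_frame, internal_params, program)
FRAMES = {
    3463: {
        "A": ("GRADUAL (INSIDIOUS).", 3434, ["F2"], None),
        "B": ("SUDDEN (ABRUPT).", 3434, ["F2"], None),
        "C": ("COULDN'T DETERMINE.", 3434, [], None),
    },
    3434: {},  # next frame in the branching sequence (contents elided)
}

def select(frame_no, touch_char, pushdown, paragraph, params):
    """Interpret one touch: append text, record parameters, branch."""
    text, nxt, internal, program = FRAMES[frame_no][touch_char]
    paragraph.append(text)                           # concatenated into a paragraph
    params.append((frame_no, touch_char, internal))  # Selection Parameter List entry
    if program is not None:
        program(paragraph, params)                   # application program call
    pushdown.append(nxt)                             # frame waiting to be displayed
    return pushdown[-1]

pushdown, paragraph, params = [], [], []
nxt = select(3463, "A", pushdown, paragraph, params)
print(nxt, paragraph)
```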
The selections made by the medical user at the
terminal are concatenated by the Human Interface
Program to form "paragraphs" of information. The
paragraph is the basic unit of information generated in
the system. (The Storage and Retrieval programs
manipulate paragraphs of information.) Associated with
each paragraph is the Selection Parameter List which
includes for each choice made by the user the frame
number, the choice number within that frame, and any
internal parameters associated with the choice. The
internal parameters can be used to code selections so
programs can interpret compact codes rather than
alphanumeric data. The internal parameters are identified by a single letter (e.g., "F" type internal parameters
specify format codes which will be explained in detail
below). Using the Selection Parameter List our programs can analyze the input data without having to
search the generated English text.

Figure 2-The "ONSET" frame as it is displayed by the Human Interface Program ('HIP') and the same display as it is programmed in SETRAN. The frame offers the choices COULDN'T DETERMINE, DIDN'T DETERMINE, GRADUAL (INSIDIOUS) and SUDDEN (ABRUPT), each carrying its branching information (e.g., frame 3434)
The user-generated paragraphs and associated Selection Parameter Lists are the coupling mechanism between the Human Interface Program and our application programs which store, retrieve, and manipulate
the patient records and other files.
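The role of the Selection Parameter List as a coupling mechanism can be sketched briefly. The entries, internal-parameter codes, and their meanings below are invented for illustration:

```python
# Hypothetical sketch of why the Selection Parameter List spares the
# application programs from parsing English: each entry records the frame
# number, choice number, and internal parameters, so a program keys on
# compact codes rather than on the generated narrative.
SELECTION_PARAMETER_LIST = [
    (3463, 2, ["O1"]),   # an onset choice made on frame 3463
    (3470, 1, ["S3"]),   # a severity choice made on a later frame
]

CODE_MEANINGS = {"O1": "onset gradual", "S3": "severity marked"}

def decode(spl):
    """Interpret internal parameters instead of searching narrative text."""
    found = []
    for frame, choice, internals in spl:
        for code in internals:
            found.append(CODE_MEANINGS.get(code, "unknown code"))
    return found

print(decode(SELECTION_PARAMETER_LIST))
```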
A language was developed as part of the basic system
called SETRAN (Selection Element TRANslator)10
which makes possible the programming of the branching logic displays and alteration of already entered displays using the keyboard on the terminal. See Figure 2
for an example of a frame as displayed in the Human
Interface Program and in SETRAN. It has been possible to train physicians and other personnel in this
language and development of new displays has proceeded without a computer person acting as an intermediary. Allowing medical personnel, with almost no
computer science training, a direct means of developing
and altering basic system displays is fundamental during the development phase of systems such as these.
The massive number of such displays required for such
a system (currently over 16,000 displays have been
developed by the PROMIS group) and the necessity
for a tight feedback-loop during the developmental
phase of the displays require such tools. Once systems
such as this are beyond the developmental phase, access to such programs must be carefully controlled.
The operating system supports multiple terminals
allowing a rapid response to user interaction (a user
familiar with the frame content can make selections
faster than one per second) and supports application
programs operating in a multi-level, multi-programming mode.

Figure 3-EXPLANATORY LEGEND
The "HUMAN INTERFACE PROGRAM (HIP)" displays medical content from the "FRAME DISPLAY DICTIONARY" to the user in a branching logic fashion. Most of the entries in the "FRAME DISPLAY DICTIONARY" were created by system development personnel at some time in the past by using the "SETRAN PROGRAM."
As the user makes a series of choices at the terminal, "HIP" creates a "SELECTION PARAMETER LIST AND PARAGRAPH SEGMENTS" which represent the history of choices made by the user.
At appropriate times in the series of displays seen by the user, dependent upon pathway, "HIP" calls the "STORE PROGRAMS" and the "RETRIEVE PROGRAMS."
The "STORE PROGRAMS," by processing the "SELECTION PARAMETER LIST AND PARAGRAPH SEGMENTS" for the specific user, can update the mass storage resident "PROBLEM ORIENTED MEDICAL RECORD PATIENT STRUCTURED AND LIST FILES."
If the "STORE TRANSLATED" module of the "STORE PROGRAMS" is used (e.g., for processing patient histories), reference will be made to the "TRANSLATION DICTIONARY" which was created in the past by system development personnel using the "DETRAN PROGRAM."
The "RETRIEVE PROGRAM (STRUCTURED FILE)" processes the user's retrieval request as specified in his associated "SELECTION PARAMETER LIST AND PARAGRAPH SEGMENTS" retrieving any specified part of a patient's medical record.
Output by the "FORMAT ROUTINE" from the "RETRIEVAL PROGRAM (STRUCTURED FILE)" can be either to "PRINTED MEDICAL OUTPUT" (hardcopy) or directly to the user's cathode ray tube terminal via the "FRAME DISPLAY DICTIONARY" and "HIP."
The "RETRIEVE PROGRAM (LIST FILE)" may be called in the process of the user making choices at the terminal. It produces output directly to the user's terminal.
All of the above processes take place unknown to the user but are a direct response to his choices at the terminal in either storing in or retrieving information from the Computerized Problem Oriented Medical Record.
The frames, application programs, medical files and
station associated variables are disk resident. Most
selections made by a user require four disk accesses (in
the current version of the system) and the variables
associated with a station are core resident only while
the selection for that station is being processed.
There are four main classes of application programs.
The two highest level classes are interactive with the
user and require immediate execution. The Selection
Element Translator is of this type and allows the on-line entering and changing of frame content and branching (by appropriate personnel, not all users).
The second level class of programs is executed while
the user is at the terminal, but these programs may take
longer to run than the ones executed immediately. An
example of this type is the program which retrieves
from the patient records to the cathode ray tube
terminal.
The lowest level of application programs is executed
sequentially by priority level in the background after
a user has signed off the terminal. These include the
programs which store into the patient records and retrieve patient records to the line printer.
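The three execution classes can be sketched as follows. Only the class descriptions come from the text; the priority-queue mechanics below are our assumption:

```python
# Sketch of the lowest class of application programs: background work run
# sequentially by priority level after the user signs off the terminal.
import heapq
from itertools import count

background = []   # heap of (priority, tie-breaker, job name); lower runs first
_seq = count()

def queue_background(priority, name):
    """Queue a background job, e.g., at sign-off."""
    heapq.heappush(background, (priority, next(_seq), name))

def run_background():
    """Drain the background queue in priority order."""
    order = []
    while background:
        _, _, name = heapq.heappop(background)
        order.append(name)
    return order

queue_background(2, "retrieve record to line printer")
queue_background(1, "store paragraphs into patient record")
result = run_background()
print(result)
```

The two higher classes (immediate interactive programs and terminal-time programs that may run longer) would bypass this queue and execute while the user is still at the terminal.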
AN APPROACH TO THE COMPUTERIZATION
OF THE MEDICAL RECORD
Our approach to the computerization of the medical
record involves many elements. The Problem Oriented
Medical Record represents the medical "foundation"
upon which the total system rests. The Human Interface Program, the Selection Element TRANslator, the
system executive and the hardware" drivers" represent
the computer software "foundation." The other elements to our approach will be discussed in this section.
The information generated by having the medical
personnel (or the patient himself) go through the
branching displays is English narrative. Although the
number of selections presented on each display is small
(averaging 8 selections) the number of paths through
these selections is quite large. There are currently over
16,000 displays in the system. Approximately 12,000
of these are branching displays, the remainder are
solely for information, e.g., before any drug can be
ordered a sequence of displays requests the physician
to: (1) CHECK THE PROBLEM LIST FOR: followed by a list of problems, (2) SIDE EFFECTS TO
WATCH FOR:, (3) DRUG AND TEST INTERACTIONS:, (4) TEST INTERFERENCE:, (5) USUAL
DOSAGE:, followed by the optional (6) MECHANISM
OF ACTION: and (7) METABOLISM AND EXCRETION:.
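The drug-ordering sequence above can be sketched as a short routine. The display titles are from the text; the function and its flag are ours:

```python
# Sketch of the display sequence shown before a drug can be ordered:
# five mandatory information displays, then two optional ones.
MANDATORY = [
    "CHECK THE PROBLEM LIST FOR:",
    "SIDE EFFECTS TO WATCH FOR:",
    "DRUG AND TEST INTERACTIONS:",
    "TEST INTERFERENCE:",
    "USUAL DOSAGE:",
]
OPTIONAL = ["MECHANISM OF ACTION:", "METABOLISM AND EXCRETION:"]

def drug_order_displays(want_optional=False):
    """Return the sequence of display titles shown before the order."""
    return MANDATORY + (OPTIONAL if want_optional else [])

print(drug_order_displays(want_optional=True)[-1])
```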
Structured branching logic displays allow the medical user to operate from a body of knowledge broader
than can be kept in his own memory. This body of
knowledge as represented in the library of displays11,12
is capable of being updated in an organized and systematic way, so that it can always reflect the most
current and sophisticated medical thinking. The system is dynamic and since data can be typed into individual patients' records supplementing the structured
displays, it would be possible to analyze this typed-in
data to update the library of displays (if the typed-in
data indicate a deficiency in the branching displays
and not in the physician using the displays). Once
systems similar to the one described here are in daily
operation, the organized and systematic updating of
the library of displays will have to be centralized in an
organization with this sole responsibility and authority.
The generation of English is the result of a user making selections from structured, branching-logic displays. These selections must then be transformed by a
program into an internal form for storage in a patient's
file. The program that does the transformation of the
selections into an internal form is generalized; it is independent of the specific content contained in the
selections. It stores data in an internal form, independent of the output devices ultimately used to display the data. The retrieval routines that allow the
stored data to be manipulated and displayed in various
forms will also be discussed in more detail. (See Figure 3).
The displays necessary for the generation of each
section of the Problem Oriented Medical Record are
specified by a "meta-structure" for each section. The
"meta-structure" specifies the branching logic of the
content displays.1 There are structured approaches to
Present Illness, Problem Lists, Drug Sequences, and
Progress Notes. For example, the Present Illness metastructure would include, for each body system, a list
of the symptoms particular to that system and for each
symptom, a list of its characteristics. In the Psychiatry
system, for example, if Headache were selected from
the frame of the list of possible symptoms, the physician


would be asked to describe characteristics of this
headache:

24135
HEADACHE
ONSET/COMMENCED
LOCATION
INTENSITY AT WORST
RADIATION
AMOUNT AT WORST
RELIEVED/NOT RELIEVED BY
QUALITY
MADE WORSE BY
TIME RELATIONSHIP
ASSOCIATED WITH
EPISODES
|| CHOICES CONT

24226
HEADACHE
|| RETURN TO PREV. PAGE
COURSE OVERALL:
|| TURN PAGE

If "MADE WORSE BY" were chosen by the physician, a frame containing the following selections appears:

24174
HEADACHE
MADE WORSE BY:
NOISE,
ORAL CONTRACEPTIVES,
PHYS. EXERTION,
ALCOHOL,
POSITION,
CERTAIN FOODS (TYPE IN)
TENSION/ANXIETY,
FATIGUE,
NOTHING IN PARTICULAR.
DIDN'T DETERMINE.
|| TURN PAGE

This technique allows a complete English narrative
description of HEADACHE to be generated:

HEADACHES:
ONSET: GRADUAL (INSIDIOUS). COMMENCED: 3 WEEKS AGO.
SEVERITY AT WORST: SEVERE, CANNOT CONTINUE USUAL ACTIVITY.
QUALITY: DULL, DEEP, PRESSING/BAND-LIKE.
TIME OF DAY: NOCTURNAL: PREVENTS SLEEP.
COURSE: OCCURS IN EPISODES WHICH ARE: FREQUENT.
EACH EPISODE OCCURS: SEVERAL X/WEEK. EACH EPISODE LASTS: HOURS.
LOCATION/SPREAD: VERTEX, PARIETAL, OCCIPITAL, BILATERAL.
RELIEVED BY: NOTHING.
MADE WORSE BY: NOTHING IN PARTICULAR.
ASSOCIATED WITH: SOMNOLENCE/TORPOR; DIFFICULTY W/MEMORY;
KNOWN CONVULSIVE DISORDER.
ASSOCIATED WITH: CONCURRENT RX WITH DILANTIN, ZARONTIN, MEBARA
OVERALL COURSE: UNCHANGED.
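The "meta-structure" that generates frame sequences like the one above can be sketched as a nested table. The entries echo the HEADACHE example in the text; the representation itself is ours:

```python
# Sketch of a Present Illness meta-structure: for each body system a list
# of symptoms, and for each symptom the characteristics the physician is
# asked about.  Only one system and symptom are shown.
META_STRUCTURE = {
    "PSYCHIATRY": {
        "HEADACHE": [
            "ONSET/COMMENCED", "LOCATION", "INTENSITY AT WORST",
            "RADIATION", "QUALITY", "MADE WORSE BY",
            "TIME RELATIONSHIP", "ASSOCIATED WITH", "EPISODES",
        ],
    },
}

def characteristics(system, symptom):
    """Characteristics to display once a symptom has been selected."""
    return META_STRUCTURE[system][symptom]

print(characteristics("PSYCHIATRY", "HEADACHE")[:2])
```

In the real system each characteristic would itself name a frame of selections (as in the MADE WORSE BY example).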


All programs that interpret input data can assume a
standard form and structure guaranteed by how the
branching logic displays are programmed. Contained
in the Selection Parameter List for each paragraph are
the internal parameters which define the type of data
and what the program should do with it. The programs
also receive information which further describes the
paragraph as a unit. This information was defined by
selections as the user went through the displays. Associated with each paragraph is a paragraph label which
further describes the paragraph on two levels: Information Type 1 (IT1) defines the major section in a record
to which this paragraph belongs (e.g., Physical Examination, Progress Notes, Problem List, Past Medical
History and Systems Review, etc.); Information Type 2
(IT2) defines the subsection within the major section
(e.g., Skin Examination in the Physical Examination,
Symptomatically in the Progress Notes, etc.). All paragraphs which contain problem oriented data have associated with the label the applicable problem number.
Also associated with the paragraph label is information
indicating whether this paragraph contains either narrative or numeric information. The date, time, user and
patient numbers are also associated. These data determine where the paragraph is to be stored in a specific
patient's record.
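The paragraph label described above might be sketched as a record. The field names and the routing key below are our assumptions; the label contents follow the text:

```python
# Sketch of a paragraph label: Information Type 1 (major section),
# Information Type 2 (subsection), a problem number for problem oriented
# data, a narrative/numeric flag, and date, time, user and patient numbers.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ParagraphLabel:
    it1: str                    # e.g., "PROGRESS NOTES"
    it2: str                    # e.g., "SYMPTOMATIC"
    problem_no: Optional[int]   # present only for problem oriented data
    numeric: bool               # numeric vs. narrative paragraph
    date: str
    time: str
    user_no: int
    patient_no: int

def storage_key(label):
    """Where the paragraph belongs within a specific patient's record."""
    return (label.patient_no, label.it1, label.it2, label.problem_no)

lbl = ParagraphLabel("PROGRESS NOTES", "SYMPTOMATIC", 3, False,
                     "70/12/01", "14:30", 17, 85)
print(storage_key(lbl))
```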
Structured displays and the internal parameters
linked to selections (and collected in the Selection
Parameter List) both imply a closed system. The closed
system enforces the organized entry of information in a
well-defined syntax. Via the structured displays, the
user is aware of data relationships that normally are
imbedded in the data interpreting programs. The data
is so entered that it has an inherent structure-not to
be found when data is entered free form. This structure
holds even if the data entered via selections from the
structured displays are supplemented by typed-in information. Much simpler data interpreting programs
are required for such structured data with the associated Selection Parameter List than for purely free-form input.
In addition, the user of structured displays can operate on recognition rather than recall. He has available,
at the time he needs it, the organized knowledge of his
profession. This knowledge can be systematically updated with a thoroughness that is impossible on an
individual basis.
In summary, the necessary elements in our approach
to a computerized system to store and retrieve Problem
Oriented Medical Records include a medically relevant
organization of the data in the medical record, an effective interface between the medical user and the computer system, a means of structuring the medical content material on frames using meta-structures, programs

to transform selections into a manipulatable internal
form, programs to retrieve the stored data in various
forms, and a "closed" system.

FILE STRUCTURES FOR MEDICAL RECORDS
Our files may be characterized in terms of (1) maintenance and (2) structure. In terms of maintenance:
Will this file handle information that can be both inserted and deleted (purgable) or only inserted (nonpurgable)? An example of a file into which information
will only be inserted and never purged is the patient's
problem-oriented medical record. A list of all the patients on a ward is an example of a file that will be
added to and deleted from on a regular basis as patients
are admitted and discharged from the ward. In terms
of structure: We define a file of homogeneous elements
(a file which contains a specific subset of the IT1,
IT2's) as a list file; e.g., the list of patients on a specific
ward, or the list of problems on a specific patient, or
the list of current drug/diet/activities for a specific
patient. A file of heterogeneous elements (all IT1, IT2's)
we call a structured file, for example, the patient's
Problem Oriented Medical Record.
There are then a total of four types of files that we
envision possible within the system: (1) a list file that
cannot be purged; files of this type will ultimately be
used for research retrieval capabilities; (2) a list file
that can be purged; (3) a structured file that cannot
be purged, and (4) a structured file that can be purged
and which may in the future be used for the most current progress note. This progress note could be purgable
since it might be possible, for example, to condense
many vital sign values to one value and a range. For
the current implementation of the system it has been
necessary to utilize list files that are purgable and
structured files that cannot be purged. Further expansion and sophistication of the current system will require the other two types of files to be developed and
utilized. The two file types currently used will be described in more detail.
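The four envisioned file types can be enumerated in a short sketch. The classification comes from the text; the representation is ours:

```python
# Sketch of the four file types: list vs. structured, crossed with
# purgable vs. non-purgable.
from itertools import product

def file_types():
    """All four combinations the section envisions."""
    return [(structure, purgable)
            for structure, purgable in product(("list", "structured"),
                                               (True, False))]

# Only two are used in the current implementation: purgable list files
# (e.g., the patients on a ward) and non-purgable structured files
# (the patient's Problem Oriented Medical Record).
CURRENTLY_USED = [("list", True), ("structured", False)]

all_types = file_types()
print(all_types)
```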

NON-PURGABLE STRUCTURED FILES-THE
PATIENT'S PROBLEM ORIENTED
MEDICAL RECORD
An individual patient's file requires a structure that
facilitates the storage and retrieval of its data while
minimizing the number of mass storage accesses. This
file (a non-purgable file with heterogeneous elements)
consists of a "Table of Contents" and a variable number of "Items." The Table of Contents is an index to
all the data in an individual patient's file and is addressable as a function of the patient's system I.D.
number. The Table of Contents contains a variable
length list of Item Pointers (see Figure 4). Each Item
Pointer includes Information Type 1, the storage date
and time, and the Item number containing the paragraph(s).
If the Information Type 1 is problem oriented
then there is also an array of bits that represent the
presence of that problem number in the Item. This
feature allows the rapid sequential retrieval of all the
data on one problem by indicating whether the Item
contains paragraphs on the problem number.
The Item is the depository of the narrative (e.g.,
Present Illness) and numeric (e.g., Blood Pressure)
paragraphs. It contains the paragraphs generated by
the user at the terminal and transformed by the
STORE program into the internal form. (A description
of this internal form is given later in this paper.) For
each paragraph in the Item there is a corresponding
paragraph pointer also in the Item. This paragraph
pointer contains Information Type 1 and 2, problem
number, date, time, user number and the relative address of the paragraph in this Item.
To access any specific paragraph of information in
an Item a search through the paragraph pointers, which
are sorted and linked together, is required. No searching of actual narrative data is necessary for the retrieval
of any narrative paragraph contained in an individual
Item, as the paragraph pointers completely define the
contents of each narrative paragraph at the level the
medical user needs to differentiate the data.
This type of file structure allows the retrieval of
specific paragraphs of data in a minimum number of
mass storage accesses (two accesses), if we assume the
Table of Contents does not overflow. Since the Table
of Contents (core resident in one access) contains Item
Pointers for each Information Type in the patient's
file, searching it gives the Item number(s) of the requested data. The necessary Item(s) can then be
brought into core in one access per Item.
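The two-access retrieval path just described might be sketched as follows. The dictionaries stand in for the mass-storage layout, which is our invention; the access pattern and the problem-number bit array are from the text:

```python
# Sketch of two-access retrieval: access 1 reads the Table of Contents,
# whose Item Pointers carry a bit array of problem numbers; access 2..n
# fetches one Item per pointer that matches.
TABLE_OF_CONTENTS = {  # patient I.D. -> list of Item Pointers
    85: [
        {"it1": "PROGRESS NOTES", "date": "70/12/01", "item": 7,
         "problem_bits": 0b0110},   # Item 7 holds problems 1 and 2
        {"it1": "PROGRESS NOTES", "date": "70/12/02", "item": 9,
         "problem_bits": 0b0100},   # Item 9 holds problem 2 only
    ],
}
ITEMS = {7: ["paragraph on problem 1", "paragraph on problem 2"],
         9: ["later paragraph on problem 2"]}

def items_for_problem(patient, problem_no):
    """Access 1: scan the core-resident Table of Contents."""
    return [p["item"] for p in TABLE_OF_CONTENTS[patient]
            if p["problem_bits"] & (1 << problem_no)]

def retrieve(patient, problem_no):
    """One further mass-storage access per Item containing the problem."""
    return [ITEMS[i] for i in items_for_problem(patient, problem_no)]

print(items_for_problem(85, 2))
```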

INTERNAL REPRESENTATION OF
NARRATIVE AND NUMERIC DATA
WITHIN THE PATIENT'S PROBLEM-ORIENTED MEDICAL RECORD
Each paragraph generated by the user to be stored
into a patient's record can contain either narrative or
numeric information. The STORE program and appropriate subroutines are used to transform each paragraph into a standard internal form which consists of
strings of eight bit characters.

Figure 4-File structure of a patient's Problem Oriented Medical Record, a structured non-purgable file: a Table of Contents of Item Pointers indexing a variable number of Items

The paragraph, for narrative data, represents the smallest unit of information
the physician or other medical personnel can ever request on an individual patient. For numeric data, the
numeric block-which specifies one numeric value-is
necessary, as the physician requires time ordered
graphs of various physiologic parameters and thus the
retrieval of individual numeric blocks is required. It is
possible to store a variable number of numeric blocks
in one paragraph. In both cases the data are accessible
at the level the medical personnel are most accustomed
to working with it. This internal representation allows
the rearrangement of data for various retrieval requirements that are impossible with a manual paper record
system.
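The numeric-block requirement can be sketched as follows. The record layout here is assumed; the requirement that time-ordered graphs be retrievable block by block is from the text:

```python
# Sketch of numeric blocks: a paragraph may hold several numeric blocks,
# each specifying one value of a physiologic parameter.
numeric_paragraph = {
    "it1": "FLOWSHEET", "it2": "BLOOD PRESSURE",
    "blocks": [("70/12/01 06:00", 120), ("70/12/01 12:00", 135),
               ("70/12/02 06:00", 128)],
}

def time_ordered_values(paragraph):
    """Values in time order, ready for graphing at the terminal."""
    return [v for _, v in sorted(paragraph["blocks"])]

print(time_ordered_values(numeric_paragraph))
```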
If the paragraph contains narrative data, a STORE
routine interprets "F" type internal parameters in the
Selection Parameter List as format codes. These format
codes are associated with certain selections. They are
used to define the internal form in which the data will
be stored. This internal form in turn defines the output
format of the paragraph and specifies the relationship
between the selection with the" F" internal parameter
and those selections following. The selection with the
"F" internal parameter is treated as a "title" and the
selections until the next "F" internal parameter are
"data" (See diagram below). Specifically, the format
code defines the indentation level of the title (level 0 is
the least indentation, level 3 is the greatest amount)
and whether there are carriage returns before and/or
after the title. With this information it is possible to
output the narrative using an interpretive output subroutine called FORMAT. The format codes and the
internal form are output device independent.
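As a minimal sketch of how such format codes might drive an interpretive output routine: the encoding below (a level number plus a carriage-return flag) is invented for illustration; the paper does not give the actual bit layout of the FORMAT codes.

```python
# Hypothetical sketch of an interpretive FORMAT-style routine. Each
# element carries a format code: an indentation level (0-3) and whether
# a carriage return follows the title. The encoding is invented here.

def format_paragraph(elements, indent_width=2):
    """elements: list of ((level, cr_after_title), title, data) tuples."""
    lines = []
    for (level, cr_after_title), title, data in elements:
        indent = " " * (indent_width * level)
        if cr_after_title:            # title stands on its own line
            lines.append(indent + title)
            lines.append(indent + data)
        else:                         # data continues on the title's line
            lines.append(indent + title + " " + data)
    return "\n".join(lines)

text = format_paragraph([
    ((0, True), "SOCIAL PROFILE:", "ADULT FEMALE. AGE 69."),
    ((1, False), "MARRIED.", "LIVES ALONE."),
])
```

Because the routine interprets codes at output time, the same stored paragraph can be rendered with different indent widths or line lengths per device, which is the sense in which the internal form is device independent.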
Since SETRAN (Selection Element TRANslator)
allows internal parameters to be associated with any


Spring Joint Computer Conference, 1971

choice, the individual who writes the frame can specify or change the output format. Such flexibility is an important feature in a system like ours, as it allows the individual writing the frame content material to specify, at the same time, the output format of the information. Changes of output format are also greatly facilitated, as they require only a rewriting, using SETRAN, of the "F" internal parameters on the frame. (Any information previously generated from these frames is not reformatted.)
Internally a paragraph of narrative data is of the
following form:
(Format Code) (Title Narrative) (Data Narrative) .. .
(Format Code) (Title Narrative) (Data Narrative) .. .
(Format Code) (Title Narrative) (Data Narrative)
(Terminating Format Code)
For example:

(CR, LEVEL 0, CR) (SOCIAL PROFILE:)
(LEVEL 1) (ADULT FEMALE. AGE 69.)
(LEVEL 1) (BORN IN VERMONT; RURAL AREA. LIVED IN AREA OF CURRENT RESIDENCE FOR >39 YEARS.)
(LEVEL 1) (LAST COMPLETED GRADE: JUNIOR COLLEGE. WOULD NOT LIKE FURTHER EDUCATION OR TRAINING.)
(LEVEL 1) (MARRIED. DOES NOT LIVE W/HUSBAND. LIVES ALONE. COOKS OWN MEALS. WIDOWED FOR >1 YEAR.)
(LEVEL 1) (NOT SATISFIED W/PRESENT LIVING CONDITIONS.)
(LEVEL 1) (UNEMPLOYED FOR MORE THAN 2 YEARS. DOES NON-STRENUOUS LABOR. GETS DAILY EXERCISE. PRESENT HEALTH CONDITIONS INTERFERE W/WORK.)
(LEVEL 1) (SUPPORTED MAINLY BY SELF.)
(LEVEL 1) (DOES NOT DRINK ALCOHOL.)
(LEVEL 1) (EATS 2 OR MORE MEALS/DAY; MEAT OR EGGS WITH 1 OR MORE OF THEM.)
(LEVEL 2) (22 POS, 12 NEG, 0 DNK, 0 DNU.)
(TERMINATION CODE)

A printout on the line printer of this paragraph would result in the following:

SOCIAL PROFILE:
   ADULT FEMALE. AGE 69.
   BORN IN VERMONT; RURAL AREA.
   LIVED IN AREA OF CURRENT RESIDENCE FOR >39 YEARS.
   LAST COMPLETED GRADE: JUNIOR COLLEGE.
   WOULD NOT LIKE FURTHER EDUCATION OR TRAINING.
   MARRIED. DOES NOT LIVE W/ HUSBAND. LIVES ALONE.
   COOKS OWN MEALS. WIDOWED FOR >1 YEAR.
   NOT SATISFIED W/ PRESENT LIVING CONDITIONS.
   UNEMPLOYED FOR MORE THAN 2 YEARS.
   DOES NON-STRENUOUS LABOR. GETS DAILY EXERCISE.
   PRESENT HEALTH CONDITIONS INTERFERE W/WORK.
   SUPPORTED MAINLY BY SELF.
   DOES NOT DRINK ALCOHOL.
   EATS 2 OR MORE MEALS/DAY; MEAT OR EGGS WITH 1 OR MORE OF THEM.
      22 POS, 12 NEG, 0 DNK, 0 DNU.

On the cathode ray tube it appears as:

SOCIAL PROFILE:
   ADULT FEMALE. AGE 69.
   BORN IN VERMONT; RURAL AREA. LIVED IN AREA
   OF CURRENT RESIDENCE FOR >39 YEARS.
   LAST COMPLETED GRADE: JUNIOR COLLEGE. WOULD
   NOT LIKE FURTHER EDUCATION OR TRAINING.
   MARRIED. DOES NOT LIVE W/ HUSBAND. LIVES
   ALONE. COOKS OWN MEALS. WIDOWED FOR >1 YEAR.
   NOT SATISFIED W/ PRESENT LIVING CONDITIONS.
   UNEMPLOYED FOR MORE THAN 2 YEARS. DOES
   NON-STRENUOUS LABOR. GETS DAILY EXERCISE.
   PRESENT HEALTH CONDITIONS INTERFERE W/WORK.
   SUPPORTED MAINLY BY SELF.
   DOES NOT DRINK ALCOHOL.
   EATS 2 OR MORE MEALS/DAY; MEAT OR EGGS WITH 1
   OR MORE OF THEM.
   22 POS, 12 NEG, 0 DNK, 0 DNU.

The title and data narrative are both of variable length.
If the paragraph contains numeric data, then the
STORE-NUMERIC routine interprets "N" type
internal parameters in the Selection Parameter List as
numeric codes. The numeric codes associate specific
medical data with the internal structure of the numeric
blocks. The numeric blocks each contain numeric values
or objective text; this could represent a blood pressure,
a clinical chemistry value, or any other type of objective information that must be manipulated internally
in the system. The numeric codes are used to associate

Initial Operational Problem Oriented Medical Record System

a type code (i.e., a number that represents the type of
data contained in the block and is the means of identifying all numeric data within the system), a time, a
date, a title, a numeric value, and objective text with
the numeric block structure. For example, a temperature numeric block could contain: type code = 30; time = 14:35; date = Feb. 23, 1970; title = TEMP; numeric value = 38; number descriptor = C. Medical personnel writing frames can define and change the "N"
type internal parameters using SETRAN. (Previously
stored numeric blocks are not affected.)
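The temperature example suggests the following record layout for a numeric block. The field names below are ours, taken from the example; the actual block was a packed internal structure whose exact layout the paper does not give.

```python
# Hypothetical sketch of a numeric block as a record type. Field names
# follow the temperature example in the text; the real block layout is
# a packed internal structure, not a Python object.
from dataclasses import dataclass
from datetime import date, time

@dataclass
class NumericBlock:
    type_code: int          # identifies the kind of data system-wide
    at_time: time
    on_date: date
    title: str
    value: float
    descriptor: str         # e.g. units, such as "C"
    objective_text: str = ""

temp = NumericBlock(type_code=30, at_time=time(14, 35),
                    on_date=date(1970, 2, 23), title="TEMP",
                    value=38, descriptor="C")
```

The type code is what makes a block retrievable by kind (e.g., for a flowsheet column) independently of the paragraph it happens to be stored in.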
The overall philosophy upon which the system was
built required enough flexibility to allow the medical
user, * after a minimal training period on the system,
to change all system variables specifying the output
format (and thus the internal form of the data) and
those internal parameters that depend upon medical
knowledge of various physiological parameters. Medical personnel associated with our project can directly
develop many of the system content frames and can
change the specification of how the data will be stored
internally by changing internal parameters entered
with the frame content material. Such changes require
no modifications to our programs.
PURGABLE LIST FILES-PATIENT, WARD
AND MESSAGE FILES
There are two different types of purgable list files: the message files and the intermediate selectable list files.
To facilitate the closed nature of the system, it must be possible to present previously entered dynamic data to the user on displays for selection (e.g., a list of a patient's current active problems). The intermediate selectable list files perform this function. We had to develop this ability to display lists for selection because the basic system software (supplied by Control Data) allows only the creation of static displays (via the Selection Element TRANslator) from the keyboard, with no ability to dynamically display various lists for selection. Because of time requirements in displaying lists, the data to be displayed come from this "intermediate" file rather than from a scan of the entire patient's file each time a list is to be created for a user.
The files are directly accessible on the basis of either
the patient's system I.D. number (patient list files)

* "Medical user" refers to medical personnel in the PROMIS
Laboratory contributing to system development and not to the
physician on the ward who does not need to know any of this
material to function adequately and who in fact would never be
allowed access to SETRAN.


or ward number (ward list files). For each patient
there are two classes of file entries: patient problem
list entries and patient order entries. For each ward, the
ward file consists of entries of the patients currently on
that ward with additional information such as the
status of each of their problems.
Two examples follow. To enter data on a patient, the patient must be identified. This could be done by having the user key in certain identification information which would then be scanned, verified, and used to access the patient's record. The identification is more easily achieved, if the user knows the ward on which the patient is staying, by allowing the user to select the desired patient from a dynamic list of patients on a given ward. The selection, constructed from information in the patient's record (including his name, age, sex, and unit number), has associated with it internal parameters that define this patient's system I.D. number. This allows the STORE or the retrieval programs to directly address the patient's Table of Contents. Another example, which has been very convenient for the nurses in reporting the administration of a drug to a patient, is a list of the current drugs that the patient is receiving.
The "message" file is not a pure "selectable" file
since its entries are not used as part of the displays in
the system. Its contents are copies of all the additions
to the patient order files. In the future these could be
sorted and printed out at the proper location in the
hospital (e.g., laboratory, pharmacy, x-ray). However,
this processing of the message files has not currently
been implemented.
THE STORE PROGRAM FOR THE PATIENT'S
STRUCTURED FILE
The STORE program stores all narrative and numeric data into the patient's problem oriented medical
record (structured non-purgable file). It is executed
after the user at the terminal has confirmed as valid all
generated paragraphs. No data are stored until after
this final verification procedure. The STORE program
receives as input the paragraphs and their associated
Selection Parameter Lists. A Paragraph Index is built
for all the paragraphs input to the program. Each paragraph's identifying data (Information Type 1 and 2, problem number, storage mode, date, time, user number, and patient number) are put in a Paragraph Index Element. After the Paragraph Index is built, it is sorted by patient number, date, time, Information Type 1, problem number, and Information Type 2. This is the order in which the paragraphs are stored in the patient's file (for normal retrieval).
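The six-field sort above can be sketched as a composite sort key. The element layout and field names here are invented for illustration; only the key order comes from the text.

```python
# Hypothetical sketch of sorting Paragraph Index Elements on the
# composite key given in the text: patient number, date, time,
# Information Type 1, problem number, Information Type 2.
from operator import itemgetter

def sort_paragraph_index(index):
    return sorted(index, key=itemgetter("patient", "date", "time",
                                        "it1", "problem", "it2"))

index = [
    {"patient": 2, "date": 701120, "time": 931,  "it1": 1, "problem": 3, "it2": 0},
    {"patient": 1, "date": 701120, "time": 1534, "it1": 2, "problem": 1, "it2": 4},
    {"patient": 1, "date": 701119, "time": 1155, "it1": 2, "problem": 1, "it2": 4},
]
ordered = sort_paragraph_index(index)
```

Sorting once on this key groups each patient's paragraphs chronologically, which is exactly the storage order the retrieval programs later assume.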
For each Paragraph Index Element the storage mode


defines which of these STORE routines is executed:

STORE-DIRECT is executed if the paragraph contains the narrative to be stored in the record. (This is
the narrative shown on the top of the display as selections are made.) STORE-DIRECT interprets the "F"
internal parameters in the Selection Parameter List as
format codes and combines them with the selections to
form the narrative data in its internal form.
STORE-NUMERIC is executed if the paragraph
contains numeric blocks to be stored into the Item.
STORE-NUMERIC interprets the "N" parameters
in the Selection Parameter List along with the narrative in the paragraph to build the numeric blocks in
their internal form.
STORE-TRANSLATED is executed if the paragraph is the result of a questionnaire. The paragraph
does not contain narrative selections but the Selection
Parameter List contains a record of the selections made
on each frame. These paragraphs are formed when
"YES" or "NO" questions are answered and the response must be translated into English narrative. The
Selection Parameter List is interpreted and an "S" internal parameter associated with any selection signals
a dictionary look-up using the frame and the choice
number to define the dictionary element. The dictionary elements are concatenated according to rules defined in the dictionary and the resultant data are stored
in the internal form for narrative data. Used in conjunction with STORE-TRANSLATED is a programming language similar to the Selection Element TRANslator. This program, the Dictionary Element
TRANslator, DETRAN, is used to define the narrative
to be associated with any choice, the rules to specify
the concatenation of titles with subsequent dictionary
elements, and the format codes necessary to specify the
output format. Using STORE-TRANSLATED and
Dictionary Element TRANslator it has been possible
to give a patient a questionnaire in Spanish and have
the narrative output in English.
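The dictionary look-up at the heart of STORE-TRANSLATED can be sketched as follows, assuming a dictionary keyed on (frame, choice) pairs. The dictionary contents below are invented; the real dictionary, built with DETRAN, also carries concatenation rules and format codes, omitted here.

```python
# Hypothetical sketch of the STORE-TRANSLATED look-up: each
# (frame number, choice number) pair keys a dictionary element, and
# the elements are concatenated into narrative text.

def translate(selections, dictionary):
    """selections: list of (frame_number, choice_number) pairs."""
    return " ".join(dictionary[(frame, choice)] for frame, choice in selections)

# Invented dictionary entries for illustration only.
dictionary = {
    (101, 1): "ADULT MALE.",
    (101, 2): "ADULT FEMALE.",
    (102, 1): "NO WEIGHT LOSS.",
}
narrative = translate([(101, 1), (102, 1)], dictionary)
```

Because the output narrative depends only on the (frame, choice) keys, swapping in a dictionary whose frames are displayed in Spanish changes nothing downstream, which is how a Spanish questionnaire can yield English narrative.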
THE STORE PROGRAM FOR THE LIST FILES
The STORE LIST program checks all newly input
paragraphs to the patient's record and determines if
they should be used in updating the various intermediate selectable list files for that patient, the ward which
he is currently on, or the message file.
This program (in its usual mode) takes as input, the
paragraphs just stored into a patient's record. This includes both narrative (e.g., problem statement) and
numeric (e.g., order) paragraphs. From information in
these paragraphs, the program may add, delete or alter
entries in the appropriate (patient, ward, and message)
list files. For example, if a new order is written for a

patient, that order's "text," along with the order's problem number, the type code, the frequency, and the number of times for administration, will comprise an entry added to that patient's intermediate selectable list file. The entries added to the file are sorted, using one or more elements of the entry as sort keys, depending upon the entry type (e.g., problem list entry, order entry, ward entry, etc.).
Although this program is usually called by the STORE program, it can also be called independently of it. For example, a single patient's intermediate selectable list files can be rebuilt by completely scanning all entries in the patient's record.
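The update step can be sketched as below, assuming order paragraphs carry the fields named in the text. The entry layout and the choice of sort key are invented for illustration.

```python
# Hypothetical sketch of the STORE LIST update: a newly stored order
# paragraph becomes an entry in the patient's intermediate selectable
# list file, which is kept sorted on an entry-type-dependent key.

def update_list_file(list_file, paragraph):
    if paragraph["kind"] == "order":
        entry = {"text": paragraph["text"],
                 "problem": paragraph["problem"],
                 "type_code": paragraph["type_code"]}
        list_file.append(entry)
        # Invented sort key: problem number, then type code.
        list_file.sort(key=lambda e: (e["problem"], e["type_code"]))
    return list_file

orders = []
update_list_file(orders, {"kind": "order", "text": "CBC",
                          "problem": 8, "type_code": 61})
update_list_file(orders, {"kind": "order", "text": "SED. RATE",
                          "problem": 8, "type_code": 44})
```

Keeping the list file sorted at store time is what lets the display-building program later page through it without scanning the whole patient record.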
RETRIEVAL PROGRAMS-RETRIEVAL OF DATA
FROM THE PATIENT'S STRUCTURED FILE
The retrieval programs working on structured files
are of two types: the first retrieves both narrative and numeric data in the form of a narrative report; the second forms a time-ordered "flowsheet" of various physiological parameters, clinical chemistry results, and drugs administered. Both are strictly for the retrieval of
data on a single patient.
Each retrieval program can display the retrieved information on either the cathode ray tube terminal or
the high speed printer. Input to the retrieval programs
is a paragraph which represents the retrieval request,
that is, the complete specification of the data to be
retrieved. Included with this paragraph is a string of
internal parameters in the Selection Parameter List.
The user is not aware of these parameters; he need only
select the patient and the sections of the record desired
(for example, Progress Notes on all active problems,
History and Systems Review, or complete record
grouped by major sections, i.e., at IT1 level). The
retrieval program interprets the parameter string
(which is well formed due to the structure of the
retrieval frames).
Because each user must identify himself when he
signs on, it is possible to allow him access to only certain displays in the system. Using this approach it is
possible to limit an individual's access to information
within the system by allowing him to formulate only
certain retrieval requests.
A retrieval may require one or more retrieval cycles
depending on the number of major record sections
(IT1's) included in the request and the degree of grouping required in each major section. For each retrieval
cycle required, the retrieval routine scans the Item
pointers in the patient's Table of Contents to determine which Items contain paragraphs satisfying this
retrieval cycle. The Items are then brought into core
in the order specified by the applicable Item pointers
in the Table of Contents. For each Item the paragraph


pointers are scanned, and for each paragraph pointer
satisfying the current retrieval cycle request, the FORMAT routine is called to output the paragraph. The
address of this paragraph is given to the FORMAT
routine along with certain control information requested
by FORMAT. The FORMAT routine interprets the
paragraph looking for format codes and outputs it,
continuing until terminated by the FORMAT termination code, then returning to the retrieval program. Once
control is returned from FORMAT, the retrieval routine
searches the Item for the next proper paragraph pointer
and continues feeding FORMAT until the list of paragraph pointers is exhausted. The retrieval program returns to the Item list, continuing until the Item list is
exhausted.
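The two-level scan just described can be sketched as a pair of nested loops. Data layouts are simplified stand-ins, and the FORMAT call is replaced by collecting output text.

```python
# Hypothetical sketch of one retrieval cycle: scan the Table of
# Contents for qualifying Item pointers, bring each Item "into core,"
# then scan its paragraph pointers, emitting each qualifying paragraph
# (the emit step stands in for the call to the FORMAT routine).

def run_retrieval_cycle(toc, items, wanted_it1):
    output = []
    for pointer in toc:                      # scan Item pointers in the TOC
        if pointer["it1"] != wanted_it1:
            continue
        item = items[pointer["item"]]        # bring the Item into core
        for para in item:                    # scan paragraph pointers
            if para["it1"] == wanted_it1:
                output.append(para["text"])  # stand-in for FORMAT
    return output

toc = [{"it1": "PROGRESS", "item": 0}, {"it1": "HISTORY", "item": 1}]
items = [[{"it1": "PROGRESS", "text": "note 1"}],
         [{"it1": "HISTORY", "text": "hx 1"}]]
notes = run_retrieval_cycle(toc, items, "PROGRESS")
```

Because the Table of Contents already lists Items in storage order, the cycle emits paragraphs in the sorted order established by the STORE program without any re-sorting at retrieval time.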
A flowsheet is a time-ordered table of multiple medical parameters. Sound interpretation of data involving clinical findings, vital signs, laboratory values, medications, and intakes and outputs requires organization of
the data to clearly reveal temporal relationships and
clarify the inter-relationships of crucial data. A user
requests a flowsheet by selecting the patient, the medical parameters to be included on the flowsheet and the
output device (printer or cathode ray tube). (See the
flowsheet included in the annotated record.)
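A flowsheet can be sketched as a pivot of numeric blocks into time-ordered rows, one row per (date, time) and one column per requested parameter. The block fields are simplified from the text and the values are invented.

```python
# Hypothetical sketch of flowsheet construction: gather the numeric
# blocks whose titles match the requested parameters, then pivot them
# into rows keyed and sorted by (date, time).

def flowsheet(blocks, parameters):
    rows = {}
    for b in blocks:
        if b["title"] in parameters:
            rows.setdefault((b["date"], b["time"]), {})[b["title"]] = b["value"]
    # Each output row: (date, time, {parameter: value, ...})
    return [key + (row,) for key, row in sorted(rows.items())]

blocks = [
    {"date": 701123, "time": 2151, "title": "HCT", "value": 36},
    {"date": 701120, "time": 845,  "title": "HCT", "value": 39},
    {"date": 701120, "time": 845,  "title": "HGB", "value": 11.5},
]
sheet = flowsheet(blocks, {"HCT", "HGB"})
```

Retrieval by type code and time is what the numeric-block representation was designed for; the pivot itself is cheap once the blocks are individually addressable.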
RETRIEVAL OF DATA FROM THE
INTERMEDIATE SELECTABLE LIST FILES
Although technically a retrieval from the Intermediate Selectable List Files, the creation and presentation
of Selectable Lists for the user is done in the context
of his storing (or retrieving) other information to (or
from) the patient's record. For example, to write a Progress Note about a specific problem on the patient's
Problem List, the user must specify on which problem
he is entering data. This is done by showing him, in display form, the list of the patient's Current Problems
and having him select the proper one. It should be
noted that all information in the Selectable List was
previously generated and stored by a user.
Input to this program includes the number (type) of
Selectable List the user is to see. This number points
to an entry in a table which then drives the creation of
the frames in the display dictionary containing proper
contents from the appropriate Intermediate Selectable
List File. The user is then automatically shown the
first display containing the list of elements. If more
than one display is necessary, the additional displays are linked to the first display. The selection of the appropriate element in the list is then made under the
Human Interface Program.
The combination of the Intermediate Selectable List Files with STORE LIST and the subsequent creation of Selectable Lists allows information previously entered


into the system to govern the storage and retrieval of
other patient information, facilitating a closed system.
AN ANNOTATED EXAMPLE OF AN ACTUAL
RECORD GENERATED ON THE SYSTEM
The following is an actual record from one of the
patients on the computerized ward. This is a complete
"cycled" record; i.e., data that have been added to a
section are output chronologically within that section
(e.g., page 6 includes the PMH & SR additions entered to the G.U./Renal and Neurology sections by the physician after reading the history). This printed output serves as the "paper" chart and is kept in the chart rack where the traditional paper record was kept. In this way a back-up record is always available, and attendings or consultants can utilize this paper record as well as the cathode ray tube terminal. This printed output is never written on, and a new copy (or any updates) is printed daily.
The annotations associated with each page will help
explain how the record is constructed, its relationship
to the data as they are stored in the patient's structured
non-purgable file, and the user's relationship to various
aspects of the data as additional information is added
to the record. In the annotations, the following abbreviations are used in specifying the different storage modes (SM):

SM = D  Store Directly from selections or from the keyboard.
SM = T  Store Translated by a dictionary lookup based upon the frame number and the choice number.
SM = N  Store Numeric from selections in an internal form which allows multiple numeric blocks within one paragraph (may include typed-in information).
The first page of the case is duplicated in order to show first the layout and then the content on the same page. It would be helpful to refer to the Explanatory Legend for Figure 3 before proceeding. The blacked-out spaces throughout the case are names which have been covered to protect the confidentiality of the patient.
ANNOTATED EXAMPLES OF THE
SELECTABLE LIST FILES ASSOCIATED
WITH THE PRECEDING RECORD
The following pages are copies as they appear on the
cathode ray tube screen of the Selectable List on the
same patient whose record has just been presented.


[Record page 1, as printed by the line printer: the ACTIVE PROBLEM LIST and TOTAL PROBLEM LIST (problems include chronic alcoholism, Laennec's cirrhosis, pitting edema, inflammatory articular disease, hypertension, urolithiasis, pleural fluid, and low hematocrit/HGB) and the INFORMANTS section. Patient names are blacked out.]

Annotations for page 1:

1. The centered titles are output whenever a new section (Information Type One, IT1) is output. They are not stored in the record but are supplied by the retrieval program.

2. This identification data is contained in the patient's Table of Contents and is part of the header on each page to uniquely identify the output. Note that the name has been blocked out.

4. The subtitles are output whenever a new subsection (Information Type Two, IT2) is output. (See page 2 of the record.)

5. The user number, time, and date are displayed at the start of the paragraph or paragraphs to which they apply, whenever they change. When the user number, the time, or the date do not change, they are not output. The date, time, and user number are positioned on the right to avoid distraction from the medical content.

6. All typed-in data entered from the keyboard are denoted by the "@" signs surrounding the data.

NOTE: All material printed here (exclusive of certain titles) was retrieved by the RETRIEVE program from the patient's record. All of this information was stored into the patient's record by the STORE program.

NOTE: The "ACTIVE PROBLEM LIST" is a subset of the "TOTAL PROBLEM LIST," consisting of the last statement of each of the patient's active problems. For the problem list, SM = D; IT1 = problem list; IT2 = the body system under which the problem is defined (this cannot be determined from the printout). The IT2 is known to the system, being associated with each problem when displayed on the list of the patient's problems by the VISUAL LIST program before the user writes a problem oriented note. The IT2 is used to determine the group of tests and symptoms from which the user can choose in writing an INITIAL PLAN or PROGRESS NOTE. For example, if the user selected "Low hematocrit/HGB" in writing a progress note, the system would automatically display the HEMATOPOETIC progress note displays.

NOTE: The "TOTAL PROBLEM LIST" is the "ACTIVE PROBLEM LIST" plus all problems that have been resolved/inactivated or combined. Here the complete history of the problem can be seen, cf. problem 1. The "-->" on the "ACTIVE" list indicates the problem was restated (updated); on the "TOTAL" list we see the original definition of the problem as well.

NOTE: All sections from "INFORMANTS" through "PHYSICAL DATA BASE BY SYSTEM" are part of the DATA BASE portion of the record and are not problem oriented.


[Record pages 2 and 3, as printed by the line printer: the HISTORY & LAB DATA BASE section, including SX (DRINKING PROBLEM, JOINT PAIN, EDEMA), CONSTITUTIONAL SUMMARY/GENERAL, SOCIAL PROFILE, INFECTIOUS DISEASE, and IMMUNIZATIONS.]

Annotations for pages 2 and 3:

NOTE: This demonstrates the meta-structure approach in handling symptoms.

NOTE: Example of the indentation levels of the FORMAT routine as used by RETRIEVE to the printer: "SX" is at level 0; "DRINKING PROBLEM" at level 1; "ONSET," "COMMENCED," etc. at level 2. These codes are stored with the paragraphs of data by the STORE program, since they will be needed every time that this data is retrieved from the record.

NOTE: Because of the specified mode of retrieval ("cycled"), each body system contains all HISTORY & LAB DATA BASE material entered under that body system in a cumulative manner, regardless of the time of entry or the aspect of the HISTORY & LAB DATA BASE (e.g., LAB order, patient HX). This ability of the RETRIEVE program allows an integrative association of subject-related information which is entered in a temporally unrelated manner. This is facilitated by the STORE program storing the information in such a way that this and other associations (e.g., flowsheets output by RETRIEVE-FLOWSHEET) are possible. This section of the record is "cycled" on each body system and may contain patient entered HISTORY (SM = T), physician entered HISTORY (SM = D), and/or physician entered LAB orders and reports (SM = N). This demonstrates that one body system under this IT1 can contain all modes of storage (see GENITO-URINARY/RENAL, below).

NOTE: "HISTORY & LAB DATA BASE" is a subtitle.

NOTE: The sections of the HISTORY which are not typed in have been produced by the patient sitting at a terminal answering questions. The responses are translated and stored by the STORE program (SM = T), utilizing the translation dictionary created by the DETRAN program. The code (MD212) in this case refers to the individual who started the patient on the history.

NOTE: Typed-in sections of the HISTORY are corrections or additions made by the physician after reading the patient generated HISTORY (SM = D).

NOTE: The larger patient generated HISTORY sections are followed by a summary of the number of POSitive, NEGative, Do Not Know, and Do Not Understand responses to that section of the patient administered HISTORY.


-.-iCiiOM.PiiUii'IiEiiRliiZiiEiiiD.POMRts", UTER.ll -- SIll 4 luuno 10158
BHOLDZ 543-501-8 M 49
•
INJECTIONS IN PAST F'IVE YEARSa INf'LUENZAI SMALL POX
IVACCINATlON) I TYPHOIOI OP'.
IN PAST TEN yEARS ftAS "AD POLIO SHOTS.
5 POSt 0 NEG. 0 DNKt 0 GNU.

PAGE

4

FAMILY IIISTORUGENETICI
HYPERTENSION- F'ATHER. STROKE- rantER. HYPERTENSION- IIOTHF.:R.
It[ART OISEASE- FATHER. T.B.- SISTER.
FATHER DIED AT AGE AGED 50 TO 59. MOTHER DIED AT AGE 60 TO 69.
PATERNAL GRANOF'ATHER DIED AT AGE UNKNOVN. IIOT KIIOWIiI
IF' PATERNAL GRANOIIOTHER LIVING. MATERNAL GRANOF'ATIt[R
DIED AT AGE NOT KIIOWIiI. MATEIINAL GRANDMOTHER OlEO AT
AGE NOT I(NOVN.
16 POSt 6 NEG. I ONK. 0 ONU.
DEAN-ALLERGY I
OfTEN WORKED AROUND CHEMICALS. SOLVENTS. OR CLEANING F'LUIDS.
I POS. 6 NEG. 0 DNK. 0 DNU.
EYE. EAR. IIOSE. THROATI
IlEARS OR HAS 1I0Rte GLASSES. ./0 GLASSES HAS TROUBLE SEEING
CLOSE. VISION SATISF'ACTORY III GLASSES.
3 POS. 17 NEG. 0 DNK. 0 DNU.
MUCH LOUD NOlst: IN PLACE OF' EMPLOYNENT.
HAS SHOT A GUN A GREAT DEAL.
I POS. 14 NEG. 0 ONK. 0 GNU.

NOTE: ~eN are LAB order. under the ax • LAB
DATA IlASE (11'1) Nc:tion (. . . . ). TIle. . are _ch

UP

.tored . . n_ric: block. in tile patient'. record
by the S'I'ORE progr_. After each addition to a
patient'. record, the S'I'ORE progr_ call.
STORE-LIST which then proc:e•••• · _
.ntrie.
to the patient'. record (by _iog a lIB'faIZVE
routine) • 'I'he new entrie. in thi. c:_, would
be plac:ed on a U.t of thb patient'. _t.tanding lab. order.' . .e the l b t . on page.
for ex.-ple.).

DENTAL'
11/0 DE"T~ X-ItAYS.
EATS MUCH 8READ. POTATO[S OR ....CARONI. SOMETIMES EATS RAil
VEGETABLfS.
8RUSHES TEETH 11/ TOOTHBRUSH ONCE A DAY.
4 POS. II NEG. 0 DNK. 0 GNU.

It[ .... TOPOUICI

o ~OS.

II NEG.

o

DNK.

• GNU.
"'0014/ 17'05

~~I RAT[

"7."8

~

........- - - - - - - - - - - - - - - - - - /MDlll/ 8145 II/ZOn
CIIC' HeT C.) 39. NGII CGetS.1 11.5. wee: ...0 • IIBC 01,.F' SEG/
54•• IdfG/ I.' LMCI 3lSi MOMI 10.1 EOSI I.' 8AS/ I.'
PLATELfTS
3...../ . . UNCORII .29MM/" COIIII • SED IfATf
1110014/ Z1I51 II/Z3/1.
ttGI
11IHPF'

He'
1I.4GetS.

361

HGe

NOTE: These are the reported values for the previous two lab. orders, also stored as numeric blocks by the STORE program (SM-N). The orders were chosen from the list of outstanding lab. orders for this patient as constructed by STORE-LIST. The set of displays containing the appropriate choices for reporting the value is shown to the user when he chooses the order to report. This set of displays is pre-defined for each order at the time it is ordered by means of parameters which cause the placement of a specific value in a specific field in the numeric block for that order. In most cases, the actual report of a test precedes the name of the test (e.g. Sed. rate report). After these results have been stored in the patient's record by STORE, STORE-LIST will process these numeric blocks, removing the "CBC" and "SED. RATE" orders from the list of outstanding orders since, unless otherwise specified, lab. orders are assumed to be executed only once.

RESPIRATORY:
-- COMPUTERIZED POMR (ITER.2) -- SN 4 12/10/70
SBHOLD2 543-507-8 M 49
10:51
PAGE 5
CONTACT W/ PERSON W/ T.B.
SMOKES CIGARETTES: 1 PACK/DAY FOR 31-40 YEARS.
5 POS. 15 NEG. 1 DNK. 0 DNU.
CHEST X-RAY (PA).

/CC015/ 01:58 11/22/70
CHEST X-RAY (PA): *PA AND LAT CHEST: A NORMAL CARDIAC SILHOUETTE... LUNG FIELDS: CLEAR EXCEPT FOR A RT. PLEURAL ANGLE BLUNTING WHICH WAS NOT PRESENT IN 1968. THIS MAY REPRESENT OLD PLEURAL REACTION FROM A PREVIOUS ILLNESS BUT COULD REPRESENT A NEW PROCESS. FLUOROSCOPIC EXAM RECOMMENDED.*

NOTE: This demonstrates that material can be typed in (using the KEY program) when reporting a test result that is stored as a numeric block by the STORE program (SM-N).

<DONE>

BREAST:
0 POS. 4 NEG. 0 DNK. 0 DNU.

CARDIOVASCULAR:
PAINLESS SWELLING FREQUENT IN BOTH FEET OR ANKLES WHICH GOES
DOWN OVERNIGHT.
HAS TAKEN HYPERTENSIVE MEDS.
TOLD HAS HAD HIGH BLOOD PRESSURE.
5 POS. 11 NEG. 1 DNK. 0 DNU.
EKG
/CC015/ 17:28 11/23/70
EKG *RHYTHM=SINUS. V RATE=ABOUT 90. PR INT=0.16. QRS INT=0.08. QTC=NL. AXIS=RLD. INTERPRETATION=SINUS RHYTHM. PVC'S ARE PRESENT. POOR R WAVE UNTIL V3.*

<DONE>

GASTROINTESTINAL:
H/O ESOPHAGEAL X-RAYS.
H/O STOMACH X-RAYS.
H/O LIVER DISEASE, JAUNDICE.
H/O GALL BLADDER X-RAYS.
H/O X-RAYS OF BOWEL.
H/O HEMORRHOIDS.
7 POS. 19 NEG. 0 DNK. 0 DNU.

MUSCULO-SKELETAL:
H/O SHOULDER TROUBLE.
1 POS. 14 NEG. 0 DNK. 0 DNU.

ENDOCRINE:
MORE CLOTHES WORN IN COLD WEATHER THAN BEFORE.
LOWEST WEIGHT AS ADULT 100-110 LBS.
GREATEST WEIGHT AS ADULT 161-170 LBS (AT AGE 21-30).
4 POS. 16 NEG. 1 DNK. 1 DNU.

NA
K
2 HR. P.C. GLUC

Initial Operational Problem Oriented Medical Record System


C02
CL
/MD212/ 8:45 11/20/70
135 MEQ/L NA
3.5 MEQ/L K
27 MEQ/L C02
95 MEQ/L CL
112 MG/100 ML 2 HR. P.C. GLUC *NOT STATED ON LAB SLIP WHETHER 2 HR PC*

GENITO-URINARY (RENAL):
/MD212/ 15:14 11/19/70
URINE HAS BEEN BLOODY OR COFFEE COLORED.
TOLD HAD KIDNEY OR BLADDER STONES. STONES PASSED W/ URINE.
TOLD HAD KIDNEY OR BLADDER INFECTION.
DOES NOT HAVE ANY SEX PROBLEMS.
HAS NOT BEEN CIRCUMCISED.
6 POS. 18 NEG. 2 DNK. 0 DNU.
/MD014/ 17:05
BUN
ROUTINE URINE
QUAL. VDRL
18:55
*HAD KIDNEY OR BLADDER STONES SOME TIME AROUND 1942. NONE SINCE. STATES THE DRS. "DISSOLVED" THE STONES WITH A SPECIAL LIQUID (DR. ____). NO GU SYMPTOMS RECENTLY.*
/MD212/ 8:45 11/20/70
ROUTINE URINE YEL. CLR. SPGR: 1.005. PH 5. PROT NEG. CHO NEG. AC. NEG. SED: SP. WBC: FEW./HPF.
3 MG/100 ML BUN
/CC025/ 15:51
NEGATIVE. QUAL. VDRL
/MD014/ 21:51 11/23/70
S. CREATININE
/CC025/ 16:31 11/24/70
1.0 MG/100 ML S. CREATININE
NEUROLOGY:
/MD212/ 15:14 11/19/70
TROUBLE MOVING ARMS AND LEGS ON BOTH SIDES.
NUMBNESS OR TINGLING OF ARMS, HANDS, LEGS OR FEET PRESENT FOR
SEVERAL WEEKS.
5 POS. 18 NEG. 0 DNK. 0 DNU.
/MD014/ 18:55
*DIFFICULTY MOVING THE LEFT LEG IS BECAUSE OF THE PAIN ASSOCIATED WITH THE ARTHRITIS IN THE LEFT KNEE. DENIES NUMBNESS OR TINGLING IN THE ARMS AND LEGS.*
PSYCHIATRY:
/MD212/ 15:14
DECREASED INTEREST OR ENJOYMENT IN SEX.
2 POS. 34 NEG. 0 DNK. 0 DNU.

SUMMARY STATISTICS:
INTERVIEW COMPLETED.
TOTALS:
105 POS. 267 NEG. 8 DNK. 0 DNU.



NOTE: These are the totals for the patient-administered HISTORY sections as stored in the patient's record by the STORE program (SM-T).

/MD212/ 15:03 11/19/70
VITAL SIGNS:
TEMP, ORAL (DEGREES C): 36.1
PULSE, RADIAL: 80/MIN. RHYTHM NOT NOTED.
RESPIRATIONS: 20/MIN.
BP, LT ARM, SITTING: 162/90 MM HG. VP: NOT EVALUATED.
WEIGHT, LB: 125. HEIGHT/LENGTH: NOT MEAS.
/MD014/ 18:05
VITAL SIGNS: TEMP: NOT TAKEN.
PULSE, RADIAL: 108/MIN. REGULAR.
RESPIRATIONS: 20/MIN.
BP, RT ARM, SUPINE: 170/105 MM HG.
BP, LT ARM, SUPINE: 170/105 MM HG.
BP, RT ARM, STANDING: 170/104 MM HG.
JVP: 0 CM ABOVE STERNAL ANGLE AT 45 DEGREES ELEV. WEIGHT: NOT
DETERMINED. HEIGHT/LENGTH: NOT MEAS.
GENERAL APPEARANCE:
THE PATIENT IS A CHRONICALLY ILL, NORMALLY NOURISHED, MIDDLE
AGED (APPEARS STATED AGE), CAUCASIAN MALE; RESPONSIVE,
COOPERATIVE, AND CAN CARE FOR SELF.
CURRENTLY REQUIRES: NO AIDS. PT IS ANXIOUS.
SKIN: NORMAL.
HEAD: NORMAL.
EYES:
EYE MOVEMENTS: EYE MOVEMENTS NORMAL.
PUPILS: ROUND, EQUAL, ... MM. ESTIMATED. REACT TO LIGHT & ACCOM.
FUNDUS:
BILAT. NORMAL.

EARS:
EXTERNAL CANAL: BILAT. NORMAL.
TYMPANIC MEMBRANE: BILAT: NORMAL COLOR, MID POSITION, LIGHT
REFLEX NORMAL.
HEARING: NORMAL BILAT.
NOSE: NORMAL.
NASOPHARYNX: NORMAL.
OROPHARYNX:
TONGUE: BILAT. *COARSE TREMOR, GENERALIZED*
TEETH:
GENERALLY: MANY ABSENT.

NOTE: These vital signs were entered as soon as the patient arrived on the ward.

NOTE: These vital signs and the rest of the physical exam were entered by the physician following the normal patient work-up on the ward.

Spring Joint Computer Conference, 1971


NECK:
THYROID: NORMAL SIZED, SYMMETRICAL.
LYMPH NODES: NONE PALPABLE.
CHEST AND LUNGS:
RESPIRATION: NORMAL.
INSPECTION: NORMAL.
PALPATION: NORMAL.
PERCUSSION: RESONANT THROUGHOUT.
AUSCULTATION:
NORMAL BREATH SOUNDS: BILAT. ENTIRE CHEST.
CARDIOVASCULAR:
ARTERIAL PULSES: ALL NORMAL.
JUGULAR VENOUS PULSE: VENOUS PRESSURE: 0 CM ABOVE STERNAL ANGLE
AT 45 DEGREES ELEVATION.
PALPATION:
APEX BEAT: LOCALIZED AT MCL IN 4TH ICS.
AUSCULTATION: NORMAL.
ABDOMEN: INSPECTION NORMAL. PERCUSSION NORMAL. AUSCULTATION
NORMAL.
RECTAL:
NORMAL.
STOOL ABSENT.
EXTREMITIES:
JOINTS:
WRIST, RT: SWELLING/ENLARGEMENT: OVER ENTIRE JOINT, MAINLY
SOFT TISSUE. RUBBERY. TENDER. HOT.
JOINTS:
KNEE, LT: PAIN ON MOTION: ACTIVE & PASSIVE. CONSTANT.
INFLAMMATION W/O SWELL.: OVER ENTIRE JOINT.
EXTREMITIES:
LOWER LEG: 3+ PITTING EDEMA. ANKLES. BILAT. RT & LT.
BACK/FLANK/SPINE:
PALP/PERCUSSION PAIN: BILAT. SOFT TISSUE: FLANK.
COSTOVERTEBRAL ANGLE, MILD. BILAT. SOFT TISSUE: MILD.
MALE GENITALIA:
PUBIC HAIR: FEMALE ESCUTCHEON.
NEUROLOGIC EXAM:
MENTAL STATUS: FULLY RESPONSIVE. DAY OF WEEK, DATE OF MONTH.
FULLY COOPERATIVE.
ACTIVITY & BEHAVIOR: NORMAL MOTOR ACTIVITY. ABLE TO CARE FOR
SELF. NORMAL BEHAVIOR.
APPEARANCE: UNKEMPT, SLOPPY.
MEMORY, INTELL., PARIETAL: REMOTE MOD. IMPAIRED. NORMAL

INTELLIGENCE. ABLE TO ABSTRACT.
FUND OF KNOWLEDGE: SLIGHTLY DEFICIENT.
SPEECH: NORMAL.
CALCULATIONS: DOES SERIAL 7'S.
INSIGHT & JUDGEMENT: UNDERSTANDS ILLNESS. JUDGEMENT NORMAL.
MOOD & AFFECT: WITHDRAWN. FLAT.
THOUGHT CONTENT: NORMAL.
CRANIAL NERVES:
I: NOT TESTED.
II: ACUITY NOT TESTED. COLOR VISION NOT TESTED. VISUAL
FIELDS NORMAL TO CONFRONTATION. FUNDI: NORMAL.
III,IV,VI: RECORDED UNDER EYE EXAM.
V: MASTICATION INTACT. SENSATION INTACT.
VII: MOTOR INTACT. TASTE NOT TESTED.
VIII: HEARING NORMAL. CALORICS NOT TESTED.
IX,X: NORMAL.
XI: NORMAL.
XII: TREMOR. BILAT.
MOTOR: (RT HANDED)
STRENGTH: NORMAL.
BULK:
ATROPHY, SLIGHT: BILAT.
TONE: NORMAL.
COORDINATION: NORMAL.
GAIT & STANCE: VEERS. TO LT.
ABNORMAL MOVEMENTS: TREMOR. ENTIRE BODY. AT REST. CONTINUOUS. MODERATE (6-9/SEC). COARSE.
REFLEXES:
DTR'S:
BICEPS JERK: BILAT. 2+
TRICEPS JERK: BILAT. 2+
KNEE JERK: BILAT. 2+
ANKLE JERK: BILAT. 0.
PLANTAR RESPONSE: RT FLEXOR. LT FLEXOR.
ABDOMINAL PRESENT BILAT.
SENSORY: COOPERATION GOOD.
SUPERFICIAL PAIN: NORMAL.
TOUCH: NORMAL.
VIBRATION: MODERATELY DECREASED RT SIDE: LEG.
VIBRATION: SLIGHTLY DECREASED LT SIDE: HG.



PLN-GEN WARD:
PT.ACT: UP W/ ASSISTANCE AS TOL.
VISITING REGULAR.
HOUSE DIET. FLUIDS AD LIB.
CHLORAL HYDRATE 1000MG PRN X1, P.O.
MILK OF MAGNESIA SUSP. 30ML PRN X1, P.O.
TEMP Q8H
/MD014/ 19:28 11/19/70

NOTE: "GENERAL WARD INFORMATION" contains information necessary for general ward/hotel functions. This section is not problem oriented.

NOTE: These are actual orders which are stored in the patient's record as numeric blocks by STORE (SM-N). Similar to the method mentioned above, these are added to a list of this patient's outstanding orders by STORE-LIST. (Note these on the patient's Non-Rx order list.)


PULSE Q8H
RESP Q8H
BP Q8H
OBJ:
/UN024/ 22:04
CHLORAL HYDRATE 1000M*
/US010/
37.6'C TEMP Q8H
88/MIN PULSE Q8H
20/MIN RESP Q8H
140/100 BP Q8H
*TAKEN AT 11:00 AM.*

~

1107 lInono

'UN020' 10158

160'110 8P oeH
8"MIN PULSE OIH
lO'MIN RESP 08H
21111
37.3'C
84'MIN
20,n1N

TEMP OIH "PM TEMP RECORDED LATE.
PULSE OIH .IP" RECORDED LATE_
RESP 01" . .PM RECORDED LATE.
'US0101

9116 lInlno

'UNOlJ'

9'56

37.2'C TEMP OIH
8B 'MIN PULSE OIH
20'MIN RESP O8H
150/100 BP OIH
I"MIN PULSE OIH
lO'MIN RESP OIH

NOTE: This is the execution of the "CHLORAL HYDRATE" ordered immediately above. This execution was specified by the Unit Nurse (UN024) by choosing the order from the outstanding list of orders displayed by the VISUAL LIST program from the list created by STORE-LIST. Note that the entire order is not shown, but rather is truncated with a "*". This was done to make the record easier to read; the entire order is always available in the record. The order is assumed to be executed as stated unless otherwise stated (c.f. blood pressure (BP) entered by UN020 at 10:58). Since this order was specified to be executed only once ("X1"), the order is removed from the list of outstanding orders when the numeric block reflecting the execution (stored by STORE, SM-N) is processed by STORE-LIST.
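The two display conventions just described (truncating the order text with a "*", and dropping a once-only "X1" order off the outstanding list when its execution is stored) can be sketched as follows. This is an illustrative modern-Python reconstruction under stated assumptions; the names and the 24-character display width are inventions for the example, not facts about the original system.

```python
# Sketch (assumed behavior): printing an order execution truncates the
# order text with a trailing "*", and an order marked "X1" (execute once)
# is removed from the outstanding list once its execution is recorded.

def truncate(order_text, width=24):
    """Shorten long order text for readability; "*" marks the truncation."""
    return order_text if len(order_text) <= width else order_text[:width] + "*"

def record_execution(outstanding, order):
    line = truncate(order["text"])           # what the printed record shows
    if order.get("times") == "X1":           # once-only orders drop off
        outstanding.remove(order)
    return line

order = {"text": "CHLORAL HYDRATE 1000MG PRN X1, P.O.", "times": "X1"}
outstanding = [order]
line = record_execution(outstanding, order)
```

The full, untruncated order text would of course remain available in the record itself, as the note states.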

IUNOZ51 15156

17B'110 8P OIH
81o'MIN PULSE OIH
2UNIN RESP O8H
'USOIOI 16104
37'C TEMP Q8H
. .'MIN PULSE alH
lO,nlN RESP oeH
123 LIS

IUN014'

6139 UnU70

'UNOI5'

7&101

IUS010'

1146

WEIGHT GOO

190'112 BP OIH
I"MIN PULSE aIM
20/MIN RESP O8H
36.I'C TEMP""
II ,nlN PUlSE OIM
28/n1N RESP oeH
'UNO lJ, 1 .. SO
138'90 BP aIM
14,n1N PUlSE OIH
20'MIN RESP oeH
'UNOZS' 15.43
J60/90 BP OIH
..UnIN PULSE 08"
lZ'MIN RESP""

37.4·C TEMP OIH
88 'MIN PUlSE 08H
lO'MIN RESP OIH

11100



7133 11.123/70

IUN02U

9UMIN PULSE Q8H
2 O'MIN RESP QIH
180'100 8P OIH
92'MIN PULSE OIH
2 O'MIN RESP QIH
16UlOO 8P OIH

IUSOI01 16105

IUN0171 11'25

IUSOIO' 19:]6

7157 11/24110

36.9·S TEMP 08H
92 'MIN PULSE 08H
lO'MIN RESP Q8H
~2'MIN
PULSE 08H
20'MIN RESP 08H
160/98 8P 08H

37.2·S TEMP 08H
88 'MIN PULSE OIH
20'MIN RESP OIH .THIS IS A 4:00PM TPR._
76'MIN PULSE 08H
2 O'MIN RESP 08H
166/104 BP OIH

IUN0221 10 '34

IUSOIOI 18117

IUN0201 11145

23110

104'MIN PUlSE QIH
20/MIN RESP Q8H
IlO'90 8P 08H .TAKEN AT 1100 PM ••
80/NIN PULSE OIH
170'100 8P OIH
lOIMIN RESP OIH
36.5'5 TEMP OIH
12 IMIN PULSE OIH
20/MIN RESP OIH
14/MIN

PULSE OIH

9149
14111

PT.ACT: UP W/ ASSISTA*

37.2'5 TEMP OIH
100 '"IN PULSE 08H
20'MIN RESP 08H


IUSOI01 15154

37 .1'5 TEMP OIH
108 'MIN PULSE 08H
20'MIN RfSP OIH
160'90 BP 08H


IUNOl51

7138 l11l5/10

IUSOI0/

1119

/UNOZU 11140

20/1lliN RESP oaM
150/90 8P oaM


IUS0101 16109

36.I I C TEMP oaM
81t IMIN PULSE oaH
20/MIN RESP OIH

IUN0201 17136

81t/MIN PULSE OIH
I 6/MIN RESP OIH
156/116 8P oaH .AT 4100 PM ••
96/MIN PULSE OIH
2 OIMIN RESP OIH
1821100 8P 01"
120 L8S


IlEIGHT 000

37. I C TEMP OIH
881I11IN PULSE OIlH
20/M I N RESP oaH

21:56

IUNOlt31

6139 11126170

IUSOIOI

7159

IUN0201 13137

681MIN PULSE oaH
2 OIMIN RESP.OIH
188/120 8P oaH

IUN021t1 15152

92/IilIN PULSE 08H
20/MIN RESP oaM
160/100 8P oaH

IUN0171 151S1t

37. I C TEMP oaH
96/14IN PULSE oaH
21t1MIN RESP 08H
IUN021t1 191"5

81t/MIN PULSE 08H
18/MIN • RESP 08H
161t/98 8P 08H
IUN0171 20115
*NEOLOID 60 CC.S GIVEN P.O. FOR IVP PREP*
/UN014/ 01:16

~

11127170

150/92 8P 08H
80/MIN PULSE 08M
18/MIN RESP 08H

36.8 1 C
88 IMIN

20/MIN

IU50101

711t5

IUN0221

7148

TEMP 08H
PULSE 08H
RESP Q8H

1621110 8P 08H
PLN-GEN WARD:
DISCHARGE PT.

NOTE: This is an execution of an order which is part of a preparation regimen for another order. For cases such as this, we instruct the nurse to type in (using KEY) the order as it was executed, under the proper problem or under "GENERAL WARD" if they do not know the problem number. This should have been entered under problem #5 (HYPERTENSION, H/O), for which the IVP was ordered.

/MD014/ 14:21
*WITH FRIEND, VALUABLES, VIA AMBULATORY.*

OBJ:

IUS0101 16106

37.2'C

TEMP OIlH

96/MIN PULSE Q8H
20/MIN RESP Q8H
/UN017/ 16:45
*PT. DISCHARGED AT 5:10 P.M. WITH PRESCRIPTIONS.*

NOTE: The remainder of this printed record consists of all of the data in sequence on each of the problems on the "TOTAL PROBLEM LIST", beginning with problem #1. All information on a single problem is not stored physically together by the STORE program in the patient's record, but rather is physically stored in a chronological fashion with pointers to each paragraph (identified by IT1, IT2, Date, Time, and Problem #) so many different associative modes of retrieval are made possible, e.g. RETRIEVE data as it was chronologically entered, RETRIEVE data grouped by problem, RETRIEVE a time-ordered list of physiologic parameters (FLOW SHEET), etc. The complexity and flexibility of the stored patient data is not obvious to most users. The RETRIEVE program, in the "cycled" mode specified here, goes through the record and brings together all information on each problem.

<PLANS - INITIAL>
---------------------------
1. ALCOHOLISM, CHRONIC.
PLN-R/O:
/MD014/ 19:28 11/19/70
R/O *IMPENDING DT'S.*
PLN-DRX:
*PARALDEHYDE 7.5ML Q4H PRN STANDING, FOR AGITATION, P.O.*
*CHLORAL HYDRATE 1GM QHS PRN STANDING P.O.*
PLN-PRO:
*PADDED SIDE RAILS, PADDED TONGUE BLADE, HAVE SYRINGE WITH ...*
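The storage scheme described in the note above — chronological paragraphs tagged so that several associative retrieval modes fall out of one store — can be sketched as follows. This is an illustrative modern-Python reconstruction; the field names and sample data are assumptions, not the original pointer structure.

```python
# Sketch (illustrative): each paragraph carries its date, time and problem
# number, so the same chronologically stored data can be retrieved either
# in entry order or "cycled" -- regrouped problem by problem.

from itertools import groupby

paragraphs = [
    {"date": "11/19/70", "time": "19:28", "problem": 1, "text": "PLN-R/O ..."},
    {"date": "11/19/70", "time": "21:22", "problem": 4, "text": "PLN-DB ..."},
    {"date": "11/20/70", "time": "09:31", "problem": 1, "text": "OBJ ..."},
]

def retrieve_chronological(paras):
    return list(paras)                       # stored order IS entry order

def retrieve_cycled(paras):
    # bring together all information on each problem, problem by problem
    key = lambda p: p["problem"]
    return {k: list(g) for k, g in groupby(sorted(paras, key=key), key)}

cycled = retrieve_cycled(paragraphs)
```

In the cycled view, the two problem-1 paragraphs come out together even though a problem-4 paragraph was entered between them, which is exactly what the printed record's per-problem section shows.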

<PROG NOTES>
---------------------------
1. ALCOHOLISM, CHRONIC.
RX-GIVEN:
/UN025/ 20:11 11/19/70
PADDED SIDE RAILS, PA*
*5:20PM* PARALDEHYDE 7.5ML Q4H*
SX:
/MD014/ 9:31 11/20/70
OBJ: *SPENT FAIR NIGHT.*
*APPEARS LESS SHAKY THAN LAST NIGHT. PARALDEHYDE WORKING WELL. ORIENTED EXCEPT FOR DAY OF MONTH. THINKING STILL SLOWED HOWEVER.*
RX-GIVEN:
/UN021/ 10:51
PADDED SIDE RAILS, PA*
/UN017/ 22:18
CHLORAL HYDRATE 1GM Q*
SX:
/MD014/ 9:46 11/21/70
*IMPROVING. APPETITE RETURNING.*
OVERALL COURSE: GETTING BETTER.
PLN-DRX:
*THIAMINE 50MG 1/DAY X1, S.O. I.M.*
*THERAGRAN-M 1CAP. 1/DAY STANDING, S.O. P.O.*
RX-GIVEN:
/UN013/ 22:16
CHLORAL HYDRATE 1GM Q*
OBJ:
*11/20/70 2 HR PC = 112 MGS.*
/CC025/ 11/22/70
RX-GIVEN:
/UN013/ 11:13

NOTE: These drug orders, after being stored in the record by STORE (SM-N), will be added to the patient's list of outstanding orders by STORE-LIST. Note that the drug names were typed in. This indicates that the drugs given were not on the list of pre-defined drugs for that problem statement. Since these are standing orders, they can still be seen on the list of outstanding Rx orders.

NOTE: Here are executions of two of the orders from above. Note they are truncated with a "*" for ease in reading the record. Note also that the PARALDEHYDE was not executed as ordered but rather was given at 5:20 p.m.


THERAGRAN-M 1CAP. 1/D*
THIAMINE 50MG 1/DAY X1*
/MD014/ 10:26
*FEELING BETTER, EATING.*
OBJ:
*LITTLE TREMOR, CALM. ONLY REQUIRED INITIAL DOSE OF PARALDEHYDE ON ADMISSION, NONE SINCE.*
PLN-OB:
*SOCIAL SERVICE CONSULT.*
PLN-DRX:
12:41
*THIAMINE 50MG STAT X1 ONLY, I.M.*
RX-GIVEN:
/UN017/ 22:15
CHLORAL HYDRATE 1GM Q*
/UN013/ 10:13 11/23/70
THERAGRAN-M 1CAP. 1/D*
/UN017/ 23:05
CHLORAL HYDRATE 1GM Q*
CON REPLY:
/MD320/ 9:11 11/24/70
AGREE W/ PROBLEM AS FORMULATED.
AGREE W/ PLANS AS FORMULA*
AGREE W/ PROBLEM AS FORMULATED.
AGREE W/ PLANS AS FORMULA*
OBJ:

NOTE: Here is another order in the form of a request for a SOCIAL SERVICE CONSULT.

NOTE: This is the reply to the request for a SOCIAL SERVICE CONSULT entered by the consultant. There are a few duplicate entries and errors in using the type-in program (KEY).

*MR. ____ HAS SPOKEN TO ME ... AND HOSPITAL. MR. ____ HAS KNOWN PATIENT FOR SOME TIME AND KNOWS HIS STORY QUITE WELL. MR. ____ AND OTHER MEMBERS OF HIS AGENCY WILL OFFER HELP ... WHILE IN HOSPITAL AND AFTER DISCHARGE. MR. ____ HAS EXPRESSED CONCERN AND QUESTIONS IF PATIENT COMPETENT TO HANDLE HIS PERSONAL AFFAIRS AFTER DISCHARGE FROM THE ...*
/UN013/ 10:22

THERAGRAN-M 1CAP. 1/D*
CON REPLY:
/MD320/ 12:17
AGREE W/ PROBLEM AS FORMULATED.
AGREE W/ PLANS AS FORMULA*
OBJ:
*PLAN: 1. WILL ACT AS LIAISON BETWEEN O.E.O. ALCOHOLISM PROGRAM AND MFU STAFF. 2. IF OUR DEPARTMENT CAN BE OF HELP IN SOME ...*
OBJ:
OBJI

/UN020/ 18:45
*PATIENT APPEARS SOMEWHAT CONFUSED THIS EVENING*


RX-GIVEN:
/UN024/ 21:40
CHLORAL HYDRATE 1GM Q*
SX:

SPEAK~~~A~I!~l:;aa.

tllN

/MD014/ 9:39 11/25/70
SX:
*RELATIVES AND FRIENDS HAVE RAISED THE QUESTION OF MENTAL DETERIORATION IN THE PATIENT. THEY HAVE SUGGESTED THAT HE HAS BECOME SOMEWHAT DULL MENTALLY AND AT TIMES SEEMS TO HAVE FALSE IDEAS SUCH AS SUGGESTING THAT HE HAS AN APPLICATION IN FOR WORK AT THE WEEKS SCHOOL OR THINKING THAT HIS WIFE IS STILL ALIVE (THIS WAS SEVERAL WEEKS AGO).*
OBJ:
*PATIENT NOTED BY STAFF TO BE ACTING STRANGE AND CONFUSED AT TIMES. FOR EXAMPLE, YESTERDAY HE ASKED ONE OF THE NURSES SEVERAL TIMES ABOUT A LOCK THAT HE THOUGHT WAS SUPPOSED TO BE ON THE DOOR. ALSO, PATIENT IS VERY SLOW WHEN ASKED TO NAME THE DATE; NEVERTHELESS, HE USUALLY MANAGES TO GET AT LEAST THE YEAR AND MONTH CORRECT. ORIENTED TO PLACE AND NAME.*
ASMT:
*THERE IS A QUESTION OF ORGANIC BRAIN SYNDROME SECONDARY TO CHRONIC ALCOHOLISM HERE.*

PLN-PRO:
*PSYCHOMETRIC TESTING- TO BE DONE TODAY BY DR. ____.*
/UN015/ 12:06
THERAGRAN-M 1CAP. 1/D*
CON REPLY:
/MD320/ 17:08
AGREE W/ PROBLEM AS FORMULATED.
AGREE W/ PLANS AS FORMULATED.
SX:
*SOCIAL SERVICE NOTE: RECEIVED CALL ... PROGRAM SHORT TIME AGO ... I HAVE TALKED WITH DR. ____ ... WOULD LIKE TO SEE PT. ... DISCHARGE HOME. DR. ____ AND I BOTH FEEL THAT PT. SHOULD NOT GO DIRECTLY HOME FROM MEDICAL UNIT, AS HE WILL GO BACK TO OLD PATTERN. DR. ____ DOES NOT WANT V.S.H. ADMISSION ... I TOLD MR. ____ THAT I WOULD PASS THIS INFO. ON TO MEDICAL STAFF ... DID QUESTION WHAT OTHER PLANS THEY MIGHT HAVE IN MIND IF OUR PSYCHIATRIC UNIT COULD NOT TAKE PT. MR. ____ DID NOT HAVE ANY ANSWER AT THIS TIME. MR. ____ HAS REQUESTED THAT I CALL HIM

____, SUPERVISOR*
RX-GIVEN:
/UN024/ 22:13
CHLORAL HYDRATE 1GM Q*
/MD014/ 22:48

NOTE: Additional information from consultant.

IMPROVING OVERALL.
OBJ:
*DR. ____ ...*
ASMT:
*FEEL THAT PATIENT IS STABLE ENOUGH FOR DISCHARGE. WE HAVE REACHED THE POINT OF ... WILL BE FOLLOWED BY MR. ____ OF THE OFFICE OF ECONOMIC OPP* ...*
RX-GIVEN:
/UN025/ 10:01 11/26/70
THERAGRAN-M 1CAP. 1/D*
/UN017/ 21:49
CHLORAL HYDRATE 1GM Q*
/UN015/ 10:00 11/27/70
THERAGRAN-M 1CAP. 1/D*
/MD014/ 13:22
IMPROVING OVERALL.
OBJ:
*HAVE CALLED DR. ____ WHO FEELS THAT PATIENT CAN BE DISCHARGED. HE STATES THAT HE HAS QUESTIONED WHETHER THE PATIENT MIGHT BE DISPLAYING SIGNS OF EARLY PICK'S DISEASE, BUT SEEMS SATISFIED WITH OUR PSYCHOLOGICAL EVALUATION. HE HAS AS* ...*
OBJ:
CON REPLY:
/MD320/ 11:12
AGREE W/ PROBLEM AS FORMULATED.
AGREE W/ PLANS AS FORMULATED.
SX:

*SOCIAL SERVICE NOTE: DR. ____ HAS SEEN THE PATIENT AND THE PSYCHOMETRIC TESTS HAVE BEEN ADMINISTERED. THE PRELIMINARY RESULTS SUGGEST THAT THERE IS SOME INTELLECTUAL IMPAIRMENT, PERHAPS MORE THAN COULD BE EXPLAINED MERELY BY AGE, BUT THAT THE DEFECT IS NOT SEVERE ENOUGH THAT IT SEEMS TO BE IN KEEPING WITH THE FRIENDS' AND RELATIVES' CONCERNS. FULL REPORT TO FOLLOW.*

NOTE: This status is reflected in the selectable list (see a later page) of the patient's problems, in terms of being able to look at all problems or those that are GETTING WORSE. It is also possible to look at lists of patients on a ward with problems in a certain subspecialty, problems getting worse, or problems in a certain subspecialty getting worse.

NOTE: Assessment of a problem is always typed in.

*EXPRESSED CONCERN REGARDING FUTURE OF ____. HE FEELS THAT PATIENT STILL NEEDS MORE RX FOR ETOH PRIOR TO GOING HOME, BUT PATIENT DOES NOT WISH TO GO TO HALFWAY HOUSE OR MAPLE LEAF FARM FOR ALCOHOLICS. MR. ____ TO FOLLOW PT. AT HOME AFTER MFU DISCHARGE ...*

NOTE: Since no information in the record can be changed or deleted, any mistakes in entering information must be corrected by entering and noting the correct information. In this case, the alkaline phosphatase value was originally entered incorrectly.


OBJ:
17:42 11/24/70
*EDEMA OF ANKLES CLEARING.*
ASMT:
*#1 AETIOLOGY OF EDEMA SOMEWHAT UNCLEAR WITH SERUM PROT. OF 6.2 AND ESSENTIALLY NL. CHEST FILM, LOW BUN, ESSENTIALLY NL. LFT'S. IN VIEW OF RE #1 AETIOLOGY OF EDEMA SOMEWHAT UNCLEAR SINCE LFT'S, BUN, CHEST FILM ALL ESSENTIALLY WNL, SERUM PROTEIN OF 6.1 GMS. HOWEVER EDEMA CLEARING AT PRESENT, AND ALSO EVIDENCE OF ARTHRITIS L ANKLE, PROBABLY DUE TO SPRAIN.*


4. INFLAM. ARTICULAR DIS: INVOLVING: WRIST/CARPALS RT. KNEE, LT.
SECONDARY TO:
PLN-R/O:
/MD014/ 19:28 11/19/70
R/O *RA*
LATEX FIX.
R/O *RF.*
ASLT
R/O *GOUT.*
URIC AC.
R/O *INFECTIOUS (DOUBT IN VIEW OF ABSENCE OF FEVER)*
R/O *TRAUMA.*
X-RAY: WRIST, RT.
X-RAY: KNEE LT.



4. INFLAM. ARTICULAR DIS: INVOLVING: WRIST/CARPALS RT. KNEE, LT.
SECONDARY TO:
PLN-DB:
/MD014/ 21:22 11/19/70
BLOOD CULTURES TO BE COLLECTED- THREE SETS TOTAL (ONE AT AM
DRAWING, ONE AT 1PM DRAWING, ONE AT AM DRAWING SAT.)
CONTINGENCY: *IF OTHER STUDIES NEGATIVE*
CONTINGENCY: *IF OTHER STUDIES NEGATIVE WILL ASPIRATE LEFT KNEE
JOINT*
PLN-PRO:
*HOT PACKS TO LEFT KNEE, RIGHT WRIST FOR 20 MIN. PERIODS TID.*
RX-GIVEN:
/UN020/ 10:58 11/20/70
HOT PACKS TO LEFT KNE* *PT. REFUSED HOT PACKS THIS AM.*
/UN025/ 21:17
HOT PACKS TO LEFT KNE*
OBJ:
/CC025/ 22:52
5.0 MG/100 ML URIC AC.
01:58 11/22/70
X-RAY: WRIST, RT. *AP AND LAT VIEWS DEMONSTRATE SMALL

CYSTIC AREAS OF THE CARPAL BONES AND POSSIBLY SOME SCLEROTIC REACTION. THERE IS ALSO OVERLYING SOFT TISSUE SWELLING AND THE PICTURE WOULD BE COMPATIBLE WITH RHEUMATOID ARTHRITIS, POSSIBLY GOUT.*

X-RAY: KNEE LT. *LT. KNEE DEMONSTRATES A SMALL EFFUSION IN THE SUPRAPATELLAR BURSA AND POSSIBLY POSTERIORLY. THERE IS MAINTENANCE ON THE ARTICULATING SURFACES AND NO SUGGESTION OF CYSTIC CHANGES. THERE IS HOWEVER SMALL LINEAR STREAK OF CALCIFICATION OF SOFT TISSUE WHICH COULD REPRESENT SOME CALCIFICATION OF THE POPLITEAL ARTERY.*
NEGATIVE. LATEX FIX.
REACTIVE. ASLT (140 TODD UNITS.)
SX:
JOINT PAIN GETTING BETTER.

/MD014/ 10:26

OBJ:
*RIGHT WRIST STILL SLIGHTLY SWOLLEN. HEAT DIFFERENCE NOT SO MARKED. NOT SO TENDER.*
EXAM OF KNEE (S): LEFT. SWELLING: UNCHANGED.
ASMT:
*WRIST FILM ON RIGHT WOULD BE CONSISTENT WITH EITHER GOUT OR RA. LATEX FIX. NEG. BUT THIS DOESN'T RULE OUT RA. URIC ACID NORMAL - DOUBT GOUT AS ETIOLOGY. CONDITION IMPROVING AT PRESENT, MAKING INFECTIOUS PROCESS UNLIKELY. MUST STILL CONSIDER THAT PATIENT MAY WELL HAVE MERELY INJURED THESE JOINTS WHILE IN AN ALCOHOLIC STUPOR.*
RX-GIVEN:
/UN025/ 21:44
HOT PACKS TO LEFT KNE*
/UN021/ 14:17 11/23/70
HOT PACKS TO LEFT KNE*
OBJ:
/MD014/ 17:42 11/24/70
*WRIST AND KNEE MUCH BETTER. NOW COMPLAINS OF SORENESS L ANKLE. EDEMA PRESENT ON BOTH ANKLES HAS CLEARED.*
ASMT:
*SUSPECT THAT THESE JOINT PAINS RELATED TO TRAUMATIC LESIONS - PATIENT ACTUALLY DESCRIBES SPRAINING L KNEE AND ANKLE AT SAME TIME WHEN HE FELL DOWN.*
RX-GIVEN:
/UN020/ 18:45
HOT PACKS TO LEFT KNE*
/MD014/ 9:01 11/26/70
IMPROVING OVERALL.
SX:
JOINT PAIN SUBSIDING.
ASMT:
*NORMAL URIC ACID, ABSENCE OF FEVER, LOW ASOT, NEG. LATEX FIX., GRADUAL SUBSIDENCE OF SYMPTOMS ALL IN FAVOR OF TRAUMATIC ETIOLOGY, BUT CORRECTED SED RATE 29. RA STILL CANNOT BE RULED OUT.*
10:42 11/27/70
IMPROVING OVERALL.
SX:


JOINT PAIN SUBSIDING.
JOINT SWELLING SUBSIDING.
OBJ:
*CAN DETECT LITTLE DIFFERENCE IN TEMP BETWEEN WRISTS AND KNEES. SWELLING HAS SUBSIDED, AND PATIENT WALKS ALMOST WITHOUT LIMP.*
ASMT:
*JOINT INFLAMMATION SUBSIDING; TRAUMA BEST BET AS TO ETIOLOGY.*

5. HYPERTENSION, H/O
OBJ:


NOTE: Initial plans should have been specified
for this problem.

10:26 11/22/70
*ADMISSION K BORDERLINE LOW.*
ASMT:
*HAS BEEN ON "HIGH BLOOD PRESSURE PILL" WHICH UNDOUBTEDLY EXPLAINS HIS LOW K.*
PLN-DB:
NA
K

OBJ:

/CC025/ 11:28 11/23/70
141 MEQ/L NA
3.6 MEQ/L K
/MD014/ 11:42 11/24/70
*PRESSURE STAYING UP RANGING 140-190/90-110. ON REPEAT LYTES YESTERDAY K STILL LOW NA/K - 141/3.6*
ASMT:
*NEED TO R/O HYPERCORTICISM*
PLN-DB:
*AM AND PM CORTISOL LEVELS*
9:01 11/26/70
STAYING SAME, OVERALL.
OBJ:
*REMAINING HYPERTENSIVE WITH A DIASTOLIC OF ABOUT 100*
ASMT:
*FEEL PATIENT NEEDS HYPERTENSIVE WORK-UP - HBP HAS NOT RESOLVED WITH BED REST. LOW K IS SOMEWHAT SUGGESTIVE OF CONN'S DISEASE, BUT VALUE IS STILL WNL, AND URINE PH IS ACID (USUALLY ALKALINE IN CONN'S)*
PLN-DB:
URINE VMA
PLN-R/O:
R/O *RENAL ARTERY LESION*
HYPERTENSIVE IVP
R/O *PHEO.*
OBJ:
/UN015/ 14:11
URINE VMA *24 HOUR URINE STARTED AT 11:30 AM*
/UN024/ 22:14
STAYING SAME OVERALL.
OBJ:
*TAP WATER ENEMA FOR IVP PREP. OK. BROWN LIQ RETURN*
/MD014/ 13:22 11/27/70

STAYING SAME OVERALL.

OBJ:
*HYPERTENSIVE IVP READ AS NEGATIVE IN CONFERENCE TODAY. 24 HOUR URINE FOR VMA PENDING*

PLN-PT.ED:
PT. TOLD: NATURE OF THE PROBLEM. *WILL LET DR. ____ DECIDE ON THE THERAPY FOR THE HYPERTENSION; IN GENERAL, A THIAZIDE MIGHT FIRST BE INDICATED. THIS ALONG WITH THE PHENOBARBITAL AND A LOW SALT DIET MIGHT WELL CONTROL THE HBP FAIRLY NICELY. IF NOT RESERPINE MIGHT BE ADDED. PT. WILL BE ADVISED TO ADD NO SALT TO FOOD.*

OBJ:
/MD212/ 21:29
AM AND PM CORTISOL LE* *AT AM 10 MCG AND AT 1 PM 9 MCG*
/CC020/ 11:37 11/30/70
HYPERTENSIVE IVP *HYPERTENSIVE IVP PROB. WNL WITH THE POSSIBILITY OF SOME DEGREE OF LOWER OBSTRUCTIVE UROPATHY.*


6. ->UROLITHIASIS, H/O.
PLN-DB:
/MD014/ 19:28 11/19/70
SER CA
SER P

6. ->UROLITHIASIS, H/O.
OBJ:
/CC025/ 22:52 11/20/70
9.2 MG/100 ML SER CA
4.1 MG/100 ML SER P

*SP. GR. 1.024.*

7. PLEURAL FLUID: *OR PLEURAL REACTION*
PLN-DB:
/MD014/ 15:31 11/20/70
*CHEST FLUORO*
<PROG NOTES>

7. PLEURAL FLUID: *OR PLEURAL REACTION*
OBJ:

/CC025/ 16:31 11/24/70
CHEST FLUORO *SUPPLEMENTED BY A RT LATERAL DECUBITUS FILM AND THERE IS NO FLUID IN THE RT. CHEST, ONLY SOME


BLUNTING OF THE COSTOPHRENIC ANGLE.*
/MD014/ 17:42
*CHEST FLUORO AND LL DECUBITUS REVEAL THAT THE X-RAY FINDINGS WERE THOSE OF AN OLD PLEURAL SCAR- NO FLUID.*

21:18 11/25/70
8. LOW HEMATOCRIT/HGB
NOTE: Initial plans should have been specified for this problem.
STAYING SAME OVERALL.
OBJ:
*REPEAT HB/HCT 11.4/36 IN A PATIENT WITH PAST HX OF "ZIEVE'S SYNDROME". HAS COME DOWN FROM 12.9/37 ON ADMISSION.*

23:01
STAYING SAME OVERALL.
OBJ:
*ON REVIEW OF THE INITIAL SMEAR IT APPEARS NORMOCHROMIC, NORMOCYTIC TO ME.*
PLN-R/O:
R/O *GI LOSS*
STOOL HEMATEST->CHART X3

-- RETRIEVAL REQUEST --
PRINTER, ENTIRE, CYCLED

NOTE: Since the results for this order were not recorded, it remains an outstanding order, as seen on the selectable list on a later page.

NOTE: At the end of each retrieval is the complete statement of the retrieval request. This defines clearly what the output represents, as there are many different retrieval requests possible.

The first list is the patient's Problem List. This is
used to define which problem a Progress Note is to be
written on or to define a specific problem for the retrieval of all data in sequence on one problem. The
physician or nurse selects the specific problem and then
proceeds to enter the data. The last two lists (for
therapy and non-therapy orders) are used to indicate
when something has been given and to allow the entry
of laboratory and x-ray results.
Associated with each patient's computer-based record
are these three Intermediate Selectable List Files. They
contain copies of information also contained in the
record. (Compare the Problem List of the printed
record with the one on the screen and the Plans for
problem #8 in the printed record with the General
Ward and non-therapy order list in selectable form on
the cathode ray tube.) The data are stored redundantly
in the lists to facilitate speedy retrieval.
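The redundancy described in this paragraph — small selectable-list files kept alongside the full record purely to make terminal display fast — can be sketched as below. This is an illustrative modern-Python sketch under stated assumptions; the list names and the `add` helper are inventions for the example, not the original file layout.

```python
# Sketch (illustrative): the record remains the single complete store,
# while three small "intermediate selectable list" files hold redundant
# copies of the entries a terminal pick-list needs, so displaying a list
# never requires scanning the whole record.

record = []
lists = {"problems": [], "rx_orders": [], "non_rx_orders": []}

def add(entry, list_name=None):
    record.append(entry)               # full record: everything goes here
    if list_name:
        lists[list_name].append(entry)  # redundant copy for fast display

add({"text": "1. ALCOHOLISM, CHRONIC."}, "problems")
add({"text": "HOT PACKS TO LEFT KNEE"}, "non_rx_orders")
add({"text": "OBJ: JOINT PAIN SUBSIDING."})  # progress data: record only
```

The trade-off is classic: duplicated data must be kept consistent (here, STORE-LIST's job), in exchange for speedy retrieval at the cathode ray tube.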

THE PATIENT'S PROBLEM LIST IN
SELECTABLE FORM

1. ALCOHOLISM, CHRONIC.                    11/19/70
2. (LAENNEC'S) CIRRHOSIS:                  11/19/70
3. EDEMA, PITTING, SECONDARY TO, NOT YE*   11/19/70
4. INFLAM. ARTICULAR DIS: INVOLVING: WR*   11/19/70
5. HYPERTENSION, H/O                       11/19/70
6. ->UROLITHIASIS, H/O.                    11/20/70
7. PLEURAL FLUID: OR PLEURAL REACTION      11/20/70
8. LOW HEMATOCRIT/HGB                      11/25/70
* RETURN
* BEGINNING OF LIST

These problem statements have been truncated to fit on one choice line. (NOTE: The printed record contains the total problem statement.)


THE PATIENT'S GENERAL WARD AND
NON-THERAPY ORDER LIST IN
SELECTABLE FORM ON THE CATHODE
RAY TUBE SCREEN
NOTE: The problem titles and specific orders have
been truncated. In case of ambiguity, it is possible to
see the complete problem statement and/or the complete orders by referring back to the printed record or
by a specific retrieval to the cathode ray tube terminal.

----------------
SB376-1 543-507-8 M 49
EXECUTE ORDER:
***** - 1. ALCOHOLISM, C*
PARALDEHYDE 7.5ML Q4H PRN STANDING, FOR A*   11/19/70
CHLORAL HYDRATE 1GM QHS PRN STANDING P.O.*   11/19/70
THERAGRAN-M 1CAP. 1/DAY STANDING, S.O.
THIAMINE 50MG STAT X1 ONLY,
PADDED SIDE RAILS,                           11/19/70
PADDED TONGUE BLADE HA*                      11/19/70
***** - 3. EDEMA, PITTIN*
ELEVATE LEGS,
MILK OF MAGNESIA SUSP. 30ML P*               11/21/70
***** - 4. INFLAM. ARTIC*
HOT PACKS TO LEFT KNEE, RIGHT WRIST FOR 2*   11/19/70
PSYCHOMETRIC TESTING- TO BE DONE TODAY BY*   11/25/70
* RETURN    * LIST CONTINUED    * PREVIOUS PAGE    * BEGINNING OF LIST
----------------
SB376-1 543-507-8 M 49
EXECUTE ORDER:
PT.ACT: UP W/ ASSISTANCE AS TOL.             11/19/70
VISITING REGULAR.                            11/19/70
HOUSE DIET. FLUIDS AD LIB.                   11/19/70
TEMP Q8H                                     11/19/70
PULSE Q8H                                    11/19/70
RESP Q8H                                     11/19/70
BP Q8H                                       11/19/70
WEIGHT QOD                                   11/19/70
JOBST STOCKINGS                              11/19/70
DISCHARGE PT. WITH FRIEND, VALUABLES, VIA*   11/27/70
* RETURN    * LIST CONTINUED    * PREVIOUS PAGE    * BEGINNING OF LIST

(ITERATION #2) --SBHOLD2 543-507-8 M 49  SN 3
[Selectable therapy and non-therapy order lists for problems 1 (ALCOHOLISM, C*), 3 (EDEMA, PITTIN*), 4 (INFLAM. ARTIC*), and 8 (LOW HEMATOCRI*), including SOCIAL SERVICE CONSULT, BLOOD CULTURES TO BE*, STOOL HEMATEST->CHART X3, and the hepatic parameters SGPT, LFT'S, SER. BIL., PROT., ALK. P*; followed by a flowsheet of all hepatic parameters, retrieval request: PRINTER, 8 1/2 INCH WIDE, HEPATIC PARAMETERS, ALL FOR PRESENT ADMISSION.]

NOTE: This is a flowsheet on all the hepatic parameters
for the patient. Note that the (0) is when the
treatment was ordered. The following pages contain
a flowsheet of all the vital signs on the patient.
For both flowsheets, the information displayed can
be seen in narrative form in the preceding record.

THE PATIENT'S THERAPY ORDER LIST IN
SELECTABLE FORM
NOTE: The problem titles and specific orders have
been truncated yet a complete copy of everything is
contained in the record in case of any ambiguities.

CONCLUSION

An experimental time-shared medical information
system has been developed upon the organizing principle of the Problem Oriented Medical Record. The
boundaries of the system are defined by the content
of the patient's Problem Oriented Medical Record
and all the information that is a natural extension of
that document, e.g., the billing, the pharmacy, laboratory and X-ray data, the admitting office information, etc.

Initial Operational Problem Oriented Medical Record System

[Printed flowsheet (ITERATION #2) --SBHOLD2 543-507-8 M 49: all vital signs (nurses' records) for the present admission, 8 1/2 inches wide, showing TEMP, PULSE, RESP, and BP readings at successive times from 11/19 through 11/27/1970, with (0) marking when each sign was ordered]

The principal focus of our effort has been to directly
interface the patient, the physician, and the nurse
with the computer in a manner that allows the input
and retrieval of all medical information normally
generated by them for the medical record. The subsequent
computer interfacing of the same medical information,
via message switching, with the business office for
billing, the pharmacy, the laboratories, the admitting office,
etc., in a meaningful problem oriented fashion is
planned but not yet implemented.
Now that the physician-nurse component has been
effectively integrated into a computerized system,
final assembly of all the components is possible. In
this regard, as Jay Forrester found in management
and engineering, we are finding in medicine through
the Computerized Problem Oriented Record and its
many extensions that the amplification and interactions
among the components of the system may at times be
more important than the components themselves.
We are now prepared to develop a model whereby the
ultimate role of the computer in the delivery of health
services can be defined.


ACKNOWLEDGMENT
We would like to acknowledge the medical philosophical leadership and clinical expertise of Lawrence L.
Weed, M.D.; the clinical experience and expertise of
Laura B. Weed, M.D., for the massive effort of developing the bulk of the medical content material (in the
branching structured logic displays) in the system; the
development of the drug displays by George E. Nelson,
M.D.; the early computer software specification efforts
of Mrs. Lee Stein; the medical superstructure specification efforts of Charles Burger, M.D.; and the efforts,
as represented by the actual patient record, of the
house staff and nurses at the Medical Center Hospital,
Burlington, Vermont. Important contributions have
also been made by many other people whom we wish
to thank for their efforts. The authors would like to
state explicitly that the computer software described
in the paper would be but a skeleton without the medical content material contained in the branching displays.

REFERENCES
1 L L WEED
Medical records, medical education and patient care
Case Western Reserve University Press Cleveland Ohio 1969
2 L L WEED
Medical records that guide and teach
New England Journal of Medicine Volume 278 pp 593-600
652-657 1968
3 M D BOGDONOFF
Clinical science
Archives of Internal Medicine Volume 123 p 203 February 1969
4 R M GURFIELD J C CLAYTON JR
Analytic hospital planning: A pilot study of resource
allocation using mathematical programming in a cardiac unit
RAND Corporation RM-5893-RC April 1969
5 MASSACHUSETTS GENERAL HOSPITAL
Memorandum nine
Hospital Computer Project (Status Report) February 1966
6 C M CAMPBELL
Akron speeds information system slowly
Modern Hospital Volume 104 p 118 April 1965
7 R W HAMMING
One man's view of computer science
Journal of the Association for Computing Machinery
Volume 16 pp 3-12 January 1969
8 CONTROL DATA CORPORATION
SHORT operating system program rules reference manual
Publication Number 60259600 April 1968
9 CONTROL DATA CORPORATION
SHORT operating system basic formats reference manual
Publication Number 60259700 February 1969
10 CONTROL DATA CORPORATION
SETRAN selectable element translator reference manual
Publication Number 60249400 May 1968
11 L L WEED
Technology is a link not a barrier for doctor and patient
Modern Hospital pp 80-83 February 1970
12 L L WEED et al
The problem oriented medical record-Computer aspects
(A supplement to Medical Records, Medical Education and
Patient Care)
Dempco Reproduction Service Cleveland Ohio 1969

Laboratory verification of patient identity
by SAMUEL RAYMOND, LESLIE CHALMERS and WALTER STEUBER
Hospital of the University of Pennsylvania
Philadelphia, Pennsylvania

In considering the installation of a computer-based
laboratory report system, what are the legal and professional responsibilities created by such systems?
Computer-generated reports and records are acceptable,
legally, in place of the original handwritten laboratory
request form, but there is nevertheless an increased
legal duty, as well as a strong professional responsibility, to see to it that the computer record is correct
in every detail. By adoption of proper verification procedures, similar in principle to quality control procedures now regarded as essential in every laboratory,
a computer-based record system can be made much
more accurate and reliable and far more accessible
than the usual manual methods of record-keeping.
And this can be done without substantially increasing
the burden of laboratory work.
Preliminary studies of laboratory requests coming
into our laboratory before installation of a computerized
report system showed that over 20 percent of the requests carried patient identification data unacceptable
by objective standards of verifiability: this means that
patient information was incomplete, so that it could
not be verified, or wrong, so that verification would
lead to the wrong patient. And in a manual system,
there appeared to be no practical way to improve the
data. In studying other laboratories we found no reason
to believe that our experience was unusual.
Our manual system involved returning the original
request to the referring physician or other source, with
the laboratory report transcribed on to it. This placed
the identity of laboratory data in the hands of ward
clerks and others outside the control of the laboratory;
and for a significant fraction of the laboratory work,
left the laboratory without any valid record of the
work done or of the patient for whom it was done. This
was clearly an unsatisfactory state of affairs, although
one which was hidden until we began our preliminary
analyses for the computer system.
The problem of errors in data input was, in fact, by
far the most serious one encountered in the development
of our computer report system, and one which was
essentially out of control until the completion of the
work described in this paper. Beyond a few references
to slang expressions (GIGO) this is a topic that is discussed very little in the published literature. A recent
well-documented monograph,1 for example, contains
no reference to this topic in index, bibliography, or
text. Yet our experience has been that every source
of raw data, and every transcription step in the data
processing operation, carries an error rate of one to
two percent. Furthermore, unless these errors are
sought for and corrected, they are additive. Since
there may be ten or more identifiable steps involved
in ordering a laboratory test on a specific patient,
errors accumulate until finally as many as 20 percent
of the requests received in the laboratory are unacceptable in some particular of patient or specimen
identification. That has in fact been our experience
over the last four years, and there has been no significant downward trend in the 20 percent figure during
this time.
Part of the reason for this has been the policy, which
we adopted at the beginning of our data processing
development, of not attempting to make any change
in the procedures of data processing used outside the
laboratory. Although the problem can, and perhaps
should, be considered in its entirety as embracing the
whole hospital, to do so immediately involves admissions policy, patient accounting practices, the medical
record room, in fact almost every aspect of hospital
operation. Faced with that much, we decided it would
be better to solve some small definable segment of the
problem, and the boundaries of the laboratory seemed
a convenient place to draw the line.
In our system, consequently, the hospital staff are
still free to make occasional errors in preparing a request for laboratory service; we have accepted for
ourselves the responsibility for finding and correcting
all errors before sending out our reports.
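The arithmetic behind that accumulation is worth making explicit. A minimal sketch, using the paper's own rough figures (one to two percent per step, ten identifiable steps) rather than any measured constants:

```python
# Chance that at least one of n independent transcription steps
# corrupts a request, given a fixed per-step error rate.
def cumulative_error_rate(per_step_rate: float, steps: int) -> float:
    return 1.0 - (1.0 - per_step_rate) ** steps

# One to two percent per step, over ten identifiable steps:
low = cumulative_error_rate(0.01, 10)   # roughly 10 percent of requests
high = cumulative_error_rate(0.02, 10)  # roughly 18 percent of requests
```

Under these assumptions the independent-error model lands close to the 20 percent of unacceptable requests actually observed in the laboratory.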
The principal object in our error-correcting system
is to make sure that laboratory results get back to the
correct patient, and to keep one patient from getting
another's results. To perform and report an extra or
unordered test: this cannot do very much harm; at
most it may result in an extra charge on the patient's
bill. To miss doing a test which was ordered: this is a
situation which very quickly corrects itself by way of
an indignant phone call from the physician who ordered
the test. But to report results to the wrong patient: this is the ultimate disaster. Such a mistake could be,
quite literally, fatal. It could cause a wrong diagnosis
to be made or a wrong treatment to be given. Even if
the physician receiving the report recognizes that the
results reported do not apply to his patient, he will
inevitably place all his reports in the suspect category,
and will lose all confidence in the computer report
system. And of course, there is the question of legal
liability for negligent errors.
Our system, therefore, is designed to assure us of
100 percent correctness in the patient identification,
and to provide us with an exact record of every input,
a separate record of every wrong input, and every step
taken to correct the input. The system requirement is
satisfied only if there is verification of every input in
two independent traces from the original source of the
data. This includes both data from outside the laboratory, specifically patient identification, and data
generated inside the laboratory, specifically the laboratory result.
Even within the laboratory itself, where the data
processing procedures are under our direct supervision,
there seems to be no possibility of eliminating errors
completely: no degree of motivation or discipline that
can be applied will suffice. Outside the laboratory,
where conditions are completely beyond our control,
not even an attempt can be made. This is a fact of life
which justifies the policy decision referred to above.
Our objective, then, is to catch the errors and to correct
them before the reports leave the laboratory.
Before developing the details of our system it is
useful to introduce three terms which may well have
been applied by others earlier in this field; they have
certainly clarified our thinking and our system.
Audit trail is a printed record, with necessary annotations, of every record entered into the computer,
any errors which were found, and the actions taken to
correct them. The tendency, all too human, to bury
the error once it has been corrected must be strongly
resisted. The audit trail is an essential part of the

system; it has both prospective and retrospective functions. It is a necessary record for every input, as from
this record alone can one identify later-discovered errors
and can know how and where to correct them. It identifies problem areas in the input procedure in the same
way that a quality control chart gives warning of procedural error in the laboratory. With the audit trail,
any error which does get through the check system,
but is later brought to the attention of our staff, can
be identified as to occurrence and responsibility. The
identification of the source of the error has proved
essential in developing psychological defenses within
our staff against repetition of the error at a later time.
The varieties of error are infinite.
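The shape of an audit trail entry can be sketched in a few lines; the field and method names below are our own illustration, not the system's actual record layout:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AuditEntry:
    """One input record plus every error found in it and every
    action taken to correct it: nothing is buried."""
    raw_input: str
    errors_found: List[str] = field(default_factory=list)
    corrections: List[str] = field(default_factory=list)

    def log_error(self, description: str) -> None:
        # The error is kept alongside the record, so that the trail
        # serves both prospective and retrospective functions.
        self.errors_found.append(description)

    def log_correction(self, action: str) -> None:
        self.corrections.append(action)

# A hypothetical entry for a request whose patient number failed:
entry = AuditEntry(raw_input="803 381470 CBC")
entry.log_error("patient number not in census file")
entry.log_correction("corrected to 318470 after telephone call to ward")
```

The point of the structure is exactly the one argued above: from this record alone one can identify later-discovered errors and know how and where to correct them.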
Checking and validation: To distinguish between
these processes, which are essentially different, is to
provide a logical basis for the design of an error-proof
input system. Checking is an entirely mechanical
process, although it may be very complex. It can be
carried out by computer, only providing that a set of
logical decision rules can be given. Validation is a
human process, involving human judgment, which
changes as each new experience is assimilated. Checking
is always catching up to validation, as the judgment is
analyzed and formulated into logical rules, which can
be programmed into the computer. Validation is always
ahead of checking, as judgment is always being increased by experience.
Checking should be done only by the computer. To
give this task to a human is wasteful and inefficient
and in the end impossible, because, when boredom and
fatigue set in, the rate of human error increases faster
than the check errors caught. The computer can perform with unrelenting accuracy any checks of any complexity, once the decision tables are specified with
logical precision.
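A minimal sketch of checking as a purely mechanical process, once the decision rules have been formulated; the particular rules and field names here are illustrative, not the laboratory's actual decision tables:

```python
# Each rule is a predicate over the input record; the computer
# applies every rule to every record and reports every failure.
def check_request(record: dict, census: set) -> list:
    failures = []
    number = record.get("patient_number", "")
    if not number.isdigit() or len(number) != 6:
        failures.append("patient number must be six digits")
    elif number not in census:
        failures.append("patient number not found in census file")
    if not record.get("tests"):
        failures.append("no tests requested")
    return failures
```

As judgment is analyzed and formulated into rules of this kind, checking catches up to validation; what cannot yet be reduced to a predicate remains a human task.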
The computer can never replace professional
judgment.
A necessary condition of the validation process is
that the information to be validated must be meaningful. That is to say, the data must not only have meaning
in themselves, but they must be presented to the mind
in a form which conveys meaning to the validator. It
follows that a mere list of numbers cannot be validated
(Figure 1); the validation in this case consists of noting
that two numbers do not match. Did you, as a careful
reader, pick up this point?
One final feature which we believe contributes substantially to the efficiency and completeness of our
system: each step is conceptually separate and distinct
from the others. This is a principle taught by experience
in programming a computer. In practice, it means that
we demand of computer or of operator only one type
of checking or validation at a time. It enables us to see,
not only that a given step depends on the successful
completion of a precedent step, but, more importantly,
that in many places, the system allows of parallel
paths through to the final error-corrected report.
In summary, our system is built upon the following
rules:
1. The computer is always right.
2. The input is wrong until confirmed by checking
or validation.
3. Checking is done by computer.
4. Validation must be meaningful.
5. Two independent sources are required for checking or validation.
6. One thing at a time.

With a basis thus established, we are now ready
for the development of a system for controlling input
errors. In accordance with standard literary conventions of scientific publication,2 we shall omit any
description of the initial fumblings, outright mistakes,
false trails, and utter disasters which we encountered
during the first three years of this project. Starting
from scratch, our system has now been in full successful
operation for nearly a year. We shall describe it as an
orderly logical progression of ideas and events. Like
most successful systems, it looks very easy now. Had
we known just how to do it, it would have taken us
three months instead of three years.
In describing our input validation system, the
following outline will be useful:
1. Enter census information
* 2. Verify census file
3. Enter test requests
4. Print work documents for laboratory
* 5. Verify
6. Keypunch lab results
* 7. Verify test runs
8. Print patient reports
* 9. Validate reports.

As this outline shows, most of the steps in our overall
procedure can be carried out in parallel. While we
insist that every entry of data into the system be independently verified, we also recognize that most of the
information is correct as it goes in. Therefore, we go
ahead and use the input in our laboratory system before

ACCESSION   CENSUS   REQUEST
NUMBER      NUMBER   NUMBER

801         312486   312486
802         236925   236925
803         318470   381470
804         334862   334862

Figure 1-Validity check of patient identification numbers

it has been verified or corrected, but always under
conditions which do not permit the release of unverified
or uncorrected data, and which do permit the exact and
secure correction of any errors before they can possibly
damage the system.
The system starts with the control of patient identification information, which includes name, hospital unit
record number, date of admission, hospital location,
age and genetic data, physician, and hospital service
assigned. The nominal source of this information is
the patient himself, but in accordance with our principle of independent verification, we require two independent traces back to the original source. Fortunately,
the hospital operating system does provide two: one
from admission desk through hospital accounting office
to patient census, and one from admission desk through
the patient's chart and his charge plate.* From the
first of these we receive a punched-card deck on Monday containing a complete record on each patient in
the house, and update decks each day containing patient admissions, transfers, and discharges. From the
second we receive an imprinted laboratory request
slip.
Although the data entry starts with the patient at
the admission desk, the error entry may begin long
before, when a patient number, properly belonging to
one patient, is improperly assigned to another. Under
an old system, formerly used in this hospital, this
particular error was to be corrected if, and as soon as,
it was discovered. A new commercial accounting
system** recently installed by the hospital has the
astonishing feature of forbidding any correction of this
error. Our experience to date has been too short to
demonstrate what effect this rule will have on our
system, but it is evident that it will make some of the
medical records actively misleading for retrospective
studies.

* Our hospital uses a charge plate imprinter system with embossed
plastic plates like credit cards, but without machine-readable
features.
** SRAS, supplied by International Business Machines Corp.


ACCESSION   CENSUS         REQUEST
NUMBER      NAME           NAME

801         TARBELLSUSA    TARBELL SUSAN A.
802         GRICE GEORGE   GRICE G.D.
803                        JAFFE CHARLES
804         QUIGLEY RALP   EATON ANDREW

Figure 2-Validity check of patient names

In any case, we accept the census and update decks
as input data requiring correction. Our experience has
been that one to two percent of the records in each new
deck are in error. This is substantially lower than the
error rate which would be expected on the basis of the
number of steps intervening between the data source
and the final record, indicating that substantial efforts
at error correction are being made all along the line.
Nevertheless, we make a final purge.
We maintain, in our computer, a file of patient
identification comprising the current house list plus
records of all patients who were in the house at any
time in the last two weeks.† This file contains about
2000 names. Our first operation is to combine the
current file with the new deck, sort it, and delete all
duplicate records from the internal computer file. The
cards containing the duplicate records are also ejected
from the card file by the computer. An audit trail is
generated by printing all the records deleted. This is,
of course, a checking operation, not a validating one,
and requires only a minute or two of computing and
printing time each day.
The duplicate list printed usually contains less than
twenty names. These are subjected to a validation by
the computer staff. This process may include anything
from correction of an obvious misspelling of a patient
name to a direct inspection of the original admitting
record, according to the judgment of the operator.
Most discrepancies are cleared up by a telephone call.
The corrected records are re-entered and the process
is repeated. Usually the first repetition confirms the
accuracy of the corrected census.
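The daily purge can be sketched as a sort-and-compare over unit record numbers. This is a simplification under stated assumptions: the real system also ejects the duplicate cards from the card file and handles the daily update decks, and the field names are invented for illustration:

```python
def purge_census(current_file: list, new_deck: list):
    """Combine the current file with the new deck, sort, and delete
    duplicate unit record numbers; return the clean file plus an
    audit trail of every record deleted."""
    combined = sorted(current_file + new_deck,
                      key=lambda rec: rec["unit_number"])
    clean, deleted = [], []
    for rec in combined:
        if clean and clean[-1]["unit_number"] == rec["unit_number"]:
            deleted.append(rec)   # printed as the audit trail, then
        else:                     # validated by the computer staff
            clean.append(rec)
    return clean, deleted
```

The printed `deleted` list corresponds to the duplicate list of usually fewer than twenty names that the staff then validate by hand.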
It is important to keep in mind just what the accuracy of the corrected census comprises. We have, in
effect, a census file in which we are certain that each
patient has been assigned a unique hospital unit
record number. Each patient record has been compared
with every other record, both current and recent past
admissions. Every discrepancy has been removed,
except one. The exception is the patient who has the
same unit record number as some other patient who had
been admitted at some more distant past time. This

exception makes the record doubtful, to that extent,
for retrospective studies. It could be removed by enlarging our file storage to cover all past admissions,
but this does not seem worthwhile. Even with this
exception, we are sure that no laboratory results can
be sent to the wrong patient under the control of a
wrong patient number.
The next major operation in our system is to enter
the laboratory test requests into the computer. The
laboratory receives requests in the form of the usual
3-part request form, on which the patient identification is imprinted from the charge plate and the tests
selected are indicated by handwritten marks. The
request is assigned an accession number, and the 6-digit
patient number, the particular tests required, and the
accession number are transcribed by key-punching on
a punch-card * which is used to enter the request into
the computer. Note that this is the minimal information
which must be entered: the accession number, the
patient identification number, and the test requested.
If every one of these passes the tests which are now to
be applied, the request is accepted without further
entry; if any one fails, corrections must be made and
additional information must be supplied.
additional information must be supplied.
As the test requests are entered into the computer,
an audit list is printed, showing the accession number
and the patient number. At the same time, the patient
number is checked against the census file, and if the
patient number is found in the file the corresponding
name is printed also; if the patient number is not found,
no name is printed (Fig. 2). Usually two to ten percent
of the entries are missing a name. These are filled in,
often by calling the source of the specimen, so that each
entry in the entire entry list has an associated name.
The list is then validated by direct comparison of the
list, name-by-name, with the names on the request
slips. This process is not as burdensome or as time... consuming as it sounds, because the validation is comparing
two lists of meaningful names (most names convey some
sense of familiarity to a literate person, and even the
unusual name is meaningful ipso facto) and the two
lists are in the same order. The validation is highly
significant, since the computer-generated name list
comes from the census file, while the name on the request slip comes from the patient's hospital chart or
charge plate, or-in a fair proportion of the cases-as a
handwritten entry on the request slip. Judgment is
required to decide when the two apparently similar
names are to be accepted as identical.
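The audit list itself can be sketched as a lookup against the census file; the names and numbers below are invented examples, and the real list is printed continuously as requests are entered:

```python
def audit_line(accession: int, patient_number: str, census: dict) -> str:
    # Print the name only when the number is found in the census
    # file; a blank name column is the signal that a human must
    # fill it in and then validate it against the request slip.
    name = census.get(patient_number, "")
    return f"{accession:03d} {patient_number} {name}".rstrip()

census = {"312486": "TARBELL SUSAN A."}
audit_line(801, "312486", census)  # "801 312486 TARBELL SUSAN A."
audit_line(803, "381470", census)  # "803 381470"  (no name printed)
```

The computer does the mechanical lookup; judging whether two apparently similar names are the same patient remains validation, a human task.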
Note that this process does not require any operator
checking or verifying of the six-digit patient number,

† We would like to go back further, but our file space is limited
and the indicated interval covers a sufficiently high fraction of
our needs.

* The mechanics of this transcription will be described in a
separate report.

since the validation could fail to turn up an error only
in the unlucky coincidence that (1) an incorrect patient
number exactly matched a patient number in the census
file, and (2) the name in the file, in turn, was identical
with the name of the actual patient. Since the patient
number is six digits, the chance of the first is on the
order of one in a million, and considering the statistical
distribution of names, the chance of the second must
be one in a thousand: the combined chance of one in a
billion is acceptable. At least, this mischance has not
shown up yet.
At this point we can be certain that within the above
probability, each test request is correctly and unambiguously matched with some individual patient
in the hospital, and that the three-digit accession number or the six-digit patient identification number, either
one, will unambiguously lead us to the correct inpatient.
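The coincidence argument is simple arithmetic on the paper's own order-of-magnitude estimates:

```python
# Order-of-magnitude estimates from the text, not measured rates.
p_number_match = 1e-6  # a wrong six-digit number hits a census entry
p_name_match = 1e-3    # that entry's name also matches the patient
p_undetected = p_number_match * p_name_match  # about one in a billion
```

Both failures must occur together for an error to slip through the name validation, hence the product of the two probabilities.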
As the requests accumulate in the day's run, and
interspersed with the above-described verifying activities, we also print, by computer, a series of test checklists of accession numbers arranged by the laboratory
work-station which is to handle them. Also, laboratory
personnel have been carrying forward preliminary
processing on the specimens received. The test checklist is compared, by laboratory personnel, with the
specimens being processed, and any discrepancies are
reported to the computer staff for correction as necessary. This is the first of several verifications made
of the tests ordered on each request. The source document for this information is the test request itself,
supplemented by telephone calls from the patient
areas, adding to, altering, or cancelling the list of tests
requested. Because very little machine checking is
possible, and a great deal of human checking and

033 1 376703 RAY6 GOLDNER EUG     563 54 02 W M
      376703 RAY6 GOLDNER EUG
034 1 376704 RAY6 DUBROW SOL      622 58 05 W M
      376704 RAY6 DUBROW SOL
038 2 336838 WD N MOLDEN MARGA    028 36 28 N F
      336838 WD N MOLDEN MARG
041 1 317132 MAL4 RAYNOR JOHN     799 61 01 W M
      317132 MAL4 RAYNOR JOHN
043 1 366042 GISS DEEGAN EDWIN    725 53 03 W M
      366042 GISS DEEGAN EDWI
052 1 329766 NEUR THOMAS JESSI    028 51 20 N F
      329766 NEUR THOMAS JESS
**** CENSUS ENTRY HAS THE SAME NUMBER BUT NOT THE SAME NAME ****
055 1 346710 MAL4 FREIDMAN LEO    .3. 55 01 W M
      346710 MAL4 FRIEDMAN LE
057 1 178790 WHI7 LEHR ETHEL

Figure 3-Audit trail: duplicate census entries

[Printed audit list dated 12/03/70: accession numbers 316-329 with patient number, patient name, and class; entries whose patient number reads 000000 are requests not found in the census file, and handwritten annotations on the list record the validation and the corrections made]

Figure 4-Audit trail: verification of patient identity

human verification is required, more human effort,
perhaps, is expended in keeping this list correct than
in getting the patient identification correct. We regard
this as principally a public relations effort, since an
extra test or a missing test on a patient's specimen
cannot have very serious consequences. Nevertheless,
we do what we can.
All the while, preparations are being made for
printing the daily worksheets for the laboratory personnel, one of the two main tasks of our computer
system. These worksheets carry complete information
about every patient specimen-name, number, hospital
location, hospital service assigned, doctor code, age,
sex, and genetic information. We expect the laboratory
personnel, who are professionally trained at all levels
from staff physician to technician, to notice this information, to take a personal interest in the people
for whom they are helping to provide medical care, and
to notify the computer staff of any technical or data
processing discrepancies they may pick up. It is largely
due to their alert interest that the last one-tenth percent
of errors, which makes the difference between success and failure, is corrected.
Among the subsidiary but useful records which the
system generates in this period, which occupies the
first two or three hours of the working day, are the
master accession list, arranged in numerical accession
order, and the alphabetic list of patients. Both of these
carry the complete lists of tests entered for each patient.
The alphabetic list is highly useful in answering telephone inquiries from the house staff who want to know
if they "forgot to order" and like excuses. The provision of this service by the computer has undoubtedly
reduced the number of "stat" requests received during
the day.
Enough has been said already to illustrate our general approach to input, so that only a brief summary of
our input verification of results is necessary. Here we

Spring Joint Computer Conference, 1971

have two distinct problems: (a) the correct transcription of the results from laboratory to computer and
(b) the medical significance of the results as reported.
(a) As to the first, we have had no direct personal
experience yet with direct on-line acquisition of data
from the laboratory analyzers. Our observation of such
systems installed in other laboratories leads us to believe that on-line direct data acquisition will raise
just as many problems as it solves. In our system, the
laboratory technician key-punches her own results
direct from her original record. The key-punched cards
are checked by another technician or operator. After
the key-punched results are entered, the computer
prints a reconstruction of the laboratory record, which
is used for a second check against the original record.
This reconstruction not only affords a second check on
the numerical results reported, but especially calls
attention to (1) extra results reported which were apparently not called for on the original test requests,
and (2) results missing on tests which were originally
entered into the computer.
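The reconstruction check just described amounts to a set comparison between the tests entered for a specimen and the results keypunched against it. A minimal sketch in modern notation (the test codes and function names are illustrative, not those of the actual system):

```python
# Sketch of the reconstruction cross-check described above: compare the
# tests originally entered for a specimen against the results keypunched
# by the technician, flagging extras and omissions. Names are illustrative.

def reconstruction_check(tests_ordered, results_reported):
    """Return (extra, missing) for one specimen.

    tests_ordered    -- set of test codes entered with the request
    results_reported -- dict mapping test code -> keypunched result
    """
    reported = set(results_reported)
    extra = sorted(reported - tests_ordered)    # results with no request
    missing = sorted(tests_ordered - reported)  # requests with no result
    return extra, missing

# Example specimen: glucose and urea ordered; glucose and sodium reported.
extra, missing = reconstruction_check(
    {"GLU", "BUN"}, {"GLU": 96, "NA": 141})
print(extra)    # ['NA']  -- reported but apparently not called for
print(missing)  # ['BUN'] -- entered but no result keypunched
```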
(b) Still more important, however, and one of the
principal benefits to be sought from a computer-based
report system, is a professional evaluation of the medical validity of the laboratory results reported. In the
days before the overwhelming expansion of volume in
the laboratory work, every result reported from our
laboratory was personally examined by the chief of
the laboratory. This protected the laboratory from
many embarrassing mistakes and made the results,
even with the crude and unspecific tests of those days,
more significant medically in many cases than the excessive number of tests which are indiscriminately
reported today. It is now humanly impossible for the
chief, or even any reasonable number of assistants, to
examine attentively and with judgment all the results which are reported on hospital patients today.
We need some sifting procedure to separate the laboratory results which are obviously reasonable, or for
which no informed judgment is possible, from those
which require and would benefit from the attention of
an experienced clinical pathologist. No one, for example,
can make anything out of a single blood sugar determination on a patient for whom no previous laboratory
work has been reported. There is no value in taking up
the time and attention of the professional staff on such
a report. If, however, the computer is programmed to
bring together all the patient's results and to print
out, for human attention, the blood sugar which is
dubious when compared to other values for that patient, much valuable time could be saved for more
productive use. We are just beginning to see the benefits of this approach; it does not properly belong in a
discussion of input error checking.
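The sifting procedure suggested above can be sketched as a simple delta check against the patient's own history; the 30 percent threshold and all names here are assumptions for illustration, not values from the actual system:

```python
# Illustrative sketch of the sifting procedure described above: pass results
# silently when no informed judgment is possible (no prior work on file),
# and queue for the pathologist only values that clash with the patient's
# own history. The 30-percent threshold is an assumption.

def needs_review(current, prior_values, threshold=0.30):
    """True if `current` differs from the mean of the patient's prior
    values by more than `threshold` (as a fraction of that mean)."""
    if not prior_values:
        return False              # a lone blood sugar tells us nothing
    mean = sum(prior_values) / len(prior_values)
    return abs(current - mean) / mean > threshold

print(needs_review(105, []))             # False: first value on record
print(needs_review(180, [95, 102, 98]))  # True: sharp jump from history
```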

The cost. The elaborate scheme proposed above for
human and machine verification of computer data may
seem all out of proportion to the benefits it produces.
We do not deem it so, even though we cannot estimate
either the cost of one negligent error in a laboratory
or the added cost of preventing it. But more than this,
the verification procedure actually costs us very little.
We have run our computerized report system for
several years with 98 percent accuracy and some, occasionally extreme, dissatisfaction on the part of
the final users of it, i.e., the medical staff. We now run
at better than 99.9 percent accuracy in patient identification with no increase in costs. The explanation is
that in any clinical laboratory system there are periods
of great activity and periods of comparative quiet;
the human effort required by the above system of
verification can easily fit into the quiet periods. The
procedures usually need not be carried out a predetermined time, either in relation to the clock or in relation
to the system procedures, so long as they are completed
before the first external report is generated. The
verification procedures are each one simple in themselves, and they use printed lists and batches of source
documents which are in simple orderly relationship to
each other, so that there is not a lot of frantic back-and-forth searching involved. Each error that is detected can be pinpointed as to source, occurrence, and
effect, and the fear of unknown and unknowable
responsibilities has been dissipated. The staff has confidence in the system. It is this fact alone which makes
our system a practicable one.
In recent months, following circulation of this paper
in preliminary form, the level of input accuracy has
risen appreciably. Post hoc ergo propter hoc. We have
noticed that when the input is 100 percent accurate,
our verification system reduces to nothing more than
reading each laboratory request slip twice: once when
it is transcribed for keypunching and again with the
validation list produced by computer. This does not
seem unduly burdensome. In pharmacy practice, each
prescription is read three times: once when picking up the stock bottle, again when counting out the pills, and a third time when returning the stock bottle to the shelf.
We should do no less than twice in laboratory
processing.

REFERENCES
1 DAB LINDBERG
The computer and medical care
C C Thomas Springfield
2 J D WATSON
The double helix
Atheneum N Y 1968

The data system environment simulator (DASYS)*
by LAURENCE E. DeCUIR and ROBERT W. GARRETT
System Development Corporation
Santa Monica, California

INTRODUCTION

For the past nine years, SDC has been Integration Contractor to one of the largest satellite tracking, commanding and control networks in the nation. During this period the software portion of the total system has increased in both dollars and development time. Software is now a major element in the over-all system cost.

One of the prime factors complicating this situation is the typical requirement to implement such software on a hardware system that is in a parallel state of development. A second is the use of actual mission-supporting system hardware for development and integration of new software capabilities. The nature of this problem differs from that of initial over-all system development, since the hardware already exists in developed form. However, the interference between normal operations of support hardware and software development can cause major disruptions to both efforts.

Past attempts to alleviate this problem have included a variety of approaches utilizing software only, hardware only, or a combination of the two to simulate system functions in a non-operational environment. Although the results of these efforts have been good, they have also been piecemeal, and the time required in the operational environment has still been painfully large.

The support role and configuration

The mission of the organization for which SDC is Integration Contractor is to acquire, track, command, monitor and recover spacecraft in a multiple satellite environment.

The organization utilizes a very large, extremely complex computer-based general purpose command and control system. The system consists of a central computer complex with associated command functions and a number of tracking stations located throughout the world. An elaborate communications network connects the stations with the central computer complex.

The general purpose portion of the system provides acquisition data to the various tracking stations, acquires the satellite as it passes over, collects telemetry data, transmits commands to the satellite, and provides tracking data used to update the ephemeris for future acquisitions. The system provides the eyes, ears, nervous system, and muscle to the over-all process.

Each tracking station has one operational computer. This computer handles the tracking, commanding, and telemetry data. It is tied to a buffer computer in the central computer complex, which can be automatically switched from station to station as it follows the orbiting satellite. Orbit ephemeris computations, acquisition predictions, command generation, and other associated functions are performed on off-line computers using data from the buffers. Acquisition and command data are transmitted through the buffers to the tracking stations.

Because of the size and complexity of the computer programs that make up the system, the organization employs a number of separate firms for software development. SDC has contracted to integrate the resulting computer program system.

SDC's integration role
A software system is a host of computer programs:
each performs one aspect of a complex job, each must be
coordinated and compatible with the others. A system
must be complete and must effectively fulfill the
requirements for which it was designed and built. To say
that a system is complete and effective means that all of
its parts fit together, that they cover all of the tasks

* © Copyright System Development Corporation 1970.


necessary to do the total job, that the parts and the
system have been tested and proved reliable, and that
personnel have been trained to operate the system and
keep it running. The process of making a system
complete and assuring its quality is called integration.
To comprehend how sizable the integration effort can
be, one need only consider the following:
• There are currently five system program models in
various stages of work, and there have been as
many as seven. Two are operational, one is in the
final phase of validation testing prior to installation,
one is being coded, and one is in design.
• Each program model consists of approximately
1,600,000 instructions. New models are introduced
based on changing requirements.
• Change control (design changes and program
changes resulting from detected errors) must be
maintained for each model. A typical program
model has dozens of design changes and hundreds
of discrepancy correcting changes made during its
dynamic operational life.
• The system uses three different computers. Conversion to new configurations requires retraining of
personnel.
• Interface requirements must be specified for each
model. New or modified simulation techniques and
tools, so vital in the validation testing activity,
must be developed continually.
• Training of operations personnel and on-going
support must be provided for continuity and
feedback.
The scope of SDC's integration effort varies slightly
depending on the schedule for each program model, but
in order to provide continuous quality control and
assurance, a level-of-effort concept is clearly mandatory.
A discussion of problems encountered in software
development in the live environment follows.

OPERATIONAL PROBLEMS

The software developer has a great deal of confidence in his ability to cope with problems which occur in a closed computer and computer peripheral environment. When this environment is extended to include complex, real-time, asynchronously acting, and geographically remote system elements, his confidence and ability are significantly reduced.

In any network-wide test condition which is set up to exercise software, much of the system technical performance is invisible to, and beyond the control of, the software developer or integrator. He will be uncertain as to the actual system element status and will generally rely on a telephone network for some of the information transfer and control. The setup of his test condition will require scheduling which impacts on normal system operations. Test personnel availability and travel to scattered locations are also significant problem areas. This type of operation may also require extended time at remote locations to run tests, analyze results, and factor system changes back into the software under development.

The use of live vehicles in a system test environment is designed to provide conditions as close to operational as possible. Even the best of these tests, however, are often analyzed on a statistical basis in an attempt to get the most information out of a very expensive set of test conditions, and here again compromises are made to hold the cost and time involved to reasonable values.

In general it can be said that a command and control system involving real-time operational software must be tested or demonstrated in an environment as close to operational as possible prior to final acceptance. However, these tests or demonstrations are very difficult and costly to design, set up, staff, and control. These problems make it very desirable to reduce as much as possible the system time required for these activities in the operational environment.

After careful evaluation of the cost and utility of providing a complete data environment functional simulation system (test bed) to minimize these problems, SDC built the Station Ground Environment Simulator (STAGES) for the Air Force. The system consisted of interfacing hardware, a simulation computer, and the simulation system software. An updated version of STAGES is being installed now.

The Data System Environment Simulator (DASYS) represents a refinement and expansion of the STAGES system on a custom basis to accommodate any simulation requirements.

ALTERNATIVES

The ground rules established for the design and
development of the software test bed are:
1. That the total support environment be available
(includes telemetry plus tracking and command),
2. That a direct operator interface be provided
through the Operator Console,
3. That the operational computer process be as close
to real-time as possible,
4. That software being tested be in the exact
deliverable condition (no octals required to
accommodate test bed peculiarities),

5. That dynamic control of every bit of each word
passing through the computer interface be
available,
6. That recording capability of the operations be
available for analysis,
7. That environment modification require a minimum of software changes.

Two alternatives for a software test bed were considered by the Air Force before selection of SDC's DASYS approach:

1. That the functional support environment be provided to the real-time computer by means of digital simulation (the DASYS approach),
2. That the configuration consist of all support components of the actual tracking station.

The advantages of software checkout by simulating the functional support environment are:

1. Complete real-time environment is provided for testing, using dynamic inputs,
2. Implementation and operation costs are significantly less than in a facility using real hardware,
3. The user controls the test environment; there is no requirement for a larger complement of support personnel or extensive communication systems,
4. Versatility is provided by simulation rather than actual components. This allows:
(a) Practically unlimited computer-provided data input parameter control
(b) Experimentation with alternatives
(c) Automatic or manual control of environment
(d) Operation and control by user with minimal support,
5. Computer time, calendar time, and cost for program checkout are reduced,
6. There is ability to repeat specific tests for each program model and automate comparison of results,
7. The system can be readily modified to reflect new interface characteristics,
8. Non-interference between hardware and software subsystem checkout processes is guaranteed through final acceptance,
9. Real-time dynamic and post-test analysis of all computer/hardware interface is available,
10. Real-time program checkout is conducted independent of operational system availability,
11. Simulation of hardware anomalies and future hardware capabilities is available prior to equipment availability,
12. Software checkout and integration support are the sole roles,
13. Additional applications areas are available for:
(a) Mission rehearsal
(b) Operator training under nominal and anomalous real-time conditions
(c) Data system test and exercise.

The only disadvantages of simulating the support environment are:

1. Inability to investigate hardware-specific problems,
2. No checkout of hardware diagnostic types of programs.
The advantages inherent in the actual tracking
station configuration were:
1. The system could be used to accept, check out,
and integrate hardware to be installed in the
tracking stations,
2. Hardware modifications could be tried and tested
in a non-operational environment,
3. Hardware diagnostics and other supporting
programs could be checked out on a non-interference basis,
4. The equipment components could be used as a
training aid for both hardware operators and
maintenance personnel,
5. The system would provide a limited operational
software checkout capability on operational
computers.
The fact that only a limited checkout capability
existed in the duplicated tracking station test bed
(alternative 2) plus the disadvantages listed below were
the reasons for not taking this approach. The disadvantages are:
1. The configuration resembles a tracking station to
such an extent that many of the disadvantages of
site use for software checkout also apply to the
test bed.
(a) Full-scale preventive maintenance is required for configuration components because
real, rather than simulated, hardware is
used.
(b) All engineering changes apply to the components, the same as they would to the
tracking stations. This increases system costs
and interferes with software checkout.
(c) Extra manning is required for monitoring
and operating the components that have no
real bearing on the software subsystem test.

2. The primary role of the test bed would be hardware support. The amount of software checkout time required would all but eliminate this role.
3. Parameter control is severely limited.
4. No capability exists to:
(a) Check out completely controlled error conditions for any vehicle command functions
(b) Run with multiple controlled environments
(c) Repeat tests under manual control.
5. The absence of system-oriented hardware/software technical interface personnel hinders resolution of problems.

The first alternative was chosen as being the best and most economical solution for developing the software test bed. The entire data system functional environment would be provided by a relatively small simulation computer.

THE DATA SYSTEM ENVIRONMENT SIMULATOR (DASYS)

There were two approaches available for the design of DASYS. First, the simulation computer could be large enough and fast enough to provide information to the operational computer in real-time or near real-time. This would require the simulation computer to be several times larger than the operational computer, and the test programs would have to be essentially real-time programs. This approach, although used by several government agencies, did not offer any savings over the conventional method of program testing and evaluation; in fact, the cost of the simulation computer and the cost of writing the real-time programs for this system would cause it to be more expensive than using the actual tracking station.

The second approach was to use a relatively small simulation computer, operating in nonreal-time, and to modify the operational computer to make it operate as if it were in real-time. This is accomplished by stopping the clock in the operational computer while the simulation computer processes the next input stream. Thus, the operational computer is operating in segmented real-time.

General description

Figure 1-Data system environment simulator
The Data System Environment Simulator consists of
a small simulation computer with associated peripherals
plus simulation software and interfacing hardware
(Figure 1). In this system, user operational controls and
displays are included in the interfacing hardware.
The interfacing hardware in the system provides all
of the externally generated signals and data to the
operational computer including user controls in the same
manner that the live environment does when interacting
with a live moving vehicle.
Since the simulation computer must complete its
functions without compromising the timing integrity of
the operational computer, the operational computer
clock may be stopped before the next input from the
simulation computer. This permits operational computer
transactions to be performed as if the operational
computer were operating in a continuous time mode, and
allows the simulation computer to run in nonreal-time.
Thus, the environment simulation is not time
constrained by the simulation computer.
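The clock-stopping scheme described above can be sketched as follows; this is an illustrative model in modern notation, not the actual DASYS implementation, and every name in it is invented:

```python
# Minimal sketch of "segmented real-time" as described above: the
# operational computer's clock is stopped while the slower simulation
# computer prepares the next input, so the operational program perceives
# an unbroken time line. All names here are illustrative.

class SegmentedClock:
    def __init__(self):
        self.now = 0.0        # time as seen by the operational program
        self.frozen = False

    def advance(self, dt):
        if not self.frozen:   # clock only moves while not stopped
            self.now += dt

def run_exercise(clock, inputs, step=1.0):
    times = []
    for item in inputs:
        clock.frozen = True   # stop clock: simulator computes next input
        # ... simulation computer builds `item` here, in nonreal-time ...
        clock.frozen = False  # restart clock: deliver input
        clock.advance(step)   # operational processing of one frame
        times.append((clock.now, item))
    return times

log = run_exercise(SegmentedClock(), ["track", "telemetry", "command"])
print(log)  # inputs appear at 1.0, 2.0, 3.0 -- evenly spaced "real" time
```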
One part of the simulation system "drives" the
environment by functionally simulating data, equipment responses, and operator interactions to whatever
level is required for the purpose at hand. Operations in
real-time, delayed time, and accelerated time are all
possible. During software tests, data rates and complexity may be varied over ranges that are much broader
than those in actual operations.
Another part of the simulation system records the
results of each run, so that the software's interaction
with its environment may be analyzed in detail after the
simulation run. The required analytic tools may be run
on the simulation computer.
The user can operate the system in any of three ways:
manual control of the system using the hardware
alone, automatic control using previously generated
tapes, and semi-automatic control using tapes and
manual intervention with the hardware.
The capability to simulate a data system environment
is defined generically as data system environment
simulation. The parameters of an operational system
which must be considered are operational computer I/O
channels, word size, and environment functional
responses at the computer I/O channels.


Interfacing hardware
The interfacing hardware consists of electronic
equipment that connects the simulation computer to the
operational computer. The hardware complements the
simulation computer by providing functions not feasibly
provided by software either in cycle time or economy. It
also provides monitoring functions of the data exchange
occurring at the operational computer interface by
providing all intelligence with time tagging in microseconds for recording on magnetic tape. No modification
has to be made to the operational computer data
channels due to the matched interface logic provided by
the hardware.
General functions provided are: master timing,
operational clock control, data recording and time
tagging, simulation computer interface, controls and indicators, and environmental functions which can be hardwired.
hardwired.
Operational controls and displays are provided to the
user through the interfacing hardware to allow recording
of user-generated inputs or inputs from the simulation
computer.
Simulation computer
The simulation computer configuration includes
appropriate peripherals to provide the programmed
functional environment and control signals through the
interfacing hardware to the operational computer. The
simulation computer is smaller than the operational
computer but will transfer data in and out of the
interfacing hardware at a rate fast enough to cause the
simulated environment to respond to the operational
computer stimulus in near real-time. The ratio of
operating time to real-time will normally be of the order
of 1.5 : 1. The simulation computer configuration can
perform utility and off-line functions for the operational
computer when the required peripherals are included in
the system.
Simulation software
The software operating in the simulation computer
will provide nominal environmental inputs and responses
to the operational computer unless a specific anomalous
condition is programmed into the system. This computer
will generally input varying environmental parameters
from exercise-specific, system parameter tapes containing data such as track, telemetry and operator inputs,
command responses, and communication data. These
data will be output to the operational computer with
timing integrity. Anomalies can be entered manually from a card reader or keyboard, or automatically from
a tape.
The software can cause the printout of selected
time-tagged operational computer interface data (during
an exercise or after an exercise) from the recording tape
which has been written by the simulation computer.
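The recording and printout functions described in the last two paragraphs can be sketched as below; the record fields, channel names, and selection interface are invented for illustration and are not those of the actual system:

```python
# Hedged sketch of the recording function described above: every word
# crossing the operational-computer interface is time-tagged (the paper
# says in microseconds) and written to tape; afterwards, selected records
# can be printed. Field and channel names are invented for illustration.

def record(tape, t_us, channel, word):
    tape.append({"t_us": t_us, "channel": channel, "word": word})

def printout(tape, channel=None, t_start=0, t_end=None):
    """Print selected time-tagged interface records for post-test analysis."""
    for rec in tape:
        if channel is not None and rec["channel"] != channel:
            continue
        if t_end is not None and not (t_start <= rec["t_us"] <= t_end):
            continue
        print(f'{rec["t_us"]:>10} us  {rec["channel"]:<9} {rec["word"]}')

tape = []
record(tape, 120, "track", 0o1234)
record(tape, 450, "command", 0o7070)
record(tape, 900, "track", 0o1236)
printout(tape, channel="track")   # prints only the two track records
```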

Summary
Using DASYS, a programmer can check out operational programs in a real-time environment without
disrupting system operations or having to involve
others. There is considerable versatility in the way he
can use both hardware and software. DASYS also
permits experimentation with alternative programming
approaches and determination of the exact time-tagged
operational implication of each. There can be manual
control capability of the system, using the simulator
hardware alone, or the user can operate from a previously generated environment tape that provides
automatic control. This real-time simulation is also
practical for mission rehearsals, system tests, and
operator training.
DASYS can handle functional simulation requirements for nearly all system elements-sensors, command
equipment, telemetry receivers, radars, system operator
consoles, timing systems, alarms, and displays. Functional simulation of hardware being developed and
objects being or to be supported, sensed, or directed by
the software system being tested can also be provided.
It makes little difference whether these objects are
satellites, missiles, aircraft, railroads, or items being run
through processing plants.

COST EFFECTIVENESS
The process of developing, testing, and integrating
large computer programs includes detecting and correcting program system errors. This points to a very
significant way of measuring the effectiveness of any
means to accomplish these tasks.
The parameters involved are numbers of errors, time
between locating and correcting errors, and the cost of system time spent in locating and correcting errors. Additional
considerations include errors remaining in the system,
level of confidence, system performance parameters
control, and availability of test environment.
The measurement of progress in development of
computer programs has been the subject of much study
and analysis. The detection and correction of an error
constitutes a step in this process. Measures of program
status can be related to the rate at which new errors come to light. The lower the error detection rate, the closer the system is to the error-free asymptote.

Figure 2-Level of confidence chart
Figure 2 portrays the method in which software is
normally tested and accepted. The vertical axis
represents the level of confidence, expressed as a percentage. The top horizontal line represents the limit or
100 percent level of confidence while the horizontal
dashed line represents an acceptance level of confidence.
When the acceptance level is reached the program or
system is usually declared operational and turned over
to the maintenance programmers. The horizontal axis
represents time from the start of system testing.
The slope of Curves A, B, C, D, and E represents the
error detection possibilities. Therefore, any curve with
zero slope has zero error detection possibilities. Curve A
represents the assembly and testing cycle. The initial
error rate is steep since the program is assembled and
cycled for the first time but levels off very fast as the
limitation of this procedure is reached.
Curve B starts (point 1) when Curve A approaches its
limitation and represents the system where the environment is simulated by hardware, function generators, and
special purpose simulators. The error detection rate will
immediately go up since the capability of simulating

various subsystems of the live environment has been
introduced. Again the cumulative number of errors
detected and corrected cannot exceed the capability of
the facility and it is necessary to go to the next facility.
This system is limited primarily by the inability to test
the total system and the limitations of special purpose
hardware, generally analog, which cannot be controlled
to the desired precision.
Curve C represents final testing in the live environment. It is a step curve since testing is limited by
hardware status, scheduled support of operations, and
the inability to repeat the exact set of conditions to test
corrections. The length of the horizontal segments of
this curve will vary depending on the above factors. The
system is accepted when it reaches the acceptance line
of the chart (t2).
Curve D represents the error detection possibilities of
using a simulation computer operating in nonreal-time.
Due to the fact that it can exercise the entire system
and repeat the exact conditions as many times as
necessary, it has steep error detection capability up
through the acceptance level. Realizing that no one
would accept a system that had not operated satisfactorily in the live environment, the acceptance test on
the operational system could be accomplished at time t1.
Curve E represents a simulation computer operating
in real-time. The only reason it does not have as high a
confidence level as Curve D is that at some time in the
testing, the capability of the computer to continue to
provide real-time input to the operational computer
would be exceeded. Larger and larger systems could be
installed to eliminate this problem, but the hardware
and software costs would be prohibitive.
There have been no statistics gathered as to how
great a savings in time and money can be realized
through use of the Data System Environment Simulator.
It is estimated that the time savings (t2 - t1) could be
as much as 70 percent and that the dollar savings could
easily be 50 percent or more.
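The acceptance criterion implicit in the discussion of Figure 2 (the lower the new-error detection rate, the closer the system is to the error-free asymptote) can be illustrated with a toy calculation; the error counts and the threshold below are invented, not measured data:

```python
# Toy illustration of the Figure 2 acceptance criterion: a system crosses
# the acceptance line once the rate of newly detected errors falls below
# a threshold. The weekly counts and threshold here are invented.

def acceptance_week(errors_per_week, threshold=2):
    """First week in which the new-error detection rate has fallen to
    `threshold` or below; None if the series never reaches it."""
    for week, rate in enumerate(errors_per_week, start=1):
        if rate <= threshold:
            return week
    return None

live = [40, 25, 25, 18, 15, 9, 9, 6, 4, 2]   # step-like, schedule-bound
simulated = [40, 22, 9, 2]                   # steep curve, repeatable tests

t2 = acceptance_week(live)       # week 10 with this toy data
t1 = acceptance_week(simulated)  # week 4
print(f"time saved: {100 * (t2 - t1) / t2:.0f}%")  # 60% with these numbers
```

The invented numbers land in the same region as the paper's own estimate of savings up to 70 percent, but they are illustrative only.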

Management information systems-What happens after
implementation?
by D. E. THOMAS, JR.
Western Electric Company, Incorporated
Greensboro, North Carolina

Spring Joint Computer Conference, 1971

INTRODUCTION

One of the greatest challenges facing the modern business firm is the
development of means to efficiently and prudently harness computer
technology and make it produce. Classical computer production has been
directed toward making existing tasks more efficient. A typical indicator
of this direction has been the automation of clerical accounting and
payroll systems. Recent computer production, however, is becoming
increasingly oriented toward automation of the decision-making processes
themselves (the "what if" question). This latter trend has forced the
development of large, fast memory, tremendous on-line storage
capabilities, powerful operating systems, and generalized, easy-retrieval
programming languages.

Furthermore, it is recognized that these hardware and software
capabilities alone may not be sufficient to provide the desired results.
The operating environment must also undergo change. As the complexity and
importance of automated information continue to grow, the role of the
classical, job-shop service organization must be abandoned. A vital,
dynamic support group of Production System Analysts must be substituted,
which will be responsible for the management and control of this
computer-based "information utility."

TECHNOLOGY NOW AVAILABLE

The technology for MIS is available now for most companies:

-Computers have large memory capacities and great speed.
-Memory is now economical.
-Multiprogramming and multiprocessing systems have been developed.
-On-line storage in tremendous quantities is now available.
-Programming languages specifically designed for quick and variable
retrieval are available.

USE OF MANAGEMENT INFORMATION SYSTEMS

One means by which many firms utilize computing capability is through the
development and implementation of management information systems (MIS).
MIS attempts to solve the information problem by providing relevant
information in the right form to the right person at the right time.3

Appropriately, MIS has been defined as "an accessible and rapid conveyor
belt for appropriate high-quality information from its generator to its
user."3 The information system should provide not only a confrontation
between the user and information, but also the interaction required for
relevant and timely decision making. The heart of an effective MIS is a
carefully conceived, designed, and executed data base. Fully utilized,
MIS can become "an intelligence amplifier" and the computer can become an
extension of the manager's memory. Ultimately, the computer should free
the manager from routine tasks, providing more time to devote to the
creative aspects of his job.1

Yet, in a recent poll, the users of MIS from 655 firms (73 percent of
those responding) stated that their companies had not used computers to
maximum advantage in meeting management's information needs.2

WHY IS MIS NOT WORKING?

What is the problem? What then remains before MIS will be widely and
effectively utilized by managers as a partner in the management of their
business? Except
for the continued improvements in hardware and software technology that
will occur, the problems of MIS are primarily problems of how to manage
the existing technology. Many reasons may be advanced to account for this
mismanagement of MIS and computer technology:
• Difficulty in finding and training EDP personnel.
• Insufficient participation of management in supporting computer application development.
• "Communications gap" between systems personnel
and management.
• Improper location of the EDP function within the
organizational structure.
• Lack of involvement of user organizations in computer decisions.
Each of these is valid and each must be solved if
MIS is to efficiently perform as a "conveyor belt" of
information. However, too often, a well-conceived, well-designed, and well-programmed information system
will fail because it is mismanaged after production
implementation. There has been little attention devoted
to how an information system should function after it
has been conceived, designed, programmed, and implemented. How should the system be operated? What
should be the user involvement? Development personnel involvement? These questions must be answered
with workable solutions that depend upon the system's functions, company
policy, and company goals.

CLASSICAL OPERATION OF INFORMATION
SYSTEMS
The classical modes of operation of computer systems
are: (1) the design organization, which included systems analysts and programmers, continued to be responsible for the execution of the system, (2) the user
was educated to a degree whereby he could, at least
mechanically, exercise the system to produce what he
needed, or (3) the Data Center operation personnel
were held responsible for the execution of the system,
this responsibility either being exercised by computer
operators or control clerks.
The first of these modes of operation, execution by
the design organization, is inefficient because human
resources will not be effectively utilized. An unbalanced
work load would exist, whereby some people would
spend most of their time on production runs whereas
others would spend only a small amount of time. Furthermore, trained systems analysts and programmers
should be utilized as systems analysts and programmers, not as operators
in a repetitive production environment.

Development organizations would find it difficult to
provide the necessary subsystem interface because they
would not be aware of activity in those subsystems in
which they had not been involved during the development stages. For example, subsystems of a total system
may be developed by different programming groups,
relatively independent of each other. In summary then,
the management and control of production information
systems should be recognized as a full-time job, and
programmers and systems analysts would not, and
should not be expected to, devote the required effort to it.
The second possible mode of operation emphasized
the development organization designing and programming a system and educating the user to submit his
own work to the data center. This was a mechanical
process for the user, because he had neither the time nor the interest to
learn enough to solve any EDP problems or
effect any system improvements. Consequently, the
development organization was responsible for solving
all problems. Furthermore, the system was infrequently
analyzed to determine if it was continuing to meet user
needs and, as a result, the user frequently became disenchanted with the information system. Another drawback to this type of operation was that systems could not be designed with more than one direct
source of input to or disposition of output from the data
base. Data collection and document distribution organizations were formed to exercise this interface with
the data base for those areas in which volume demanded
it.
The most serious drawback, however, was that the
user of a subsystem would be naturally concerned only
with a subset of the total system, that subset being the
subsystem which produces the information he needs.
If his reports are validly produced, he does not care
that he has entered "garbage" into the data base that
some other user might inherit. The Data Center, furthermore, because it
has no control over what it runs or when it runs it, is not in control of
its operation and, hence, the operation of the center might tend to
be wasteful and inefficient. This is particularly true in
the multi-programming environment where much efficiency can be gained by careful scheduling of various
job mixes.
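The gain from careful job mixing can be shown with a back-of-the-envelope sketch (a modern illustration; the job figures and function names are hypothetical, not from the paper): interleaving a CPU-bound job with an I/O-bound job lets the processor compute while the other job waits on its devices, so the mix finishes well ahead of serial execution.

```python
# Hypothetical illustration: each job alternates CPU bursts and I/O waits.
# Serial execution pays for every I/O wait; a multiprogrammed mix can
# overlap one job's I/O wait with another job's CPU burst.

def serial_time(jobs):
    """Total elapsed time when jobs run one after another."""
    return sum(cpu + io for cpu, io in jobs)

def overlapped_time(jobs):
    """Lower bound on elapsed time when CPU bursts overlap I/O waits:
    the processor must still perform all computing, and each job still
    needs its own cpu+io span, so neither bound can be beaten."""
    total_cpu = sum(cpu for cpu, _ in jobs)
    longest_job = max(cpu + io for cpu, io in jobs)
    return max(total_cpu, longest_job)

# (cpu_minutes, io_minutes) for a CPU-bound and an I/O-bound job
mix = [(40, 5), (10, 30)]
print(serial_time(mix))      # 85
print(overlapped_time(mix))  # 50
```

Under these hypothetical figures the mix cuts elapsed time from 85 to 50 minutes, which is the kind of saving careful scheduling of job mixes is after.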
The third mode of operation, the use of computer operators or control
clerks to set up and run what was requested by the users, was deficient
for some of the same reasons mentioned above: (1) the management and
control of a system require an in-depth knowledge of that system and are
not a part-time job, (2) interfaces
between subsystems must be timely and accurately
made to generate meaningful output, and (3) input
data control and computer job control must be exercised
by the same group for greatest efficiency. This mode of
operation, while it provided the data center with some
control over its operation, was primarily mechanical, with problem
solving and system improvement left to development personnel.
ADVANTAGES OF PRODUCTION SYSTEMS
ANALYST GROUP
The incorporation of a strong support group solely dedicated to managing
production information systems will solve many of these traditional
operating problems. This group of Production System Analysts provides the
following advantages:

1. Development personnel can devote full time to development effort.
2. The user devotes full time to improving the accuracy and timeliness
of data content.
3. Operations personnel (computer operators and control clerks) devote
full time to improving the efficiency of the computer.
4. A central point of coordination for data and file maintenance is
established.
5. The system is constantly analyzed for improvement, ensuring that it
meets the user's ever-changing needs. Development and maintenance effort
is requested as required.
6. Quick and efficient processing of data into the base and the
resultant generation of documents, interconnecting dependent information
subsystems as necessary, is effected.
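The value of a central point of coordination can be sketched in miniature (a modern illustration; the record fields, class, and function names are hypothetical, not drawn from the paper): when every input passes through one validating gateway, no user can silently enter "garbage" into the data base for another user to inherit.

```python
# Hypothetical sketch: a single gateway validates all input to a shared
# data base; rejected records go back to their originator for correction.

REQUIRED_FIELDS = {"part_number", "quantity"}

def validate(record):
    """Reject records missing required fields or with a non-positive
    quantity."""
    if not REQUIRED_FIELDS <= record.keys():
        return False
    return isinstance(record["quantity"], int) and record["quantity"] > 0

class DataBaseGateway:
    """Single point of entry: accepted records reach the data base,
    rejected ones are held for return to the submitting user."""
    def __init__(self):
        self.data_base = []
        self.rejected = []

    def submit(self, record):
        if validate(record):
            self.data_base.append(record)
            return True
        self.rejected.append(record)
        return False

gateway = DataBaseGateway()
gateway.submit({"part_number": "A-100", "quantity": 12})  # accepted
gateway.submit({"part_number": "A-101", "quantity": -3})  # rejected
print(len(gateway.data_base), len(gateway.rejected))      # 1 1
```

The design choice is the one the text argues for: validation lives with the group that controls the data base, not with each individual user.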

PROBLEM OF STAFFING
Proper staffing of this group is perhaps the greatest
problem encountered. The types and levels of personnel required for this
group, since the function had no precedent, have been largely
experimental but are now stabilizing. One key element is becoming
increasingly dominant: the group should be more user-oriented than
EDP-oriented. The members of the
group, furthermore, must be highly motivated for they
are the only bridge between company needs and the
computer. It is recognized that many of the tasks performed by the g