AFIPS Conference Proceedings
Volume 41, Part I

1972 Fall Joint Computer Conference

December 5 - 7, 1972
Anaheim, California

The ideas and opinions expressed herein are solely those of the authors and are not necessarily representative of or
endorsed by the 1972 Fall Joint Computer Conference Committee or the American Federation of Information
Processing Societies, Inc.

Library of Congress Catalog Card Number 55-44701
210 Summit Avenue
Montvale, New Jersey 07645

©1972 by the American Federation of Information Processing Societies, Inc., Montvale, New Jersey 07645. All
rights reserved. This book, or parts thereof, may not be reproduced in any form without permission of the publisher.

Printed in the United States of America

Properties of disk scheduling policies in multiprogrammed computer
systems ....................................................
The interaction of multiprogramming job scheduling and CPU
scheduling .................................................. .


T. J. Teorey


J. C. Browne
J. Lan
F. Baskett
D. Murphy


K. Levitt

Exact calculation of computer network reliability ..................


R. Wilkov
E. Hansler
G. McAuliffe

A framework for analyzing hardware-software trade-offs in fault
tolerant computing systems ................................... .


K. M. Chandy
C. V. Ramamoorthy
A. Cowan

Automation of reliability evaluation procedures through CARE-The
computer aided reliability estimation program .................. .
An adaptive error correction scheme for computer memory systems .. .


Dynamic configuration of system integrity ........................ .


F. P. Mathur
A. M. Patel
M. Hsiau
B. Borgerson

Storage organization and management in TENEX ................. .
The application of program-proving techniques to the verification of
synchronization processes ..................................... .

The in-house computer department .............................. .
A computer center accounting system ............................ .
An approach to job billing in a multiprogramming environment ..... .


Facilities management-A marriage of porcupines ..................


J. Pendray
F. T. Grampp
C. Kreitzberg
J. Webb
D. C. Jung

Automated map reading and analysis by computer ................ .


Computer generated optical sound tracks ......................... .


Simulating the visual environment in real-time via software ......... .
Computer animation of a bicycle simulation ...................... .


An inverse computer graphics problem ........................... .


R. H. Cofer
J. Tou
E. K. Tucker
L. H. Baker
D. C. Buckner
R. S. Burns
J. P. Lynch
R. D. Roland
W. D. Bernhart

Module connection analysis-A tool for scheduling software debugging
activities ................................................... .
Evaluating the effectiveness of software verification-Practical experience with an automated tool ............................... .
A design methodology for reliable software systems ................ .
A summary of progress toward proving program correctness ..........


F. M. Haney


J. R. Brown
B. H. Liskov
T. A. Linden


D. J. Kuck
J. Watson


J. A. Rudolph

SIFT-Software Implemented Fault Tolerance .................... .
TRIDENT-A new maintenance weapon ........................ .
Computer system maintainability at the Lawrence Livermore
Laboratory ................................................. .


J. H. Wensley
R. M. Fitzsimons


The retryable processor ........................................ .


J. M. Burk
J. Schoonover
G. H. Maestri


G. J. Nutt
T. E. Bell
A. De Cegama

LOGOS and the software engineer ............................... .
Some conclusions from an experiment in software engineering
techniques .................................................. .
Project SUE as a learning experience ............................ .


C. W. Rose


System quality through structured programming ....................


D. L. Parnas
K. C. Sevcik
J. W. Atwood
M. S. Grushcow
R. C. Holt
J. J. Horning
D. Tsichritzis
F. T. Baker

Supercomputers for ordinary users ............................... .
The Texas Instruments advanced scientific computer .............. .
A production implementation of an associative array processor-
STARAN ....................................................

Evaluation nets for computer system performance analysis ......... .
Objectives and problems in simulating computers .................. .
A methodology for computer model building ...................... .

(Panel Discussion-No Papers in this Volume)

An application of cellular logic for high speed decoding of minimum
redundancy codes ........................................... .


On an extended threshold logic as a unit cell of array logics ......... .
Multiple operand addition and multiplication ..................... .


Techniques for increasing fault coverage for asynchronous sequential
networks .................................................. , .


L. R. Hoover
J. H. Tracey

K. Nezu
S. Naito
T. Nanya
R. Mori
R. Waxman
S. Singh

System identification and simulation-A pattern recognition
approach .................................................. , .
Horizontal domain partitioning of the Navy atmospheric primitive
equation prediction model .................................... .


W. J. Karplus


An analysis of optimal control system algorithms .................. .


Computer simulation of the metropolis ........................... .


E. Morenoff
P. G. Kesel
L. C. Clarke
C. N. Walter
G. H. Cohen
B. Harris


S. Rothman
R. F. Boruch


R. Turn
N. Z. Shapiro


J. M. Carroll

Hardware-software trade-offs-Reasons and directions ............. .
A design for an auxiliary associative parallel processor ............. .


An eclectic information processing system ........................ .


R. L. Mandell
M. A. Wesley
S. K. Chang
J. H. Mommens
R. Cutts
H. Huskey
J. Haynes
J. Kaubisch
L. Laitinen
G. Tollkuhn
E. Yarwood

The protection of privacy and security in criminal offender record
information systems ......................................... .
Security of information processing-Implications for social research ... .
Privacy and security in data bank systems-Measures, costs, and
protector intruder interactions ................................ .
Snapshot 1971-How one developed nation organizes information about
people .....................................................
(Panel Discussion-No Papers in this Volume)

Microtext-The design of a microprogrammed finite state search
machine for full text retrieval ................................. .


Design of the B1700 ........................................... .


R. H. Bullen, Jr.
J. K. Millen
W. T. Wilner

An on-line two-dimensional computation system ................... .
Debugging PL/I programs in the Multics environment .............
AEPL-An Extensible Programming Language ................... .


The investment analysis language ............................... .


T. G. Williams
B. Wolman
E. Milgrom
J. Katzenelson
C. Dmytryshak

The design approach to integrated telephone information in the
Netherlands ................................................ .


R. DiPalma
G. F. Hice

Field evaluation of real-time capability of a large electronic switching
system ..................................................... .


Minimum cost, reliable computer-communications networks ........ .


W. C. Jones
S. H. Tsiang
J. De Mercado

(Panel Discussion-No Papers in this Volume)
Control Data STAR-lOO file storage station ....................... .


Protection systems and protection implementations ................ .
B1700 memory utilization ......................................
Rotating storage devices as "partially associative memories" ........ .


G. Christensen
P. D. Jones
R. M. Needham
W. T. Wilner
N. Minsky

Page fault frequency (PFF) replacement algorithms ............... .


Experiments with program locality ..............................


W. W. Chu
H. Opderbeck
J. R. Spirn
P. J. Denning

TASSY-One approach to individualized test construction .......... .


T. Blaskovics
J. Kutsch, Jr.

A comprehensive question retrieval application to serve classroom
teachers ...................................................
Computer processes in repeatable testing ........................


G. Lippey
F. Prosser
J. Nakhnikian

Properties of disk scheduling policies in multiprogrammed
computer systems

by T. J. TEOREY

University of Wisconsin
Madison, Wisconsin

The subject of scheduling for movable head rotating storage devices, i.e., disk-like devices, has been discussed at length in recent literature. The early scheduling models were developed by Denning,3 Frank,6 and Weingarten.14 Highly theoretical models have been set forth recently by Manocha,9 and a comprehensive simulation study has been reported on by Teorey and Pinkerton.12

One of the goals of this study is to develop a model that can be compared with the simulation results over a similar broad range of input loading conditions. Such a model will have two advantages over simulation: the computing cost per data point will be much smaller, and the degree of uncertainty of a stable solution will be decreased.

Although the previous analytical results on disk scheduling are valid within their range of assumptions, they do not provide the systems designer with enough information to decide whether or not to implement disk scheduling at all; neither do they determine which scheduling policy to use for a given application, be it batch multiprogramming, time sharing, or real-time processing. The other goal of this study is to provide a basis upon which these questions can be answered.

The basic scheduling policies are summarized with brief descriptions in Table I. Many variations of these policies are possible, but in the interest of mathematical analysis and ease of software implementation we do not discuss them here.

SCAN was first discussed by Denning.3 He assumed a mean (fixed) queue length and derived expected service time and mean response time. The number of requests in the queue was assumed to be much less than the number of cylinders, so the probability of more than one request at a cylinder was negligible. We do not restrict ourselves to such an assumption here. Improvements on the definition and representation of SCAN have been suggested by Coffman and Denning,2 Manocha,9 and Merten.10 The implementation of SCAN is often referred to as LOOK,10,12 but we retain the name SCAN for consistency within this paper. Both C-SCAN9,11,12,13 and the N-step scan6,12,13 have been discussed or studied previously, and the Eschenbach scheme was developed for an airlines system.14 Because the Eschenbach scheme requires overhead for rotational optimization as well as seek time optimization, and because the simulation study12 showed that C-SCAN with rotational optimization was more appropriate than the Eschenbach scheme for all loading conditions, we consider only C-SCAN here.

The simulation results indicated the following, given that cylinder positions are addressed randomly:12 under very light loading all policies perform no better than FCFS. Under medium to heavy loading the FCFS policy allowed the system to saturate and the SSTF policy had intolerable variances in response time. SCAN and the N-step policies were superior under light to medium loading, and C-SCAN was superior under heavy loading.

We first investigate various properties of the N-step scan, C-SCAN, and SCAN, since these are the highest performance policies that optimize on arm positioning time (seek time). The properties include mean, variance, and distribution of response time, and the distribution of the positions of requests serviced as a function of distance from the disk arm before it begins its next sweep. Response time mean and variance are then compared with simulation results.

A unified approach is applied to all three policies to obtain mean response time. The expressions are nonlinear and require an iterative technique for solution; however, we can easily show that sufficient conditions always exist for convergence.

Finally, we look at the factors that must be considered in deciding whether or not to implement disk scheduling at all.


Fall Joint Computer Conference, 1972

TABLE I-Basic Disk Scheduling Policies
1. FCFS (First-come-first-served): No reordering of the queue.
2. SSTF (Shortest-seek-time-first): Disk arm positions next at
the request that minimizes arm movement.
3. SCAN: Disk arm sweeps back and forth across the disk
surface, servicing all requests in its path. It changes direction
only when there are no more requests to service in the current
direction.
4. C-SCAN (Circular scan): Disk arm moves unidirectionally
across the disk surface toward the inner track. When there
are no more requests to service ahead of the arm it jumps back
to service the request nearest the outer track and proceeds
inward again.
5. N-step scan: Disk arm sweeps back and forth as in SCAN, but
all requests that arrive during a sweep in one direction are
batched and reordered for optimum service during the return
sweep.
6. Eschenbach scheme: Disk arm movement is circular like
C-SCAN, but with several important exceptions. Every
cylinder is serviced for exactly one full track of information
whether or not there is a request for that cylinder. Requests
are reordered for service within a cylinder to take advantage
of rotational position, but if two requests overlap sector
positions within a cylinder, only one is serviced for the current
sweep of the disk arm.
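The queue-reordering rules of Table I can be sketched as simple selection functions. The following is an illustrative sketch, not the paper's implementation; the interface (a queue of cylinder numbers, an arm position, a sweep direction) and the request values are invented for the example.

```python
# Illustrative sketches of the Table I selection rules (FCFS, SSTF, SCAN,
# C-SCAN). A request is just a cylinder number, 'pos' is the current arm
# position, and 'up' is the current sweep direction.

def fcfs(queue, pos, up):
    return queue[0]                                  # no reordering

def sstf(queue, pos, up):
    return min(queue, key=lambda c: abs(c - pos))    # minimize arm movement

def scan(queue, pos, up):
    ahead = [c for c in queue if (c >= pos) == up]   # requests in current direction
    if not ahead:                                    # nothing ahead: reverse sweep
        ahead = queue
    return min(ahead, key=lambda c: abs(c - pos))

def c_scan(queue, pos):
    ahead = [c for c in queue if c >= pos]           # unidirectional sweep
    if ahead:
        return min(ahead)
    return min(queue)                                # jump back to outermost request

q = [98, 183, 37, 122, 14, 124, 65, 67]
print(fcfs(q, 53, True), sstf(q, 53, True), scan(q, 53, True), c_scan(q, 53))
# 98 65 65 65
```

The N-step scan is the same selection rule as SCAN, applied only to the batch of requests that had already arrived when the current sweep began.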

In a complex system, considerable attention should in practice be given to these factors before thinking about which policy to use.

The N-step scan is the simplest scheduling policy to model using the approach discussed here. While the disk arm is sweeping across the surface to service the previous group of requests, new requests are ordered linearly for the return sweep. No limit is placed on the size of the batch, but at equilibrium we know the expected value of that size to be L, the mean queue length. Furthermore, we know that the resulting request position distribution will be the same as the input distribution, which we assume to be uniform across all the disk cylinders. We also assume the following:

1. Request interarrival times are generated from the exponential distribution.
2. File requests are for equal sized records. This simplifies the analysis. We assume that the total service time distribution (seek time plus rotational delay plus transmission) is general and cannot be described by any simple distribution function. We also assume that the access time (seek time plus rotational delay) dominates the total service time, so that fixed record size (constant transmission time) is a fair approximation for our purpose of a comparative analysis.
3. Only a single disk drive with a dedicated controller and channel is considered, and there is only one movable head per surface. All disk arms are attached to a single boom so they must move simultaneously. A single position of all the read/write heads defines a cylinder.
4. Seek time is a linear function of seek distance.
5. No distinction is made between READ and WRITE requests, and the overhead for scheduling is assumed negligible.

If there are L requests in the queue at equilibrium and C cylinders on the disk, we partition the disk surface into C1 equal regions (as defined below) and assume that at least one request lies in the center of each region. This partition is only valid when seek time is a linear function of distance. C1 is computed as follows: since the distribution of L requests serviced is uniform, the probability that cylinder k has no requests is given by

Pk = (1 − 1/C)^L

The expected number of cylinders with no requests is C0 = C·Pk, so that the expected number of cylinders requiring service is:

C1 = C − C0 = C − C(1 − 1/C)^L          (2)
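The expression above for C1, the expected number of distinct cylinders holding at least one of L uniformly placed requests, can be spot-checked by simulation. A minimal sketch; the values of C and L, the seed, and the trial count are illustrative.

```python
import random

# Spot check of C1 = C - C(1 - 1/C)**L, the expected number of distinct
# cylinders holding at least one of L uniformly placed requests.
C, L = 200, 50
analytic = C - C * (1 - 1 / C) ** L

random.seed(1)
trials = 2000
avg = sum(len({random.randrange(C) for _ in range(L)})
          for _ in range(trials)) / trials

print(round(analytic, 1))         # 44.3
print(abs(avg - analytic) < 1.0)  # True: simulation agrees with the formula
```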
If the incoming requests are placed at random and
the disk arm has equal probability of being at any
cylinder, we know that the expected distance between
an incoming request and the current position of the
disk arm is approximately C/3 for large C. Typically,
C ≈ 200 for currently available disks. In Figure 1 we see
the possible paths taken from the disk arm to the new
request for the expected distance of C/3. The expected
number of requests serviced before the new request is
serviced is L, and the mean response time is

where Ts is the expected service time per request and
Tsw is the expected sweep time from one extreme of the
disk surface to the other.
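The C/3 approximation used above can be checked directly: for two independent, uniformly random cylinder numbers the exact mean distance is (C² − 1)/3C, which approaches C/3 as C grows. A quick sketch, with C, seed, and sample count chosen for illustration only:

```python
import random

# The expected seek distance between two independent, uniformly random
# cylinders is exactly (C*C - 1)/(3*C), which approaches C/3 for large C.
C = 200
exact = (C * C - 1) / (3 * C)

random.seed(7)
n = 100_000
mc = sum(abs(random.randrange(C) - random.randrange(C)) for _ in range(n)) / n

print(abs(exact - C / 3) < 0.01)   # True: the C/3 approximation
print(abs(mc - exact) < 1.0)       # True: Monte Carlo agrees
```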



The expected service time under the assumptions
listed above was derived by Teorey and Pinkerton12
as follows:

Ts = P(Tsk + T/2 + T/m) + (1 − P)a          (4)

where P is the probability that a seek is required to
service the next request, Tsk is the expected seek time,
T is the rotational time of a disk, m is the number of
sectors per track, t is the number of tracks per
cylinder, and a is the expected rotational delay plus
transmission time for a request on the cylinder at which
the arm is already positioned,

a = T[(mt − 2)(m − 1) + 1]/(2mt(m − 1))

Under our conditions, P = C1/L. Restating (4) we now have

Ts = P(Tsk + T/2 + T/m − a) + a

Also, for a linear seek time characteristic,

Tsk = Tmin + ΔT/C1

where ΔT = Tmax − Tmin, Tmin is the seek time for a
distance of 1 cylinder, and Tmax is the seek time for a
distance of C − 1 cylinders. At equilibrium the requests
arriving during one sweep just balance the L requests
serviced, so we have:

L = λC1(Tmin + ΔT/C1 + T/2 + T/m − a)/(1 − λa)          (9)


Figure 1


Equation (9) computes mean queue length in terms
of the input rate λ, the known disk hardware characteristics, and C1. C1 is, however, a nonlinear function
of L. We solve (9) by estimating an initial value for L
in (2) and iteratively substituting (2) into (9) until
the process converges. Rewriting (9) in terms of (2) we obtain

L = λΔT/(1 − λa) + [λC/(1 − λa)](Tmin + T/2 + T/m − a)[1 − ((C − 1)/C)^L]

Letting K1 = λΔT/(1 − λa) + [λC/(1 − λa)](Tmin + T/2 + T/m − a) and K2 = [λC/(1 − λa)](Tmin + T/2 + T/m − a), we obtain after i iterations:

Li+1 = K1 − K2((C − 1)/C)^Li

Assuming that Li > 0 for all i, and 1 − λa > 0 (no saturation), we have:

Li > 0  ⇒  0 ≤ ((C − 1)/C)^Li < 1  ⇒  K1 − K2 < Li+1 ≤ K1

and Li > Li−1 ⇒ Li+1 > Li, so the iterates are monotone and bounded; the process therefore converges.
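The iterative solution of (9) with (2), guess L, compute C1, recompute L, repeat, can be sketched as a simple fixed-point loop. Every numeric value below is an illustrative assumption, not a figure from the paper.

```python
# Fixed-point solution of the N-step-scan queue-length relation: C1 from the
# occupied-cylinder formula (2), then L from the service-time balance (9).
C = 200                      # cylinders
Tmin, Tmax = 0.010, 0.130    # seek time (s) at distances 1 and C - 1 (assumed)
dT = Tmax - Tmin
T = 0.025                    # rotation time (s) (assumed)
m = 8                        # sectors per track (assumed)
a = 0.004                    # lumped rotational/transmission constant (assumed)
lam = 30.0                   # arrival rate (requests/s); note lam * a < 1

L = 1.0
for _ in range(200):
    C1 = C - C * (1 - 1 / C) ** L        # expected cylinders requiring service
    L_new = lam * (dT + C1 * (Tmin + T / 2 + T / m - a)) / (1 - lam * a)
    if abs(L_new - L) < 1e-9:            # converged
        break
    L = L_new

print(L > 0)   # True: a positive equilibrium queue length
```

Because the update is an increasing function of L bounded above by K1, the iterates are monotone and the loop converges whenever 1 − λa > 0, matching the sufficient conditions stated above.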










 Ka also exists; each has a
probability of .5.






Figure 6

cylinder k obtaining the next incoming request:

1. Kr > Ka          for Areas 1, 2, 3;

lR = number of requests serviced from Kr to Ka


Area 2:

= ½ (Kr − Ka)(Kr + Ka − 2)/(C(C − 1))

To compute the expected number of cylinders with no requests, we first determine the probability of a given cylinder obtaining the next incoming request. The input distribution is uniform; therefore each arrival of a new request represents a repeated Bernoulli trial of the same experiment. With lR (the number of requests serviced from Kr to Ka) given, for Area 2, by

lR = (Kr − Ka)(Kr + Ka − 2)L′/(C(C − 1))

the probability that cylinder k remains empty is

[1 − (2/C)·((k − 1)/(C − 1))]^L′          for Areas 1, 2, 3

and the expected number of occupied cylinders in that region is

C2 = C/3 − Σ(k=1 to Kr) [1 − (2/C)·((k − 1)/(C − 1))]^lR          for Area 2

C1 = C − Σ(k=1 to C) [1 − (2/C)·((k − 1)/(C − 1))]^L′          for Areas 1, 2, 3


Mean response time



The mean response time is given by

W = Probability{Kr > Ka} · Tsw{Area 2}
  + Probability{Kr < Ka} · Tsw{Areas 1, 3}

This expression is the same as (9) for the N-step scan
except for the meaning of L′ and C1. Solution of (27)
is obtained by iteration.








Sufficient conditions for convergence of the above
procedure for SCAN are L′0 > 0 and 1 − λa > 0. The
proof proceeds as before: letting K1 = (λ/(1 − λa))[ΔT +
C(Tmin + T/2 + T/m − a)] and K2 = (λ/(1 − λa))[Tmin +



Figure 7

Variance of response time
The response time distribution for SCAN is not
intuitively obvious. In order to obtain a close approximation to this distribution we can sample all possible

TABLE II-Ratio of Requests
Serviced per Sweep to Mean
Queue Length for SCAN
L' /L












1.50






T/2+T/m-a] we can substitute (25) into (27) and
obtain after i iterations:

L′i+1 = K1 − K2 Σ(k=1 to C) [1 − (2/C)·((k − 1)/(C − 1))]^L′i
1- - .  (S:> T)J == [CRAS)':) T)]
to form a logical expression involving a single implication where
only the transformed consequent assertion appears on the right
side of the implication.


∧ (B/2 = Q·D/4) ∧ (D/2 = 2^(−k))
∧ (k = nonnegative integer)
∧ (P/Q − D/2 < 0).

TA-71 0582-33

Figure 1-Flow chart representation of simple control program

Two steps must be followed in proving the program

Application of Program-Proving Techniques

with respect to the above four assertions. Step 1 is to
prove that for all paths the assertions are consistent
with the transformation specified by the intervening
code; Step 2 is to establish that the validity of the
correctness and deadlock parts is correctly embedded
in the guessed assertions.
First, in Step 1 the following control paths must
be verified:
1→2, 1→3, 3→4, 3→3.

For purposes of illustration we will outline the proof of
1→3; this outline should enable the reader to verify the
other paths. The path from 1→3 embodies the following





Test: S = 0




Back substitution on q3 leads to the following verification condition:

[(integer S) ∧ (S ≥ 1) ∧ (D = u(−S + 1))
∧ (PENS = u(S) − S)] ∧ (S − 1 = 0)
⊃ [(integer S − 1) ∧ (S − 1 ≥ 0)
∧ (D + 1 = 1) ∧ (PENS = −S + 1)].

The first term of the consequent, integer S − 1, is true
from integer S. The second term is true from S − 1 = 0,
i.e., S = 1. The third term, D = 0, is true from
[D = u(−S + 1)] ∧ (S = 1). The fourth term is true
from [PENS = u(S) − S] ∧ (S = 1). Thus the path 1→3
is verified (with respect to the "guessed" assertions).
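The verification condition just proved can also be checked mechanically by exhausting a small state space. In this illustrative sketch the unit step u is taken as u(x) = 1 for x > 0 and 0 otherwise, which matches the proof steps above; the state ranges are invented for the example.

```python
# Brute-force check of the verification condition for the path 1 -> 3.
def u(x):
    return 1 if x > 0 else 0

def antecedent(S, D, PENS):                 # the "guessed" assertion q1
    return S >= 1 and D == u(-S + 1) and PENS == u(S) - S

def path_test(S):                           # the intervening Test: S - 1 = 0
    return S - 1 == 0

def consequent(S, D, PENS):                 # the back-substituted assertion q3
    return S - 1 >= 0 and D + 1 == 1 and PENS == -S + 1

violations = [(S, D, PENS)
              for S in range(-5, 6)
              for D in (0, 1)
              for PENS in range(-5, 6)
              if antecedent(S, D, PENS) and path_test(S)
              and not consequent(S, D, PENS)]
print(violations)   # []: the verification condition holds on this range
```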
Step 2, establishing that the assertions embody the
desired behavior of the correctness and deadlock parts,
remains to be carried out. The correctness part is
apparent from the assertions by noting that D = 0 or 1.
The deadlock part is satisfied by noting that whenever
PENS ≥ 1, then also D ≥ 1; thus there exists a process
currently in the critical section that will eventually
schedule some deferred process.
As an extension of this simple control program that
we will use in the following sections, consider the
program displayed in Figure 2. The program is a
straightforward extension of the simple single critical
section program discussed above. It can be shown by a
proof similar to that outlined above that access is
granted to only one of the two critical sections at a
time. Thus, control cannot be simultaneously at points
® and 0. The interpretation of P(8) and V(S) is
modified from that described previously, as shown in
Figure 3. The variables PENS1 and PENS2 serve to













TA-71 0582-34
Figure 2-A control program with two critical sections

indicate, respectively, the number of processes pending
on semaphore S at critical sections 1 and 2. The
"CHOOSE" box functions as follows. Either of the two
output branches is chosen at random. If the test in the
selected branch succeeds, then control continues along
the branch; otherwise, control is passed to the other
branch. Note that the relation S < 1 ensures that control







[Flow chart: modified P(S) and V(S) operations; S is decremented and
tested, PENS1 and PENS2 are incremented or decremented, and deferred
processes are rescheduled]


The simple program of Figure 1 reveals, although
only in a trivial manner, the possibilities for parallel
activity that we wish to exhibit. For example, in Figure
1 it is possible for control to reside simultaneously in the
critical section (point 0) and at point CD. The assertion
we applied at point CD reflects the possibilities for
multiple points of control in that the variable relationships correspond to control being only at point CD,
simultaneous at points CD and 0, or simultaneous at
points CD, ®, 0. (It is assumed that processors are
available to execute any code associated with the critical
section as well as with the P(S) and V(S) blocks.) In
proving the program we did not require any new
formalisms beyond those associated with the uniprocessing situation since hardware locks are so constituted that the P and V operations are not simultaneously executed.
A more general situation is displayed in Figure 4.
Here we illustrate portions of two processes, A and B,
with interprocess communication achieved via the
semaphore S. The particular model of computation that
we will assume is as follows:
Assume that at periodic intervals calls are made
on sections A or B. The availability of a processor

TA-71 0582-35

Figure 3-Interpretation of P and V for two critical sections

can pass along at least one of the branches because if
S < 1, then PENS1 + PENS2 > 1. The purpose of the
CHOOSE box is to place no arbitrary constraints on
the scheduling of deferred processes. The "SPLIT" box
simultaneously passes control along each of its output
branches. The intention here is both to reschedule
another process onto a critical section associated with
semaphore S and to have the process that just finished
the critical section execute the instructions following the V(S).
Wherever two or more parallel paths converge there
is a JOIN box, embodying some rules for combining
the paths. Points 0 and 0 of Figure 3 are really JOIN
boxes. The most apparent such rules are OR (AND)
indicating that control is to proceed beyond the JOIN
box wherever any (all) of the inputs to the JOIN box
are active. Our discussion will apply mainly to the OR
rule, but is easily extended to the AND case.








[Flow chart: assignments Y1 ← f(Y2), Y2 ← h(Y2), Y3 ← g(Y2);
decrements PENS1 ← PENS1 − 1 and PENS2 ← PENS2 − 1; TEST and
SPLIT nodes]


Figure 4-Program to illustrate assertion interpretation


to commence processing of the calls is always
assumed to exist. If two or more processors
attempt simultaneous reference to a variable or
operator, the selection of the processor that
achieves access is made arbitrarily. If execution
is deferred, say, at point @, subsequent to the
P(M) operation, the affected processor is
presumably freed to handle other tasks. When
the corresponding V(M) operation is carried out,
scheduling a deferred process, a processor is
assumed to exist to effect the processing.
With reference to this program and the assumed
model of parallel computation, we will illustrate approaches to the placement of assertions and to proving
the consistency of the assertions relative to intervening
program statements.
Assertion placement

Since we are assuming a parallel/multiprocessing
environment, there are potentially many points in the
flow chart at which a processor can be in control. For
example, in Figure 4 control can be simultaneous at
points CD, ®, and 0. However, we will assume that the
role of the P(M) and V(M) operations is to exclude
simultaneous accesses to the intervening critical section,
provided there are no branches into the critical section.
Hence, control cannot be simultaneous at points CD and
@ . An assertion, for example at point CD, must reflect
the state of the variables of the various processes
assuming that:
(1) Control is at point CD and, simultaneously,
(2) Control is at any possible set of allowable points.
By "allowable" we mean sets of points not excluded
from control by virtue of mutual exclusion. We recall
that for the uniprocessor environment assertions are
placed so that each path in the program is provable. As
an extension of that observation we can show that the
proving of paths in a parallel program can be accomplished provided the following rules are satisfied:
(1) Each loop in the program must be broken by
at least one assertion.
(2) Within a critical section (i.e., one where control
is at only one point at a time and where any
variables in the critical section common to other
portions of the program are themselves in
critical sections under control of the same
semaphore), only a sufficient number of assertions need be applied to satisfy the loop-cutting


rule, (1). We assume that all entries to critical
section are controlled by P, V primitives. If not
then rule (3) below applies.
(3) All points not covered by rule (2) must generally
be supplied with assertions.
(4) Each HALT point and all WAIT points associated with a P operation must contain assertions.
Thus, in Figure 5 a possible placement of assertions is
at points @ , CD, ®, 0, 0, and 0. Note that since the
purpose of synchronization programs is generally to
exclude, by software techniques, more than one process
from critical sections, such programs will not require
the plethora of assertions associated with a general
parallel program. Also note that it is a simple syntactic
check to determine if a given assertion placement
satisfies the above rules.
Once the points where the assertions are to be placed
have been selected and the assertions have been developed, it remains to prove the consistency of assertions. As in the uniprocessor case, the first step in this
proof process is to develop the verification conditions
for each path. For the parallel environment of concern
to us here, we are confronted with the following types
of paths: simple paths, paths with SPLIT nodes, paths
with CHOOSE nodes, and impossible paths. These four
path types are handled below, wherein the rules are
given for developing the verification conditions, and
some indication is given that the parallel program is
correct if these rules are followed. A complete proof of
the validity of the rules is not given because an induction argument similar to that of Floyd's applies here.
Verification condition for a simple path

By a simple path we mean a path bounded by an
antecedent and a consequent assertion, with the intervening program steps being combinations of simple
branch and assignment statements. For such a path the
verification condition is derived exactly as in the
uniprocessor case. That this is the correct rule is seen
by noting that the assertion qa placed at point a in the
program reflects the status of the variables, assuming
that control is at point a and also at any allowable
combination of other points containing assertions. Also
note that because of our assertion placement rules, the
variables involved in the code between a and b are not
modified simultaneously by any other process. Thus,
if a simple path a→b is bounded by assertions qa and
qb and if it is proven that qa ∧ (intervening code) ⊃ qb,
then the path is proven independently of the existence
of control at other allowable points.


Fall Joint Computer Conference, 1972

Verification conditions for paths with SPLIT nodes

Impossible paths

Assume that a SPLIT node occurs in a path, say,
bounded on one end by the antecedent assertion qa.
Recall that at the SPLIT node, separate processors
commence simultaneously on each of the emerging
paths. Also assume that along the two separate paths
emerging from the split nodes the next assertions
encountered are qb and qc, respectively. * In this case the
"path" (which is actually two paths) is proved by
showing that

As mentioned above, not all topological paths in a
program are necessarily paths of control. In effect, what
this means is that no input data combinations exist so
that a particular exit of a Test is taken. Recall that for
antecedent and consequent assertions qa, qb and an
intervening Test, T, the verification condition is
qa ∧ T′ ⊃ qb′, where the prime indicates that back substitution has been carried out. Clearly, if the test always
evaluates to FALSE, then qa ∧ T′ must evaluate to
FALSE, in which case the implication evaluates to
TRUE independent of qb′. (We recall that FALSE ⊃
anything evaluates to TRUE.)

qa ∧ (code between point a and SPLIT node) ∧
(code between SPLIT node and point b)
∧ (code between SPLIT node and point c) ⊃
(qb ∧ qc).

Proving that the program has no deadlock

Note that it is not sufficient merely to consider the path
between, say, a and b, since the transformations between
the SPLIT node and c may influence the assertion qb.
However, note that the variable references along the
two paths emerging from the SPLIT node are disjoint,
by virtue of the rules for selecting assertion points.
Hence the use of back substitution to generate the
verification condition can function as follows. Assertion
qb is back-transformed by the statements between point
b and the SPLIT node, followed by the statements
between point c and the SPLIT node, finally followed
by the statements between the SPLIT node and point a
to generate qb. A similar rule holds for traversing backward from qc to generate qc. Note that the order in
which the two paths following the SPLIT node are
considered is not crucial since these paths are assumed
not to reference the same variables.
Verification condition for a path with a CHOOSE node

Recall that when control reaches a CHOOSE node
having two exits, the exit that is chosen to follow is
chosen arbitrarily. Hence the effect of a CHOOSE node
is simply to introduce two separate simple paths to be
proven. For antecedent assertions qb, qc, what must be
proved is
qa /\ (code between a and b) ::)qb
qa /\ (code between a and c) ::)qc.

Note that one or possibly both of the paths might not be
control paths, but this introduces no difficulties, as we
show below.

* Various special cases are noted, none of which introduce any
particular difficulties. It is possible that qa, qb and qc might not
be all distinct or that another SPLIT node occurs along a path
before encountering a consequent assertion.

For the parallel programs that we are dealing with
deadlock will be avoided if for every semaphore S such
that one or more processes are pending on S, there
exists a process that will eventually perform a YeS)
operation and thus schedule one of the deferred processes. (Weare not implying that every deferred process
will be scheduled, since no assumptions are made on the
scheduling mechanism.) In particular, if a process is
pending on semaphore a, then it is necessary to show
that another process is processing a. If that latter
process is also pending on a semaphore b, it is necessary
to show that b~a, and that a third process is processing
b. If that third process is pending on c, it is necessary
to show that c~b, c~a, and that a fourth process is
processing c, etc.
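The chain condition just described (b ≠ a, then c ≠ b and c ≠ a, and so on) amounts to requiring that the waits-for relation among processes contain no cycle. A sketch of such a cycle check (the graphs below are hypothetical examples, not drawn from the paper):

```python
# Deadlock check as cycle detection: an edge p -> q in the waits-for
# graph means process p is pending on a semaphore that only process q
# will eventually V.  The example graphs are hypothetical.

def acyclic(edges, nodes):
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in nodes}

    def dfs(n):
        color[n] = GRAY
        for m in edges.get(n, ()):
            if color[m] == GRAY:                  # back edge: a cycle
                return False
            if color[m] == WHITE and not dfs(m):
                return False
        color[n] = BLACK
        return True

    return all(dfs(n) for n in nodes if color[n] == WHITE)

# A chain P1 -> P2 -> P3 satisfies the condition; closing the chain
# into a cycle is exactly the deadlock the proof must rule out.
chain = acyclic({"P1": ["P2"], "P2": ["P3"]}, ["P1", "P2", "P3"])
cycle = acyclic({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}, ["P1", "P2", "P3"])
```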
In the next sections we apply the concepts above to
the verification of particular control programs.

Proof of Courtois problem 1

This section presents a proof of a control program
that was proposed by Courtois et al.2 The program is as
follows (RC is an integer with initial value 0; M and Q
are semaphores with initial value 1):

READER:                        WRITER:
  P(M)                           P(Q)
  RC := RC + 1                   (write)
  if RC = 1 then P(Q)            V(Q)
  V(M)
  (read)
  P(M)
  RC := RC - 1
  if RC = 0 then V(Q)
  V(M)
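The program can be transcribed nearly line for line into a language with counting semaphores. The sketch below (Python threads; the thread counts and sleeps are arbitrary choices of ours) instruments the device so that the central claims, at most one writer and never a reader together with a writer, are checked as the code runs:

```python
import threading
import time

RC = 0                       # readers past the entry section
M = threading.Semaphore(1)   # guards RC
Q = threading.Semaphore(1)   # the "device"
readers_on = 0
writers_on = 0
check = threading.Lock()
violations = []

def on_device(is_writer):
    """Occupy the device briefly, recording any mutual-exclusion breach."""
    global readers_on, writers_on
    with check:
        if is_writer:
            writers_on += 1
            if writers_on > 1 or readers_on > 0:
                violations.append("writer overlap")
        else:
            readers_on += 1
            if writers_on > 0:
                violations.append("reader under writer")
    time.sleep(0.001)
    with check:
        if is_writer:
            writers_on -= 1
        else:
            readers_on -= 1

def reader():
    global RC
    M.acquire()
    RC += 1
    if RC == 1:
        Q.acquire()          # first reader locks writers out
    M.release()
    on_device(False)
    M.acquire()
    RC -= 1
    if RC == 0:
        Q.release()          # last reader lets writers back in
    M.release()

def writer():
    Q.acquire()
    on_device(True)
    Q.release()

threads = ([threading.Thread(target=reader) for _ in range(6)]
           + [threading.Thread(target=writer) for _ in range(3)])
for t in threads:
    t.start()
for t in threads:
    t.join()
```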

Application of Program-Proving Techniques








The purpose of the program is to control the access of "readers" and "writers" to a device, where the device serves in effect as a critical section being competed for by readers and writers. If no writers are in control of the critical section, then all readers who so desire are to be granted access to the device. (We show below that the program almost satisfies this goal, although under certain rare circumstances a reader's access might be deferred for a writer even though at the time at which the reader activates the READER section no writer is actually on the device.) A writer is to be granted access only if no other writer or reader is using the device; otherwise, the requesting writer's access is deferred. In particular, any number of simultaneous readers are allowed access provided no writers are already on. The role of the semaphore M is to enforce a scheduling discipline among the readers' access to RC and Q. For our model of parallel computation, it can be shown that the semaphore M is not needed, although its inclusion simplifies the assertion specification.

Figure 5 is a flow chart representation of the program. A few words of explanation about the figure are in order. The V(Q) operator for the reader and the V(M) operator for the upper critical section are assumed to be the generalized V's containing the CHOOSE and SPLIT nodes as discussed in the two previous sections. The other V operators are assumed to contain CHOOSE but no SPLIT nodes. The dashed line emerging from V(Q) indicates a control path that will later be shown to be an impossible path.

Figure 5-Flow chart representation of Courtois problem 1

Associating appropriate variables with each of the P and V operators, the following integer variables and initial values are seen to apply to the flow chart: the semaphores M and Q, with initial value 1, and the remaining variables (RC and the pending and device counters), with initial value 0, where the R and W prefixes to a variable correspond, respectively, to readers and writers and the 1 and 2 suffixes correspond, respectively, to the "upper" and "lower" critical sections associated with semaphore M.

Once again we will divide the proof for this program into a correctness part and a deadlock part. For the correctness part we will establish that

(1) WD = 0 or 1, indicating that at most one writer
at a time is granted access to the device.
(2) If WD=1, then RD=0, indicating that if one
writer is on the device, then no readers are.
(3) If WD=0, then RPENQ=0, indicating that if
no writer is on the device, then a reader is not
held up by semaphore Q. An entering reader
under these circumstances could be held up by
semaphore M, i.e., RPENM1>0. (We will
temporarily defer discussion of this situation.)
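These correctness claims can also be confirmed exhaustively on a finite model. The sketch below (our own toy encoding, not the paper's proof method) explores every interleaving of two readers and two writers by breadth-first search over global states, asserting properties (1) and (2) in each reachable state:

```python
from collections import deque

VARS = ("M", "Q", "RC", "RD", "WD")
INIT = {"M": 1, "Q": 1, "RC": 0, "RD": 0, "WD": 0}

def P(s):                    # enabled only while the semaphore is positive
    return (lambda st: st[s] > 0,
            lambda st: st.update({s: st[s] - 1}))

def V(s):
    return (lambda st: True,
            lambda st: st.update({s: st[s] + 1}))

def add(v, d):
    return (lambda st: True,
            lambda st: st.update({v: st[v] + d}))

def cond_P(v, val, s):       # "if v = val then P(s)"
    return (lambda st: st[v] != val or st[s] > 0,
            lambda st: st.update({s: st[s] - 1}) if st[v] == val else None)

def cond_V(v, val, s):       # "if v = val then V(s)"
    return (lambda st: True,
            lambda st: st.update({s: st[s] + 1}) if st[v] == val else None)

READER = [P("M"), add("RC", 1), cond_P("RC", 1, "Q"), V("M"),
          add("RD", 1), add("RD", -1),          # on, then off, the device
          P("M"), add("RC", -1), cond_V("RC", 0, "Q"), V("M")]
WRITER = [P("Q"), add("WD", 1), add("WD", -1), V("Q")]
PROCS = [READER, READER, WRITER, WRITER]

def explore():
    start = ((0,) * len(PROCS), tuple(INIT[v] for v in VARS))
    seen, work = {start}, deque([start])
    while work:
        pcs, vals = work.popleft()
        st = dict(zip(VARS, vals))
        assert st["WD"] in (0, 1)                # property (1)
        assert st["WD"] == 0 or st["RD"] == 0    # property (2)
        for i, prog in enumerate(PROCS):
            if pcs[i] < len(prog):
                guard, act = prog[pcs[i]]
                if guard(st):
                    nxt = dict(st)
                    act(nxt)
                    succ = (pcs[:i] + (pcs[i] + 1,) + pcs[i + 1:],
                            tuple(nxt[v] for v in VARS))
                    if succ not in seen:
                        seen.add(succ)
                        work.append(succ)
    return len(seen)

reachable = explore()
```

The model treats each flow-chart box as one atomic step; the conditional P and V boxes correspond to the "if RC = 1 then P(Q)" and "if RC = 0 then V(Q)" statements.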
According to the assertion placement rules, each



input, output and wait point must possess an assertion,
each loop must be cut by an assertion, and in addition,
an assertion must be placed at each point along a path
wherein along another parallel path there exists an
instruction referencing variables common to the point in
question. For this program the assertion placement
problem is simplified since all variables, e.g., RC and Q,
common to two or more parallel paths are a part of
critical sections wherein access is granted only to one
such critical section at a time. Hence, only the input-output and loop-cutting constraints must be satisfied,
leading to a possible placement of assertions at the
numbered points in Figure 5. Note that point 4 does
not require an assertion, but since it represents a control
point where readers are on the device, it is an interesting
reference point.
The assertions associated with all 11 control points
are indicated in Table I. The assertion labelled GLOBAL
is intended to conjoin with the other 11 assertions. The
appearance of (i) at the beginning of a disjunctive
term in q2, q3, q8, q9 indicates that the first (i) terms are
the same as in ql. Thus, for example, in the first disjunctive term of assertion q2, the first six conjunctive
terms are the same as in the first disjunctive term of ql,
but the seventh and eighth terms are different, as indicated.
It is worthwhile discussing our technique for specifying the assertions-we will provide sample proofs
later on to attest to the validity of the assertions. In
specifying the assertion at a point a, we assumed, of
course, that control is at a and then attempted to guess
at which other points control could reside. Variable

relationships based on this case analysis were then
derived, and then the expressions were logically simplified to diminish the proliferation of terms that resulted.
For example, in assertion ql, the first disjunctive term
corresponds to the case: no writers on the device, i.e.,
control is not at @. The second disjunctive term corresponds to the case of control at @. With regard to the
second term if control is hypothesized at @, it is also
guessed that control could possibly be at 0, ([), and

It remains to verify all the paths bounded by the
11 assertions. The paths so defined are:
1→2; 1→3; 3→4; 3→(5, 3); 3→(5, 7); 5→6; 5→7;
7→8 [RC≠0]; 7→7 [RC≠0]; 7→3 [RC≠0];
7→(5, 3) [RC=0]; 7→(5, 7) [RC=0]; 7→(5, 8) [RC=0];
7→(10, 3) [RC=0]; 7→(10, 7) [RC=0]; 7→(10, 8) [RC=0];
1→9; 1→10; 10→11; 10→10; 10→5; 10→(5, 3); 10→(5, 7).

A brief discussion of the symbolism is in order. For
example, the path 3→(5, 3) commences at 3, and then
splits at the SPLIT node of V(M) into two paths,
one leading to 5 and the other to 3. The path 7→(10, 3) [RC=0]
indicates that the branch defined by RC = 0 is taken,
followed by a splitting at V(Q), one path leading to 10,
and the other path taking the CHOOSE exit toward
3. Clearly, many of the above paths are impossible
paths, as revealed by the proof.
We will not burden the reader of this paper with
proofs of all the paths, but we will provide an indication
of the proofs for several of the more interesting paths.

TABLE I-Assertions for Courtois Problem 1

Global: (All variables integer-valued) ∧ (M ≤ 1) ∧ (Q ≤ 1) ∧ (RC ≥ 0) ∧ (RD ≥ 0) ∧ (WD ≥ 0) ∧ (RPENQ ≥ 0) ∧ (WPENQ ≥ 0) ∧ (RPENM1 ≥ 0) ∧ …

q1: [(WD=0) ∧ (RD=RC) ∧ (RPENQ=0) ∧ (WPENQ=u(Q)−Q) ∧ (u(Q)=u(1−RC)) ∧ (RPENM2 ≤ RD) ∧ (RPENM1+RPENM2=u(M)−M)] ∨ [(WD=1) ∧ (RD=0) ∧ (RPENQ=RC) ∧ (WPENQ=−Q−RC) ∧ (RC=u(RC)) ∧ (RPENM2=0) ∧ (RPENM1=u(M)−M) ∧ (M ≤ u(1−RC))]

q2: [(6) ∧ (RPENM1 > 0) ∧ (M < 0)] ∨ …

TABLE II-Steps in Proving the Path 10→(5, 3)

Test: M < 0

Back-substitute q3 and q5 to yield q3', q5'
q5': (WD=1) ∧ (RD+1=RC) ∧ (RPENQ=1) ∧ (WPENQ=−Q−1) ∧ (Q+1 ≤ 0) ∧ (RPENM2 ≤ RD) ∧ (RPENM1+RPENM2 = …)
q3': [(WD=1) ∧ (RD+1=RC) ∧ (RPENQ=1) ∧ (WPENQ=u(Q+1)−Q−1) ∧ (u(Q+1)=u(1−RC)) ∧ (RPENM2 ≤ RD+1) ∧ (RPENM1+RPENM2=u(M+1)−M)] ∨ [(WD=2) ∧ (RD=−1) …]

Tests back-substituted
T': (RPENM1 > 0) ∧ (M < 0) ∧ (RPENQ > 0) ∧ (Q < 0)

Verification Conditions
q10 ∧ T' ⊃ q5'
q10 ∧ T' ⊃ q3'

Sample Proof: Proof of q5' term RPENQ = 1
From q10: (RPENQ = RC) ∧ (RC = u(RC))
Thus RPENQ = 0 or 1
From T': RPENQ > 0
Thus RPENQ = 1

Table II outlines the steps in proving the path
10→(5, 3). At the top of Table II we delineate the steps
encountered along the path. As is readily noted, the
path contains a SPLIT node. To develop the verification
condition, back substitution is required from both q3
and q5 to form q3' and q5'; note that in developing q5'
the statements between the SPLIT node and point 3
must be considered, in addition to the statements
directly between points 10 and 5. To verify the path,
the following two logical formulas must be proved true:
q10 ∧ T' ⊃ q5' and q10 ∧ T' ⊃ q3'. At the bottom of Table II we
outline the few simple steps required to prove the term
(RPENQ = 1) in q5'. The patient reader of this paper
can carry out the comparably simple steps to handle
the remaining terms. Note that q3' is the disjunction of
two terms, one beginning with the term (WD = 1) and
the other with the term (WD = 2). For q10 ∧ T' ⊃ q3' to
be true, it is necessary for only one of the disjunctive
terms to be true. The reader can verify that it is indeed
the first disjunctive term that is pertinent.
As a final note on the verification of paths, consider
the path 10→(5, 7). A little thought should indicate
that this should be an impossible path, since the effect
of control passing to point 7 is to schedule a process
that was deferred at that point; but a process deferred
there corresponds to a reader on the device, and hence
that point could not be reached from point 10, where a
writer is on the device. This is borne out by considering the formula
(q10 ∧ T') for the path in question. In q10 there is the
conjunctive term (RPENM2=0) while in T', the
back-substituted test expression, there is the conjunctive term (RPENM2 < 0). Thus, q10 ∧ T' evaluates
to FALSE, indicating that the path is impossible.
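The same contradiction can be checked mechanically; the sketch below (the integer search range is an arbitrary choice of ours) confirms that the two conjuncts named above admit no common solution:

```python
# q10 contains the conjunct (RPENM2 = 0); the back-substituted test T'
# for the path 10->(5, 7) contains (RPENM2 < 0).  No integer value of
# RPENM2 satisfies both, so q10 AND T' is unsatisfiable and the path
# is impossible.  (The search range below is an arbitrary choice.)

def satisfiable(lo=-5, hi=5):
    return any(x == 0 and x < 0 for x in range(lo, hi + 1))

impossible_path = not satisfiable()
```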
It remains now to prove the correctness and deadlock
conditions by observation of the assertions and the
program itself. The key assertion here is q1 since it
expresses the relationship among variables for all
possible variations of control, e.g., for all allowable
assignments of processors to control points in the
program. On the basis of q1 we can conclude the following with regard to correctness:
(1) WD = 0 or 1, indicating that no more than one
writer is ever granted access to the device.
(2) If WD=l, then RD=O, indicating that if a
writer is on the device, then no reader is.
(3) The issue of a requesting reader not encountering
any (appreciable) delay in getting access to a
device not occupied by a writer is more complicated. From the first disjunctive term of ql



(that deals with the case of no writers on
the device), we note that if WD =0, then
RPENQ = O. Hence, under the assumed circumstances a requesting reader is not deferred by
semaphore Q. However, a requesting reader
could be deferred by semaphore M. In fact, a
requesting reader could be deferred at point ®
while the RD readers on the device emerge from
point 0, and then be scheduled onto the lower
critical section wherein the last emerging reader
performs V(Q) and schedules a writer. The
deferred reader will then be scheduled onto the
upper critical section only to be deferred by Q
at point 0. Although it is an unusual timing of
requests and reader completions that leads to
this situation, it still violates the hypothesis
that a reader is granted access provided no
writer is on the device. * Note that, under the
assumption that (WD = 0) and RPENM2 remains zero while a requesting reader is deferred
by M at point ®, the requesting reader will be
granted access to the device prior to any requesting writers.
We now dispose of the question of deadlock. We need
to demonstrate that, if a process is pending on a
semaphore, then there exists another process that will
eventually perform a V operation on that semaphore.
With regard to semaphore Q, we note from observation
of ql that if RPENQ>O or WPENQ>O, then either
WD = 1 or RD ≥ 1. Thus, if any process is pending on
Q, there exist processes that might eventually do a
V(Q) operation. It remains to dispose of the issue of
these processes themselves pending on semaphores. It
is obvious that a writer on the device must emerge
eventually, at which time it will do a V (Q) operation.
For one reader (or more) on the device, in which case
RPENQ = 0, we have shown that the last reader will
perform a V(Q) operation. A reader could be deferred
by semaphore M, but in this case there is a reader processing lVI that is not deferred by Q and hence must do
a V (1\11) operation.

* We conjecture that there is no solution to this problem without
permitting the conditional testing of semaphores, so that the
granting of access to a writer or reader to the device is decided on
the basis of the arrival time of a reader or writer at the entry point
to the control program. In effect, what the program here accomplishes is to grant a reader access to the device provided it passes
the test: RC = 0 while WD = O. Note that there are other
problems that do not admit to solutions using only P and V
operations unless conditional testing of semaphores is permitted,
e.g., see Patil.15

In this paper we have developed an approach based
on program-proving techniques for verifying parallel
control programs involving P and V type operations.
The proof method requires user-supplied assertions
similar to Floyd's method. We have given a method for
developing the verification conditions, and for abstracting the proof of correctness and nondeadlock from the assertions.
We applied the technique to two control programs
devised by Courtois et al. At first glance it might appear
that the method is only useful for toy programs since
our proofs for the above two programs seem quite
complex. However, in reality the proofs presented here
are not complex, but just lengthy when written out in
detail. The deductions needed to prove the verification
conditions are quite trivial, and well within the capability of proposed program proving systems. * By way of
extrapolation it seems reasonable for an interactive
program verifier to handle hundreds of verification
conditions of comparable complexity. Thus one might
expect that operating systems containing up to 1000
lines of high-level code should be handled by the
proposed program verifier.
We might add that some additional theoretical work
is called for relative to parallel programs and operating
systems. Perhaps the main deficiency of the proofs
presented here is that a suspicious reader might not
believe the proofs. In establishing the correctness of the
programs it was required to carry out a nontrivial
analysis of the assertions. For example, we refer the
reader to the previous section where the subject of a
reader not encountering any delay in access is discussed.
Contrast this with a program that prints prime numbers, wherein the output assertion says that the nth
item printed is the nth prime; if the proof process
establishes the validity of the output assertion there is
no doubt that the program is correct. It is thus clear
that the operating system environment could benefit
from a specification language that would provide a
mathematical description of the behavior of an operating
system. Also some additional work is needed in understanding the impact of structured programming on the
proof of operating systems. We would expect that
structured programming techniques would reduce the
number of assertion points and the number of paths
that must be verified.

* See London16 for a review of current and proposed program
proving systems.


ACKNOWLEDGMENTS

The author wishes to thank Ralph London for many
stimulating discussions on program proving and operating systems and for providing a copy of his proof of
the two programs discussed in this paper. Peter
Neumann, Bernard Elspas and Jack Goldberg read a
preliminary version of the manuscript. Two referees
provided some extremely perceptive comments.




REFERENCES

1 E W DIJKSTRA
The structure of THE multiprogramming system
Comm ACM 11 5 pp 341-346 May 1968
2 P J COURTOIS F HEYMANS D L PARNAS
Concurrent control with "readers" and "writers"
Comm ACM 14 10 pp 667-668 October 1971
3 R W FLOYD
Assigning meanings to programs
In Mathematical Aspects of Computer Science J T Schwartz (ed) Vol 19 Am Math Soc pp 19-32 Providence Rhode Island 1967
4 P NAUR
Proof of algorithms by general snapshots
BIT 6 4 pp 310-316 1966
5 Z MANNA
The correctness of programs
J Computer and System Sciences 3 2 pp 119-127 May 1969
6 R L LONDON
Computer programs can be proved correct
In Proc 4th Systems Symposium-Formal Systems and Nonnumerical Problem Solving by Computer R B Banerji and M D Mesarovic (eds) pp 281-302 Springer-Verlag New York 1970
7 R L LONDON
Proof of algorithms, a new kind of certification (Certification of Algorithm 245, TREESORT 3)
Comm ACM 13 6 pp 371-373 June 1970
8 R L LONDON
Correctness of two compilers for a LISP subset
Stanford Artificial Intelligence Project AIM-151 Stanford California October 1971
9 B ELSPAS K N LEVITT R J WALDINGER A WAKSMAN
An assessment of techniques for proving program correctness
ACM Computing Surveys 4 2 pp 97-147 June 1972
10 E A ASHCROFT Z MANNA
Formalization of properties of parallel programs
Stanford Artificial Intelligence Project AIM-110 Stanford California February 1970
11 A N HABERMANN
Synchronization of communicating processes
Comm ACM 15 3 pp 177-184 March 1972
12 J C KING
A program verifier
PhD Thesis Carnegie-Mellon University Pittsburgh Pennsylvania 1969
13 D I GOOD
Toward a man-machine system for proving program correctness
PhD Thesis University of Wisconsin Madison Wisconsin 1970
14 Research in interactive program-proving techniques
Stanford Research Institute Menlo Park California May 1972
15 S S PATIL
Limitations and capabilities of Dijkstra's semaphore primitives for coordination among processes
MIT Project MAC Cambridge Massachusetts February 1971
16 R L LONDON
The current status of proving programs correct
Proc 1972 ACM Conference pp 39-46 August 1972
17 R C HOLT
Comments on the prevention of system deadlocks
Comm ACM 14 1 pp 36-38 January 1971


Proof of Courtois problem 2
Figure 6 displays the flow chart of the second control
problem of Courtois et al. 2 The intent of this program is
(1) similar to problem 1 in that the device can be shared
by one or more readers, but a writer is to be granted
exclusive access; (2) if no writers are on the device or
waiting for access, a requesting reader is to be granted
immediate access to the device; and (3) if one or more
writers are deferred, then a writer is to be granted
access before any reader that might subsequently
request access. As we show below, a formal statement
of the priorities can be expressed in terms of the variables of Figure 6. Also, as in problem 1, the intent of
the program is not quite achieved relative to the
receiving of requests at the entry points of the reader
and writer sections.
It is noted that the program contains semaphores
L, M, N, Q, S, all of which have initial value 1, and
"visible" integer variables RS, RD, RC, WS, WD, WC,
all of which have initial value 0. In addition, as in
problem 1, there are the variables associated with the
various P and V operations. As in problem 1, the V
operators, with the exception of V(L) and those at
points @ and @, embody both the SPLIT and CHOOSE
nodes; V(L) has only the SPLIT node, and the final V's
have only CHOOSE nodes. The dotted control lines
indicate paths that can be shown to be impossible.
The numbered points on the flow chart are suitable
for assertion placement in that each loop is cut by at
least one assertion and all commonly referenced variables are contained within critical sections. There are
several approaches toward deriving the assertions, but
once again the most sensible one involves case analysis.


Figure 6-Flow chart representation of Courtois problem 2

From the view of control at point 1, we have derived the assertion q1 of the form q1 = C1 ∧ (a1 ∨ [a2 ∧ (b1 ∨ b2)]), wherein a1 corresponds to a writer not processing S, i.e., WS = 0, and [a2 ∧ (b1 ∨ b2)] corresponds to a writer processing S, i.e., WS = 1. The assertion q1 listed in Table III reflects this case analysis. The global assertion C1 describes the domain of the individual variables and is common to all assertions for the program. It was convenient to decompose the second disjunctive term into two disjunctive terms, b1, b2, corresponding to the reader processing Q and the reader not processing Q. A similar case analysis for the a1 term is embedded in the conjunctive terms. Note that, as in problem 1, the prefixes W, R refer to writer and reader and the suffixes 1 and 2 refer to the upper and lower critical sections.

We will not burden the reader of the paper with a listing of the assertions at all points or with a proof of the various paths; the proof is quite similar to that illustrated for problem 1. However, it is of interest to abstract from q1 sufficient information to prove the program's intent. For a discussion of deadlock the reader is referred to Reference 14.

As with problem 1, the decision concerning whether a requesting reader or a requesting writer gains access to the device is based on which one arrives first at the corresponding P(S) operation, not on arrival time of the readers and writers at the corresponding section entry points. This point is discussed in more detail below.

(1) The assertions indicate that any number of readers can share the device provided no writers are on, since if WD = 0, then from a1 we see there are no constraints on RD. The assertions indicate that at most one writer is on the device because from observation of both a1 and a2 we note that WD = 0 or 1.
(2) The assertions indicate, as follows, that a reader's access to the device is not delayed provided no writers are processing S or are on the device, and provided no writers are pending on Q or S. The term a1 indicates that if WS = WD = 0, i.e., no writers are processing S or on the device, and if WPENQ = WPENS = 0, i.e., no writers are pending on Q or S, then RPENS = RPENQ = 0,


TABLE III-Main Assertion for Courtois Problem 2

C1: (All variables integer-valued) ∧ (L ≤ 1) ∧ (M ≤ 1) ∧ (N ≤ 1) ∧ (Q ≤ 1) ∧ (S ≤ 1) ∧ (RC ≥ 0) ∧ (RD ≥ 0) ∧ (WC ≥ 0) ∧ (WD ≥ 0) ∧ (RPENS ≥ 0)

q1: (Writer not processing S)
(RS=u(−S+1)) ∧ (WS=0) ∧ (WD=0) ∧ (WC=u(WC)) ∧ (WPENQ=0) ∧ (RPENQ=0) ∧ (RPENS=0) ∧ (u(WPENS)=u(S)−S) ∧ (RD=RC) ∧ (u(Q)=u(1−RC)) ∧ (WPENS=WC) ∧ (WPENN1=u(N)−N) ∧ (WPENN2=0) ∧ (RPENL=u(L)−L) ∧ (u(L)=u(S)) ∧ (RPENM1+RPENM2=u(M)−M) ∧ (RPENM1 ≤ RD)
(Writer processing S)
{(WS=1) ∧ (RS=0) ∧ (WPENS=0) ∧ (RPENS=−S) ∧ (S=−u(−S)) ∧ (RPENQ=0) ∧ (WPENQ=u(Q)−Q) ∧ (WPENN1+WPENN2=u(N)−N) ∧ (RC=RD) ∧ (RPENL=u(L)−L) ∧ (RPENM1=0) ∧ (RPENM2=u(M)−M) ∧ (L ≤ u(S+1))} ∧ {[(Q ≤ 0) ∧ (RC ≥ 1) ∧ (WD=0) ∧ (WC=−Q) ∧ (WPENN2=0)] ∨ [(RC=0) ∧ (M=1) ∧ (WC=WD+WPENN1+WPENQ) ∧ (WD=u(WD)) ∧ (WD ≥ u(WPENQ))]}


indicating that no reader is deferred by S or Q
from access to the device.
The issue of writer priority will be handled by applying case analysis to q1.
• RPENQ is always 0; thus a V(Q) operation can
only grant access to a deferred writer, never to a
deferred reader.
• RS is 0 or 1; thus, at most, one reader is processing
S. If RS = 1, then RPENS = 0 and WPENS = 0
or 1. This indicates that if a reader is processing S,
the subsequent V(S) operation can only signal a
deferred writer.
• If WS = 1 then WPENS = 0, and there are no
constraints on WPENQ. This indicates that all
deferred writers are pending on Q (or N as discussed below), and since RPENQ=0 a writer must
get access to the device either immediately if
RD = WD = 0, or when the next V(Q) is performed
by either a writer or a reader.
As we mentioned above, the issue of granting access
to a writer or a reader is determined by the arrival time
at P(S). If this is indeed the intent of the program,
then the above discussion serves to prove the correctness of the program. However, there are several important exceptions that deserve discussion. For example,
while a writer is pending on S, all subsequent requesting
writers will be deferred by N. Now these writers should
be granted access to the device before any requesting
readers receive it, which will be the situation under
"normal" timing conditions. The deferred writer, at
point @, will be scheduled by a reader doing V(S), in
which case the writer will perform V(N) and in turn
will schedule a deferred writer. These previously
deferred writers will not get blocked by S but will pass
to P(Q). Of the readers requesting access, one will be
blocked by S and the remainder by L. The only way
this scheduling would not occur as stated would be if
the deferred writer at point @ passed through the
writer section and performed a V(S) operation, thus
scheduling a deferred reader before a writer processing
the upper critical section could get through the first
two instructions.
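For comparison with running code, the sketch below transcribes the classic writers-priority discipline into Python threads. The correspondence with the paper's semaphores (L throttling readers, M and N protecting RC and WC, S the gate writers close, Q the device) is our reading of Figure 6 rather than a listing taken from the paper:

```python
import threading
import time

RC = 0                       # readers on the device
WC = 0                       # writers requesting or using the device
L = threading.Semaphore(1)   # at most one reader queued at the gate S
M = threading.Semaphore(1)   # guards RC
N = threading.Semaphore(1)   # guards WC
S = threading.Semaphore(1)   # gate that the first writer closes
Q = threading.Semaphore(1)   # the device itself
log = []
log_lock = threading.Lock()

def reader(i):
    global RC
    L.acquire()
    S.acquire()
    M.acquire()
    RC += 1
    if RC == 1:
        Q.acquire()
    M.release()
    S.release()
    L.release()
    with log_lock:
        log.append(("r", i))   # reading
    time.sleep(0.001)
    M.acquire()
    RC -= 1
    if RC == 0:
        Q.release()
    M.release()

def writer(i):
    global WC
    N.acquire()
    WC += 1
    if WC == 1:
        S.acquire()            # shut out newly arriving readers
    N.release()
    Q.acquire()
    with log_lock:
        log.append(("w", i))   # writing
    time.sleep(0.001)
    Q.release()
    N.acquire()
    WC -= 1
    if WC == 0:
        S.release()
    N.release()

threads = ([threading.Thread(target=reader, args=(i,)) for i in range(5)]
           + [threading.Thread(target=writer, args=(i,)) for i in range(3)])
for t in threads:
    t.start()
for t in threads:
    t.join()
```

As the discussion above notes, priority is decided at P(S): once a writer has closed the gate S, newly arriving readers queue behind it, one on S and the rest on L.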

Exact calculation of computer network reliability

by E. HANSLER
IBM Research Laboratory
Ruschlikon, Switzerland

G. McAULIFFE
IBM Corporation
Dublin, Ireland

and

R. WILKOV
IBM Corporation
Armonk, New York

The exact calculation of the reliability of the communication paths between any pair of nodes in a distributed computer network has not been feasible for large networks. Consequently, many reliability criteria have been suggested based on approximate calculations of network reliability. For a thorough treatment of these criteria, the reader is referred to the book and survey paper by Frank and Frisch1,2 and the recent survey paper by Wilkov.3

Making use of the analogy between distributed computer networks and linear graphs, it is noted that a network is said to be connected if there is at least one path between every pair of nodes. A (minimal) set of links in a network whose failure disconnects it is called a (prime) link cutset and a (minimal) set of nodes with the same property is called a (prime) node cutset. If a node has failed, it is assumed that all of the links incident to that node have also failed. A cutset with respect to a specific pair of nodes ns and nt in a connected network, sometimes called an s-t cut, is such that its removal destroys all paths between nodes ns and nt.

The exact calculation of Pc[s, t], the probability of successful communication between any pair of operative computer centers ns and nt, requires the examination of all paths in the network between nodes ns and nt. More specifically, if each of the n nodes in any given network fails with the same probability q and each of the b links fails with the same probability p, then Pc[s, t] is approximately given by

    Pc[s, t] = Σi A_{s,t}(i) (1−p)^i p^(b−i).    (1)


In Eq. (1), A_{s,t}(i) is the number of combinations of i links such that if only they are operative, there is at least one communication path between nodes ns and nt. On the other hand, the calculation of the probability Pf[s, t] of a communication failure between nodes ns and nt requires the examination of all s-t cuts. For specified values of p or q, Pf[s, t] is approximately given by

    Pf[s, t] = Σi C^l_{s,t}(i) p^i (1−p)^(b−i).    (2)

For q >> p, a similar expression can be given replacing C^l_{s,t}(i) by C^n_{s,t}(i). The coefficients C^l_{s,t}(i) and C^n_{s,t}(i) denote the total number of link and node s-t cuts of size i. The enumeration of all paths or cutsets between any pair of nodes ns and nt is not computationally possible for very large networks.
For any network G of b links and n nodes, it is easily shown that the order of the number of cutsets is 2^(n−2) whereas the order of the number of paths between any pair of nodes is 2^(b−n+2). For networks having nodes of average degree (number of incident links) greater than four, b > 2n and 2^(b−n+2) > 2^(n−2). Consequently, such networks have a larger number of paths than cutsets.
Computation time would clearly be reduced in such cases by calculating network reliability from cutsets instead of paths. In this case Pc[s, t] can be obtained from Pc[s, t] = 1 − Pf[s, t], where Pf[s, t] can be calculated from Eq. (2). Alternatively,



    Pf[s, t] = P[Q^1_{s,t} ∪ Q^2_{s,t} ∪ … ∪ Q^N_{s,t}],    (3)

where Q^i_{s,t} is the event that all links fail in the ith prime s-t cut and N is the total number of prime cutsets with respect to nodes ns and nt. The calculation of Pf[s, t] from Eq. (2) clearly requires the examination of all s-t cuts. The number of prime s-t cuts is usually much smaller. However, Pf[s, t] is not readily calculated from Eq. (3) because the Q^i_{s,t} are not mutually exclusive events.
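The overlap of the events is easy to exhibit numerically. The sketch below builds a three-link triangle network (our own example), finds its prime link s-t cutsets by brute force, and compares the naive sum of P[Q^i_{s,t}] with the exact probability of their union:

```python
from itertools import combinations

# Triangle network: s = 0, t = 2, three links, each failing with
# probability p (the values are our own example).
LINKS = [(0, 1), (1, 2), (0, 2)]
S, T = 0, 2
p = 0.3

def connected(operative):
    reach, frontier = {S}, [S]
    while frontier:
        u = frontier.pop()
        for x, y in operative:
            for a, c in ((x, y), (y, x)):
                if a == u and c not in reach:
                    reach.add(c)
                    frontier.append(c)
    return T in reach

# All link s-t cuts, then the prime (minimal) ones.
cuts = [set(c) for i in range(1, len(LINKS) + 1)
        for c in combinations(LINKS, i)
        if not connected([l for l in LINKS if l not in c])]
prime = [c for c in cuts if not any(d < c for d in cuts)]

# Naive sum of P[Q^i] versus the exact P[union of the Q^i], the latter
# obtained by enumerating every link state.
naive = sum(p ** len(c) for c in prime)
exact = sum(p ** len(failed) * (1 - p) ** (len(LINKS) - len(failed))
            for i in range(len(LINKS) + 1)
            for failed in combinations(LINKS, i)
            if any(c <= set(failed) for c in prime))
```

Here the naive sum double-counts the state in which all three links fail, so it strictly exceeds the exact failure probability.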
Following Wilkov,4 we shall use Pf = Max_{s,t} Pf[s, t] as an indication of the overall probability of service disruption for a given computer network. For specified values of p or q, Pf depends only on the topology of the network. A maximally reliable network clearly has a topology which minimizes Pf and hence minimizes Max_{s,t} C^l_{s,t}(m) or Max_{s,t} C^n_{s,t}(m) for small (large) values of m when p or q is small (large). Letting X^n_{s,t}(m) and X^e_{s,t}(m) denote the number of prime node and edge s-t cuts of size m, Xn(m) = Max_{s,t} X^n_{s,t}(m) and Xe(m) = Max_{s,t} X^e_{s,t}(m) have been proposed4 as computer network reliability measures. These measures Xn(m) and Xe(m) denote the maximum number of prime node and edge cutsets of size m with respect to any pair of nodes. A maximally reliable network is such that Xn(m) and Xe(m) are as small as possible for small (large) values of m when the probability of node or link failure is small (large).
In the calculation of Xn(m) and Xe(m) for any given
network, all node pairs need not be considered if all
nodes or links have the same probability of failure. It
has been shown5 that in order to calculate Xn(m) and
Xe(m), one need only consider those node pairs whose
distance (number of links on a shortest route between
them) is as large as possible. For a specified pair of nodes ns, nt, X^e_{s,t}(m) can be calculated for all values of m using a procedure given by Jensen and Bellmore.6 Their procedure enumerates all prime link cutsets between any specified pair of nodes in a non-oriented network (one consisting only of full or half duplex links).
It requires the storage of a large binary tree with one
terminal node for each prime cutset. Although these
cutsets are not mutually exclusive events, it has been

suggested6 that Eq. (3) be approximated by

    Pf[s, t] ≈ Σi P[Q^i_{s,t}].

However, it is shown in the following section that no additional computation time is required to actually compute Pf[s, t] exactly.
A simple procedure is described below to iteratively
calculate a minimal set of mutually exclusive events
containing all prime link s-t cuts. This procedure starts
with the prime cutset consisting of the links incident to
node nt. Subsequently, these links are re-connected in
all combinations and we then cut the minimal set of
links adjacent to these that lie on a path between nodes
ns and nt, assuming that the network contains no pendant nodes (nodes with only one incident link). The link
replacements are iterated until the set of links connected to node ns is reached. The procedure is easily
extended to provide for node cutsets as well, and it requires a very small amount of storage since each event
is generated from the previous one. Pr[s, t] is obtained
by accumulating the probabilities of each of the mutually exclusive events.
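For a small network, the exact disruption probability Pr[s, t] can be checked by brute-force enumeration of link-failure states. The sketch below is for illustration only; the iterative procedure above is what avoids this exponential enumeration on networks of practical size:

```python
from itertools import product

def terminal_pair_reliability(links, s, t, q):
    """Exact disruption probability Pr[s, t] for a small network.

    links: list of (u, v) full-duplex links; q: independent failure
    probability of each link.  Enumerates all 2^len(links) failure
    states, so it is feasible only for small networks.
    """
    p_disconnect = 0.0
    for state in product([True, False], repeat=len(links)):
        # Probability of this exact up/down pattern.
        prob = 1.0
        for up in state:
            prob *= (1.0 - q) if up else q
        # Check s-t connectivity over the surviving links.
        alive = [e for e, up in zip(links, state) if up]
        reached = {s}
        frontier = [s]
        while frontier:
            n = frontier.pop()
            for u, v in alive:
                for a, b in ((u, v), (v, u)):
                    if a == n and b not in reached:
                        reached.add(b)
                        frontier.append(b)
        if t not in reached:
            p_disconnect += prob
    return p_disconnect

# Two node-disjoint two-link paths between nodes 0 and 3:
# Pr[s, t] = (1 - (1-q)^2)^2 = 0.0361 for q = 0.1.
links = [(0, 1), (1, 3), (0, 2), (2, 3)]
print(terminal_pair_reliability(links, 0, 3, 0.1))
```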

Procedure I
1. Initialization

Let: N be the set of all nodes except node ns.
     C be the set of all links not incident to node ns.
     M1 = {ns}
     F1 be the set of links incident to both ns and nt
     S1 be the set of links incident to ns but not nt
     b1 be a binary number consisting of only |S1| ones
     i = 1

2. Let:
   Ti be a subset of Si consisting of those elements
      in Si for which the corresponding digit in bi is one.
   Mi+1 be a subset of N consisting of nodes incident
      to the links in Ti.
   N = N − Mi+1.
   Fi+1 be a subset of C consisting of links incident to
      nt and adjacent to any member of Ti.
   Si+1 be a subset of C consisting of links incident to
      nodes in N other than nt and adjacent to any
      member of Ti.
   C = C − (Si+1 ∪ Fi+1).

Exact Calculation of Computer Network Reliability

3. If Si+1 ≠ ∅, then let:
   bi+1 be a binary number with |Si+1| ones
   i = i + 1
   Go to step 2.
   Otherwise, let:

   CS = ∪k [Fk ∪ T̄k ∪ (Sk − Tk)],

   where CS is a modified cutset and T̄k indicates
   that the links in set Tk are connected.
4. Let:
4. Let:
   bi = bi − 1 (modulo 2)

A flow-chart for determining whether a state ought to be saved after task i is
finished and if task j is called next is shown in Figure 2.

If S = H + M, then it is possible to implement rollback and to allow recovery from one error by means of
rollback. The reliability in this case is the probability
of no error in T+H time units (in which case no rollback is necessary) plus the probability of exactly one
error in T+H units followed by a period of M error-free
units in which recovery is taking place:

R(H+M) = e^(−a(T+H)) + [a(T+H)] e^(−a(T+H+M))

By the same argument, if S = H + 2M then two error
recoveries are possible and

R(H+2M) = R(H+M) + ([a(T+H+M)]^2/2!) e^(−a(T+H+2M))

In general,

R(H+nM) = R(H+(n−1)M) + ({a[T+H+(n−1)M]}^n/n!) e^(−a(T+H+nM))

for n = 2, 3, ...
If we are considering delaying the time required to
complete the mission by S units, the Software
Reliability Efficiency index is:

SRE = R(S) − R(0)
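The recurrence for R(H+nM) and the resulting SRE can be evaluated numerically. The sketch below assumes the exponential failure law of the text; the parameter values are illustrative only:

```python
import math

def rollback_reliability(a, T, H, M, n):
    """R(H + n*M): mission reliability when up to n rollback
    recoveries are allowed.

    a: failure rate (mean time between failures is 1/a)
    T: error-free compute time, H: rollback overhead,
    M: maximum recovery time per error.
    """
    R = math.exp(-a * (T + H))  # no error at all in T+H units
    for k in range(1, n + 1):
        # Exactly k errors, each followed by at most M recovery units.
        R += ((a * (T + H + (k - 1) * M)) ** k / math.factorial(k)
              * math.exp(-a * (T + H + k * M)))
    return R

def sre(a, T, H, M, n):
    """SRE = R(S) - R(0), with S = H + n*M and R(0) = exp(-a*T)."""
    return rollback_reliability(a, T, H, M, n) - math.exp(-a * T)

# Illustrative values: a = 0.01 errors/unit, T = 50, H = 2, M = 1.
print(sre(0.01, 50.0, 2.0, 1.0, 3))
```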

Computing the software reliability efficiency index

Let T be the time required to complete the program
if there is no error, and without implementing a rollback method. Let H be the overhead incurred by implementing a rollback procedure. H can be easily computed
for an arbitrary program as shown in Reference 8.
Recollect that the rollback procedure is designed so
that the maximum recovery time will not exceed a given
value M. If the mission is completed in T+S units
rather than T units a "lateness penalty" is incurred
which gets larger as S increases. We shall find the reliability of a system with rollback as a function of S, the
amount of "lateness" permitted. We shall assume
that failures occur according to the exponential failure
law, and the mean time between failures is l/a.
If S = 0 then the program must finish in T time units
without error. The probability of no error in T time
units is e^(−aT). Letting R(S) be the reliability, defined as
the probability of completing a successful mission, we have

R(0) = e^(−aT)

Note that in this analysis undetected and permanent
errors were ignored. They can be included quite simply.
Let Q(S) be the probability of the event that there is
no undetected or permanent error in S units and let it
be independent of other events. Then we have

SRE = Q(S)·[R(S) − R(0)]


Instruction retrial

If an error is detected while the processor is executing
an instruction, the instruction could be retried, if its
operands have not already been modified. This technique is an elementary form of rollback: recovery time
never exceeds the execution time of an instruction, and
overhead is negligible. However, there is a probability
that an error will persist even after instruction retry.
Let this probability be Q. The SRE for this technique
can be computed in a manner identical to that for rollback and has the same form. The SRE for instruction
retrial will in general be higher than that for rollback.

Framework for Hardware-Software Tradeoffs


Deadlock prevention



Prevention of deadlocks is an important aspect of
overall system reliability. Deadlocks may arise when
procedures refuse to give up the resources assigned to
them, and when some procedures demand more resources from the system than the system has left unassigned. Consider a system with one unit each of two
resources A and B, and two procedures I and II. Now
suppose procedure I is currently assigned the unit of
resource A while II is assigned B. Then if procedure I
demands B and II demands A, the system will be in a
deadlock: neither procedure can continue without the
resources already assigned to the other. The hardware
approach to this problem is to buy sufficient resources
so that requests can be satisfied on demand.
Habermann and others 6,7 have discussed methods for
preventing deadlocks without purchasing additional
resources. In these methods sufficient resources are
always kept in reserve to prevent the occurrence of
deadlock. This may entail users being (temporarily)
refused resources requested, even though there are unassigned resources available. Keeping resources in reserve also implies that resource utilization is (at least
temporarily) decreased. An alternative approach is to
allocate resources in such a manner that, even though it
is possible that deadlocks might arise, it is very improbable that such a situation could occur. The tradeoff
here is between the probability of deadlock on the one
hand and resource utilization (or throughput) on the
other. The tradeoff is expressed in terms of the software
reliability efficiency index.
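The reserve-keeping prevention schemes discussed by Habermann can be illustrated with a minimal banker's-style safety check. This is a sketch for illustration; the function name and data layout are assumptions, not taken from the cited methods:

```python
def is_safe(available, allocation, maximum):
    """Banker's-style safety check: grant requests only if some
    completion order lets every procedure finish.

    available:  free units per resource
    allocation: allocation[i][r] = units of resource r held by procedure i
    maximum:    maximum[i][r] = declared maximum claim of procedure i
    """
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, (alloc, claim) in enumerate(zip(allocation, maximum)):
            need = [c - a for a, c in zip(alloc, claim)]
            if not finished[i] and all(n <= w for n, w in zip(need, work)):
                # Procedure i can run to completion and release its units.
                work = [w + a for w, a in zip(work, alloc)]
                finished[i] = True
                progress = True
    return all(finished)

# The two-procedure, two-resource example from the text: one unit each
# of A and B, procedure I holds A, procedure II holds B, and each will
# eventually demand the other resource as well.
allocation = [[1, 0], [0, 1]]
maximum    = [[1, 1], [1, 1]]
print(is_safe([0, 0], allocation, maximum))   # unsafe: deadlock possible
```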


Figure 3a

Summary of software methods

Different methods for improving the overall reliability of a system using software have been discussed. The software reliability efficiency index was suggested as an aid in evaluating software methods.
Techniques for computing SRE were discussed. Similar
techniques can be used for computing SRE for other
software methods.
Triple modulo redundancy

Triple Modulo Redundancy (TMR) was one of the
earliest methods1 suggested for obtaining a reliable system from less reliable components. The system output
(Figure 3) is the majority of three identical components. If only one of the components is in error, the
system output will not be in error, since the majority of

Determining the software reliability efficiency index

The probability P of a deadlock while the mission is
in progress and the time T required to complete the
mission (assuming no deadlock) using a scheme where
resources are granted on request are determined through
simulation. The time (T+H) required to complete the
mission using a deadlock prevention scheme is also determined by means of simulation. If Q(L) is the probability that no malfunctions other than deadlock arise
in L time units, then assuming independence, we have:
SRE = Q(T+H) − Q(T)·(1 − P)



At this time we know of no way of computing H and P

Figure 3b




Fall Joint Computer Conference, 1972

components will not be in error. Thus, the system can
tolerate errors in any one component; note that these
errors may be transient or permanent. In this discussion
we consider only permanent errors.
Computing the hardware reliability efficiency index

Let P be the probability that a permanent malfunction occurs in a given component before the mission is
completed. If failures obey an exponential law, and
if the average time to a failure is 1/a, then P = 1 − e^(−aT),
where T is the time required to complete the mission.
If the system is a discrete transition system (such as a
computer system), then the time required to complete
the mission can be expressed as N cycles (iterations)
where computation proceeds in discrete steps called
cycles. If the probability of failure on any cycle is p
independent of other cycles then
P = 1 − (1 − p)^N

Let v be the probability of a malfunction in the vote-taker before the mission is complete, independent of
other events. The reliability R of a TMR system is the
probability that at least two components and the vote-taker do not fail for the duration of the mission:

R = [(1 − P)^3 + 3(1 − P)^2·P]·(1 − v)
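The expressions for P and R above can be evaluated directly; the sketch below uses illustrative values:

```python
def tmr_reliability(p, N, v):
    """Reliability of a TMR system over an N-cycle mission.

    p: per-cycle failure probability of one component
    N: number of cycles in the mission
    v: probability that the vote-taker fails during the mission
    R = [(1-P)^3 + 3(1-P)^2 * P] * (1 - v), with P = 1 - (1-p)^N.
    """
    P = 1.0 - (1.0 - p) ** N        # component fails before mission ends
    return ((1.0 - P) ** 3 + 3.0 * (1.0 - P) ** 2 * P) * (1.0 - v)

# A component surviving one mission with probability 0.9 (P = 0.1)
# and a perfect vote-taker give R = 0.729 + 3(0.081) = 0.972.
print(tmr_reliability(p=0.1, N=1, v=0.0))
```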
If C is the cost of each component, and D the cost of the

vote-taker, the hardware reliability efficiency index is:



Transient errors can also be included quite easily in this analysis.
Hybrid system

Mathur and Avizienis2 discuss an ingenious method of
obtaining reliability by using TMR and spares, see
Figure 4. The spares are not powered-on and will be
referred to as "inactive" components. If at any point
in time, one of the three active components disagrees
with the majority, the component in the minority is
switched out and replaced by a spare. The spare must
be powered-up and loaded; one method of loading the
component is to use rollback and load the component
with the last saved error-free state, and begin computation from that point. If at most one component fails











Figure 4-Hybrid System (5, 3) with spare units

during a cycle and if the vote-taker is error-free, this
system is fail-safe until all the spares are used up, i.e.,
the system output will not be in error. Consider a comparison of a system with three active units and two
spares with another system which has five active units.
If at most one unit can fail at a time then the majority
is always right and the system with three active units
is at least as good as a system with five active units
(since a majority of two active units is as right as a
majority of four). Thus if at most one unit fails at a
time, the number of active units need never exceed
three; additional units should be kept as spares. Of
course in digital computer systems where computation proceeds in discrete steps such as cycles, iterations, instruction-executions, task-executions, etc., it is
possible, though improbable, that more than one unit
may fail in a single step. In this case, an analysis which
assumes that at most one active unit can fail at a time
is an approximation to the real problem.
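A minimal discrete-step Markov sketch of this situation follows, for an assumed hybrid configuration with three active units and two spares, in which one per-step failure is repaired from the spare pool and two or more simultaneous failures (or a failure with no spare left) are treated as fatal. The model is illustrative, not the exact process of Figure 5:

```python
def hybrid_reliability(P, N):
    """P(system has not failed after N computation steps) for an
    illustrative hybrid model: 3 active units plus 2 spares.

    Each active unit fails on a step with probability P, independently.
    One failure is repaired from the spare pool; two or more failures
    in the same step, or a failure with no spare left, are fatal.
    """
    no_fail = (1 - P) ** 3            # no active unit fails this step
    one_fail = 3 * P * (1 - P) ** 2   # exactly one fails (repairable)
    multi = 1 - no_fail - one_fail    # two or more fail at once
    # State probabilities: 5, 4 or 3 fault-free units, and failed F.
    s5, s4, s3, f = 1.0, 0.0, 0.0, 0.0
    for _ in range(N):
        s5, s4, s3, f = (
            s5 * no_fail,
            s5 * one_fail + s4 * no_fail,
            s4 * one_fail + s3 * no_fail,
            f + (s5 + s4) * multi + s3 * (one_fail + multi),
        )
    return 1.0 - f  # probability of not being in state F at step N

print(hybrid_reliability(P=1e-3, N=1000))
```

The state-probability update is the power iteration of the transition matrix of Figure 5, so no matrix library is needed.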
Computation of the hardware reliability efficiency index

Mathur and Avizienis (op cit) assume that malfunctions occur according to an exponential failure law. A
consequence of this assumption is that at most one unit



curve of reliability as a function of N is shown in Figure
8. Let RH be the reliability of the hybrid system, C the
cost of each unit and D the cost of the vote-taker. The
hardware reliability efficiency index with two spares is


Self-purging system

Markov diagram of a hybrid configuration

Figure 5

can fail at a given instant which in turn implies that the
majority is always right. Now consider what happens if
the improbable event does occur and the majority is in
error and the minority is correct. The correct minority
unit will be switched out to be replaced by a spare which
is powered up and initialized. A comparison with the
other two active units will show that the powered-on
spare is in the minority, and it will in turn be switched
out to be replaced by yet another spare and so on. Eventually all the spares will be used up and the system will
crash. Thus even though the probability of failure of two
units in one iteration is indeed small, the consequence
of this improbable event is catastrophic. Hence we feel
that in calculating SRE it is important to back up the
Mathur-Avizienis study of this ingenious method with
an analysis that does not assume that simultaneous
failures never occur.
In this analysis we will assume that computation proceeds in discrete steps called tasks; a task may consist
of several instructions or a single instruction. Key
variables of the active units are compared at the end of
a task completion, and the minority element, if any, is
switched out. Let the probability of failure of a unit
on any step of the computation be P, independent of
other units and earlier events. A discrete-state, discrete-transition Markov process may be used to model
this system. A Markov state diagram is shown in Figure
5. If the system is in state F, then a system failure has
already occurred. The reliability of the system is the
probability that the system is not in state F at the Nth
iteration, where N is the number of computation steps
required in the mission. The reliability can be computed analytically from the Jordan normal form. A

Consider a self-purging system shown in Figure 6.
Initially there are five active units and no spares. If
at any instant the vote-taker detects a disagreement
among the active units, the units whose outputs are in
the minority are switched out, leaving three active,
error-free units. If the failure rates for active and passive units are the same, the self-purging system will
tolerate two simultaneous failures, which may be
catastrophic for the hybrid system.
Computation of the hardware reliability efficiency index

In this analysis we shall assume that computation
proceeds in discrete steps, as in the analysis for the


Figure 6-Self-purging system with 5 units



reliability of the system is the probability that the system is not in state F on the Nth computation step. A
curve showing the reliability of this system as a function of N is shown in Figure 8. Let Rs be the reliability
of a self-purging system with five active units initially.

Rs − (1 − P)^N

If the cost of power supplies is included, the HRE for the
hybrid system is larger than that for self-purging.

Markov diagram of a self-purging configuration

Figure 7

hybrid system. Let P be the probability of failure of a
unit on a computation step, independent of other units
and earlier steps. A Markov state diagram for this
process is shown in Figure 7. As in the hybrid case the

Summary of hardware methods

TMR, hybrid, and a system called a self-purging
system were discussed. Some of the problems of approximating these systems as continuous transition
systems were analyzed. Techniques for obtaining the
hardware reliability efficiency indices were presented.
Similar techniques can be used for other hardware
methods.




We have attempted to develop a set of simple indices
which may be useful in comparing different techniques
for achieving reliability. We feel that an important research and pedagogical problem is to develop a more
comprehensive, sophisticated framework. Models for
rollback and discrete transition models for hybrid and
self-purging systems were discussed briefly.






This research was supported in part by NSF grants
GJ-35109 and GJ-492.





Figure 8


Probabilistic logics and the synthesis of reliable organisms
from unreliable components
Automata Studies p 43-98 Princeton University Press
Princeton N J 1956
Reliability analysis and architecture of a hybrid-redundant
digital system: Generalized triple module redundancy with
Proc Spring Joint Computer Conference 1970

Framework for Hardware-Software Tradeoffs

Redundancy for better maintenance of computer systems

Computer Design pp 50-52 January 1969
Self-repair in a T M R computer

Computer Design pp 54-57 February 1969
Hardware-software tradeoffs in the design of reliable computers

Master's thesis in the Department of Computer Sciences
University of Texas December 1971
Prevention of system deadlocks

Comm ACM Vol 12 No 7 July 1969
The coordination of multiple processes in computer operating


Dissertation Computer Sciences Department University of
Texas at Austin 1970
Optimal rollback

IEEE-C Vol C-21 No 6 pp 546-556 June 1972
Considerations of software protection and recovery from
hardware failures

Proc FJCC 1968 AFIPS pp 29-37
Error recovery through programming

Proc FJCC 1968 AFIPS pp 39-43
On the harmonious cooperation of abstract machines

Thesis Mathematics Department Technological University
Eindhoven The Netherlands 1967

Automation of reliability evaluation procedures through
CARE-The computer-aided reliability estimation program*
University of Missouri
Columbia, Missouri


Unifying notation

The large number of different redundancy schemes
available to the designer of fault-tolerant systems, the
number of parameters pertaining to each scheme, and
the large range of possible variations in each parameter
call for automated procedures that would enable the
designer to rapidly model, simulate and analyze preliminary designs and through man-machine symbiosis
arrive at optimal and balanced fault-tolerant systems
under the constraints of the prospective application.
Such an automated procedural tool which can model
self-repair and fault-tolerant organizations, compute
reliability theoretic functions, perform sensitivity
analysis, compare competitive systems with respect to
various measures and facilitate report preparation by
generating tables and graphs is implemented in the
form of an on-line interactive computer program called
CARE (for Computer-Aided Reliability Estimation).
Essentially CARE consists of a repository of mathematical equations defining the various basic redundancy
schemes. These equations, under program control, are
then interrelated to generate the desired mathematical
model to fit the architecture of the system under
evaluation. The math model is then supplied with
ground instances of its variables and then evaluated to
generate values for the reliability theoretic functions
applied to the model.
The math models may be evaluated as a function of
absolute mission time, normalized mission time, nonredundant system reliability, or any other system
parameter that may be applicable.

A unifying notation, developed to describe the
various system configurations using selective, massive,
or hybrid redundancy is illustrated in Figure 1.
N refers to the number of replicas that are made
massively redundant (NMR) ; S is the number of spare
units; W refers to the number of cascaded units, i.e.,
the degree of partitioning; R( ) refers to the reliability
of the system as characterized in the parentheses;
TMR stands for triple modular redundant system
(N = 3); NMR stands for N-tuple modular redundancy.
A hybrid redundant system H(N, S, W) is said to
have a reliability R(N, S, W). If the number of spares
is S = 0, then the hybrid system reduces to a cascaded
NMR system whose reliability expression is denoted by
R(N, 0, W) ; in the case where there are no cascades,
it reduces to R(N, 0,1), or more simply to R(NMR).
Thus the term W may be elided if W = 1. The sparing
system R (1, S) consists of one basic unit with S spares.
Furthermore, the convention is used that R* indicates
that the unreliability (1 − Rv) due to the overhead
required for restoration, detection, or switching has
been taken into account, e.g., R*(NMR) = Rv·R(NMR);
if the asterisk is elided then it is assumed that the overhead has a negligible probability of failure. This proposed notation is extendable and can incorporate a
number of functional parameters in addition to those
shown here by enlarging the vector or lists of parameters
within the parentheses, e.g., R (N, S, W, ... , X, Y, Z).
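As an illustration of the notation (a sketch, not CARE itself), the reliability of a W-partitioned, Z-series TMR system with restoring-organ reliability Rv can be evaluated as:

```python
def r_tmr_partitioned(R, W=1, Z=1, Rv=1.0):
    """R*(3, 0, W) for a series string of Z partitioned TMR systems.

    R: simplex reliability of one complete unit, so each of the W
    partitions has reliability R**(1/W).  A partition survives if at
    least 2 of its 3 replicas survive, and its restoring organ (voter)
    survives with probability Rv.
    """
    r = R ** (1.0 / W)                  # reliability of one partition
    majority = 3 * r**2 - 2 * r**3      # at least 2-of-3 replicas agree
    return (Rv * majority) ** (W * Z)

# Unpartitioned TMR with R = 0.9: 3(0.81) - 2(0.729) = 0.972.
print(r_tmr_partitioned(0.9))
```

Note that r_tmr_partitioned(R) improves on the simplex reliability R only when R > 0.5, the familiar TMR crossover.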
Existing reliability programs

Some reliability evaluation programs, known to the
author, are the RCP, the RELAN, and the REL70.
RCP1,2 is a reliability computation package
developed by Chelson (1967). This is a program which

* The work presented here was carried out while the author was
with the Jet Propulsion Laboratory, California Institute of
Technology, and under Contract No. NAS7-100, sponsored by the
National Aeronautics and Space Administration.






repository of equations is extendable. Dummy routines
are provided wherein new or more general equations
may be placed as they are developed and become
available to the fault-tolerant computing community.
For example, the equation developed by Bouricius,
et al., for standby-replacement systems embodying
the parameters C and Q has been bodily incorporated
into the equation repository of CARE.





Figure 1-Unifying notation

can model a network of arbitrary series-parallel combinations of building blocks and analyzes the system
reliability by means of probabilistic fault-trees. RELAN3
is an interactive program developed by TIME/WARE
and is offered on the Computer Sciences Corporation's
INFONET network. RELAN, like RCP, models arbitrary series-parallel combinations but in addition allows
a wide choice (any of 17 types) of failure distributions.
RELAN has concise and easy to use input formats and
provides elegant outputs such as plots and histograms.
REL704 and its forerunner REL5 developed by Bouricius,
et al., are interactive programs developed in APL/360.
Unlike RCP and RELAN, REL70 is more adapted for
evaluating systems other than series-parallel such as
standby-replacement and triple modular redundancy.
It offers a large number of system parameters, in
particular C the coverage factor defined as the probability of recovering from a failure given that the failure
exists and Q, the quota, which is the number of modules
of the same type required to be operating concurrently.
REL 70 is primarily oriented toward the exponential
distribution though it does provide limited capabilities
for evaluating reliability with respect to the Weibull
distribution; its outputs are primarily tabular. Since
APL is an interpretive language, REL is slow in operation; however, its designers have overcome the speed
limitation by not programming the explicit reliability
equations but approximate versions6 which are applicable to short missions by utilizing the approximation
(1 − exp(−λT)) ≈ λT for small values of λT.
The CARE program is a general program for evaluating fault-tolerant systems, general in that its reliability theoretic functions do not pertain to anyone
system or equation but to all equations contained in its
repository and also to complex equations which may be
formed by interrelating the basic equations. This

CARE consists of some 4150 FORTRAN V statements and was developed on the UNIVAC 1108 under
EXEC 8 (version lIe). The particular FORTRAN V
compiler used was the Level 7E having the modified
2/3/4 argument ENCODE-DECODE commands. The
amount of core required by the unsegmented CARE is
64K words. The software for graphical outputs is
designed to operate in conjunction with the Stromberg
Carlson 4020 plotter. The software enabling three-dimensional projections, namely the Projectograph
routines,7 are a proprietary item of Technology Service Corporation.
In addition to the Jet Propulsion Laboratory, the
originator, currently there are three other users of
CARE, namely NASA Langley Research Center (a
FORTRAN II version operational on a CDC 3600),
Ultrasystems Corp. (operational on a UNIVAC 1108
under EXEC II), and MIT Draper Laboratory. The
CARE program, minus the Projectograph routines, has
been submitted to COSMIC** and is available to
interested parties from them along with users manuals.
Its reference number at COSMIC is NPO-13086.
CARE's repository of equations

The equations residing in CARE, based on the
exponential failure law, model the following basic
fault-tolerant organizations:
(1) Hybrid-redundant (N, S) systems.8,9
(a) NMR (N, 0) systems.10
(b) TMR (3, 0) systems.10
(c) Cascaded or partitioned versions of the
above systems.
(d) Series string of the above systems.
The equation representing the above family of

** Computer Software Management and Information Center,
University of Georgia, Athens, Georgia 30601.

The Computer-Aided Reliability Estimation Program

systems is a piecewise expression whose form depends on the number
of spares S and on the dormancy factor K, with separate cases for
S = 1, S > 1, and finite or infinite K.

(2) Standby-sparing redundant (1, S) systems.3,10
(a) K-out-of-N systems.8
(b) Simplex systems.
(c) Series string and cascaded versions of the above systems.
The general equation for the above is:

R*(1, S) = {R^(Q/W) Σ_{i=0..S} [(CQλT/W)^i / i!]}^(WZ)

(3) TMR systems with probabilistic compensating failures.10
(a) Series string and cascaded versions of the above systems.
The equation characterizing this system is:

R*(3, 0) = {Rv[3R^(2/W) − 2R^(3/W) + 6P(1 − P)R^(1/W)(1 − R^(1/W))^2]}^(WZ)

(4) Hybrid/simplex redundant (3, S)sim systems.11
(a) TMR/simplex systems.8
(b) Series string and cascaded versions of the above systems.
The general equation for this class of systems takes the same
piecewise form, with

R*(3, S)sim = Rv·R(3, S)sim
For the description of the above systems and their
mathematical derivations, refer to the cited references.
These equations are the most general representation of
their systems parameterizing mission time, failure
rates, dormancy factors, coverage, number of spares,
number of multiplexed units, number of cascaded
units, and number of identical systems in series. The
definitions of these parameters reside in CARE and
may be optionally requested by the user. More complex
systems may be modeled by taking any of the above
listed systems in series reliability with one another.
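A sketch of how one such repository equation might be evaluated, using the standby-sparing form above (perfect switching assumed, spare dormancy ignored; this is illustrative, not CARE's actual code):

```python
import math

def standby_reliability(lam, T, S, C=1.0, Q=1, W=1, Z=1):
    """R*(1, S): standby-sparing system under the exponential law.

    lam: powered failure rate, T: mission time, S: number of spares,
    C: coverage factor, Q: quota of concurrently operating modules,
    W: cascaded partitions, Z: identical systems in series.
    """
    R = math.exp(-lam * T)                  # simplex reliability
    partition = R ** (Q / W) * sum(
        (C * Q * lam * T / W) ** i / math.factorial(i)
        for i in range(S + 1)
    )
    return partition ** (W * Z)

# One module with two spares and lam*T = 0.5, full coverage.
print(standby_reliability(lam=0.5, T=1.0, S=2))
```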



TABLE I-Table of Abbreviations and Terms

λ = Powered failure rate
μ = Unpowered failure rate
K = λ/μ = Dormancy factor
T = Mission time
λT = Normalized mission time
R = Simplex reliability
Rs = Dormant reliability, exp(−μT)
S = Number of spares
K = (N − 1)/2 where N is the total number of multiplexed units
Q = Quota or number of identical units in simplex
C = Coverage factor, Pr(recovery/failure)
Rv = Reliability of restoring organ or switching overhead
Z = Number of identical systems in series
W = Number of cascaded or partitioned units
P = Probability of unit failing to "zero"
TMR = Triple modular redundancy
TMRp = TMR system with probabilistic compensating failures
(1, S) = Standby spare system
(N, S) = Hybrid redundant system
(3, S)sim = Hybrid/simplex redundant system
MTF = Mean life
R(MTF) = Reliability at the mean life

Reliability theoretic functions

The reliability equations in the repository may be
evaluated as a function of absolute mission time (T),
normalized mission time (λT), nonredundant system
reliability (R), or any other system parameter that
may be applicable. The set of reliability theoretic
functions defined in CARE are applicable to any of the
equations in the repository. This independence of the
equations from the functions to be applied to the
equations imparts generality to the program. Thus the
equation repository may be upgraded without affecting
the repertoire of functions. The various reliability
theoretic functions useful in the evaluation of fault-tolerant computing systems have been presented in
Ref. 11, where the measures of reliability have been defined,
categorized into the domains of probabilistic measures
and time measures, and their effectiveness compared.
Among the various measures of reliability that the user
may request for computation are: the system mean-life,
the reliability at the mean-life, gain in reliability over a
simplex system or some other competitive system, the
reliability improvement factor, and the mission time
availability for some minimum tolerable mission reliability.
Operational features

Although CARE is primarily an interactive program,
it may be run in batch mode if the user prespecifies the

protocol explicitly. In the interactive mode CARE
assumes minimum knowledge on the user's part. Default
values are provided to many of the items that a user
should normally supply. This safeguards the user and
also makes usage simpler by providing logical default
values to conventionally used parameters. Instructions
provided by CARE are optional thus the experienced
user can circumvent these and operate in fast mode.
Definitions of reliability terms and abbreviations used
in the program may be optionally requested. An optional
"echo" feature that echoes the user's responses back to the
terminal is also provided. A number of diagnostics and
recovery features that save users from some common
fatal errors are in the program.
Model formulation-an example

A typical problem submitted for CARE analysis may
be the following: Given a simplex system with 8 equal
modules which is made fault-tolerant by providing two
standby spares for each module, where each module
has a constant failure rate of 0.5 failures per year and
where the spares have a dormancy factor of 10 and the
applicable coverage factor being 0.99, it is required to
evaluate the system survival probability in steps of
1/10 of a year for a maximum mission duration of 12
years. It is required that the system reliability be compared against the simplex or nonredundant system and
that all these results be tabulated and also plotted. It is
further required that the mean-life of the system as well
as the reliability at the mean-life be computed. It is
of interest to know the maximum mission duration that
is possible while sustaining some fixed system reliability
objective and to display the sensitivity of this mission
duration with respect to variations in the tolerable
mission reliability.
It is also required that the above analysis be carried
out for the case where three standby spares are provided
and these configurations of three and two spares be
compared and the various comparative measures of
reliability be evaluated and displayed.
The above problem formulation is entered into CARE
by stating that Equation 2 (which models standby
spare systems) is required and the pertinent data
(S=2,3; Z=8; K=10; T=12.0; LAMBDA=0.5;
C=0.99; STEP=0.1) is inserted into CARE between
the VARiable namelist delimiters $VAR ... $END.
The above example illustrates the complexity of
problems that may be posed to CARE, and the simplicity with which the specifications are entered. The
reliability theoretic functions to be performed on the
above specified system are acknowledged interactively
by responding YES or NO on the demand terminal to
CARE's questions at the time it so requests.



Figure 2-Formation of complex systems (partitioned primitive system, W = m; primitive systems combined in series, W = m, Z = 2)

The basic equations in CARE's repository define the
primitive systems: (1, S), (N, S), (3, S)sim and TMRp.
Equations representing more complex systems may be
fabricated by combining the primitive systems in
series reliability with one another as shown in Figure 2.
The description of a complex system is entered by
first enumerating the equation numbers of the primitive
systems involved in namelist VARiable1. Thus
"$VAR1; PROD = 1, 2; $END;" states that equation
1 and equation 2 are to be configured in series reliability.
Next, the parameter specifications for these equations
are then entered using the namelist VARiable.
The set of values for any parameter pertaining to a
complex system is stored as a matrix; thus in the general
case of PARAMETER(m, n), n refers to the equation
involved and m is the internal index for the set of values that
will be attempted successively. For example, C(1, 2) =
1.0, 0.99 states that in equation 2 (the equation for
standby-spares system) the value of the coverage
factor should be taken to be 1.0 and having evaluated
the complex system for this value the system is to be
reconsidered with coverage factor being 0.99.
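The successive-values mechanism amounts to evaluating every combination of the listed parameter values. A hypothetical sketch (the parameter names mirror the namelist, but this is not CARE's implementation):

```python
from itertools import product

# Each parameter carries a list of candidate values, as in
# C(1, 2) = 1.0, 0.99; every combination is evaluated in turn.
params = {
    "C": [1.0, 0.99],             # coverage values
    "S": [2, 3],                  # number of spares
    "LAMBDA": [0.01752, 0.0876],  # failure rates, failures/year
}

combinations = [
    dict(zip(params, values)) for values in product(*params.values())
]
print(len(combinations))  # 2 * 2 * 2 = 8 cases, attempted successively
```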


Complex model formulation-an example

It was required to evaluate a system consisting of 8 equally partitioned modules in a standby-spares (1, S) configuration having 2 spares for each module. The 9th module was the hard-core of the system and was configured in a Hybrid redundant (3, S) system having 2 spares (S=2). The coverage on the (1, S) system modules was to be initially considered to be 1.0. The lower bound on the failure rate λ of all the modules had been evaluated to be .01752 failures/year on the basis of parts count. This complex system as specified here was to be evaluated for the worst-case dormancy factors K of 1 and infinity.

On completing the evaluation of the above system, the effect of reducing the coverage to 0.99 was to be evaluated, as were the effects of increasing the number of spares to 3 and of increasing the module failure rates to their upper-bound value of .0876 failures/year. All combinations of these modifications on the original system are to be considered. The mission time is 12 years and evaluations are to be made in steps of 1/10th of a year.

The above desired computations are specified using the VAR namelist thus:

$VAR; T=12.0; STEP=0.1; Z(1,1)=1,
Z(1,2)=8; C(1,2)=1.0, 0.99; N(1,1)=3;
S(1,1)=2,3; S(1,2)=2,3; LAMBDA(1,1)=
.01752, .0876, LAMBDA(1,2)=.01752,
.0876; K(1,1)=1.0, INF, K(1,2)=1.0, INF;

(Note that the semicolons (;) denote carriage returns.) The ease and compactness with which complex systems can be specified in CARE is demonstrated by the above example. The reader will note that the complex system configured in this example corresponds to a STAR-like system having eight functional units in standby-spare mode and a hard-core test-and-repair unit in Hybrid redundant mode (Figure 3).

Figure 3-Configuration for an example of a STAR-like complex model

Fall Joint Computer Conference, 1972

Some significant results pertaining to the behavior of the W-partitioned NMR system (Figure 4) will now be presented. These results pertain to the behavior of reliability theoretic functions of an NMR system, such as its mean life or mean time to first failure (MTF) and the reliability of the system at the mean life, R(MTF).

Figure 4-R(N, 0, W) vs λT as a function of N and W

The reliability theoretic system measure, reliability at the mean life R(MTF), is the reliability
of the system computed for missions or time durations
of length equal to the mean time to first failure of the
system. The behavior of these functions was evaluated under the limiting conditions of the system parameters in order to establish system performance bounds. The results presented here have been both proven mathematically [10] and verified by CARE analysis.
Since it is well known that mean life (MTF) is not a credible measure of reliability performance (e.g., the MTF of a simplex system is greater than the MTF of a TMR system!), another measure, the reliability at the mean life R(MTF), has been used to supplant MTF. This measure essentially uses a typical reliability value of the system, taken at a specific and, hopefully, representative time: the time to first failure of the system, namely its MTF. The foregoing is the rationale for choosing R(MTF) as a viable measure of system reliability.
However, contrary to general belief, this measure R(MTF) is not a good measure for partitioned NMR systems, due to its asymptotic behavior as a function of the number of partitions W. It is proved in [10] that the reliability at MTF of a (3, 0, W) system, in the limit as W becomes very large, approaches the value exp(-π/4) ≈ 0.456 asymptotically from below, and that this bounding value is reached very rapidly; see Figure 5.
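This limiting behavior can be checked numerically. The sketch below assumes the usual series model of a W-partitioned TMR system, W independent majority-voted triples whose modules fail at rate lam/W (a plausible reading of the model; the paper's printed equation is not reproduced here):

```python
import math

def R(t, lam, W):
    # Reliability of W series-connected majority-voted triples, each
    # module failing at rate lam/W (assumed model, not the paper's
    # printed equation).
    r = math.exp(-lam * t / W)
    return (3 * r**2 - 2 * r**3) ** W

def mtf(lam, W, dt=0.001, t_max=50.0):
    # Mean time to first failure by direct numerical integration of R.
    return sum(R(i * dt, lam, W) for i in range(int(t_max / dt))) * dt

for W in (1, 10, 100):
    m = mtf(1.0, W)
    print(W, round(R(m, 1.0, W), 4))
# R(MTF) rises toward exp(-pi/4) ~ 0.4559 from below as W grows.
```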

TABLE II--MTF and R(MTF) as a Function of W

(3, 0, W) System

    W              MTF         R(MTF)
    0 (Simplex)    1/λ         exp(-1) ≈ 0.368
    1 (TMR)        5/(6λ)      ≈ 0.402
    ∞ (3, 0, ∞)    unbounded   exp(-π/4) ≈ 0.456



Some other results observed graphically in Figure 4, the detailed mathematical proofs of which are given in [10], are summarized below. These results follow from the general reliability equation for a W-partitioned NMR system, R(N, 0, W).
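The printed form of this equation is lost in this reproduction. A plausible reconstruction, based on the standard series model of W independent majority-voted partitions of N = 2n+1 modules each, with module reliability R_m = exp(-λT/W) (an assumption, not necessarily the paper's exact printed form), is:

```latex
R(N,0,W) \;=\; \left[\sum_{i=0}^{n} \binom{N}{i}\,(1-R_m)^{i}\,R_m^{\,N-i}\right]^{W},
\qquad R_m = e^{-\lambda T/W},\quad N = 2n+1.
```

For N = 3 this reduces to R(3, 0, W) = (3R_m^2 - 2R_m^3)^W, consistent with the TMR behavior discussed above.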



The construction of the submatrices H64 and A is done by an APL program3 given in the Appendix, with the theory stated in Section III. The submatrix

13) and drop the column if this seven-digit vector has already appeared in a previously taken column. This guarantees that these columns, along with the first 7 columns for check bits, form a single-error-correcting code. This exercise was carried out using an APL computer program, which generated (104, 90) and (172, 154) DEC codes that have separable SEC and can be shortened to handle data bit lengths of 64 and 128. The codes are given in the Appendix. The DED capability is obtained by adding a check bit on the SEC code, which makes a SEC-DED odd-weight-column code.5 The number in front of each column of the H-matrix in the Appendix represents the cyclic position number in the full-length code. These position numbers are used in the algebraic decoding algorithm4 in the error-correction process.

SYSTEM IMPLEMENTATION

There are at least three methods of generating the parity check matrix of a double-error-correcting code. The parity check matrix denoted by H1 has x^i mod m1(x)m3(x) as its ith column (0 origin), where m1(x) and m3(x) are the minimum functions of the field elements α and α^3 of GF(2^7). The parity check matrix H2 generated by the second method has the concatenated vector x^i mod m1(x), x^i mod m3(x) as its ith column. The parity check matrix H3 generated by the third method has the concatenated vector x^i mod m1(x), x^{3i} mod m1(x) as its ith column. The codes generated by these three matrices are not only equivalent but also isomorphic. These three matrices possess different desirable properties. In particular, the matrix H1 possesses the property (1) required for the adaptive correction scheme presently under consideration. The first 14 columns of H1 represent an identity matrix, which corresponds to 14 independently acting check digits. However, any 7 check bits as a group do not provide SEC capability, which is the required property (2).
The matrix H2, on the other hand, can be divided into two parts, where the first group of seven check bits, corresponding to the part column x^i mod m1(x), does provide SEC capability; however, the two groups of check bits do not act independently and hence are not separable. The matrix H3 behaves in the same manner as H2, except that the syndrome in H3 is easily decodable.4

Let us use a simple example for illustration. Figure 1 shows a memory system which contains two basic memory units. Each unit has a (72, 64) SEC-DED code. The following is the parity check matrix for this simple system:

        | [H64 I8]      φ        φ  |
    H = |     φ      [H64 I8]    φ  |        (2)
        |  [A  φ]     [A  φ]     I8 |

where φ denotes an all-zero submatrix, H64 is the first group of 7 columns of the matrix in the Appendix (an additional column is added to make it odd weight), and the A-matrix is the second group of 7 columns of the matrix in the Appendix. Another column is added to these 15 columns to make the overall parity matrix odd weight. This means that the overall code has double-error correction and triple-error detection capability. The encoding follows directly from the H-matrix of Equation (2). The decoding is classified as follows:

1. Any single error in each memory unit can be corrected separately and simultaneously.
2. If a double error is detected in one of the memory units and there is no error indication in the other memory unit for the corresponding word, then this double error can be corrected by the additional 8 check bits.

Figure 1 [block diagram: two memory units ("MEM") and their check bits feeding a decoder, which delivers corrected words to the CPU or channel]
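The single-error correction of decoding class 1 works by syndrome matching: the syndrome of a received word equals the H-column of the erroneous bit. A minimal sketch with the [7, 4] Hamming code (an illustrative stand-in, not the paper's (72, 64) code), where column i of H is the binary representation of i:

```python
def syndrome(word):
    # Syndrome of a length-7 word under the [7,4] Hamming parity-check
    # matrix whose column i is the binary representation of i.
    s = 0
    for i, bit in enumerate(word, start=1):
        if bit:
            s ^= i
    return s

def correct_single(word):
    # A nonzero syndrome names the erroneous bit position directly.
    s = syndrome(word)
    if s:
        word[s - 1] ^= 1
    return word

cw = [0] * 7      # the all-zero word is a valid codeword
cw[4] ^= 1        # inject a single error at position 5
print(syndrome(cw))                   # 5: the syndrome points at the error
print(correct_single(cw) == [0] * 7)  # True: the error is corrected
```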
The decoding of the double errors, as stated in class 2, needs the data-bit portions of both memory units. The data-bit portion of the error-free memory unit is required to cancel its effects in the last 8 syndrome bits. Thereafter, the double-error correction can be done as given in Reference 4.

SUMMARY

An adaptive ECC scheme with the SEC-DED feature can be expanded to the DEC feature in a memory system containing several memory units. The normal memory cycle time remains unaffected, except in the presence of a double error, when extra decoding time is required for the double-error correction procedure. The other major advantage is cost savings in terms of the number of check bits required. If the memory system contains m basic memory units, then 8(m-1) check bits can be saved by using this scheme. The number m is chosen such that the probability of double errors in two words out of a group of m words is very small. Such an adaptive error correction scheme more closely matches the requirements of modern computer memory systems and can be used very effectively for masking faults and reducing the cost of maintenance.

REFERENCES

1 J F KEELEY
System/370-Reliability from a system viewpoint
1971 IEEE International Computer Society Conference, September 22-24 1971, Boston, Massachusetts
2 W W PETERSON
Error correcting codes
MIT Press, 1961
3 A D FALKOFF, K E IVERSON
APL/360 user's manual
IBM Watson Research Center, Yorktown Heights, New York, 1968
4 A M PATEL, S J HONG
Syndrome trapping technique for fast double-error correction
Presented at the 1972 IEEE International Symposium on Information Theory, IEEE Catalog No 72 CHO 578-S IT, 1972
5 M Y HSIAO
A class of optimal minimum odd-weight-column SEC-DED codes
IBM J of Res & Dev, Vol 14, No 4, pp 395-402, July 1970

APPENDIX A-CODE GENERATION PROGRAM
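The column construction x^i mod m(x) that underlies the code-generation program can be sketched as follows (illustrative only; the modulus m(x) = x^7 + x^3 + 1 is an assumed primitive polynomial of degree 7, since the paper's m1(x) is not printed legibly):

```python
def gf2_mod(poly, mod):
    # Reduce a GF(2) polynomial (bitmask, bit k = coefficient of x^k)
    # modulo the polynomial `mod`.
    while poly.bit_length() >= mod.bit_length():
        poly ^= mod << (poly.bit_length() - mod.bit_length())
    return poly

def column(i, mod):
    # x^i mod m(x) over GF(2), by repeated multiply-by-x and reduce.
    r = 1
    for _ in range(i):
        r = gf2_mod(r << 1, mod)
    return r

# Assumed modulus: m(x) = x^7 + x^3 + 1, a primitive degree-7
# polynomial; each column fits in 7 bits, as in the Appendix listings.
M = 0b10001001
print([bin(column(i, M)) for i in range(9)])
```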
The number m is chosen such that the probability of double-errors in two [1] [21 [3] [4) [5] [6) V SEC DEC C M+-1+pG }l+2*(!tf+2 ) V+,~1p 0 Z+l'JpO (j+(MpO),l I+O [7] V+MpQ+(-l4>q)~(Gx.q[M-l]) ~9+(I>M)x«X+(2i«M+2)+V»)€Z) [8] [9) '0123456789*'[(10 10 10 TI).10,V] Z[I]+X I+I+1 [10] [11] [12] [13] [14] ~7+(7x(I=N» 'DONE' ~15 V 86 Fall Joint Computer Conference, 1972 APPENDIX B-PARITY CHECK l\;fATRIX FOR (104, 90) SEC-SEPARABLE DEC CODE SEeDEC 1 0 0 0 0 1 1 0 1 1 1 0 1 1 1 000*10000000000000 001*01000000000000 002*00100000000000 003*00010000000000 004*00001000000000 005*00000100000000 006*00000010000000 007*00000001000000 008*00000000100000 009*00000000010000 010*00000000001000 011*00000000000100 012*00000000000010 013*00000000000001 014*10000110111011 015*11000101100110 016*01100010110011 017*10110111100010 018*01011011110001 019*10101011000011 020*11010011011010 021*01101001101101 022*10110010001101 023*11011111111101 024*11101001000101 025*11110010011001 026*11111111110111 027*11111001000000 028*01111100100000 029*00111110010000 030*00011111001000 031*00001111100100 032*00000111110010 037*00110001010110 03$*00011000101011 039*10001010101110 040*01000101010111 041*10100100010000 042*01010010001000 043*00101001000100 044*00010100100010 045*00001010010001 046*10000011110011 047*11000111000010 050*11011101011110 051*01101110101111 052*10110001101100 053*01011000110110 054*00101100011011 055*10010000110110 056*01001000011011 057*10100010110110 058*01010001011011 059*10101110010110 060*01010111001011 061*10101101011110 065*00101011011011 066*10010011010110 074*10001100000010 075*01000110000001 077*11010100000110 078*01101010000011 083*11101111000111 084*11110001011000 085*01111000101100 086*00111100010110 088*10001001111110 094*11001111110100 095*01100111111010 096*00110011111101 097*10011111000101 098*11001001011001 099*11100010010111 100*11110111110000 101*01111011111000 108*10010110110001 109*11001101100011 110*11100000001010 111*01110000000101 112*10111110111001 
113*11011001100111 114*11101010001000 115*01110101000100 116*00111010100010 117*00011101010001 119*11000010110010 120*01100001011001 124*00110111011100 125*00011011101110 126*00001101110111 Adaptive Error Correction Scheme for Computer Memory System APPENDIX C-(172, 154) SEC-SEPARABLE DEC CODE SEC DEC SECDEC 1 0 1 1 0 1 1 1 1 0 1 1 0 0 0 1 1 000*1000000000000000 001*0100000000000000 002*0010000000000000 003*0001000000000000 004*0000100000000000 005*U000010000000000 006*0000001000000000 007*0000000100000000 008*0000000010000000 009*0000000001000000 010*0000000000100000 011*0000000000010000 012*0000000000001000 013*0000000000000100 014*000000000CO n 0010 015*0000000000000001 01~*1011011110110001 017*1110110001101001 01q*11000001100001~1 011*1101111101110011 020*1101110000001000 021*°110111000000100 022*OOliolll000000l0 023*0001101110000001 024*1011101001110081 025*1110101010001001 026*1100001011110101 l27*1101011011001011 031*1010110000101011 032*1110000110100100 033*0111000011010010 034*0011100001101001 035*1010101110000101 030*1110001001110011 o37*1100011~laOOlnoo 038*0110001101000100 03·::J*QOll00nll0l000l0 040*0001100011010001 041*1011101111011001 045*0110101101111111 04r*1000001000001110 047*010Q0001COOOOlll 048*1001011100110010 04J*0100101110011001 050*1001001001111101 051*1111111010001111 052*1100100011110110 053*0110010001111011 o 5 II * 1 000 n 11 110 0011 0 0 05S*0100001011000110 05C*0010000101100011 057*1010011100000000 058*0101001110000000 05j*0010100111J000JO 06J*Q0010ljOlll00000 I) 61 * ,) 0 (; 0 1 0 10 Q 111 :) 0 :I 0 062*0000J11100111000 ~~7·~1~11-11·111"""~ 063*1001101001001001 069*1111101010010101 070*1100101011111011 071*1101001011001100 072*0110100101100110 073*0011010010110011 074*1010110111101000 075*0101011011110100 076*0010101101111010 077*0001011110111101 078*1011110101101111 079*1110100100000110 08~*0111a10010000011 ~81*100Dll0lllll00CO 082*0100011011111000 083*0010001101111100 o [lll * 'i 0 r) 1800110111110 085*1011001111011110 
187*0181100111101111 088*10011:lllJ1nOOll0 () 8 ') * 0 1 'J (; 11 r; 11 0 1 r) 0 0 11 090*1001000101100000 ,) 91 * ,:; 1 :) 0 1 0 0 0 1 0 11 ~; 0 () 0 092*0010010001011000 093*0001001000101100 094*0000100100010110 096*1011010111110100 097*0101101011111010 098*0010110101111101 099*1010000100001111 100*1110011100110110 101*0111001110011011 102*1000111001111100 103*0100011100111110 105*1010011001111110 107*1001111800101110 108*010011111001~111 109*1001iJ00000111010 111*1001001110111111 11 3 * 0 1111111 () () 11 0 111 114*1001100000101010 115*0100010000010101 11~*100101011nlll0ll 117*1111110101101100 llR*nllllll0l0ll0ll0 11~*OOlllll101011Qll 120*1010100000011100 121*0101010000001110 1 2 2 * .J 0 1 ,1 1 0 10 0 0 0 0 0 111 123*1010001110111010 124*~1()liJOOlalJll0Jl 125*10011111JOOlll0l 126*11111J0000111111 127*1110101110101110 128*0110010111010111 131*1001011011100111 132*1111110011000010 135*1111001111110001 136*1100111001001001 137*1101000010010101 138*1101111111111011 130*1101100001001100 140*0110110000100110 141*0011011000010011 147*0101111010111101 143*1001100011101111 143*1111101111000110 150*0111110111100011 151*1000100101000000 153*0010001001010000 15l*lnll0llQl01JOOll lGO*Oll1111nOlllQono 161*0011101100111000 162*0001110110011100 163*0000111011001110 164*0000011101100111 165*1011110000000010 171*11DllllollCllo00 172*0110111101101100 176~alnlll01J010llln 177*0010111010010111 178*1010008011111010 179*0101000001111101 182*0111110000111011 189*1010001101100001 190*1110011000000001 191*1100010010110001 192*1101010111101001 193*1101110101000101 19 11 * 11 0110 () 100010011 195*1101101100111000 1 96* 0 11 () 11 () 11 00111 f) 0 201*1001100100110001 209*0000110111101110 210*0000011011110111 21G*OOll111010111100 217*0001111101011110 71~*0000111110101111 210*1011000001100110 220*0101100000110011 223*001001101110101i )24*8001001101110101 225*1011111000001011 226*1110100010110100 228*0011101000101101 229*1010101010100111 23J*nll1a~OlUlllJOOl 232*1000111100001001 
233*1111000000110101 234*1100111110101011 236*0110100000110010 240*1100011100000110 242*1000011001110000 243*0100001100111000

Dynamic confirmation of system integrity*

by BARRY R. BORGERSON

University of California
Berkeley, California

INTRODUCTION

It is always desirable to know the current state of any system. However, with most computing systems, a large class of failures can remain undetected by the system long enough to cause an integrity violation. What is needed is a technique, or set of techniques, for detecting when a system is not functioning correctly. That is, we need some way of observing the integrity of a system.

A slight diversion is necessary here. Most nouns which are used to describe the attributes of computer systems, such as reliability, availability, security, and privacy, have a corresponding adjective which can be used to identify a system that has the associated attribute. Unfortunately, the word "integrity" has no associated adjective. Therefore, in order to enhance the following discourse, the word "integral" will be used as the adjective which describes the integrity of a system. Thus, a computer system will be integral if it is working exactly as specified.

Now, if we could verify all of the system software, then we could monitor the integrity of a system in real time by providing a 100 percent concurrent fault-detection capability. Thus, the integrity of the entire system would be confirmed concurrently, where "concurrent confirmation" of the integrity of any unit of logic means that the integrity of this unit is being monitored concurrently with each use.

A practical alternative to providing concurrent confirmation of system integrity is to provide what will be called "dynamic confirmation of system integrity." With this concept, the parts of a system that must be continuously integral are identified, and the integrity of the rest of the system can then be confirmed by means less stringent than concurrent fault detection. For example, it might be expedient to allow certain failures to exist for some time before being detected. This might be desirable, for instance, when certain failure modes are hard to detect concurrently, but where their effects are controllable.

QUALITATIVE JUSTIFICATION

In most contemporary systems, a multiplicity of processes are active at any given time. Two distinct types of integrity violations can occur with respect to these independent processes. One type of integrity violation is for one process to interfere with another process. That is, one process gains unauthorized access to another's information or makes an illegitimate change of another process' state. This type of transgression will be called an "interprocess integrity violation." The other basic type of malfunction which can be caused by an integrity violation occurs when the state of a single process is erroneously changed without any interference from another process. Failures which lead only to intraprocess contaminations will be called "intraprocess integrity violations."

For many real-time applications, no malfunctions of any type can be tolerated. Hence, it is not particularly useful to make the distinction between interprocess and intraprocess integrity violations, since concurrent integrity-confirmation techniques must be utilized throughout the system. For most user-oriented systems, however, there is a substantial difference between the two types of violations. Intraprocess integrity violations always manifest themselves as contaminations of a process' environment. Interprocess integrity violations, on the other hand, may manifest themselves as security infractions or contaminations of other processes' environments.
* This research was supported by the Advanced Research Projects Agency under contract No. DAHC15 70 C 0274. The views and conclusions contained in this document are those of the author and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the Advanced Research Projects Agency or the U.S. Government.

We now see that there can be some freedom in defining what is to constitute a continuously-integral, user-oriented system. For example, the time-sharing system described below is defined to be continuously integral if it is providing interprocess-interference protection on a continuous basis. Thus other properties of the system, such as intraprocess contamination protection, need not be confirmed on a continuous basis.

Although the concept of dynamic confirmation of system integrity has a potential for being useful in a wide variety of situations, the area of its most obvious applicability seems to be fault-tolerant systems. More specifically, it is most useful in those systems which are designed using a solitary-fault assumption, where "solitary fault" means that at most one fault is present in the active system at any time. The notion of "dynamic" becomes clearer in this context. Here, "dynamic" means in such a manner, and at such times, that the probability of encountering simultaneous faults is below a predetermined limit. This limit is dictated not only by the allowable probability of a catastrophic failure, but also by the fact that other factors eventually become more prominent in determining the probability of system failure. Thus, there often comes a point beyond which there is very little to be gained by increasing the ability to confirm integrity. The rest of this paper will concern itself with dynamic confirmation in the context of making this concept viable with respect to the solitary-fault assumption.
DYNAMIC CONFIRMATION TECHNIQUES

In this section and the following section, a particular class of systems will be assumed: those which tolerate faults by restructuring to run without the faulty units. Both the stand-by sparing and the fail-softly types of systems are in this category. These systems have certain characteristics in common; namely, they both must detect, locate, and isolate a fault, and reconfigure to run without the faulty unit, before a second fault can be reliably handled.

Obviously, if simultaneous faults are to be avoided, the integrity of all parts of the system must be verified. This is reasonably straightforward in many areas. For instance, the integrity of data in memory can be rather easily confirmed by the method of storing and checking parity. Of course, checks must also be provided to make sure that the correct word of memory is referenced, but this can be done fairly easily too.1 It is generally true that parity, check sums, and other straightforward concurrent fault-detection techniques can be used to confirm the integrity of most of the logic external to processors. However, there still remain the problems of verifying the integrity of the checkers themselves, of the processors, and of logic that is infrequently used, such as that associated with isolation and reconfiguration.

All too often, no provision is made in a system to check the fault-detection logic. Actually, there are two rather straightforward methods of accomplishing this. One method uses checkers that have their own failure space. That is, they have more than two output states; when they fail, a state is entered which indicates that the checker is malfunctioning. This requires building checkers with specifically defined failure modes. It also requires the ability to recognize and handle this limbo state. An example of this type of checker appears in Reference 2.
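The parity confirmation just described can be sketched in a few lines (illustrative only; real systems do this in hardware): a parity bit is stored with each word and checked on every fetch, and deliberately flipping a stored bit exercises the checker itself.

```python
def parity_bit(word):
    # Even-parity bit over a 32-bit word (hardware computes this per
    # memory word; this software sketch is illustrative only).
    return bin(word & 0xFFFFFFFF).count("1") & 1

def store(memory, addr, word):
    memory[addr] = (word, parity_bit(word))

def fetch(memory, addr):
    word, p = memory[addr]
    if parity_bit(word) != p:
        raise RuntimeError(f"parity error at address {addr:#x}")
    return word

mem = {}
store(mem, 0x10, 0b1011)
print(fetch(mem, 0x10) == 0b1011)   # True: a clean fetch passes the check

# Flip one stored bit to confirm that the checker itself reports:
word, p = mem[0x10]
mem[0x10] = (word ^ 1, p)
try:
    fetch(mem, 0x10)
except RuntimeError:
    print("fault detected")
```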
Another method for verifying the integrity of the fault-detection logic is to inject faults; that is, cause a fault to be created so that the checker must recognize it. In many cases this method turns out to be both cheaper and simpler than the previously mentioned scheme. With this method, it is not necessary to provide a failure space for the checkers themselves. However, it is necessary to make provisions for injecting faults when that is not already possible in the normal design. With this provision, confirming the integrity of the checking circuits becomes a periodic software task. Failures are injected, and fault-detection reports are expected. The system software simply ignores the fault report, or initiates corrective action if no report is generated.

Associated with systems of the type under discussion, there is logic that normally is called into use only when a fault has been detected. This includes the logic dedicated to such tasks as diagnosis, isolation, and reconfiguration. This normally idle class of hardware units will collectively be called "reaction logic." In order to avoid simultaneous faults in a system, this reaction logic must not be allowed to fail without the failure being rapidly detected. Several possibilities exist here. This logic can be made very reliable by using some massive redundancy technique such as triple modular redundancy.3 Another possibility is to design these units such that they normally fail into a failure space which is detected and reported. However, this will not be as simple here as it might be for self-checking fault detectors, because the failure modes will, in general, be harder to control. A third method would be to simulate the appropriate action and observe the reaction. This also is not as simple here as it was above. For example, it may not be desirable to reconfigure a system on a frequent periodic basis.
However, one way out of this is to simulate the action, initiate the reaction, and confirm the integrity of this logic without actually causing the reconfiguration. This will probably require that the output logic either be made "reliable" or be encoded so as to fail into a harmless and detectable failure space.

The final area that requires integrity confirmation is the processors. The technique to be employed here is very dependent on the application of the system. For many real-time applications, nothing short of concurrent fault detection will apparently suffice. However, there are many areas where less drastic methods may be adequate. Fabry4 has presented a method for verifying critical operating-system decisions, in a time-sharing environment, through a series of independent double checks using a combination of a second processor and dedicated hardware. This method can be extended to verifying certain decisions made by a real-time control processor. If most of the tasks that a real-time processor performs concern data reduction, it is possible that software-implemented consistency checks will suffice for monitoring the integrity of the results. When critical control decisions are to be made, a second processor can be brought into the picture for consistency checks, or dedicated hardware can be used for validity checking. Alternatively, a separate algorithm, using separate registers, could be run on the same processor to check the validity of a control action, with external time-out hardware being used to guarantee a response. These procedures could certainly provide a substantial cost savings over concurrent fault-detection methods. For a system to be used in a general-purpose, time-sharing environment, the method of checking processors non-concurrently is very powerful, because simple, relatively inexpensive schemes will suffice to guarantee the security of a user's environment.
The price that is paid is to not detect some faults that could cause contamination of a user's own information. But conventional time-sharing systems have this handicap in addition to not having a high availability and not maintaining security in the presence of faults, so a clear improvement would be realized here at a fairly low cost. In order to detect failures as rapidly as possible in processors that have no concurrent fault-detection capability, periodic surveillance tests can be run which will determine if the processor is integral. VALIDATION OF THE SOLITARY-FAULT ASSUMPTION 91 removed at any given time. However, in order to design the system so that all possible types of failures can be handled, it is usually necessary to assume that at most one active unit is malfunctioning at any given time. The problem becomes essentially intractable when arbitrary combinations of multiple faults are considered. That is not to say that all cases of multiple faults will bring a system down, but usually no explicit effort is made to handle most multiple faults. Of course by multiple faults we mean multiple independent faults. If a failure of one unit can affect another, then the system must be designed to handle both units malfunctioning simultaneously or isolation must be added to limit the influence of the original fault. A quantitative analysis will now be given which provides a basis for evaluating the viability of utilizing non-concurrent integrity-confirmation techniques in an adaptive fault-tolerant system. In the analysis below, the letter "s" will be used to designate the probability that two independent, simultaneous faults will cause a system to crash. The next concept we need is that of coverage. Coverage is defined5 as the conditional probability that a system will recover given that a failure has occurred. The letter "e" will be used to denote the coverage of a system. 
In order to determine a system's ability to remain continuously available over a given period of time, it is necessary to know how frequently the components of the system are likely to fail. The usual measure employed here is the mean-time-between-failures. The letter "m" will be used to designate this parameter. It should be noted here that "m" represents the meantime-between-internal-failures of a system; the system itself hopefully has a much better characteristic. The final parameter that will be needed here is the maximum-time-to-recovery; This is defined to be the maximum time elapsed between the time an arbitrary fault occurs and the time the system has successfully reconfigured to run without the faulty unit. The letter "r" will be used to designate this parameter. The commonly used assumption that a system does not deteriorate with age over its useful life will be adopted. Therefore, the exponential distribution will be used to characterize the failure probability of a system. Thus, at any given time, the probability of encountering a fault within the next u time units is: p= jU (llm)*exp( -tim) dt o Fault-tolerant systems which are capable of isolating a faulty unit, and reconfiguring to run without it, typically can operate with several functional units = 1-exp( -ulm) From this we can see that the upper bound on the 92 Fall Joint Computer Conference, 1972 conditional probability of encountering a second independent fault is given by: q= l-exp( -rim) Since it is obvious that r must be made much smaller than m if a system is to have a high probability of surviving many internal faults, the following approximation is quite valid: q= l-exp( -rim) 00 =1- 2: (-r/m)k/k! 
k=O = 1-I+r/m- (Y2)*(r/m)2+ Oi)*(r/m)3- ..• ~r/m Therefore, the probability of not being able to recover from an arbitrary internal failure is given by: x= (I-c) +c*q*s = (I-c) +c*s*r/m where the first term represents the probability of failing to recover due to a solitary failure and the second term represents the probability of not recovering due to simultaneous failures given that recovery from the first fault was possible. If we now consider each failure as an independent Bernoulli trial and make the assumption that faulty units are repaired at a sufficient rate so that there is never a problem with having too many units logically removed from a system at any given time, then it is a simple ~atter to determine the probability of surviving a given period, T, without encountering a system crash. The hardware failures will be treated as n independent samples, each with probability of success (1- x), where n is the smallest integer greater than or equal to T /m. Thus, the probability of not crashing on a given fault is (I-x) =c*(1-r*s/m) and the probability, P, of not crashing during the period T is given by: P= [c*(I-r*s/m) In =c *(I-r*s/m)n . concurrent schemes and since this time is essentially equivalent to how frequently the confirmation procedures are invoked, we can assume that r is equal to the time period between the periodic integrity confirmation checks. In order to gain a feeling for the order of r, rather pessimistic numbers can be assumed for m, s, and T. Assume m=1 week, s= Y2, and T=10 years; this gives an n of 520. For now, assume c is equal to one. Now, in order to attain a probability of .95 that a system will survive 10 years with no crashes under the above assumptions, r will have to be: r= m/ s*[I-. 95(1/520) ] = 119 seconds Thus, if the periodic checks are made even as infrequently as every two minutes, a system will last 10 years with a probability of not crashing of approximately.95. The effects of the coverage must now be examined. 
In order for the coverage to be good enough to provide a probability of .95 of no system crashes in 10 years due to the system's inability to handle single faults, it must be:

    c = .95^{1/520} = .9999

Now this would indeed be a very good coverage. Since the actual coverage of any given system will most likely fall quite short of this value, it seems that the coverage, and not multiple simultaneous faults, is the limiting factor in determining a system's ability to recover from faults. The most important conclusion to be drawn from this section is that the solitary-fault assumption is not only convenient but quite justified, and this is true even when only periodic checks are made to verify the integrity of some of the logic.

INTEGRITY CONFIRMATION FEATURES OF THE "PRIME" SYSTEM

In order to better illustrate the potential power of dynamic integrity confirmation techniques, a description will now be given of how this concept is being used to economically provide an integrity confirmation structure for a fault-tolerant system. At the University of California, Berkeley, we are currently building a modular computer system, which has been named PRIME, that is to be used in a multiaccess, interactive environment. The initial version of this system will have five processors; 13 8K-word by 33-bit memory blocks with associated switching units; 15 high-performance disk drives; and a switching network which allows processor, disk, and external-device switching. A block diagram of PRIME appears in Figure 1.

Dynamic Confirmation of System Integrity

[Figure: disk drives and external devices connect through the Interconnection Network to I/O controllers, each with reconfiguration logic; the I/O controllers connect through memory interfaces to the processors and memory blocks. Each indicated line represents 16 terminal connections, and each memory block (MB) consists of two 4K modules.]

Figure 1-Block diagram of the PRIME system

The processing elements in PRIME are 3-bus, 16-bit wide, 90 ns cycle time microprogrammable processors called META 4s.6 Each processor emulates a target machine in addition to performing I/O and executive functions directly in microcode. At any given time, one of the processors is designated the Control Processor (CP), while the others are Problem Processors (PPs). The CP runs the Central Control Monitor (CCM), which is responsible for scheduling, resource allocation, and interprocess message handling. The Problem Processors run user jobs and perform some system functions with the Extended Control Monitor (ECM), which is completely isolated from user processes. Associated with each PP is a private page, which the ECM uses to store data, and some target-machine code which it occasionally causes to be executed. A more complete description of the structure and functioning of PRIME is given elsewhere.7

The most interesting aspects of PRIME are in the areas of availability, efficiency, and security. PRIME will be able to withstand internal faults. The system has been designed to degrade gracefully in the presence of internal failures.8
Also, interprocess integrity is always maintained even in the presence of either hardware or software faults. The PRIME system is considered continuously integral if it is providing interprocess interference protection. Therefore, security must be maintained at all times. Other properties, such as providing user service and recovering from failures, can be handled in a less stringent manner. Thus, dynamic confirmation of system integrity in PRIME must be handled concurrently for interprocess interference protection and can be handled periodically with respect to the rest of the system. Of course, there are areas which do not affect interprocess interference protection but which will nonetheless utilize concurrent fault detection simply because it is expedient to do so.

Fault injection is being used to check most of the fault-detection logic in PRIME. This decision was made because the analysis of non-concurrent integrity-confirmation techniques has established that periodic fault injection is sufficiently effective to handle the job, and because it is simpler and cheaper than the alternatives.

There is a characteristic of the PRIME system that makes schemes which utilize periodic checking very attractive. At the end of each job step, the current process and the next process are overlap swapped. That is, two disk drives are used simultaneously; one of these disks is rolling the current job out, while the other is rolling the next job in. During this time, the associated processor has some potential free time. Therefore, this time can be effectively used to make whatever periodic checks may be necessary. And since the mean time between job steps will be less than a second, this provides very frequent, inexpensive periodic checking capabilities. The integrity of Problem Processors is checked at the end of each job step.
This check is initiated by the Control Processor, which passes a one-word seed to the PP and expects the PP to compute a response. This seed will guarantee that different responses are required at different times, so that the PP cannot accidentally "memorize" the correct response. The computation requires the use of both target-machine instructions and a dedicated firmware routine to compute the expected response. The combination of these two routines is called a surveillance procedure. This surveillance procedure checks all of the internal logic and the control storage of the microprocessors. The target-machine code of the surveillance routine is always resident in the processor's private page. The microcode part is resident in control storage. A fixed amount of time is allowed for generating a response when the CP asks a PP to run a surveillance on itself. If the wrong response is given, or if no response is given in the allotted time, then the PP is assumed to be malfunctioning and remedial action is initiated. In a similar manner, each PP periodically requests that the CP run a surveillance on itself. If a PP thinks it detects that the CP is malfunctioning, it will tell the CP this, and a reconfiguration will take place, followed by diagnosis to locate the actual source of the detected error. More will be said later about the structure of the reconfiguration scheme.

While the periodic running of surveillance procedures is sufficient for most purposes, it does not suffice for protecting against interprocess interference. As previously mentioned, this protection must be continuous. Therefore, a special structure has been developed which is used to prevent interprocess interference on a continuous basis.4 This structure provides double checks on all actions which could lead to interprocess interference. In particular, the validity of all memory and disk references, and of all interprocess message transmissions, is among the actions double checked.
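To make the seed-and-response surveillance described above concrete, here is a small illustrative model of the exchange. It is not PRIME's actual mechanism: the hash function, the timeout value, and all names below are invented for this sketch.

```python
import hashlib
import time

CONTROL_STORE = b"microcode image stand-in"  # models the PP state folded into the response
TIME_LIMIT = 0.1                             # allotted response time (arbitrary here)

def pp_response(seed: bytes) -> bytes:
    # PP side: fold the fresh seed into a digest of internal state, so a
    # faulty processor cannot simply replay a memorized answer.
    return hashlib.sha256(seed + CONTROL_STORE).digest()

def cp_surveillance(ask_pp) -> bool:
    # CP side: issue a fresh seed, then demand the correct answer in time.
    seed = str(time.time_ns()).encode()      # a different seed at every job step
    start = time.monotonic()
    answer = ask_pp(seed)
    in_time = (time.monotonic() - start) <= TIME_LIMIT
    return in_time and answer == hashlib.sha256(seed + CONTROL_STORE).digest()

healthy = pp_response
corrupted = lambda seed: hashlib.sha256(seed + b"flipped bit").digest()

print(cp_surveillance(healthy), cp_surveillance(corrupted))   # True False
```

The essential properties mirror the text: the response depends on both the challenge and the checked state, and either a wrong answer or a late answer marks the processor as malfunctioning.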
A class code is used to associate each sector (1K words) of each disk pack with either a particular process or with the null process, which corresponds to unallocated space. A lock-and-key scheme is used to protect memory on a page (also 1K words) basis. In both cases, at most one process is bound to a 1K-word piece of physical storage. The Central Control Monitor is responsible for allocating each piece of storage, and it can allocate only those pieces which are currently unallocated. Each process is responsible for deallocating any piece of storage that it no longer needs. Both schemes rely on two processors and a small amount of dedicated hardware to provide the necessary protection against some process gaining access to another process' storage.

In order for the above security scheme to be extremely effective, it was decided to prohibit sharing of any storage. Therefore, the Interconnection Network is used to pass files which are to be shared. Files are sent as regular messages, with the owning process explicitly giving away any information that it wishes to share with any other process. All interprocess messages are sent by way of the CP. Thus, both the CCM and the destination ECM can make consistency checks to make sure that a message is delivered to the correct process.

The remaining area of integrity checking which needs to be discussed is the reaction hardware. In the PRIME system, this includes the isolation, power switching, diagnosis, and reconfiguration logic. A variety of schemes have been employed to confirm the integrity of this reaction logic. In order to describe these methods, it will be necessary to first outline the structure of the spontaneous reconfiguration scheme used in the PRIME system. There are four steps involved in reorganizing the hardware structure of PRIME so that it can continue to operate with internal faults. The first step consists of detecting a fault.
This is done by one of the many techniques outlined in this paper. In the second step, an initial reconfiguration is performed so that a new processor, one not involved in the detection, is given the job of being the CP. This provides a pseudo "hard core" which will be used to initiate gross diagnostics. The third step is used to locate the fault. This is done by having the new CP attach itself to the Programmable Control Panel9 of a Problem Processor via the Interconnection Network, and test it by single-stepping this PP through a set of diagnostics. If a PP is found to be functioning properly, then it is used to diagnose its own I/O channels. After the fault is located, the faulty functional unit is isolated, and a second reconfiguration is performed to allow the system to run without this unit.

Of the four steps involved in responding to a fault, the initial reconfiguration poses the most difficulty. In order to guarantee that this initial reconfiguration can be initiated, a small amount of dedicated hardware was incorporated to facilitate this task. Associated with each processor is a flag which indicates when the processor is the CP. Also associated with each processor is a flag which is used to indicate that this processor thinks the CP is malfunctioning. For every processor, these two flags can be interrogated by any other processor. Each processor can set only its own flag that suggests the CP is sick. The flag which indicates that a processor is the CP can be set only if both the associated processor and the dedicated hardware concur. Thus, the dedicated hardware will not let this flag go up if another processor already has its flag up. Also, this flag will automatically be lowered whenever two processors claim that the CP is malfunctioning. There is somewhat of a dilemma associated with confirming the integrity of this logic.
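The flag discipline just described can be modeled in a few lines. This is an illustrative sketch only; the real arbitration is dedicated hardware, and the class and method names below are invented (the sketch also clears the "sick" votes after a takeover, a detail the paper does not specify):

```python
class ReconfigurationFlags:
    """Software model of the dedicated CP-arbitration hardware."""

    def __init__(self, n_processors: int):
        self.is_cp = [False] * n_processors    # "this processor is the CP" flags
        self.cp_sick = [False] * n_processors  # "I think the CP is sick" flags

    def claim_cp(self, p: int) -> bool:
        # The CP flag rises only if the hardware concurs: it refuses
        # whenever another processor already has its flag up.
        if any(self.is_cp[q] for q in range(len(self.is_cp)) if q != p):
            return False
        self.is_cp[p] = True
        return True

    def report_cp_sick(self, p: int):
        # Each processor can set only its own "CP is sick" flag.
        self.cp_sick[p] = True
        # The CP flag is automatically lowered once two processors complain.
        if sum(self.cp_sick) >= 2:
            self.is_cp = [False] * len(self.is_cp)
            self.cp_sick = [False] * len(self.cp_sick)

flags = ReconfigurationFlags(5)
assert flags.claim_cp(0)          # processor 0 becomes CP
assert not flags.claim_cp(1)      # the hardware refuses a second CP
flags.report_cp_sick(2)
flags.report_cp_sick(3)           # two complaints: the CP flag drops
assert flags.claim_cp(4)          # a new processor can now take over
```

Requiring two independent complaints before deposing the CP keeps a single faulty accuser from triggering a spurious reconfiguration.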
Because of the distributed nature of this reconfiguration structure, it should be unnecessary to make any of it "reliable." That is, the structure is already distributed so that a failure of any part of it can be tolerated. However, if simultaneous faults are to be avoided, the integrity of this logic must be dynamically confirmed. Unfortunately, it is not practical to check this logic by frequently initiating reconfigurations. This dilemma is being solved by a scheme which partially simulates the various actions. The critical logic that cannot be checked during a simulated reconfiguration is duplicated, so that infrequent checking by actual reconfiguration is sufficient to confirm the integrity of this logic.

The only logic used in the diagnostic scheme whose integrity confirmation has not already been discussed is the Programmable Control Panel. This pseudo panel is used to allow the CP to perform all the functions normally available on a standard control panel. No explicit provision will be made for confirming the integrity of the Programmable Control Panel because its loss will never lead to a system crash. That is, failures in this unit can coexist with a failure anywhere else in the system without bringing the system down.

For powering and isolation purposes, there are only four different types of functional units in the PRIME system. The four functional units are the intelligence module, which consists of a processor, its I/O controller and the subunits that directly connect to the controller, its memory bus, and its reconfiguration logic; the memory block, which consists of two 4K-word by 33-bit MOS memory modules and a 4x2 switching matrix; the switching module, which consists of the switch part of two processor-end and three device-end nodes of the Interconnection Network; and the disk drive. The disk drives and switching modules can be powered up and down manually only.
The intelligence modules must be powered up manually, but they can be powered down under program control. Finally, the memory blocks can be powered both up and down under program control. No provision was made to power down the disks or switching modules under program control because there was no isolation problem with these units. Rather than providing very reliable isolation logic at the interfaces of the intelligence modules and memory blocks, it was decided to provide additional isolation by adding the logic which allows these units to be dynamically powered down. Also, because it may be necessary to power memory blocks down and then back up in order to determine which one has a bus tied up, the provision had to be made for performing the powering up of these units on a dynamic basis.

Any processor can power down any memory block to which it is attached, so it was not deemed necessary to provide for any frequent confirmation of the integrity of this power-down logic. Also, every processor can be powered down by itself and one other processor. These two power-down paths are independent, so again no provision was made to frequently confirm the integrity of this logic. In order to guarantee that the independent power-down paths do not eventually fail without this fact being known, these paths can be checked on an infrequent basis.

All of the different integrity confirmation techniques used in PRIME have been described. The essence of the concept of dynamic confirmation of system integrity is the systematic exploitation of the specific characteristics of a system to provide an adequate integrity confirmation structure which is in some sense minimal. For instance, the type of use and the distributed intelligence of PRIME were taken advantage of to provide a sufficient integrity-confirmation structure at a much lower cost and complexity than would have been possible if these factors were not carefully exploited.
REFERENCES

1 B BORGERSON C V RAVI On addressing failures in memory systems Proceedings of the 1972 ACM International Computing Symposium Venice Italy pp 40-47 April 1972
2 D A ANDERSON G METZE Design of totally self-checking check circuits for M-out-of-N codes Digest of the 1972 International Symposium on Fault-Tolerant Computing pp 30-34
3 R A SHORT The attainment of reliable digital systems through the use of redundancy-A survey IEEE Computer Group News Vol 2 pp 2-17 March 1968
4 R S FABRY Dynamic verification of operating system decisions Computer Systems Research Project Document No P-14.0 University of California Berkeley February 1972
5 W G BOURICIUS W C CARTER P R SCHNEIDER Reliability modeling techniques for self-repairing computer systems Proceedings of the ACM National Conference pp 295-309 1969
6 META 4 computer system microprogramming reference manual Publication No 7043MO Digital Scientific Corporation San Diego California June 1972
7 H B BASKIN B R BORGERSON R ROBERTS PRIME-A modular architecture for terminal-oriented systems Proceedings of the 1972 Spring Joint Computer Conference pp 431-437
8 B R BORGERSON A fail-softly system for time-sharing use Digest of the 1972 International Symposium on Fault-Tolerant Computing pp 89-93
9 G BAILLIU B R BORGERSON A multipurpose processor-enhancement structure Digest of the 1972 IEEE Computer Society Conference San Francisco September 1972 pp 197-200

The in-house computer department

by JOHN J. PENDRAY

TECSI-SOFTWARE
Paris, France

INTRODUCTION

Over fifteen years ago, in some inner recess of some large corporation, a perplexed company official stood pondering before a large corporate organizational chart on his office wall. In his hand he held a small square of paper on which the words "Computer Department" were inscribed. Behold one of the modern frontiersmen of twentieth century business: the first man to try to stick the in-house computer department on the company organizational chart. He probably failed to find a place with which he felt comfortable, thereby becoming the first of many who have failed to resolve this problem.

Most of the earlier attempts ended by putting the computer department somewhere within the grasp of the corporate financial officer. The earliest computer applications were financial in nature, such as payroll and bookkeeping, and, after all, anything that costs as much as a computer must belong in the financial structure somehow. Many corporations are still trying to get these financial officers to recognize that there are many non-financial computer applications which are at least as important as the monthly corporate trial balances. Additionally, and perhaps even worse, the allocation of the computer department's resources is viewed as a relatively straightforward financial matter subject to budgeting within financial availability. This method of resource dispensing seems not to provide the right balance of performance and cost generally sought in the business world.

As the computer department growth pattern followed the precedent of Topsy, many corporations began to wonder why something that had become an integral part of every activity in the company should belong to one function, like finance. This questioning led to a blossoming forth of powerful in-house computer departments disguised under surcharged names like Information Services Department. Often, this square on the organizational chart had a direct line to the chief executive's office. This organizational form has created two of the most widely adopted erroneous concepts ever to permeate corporate activity. The first, and perhaps least damaging of these, is the concept that the highest corporate officers should be directly in touch with the computer at all times (and at any cost) to take advantage of something called the Management Information System (MIS). (Briefly, a MIS is a system designed to replace a fifteen-minute telephone call to the research department by a three-second response from a computer, usually providing answers exactly fourteen minutes and fifty-seven seconds faster than anyone can phrase his precise question.) The second concept to follow the attachment of the computer department to the chief executive's office has been the missionary work which has been undertaken in the name of, and with the power or influence of, the chief executive. Information service missionary work generally consists of the computer experts telling each department exactly what their information needs are and how they should go about their business.

This article will examine the nature of the in-house computer department in terms of its place in the corporate structure, its product, its function in the maturing of the product, and its methods of optimizing its resource utilization. Additionally, one possible internal structure for an in-house computer department will be presented.

THE IN-HOUSE COMPUTER DEPARTMENT WITHIN THE CORPORATE STRUCTURE

Most of the blocks on the corporate organizational chart have some direct participation in the business of the company. Take an example. The Whiz-Bang Corporation is the world's leader in the production of whiz-bangs. Its sales department sells whiz-bangs. Its production department produces whiz-bangs. Its development department develops new types of whiz-bangs. Its computer department has nothing to do with whiz-bangs. The people in the computer department know lots about computers, but their knowledge about whiz-bangs comes from what other departments have told them. What are they doing in the whiz-bang operation? The computer department provides services to all the other departments in the company.
These other departments are directly involved in the business of the company, but the function of the computer department is to provide services to the company, not to contribute directly in the business of the company. In this light, the computer department is like an external supplier of services.

How should such a supplier of services be funded? Let's return to the Whiz-Bang Corporation analogy. The marketing department is allocated sufficient resources to market whiz-bangs; the production department gets resources adequate to produce whiz-bangs, etc. It is not possible to allocate resources to the computer department on the basis of its direct contribution to whiz-bangs. The computer department provides services, and these services are what should be funded. The value of these services provides the basis for funding. Other departments use the computer services, and it follows that only these departments can place the value on a service and that each department should pay for the services which it gets. Therefore, the funding of the computer department is the sum total of the payments received from the other departments for services rendered.

How should the computer department be controlled? First of all, it is necessary to define what is to be controlled. Either one can control product specifications or one can control the resources necessary to produce a product. Product specifications are generally controlled, in one way or another, by the buyer, while resource control is usually an internal problem concerned with the production of the buyer-specified product. At the Whiz-Bang Corporation, the marketing department determines the buyer-desired product specifications, but each internal department calculates and controls its resource requirements to yield the specified number and type of whiz-bangs. If the nature of the computer department is to provide services as its product, the users of these services should control their specifications.
After all, they are paying for them (or should be). If the computer department has the task of providing services that the other departments will be willing to fund, it should have the responsibility to allocate its resources to optimize its capability to provide the services. After all, they are the experts (or should be).

In summary, the departments in the corporation are using an external type of service from an internal source, the in-house computer department. Only they can value the service, but they won't do this job of valuation unless they are charged for the service. This valuation will automatically produce customer-oriented specifications for the services. On the other hand, once the services are specified and accepted at a certain cost, it is the job of the computer department to use its revenues in the best manner to produce its services. That is, the funding flows as revenues from the other departments; but the utilization of this funding is the proper responsibility of the provider of the services, the computer department. These principles indicate that the in-house computer department can be melded into the corporate structure in any position where it can be equally responsive to all of the other departments while controlling, itself, the utilization of its resources.

THE PRODUCT OF THE COMPUTER DEPARTMENT-THE COMPUTER SERVICE

A computer service, which is the product produced and sold by the computer department, has an average life span of between five and ten years. It is to be expected that as the speed of computer technological change diminishes, this life span will lengthen. To date, many computer services have been conceived and developed without a real understanding of the nature of a computer service. The lengthening of the life span of the computer service should produce a more serious interest in understanding this nature in order to produce more economical and responsive services.
A well-conceived computer service is a highly tuned product which depends on the controlled maturing and merging of many technical facets. Too often this maturing and merging is poorly controlled because the life cycle of the computer service is not considered. The net result may be an inflexible and unresponsive product which lives on the edge of suicide, or murder, for the entirety of its operational life. Computer services management should not allow this inflexibility to exist, for the computer is one of the most flexible tools in the scientific grab-bag. This innate flexibility should be exploited by management in the process of maturing a computer service.

MATURING THE COMPUTER SERVICE

There are four major phases in the maturing process: definition, development, operation, and overhaul. Perhaps the most misunderstood aspect of this maturing process is the relation between the phases in the life cycle of a computer service and the corresponding changes required in the application of technical specialties. Each phase requires a different technical outlook and a different level of technical skills.

The definition phase

Defining the service is oriented toward producing the functional specifications which satisfy the needs and constraints of the client. From another point of view, this is the marketing and sales problem for the computer department. It should be treated as a selling problem because the service orientation of the computer department is reinforced by recognition that the buyer has the problem, the money, and the buying decision. The technical outlook should be broad and long term, for the entire life of the service must be considered. Technical details are not desirable at this stage, but it is necessary to have knowledge of recent technical advances which may be used to the benefit of the service.
Also, a good understanding of the long-range direction and plans of the computer department is necessary in order to harmonize the proposed service with these goals.

The first step in defining a computer service is to locate the potential clients and estimate their susceptibility to an offer of a computer service. At first glance, this seems an easy task, as the potential clients are well-known members of the corporate structure. Not so! Many of the most promising avenues of computer services cut across the normal functional separations and involve various mixtures of the corporate hierarchy. These mixtures are frequently immiscible, and the selling job involves convincing each participant of his benefit and helping him justify his contribution. The corporate higher-ups would also need to be convinced, but the money will seldom come from their operating budgets. In any case, the responsibility to seek out and sell new computer services lies with the computer department; however, the decision to buy is the sole property of the client departments.

After potential clients are identified, a complete understanding of the problem must be gained in order to close the sale. This understanding should give birth to several alternative computer system approaches giving different performance and cost tradeoffs. The potential customer will want to understand the parameters and options available to him in order to select his best buy. This is a phase of the life cycle of the service where the computer department provides information and alternatives to the prospective client. Closing of the agreement should be in contractual terms with each party obligated for its part of the
A computer department that cannot (or will not) make firm commitments in advance of a project is poorly managed. (Of course there can always be a periodic corporate reckoning to insure that imbalances are corrected.) The development phase The contract is signed; the emphasis for the computer department changes from sales to development and implementation of the service. This phase calls for a concentrated, life-of-the-effort technical outlook with in-depth and competent technical ability required at all levels. The specialists of the computer department must be organized to produce the system which will provide the service as specified. The usual method for accomplishing this organization is the "project". Many learned texts exist on the care and feeding of a technical project, so let's examine here only the roles of the computer department and the client within the general framework of a project. Computer department participation centers on its role as being the prime responsible party for the project. It is the computer department's responsibility to find the best techniques for satisfying all the goals of the project. The correct utilization of the resources available to the computer department is a key to the project's success. One resource is time, and time runs put for a project. That is to say that no true project succeeds unless it phases out on time. A project team produces a product, turns it over to the production facility, and then the project ceases to exist. The personnel resource of the computer department is also viewed differently in a project. The project team is composed of a hand-tailored mix of specialists who are given a temporary super-incentive and then removed from the project after their work is done. Superincentives and fluid workforces are not easily arranged in all companies, and this is one of the reasons why the computer department must maintain control of the utilization of its resources. 
The computer department should acquire new resources for a project within the following guideline: don't. Projects should not collect things around them or they become undisintegratable. The only exception: acquisitions which form part of the product, and not part of the project, and which will go with the product into the production phase.

Assuring the continuing health of the project's product is another critical aspect of the computer department's responsibility in the project. Since the project team will die, it must provide for the product to live independently of the project. This involves producing a turnoverable product which is comprehensible at all levels of detail. Also, the final product must be flexible enough to respond to the normal changes required during its lifetime.

It is interesting to note that in the development phase of the life cycle of a service, the project philosophy dictates that the computer department orient itself toward project goals and not just toward satisfying the specifications of the service. That is, the service specifications are only one of the project goals along with time, cost, etc. On the other hand, the eventual user of the service, i.e., the client department, views the project as only a part of the total process necessary to have the service. To the client, the project is almost a "necessary evil"; however, the development project philosophy depends on active client involvement. Three distinct client functions are required. In their order of importance they are:

1. Continuing decision-making on product performance and cost alternatives surfaced during the project work.
2. Providing aid to the technical specialists of the computer department to insure that the functional specifications are well understood.
3. Preparing for use of the service, including data preparation, personnel training, reorganization, etc.
These three client functions are certainly important aspects of a project, but it should not be forgotten that the development project is a method used by the computer department to marshal its resources and, therefore, must be under the responsibility of the computer department. Development of the service may be an anxious phase as the client has been sold on the idea and is probably eager for his first product. This eagerness should not be blunted by the project team, nor should it affect the sound judgment of the team. Consequently, contact between the technical experts and the client should be controlled and directed toward constructive tasks.

The operation phase

The third step in the life cycle of a service begins when the development project begins to phase out. This is the day-to-day provision of the service to the client. In this phase, the computer department has a production philosophy which is single-minded: to assure the continuing viability of the service. This is often a fire-fighting function in which the quick-and-dirty answer is the best answer. There isn't much technical glory in this part of the life cycle of a service, but it's the part that produces the sustaining revenues for the computer department. The computer department enhances continuing product viability by performing two functions. Of primary importance is to reliably provide the specified service with minimum expenditure of resources. Secondarily, the client must be kept aware of any possible operational changes which might affect the performance or cost of his service. Again, the client has a strong part in the decision to effect a change. The client must contribute to the continuing viability of the product by using it intelligently and periodically evaluating its continuing worth.

The overhaul phase

As a service ages during its operational heyday, the environment around it changes little by little. Also, the quick-and-dirty maintenance performed by the operations personnel will begin to accumulate into a patchwork quilt which doesn't look much like the original edition. These two factors are not often self-correcting, but they can go unnoticed for years. The only answer is a complete technical review and overhaul. Every service should be periodically dragged out of the inventory and given a scrub-down. This is another job where the technical glamor is quite limited; however, overhauling services to take advantage of new facilities or concepts can provide significant gains, not to mention that the service will remain neat, controllable, flexible, and predictable. Thus definition, development, operation, and overhaul are the four phases in the life cycle of a computer service. All of these phases directly affect the clients and are accomplished with their aid and involvement. However, there is another area of responsibility for the computer department that does not touch the clients as closely. This area is the control over the utilization of the computer department's resources.

OPTIMIZING THE UTILIZATION OF THE COMPUTER DEPARTMENT'S RESOURCES

This important responsibility of the computer department is an internally-oriented function which is not directly related to the life cycles of the services. This is the problem of selecting the best mix of resources which fulfills the combined needs of the clients. In the computer service business there are two main resources, people and computing equipment. Effective management of computer specialists involves at least training, challenging, and orienting. If these three aspects are performed well, a company has a better chance of keeping its experts, and keeping them contributing. Training should be provided to increase the professional competence of the staff, but in a direction which is useful to the company.
It is not clear, for instance, that companies who use standard off-the-shelf programming systems have a serious need to train the staff in the intricate design of programming systems software. It's been done, and every routine application suddenly became very sophisticated, delicate, and incomprehensible. However, training which is beneficial for the company should be made interesting for the personnel. Challenging technical experts is a problem which is often aggravated by a poor hiring policy which selects over-qualified personnel. Such people could certainly accomplish the everyday tasks of the company if only they weren't so bored. The management problem of providing challenge is initially solved by hiring people who will be challenged by the work that exists at the company. Continuing challenge can be provided by increasing responsibility and rotating tasks. Orienting the technical personnel is a critical part of managing the computer department. If left alone, the typical technical specialist tends to view the outside world as it relates to the parameter list of his logical input/output module, for example. He needs to be oriented to realize that his technical specialty is important because it contributes to the overall whole of the services provided to the clients. This client-oriented attitude is needed at all levels within a service organization. Besides personnel, the other major resource to be optimized by the computer department is the computing system. This includes equipment and the basic programs delivered with the equipment, sometimes called "hardware" and "software". Optimizing of a computing system is a frequently misunderstood or neglected function of the computer department. In a sense this is not surprising as there are three factors which obscure the recognition of the problem. First of all, computers tend to be configured by technical people who like computers.
Secondly, most computer systems have produced adequate means of justifying themselves, even in an unoptimized state. Lastly, computer personnel, both manufacturers and users, have resisted attempts to subject their expenditures to rigorous analysis. It seems paradoxical that the same computer experts who have created effective analysis methodologies for so many other fields maintain that their field is not predictable and not susceptible to methodological optimization. The utilization of computer systems is capable of being analyzed and may be seen as three distinct steps in the life cycle of the resource. These three steps can be presented diagrammatically as follows:

    general requirements
         |
    development of the hardware strategy
         |
    computing requirements
         |
    selection of a system
         |
    system options
         |
    tuning of the system
         |
    system configuration

All too often, the strategy is chosen by default, the selection is made on the basis of sales effectiveness, and the tuning is something called "meeting the budget."

Development of the hardware strategy

Many computer departments don't even realize that different strategies exist for computing. This is not to say that they don't use a strategy; rather that they don't know it and haven't consciously selected a strategy. The hardware strategy depends on having an understanding of the general needs of the computer department. The needs for security, reliability, independence, centralization of employees, type of computing to be done, amount of computing, etc., must be formulated in general terms before a strategy decision can be made. There are many possible ways to arrange computing equipment, and they each have advantages, disadvantages, and, as usual, different costs. The problem is to pick the strategy which responds to the aggregate of the general needs. Perhaps some examples can best demonstrate the essence of a computing strategy.
A large oil company having both significant scientific and business processing decides to separate the two applications onto two machines with each machine chosen for its performance/cost in one of the two specialized domains. A highly decentralized company installs one large economical general purpose computer but with remote terminals each of which is capable of performing significant independent processing when not being used as a terminal. A highly centralized company installs two large mirror-image general purpose computers with remote terminals which are efficient in teletransmission. This is one area where the in-house computer department is not exactly like an external supplier of services, for the system strategy must reflect the general needs, and constraints, of the whole corporation.

Selection of a system

After the strategy is known, it becomes possible to better analyze and formulate the computing needs in terms of the chosen strategy. This usually results in a formal specification of computing requirements which includes workload projections for the expected life of the system. This is not a trivial task and will consume time, but the service rendered by the eventual system will directly depend on the quality of this task. Once an anticipated workload is defined, one is free to utilize one, or a combination, of the methods commonly used for evaluating computer performance. Among these are simulation, benchmarks, and technical expert analysis. One key decision will have a great influence on the results of the system selection: is a complete manufacturer demonstration to be required? This question should not be answered hastily, because a demonstration requires completely operational facilities, which may guarantee that the computer department will get yesterday's system, tomorrow.
On the other hand, not having a demonstration requirement may bring tomorrow's most advanced system, but perhaps late and not quite as advanced as expected. In any case, some methodology of system selection is required, if only to minimize the subjectivity which is so easily disguised behind technical jargon.

Tuning of the system

The winner of the hardware selection should not be allowed to start to take advantage of the computer department once the choice is made. On the contrary, the computer department is now in its strongest position as the parameters are much better defined. One more iteration on the old specifications of requirements can now be made in light of the properties of the selected system. Also, an updating of the workload estimates is probably in order. Armed with this information, the computer department is now ready to do final battle to optimize the utilization of the system. This optimization involves more than just configuring the hardware. It is a fine tuning of the computing environment. Take an example. As a result of the characteristics of the selected computer system, it might turn out that the mix of jobs "required" during the peak hours dictates that the expensive main memory be 50 percent larger than at any other time. Informing the clients of this fact, and that the additional memory cost will naturally be spread over their peak period jobs, will usually determine if all the requirements are really this valuable to the client. The client has the right to be informed of problems that will directly affect his service or costs. Only he can evaluate them and decide what is best for him. Tuning of the environment involves selecting the best technical options, fully exploiting the potential of the computing configuration, and otherwise varying the parameters available. The trick is to examine all the parameters in the environment, not just the technical ones. This tuning process should be made, on a periodic basis, to insure that the environment remains as responsive as possible to the current needs.

PROPOSAL-AN ORGANIZATIONAL STRUCTURE FOR THE IN-HOUSE COMPUTER DEPARTMENT

It may not be possible to organize every computer department in the same manner, but some orientation should be found which would minimize the lateral dependencies in the organization. Perhaps a division of responsibilities based on the time perspective would be useful. Something as simple as a head office with three sections for long-range, medium-range, and short-range tasks could minimize lateral dependencies and still allow for exploitation of innate flexibility. In the language of the computer department, these sections might be called the head office, planning, projects, and operations, as shown in Figure 1.

[Figure 1 - organization chart: HEAD OFFICE over its sections, including the PROJECTS SECTION]

The head office

There are three functions which must be performed by the head office. These functions are those which encompass all of the other sections and are integral to the computer department. The first, and most important, of the functions for the head office is certainly marketing and client relations. All aspects of a service's life cycle involve the customer and he must be presented with a common point of contact on the business level. Every client should feel that he has the attention of city hall for resolving problems. In the opposite direction, the three sections should also use the head office for resolving conflicts or making decisions which affect the clients. The second function of the head office is to control the life cycle of a service. As a service matures from definition to development to operations, it will be passed from one section to another.
This phasing avoids requiring the technical people to change their outlook and skills to match the changes in the maturing process, but may create problems as a service is passed from hand to hand. Only the head office can control the process. Resource control is the last function of the head office. The allocation of the various resources is an unavoidable responsibility and must reflect the changing requirements of the computer department.

The planning section

This is the long-range oriented group which must combine technical and market knowledge to plan for the future. The time orientation of this section will vary from company to company, but any task which can be considered as being in the planning phase is included. Among the planning tasks is the development of long-range strategy. This strategy must be founded on a knowledge of expected customer needs (market research), advances in technical capabilities (state-of-the-art studies), and constraints on the computer department (corporate policy). Development of an equipment strategy is a good example of this task. Another planning function is the developing of functional specifications for potential new services. In this respect, the planning section directly assists the head office in defining new services for clients. Lastly, the planning section assists the projects section by providing state-of-the-art techniques which can be used in developing a specified service.

The projects section

This section has responsibility for the tasks in the computer department which are between planning and operation. Included is both development of services and changes in the technical facilities. The time orientation is limited for each task and each task is executed in a project approach. A permanent nucleus of specialists exists to evaluate and implement major changes in the equipment. Each such major change is limited in its scope and accomplished on a project basis. Development of services is naturally a task for the projects section. Each such project is performed by a team composed of people from the permanent nucleus and from the other two sections. The leadership comes from the projects section to insure that the project philosophy is respected, but utilization of personnel from the other sections assists in the transitions from planning to projects and from projects to operations. This latter transition from development to operations is a part of the third function of the projects section. Direct aid is given to the operations section to insure that project results are properly understood and exploited in the day-to-day operations.

The operations section

Here is the factory. The time orientation is immediate. There are five major tasks to be performed, each of which is self-evident:

• Day-to-day production of the services,
• Accounting, analysis and control of production costs,
• Installation and acceptance of new facilities,
• Maintenance of all facilities (this includes systems software and client services),
• Recurring contact, training, and aid to the clients in use of the services.

TWO EXAMPLES

Perhaps the functioning of this organization can be demonstrated by an example from each of the two major areas of services and resources. The life cycle of a service may begin either in the planning section (as a result of market research) or in the head office (as a result of sales efforts). In any case, the definition of the service is directed by the head office and performed by the planning section. Once the contract is signed, the responsibility passes to the projects section and the project team is built for the development effort. On the project team there will be at least one member from the planning section who is familiar with the definition of the service. The operations section also contributes personnel to facilitate the turnover at the end of the project.
Other personnel are gathered from the permanent nucleus and the sections as needed. Each project member is transferred to the projects section for the life of the project. The service is implemented, turned over to the operations section, and the project team is disbanded. Daily production and maintenance are performed by the operations section as is the periodic overhaul of the system. Each change of sections and all client contacts are under the control of the head office. For resource utilization a close parallel exists. The head office again controls the life cycle. As an example, take the life cycle of a computer system. The planning section would develop a strategy of computing which would be approved by the head office. When the time arrived for changing the computer system, the projects section would define a project and combine several temporary members and permanent nucleus personnel to form the project team. A computer system selection would be made in line with the strategy of computing, and the system would be ordered. The operations section would be trained for the new system and accept it after satisfactory installation. Periodic tuning of the computer system would be done by permanent personnel in the projects section with the cooperation of the operations section. The flow of responsibility for these two examples is represented by Figure 2.

SUMMARY

Excepting those cases where the product of a company contains a computer component, the in-house computer department is in the business of providing an external service to the integral functions of a non-computer business. For this reason, the computer department does not appear to mesh well on an organizational chart of the departments which do directly contribute to the product line of the corporation.
However, a well-founded in-house computer department which depends on its users for funds and on itself for the optimizing of the resources provided by these funds can peacefully serve within the organization. The computer department can respond to these two principles of funding and resource control by recognizing that its funds depend on the satisfaction of the users and that the optimizing of the use of these funds can be aided by organizing around the life cycles of both the services provided and the resources used. One possible organization designed to fulfill these two goals is composed of a head office and three sections. The head office maintains continuing control over the client relationship and over the life cycle of both services and resources. Each of the three sections specializes on a certain phase of the life cycle: definition, development, and operation. Such an organizational approach for the computer department should provide:

• Computer services which are responsive to, and justified by, the needs of the users,
• A controlled and uniform evolution of the life cycle of both services and resources,
• A computer department management oriented towards dealing with technical considerations on a business basis,
• Technical personnel who are client-oriented specialists and who are constantly challenged and matured by dealing with different problems from different frames of reference,
• An in-house computer department which is self-supporting, self-evaluating, and justified solely by its indirect contributions to the total productivity of the corporate efforts.

[Figure 2 - flow of responsibility: service sales and resource needs pass from the CLIENTS through the HEAD OFFICE into service definition, development, and production, and into resource strategy, selection, and utilization optimization, by way of the PROJECTS SECTION]

A computer center accounting system

by F. T. GRAMPP

Bell Telephone Laboratories, Incorporated
Holmdel, New Jersey

INTRODUCTION

This paper describes a computer center accounting system presently in use at the Holmdel Laboratory and elsewhere within Bell Telephone Laboratories. It is not (as is IBM's SMF, for example) a tool which measures computer usage and produces "original" data from which cost-per-run and other such information can be derived. It is, rather, a collector of such data: it takes as input original run statistics, storage and service measurements from a variety of sources, converts these to charges, and reports these charges by the organizations (departments) and projects (cases) which incur them. "DESIGN CRITERIA," below, outlines the overall functions of the system and describes the design criteria that must be imposed in order to assure that these functions can be easily and reliably performed. The remainder of this paper is devoted to a somewhat detailed description of the data base (as seen by a user of the system) and to the actual implementation of the data base. Of particular interest is a rather unusual means of protecting the accounting data in the event of machine malfunction or grossly erroneous updates. Finally, we describe backup procedures to be followed should such protection prove to be inadequate. A description of the system interface is given in the Appendix for reference by those who would implement a similar system.

DESIGN CRITERIA

Many factors were considered in designing the system described here. The following were of major importance:

Cost reporting

Reporting costs is the primary function of any accounting system. Here, we were interested in accurate and timely reporting of charges by case (the term "case" is the accounting term we use for "project" or "account"), so that costs of computer usage to a project would be known, and by department, to ascertain the absolute and relative magnitude of computer expenses in each organization. These orders of reporting are not necessarily identical, or even similar. For example, the cost of developing a particular family of integrated circuits might be charged against a single case, and computer charges for this development might be shared by departments specializing in computer technology, optics, solid state physics, and the like. Similarly, a single department may contribute charges against several or many cases; a good example of this is a drafting department. Original charging information is associated with a job number, an arbitrary number assigned to a programmer or group of programmers, and associated with the department for which he works, and the project he is working on. This job number is charged to one case and one department at any given point in time; however, the case and/or department to which it is charged may occasionally change, as is shown later.

Simplicity of modification

One thing that can be said of any accounting system is that once operational, it will be subjected to constant changes until the day it finally falls into disuse. This system is no exception. It is subjected to changes in input and output data types and formats, and to changes in the relationships among various parts of its data base. Response to such changes must be quick and simple.

Expansion capability

One of the more obvious unknowns in planning a system of this type is the size to which its data base may eventually grow. On a short term basis, this presents no problem: one simply allocates somewhat more storage than is currently needed, and reallocates periodically as excess space begins to dwindle. Two aspects of such a procedure must, however, be borne in mind: First, the reallocation process must not be disruptive to the day-to-day operation of the system. Second, there must be no reasonably foreseeable upper limit beyond which reallocation eventually cannot take place.

Protection

Loss of, say, hundreds of thousands of dollars worth of accounting information would at the very least be most embarrassing. Thus steps must be taken in the design of the system to guarantee insofar as is possible the protection of the data base. Causes of destruction can be expected to range from deliberate malfeasance (with which, happily, we need not be overly concerned), to program errors, hardware crashes, partial updating, or operational errors such as running the same day's data twice. If such dangers cannot be prevented, then facilities which recover from their effects must be available.

Continued maintenance

The most important design criterion, from the designer's point of view, is that the system be put together in such a way that its continued maintenance be simple and straightforward. The penalty for failure to observe this aspect is severe: the designer becomes the system's perpetual caretaker.
On the other hand, such foresight is not altogether selfish when one considers the problems of a computer center whose sole accounting specialist has just been incapacitated.

THE DATA BASE: LEVEL 1

There are two ways in which to examine the data base associated with the accounting system. In the first case, there is its external appearance: the way it looks to the person who puts information into it or extracts information from it. Here, we are concerned with a collection of data structures, the way in which associations among the structures are represented, and the routines by means of which they are accessed. In the second, we look at its internal appearance: here, we are interested in implementation details, in particular, those which make the system easily expansible and maintainable, and less vulnerable to disaster. These two aspects of the data base are, in fact, quite independent; moreover, to look at both simultaneously would be confusing. For this reason, we shall consider the first here, and defer discussion of the second to a later part of this paper. We first examine the structures themselves.

Tally records

Accounting system data is kept on disk in structures called tally records. Since we are concerned with data pertaining to cases, departments and job numbers, we have specified a corresponding set of tally records: Case Tally Records, Department Tally Records and Job Tally Records, respectively. These will be abbreviated as CTRs, DTRs and JTRs. In each tally record is kept the information appropriate to the particular category being represented. Such data fall naturally into three classes: fiscal information, money spent from the beginning of the year until the beginning of the present (fiscal) month; linkage data, pointers to associated records; other data, anything not falling into the other two categories.
For example, a CTR contains fiscal and linkage information: charges (a) up to and (b) for the current fiscal period, and a pointer to a chain of JTRs representing job numbers charged to the CTR's case. A DTR's content is analogous to that of a CTR; the exception is the inclusion of some "other" data. When we report charges by case, the entire report is simply sent to the comptroller. Department reports, however, are sent to the heads of individual departments. To do so, we require the names of the department heads, and their company mailing addresses; hence the "other" data. A JTR contains considerably more information: in addition to the usual fiscal and linkage information, a JTR contains pointers to associated case and department, data identifying the responsible programmer, and a detailed breakdown of how charges for the current month are being accumulated. There is no way of determining a priori those things which will be charged for in order to recover computer center costs. In the olden days (say, 10 years ago) this was no problem: one simply paid for the amount of time he sat at the computer console. With today's computers, however, things just aren't that simple, since the computer center is called upon to provide all sorts of computing power, peripherals and services, and in turn, must recover the costs of said services from those who use them. Thus one might expect to find charges for CPU time, core usage, I/O, tape and disk storage rental, mounting of private volumes, telephone connect time, and so on. Add to this the fact that the charging algorithm changes from time to time, and it quickly becomes apparent that the number and kinds of charging categories simply defy advance specification. Further, it seems clear that a given resource need not always be charged at the same rate; that in fact the rate charged for a resource should be a function of the way in which the resource is being used.
For example, consider a program which reads a few thousand records from a tape and prints them. If such a program were to be run in a batch environment, in which printed output is first spooled to a disk and later sent to a high speed printer, one would expect the tape drive to be in use for only a matter of seconds. If the same program were to be run in a time-shared environment, in which each record read was immediately shipped to a teletype console for printing, the drive might be in use for several hours. If the computer center's charging algorithm is designed to amortize the rental costs of equipment among the users of the equipment, the latter use of "tape" ought to be considerably more expensive than the former, even though the same amount of "work" was done in each case. For these reasons, we chose to make the process table-driven. In this way, new charging categories can be added, old ones deleted, and rates changed simply by editing the file on which a rate table resides. Such a scheme has the obvious drawback of requiring a table search for each transaction with the system, but the inefficiencies here are more than compensated by the ability to make sweeping changes in the charging information without having to reprogram the system. Our rate table is encoded in such a way that it may be thought of as a two-dimensional matrix. One dimension of the matrix consists of the services offered by the computer center: batch processing (in our case, an ASP system), time shared services, data handling (a catch-all category which includes such things as tape copying, disk pack initialization and the like), storage rental, and sundry others. The other dimension consists of the usual computer resources: CPU time, core, disk and tape usage, telephone connect time, etc. When a user incurs a charge, it is recorded in his JTR as a triple called a "chit."
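As a concrete illustration, the table-driven charging just described might be sketched as follows, in modern pseudocode (Python standing in for the system's PL/I); the service and resource names, the rate values, and the record layout are hypothetical, not those of the actual Holmdel rate table.

```python
# Sketch of table-driven charging into "chits"; all names and rates
# below are illustrative only, not the real rate table.
RATES = {
    ("ASP", "CPU"): 4.00,    # batch CPU, dollars per minute (hypothetical)
    ("ASP", "TAPE"): 0.10,   # tape drive under batch, per drive-minute
    ("TSS", "TAPE"): 0.25,   # same resource, costlier under time-sharing
}

def charge(jtr, service, resource, units):
    """Record a charge against a job tally record (JTR) as a chit:
    a (service, resource, cost) triple. A new chit is appended only
    the first time a service-resource combination is used; thereafter
    the existing chit's cost is simply increased."""
    cost = RATES[(service, resource)] * units
    for chit in jtr["chits"]:
        if chit["serv"] == service and chit["res"] == resource:
            chit["cost"] += cost
            return
    jtr["chits"].append({"serv": service, "res": resource, "cost": cost})

jtr = {"chits": []}               # the chit area starts out empty
charge(jtr, "ASP", "TAPE", 5)     # batch: drive busy for moments
charge(jtr, "TSS", "TAPE", 150)   # time-shared: drive busy for hours
charge(jtr, "ASP", "TAPE", 10)    # folded into the existing ASP/TAPE chit
```

Because the rate is keyed by service as well as resource, the time-shared tape usage above comes out far more expensive than the batch usage, which is exactly the behavior the tape-printing example argues for.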
The chit consists of a service name, such as "ASP," a resource name, such as "CPU," and the dollar amount which he has been charged. In this implementation, each chit occupies twelve bytes:

    BYTE:    0         4         8
          |  COST  |  RES   |  SERV  |

These chits are placed in an area at the end of the JTR. Initially, the area is empty. As time progresses and charges are accumulated, the number of chits in the JTR grows each time the job number is charged for a service-resource combination that it hasn't used before. The JTR itself is of variable length, and open-ended "to the right" to accommodate any number of chits that might be placed there.

Linkages

There are, in general, two ways in which one accesses information in the data base. Either one knows about a job number, and applies a charge against it and its associated case and department, or one knows about a case or department number and desires to look at the associated job numbers. This implies that there must be enough linkage information available for the following:

(a) Given a job number, find the case and department to which that number is charged.
(b) Given a case or department number, find all of the job numbers associated with that case or department.

The first case is trivial: one simply spells out, in a JTR, the case and department to which the job number is charged. The second case is somewhat more interesting in that there may be one, or a few, or even very many job numbers associated with a single case or department. At Holmdel, we have the worst of all possible situations in this regard, in that the large majority of our cases and departments have very few job numbers associated with them, whereas a certain few have on the order of a hundred job numbers. Viewed in this light, schemes such as keeping an array of pointers in a CTR or DTR are, to say the least, unattractive because of storage management considerations.
What we have chosen to do, in keeping with our philosophy of open-endedness, is to treat the case-job and department-job structures as chains, and using the CTRs and DTRs as chain heads, operate on the chains using conventional list processing techniques. In our implementation, a case-job chain (more properly, the beginning of it) appears in a CTR as a character field containing a job number charged to that case. In the JTR associated with that job number, the chain is continued in a field which either contains another job number charged to the same case, or a string of zeros, which is used to indicate the end of a chain. Fields in the DTR and JTR function analogously to represent department-job chains.

Traversing such a chain (as one frequently does while producing reports) is quite simple: begin at the beginning and get successive JTRs until you run out of pointers; then stop. Inserting a new job number into a case- or department-job chain is also straightforward: copy the chain head into the chain field in the new JTR; then point the CTR or DTR to the new JTR. Deletion of JTRs from the system is accomplished by means of similar "pointer copying" techniques.

Indices

As was previously mentioned, the job numbers that are assigned to users are arbitrary. They happen, in point of fact, to be sequential for the most part, but this is simply a matter of clerical convenience. The only convention followed by case and department numbers is that (as of this writing) they are strictly numeric. This implies the necessity of a symbol table to associate names (case, department and job numbers) with their corresponding tally records on disk. Three types of symbol table organization were considered for use with this system: sequential, in which a search is performed by examining consecutive entries; binary, in which an ordered table is searched by successively halving the table size; and hash, in which a randomizing transformation is applied to the key. Of these, the sequential search is simply too slow to be tolerated. While the hashing method has a speed advantage over the binary method, the binary method has a very strong advantage for our application, namely, that the table is ordered. One of the functions of the accounting system is that of producing reports, which are invariably ordered by case, department or job number. The ordering of the indices facilitates the work of many people who use the system.

In this implementation, there are three indices: one for cases, one for departments, and one for job numbers. These will be abbreviated CDX, DDX and JDX, respectively. Each index consists of some header information followed by pairs of names and pointers to associated tally records. Header information consists of five items:

RL: Record Length for the tally record. This is needed by the PL/I and OS/360 Input-Output routines.
TN: The number of entries currently in the index.
TMAX: The maximum number of entries which will fit in the core storage currently allocated the index.
RO, MRT: are not relevant at this time, and will be discussed later.

Entries are of the form (P1,P2,NAME), where P1 and P2 are 31-bit binary numbers pointing to records in a direct access data set, and NAME is a character string of appropriate length containing a case, job or department number.

Accessing techniques

Two types of access to the data base are required. The first is the programmer's access to the various structures and fields at the time he writes his program. The second is the program's access to the same information at the time the program is run. The choice of PL/I as the language in which to write the system was, oddly enough, an easy one, since of all of the commonly available and commonly used languages for System/360, only PL/I and the assembler have a macro facility. Using assembly language would make much of the code less easily maintainable, and thus PL/I won by default.

The macro facility is used solely to describe various data base components to the routines that make up the accounting system by selectively including those components in routines which use them. Further, all references to these components are made via the macros. Adoption of this strategy has two somewhat related advantages: First, it forces consistent naming of data items. Without the macros, one programmer would call a variable "X", another would call it "END-OF-MONTH-TOTAL", and so on. This, at least, would happen, and worse can be imagined. Second, should there be a change in a structure, all of the programs that use the structure must be recompiled. If the macros are used, the change can be made in exactly one place (the compile-time library) before recompilation.

Run-time access to the data base is achieved by following simple conventions and by using routines that have been supplied specifically for this purpose. These conventions are simple because they are few and symmetric. The data base consists of six structures: the three indices, and the three types of tally records. None of these structures are internal to a program that interfaces with the data base. All of them are BASED, that is, located by PL/I POINTER VARIABLES which have been declared to be EXTERNAL so that they will be known to all routines in the system. Thus, for example, a program that accesses the JDX would contain the following declarations:

DCL 1 JDX BASED(PJDX),     /* The JDX is defined, and  */
% INCLUDE JDX;             /* its detailed description */
DCL PJDX POINTER EXTERNAL; /* called from the library.
*/

The same convention applies to all of the other structures: they are allocated dynamically and based on external pointers whose names are the structure names prefixed by "P". A more detailed description of the user interface is given in the Appendix. The foregoing implies that there is a certain amount of initialization work to be done by the system: setting pointers, filling indices and the like. This is, in fact, the case. Initialization is accomplished by calling a routine named INIT, usually at the start of the PL/I MAIN program. Among its other functions, INIT: (a) Opens the accounting files. These include the six files containing the indices and tally records. Also opened are the file which contains the rate table, and a file used for JTR overflow. (b) Allocates space for the indices and tally records, then sets pointers to the allocated areas. (c) Reads into core the indices and the rate table, then closes these files. Some unblocking is required here both because the designers of PL/I (and indeed, of OS/360) have decreed that records shall not exceed 32,756 bytes in length, and because short records make the data base accessible to time shared programs running on our CPS system. Once INIT returns control, the operating environment for the accounting system has been established. Indices are in core, and can be accessed by conventional programming techniques or by using the SEARCH, ENTER and DELETE routines provided. Reading and writing of tally records is also done by system routines, these being: RDCTR, RDDTR, RDJTR, WRCTR, WRDTR, WRJTR. The read-write routines all require two arguments: a character string containing the name of the tally record to be read or written, and a logical variable which is set to signal success or failure to the caller. Actual data transfer takes place between a direct access data set and a based "TR" area in core.
A typical example of the use of these routines is:

CALL RDJTR(MYJOB,OK);
IF ¬OK THEN STOP;

Two higher level routines, FORMJTR and LOSEJTR, are available for purposes of expanding or contracting the data base. FORMJTR examines the contents of JTR. If the JTR seems reasonable, that is, if it contains a case and department number, and its chain pointers are explicitly empty (set to zero), it performs the following functions: (a) Checks to see if an appropriate CTR and DTR exist. If not, it creates them. (b) Writes the JTR. (c) Includes the JTR in the linkage chains extending from the CTR and DTR. LOSEJTR performs exactly the inverse function, including deleting CTRs and DTRs whose chains have become empty as a result of the transaction.

INTERFACING WITH THE SYSTEM

Activities involving the system fall into four general categories: creating the data base, modifying the existing data base, inputting charges, and producing reports.

Creating the data base

No utility is provided in the system for the express purpose of creating the data base, because the form and format of previously extant accounting information varies widely from one Bell Laboratories installation to the next. A program has to be written for this purpose at each installation; however, the system routines provided are such that the writing of this program is a straightforward job requiring a few hours' work, at most. Briefly, creation of the data base proceeds as follows: (a) Estimates are made of data set space requirements. These estimates are based on the number of cases, departments and job numbers to be handled, and on the direct access storage device capacities as described in IBM's Data Management¹ publication. Data sets of the proper size are allocated, and perhaps catalogued, using normal OS/360 procedures. (b) An accounting system utility named RESET is then run against the files.
RESET initializes the indices so that they can communicate their core storage requirements to the system. No entries are made in the indices. (c) The aforementioned installation-written routine is run. This routine consists of a two step loop: read in the information pertinent to a job number and construct a JTR; then call FORMJTR. (d) At this point, the data base is installed. A program called UNLOAD is run so that a copy of the data base in its pristine form is available for backup purposes.

Modifying the data base

Two types of data base modifications are possible: those which involve linkage information, and those which do not. The latter case is most easily handled; an EDITOR is provided which accepts change requests in a format similar to PL/I's data directed input, then reads, modifies and rewrites a designated tally record. The former case is not so simple, however, and is broken down into several specific activities, each of which is handled by an accounting system utility supplied specifically for that purpose. Authorizing new job numbers and closing old ones is done by a program called AUTHOR. This program adds a new entry to the data base by calling FORMJTR, and closes a job number by setting a "closed" bit to "1" in its JTR. Note that closed job numbers are not immediately deleted from the system. Deleting closed job numbers is done once per year with an end-of-year program designed for that purpose. At this time, DTRs and CTRs which have no attached JTRs are also deleted from the system. Changing the case or department number to which a job number is charged may be done in either of two ways. It is best to illustrate these by example. In the first case, consider a department which has been renamed as a result of an internal reorganization. Its department number has been changed, say from 1234 to 5678, yet its work and personnel remain the same.
In this case, it is desirable to delete "1234" from the DDX, install "5678", and change all "1234" references in the department-job chain to "5678". As a second example, consider the case of a job number which was used by department 2345 but is now to be used by department 6789 due to a change in departmental work assignments. On the surface, this seems to be a matter of taking the job number out of 2345's chain and inserting it into 6789's. Unfortunately, it isn't that simple. The charge fields in a chain, if added, should be equal to the field in the DTR at the chain head. Simply moving a JTR from one chain to another will make the old chain's fields sum low, and the new chain's fields sum high. The obvious solution to this problem is to forbid the changing of charged departments, i.e., to require that in the event that such a change is desired, the old job number be closed, and a new one authorized. Such a solution is not a very popular one, since job numbers have a habit of becoming imbedded in all sorts of hard-to-reach places: catalogued procedures, data set names and the like. Furthermore, it has been our experience that programmers develop a certain fondness for particular job numbers over a period of time and are somewhat reluctant to change them. Our solution, then, is as follows: Given a job number, say 1234, whose charged department is to be reassigned, open a new job number, say 1234X, whose name was not previously known to the system, and which is charged to the proper department. Then close the old job number, and proceed to exchange names in the JTRs, and linkage pointers in the respective chains. A utility called SWAP is available which permits renaming or reassignment of either departments or cases (or both).

Inputting charges

As might be expected from our previous discussion of charging categories, there are many inputs to the accounting system. Moreover, the input formats are quite diverse, and subject to constant change.
In order that the people charged with maintaining the accounting system might also be able to maintain their own sanity, it was necessary to design a simple way of incorporating new sources of charging information into the system. Our first thought was to design a "general purpose input processor," i.e., a program that would read a data description and then proceed to process the data following (in this case, charge records). This approach was quickly abandoned for two reasons. First, the data description language required to process our existing forms of charge records would be quite complicated and thus difficult to learn and use, if in fact it could be implemented at all. Second, for each class of input charges, there is a certain amount of validity checking that can be performed at the time the charge records are read. Such checking need not be limited to a single record; for example, if it is known that a certain type of input consists of sequentially numbered cards, then a check can be made to determine whether some cards have been left out. Our approach was as follows. For each type of charge record used by an installation, an input program must be written. This input program reads a charge record, does whatever checking is possible, constructs a standard structure consisting of a job number, service name, and one or more resource-quantity pairs, and passes this structure to a program called CHARGE. CHARGE does the rest. It brings in the appropriate JTR, converts the quantities in the resource-quantity pairs to dollar charges via factors contained in the rate table, charges the JTR, adding chits if necessary, and charges the associated CTR and DTR. The important point here is that the writer of an input program is allowed complete freedom with respect to formats and checking procedures, while he is also allowed almost complete naivete with respect to the rest of the system.
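This division of labor can be sketched as follows. The sketch is in Python rather than the system's PL/I, and the rates, record layout, job number and helper names are all invented for illustration; only the shape of the interface (input program validates, CHARGE prices and posts) comes from the text above.

```python
# Illustrative reconstruction only; not the Bell Labs PL/I implementation.

RATE_TABLE = {                   # (service, resource) -> dollars per unit
    ("ASP", "CPU"): 0.15,        # invented rates
    ("ASP", "CORE"): 0.02,
}

def charge(jtr, service, pairs):
    """CHARGE's role: convert resource quantities to dollar charges via
    the rate table and post them to the JTR, adding chits as needed."""
    for resource, quantity in pairs:
        cost = RATE_TABLE[(service, resource)] * quantity
        key = (service, resource)
        # A chit is created the first time a service-resource pair occurs.
        jtr["chits"][key] = jtr["chits"].get(key, 0.0) + cost
        jtr["new_charges"] += cost

def asp_input_program(card, jtrs):
    """One hypothetical input program: it alone knows its record format
    and does its own validity checking before calling charge()."""
    job, cpu_sec, core_units = card.split(",")
    assert job in jtrs, "unknown job number"
    charge(jtrs[job], "ASP",
           [("CPU", float(cpu_sec)), ("CORE", float(core_units))])

jtrs = {"J123": {"chits": {}, "new_charges": 0.0}}
asp_input_program("J123,10,50", jtrs)
```

The input program never touches the rate table or the JTR layout, which is the point the text makes: format knowledge stays in the input program, pricing knowledge stays in CHARGE.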
Reporting

The system includes programs to produce three "standard" reports: one (by cases) to be sent to the comptroller, one (by departments) to be sent to department heads, and a third (by job number) to be sent to the person responsible for each active job number in the system. The comptroller's report is required of the computer center, and its format was specified in detail by the comptroller. The other two reports were designed to give the users of the computer center a clear and easily readable report of their computer usage in as concise a form as possible. The department report shows old and recent charges to the department, followed by a list of job numbers being charged to that department. Accompanying each job number are its charges, the case to which it is charged, and the name of the person responsible for it. A more detailed breakdown is certainly possible; the average department head, however, usually doesn't want to see a breakdown unless something looks unusual. In that case, the programmer responsible for the unusual charges is probably his best source of information. The user's report shows old and new charges for a job number, together with a detailed breakdown of the new charges by service-resource pairs. Its use to the programmer is threefold: it satisfies his curiosity; it enables him, in some cases, to detect and correct uneconomical practices; and it enables him to supply more detailed information to his department head should the need arise. In order to produce the user's report, all of the chits in all of the JTRs in the system must be scanned. During the scanning process, it is a trivial matter to maintain a set of "grand totals" showing the money recovered by the computer center in terms of all service-resource categories. This valuable "by-product" is published after the user reports have been generated.
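The grand-totals by-product falls out of a single pass over the chits. A minimal Python sketch, with the JTR contents and dollar figures invented for illustration:

```python
# Sketch of the "grand totals" by-product: while scanning every chit in
# every JTR for the user reports, accumulate a per service-resource total.
# Jobs, services and amounts below are invented examples.

jtrs = [
    {"job": "J1", "chits": {("ASP", "CPU"): 12.50, ("ASP", "CORE"): 3.00}},
    {"job": "J2", "chits": {("ASP", "CPU"): 7.50, ("TSO", "CONNECT"): 9.00}},
]

totals = {}
for jtr in jtrs:                         # the same pass that prints reports
    for key, dollars in jtr["chits"].items():
        totals[key] = totals.get(key, 0.0) + dollars
# totals[("ASP", "CPU")] now holds the money recovered for that
# service-resource category across the whole installation.
```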
More specialized reporting is possible, but these programs, by their nature, are best written by particular installations rather than distributed as a part of the accounting system package. As was mentioned earlier, the ordering of the indices greatly facilitates the writing of such programs.

THE DATA BASE: LEVEL II

The foregoing discussion of the data base was aimed at the user of the system, and thus said nothing about its structure in terms of physical resources required, and the way in which these resources are used. We now expand on that discussion, concentrating on those factors influencing expansibility and protection. The main features of interest here are the implementation of tally record storage, the indices, and the provision to handle variable length JTRs.

Free storage pools

CTRs, DTRs and JTRs are stored on direct access data sets. When it is desired to access a tally record, a search of the appropriate index is made, and a relative record number on which the tally record is written is obtained from the index and used as the KEY in a PL/I read or write statement. The interesting feature of the system is that there is no permanent association between a particular tally record and a particular relative record number. Direct access records used to contain tally records are stored in linked pools. The RO entry in the appropriate index head points to the first available link, that link points to the second, and so on. One can think of the initial condition of a pool (no space used) as follows: RO contains the number 1, record #1 contains the number 2, etc. When a link is needed for tally record storage, a routine called GETLINK detaches the record pointed to by RO from the free pool by copying that record's pointer into RO. The record thus detached is no longer available, and its number can be included into an index entry. A second routine called PUTLINK performs exactly the inverse function. These activities are well hidden from the users of the system.
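The pool mechanics can be sketched as follows. GETLINK and PUTLINK are the paper's names, but this Python rendering, with an in-memory list standing in for the direct access data set, is illustrative only:

```python
# Sketch of the linked free-storage pool. records[i] holds, while record
# i+1 is free, the number of the next free record; RO is the pool head.
# The initial condition matches the text: RO = 1, record #1 contains 2, etc.

POOL_SIZE = 5
records = [i + 2 for i in range(POOL_SIZE)]  # record n points to n+1
RO = 1                                       # first available record

def getlink():
    """Detach the record RO points to by copying its pointer into RO."""
    global RO
    taken = RO
    RO = records[taken - 1]   # the next free record becomes the pool head
    return taken

def putlink(n):
    """Inverse of getlink: push record n back onto the free list."""
    global RO
    records[n - 1] = RO
    RO = n

a = getlink()   # takes record 1
b = getlink()   # takes record 2
putlink(a)      # record 1 returns to the pool
c = getlink()   # reuses record 1
```

Because the pool is a pushdown list, freed records are reused before fresh ones are taken, which is what makes the "no permanent association" between tally records and record numbers workable.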
The obvious advantage of the casual user not seeing the list processing operations is that he won't be confused by them. The disadvantage is that when he runs out of space and allocates a larger data set, he will forget to initialize the records by having the nth unused record point to the n+1st, as above. On the assumption (probably valid) that the data base will continue to grow over long periods of time, we have simplified the initialization procedure as follows: (a) Let RO point to the first available record (initially 1) and MRT point to the maximum record ever taken (initially 0). (b) When GETLINK calls for a record, compare RO and MRT. If RO>MRT then the data base is expanding into an area it never used before. In this case, set MRT=MRT+1. Otherwise, the record specified by RO has been used in the past and has since been returned by PUTLINK, in which case we proceed as before. With this procedure, initialization of the pools is done at the time that the data base is first created. Subsequent reallocation of data sets for purposes of enlarging the storage area is done as per standard OS/360 practice.

Indices and protection

Recall that although an index entry can be thought of as consisting of a pair (P,NAME) where P is a record number, and NAME is some character string, the entries are in fact represented as triples (P1,P2,NAME). At the time that an index is read into core by INIT, all of the P1s contain record numbers, while all of the P2s contain 0. Reading and writing of the tally records is done as follows. For reading: (a) If the P2 entry is non-zero, read record #P2. (b) Otherwise, read record #P1. And for writing: (a) If the P2 entry is zero, call GETLINK for a free record number, and copy it into P2. (b) Write record #P2.
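The two-pointer discipline just described can be sketched as follows, under stated assumptions: a dict stands in for the direct access data set, a simple counter stands in for GETLINK, and the commit step at the end is a sketch of the pointer exchange the paper attributes to FINI. All names and record contents are illustrative.

```python
# P1 locates the record as of the last completed run; P2, once non-zero,
# locates this run's updated copy. Updates never overwrite the stable copy.

disk = {1: "old CTR image"}                  # stand-in for the data set
entry = {"P1": 1, "P2": 0, "NAME": "1234"}   # one index entry
_next_free = [2]                             # stand-in for GETLINK's pool

def get_free_record():
    n = _next_free[0]
    _next_free[0] += 1
    return n

def read_tally(entry):
    # Read rule: the updated copy if one exists, else the stable copy.
    return disk[entry["P2"] if entry["P2"] else entry["P1"]]

def write_tally(entry, image):
    # Write rule: on the first write of the run, acquire a fresh record.
    if entry["P2"] == 0:
        entry["P2"] = get_free_record()
    disk[entry["P2"]] = image

write_tally(entry, "new CTR image")

def fini(entry):
    # Commit step, as FINI does it: make the updated copy the stable one;
    # the superseded record would then go back to the pool via PUTLINK.
    if entry["P2"]:
        entry["P1"] = entry["P2"]
        entry["P2"] = 0

fini(entry)
```

A crash before the commit step leaves every P1 pointing at the pre-run images, which is why a crashed charging run can simply be rerun unchanged.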
At the conclusion of a run in which the data base is to be updated, the main program, which had caused the operating environment to be established by calling INIT, now calls a routine named FINI, which in turn: (a) Exchanges the P2 and P1 index entries in all cases where P2 is non-zero. (b) Returns surplus links to the pool via PUTLINK. (c) Rewrites the indices in more than one place. (d) Closes all of the files. Such a strategy offers both protection and convenience. Clearly, the danger of partial updating of the files during a charging run is minimized. Indeed, our standard operating instructions for those who run the system state that a job which crashes prior to completion is to be run a second time, unchanged. Further, a program that doesn't call FINI will not update the accounting files. Included in this category, besides "read only" reporting programs, are debugging runs, and input programs which contain algorithms to test the validity of incoming data, and which may not modify the files.

Variable length records

The JTRs, because of the fact that they can contain an unpredictable number of chits, are variable in length. Overflow records are obtained via GETLINK to extend the JTR as far as required. As read into storage by RDJTR, the overflow links are invisible to the user. Besides the obvious convenience, the overflow handling in the JTR offers a different, if not devious type of protection. In the case of a system such as this, where the number of charging categories is, for practical purposes, unlimited, there is always the temptation to make the charging breakdown finer, and finer, and finer. Succumbing to this temptation gives rise to nasty consequences. Processing time and storage space increase, but the reports from the system become more voluminous, hence less readable, and in a sense contain less information because of the imprecision inherent in so many of the "computer usage measurement" techniques.
(In this latter case, we often tend to behave analogously to the freshman physics student who measures the edges of a cube with a meter stick and then reports its volume to the nearest cubic millimicron.) By happy coincidence, it turns out that in a system with "normal" charging categories, most JTRs have relatively few chits (too few to cause overflow), while occasional JTRs require one or more overflow records. Should the breakdown become fine enough that most of the JTRs cause overflow, the cost of running the accounting system rises, not gradually, but almost as a step. Further, if the breakdown is subsequently made coarser, the excess chits, and hence the overflow records, quietly disappear at the end of the next accounting period. Thus the system is, in a sense, forgiving, and tends to protect the user from himself.

BACKUP

As Mr. Peachum² aptly remarked, it has never been very noticeable that what ought to happen is what happens. In addition to our efforts to make the system crash-proof, we have also provided several levels of backup procedures.

Backup indices

As noted previously, FINI rewrites the indices, but in more than one place. Since the "extra" copies are written first, these can be copied to the "real" index files in the event that a crash occurs while the latter are being rewritten by FINI.

Unload-reload copies

Two utilities, UNLOAD and RELOAD, are supplied with the system. UNLOAD copies the structured files onto tape in such a way that the structure is preserved if RELOAD copies them back. It is our present practice to take UNLOAD snapshots daily, cycling the tapes once per week, and, with a different set of tapes, monthly, cycling the tapes once per year. Since chits are deleted at the end of each month (for economy of storage), UNLOAD-style dumps are also useful if it becomes necessary to backtrack for any reason to a point in time prior to the beginning of the current month. Further, the tapes are in such a format that they are easily transmitted via data link to another installation for purposes of inspection or off-site processing.

OS/360 dump-restore

It is the practice, in our computer center, to periodically dump all of our permanently mounted direct access storage devices using the OS/360 Dump-Restore utility. Since the accounting files are permanently mounted, this procedure provides an additional level of safety.

Reformatting

The worst possible mishap is one in which the chains in the system, for one cause or another, are destroyed to the extent that one or more of them "points to the wrong place". Although this condition is most unusual, it is also most insidious, since there is a possibility that errors of this type can remain hidden for, perhaps, as long as a few weeks. If enough input data has been added to the data base to make it undesirable to backtrack to the point prior to that at which the initial error is suspected to have occurred, symbolic information sufficient to regenerate the pointers is contained in the data base, and routines have been provided to copy the data base, sans structure, onto a sequential file, and then to rebuild it, using FORMJTR.

ACKNOWLEDGMENT

The design and implementation of the accounting system described here was completed with the help and cooperation of many people, and for this the author is truly grateful. In particular, the efforts, advice, insight and inspiration provided throughout the project by Messrs. R. E. Drummond and R. L. Wexelblat assured its successful completion.

REFERENCES

1 IBM System/360 Operating System: Data Management Services, Order Form GC26-3746
2 B BRECHT, Die Dreigroschenoper

APPENDIX

The user-system interface

The facilities provided to give the user convenient access to the data base and the routines which manipulate it can be divided into two categories: compile-time facilities and run-time facilities.

Compile-time facilities

These consist of PL/I macro definitions describing various structures. Since the storage class of a structure (e.g., BASED, STATIC, etc.) may be different in different routines, or, where there are multiple copies of a structure, even within the same routine, the initial "DCL 1 name class," must be provided by the user. Compile-time structures include the indices (CDX, DDX, JDX), the tally records (CTR, DTR, JTR) and the rate table.

Example 1:

DCL 1 CDX BASED(PCDX),
% INCLUDE CDX;

produces:

DCL 1 CDX BASED(PCDX),
  2 RL FIXED(15) BINARY,        /* Record Length */
  2 TN FIXED(15) BINARY,        /* # of Entries */
  2 TMAX FIXED(31) BINARY,      /* Max. Entries */
  2 RO FIXED(31) BINARY,        /* Pool Head */
  2 MRT FIXED(31) BINARY,       /* Max. Record Taken */
  2 VAREA(0:N REFER(CDX.TN)),   /* Index Proper */
    3 P1 FIXED(31) BINARY,      /* Read Ptr. */
    3 P2 FIXED(31) BINARY,      /* Write Ptr. */
    3 NAME CHAR(9);             /* Case Number */

Example 2:

DCL 1 CTR BASED(PCTR),
% INCLUDE CTR;

produces:

DCL 1 CTR BASED(PCTR),
  2 CLNK FIXED(31) BINARY,      /* Used by GETLINK. */
  2 CCAS CHAR(9),               /* Case charged. */
  2 CUNUSED CHAR(3),            /* For future use. */
  2 COLD FIXED(31) BINARY,      /* $ to last fiscal. */
  2 CNEW FIXED(31) BINARY,      /* Latest charges. */
  2 CCUM FIXED(31) BINARY,      /* Cumulative total. */
  2 CJCH CHAR(8);               /* Job chain for this case. */

Example 3:

Since it is expected that the user will always use the system-supplied rate table (as opposed to a private copy of same):

% INCLUDE RATES;

produces:

DCL 1 RATES BASED(PRTS),        /* Rate Data */
  2 #SERVICES FIXED BIN(31),
  2 TOT_RES FIXED BIN(31),      /* Tot. # Resources */
  2 SERVICE(12),                /* Classes of Service */
    3 NAME CHAR(4),
    3 CODE CHAR(1),             /* Comptroller's Code */
    3 #RESOURCES FIXED BIN(31),
    3 OFFSET FIXED BIN(31),     /* Into Res. Table */
  2 RES_TABLE(120),             /* Resources */
    3 NAME CHAR(20),
    3 ABBR CHAR(4),
    3 UNIT CHAR(8),
    3 RATE FLOAT DEC(14);       /* Per-unit */

Run-time facilities

Routines are provided to establish and terminate the system's run-time environment, maintain the indices, fetch and replace tally records, expand and contract the data base, and handle allocation of disk storage. These are shown in Table I, below.

TABLE I - User Interface Routines

ROUTINE: INIT, FINI
FUNCTION: Initialization & termination.
ARGUMENTS REQUIRED: None.
EXAMPLE: CALL INIT; CALL FINI;

ROUTINE: SEARCH, ENTER, DELETE
FUNCTION: Index maintenance.
ARGUMENTS REQUIRED: Index name, key name, return pointer, success indicator.
EXAMPLE: CALL SEARCH(DDX,'1234',RP,OK); IF ¬OK THEN STOP;

ROUTINE: RDCTR, RDDTR, RDJTR, WRCTR, WRDTR, WRJTR
FUNCTION: Read and write routines for tally records.
ARGUMENTS REQUIRED: Name (i.e., case, department or job number), success indicator.
EXAMPLE: CALL RDJTR('MYJOB',OK); IF ¬OK THEN DO; PUT LIST(MYJOB||'MISSING'); STOP; END;

ROUTINE: FORMJTR, LOSEJTR
FUNCTION: Installation & deletion of job numbers.
ARGUMENTS REQUIRED: Job number, success indicator.
EXAMPLE: CALL FORMJTR(NEWJOB,OK); IF ¬OK THEN STOP;

ROUTINE: GETLINK, PUTLINK
FUNCTION: Allocate & return disk space.
ARGUMENTS REQUIRED: Data set name, pointer to 1st avail. record, return pointer.
EXAMPLE: CALL GETLINK(FILE,RP,POOLHD);

An approach to job pricing in a multi-programming environment

by CHARLES B. KREITZBERG and JESSE H. WEBB

Educational Testing Service
Princeton, New Jersey

INTRODUCTION

Computers are amazingly fast, amazingly accurate, and amazingly expensive. This last attribute, expense, is one which must be considered by those who would utilize the speed and accuracy of computers. In order to equitably distribute the expense of computing among the various users, it is essential that the computer installation management be able to accurately assess the costs of processing a specific job.
Knowing job costs is also important for efficiency studies, hardware planning, and workload evaluation as well as for billing purposes. For a second generation computer installation, job billing was a relatively simple task, since in this environment any job that was in execution in the machine had the total machine assigned to it for the entire period of execution. As a result, the billing algorithm could be based simply upon the elapsed time for the job and the cost of the machine being used. In most cases, the cost for a job was given simply as the product of the run time and the rate per unit time. While this algorithm was a very simple one, it nevertheless was an equitable one and in most cases a reproducible one. Because of the fact that in a second generation computer only one job could be resident and in execution at one time, the very fast CPUs were often under utilized. As the CPUs were designed to be even faster, the degree of under utilization of them increased dramatically. Consequently, a major goal of third generation operating systems was to optimize the utilization of the CPU by allowing multiple jobs to be resident concurrently so that when any one job was in a wait state, the CPU could then be allocated to some other job that could make use of it. While multiprogramming enabled a higher utilization of the CPU, it also introduced new problems in job billing. No longer was the old simple algorithm sufficient to equitably charge for the running of jobs. The two major reasons for this are:
Therefore, it is evident that the rate per unit time cannot be a constant for all jobs, as it was for second generation computer billing, but must in some sense be dependent upon the extent to which resources are allocated to the jobs.

The second item, and perhaps the most well known, that influences the design of a billing algorithm for a third generation computer is the variation that is often experienced in the elapsed time from run to run of a given job. The elapsed time for any given job is no longer a function only of that job, but is also a function of the job mix. In other words, the elapsed time for a job will vary depending upon the kinds and numbers of different jobs which are resident with it when it is run. In order to demonstrate the magnitude of variation that can be experienced with subsequent runs of a given job, one job was run five different times in various job mixes. The elapsed time varied from 288 seconds to 1,022 seconds. This is not an unusual case, but represents exactly what can happen to the elapsed time when running jobs in a multi-programming environment. The effect, of course, is exaggerated as the degree of multiprogramming increases. Not only can this variation in run time cause a difference in the cost of a job from one run to another, but it also can cause an inequity in the cost of different jobs; the variation in run time can effectively cause one job to be more expensive than another even though the amount of work being done is less.

Fall Joint Computer Conference, 1972

Objectives

We have isolated several important criteria to be met by a multi-programming billing algorithm. Briefly, these criteria are as follows.

• Reproducibility-As our previous discussion has indicated, billing on elapsed time does not provide for reproducibility of charges.
Any algorithm that is designed to be used in a multiprogramming environment should have, as a characteristic, the ability to produce reproducible charges for any given job regardless of when or how it is run, or what jobs share the computer with it.

• Equitability-Any billing algorithm designed for use in a multi-programming environment must produce equitable costs. The cost of a given job must be a function only of the work that the job does and of the amount of resources that it uses. Jobs which use more resources or do more work must pay more money. The billing algorithm must accommodate this fact.

• Cost Recovery-In many computer operations it is necessary to recover the cost of the operation from the users of the hardware. The billing algorithm developed for a multi-programming environment must enable this recovery of costs to be achieved.

• Auditability-A multi-programming billing algorithm must produce auditable costs. This is particularly true when billing outside users for the use of computer hardware. The charges to the client must be auditable.

• Encourage Efficient Use of the Hardware-Since one goal in the design of third generation hardware was to optimize the use of that hardware, a billing algorithm designed for use in a multijobbing environment should encourage the efficient use of the hardware.

• Allow for Cost Estimating-The implementation of potential computer applications is often decided upon by making cost estimates of the expense of running the proposed application. Consequently, it is important that the billing algorithm used to charge customers for the use of the hardware also enable potential customers to estimate, beforehand, the expense that they will incur when running their application on the computer hardware.
We distinguish between job cost and job price: job cost is the amount which it costs the installation to process a given job; job price is the amount that a user of the computer facility pays for having his job processed. Ideally, the job price will be based on the job cost, but this may not always be the case. In many organizations, notably universities, the computer charges are absorbed by the institution as overhead; in these installations the job price is effectively zero-the job costs are not. In other organizations, such as service bureaus, the job price may be adjusted to attract clients and may not accurately reflect the job cost. In either case, however, it is important that the installation management know how much it costs to process a specific job.1,2

The development of the job billing algorithm (JBA) discussed in this paper will proceed as follows: first, we will discuss the "traditional" costing formula used in second generation computer systems:

cost = (program run time) X (rate per unit time)

and we shall demonstrate its inadequacy in a multijobbing environment. Second, we shall develop a cost formula in which a job is considered to run on a dedicated computer (which is, in fact, a subset of the multi-programming computer) in a time interval developed from the active time of the program.

DEVELOPMENT OF THE JOB PRICING ALGORITHM

In order to recover the cost of a sharable facility over a group of users, the price, P, of performing some operation requiring usage t_k is:

P = C(t_k / Σt_i)  (1)

where:

C is the total cost of the facility
Σt_i is the total usage experienced
t_k is the amount of use required for the operation

Consider the billing technique which was used by many computer installations running a single thread (one program at a time) system. Let $m be the cost per unit time of the computer configuration.
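Formula (1) is simple cost recovery prorated by usage. A minimal sketch (the function and variable names are ours, not the paper's):

```python
def price(total_cost, usages, k):
    """Equation (1): charge operation k its share of the facility
    cost C, in proportion to its usage t_k over the total usage."""
    return total_cost * usages[k] / sum(usages)

# A $1,000 facility whose three users consumed 10, 30, and 60 units:
# each pays C * t_k / sum(t_i), and the charges sum back to C.
charges = [price(1000.0, [10, 30, 60], k) for k in range(3)]
```

Note that the individual prices necessarily sum to C; this is exactly the cost-recovery property listed among the objectives above.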
Then, if a program began execution at time t1 and terminated execution at time t2, the cost of running the program was computed by:

cost = m(t2 - t1)  (2)

As the utilization of the computer increased, the cost per unit time decreased.

The cost figure produced by (2) is in many ways a very satisfying one. It is simple to compute; it is reproducible, since a program normally requires a fixed time for its execution; it is equitable, since a "large" job will cost more than a "small" job (where size is measured by the amount of time that the computer's resources are held by the job). Unfortunately for the user, however, the cost produced by (2) charges for all the resources of the computer system even if they are unused. This "inflated" charge is a result of the fact that, in a single thread environment, all resources of the computer system are allocated to a program being processed even if that program has no need of them. The effect of this is that the most efficient program in a single thread environment is the program which executes in the least amount of time; that is, programmers attempt to minimize the quantity (t2 - t1); this quantity, called the wall clock time (WCT) of the program, determines the program's cost. Since the rate of the computer is constant, the only way to minimize the cost for a given program is to reduce its WCT; in effect, make it run faster. Hence, many of the techniques which were utilized during the second generation were designed to minimize the time that a program remained resident in the computer.

The purpose of running in a multi-thread environment, one in which more than one program is resident concurrently, is to maximize the utilization of the computer's resources, thus reducing the unit cost. In a multi-thread processing system, the cost formula given by (2) is no longer useful because:

1. It is unreasonable to charge the user for the entire computer since the unused resources are available to other programs.
2. The wall clock time of a program is no longer a constant quantity but becomes a function of the operating environment and job mix.

For these reasons we must abandon (2) as a reasonable costing formula. Many pricing algorithms are in use; however, none is as "nice" as (2). If possible, we should like to retain formula (2) for its simplicity and intuitive appeal.3 This may be done if we can find more consistent definitions to replace m (rate) and WCT (elapsed time).

Computed elapsed time

A computer program is a realization of some process on a particular hardware configuration. That is, a program uses some subset of the available resources and "tailors" them to perform a specific task. The program is loaded into the computer's memory at time t1 and terminates at time t2.

Figure 1-States of a program in a single thread environment

During the period of residency, the program may be in either one of two states: active or blocked. A program is active when it is executing or when it is awaiting the completion of some external event. A program is blocked when it is waiting for some resource which is unavailable. These categories are exhaustive; if a program is not active and is not waiting for something to happen, then it is not doing anything at all. The two categories are not, however, mutually exclusive, since a program may be processing but also awaiting the completion of an event (for example, an input/output operation); indeed, it is this condition which we attempt to maximize via channel overlap. Therefore, we define voluntary wait as that interval during which a program has ceased computing and is awaiting the completion of an external event.
We define involuntary wait as the interval during which a program is blocked; a condition caused by contention. In general, voluntary wait results from initiation of an input/output operation, and in a single thread system we have:

WCT = Σt_c + Σt_v  (3)

where each t_c is a compute interval and each t_v is a voluntary wait interval. Graphically, the situation is represented as in Figure 1. The solid line represents periods of compute and the broken line indicates intervals of input/output activity. Since Σt_c is based on the number of instructions executed, which is constant, and Σt_v is based on the speed of input/output, which is also constant (except for a few special cases), WCT is itself constant for a given program and data in a single thread environment. The ideal case in this type of system is one in which the overlap is so good that the program obtains the i+1st record as soon as it has finished processing the ith record.

Figure 2-A program with maximum overlap

Graphically, this situation is shown in Figure 2, and we can derive the lower bound on WCT as:

WCT ≥ max(Σt_c, Σt_v)  (4)

and, of course:

WCT → Σt_c as Σt_v → 0  (5)

In a multi-thread environment, we know that:

WCT = Σt_c + Σt_v + Σt_i  (6)

where t_i is an interval of involuntary wait. But, from the above discussion, we know that Σt_c + Σt_v is a constant for a given program; hence, the inconsistency in the WCT must come from t_i.

Figure 5-States of a program based on active time

Figure 3-States of a program in a multi-thread environment
This is precisely what our intuition tells us: that the residency time of a job will increase with the workload on the computer. Graphically, a program running in a multithread environment might appear as in Figure 3. During the interval that a program is in involuntary wait, it is performing no actions (in fact, some programmers refer to a program in this state as "asleep"). As a consequence, we may "remove" the segments of time that the program is asleep from the graph, for time does not exist to a program in involuntary wait. This permits us to construct a series of time sequences for the various programs resident in the computer, counting clock ticks only when the program is active. When we do this, a graph such as Figure 4 becomes continuous with respect to the program (Figure 5). Of course, the units on the x-axis in Figure 5 no longer represent real time; they represent, instead, the active time of the program. We shall call the computed time interval computed elapsed time (CET), defined as:

CET = Σt_c + Σt_v = WCT - Σt_i  (7)

and, as t_i → 0, CET → WCT, so that we have the relationship:

WCT ≥ CET  (8)

Figure 4-States of a program based on real time

The quantity WCT - CET represents the interference from other jobs in the system and may be used as a measure of the degree of multi-programming. Unfortunately, the CET suffers from the same deficiency as the WCT-it is not reproducible. The reason for this is that on a movable head direct access storage device contention exists for the head, and the time for an access varies with the job mix. However, the CET may be estimated from its parameters. Recall that CET = Σt_c + Σt_v. The quantity Σt_c is computed from the number of instructions executed by the program and is an extremely stable parameter.
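The bookkeeping behind (6) through (8) can be sketched directly; the interval values below are invented for illustration:

```python
# Time intervals (in seconds) observed for one job in a
# multi-thread system.
t_c = [5.0, 3.0, 2.0]   # compute intervals
t_v = [2.0, 1.0]        # voluntary waits (I/O the job initiated)
t_i = [4.0, 2.0]        # involuntary waits (blocked by contention)

wct = sum(t_c) + sum(t_v) + sum(t_i)   # equation (6)
cet = sum(t_c) + sum(t_v)              # equation (7): WCT - sum(t_i)
interference = wct - cet               # other jobs' share of the WCT
```

Only `t_i` varies with the job mix, so CET is the stable quantity on which to bill, and `interference` measures the degree of multi-programming.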
The quantity Σt_v is based upon the number and type of accesses and is estimated as:

Σt_v = Σ n_i a(i)  (9)

where a(i) is a function which estimates the access time to the ith file and n_i is the number of accesses to that file. The amount of time which a program waits for an input or output operation depends upon a number of factors. The time required to read a record is based upon the transfer rate of the input/output device, the number of bytes transferred, and the latency time associated with the device (such as disk rotation, tape inter-record gap time, and disk arm movement). For example, a tape access on a device with a transfer rate of RT and a start-stop time of ST would require:

ST + RT·b  (10)

seconds to transfer a record of b bytes. Hence, for a file of n records, we have a total input/output time of:

Σ(i=1 to n) (ST + RT·b_i) = n·ST + RT·Σb_i  (11)

where Σb_i is the total number of bytes transferred. In practice, Σb_i ≈ nB, where n is the number of records and B is the average blocksize. The term ST is, nominally, the start-stop time of the device. However, this term is also used to apply a correction to the theoretical record time. The reason is that while the CET will never be greater than the I/O time plus the CPU time, overlap may cause it to be less. This problem is mitigated by the fact that at most computer shops (certainly at ETS) almost all programs are written in high-level computer languages and, as a result, the job mix is homogeneous. A measure of overlap may be obtained by fitting various curves to historical data and choosing the one which provides the best fit. In other words, pick the constants which provide the best estimate of the WCT. It is important to remember that the CET function produces a time as its result. We are using program parameters such as accesses, CPU cycles, and tape mounts only because they enable us to predict the CET with a high degree of accuracy.
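Estimate (11) is mechanical to apply. A sketch, treating RT as seconds per byte so that the units of (10) and (11) work out; the parameter values are illustrative only, not measured device constants:

```python
def tape_file_time(n_records, avg_blocksize, st, rt):
    """Equation (11): n*ST + RT*(total bytes), approximating the
    total bytes as n * B for average blocksize B (Σb_i ≈ nB)."""
    return n_records * st + rt * (n_records * avg_blocksize)

# 1,000 records of 2,000 bytes each, 10 ms start-stop time,
# 10 microseconds per byte:
t = tape_file_time(1000, 2000, st=0.010, rt=0.00001)
```

In practice ST absorbs the overlap correction described above, so it is fitted from historical data rather than taken from the device manual.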
The original billing formula (2) which we wished to adapt to a multi-thread environment utilized a time multiplied by a dollar rate per unit time. The CET estimating function has provided us with a pseudo run time; we must now develop an appropriate dollar rate function.

In order to develop a charging rate function, we consider the set of resources assigned to a program. In a multi-programming environment, the computer's resources are assigned to various programs at a given time. The resources are partitioned into subset computers, each assigned to a program. The configuration of the subset computers is dynamic; therefore, the cost of a job is:

cost = Σ(i=1 to n) CET_i·r_i  (12)

where:

i is the allocation interval; that is, the interval between changes in the resources held by the job.
CET_i is the CET accumulated during the ith interval.
r_i is the rate charged for the subset computer with the configuration held by the program during interval i.

The allocation interval for OS/360 is a job step.

The rate function

Some of the attributes which the charging rate function should have are:

• the rate for a subset computer should reflect the "size" of the subset computer allocated; a "large" computer should cost more than a "small" computer.
• the rate for a subset computer should include a correction factor based upon the probability that a job will be available to utilize the remaining resources.
• the sum of the charges over a given period must equal the monies to be recovered for that period.

With these points in mind, we may create a rate function. The elements of the resource pool may be classified as sharable resources and nonsharable resources. Tape drives, core memory, and unit record equipment are examples of nonsharable resources; disk units are an example of a sharable resource. While these categories are not always exact, they are useful, since we assume that allocation of a nonsharable resource is more significant than allocation of a sharable resource.
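Equation (12) accumulates cost step by step. A minimal sketch (the rates here are invented for illustration):

```python
def job_cost(steps):
    """Equation (12): each allocation interval (an OS/360 job step)
    contributes the CET accumulated during the interval times the
    rate of the subset computer held during that interval."""
    return sum(cet_i * r_i for cet_i, r_i in steps)

# Two steps: 120 s of CET on a small subset computer at $0.25/s,
# then 60 s on a larger one at $0.50/s.
cost = job_cost([(120.0, 0.25), (60.0, 0.50)])
```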
At Educational Testing Service, it was determined that the most used nonsharable resources are core storage and tape drives. Therefore, it was logical to partition the computer into subset computers based upon the program's requirement for core and tapes. Tapes are allocated in increments of one; core is allocated in 2K blocks. Hence, there are (# tapes × available core/2,000) possible partitions. For any given design partition, we would like to develop a rate which is proportional to the load which allocation places upon the resource pool. A single job may sometimes disable the entire computer. If, for example, a single program is using all of the available core storage, then the unused devices are not available to any other program and should be charged for. On the other hand, if a single job is using all available tapes, other jobs may still be processed and the charge should be proportionately less.

The design proportion is the mechanism by which the total machine is effectively partitioned into submachines based upon the resources allocated to the submachines. A design proportion can then be assigned to any job based upon the resources it requires. The design proportion should have at least the following properties.

• The design proportion should range between the limits 0 and 1.
• The design proportion should reflect the proportion of the total resources that are allocated to the job.
• The design proportion should reflect, in some fashion, the proportion of the total resources that are not allocated to the job, but which the job prevents other jobs from using.

The design proportion proposed for the billing algorithm is based upon the probability that when the job is resident, some other job can still be run. The definition of this parameter is as stated below.

The design proportion of a job is equal to the probability that when the job is resident, another job will be encountered such that there are insufficient resources remaining to run it.
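This counting definition reduces to simple arithmetic. A minimal sketch (the function names are ours; the formulas are equations (15) and (16) of the derivation that follows), which reproduces the 10K, four-tape example discussed below:

```python
def design_proportion(c_avail, c_used, t_avail, t_used):
    """Equation (15): one minus the fraction of writable programs
    (2K core increments x possible tape counts) that still fit in
    the resources left over once this job is resident."""
    total = (c_avail / 2) * (t_avail + 1)
    remaining = ((c_avail - c_used) / 2) * (t_avail - t_used + 1)
    return 1.0 - remaining / total

def step_price(d_p, cet_seconds, rate_per_cet_minute=35.0):
    """Equation (16): configuration rate times design proportion
    times CET expressed in minutes."""
    return rate_per_cet_minute * d_p * (cet_seconds / 60)

# A 10K, four-tape machine:
dp1 = design_proportion(10, 6, 4, 1)   # Job #1: 6K, 1 tape  -> 17/25
dp2 = design_proportion(10, 4, 4, 3)   # Job #2: 4K, 3 tapes -> 19/25
```

As the text observes, dp1 + dp2 = 36/25 > 1.0, and a job holding all of core gets a design proportion of 1.0 even if it mounts no tapes.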
Since OS/360 allocates core in 2K blocks, the number of ways that programs can be written to occupy available core is equal to:

N = C/2  (13)

where:

N = Number of ways that programs can be written
C = Core available in Kilo-bytes

In addition, if there are T tapes available on the hardware configuration, then there are T plus 1 different ways that programs can be written to utilize tapes. Therefore, the total number of ways that programs can be written to utilize core and tapes is given by the following equation:

N = (C/2)(T + 1)  (14)

where:

N = Total number of ways that programs can be written
C = Core available in Kilo-bytes
T = Number of tape drives available

The design proportion for a given job can be alternately defined as 1 minus "the probability that another job can be written to fit in the remaining resources." This is shown as follows:

D_p = 1.0 - [(C_A - C_u)/2](T_A - T_u + 1) / [(C_A/2)(T_A + 1)]  (15)

where:

D_p = Design proportion for the job
C_A = Core available in Kilo-bytes
C_u = Core used by the job
T_A = Tape drives available on the computer
T_u = Tape drives used by the job

It is important to note that the sum of the design proportions of all jobs resident at one time can be greater than 1.0. For example, consider the following two jobs resident in a 10K, four tape machine.

Job #1: 6K; 1 Tape; D_p = 17/25
Job #2: 4K; 3 Tapes; D_p = 19/25

The sum of their design proportions is 36/25. This seems odd at first, since the design proportion of a 10K, four tape job is 1.0. However, this can be shown to be a necessary and desirable property of the design proportion. To show that this is the case, it is necessary to consider the amount of work done and the total cost of the work for two or more jobs that use the total machine, compared to the cost of the same amount of work done by a single job that uses the total machine. This analysis will not be covered here.

The design proportion function as defined herein is a theoretical function. It is based solely upon the theoretical possibility of finding jobs to occupy available resources. Clearly, the theoretical probability and the actual probability may be somewhat different. Consequently, a design proportion could be designed based upon the actual probabilities experienced in a particular installation. Such a probability function would change as the nature of the program library changed. The design proportion function described above would change only as the configuration of the hardware changed. Either technique is acceptable, and the design proportion has the desired properties. That is, the design proportion increases as the resources used by the various jobs increase. However, it also reflects the resources that are denied to other jobs because of some one job's residency. Consider the fact that when all of core is used by a job, the tape drives are denied to other jobs. The design proportion in this case is 1.0, reflecting the fact that the job in effect has tied up all available resources even though they are not all used by the job itself.

While the design proportion function is simple, it has many desirable properties:

• It is continuous with respect to OS/360 allocation; all allocation partitions are available.
• It always moves in the right direction; that is, increasing the core requirement or tape requirement of the program results in an increased proportion.
• It results in a proportion which may be multiplied by the rate for the total configuration to produce a dollar cost for the subset computer.
• It is simple to compute.

If it were determined that the required recovery could be obtained if the rate for the computer were set at $35 per CET minute, the price of a step is determined by the equation:

P_step = ($35·D_p(core, tapes))(CET/60)  (16)

and the price of a job (with n steps) is:

P_job = Σ(i=1 to n) P_step_i  (17)

We have come full circle and returned to our "second generation" billing formula:

cost = rate·time

The key points in the development were:

• A multi-tasking computer system may be considered to be a collection of parallel processors by altering the time reference.
• The variation in time of a program run in a multi-programmed environment is due to involuntary wait time.
• The computed elapsed time may be multiplied by a rate assigned to the subset computer, and an equitable and reproducible cost developed.

IMPLEMENTATION OF THE JOB PRICING ALGORITHM

The Job Pricing Algorithm (JPA) is implemented under OS/360 Release 19.6. No changes to the operating system were required; a relatively minor modification was made to HASP in order to write accounting records to disk for inclusion in the accounting system. The basis of the JPA is the IBM machine accounting facility known as System Management Facilities (SMF).4 Billing under the JPA involves four steps:

1. Collect the job activity records at execution time. The records are produced by SMF and HASP and are written to a disk data set, SYS1.MANX.
2. Daily, the SYS1.MANX records are consolidated into job description records and converted to a fixed format.
3. The output from step (2) is used as input to a daily billing program which computes a cost for the jobs and prepares a detailed report of the day's activity by account number.
4. Monthly, the input to the daily program is consolidated and used as input to a monthly billing program which interfaces with the ETS accounting system.

The raw SMF data which is produced as a result of job execution contains much valuable information about system performance and computer workload which is of interest to computer center management.

One useful characteristic of the JPA is that costs are predictable. This enables a programmer or systems analyst to determine, in advance, the costs of running a particular job and, more importantly, to design his program in the most economical manner possible. In order to facilitate this process, a terminal oriented, interactive, cost estimating program has been developed. This program is written in BASIC and enables the programmer to input various parameters of his program (such as filesize, CPU requirements, blocking factors, memory requirements), and the cost estimate program produces the cost of the program being developed. Parameters may then be selectively altered and the effects determined.

CONCLUSION

The approach to user billing described in this paper has proved useful to management as well as users. Many improvements are possible, especially in the area of more accurate CET estimation. Hopefully, designers of operating systems will, in the future, include sophisticated statistics gathering routines as part of their product, thus providing reliable, accurate data for accounting.

APPENDIX

A method of deriving CET parameters

Let the wall clock time (W) be estimated as:

W' = C + A_T·X_T + A_D·X_D + A_M·X_M  (1)

where:

X_T = # of tape accesses
X_D = # of disk accesses
X_M = # of tape mounts
C = CPU time
A_T, A_D, A_M = Coefficients to be determined

We wish to determine the coefficients A_T, A_D, and A_M that will maximize the correlation between W', the computed elapsed time, and W, the actual elapsed time. Define the error e as:

e = (W - W')  (2)

The correlation coefficient, r, can be written in terms of the error variance σ_e² and the sample variance σ_w², so that:

r² = 1 - σ_e²/σ_w²  (4)

Then, in order to maximize r², it is sufficient to minimize σ_e², since σ_w² is a constant over a given sample.
Since (6) we have, (7) ue2= ~e2-n-l(~e)2 Finally, we have, ue2= ~[(Wi-Gi) -ATXT -ADXD-AMXMJ2-n-l[~(Wi-Gi) (8) 122 Fall Joint Computer Conference, 1972 (9) (10) (11) Solving the simultaneous equations ( 13), ( 14), and (15) for AT, AD, and AM should give values for the parameters that will maximize the correlation between the computed elapsed time and the actual elapsed time. The technique was applied to a sample month of data which was composed of 19401 job steps. The coefficients determined were, AT = 0.0251 seconds AD = 0.0474 seconds AM=81.2 seconds (12) Since all the partials must vanish, we have, When these coefficients were used in Equation (1) to determine the computed elapsed time, the correlation coefficient between the computed time and actual time over the 19401 steps was 0.825. When other coefficients were used, i.e. A T =0.015, AD =:,0.10, and A M=60.0, the correlation was only 0.71. Note: Card read, card punch, and print time constants were not computed in this fashion simply because there is insufficient data on job steps that use these devices as dedicated devices. However, as data become available in the future, the method could be applied to obtain good access times. (13) (14) REFERENCES 1 L L SELWIN Computer resource accounting in a time sharing environment Proceedings of the Fall Joint Computer Conference 1970 2 C R SYMONS A cost accounting formula for multi-programming computers The Computer Journal Vol 14 No 11971 3 J T HOOTMAN The pricing dilemma Datamation Vol 15 No 8 1969 4 IBM Corp IBM System/360 operating system: System management facilities Manual GC28-6712 1971 Facilities management-A marriage of porcupines by DAVID C. JUNG Quantum Science Corporation Palo Alto, California Fl\1-DEFINED tion. Moreover, this software succeeded in improving operator control and reducing operating costs. Consequently, EDS marketed these software packages to other Blue Cross/Blue Shield organizations. 
Outside of the medical insurance field, EDS has successfully pursued FM opportunities in life insurance, banking, and brokerage. The success of EDS, both in revenue/profit growth and in the stock market did not go unnoticed by others in the computer services industry. As a result, in the late 1960's and early 1970's many software firms and data service bureaus diversified into Fl\![-many, unfortunately, with no real capabilities. Since FM has proven itself as a viable business in the commercial market, over 50 independent FM firms have been formed. Moreover, at least 50 U.S. corporations with large, widespread computer facilities have spun off profit centers or separate corporations from their EDP operations. In many cases, these spinoffs offer customers FM as one of their computer services. F M definition often elusive There are almost as many definitions for Facilities Management (FM) as there are people trying to define it. Because FM can offer different levels of service, some variations in its definition are legitimate. FM was initiated by the Federal Government in the 1950's when the Atomic Energy Commission, the National Aeronautics and Space Administration (NASA), and the Department of Defense offered several EDP companies the opportunity to manage and operate some of their EDP installations. Previously, these companies had developed strong relationships with the various agencies through systems development and software contracts. FM definition expanding Nurtured by the Federal Government, FM has emerged as a legitimate computer service in the commercial EDP environment. Since FM has been offered in the commercial market, its definition has expanded to include additional services. In fact, customers are now beginning to expect Fl\1 vendors to have expertise that extends far beyond the day-to-day management of the data processing department. Electronic Data Systems (EDS), formed in the early 1960's, pioneered the FM concept in the commercial market. 
Shortly after its founding, EDS recognized the massive EDP changes required in the hospital and medical insurance industry as a result of Medicare and other coverages changed by the Social Security Administration. Accordingly, EDS secured several State Blue Cross/Blue Shield organizations as customers. While operating these installations, EDS developed standard software packages that met the recordkeeping requirements of the Social Security Administra- A n ideal concept of F M The ideal role for the FM vendor is to assist in all the tasks related to business information and the EDP operations in the firm. The Facilities Manager could assume full responsibility for the EDP operations, from acquiring the equipment and staffing the installationto distributing the information to the firm's operating areas. FM also has a vital role in defining business information requirements for top management. More specifically, the FM vendor should be able to define what information is required to operate the business, based on his industry experience. He should also be able to help establish cost parameters, based on an analysis of what other firms in the industry spend for EDP. Moreover, FM vendors will assist top management to cost optimize the array of business processing methods which may include manual or semi-automated 123 124 Fall Joint Computer Conference, 1972 only a single division or a major application may be taken over by an FM vendor. Merely taking one of many applications on a computer and performing this function on a service bureau or time-sharing basis, however, is not included as an FM contract. TOP MANAGEMENT DEFINES BUSINESS INFORMATION REOUIREMENTS ----. PER IODIC REV lEW FOR EDP I I I I I • • • • • • DEFINE INFORMATION REQUIRED TO OPERATE PERFORM SYSTEMS ANALYSIS SPECIFY OUTPUTSflNPUTS SET TIMING SET COST PARAMETERS SELECT INFORMATION PROCESSING METHODS I NON-EDP PROCESSING METHODS II HOW EDP USERS BENEFIT EDP OPERATIONS • • • • • I I. 
approaches as well as EDP. The FM vendor must be skilled at working with personnel in the customer's operation centers to improve ways in which the information is used and to effectively develop new methods for handling information as a business grows. (See Figure 1.)

[Figure 1-Business information and EDP in the ideal firm]

FM-Today it's EDP takeover

The real world of FM is quite different from the ideal version just described, and there will be a long and difficult period of transition to reach that level. Actual takeover of an existing EDP installation is now the prime determinant of whether an FM relationship exists. When the FM vendor takes over the EDP installation, it also takes over such EDP department tasks as (1) possession, maintenance, and operation of all EDP equipment and the payment of all rental fees or acquisition of equipment ownership, (2) hiring and training all EDP personnel, and (3) development of applications, performance of systems analysis, acquisition of new equipment and implementation of new applications.

Takeover may be partial

Many FM vendors are increasingly offering cafeteria-style services so that the customer can retain control over EDP activities that he can perform proficiently. In some cases where equipment is owned, the customer may retain title to the equipment. Salaries of EDP personnel may continue to be paid by the customer, but responsibility for management is assigned to the vendor. Also included as partial FM are takeovers of less than the client's total EDP activities.

FM benefits: A study in contrast

Some FM users benefit

The EDP activity of Southwestern Life, a $5 billion life insurance company in Dallas and a customer of EDS, typifies the satisfied FM user.
Southwestern Life's vice president, A. E. Wood, has stated, "We are very pleased with our agreement and the further we get into it, the more sure we are we did the right thing. We won't save an appreciable amount of money on operations, but the efficiency of operation will be improved in great measure. To do the same job internally would have taken us two to three times as long and we still would not have benefited from the continual upgrading we expect to see with EDS."

...and some do not

Disgruntled users exist too, but they are more difficult to find and in many cases are legally restricted from discussing their experiences. One manufacturing company told us, "We cannot talk to you; however, let me say that our experience was unfortunate, very unfortunate. They (the FM vendor) did not understand our business, did not understand the urgency of turnaround time on orders. We lost control of our orders and finished goods inventory for six weeks. As a result we lost many customers whom we are still trying to woo back after more than a year."

Two medium-sized banks had similar comments that indirectly revealed much about FM benefits: "We're not in any great difficulty. In fact, the EDP operations now are running well, but every time we want to make a change it costs us. I wish I had my own EDP manager back to give orders to."

Facilities Management-A Marriage of Porcupines

FM benefits are far ranging

Large users benefit least from FM

There is no question in our mind that there are many potential benefits for FM users. However, installation size is the primary yardstick for measuring the benefits users can obtain from FM; large users have the least to gain, for several reasons. In most cases, large users have already achieved the economy-of-scale benefits which FM and other computer services can bring to bear. Large users typically have computers operating more shifts during the day and do not allow the computers to sit idle.
In addition to higher utilization, larger users can more fully exploit the capabilities of applications and systems programmers because they can spread these skills over more CPU's than can smaller users. For these and other reasons, it is much more difficult to demonstrate to large users that an FM vendor can operate their EDP departments more efficiently and less expensively. The bulk of FM revenue will therefore come from the small- or medium-sized EDP user, defined as a user who has a 360/50 or smaller computer and is spending less than $1.5 million per year on EDP.

Improved EDP operations

The most tangible benefit FM can bring to an EDP user is improved control over the EDP operations and stabilization of the related operating costs. This conclusion is based on Quantum's field research, which has shown that installations in the small- to medium-sized range are out of control despite the refusal of managers to admit it. Lack of EDP planning, budgeting, and scheduling shows up in obvious ways, such as skyrocketing costs, as well as in obscure ways that are difficult to detect yet contribute significantly to higher EDP costs. These subtle inefficiencies include program reruns due to operator or programming errors, equipment downtime due to sloppy programming documentation, and idle time due to poorly scheduled EDP workloads. Because they are obscure and often hidden by EDP departments, it is difficult for managements in small and medium installations to detect and correct these problems. On the other hand, an FM vendor can often quickly identify these problems and offer corrective remedies, because his personnel are trained to uncover these inefficiencies and his profits depend on their correction.

Smaller investments to upgrade EDP

FM can also benefit end users by reducing proposed future increases in EDP costs.
Small- and medium-sized users that have a single computer must eventually face the problem of increasing their equipment capacity to meet the requirements of revenue growth and expanded applications. This often means a significant increment in rental and other support costs. A 360/30 user, for example, who is spending $13,000 a year on equipment may have to jump to a 360/40 or a 370/135, costing $18,000-$22,000 per year, to achieve the required increase in computing power. Support costs will also increase, in many cases more quickly. If a user is acquiring a 360/40, for example, he probably will have to use an Operating System (OS) to achieve efficient machine performance. Many users today will upgrade their software as shown in Table A.

TABLE A-User OS Plans 1971-72

                                          360/40   360/50   360/65
HAVE OS NOW                                 10%      46%      69%
HAVE NO OS NOW, BUT PLAN TO
  INSTALL IN 1971-72                        48       38        8
HAVE NO OS NOW AND NO PLANS                 42       16       23
TOTAL                                      100%     100%     100%

An OS installation requires a higher level of programming talent than is currently required to run a DOS 360 system. Because the user does not need the full-time services of these system programmers, FM offers an economical solution whereby system programmers are shared among multiple users.

Elimination of EDP personnel problems

One of the most serious problems users encounter in managing EDP operations is personnel management. The computer has acquired an aura of mysticism that has tended to insulate the EDP department from normal corporate rules and procedures. Many programmers expect to receive special treatment, maintain different dress and appearance, and obtain higher pay. High turnover among EDP personnel, often two to three times the norm for other company operations, further aggravates EDP personnel problems. Through subcontracting, FM vendors can separate EDP personnel from the corporation and thus alleviate this situation for management.
Eased conversion to current generation software

Over one-third of all users are locked into using third generation computers in the emulation mode, where second generation language programs are run on third generation computers. Although software conversion is a difficult and expensive task for users, the FM vendor who has an industry-oriented approach usually has a standard package already available that the customer can use. In several installations, FM vendors have simplified conversion, thus providing their users with the economies of third generation computers.

Improved selection of new equipment and services

Users of all sizes continually need to evaluate new equipment and new service offerings, including whether to buy outside or do in-house development. Again, the large user holds an advantage because his size permits him to invest in a technical staff dedicated to evaluations. Installations spending more than $1.5 million annually for EDP usually have one or more full-time persons assigned to these functions. In smaller installations there is no dedicated staff, and pro-tem evaluation committees are formed when required. Table B shows the relationship between the size of EDP expenditures and the share of those expenditures allocated to planning, auditing, and technical evaluations.

TABLE B-User Expenditures on EDP Planning

                     ANNUAL EDP            PERCENT OF EDP COSTS
                     EXPENDITURES          SPENT ON PLANNING, ETC.
LARGE COMPANIES      >$1.5 MILLION         1-5%
MEDIUM COMPANIES     $300K-1.5 MILLION     0-2%
SMALL COMPANIES      <$300K                0-1%

In this area of EDP planning, FM can benefit users in two ways. First, FM vendors can and do take over this responsibility; second, the effective cost to any single user is less because it is spread over multiple users.

Other operating benefits

One potential benefit from FM relates to new application development.
Typically, 60 percent or more of a firm's EDP expenditures are tied to administrative applications, such as payroll, general accounting, and accounts payable. Because of the relatively high saturation in the administrative area, firms are now extending the use of EDP into operational areas such as production control and distribution management. However, many of these firms lack the qualified EDP professionals and line managers necessary to develop and implement applications in non-administrative areas. Thus, they have become receptive to considering alternatives, including FM.

Major EDP cost savings

Earlier in this chapter, the stabilization of EDP costs was discussed. Now we will focus on the major savings that FM can provide through the actual reduction of EDP costs. This potential FM benefit is too often the major theme of an FM vendor's sales pitch. Consequently, its emotional appeal often clouds the rational evaluation that should precede an FM contract. If the FM contract is well written and does not restrict either party, the FM vendor can apply his economies of scale and capabilities for improving EDP operations and should be able to show a direct cost savings for the customer. However, these "savings" may be needed to offset costs of software conversion or other contingencies and thus may not really be available to the customer in the early contract years.

Long range benefits-Better information

Improved operating control and profits through better information-this is the major long-range benefit from FM. While this contribution is not unique to FM vendors, few EDP users today have been able to develop a close relationship between company operations and EDP. Companies such as Weyerhaeuser and American Airlines-generally recognized as leading edge users-are few in number, and many try to emulate their achievements in integrating EDP into company operations.
EDP expenditures, however, are seldom judged on their contribution toward solving basic company problems and increasing revenues and profits. Many apparently well-run EDP departments would find it difficult to justify their existence in these terms. The situation is changing, however. An indication of this new attitude is the increased status of the top EDP executive in large firms. The top EDP executive is now a corporate officer in over 300 of the Fortune 500 firms. While titles often mean little, the change to Vice President or Director of Business Information from Director of EDP Operations suggests that top management in many companies has considered and faced the problem. In addition to new management titles, continuing penetration of EDP functions into operating areas is increasingly evident.

FM-A permanent answer for users

FM should not be treated as an interim first-aid treatment for EDP. There are several good reasons for continuing the FM relationship indefinitely.

• Individual users cannot duplicate the economies of scale that FM vendors can achieve. Standard software packages, for example, require constant updating and support, and new equipment evaluations are constantly required if lowest cost EDP is to be maintained.

• Top management would have to become involved in EDP management if operations were brought back in-house. This involvement would take time from selling and other revenue producing activities. A rational top management tries to minimize the share of its time spent on cost-management activities.

• By disengaging from the FM contract, the customer risks losing control over his EDP again while receiving no obvious compensation for this risk. Even if the customer believes he is being overcharged by the FM vendor, there is no real guarantee the excess profits can be converted to savings to the customer.
For these reasons an FM relationship should normally be considered permanent rather than temporary.

MARKET STRUCTURE AND FORECAST

Current FM market

Total FM market size and recent growth

The 1971 market for FM services totals $645 million with 337 contracts. However, 45 percent, or $291 million, was captive and not available to independents. Captive FM contracts are defined as being solidly in the possession of the vendor because of other than competitive considerations. Typically, captive contracts are negotiated between a large firm and its EDP spinoff subsidiary. The remaining market, totaling $354 million or 55 percent, is available to all competitors. Available does not necessarily mean the contract is available for competition immediately, since most contracts are signed for a term of two to five years. Captive and available 1971 FM revenues and contracts are shown in Figure 2.

[Figure 2-1971 FM market: total $645 million FM revenues; total 337 FM contracts]

Industry analysis

Discrete and Process Manufacturing are the largest industrial sectors using FM services and account for over 44 percent of total FM revenues. However, most EDP spinoffs have occurred in manufacturing, and much of these FM revenues are therefore captive and not available to independent competitors. After deleting the captive portion, the two manufacturing sectors account for only 12 percent of the available 1971 FM market of $354 million. Manufacturing has failed to develop into a major available FM market primarily because there is a general absence of common business and accounting procedures from company to company, thus providing no basis for leveraging standard software. This is true even within manufacturing subsectors producing very common products. In the medical insurance sector, however, Federal Medicare regulations enforce a common method for reporting claims and related insurance data, thus providing a good basis for leveraging standard software. The Medical Sector accounts for 25 percent of available FM revenues.
The Medical Sector includes medical health insurance companies (Blue Cross/Blue Shield) and hospitals. This sector was the first major commercial FM market. FM continues to be attractive in this sector because it permits rapid upgrading of EDP to meet the new Medicare reporting procedures and relieves the problem of low EDP salary scales. The largest industry sector in the available FM market, the Federal Government, accounts for over 34 percent of available revenues. All Federal Government contracts are awarded on the basis of competitive bids. Most Federal Government FM contracts still tend to be purely subcontracting of EDP operations rather than the total business information management which is becoming more common in commercial markets. The Finance Sector currently accounts for 22 percent of available FM revenues. Banks and insurance companies are the major markets within the Finance Sector, which also includes brokerage firms, finance companies and credit unions.

Type of performance

FM vendors who initially take over on-site operation of a customer's computer strive for economies of scale. This has created a trend whereby the FM vendor eliminates the need for the customer's computer by processing data through NIS (timesharing) or service bureaus. NIS now accounts for 5 percent of total FM revenues. Service bureau processing, which requires the physical transport of data from the client's location to the vendor's computer installation, accounts for 2 percent of total FM revenues. In Figure 3, which depicts the FM market by type of performance, combination refers to the use of two or more of the above services to carry out the FM contract.

[Figure 3-1971 FM market by type of performance: total FM market $645 million; available FM market $354 million]

Types of vendors

Types of vendors that perform FM contracts are described below:
• Independents, who accounted for 67 percent of total FM revenues in 1971, were startups in the computer service industry or vendors who have graduated from the ranks of spinoffs.

• Spinoffs are potentially strongest in their "home" industries; however, competitive pressures may limit market penetration there. An oil company spinoff, for example, would have a difficult time selling its seismic services to another oil company because of the high value placed on oil exploration and related information.

• Computer manufacturers are increasingly offering FM services. Honeywell has several FM contracts and will be joined by Univac and CDC, who have announced intentions to market FM services. The RCA Services Division should find FM opportunities among RCA customers. IBM has several ways in which it can enter FM, and will show an expanding profile.

Contract Values

The average FM contract in 1971 is valued at slightly less than $2 million. This is the equivalent of a user with two or three computers, at least one of them a 360/50. However, this is based on the total market analysis, which distorts the averages for captive and available FM markets. An analysis of available and captive contracts shows that the average value of an available contract drops to $1.24 million, which would be equivalent to a user with two 360/40's. Analysis of captive contracts, however, shows that the average value is significantly higher, at $5.6 million per year. Most of these contracts are spinoffs from large industrial firms who have centralized computer installations or multiple installations spread throughout the country.

TABLE C-Major Vendors

TOTAL
Rank  Company                                    FM Revenues*
 1    Electronic Data Systems Corp.                  95.7
 2    McDonnell Douglas Automation Co.               89.4
 3    Boeing Computer Services Inc.                  82.2
 4    University Computing Company                   42.2
 5    Computer Sciences Corp.                        26.6
 6    Grumman Data Systems                           25.6
 7    Computing and Software, Inc.                   14.7
 8    System Development Corp.                       14.0
 9    Martin Marietta Data Systems                   11.5
10    Westinghouse Tele-Computer Systems Corp.       11.0
11    MISCO                                          10.0
12    National Sharedata Corp.                        7.5
13    A. O. Smith Corp.'s Data Systems Div.           5.4
14    Executive Computer Systems, Inc.                5.2
15    Unionamerica Computer Corp.                     5.0
16    Cambridge Computer Corp.                        4.4
17    Greyhound Computer Corp.                        4.3
18    Tracor Computing Corp.                          4.0
19    Mentor Corp.                                    4.0
20    Programming Methods, Inc. (PMI)                 3.8

AVAILABLE
Rank  Company                                    FM Revenues*
 1    Electronic Data Systems Corp.                  95.7
 2    Computer Sciences Corp.                        26.6
 3    Boeing Computer Services Inc.                  22.2
 4    Computing and Software, Inc.                   14.7
 5    System Development Corp.                       12.8
 6    University Computing Company                   10.1
 7    National Sharedata Corp.                        7.5
 8    McDonnell Douglas Automation Co.                7.1
 9    Executive Computer Systems, Inc.                5.2
10    Cambridge Computer Corp.                        4.4
11    Greyhound Computer Corp.                        4.3
12    Tracor Computing Corp.                          4.0
13    Programming Methods, Inc. (PMI)                 3.8
14    MISCO                                           3.0
15    Allen Babock Computing Corp.                    2.9
16    RAAM Information Services Corp.                 2.5
17    Data Facilities Management Inc.                 2.3
18    Bradford Computer and Systems, Inc.             2.0
19    Computer Usage Co., Inc.                        2.0
20    Martin Marietta Data Systems                    1.5

* Annual rate in 1971 in millions of dollars

A more revealing analysis of contract values is shown in Table D. Here total and available contracts are distributed according to contract value. From this analysis, it is clear that well over one-third of contracts are valued at $300,000 or less per year. A typical computer installation of this value would include a 360/30, 360/20, or 360/25, or equivalent computers in other manufacturers' lines.

TABLE D-FM Contract Analysis by Value

CONTRACT VALUE       ALL CONTRACTS        AVAILABLE CONTRACTS
PER YEAR ($M)          #        %            #        %
0.1-0.3               129      38.3         122      42.8
0.31-0.5               40      11.9          30      10.5
0.51-0.8               39      11.6          34      11.9
>0.8                  129      38.2          99      34.8
TOTAL                 337     100.0         285     100.0

AVERAGE CONTRACT VALUE: $1.91 MILLION
285 AVAILABLE CONTRACTS - AVERAGE VALUE: $1.24 MILLION
52 CAPTIVE CONTRACTS - AVERAGE VALUE: $5.61 MILLION
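The contract averages quoted in Table D follow directly from the market totals given earlier: $645 million across 337 contracts, of which $354 million and 285 contracts are available and the remainder captive. A quick check of the arithmetic (our own sketch, not from the paper; the small difference from the quoted $5.61 million captive average comes from rounding the captive revenue to $291 million):

```python
# FM contract averages, 1971 (figures from the text above, in $ millions)
total_revenue = 645.0
total_contracts = 337
avail_revenue = 354.0
avail_contracts = 285

capt_revenue = total_revenue - avail_revenue      # 291, the captive portion
capt_contracts = total_contracts - avail_contracts  # 52 captive contracts

avg_all = total_revenue / total_contracts    # ~1.91, matching Table D
avg_avail = avail_revenue / avail_contracts  # ~1.24
avg_capt = capt_revenue / capt_contracts     # ~5.60 (text quotes 5.61 from unrounded data)

print(f"all: ${avg_all:.2f}M  available: ${avg_avail:.2f}M  captive: ${avg_capt:.2f}M")
```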
There are in total 18 contracts, captive or available, valued at more than $5 million per year. These are all spinoff-parent or Federal Government contracts.

Projected 1977 FM market

FM market potential

The ultimate U.S. market potential for FM is the sum of EDP expenditures for all users. By 1977 EDP expenditures for equipment, salaries and services will total $29.5 billion spread among 52.4 thousand users. Since FM benefits are not available to all users, five criteria were developed to help identify the industry sectors which could most benefit from FM and would be most amenable to accepting FM as an alternative approach for EDP. The five criteria are:

• Homogeneous Business Methods. Industries with similar information requirements from company to company are ideal situations for FM. These might be the coding of business records, such as the MICR codes used in banks, or price standards, e.g., tariffs used in motor freight.

• Similar Products or Services. The more similar the products and services sold by companies within a given industry sector, the more likely they will have common business procedures and, therefore, EDP systems. In the brokerage industry, for example, there is little differentiation in the service provided.

• Regulation by Government Agencies. Industries that are regulated directly by State/Federal agencies, or indirectly through strong trade associations, also become good candidates for FM because of the enforced standards for pricing, operating procedures, account books, or other factors that impact EDP operations. Health insurance firms, utilities of all types, and brokerage firms are typical of these highly regulated industries.

• Prior Evidence of Subcontracting Services. Prior company or industry practices which indicate that subcontracting of vital services is an accepted business procedure also help pinpoint industries with high FM potential. Correspondent relationships between smaller banks and larger city banks, historically a part of the banking industry, are an example.

• Special EDP Operating Problems. Several industries have special EDP operating problems. These may result from historically low pay scales for EDP personnel which cannot easily be changed, such as in state and local government, or from a pending major conversion in basic accounting approaches imposed by an outside force, resulting in major EDP conversions, as was the case in health insurance when Medicare and state health programs were implemented in the 1960's.

The above criteria were applied against major industry sectors. As a result, 16 sectors were identified and ranked according to their suitability for FM. (See Table E.) On the basis of this analysis, the industry sectors most likely to benefit from FM include banking (mainly commercial), insurance, state and local governments, the Federal Government, motor freight, brokerage, and medical (hospitals and health insurance firms). Of these, the Federal Government and medical sectors are already established FM markets and will grow more slowly as a result.

TABLE E-High Growth Potential FM Markets

[A matrix rating each industry sector against the five selection criteria (homogeneous business methods; similar products or services; regulation by government agencies; prior evidence of subcontracting services; special EDP operating problems). The sectors, in rank order: Medical-Health Insurance; Federal Government; Banking-Commercial; Insurance; State and Local Government; Motor Freight; Brokerage; Utilities-all except telephone; Medical-Hospitals, Clinics; Regional/Interstate Airlines; Mutual Fund Accounting; Banking-Savings; Small & Medium Aerospace Cos.; Education-Elementary and Secondary; Education-College; Construction and Mining. The individual criterion marks are not legible in this copy.]

TABLE F-Total FM Revenue Growth 1971-1977

                                     1971                  1977           Compound Annual
                               Revenues  Contracts   Revenues  Contracts  Growth Rate of
                               $ Millions    #       $ Millions    #      FM Revenues (%)
Medical and Other                  104      89          446       635           27
Finance                             88      89          590       590           37
Discrete Manufacturing             146      42          389       255           18
Process Manufacturing              144      34          344       200           16
Government-State and Local           7      15          350       350           92
Government-Federal                 122      36          236       185           12
Utilities and Transportation        20      13          318       400           58
Wholesale and Retail Trade          13      17          160       320           52
EDP Service Bureaus and
  NIS Operators                      1       2           23        60           69
Total                              645     337         2856      2995           28

TABLE G-Available FM Revenue Growth 1971-1977

                                     1971                  1977           Compound Annual
                               Revenues  Contracts   Revenues  Contracts  Growth Rate of
                               $ Millions    #       $ Millions    #      FM Revenues (%)
Government-State and Local           7      15          350      350            92
Finance                             77      76          503      480-510        37
Medical and Other                   88      81          280      380-400        21
Government-Federal                 122      36          236      185            12
Wholesale and Retail Trade          12      16          133      250-270        49
Utilities and Transportation         3       6          196      235-245       137
Discrete Manufacturing              17      32          135       90-95         41
Process Manufacturing               27      22          129       70-75         30
EDP Service Bureaus and
  NIS Operators                      1       1            6       12-14         35
Total                              354     285         1968     2052-2144       33

Projected FM revenues, 1971-1977

Actual realized FM revenues will be $2.86 billion in 1977. This is a 28 percent annual growth from $645 million in 1971.
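The 28 percent figure, and the growth-rate columns of Tables F and G, are ordinary compound annual growth rates over the six years from 1971 to 1977. The arithmetic can be checked as follows (our own sketch, not from the paper):

```python
# Compound annual growth rate (CAGR) over the six years of growth, 1971 -> 1977.
def cagr(start, end, years=6):
    """Annual growth rate that turns `start` into `end` after `years` years."""
    return (end / start) ** (1.0 / years) - 1.0

# Total FM revenues: $645 million (1971) -> $2,856 million (1977)
total = cagr(645.0, 2856.0)        # ~0.28, i.e., 28 percent per year
# Available FM revenues: $354 million (1971) -> $1,968 million (1977)
available = cagr(354.0, 1968.0)    # ~0.33, matching Table G's total row

print(f"total: {total:.1%}  available: {available:.1%}")
```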
Total contracts will increase to 2,995 in 1977 from 337 in 1971, with an average contract value of slightly less than $1 million. The available portion of the 1977 FM market will total $1.97 billion, up from $354 million in 1971, a growth of over 500 percent. Related contracts will number between 2,000 and 2,200 in 1977, up from 285 in 1971. See Tables F and G.

Who are the vendors?

Independent vendors will retain the same share of the total FM market in 1977 as in 1971. Computer manufacturers will increase their penetration in the FM business, primarily to protect installations that are threatened by competitive equipment. See Figure 4.

[Figure 4-1971 FM markets by type of vendor]

HOW TO EVALUATE FM PROPOSALS

Know what benefits are desired

For the purposes of reading this segment, assume you are an EDP user considering a proposed FM contract. Assume further that by reading the previous material, you have concluded that, indeed, FM can benefit your company, both in terms of improved EDP operations and in improved information flow to the operating departments. But now you must get specific about the vendor, his proposal, and finally the detailed provisions of the contract he wishes you to sign. In this chapter we will provide the guidelines you can use to make these evaluations. Before digging into the evaluation guidelines, you should first articulate just what you, the management, and the current EDP department are expecting in the way of benefits. By doing this, you can compare your expectations as a customer with what the FM vendor is willing and able to provide. Have you had a poor experience with EDP? Is your primary objective to get out of the operating problems of an EDP department?
If this is the case, then don't expect immediate improvements in the information you are receiving from EDP and the speed with which it flows to your operating departments-even if you have been told by the FM vendor this is to be the case. On the other hand, if your real goal is speeding order entry and decreasing finished goods inventory by a factor of three without a major investment in new applications software, then these are the points an FM vendor should be addressing in his proposal, and you will want to evaluate him on this basis. Assuming you and the vendor have agreed on a set of expectations, let us look at the guidelines you can use in evaluating the vendor, his proposal and the FM contract.

Evaluating vendor and his proposal

Vendor

Three potential problem areas should be explored to accurately appraise an FM vendor. These are: financial stability, past FM record, and level of industry expertise.

• Financial Stability
Financial stability of the vendor is a critical issue to pin down, for if he is in difficulty, such as being short of working capital, your information flow from EDP could be stopped, leaving you in an extremely serious and vulnerable position.

• Previous FM Performance Record
Next to the financial record, the vendor's previous performance in FM as well as in other data services can be a good guide to his future performance on your contract. If the vendor has done well in past contracts, he no doubt will use his past work as a "showcase" and invite your visit to current sites he has under contract. However, the absence of these referrals should not be taken negatively, due to the possible proprietary nature of current FM work.

• Industry Expertise
Full knowledge of your industry and its detailed operating problems should be demonstrated completely by the vendor. This should include full appreciation for the operating parameters most sensitive for profitability in your industry and company.
The vendor should be staffed with personnel who have had top management experience in the specific industry and people who have had experience in other specific industries. Vendors become more credible if they can show existing customers who are pleased with the vendor's services and who will testify to his ability to solve specific industry-oriented EDP problems. Proposal Responsiveness The proposal should be addressed specifically to the objectives that you and the vendor agreed were the purpose of considering the FM contract. The vendor should .detail exactly how he will improve your EDP operation or provide faster or improved information to serve your operating areas. He should suggest where savings can be made or what specific actions he can take that are not now being taken to effect these savings. Work Schedule for Information Reports While it is not desirable to pin the vendor down to an operating schedule for theEDP department-for it is exactly this flexibility that allows him to achieve economies of scale-he should, however, be very specific about the schedule for delivery of required reports. If you have a dataentry problem, for example, then the proposal should indicate that the computer will be available when you need to enter data. The work schedule should fully reflect as closely as possible the current way in which you do business and any change should be fully justified in terms of how it can improve the operation of the whole company, not just the operation of the EDP department. Equipment Transfer Details of equipment ownership and any trans- Facilities Management-A Marriage of Porcupines • • • • fers to the vendor should be specified. Responsibility for rental or lease payments should also be detailed. Responsibility for maintenance not built into equipment rentals or leases should also be delineated. Cost Schedule Contract pricing is the most critical cost item. 
A fixed-fee contract is advantageous to both parties if the customer's business volume is expected to continue at current levels or grow. If business drops, a fixed fee could hurt the customer. Thus, the fairest pricing formula is composed of two components: a fixed fee to cover basic operating costs, and a variable fee based on revenue, number of orders, or some other easily identifiable variable sensitive to business volume. Some contracts also include a cost-of-living escalator. The proposed cost schedule should also take into account equipment payments, wages, salary schedules, travel expenses, overhead to be paid to the customer (if the vendor occupies space in the customer's facilities) and all other expenses that might occur during the course of the contract. If special software programming or documentation is to be performed for the customer, the hourly rates to be charged should be identified in the contract.

Vendor Liaison

A good proposal recognizes the need for continuing contact between top management and the FM vendor. Close liaison is especially required in the early days of the contract, but also throughout its life. The cost for this liaison person should be borne by the customer, but the responsibilities and the functions that will be expected of him should be clearly stated in the proposal.

Personnel Transfer

Since all or most of the personnel in EDP operations will be transferred to the FM vendor, you must make sure that this will be an orderly transfer. Several questions arise in almost every contract situation and should be covered in the proposal: Does the proposal anticipate the possible personnel management problems that might come about? If all personnel are not being transferred and some may be terminated, how will this be handled? Are FM vendor personnel policies consistent with yours? Has the vendor taken into account the possibility that large numbers of persons may not wish to join the vendor and may leave?
Failure to Perform

While it is most desirable to emphasize the positive aspects of an FM relationship, the negative possibilities should be explored to the satisfaction of both parties. Most of these revolve around failure to perform. If the vendor fails to perform his part of the contract, you should be able to terminate the contract. The proposal should detail how this termination can be carried out. Is the vendor, for example, obligated to permit you to recover your original status and reinstall your in-house computer? What are the penalties the vendor will incur? What is the extent of his liability to replace lost revenue and lost profits that you may suffer as a result of his failure to perform? How will these lost revenues and lost profits be identified and measured? That's the vendor's side, but you also have obligations as a customer. If your input data is not made available according to schedule, for example, what is your possible exposure in terms of late reports?

Software

It is important to pin down ownership of existing software when an FM contract is signed, as well as ownership of any software that is subsequently developed. Proprietary as well as non-proprietary software packages should be identified and specified in the contract so that competitors may not benefit unfairly if the FM vendor uses the packages with other clients in your industry. Software backup and full documentation procedures should also be identified. This is one area in which FM may be a great help. If your installation is typical, your backup and documentation procedures are weak, and an FM vendor, using professional approaches, should be able to improve your disaster recovery potential.

FM contract: Marriage of porcupines

The FM contract should incorporate all the above issues, plus any others which are uniquely critical, in an organized format for signing.
The FM contract is as legal a document as any other the company might enter into; therefore, the customer's legal staff should carefully review it in advance of any signing. The body of a typical FM contract covers the general issues which have been discussed above and which apply in most FM contract situations. Attachments are used to detail specific information about the customer that is proprietary. In an FM contract, attachments discuss the service and time schedule, equipment ownership and responsibility, the cost schedule, and any special issues. One of the most striking features is the general absence of detailed legal jargon. This is typical of most FM contracts and is a result of two factors. First, the two parties have attempted to communicate with each other in the language that both understand. Second, the wording reflects an aura of trust between the two parties. In a service subcontracting relationship the customer must implicitly trust the vendor. Without this mutual trust, it would be foolish for a vendor or a customer to even consider a proposal.

BIBLIOGRAPHY

EDP productivity at 50%? Administrative Management June 1971 pp 67-67
Working Paper #177 Graduate School of Business Stanford University
P J McGOVERN Interest in facilities management-Whatever it is-Blossoms EDP Industry Report April 30 1971
D M PARNELL JR A new concept: EDP facilities management Administrative Management September 1970 pp 20-24
I POLISKI Facilities management-Cure-all for DP headaches?
Business Automation March 1 1971 pp 27-34
A RICHMAN Oklahoma bank opts for FM Bank Systems and Equipment February 1970 pp 18-32
EDP-What top management expects Banking April 1972 pp 18-32
L W SMALL Special report on bank automation Banking April 1971
Facilities management users not sure they're using-If they are Datamation January 1 1971 p 54
When EDP goes back to the experts Business Week October 18 1969 pp 114-116
KUTTNER et al Is facilities management solution to EDP problems? The National Underwriter January 23 1971
H C LUCAS JR The user data processing interface
Quantum Science Corporation Reports: Dedicated information services July 1970; Facilities management-How much of a gamble? November 1971; Federal information services October 1971; Network information services April 1971

Automated map reading and analysis by computer

by R. H. COFER and J. T. TOU
University of Florida
Gainesville, Florida

INTRODUCTION

A great deal of attention is presently being given to the design of computer programs to recognize and describe two-dimensional pictorial symbology. This symbology may arise from natural sources such as scenery or from more conventionalized sources such as text or mathematical notation. The standardized graphics used in specification of topographic maps also form a conventionalized, two-dimensional class of symbology. This paper will discuss the automated perception of the pictorial symbology to be found within topographic maps. Although conventionalized, this symbology is used in description of natural terrain, and therefore has many of the characteristics of more complex scenery such as is found within aerial photography. Thus it is anticipated that the techniques involved may be applied to a broader class of symbology with equal effectiveness.

The overall hardware system is illustrated by Figure 1 (the overall hardware system). A map region is scanned optically and a digitized version of the region is fed into the memory of a computer. The computer perceives in this digitized data the pictorial symbology portrayed and produces a structured output description. This description may then be used as direct input to cartographic information retrieval, editing, land-use or analysis programs.

THE PROGRAM

Many results of an extensive research into the perception of pictorial symbology have been incorporated into a computer program which recognizes a variety of map symbology under normal conditions of overlap and breakage. The program is called MAPPS since it performs Machine Automated Perception of Pictorial Symbology. MAPPS is written in the PL/I programming language, heavily utilizing the language's list and recursive facilities. It is operated on the University of Florida's IBM 360/65 computer utilizing less than 100K words of direct storage.

Although the set of possible map symbols is quite large, those used in modern topographic maps form the three classes shown in Figure 2 (classes of map symbology: point, lineal, and area). Point symbology is used to represent those map features characterized by specific spatial point locations. This class of symbology is normally utilized to represent cultural artifacts such as buildings, markers, buoys, etc. Lineal symbology is used to mark those features possessing freedom of curvature. This class is normally utilized to represent divisional boundaries or routes of transportation. Typical examples of lineal symbology include roads, railways, and terrain contours as well as various boundary lines. Area symbology is used to represent those features possessing homogeneity over some spatial region. It is normally composed of repeated instances of point symbology or uniform shading of the region. Examples include swamps, orchards, and rivers.

As its extension to the recognition of area symbology is rather straightforward, MAPPS has been designed to recognize the point and lineal forms of symbology only. Further, it has been designed to recognize only that subset of point and lineal symbology which possesses topographically fixed line structures. This restriction is of a minor nature since essentially all map symbology is, or may be easily converted to be, of this form. Even given these restrictions, MAPPS has immediate practical utility since many applications of map reading require only partial recognition of the symbology of a given map. As an example, the survey of cultural artifacts can be largely limited to the recognition of quite restricted forms of point and lineal symbology.

Color information provides strong perceptual clues in maps. On standard topographic maps, for instance, blue represents hydrographic features, brown represents terrain features, and black represents cultural features. Even so, utilization of color clues is not incorporated into MAPPS. This has been done to provide a more stringent test of other more fundamental techniques of recognition. It is obvious, however, that utilization of color descriptors can be easily incorporated, and will result in increased speed of execution and improved accuracy of recognition.

MAPPS is divided into three systems: picture acquisition, line extraction, and perception. In brief, the picture acquisition system inputs regions of the map into the computer, the line extraction system constructs data entities for each elementary line and node present in the input, and the perception system conducts the recognition of selected symbology. A flow chart of MAPPS is shown in Figure 3 (MAPPS flow chart: gray-level picture array to binary picture array, to processed list of lines and nodes, to recognized map symbology produced in response to a user query for desired symbology).

PICTURE ACQUISITION

The picture acquisition system PIDAC is a hardware system developed by the authors to perform precision scanning of 35 mm transparencies within a research environment.1 It consists of a flying-spot scanner, minicomputer, disk memory, storage display, and incremental tape unit. In operation, PIDAC scans a transparency, measures the optical density of the transparency at each point, stores the results in digital form, performs limited preprocessing actions, and generates a digital magnetic record of the acquired data for use by the IBM 360/65 computer.

For each transparency, PIDAC scans a raster of 800 rows by 1024 columns; a square inch in the film plane thus corresponds to approximately 10^6 points. At each raster point PIDAC constructs a 3-bit number corresponding to the optical density at that point. As the original map may be considered to be black and white, a preprocessing routine, operating locally, dynamically reduces the 3-bit code to a 1-bit code in a near optimal fashion. This action is accomplished by a routine called COMPACT since it compacts storage requirements as well. The result is an array whose elements correspond to the digitized points of the map region. This compacted array is then input to the line extraction system.
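The 3-bit-to-1-bit reduction performed by COMPACT can be pictured with a small sketch (Python rather than the paper's PL/I; the fixed threshold here is an assumption standing in for COMPACT's local, dynamic threshold selection, which the paper does not detail):

```python
def compact(gray):
    """Reduce a 3-bit gray-level array (densities 0-7) to a 1-bit
    black/white array.  A fixed threshold is used in this sketch; the
    actual COMPACT routine chooses its threshold locally and
    dynamically."""
    THRESHOLD = 3                      # assumed midpoint split
    return [[1 if v > THRESHOLD else 0 for v in row] for row in gray]

scan = [[0, 1, 7, 6],
        [2, 5, 7, 1]]
binary = compact(scan)                 # [[0, 0, 1, 1], [0, 1, 1, 0]]
```

The resulting binary array is the form handed to the line extraction system.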
LINE EXTRACTION

As shown by Figure 3, the compacted array is input to the line extraction system. The function of this system is the extraction of each of the elementary line segments represented in the map, so that the program can conduct perception in terms of line segments and nodes rather than having to deal with the more numerous set of digitized points. The system of line extraction, as developed, does not destroy or significantly distort the basic information contained within the map. This is necessary since significant degradation makes later perception more difficult or impossible.

The action of the line extraction system is illustrated in Figure 4 (stages CLEAN, MEDIAL AXIS DETERMINATION, 4-POINT LOOP REMOVAL, LIST GENERATION, and FINAL CLEAN UP; the figure also shows a sample line entry, running between nodes 1 and 2 at positions (38,44) and (57,45) with line length 49 and its grid-intersect coding, and a sample node entry listing its position and three adjacent lines). First the map is cleared of all small holes and specks likely to have resulted from noise. Then a connected medial axis network of points is obtained for each black region of the map. This first approximation to a desired line network is converted to a true line network by an algorithm called 4-point loop removal. Operating on a more global basis, later algorithms remove spurious lines and nodes, locate possible corner nodes, and convert to a more suitable list processing form of data base.

For each line and node, a PL/I based structure is produced. Each structure contains attributes and pointers to adjacent and nearby data entities. The structure for a line entity contains the attributes of width, length, and grid-intersect coding, as well as pointers to adjacent nodes. The structure for each node entity contains the attribute of position and pointers to adjacent lines and nearby nodes.
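The line and node entries might be pictured as linked records along the following lines (a Python sketch of the PL/I based structures; the field values echo the sample entries of Figure 4, while the width value and field names are assumptions for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: int
    position: tuple                    # (row, column) in the raster
    adjacent_lines: list = field(default_factory=list)
    nearby_nodes: list = field(default_factory=list)

@dataclass
class Line:
    name: int
    width: int
    length: int
    grid_intersect_coding: str         # chain code describing the line's shape
    end_nodes: list = field(default_factory=list)

# Sample entries: a line of length 49 between nodes at (38, 44) and (57, 45).
n1, n2 = Node(1, (38, 44)), Node(2, (57, 45))
line = Line(1, width=1, length=49,
            grid_intersect_coding="24424", end_nodes=[n1, n2])
n1.adjacent_lines.append(line)
n2.adjacent_lines.append(line)
```

The pointers in both directions are what let later stages walk from a node to its lines and back without touching the raster again.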
The line extraction system, being somewhat intricate, has been discussed in detail in a prior paper.2 Abstractly, each stage S of the system can be viewed as responding to distortions occurring within the map. These distortions may be characterized by a set of context-sensitive productions of the form

    Rf(i,j) Rl^n(i,j) -> Rf(i,j) Rl^n'(i,j),    l = 1, 2, ..., Ns

where Rf(i,j) represents some region about the point (i,j) having a fixed size and gray-level distribution, and Rl^n(i,j) and Rl^n'(i,j) represent regions about the point (i,j) having the same fixed size but differing gray-levels. By inversion of the production sets, each stage can be described as the repetitive application of the rules

    Rf(i,j) Rl^n'(i,j) -> Rf(i,j) Rl^n(i,j),    l = 1, 2, ..., Ns

in forward raster sequence to the points (i,j) of the map until no further response is obtainable. As an example, one such rule

    {M(i+1,j) = 0, M(i-1,j) = 0} {M(i,j) = 1} -> {M(i+1,j) = 0, M(i-1,j) = 0} {M(i,j) = 2}

is used in the medial axis determination to mark object regions of width 1 as possible line points for further investigation.

It is important to observe the degree of data reduction and organization which is accomplished through the extraction of line data. As previously mentioned, even a small map region contains a huge number of nearly 10^6 picture points. The extracted list structure typically contains no more than 300 lines and node points, thereby resulting in a very significant data reduction. Equally significant, the data format of the list structure permits efficient access to all information required in the perception of symbology. The digitized map array therefore may be erased to free valuable storage for other purposes.

PERCEPTION OF SYMBOLOGY

It is interesting to observe that certain familiar pattern recognition procedures cannot be directly used in the recognition of map symbology.
This results from the fact that in cartography, symbology cannot be well isolated, as there are often requirements for overlap or overlay of symbology in order to meet spatial positioning constraints. Many of the techniques used for recognition of isolated symbology, such as correlation or template matching of characters, cannot be used to recognize such non-isolated symbology and are thus not very powerful in map reading. In MAPPS, alternative techniques have been employed to accomplish isolation of symbology in addition to its recognition.

THE CONCEPT OF ISOLATION PROCESSING

Conceptually, isolation of symbology from within a map cannot be accomplished in vacuo. Isolation requires some partial recognition, while recognition generally requires some partial isolation. This necessitates the use of a procedure in which isolation is accompanied by partial recognition. In order to guide this procedure, there must exist some a priori knowledge about the structure of the symbology being sought. The underlying structure of pictorial symbology, such as is present in maps and elsewhere, is found to be that of planar graphs upon the removal of all metric constraints. Using this structure, the isolation process functions by sifting through the data of the map, proposing possible instances of pattern symbology on a graph-theoretic equivalency basis, thereby suppressing extraneous background detail.

Two types of graph equivalency are used in isolation (Figure 5, graph equivalencies):

• isomorphism
• homomorphism

One graph is isomorphic to another if there exists a one-to-one correspondence between their nodes which preserves adjacency of edges. A graph is homomorphic to another if it can be obtained from the other by subdivision of edges. Figure 5 yields an instance of isomorphic and of homomorphic equivalence of graphs.
Using the above definitions of graph equivalency, the process of isolation can be achieved by means dependent upon, and able to cope with, the types of structural degradation found within actual maps (Figure 6: crossing of lines, breakage of lines, overlay of nodes, overlay of lines, and uncertain location of corner nodes). For instance, should a map contain no structural degradation, then on the basis of graph structures only, it is necessary and sufficient to propose as possible symbology isolations those components of the map which are isomorphic to the symbology being sought. If the map contains no crossing, overlay, or breakage of lines, then on the basis of graph structures only, it is necessary and sufficient to propose as possible symbology isolations those partial subgraphs of the map which are isomorphic to the symbology. If the map contains no breakage or overlay of lines, then it is necessary and sufficient to propose those partial subgraphs which are homomorphic to the symbology sought. Finally, if a map contains as the only forms of structural degradation line crossovers, line breakage, node overlay, and uncertain location of corner nodes, first complete the map by filling in all possible instances of line breakage. Then it is necessary and essentially sufficient to propose as possible pattern isolations those partial subgraphs of the completed map which have no two adjacent broken edges and which are homomorphic to the symbology sought.

Although the process of isomorphic matching of graphs can be conducted rather efficiently,4 the more realistic process of homomorphic matching requires the investigation of large numbers of partial subgraphs of the map for possible equivalency to pattern symbology.
Figure 7 (partitioning of feature space by metric tests) shows: (a) a feature space; (b) the region containing instances of pattern symbology; (c) the region formed by bounds testing of features; and (d) the best conservative region formed by bounds testing of features. Figure 8 (the structure of a pattern symbol) shows a pattern symbol S, its spanning tree Ts, the elements Ts(i) of Ts, and an application to a map.

In order to limit the number of partial subgraphs which need be checked for homomorphic match, metric equivalency tests have been integrated into the graph-theoretic isolation process. These tests include bounds checking of the lengths, curvatures, thicknesses, and angles between lines, and may be easily extended as required. If the metric tests are well chosen, then they will be conservative, i.e., they will not reject any true instance of pattern symbology. This may be seen by viewing the various screening quantities as features in the feature space of Figure 7. The ensemble of all true instances of pattern symbology will form some region A in the space (Figure 7b). Any set of metric tests may also be viewed as partitioning the feature space, passing only those instances of symbology which lie in some region B of the space formed by the partition. If region B contains region A, then the set of tests is conservative. If region B exactly contains region A, then the set of tests also forms a perfect recognizer. It is more important, however, that the tests result in a high processing efficiency. This may be achieved by immediate testing of each feature as it is first calculated. This form of testing generates a partition which boxes in some region of feature space as shown in Figure 7c.
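Immediate, feature-at-a-time bounds testing can be sketched as follows (Python; the feature names and bound values are invented for illustration, and evaluating features only on demand is what makes early rejection cheap):

```python
# Hypothetical (lo, hi) bounds per feature; conservatism means the box
# they define (region B) must contain every true instance (region A).
BOUNDS = [("length",    40.0, 60.0),
          ("curvature",  0.0,  0.2),
          ("thickness",  1.0,  3.0)]

def passes_metric_tests(compute_feature, bounds=BOUNDS):
    """compute_feature(name) -> value.  Each feature is tested as soon
    as it is computed, so most false candidates are rejected before the
    remaining, more expensive features are evaluated at all."""
    for name, lo, hi in bounds:
        if not lo <= compute_feature(name) <= hi:
            return False               # immediate rejection
    return True

road = {"length": 49.0, "curvature": 0.05, "thickness": 2.0}
blob = {"length": 49.0, "curvature": 0.90, "thickness": 2.0}
```

Widening the bounds keeps the tests conservative at the price of more candidates surviving to the (costlier) structural match.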
While this partition is not necessarily perfect, it is usually possible to adjust the bounds of the tests so as to achieve a near optimal, as well as conservative, performance on the basis of limited sampling of pattern symbology within a map (Figure 7d). Thus the isolation process may also often serve well as the final stage of recognition. When desired, however, it is always possible to concatenate other more conventional recognition processes in order to achieve yet higher accuracies of recognition.

The routine MATCH

Application of the search for graphical and metric equivalencies is conducted via a recursive routine called MATCH. On the graph-theoretic level, MATCH functions through utilization of tree programming. In this approach, a spanning tree Ts is pre-specified for each pattern symbol S. The elements of Ts are named Ts(i), i = 1, 2, ..., Ns, where Ts(i) is constrained to be connected to Ts(1) through the set {Ts(j), j = 1, 2, ..., i}. These structural conventions, illustrated by Figure 8, are developed to permit utilization of a recursive search policy in matching Ts and the partial subgraphs of a map Gm.

The recursive structure of MATCH is shown in Figure 9 (structure of MATCH: one entry from the calling program, FINAL SUCCESS and ERROR RETURN exits back to it, a TEMPORARY SUCCESS path that gets the next match possibility for the current match level, and a recursive invocation of MATCH). It has one entry from, and two exits back to, the calling program. Being recursive, it can call itself. At the ith level of recursion, MATCH investigates the possibilities of homomorphic equivalence of elements of Gm to Ts(i). As each possibility is proposed, MATCH checks to insure that all implied graph-theoretic equivalencies between Ts(j), j = 1, 2, ..., i, and Gm are acceptable, and that basic metric equivalences are met.

More explicitly, at the recursive level i, MATCH takes the following action. If Ts has been fully matched, then MATCH takes a FINAL SUCCESS exit which carries it back up the recursive string with the isolated symbology from Gm. If all matching possibilities for Ts(i), i > 1, have been exhausted, then MATCH takes an ERROR RETURN exit back to the (i-1)th level of recursion in order to try to find other matching possibilities for Ts(i-1). Alternatively, if all matching possibilities for Ts(i), i = 1, have been exhausted, then MATCH fails to isolate the symbology sought and exits along the ERROR RETURN exit to the calling program. On the other hand, if it finds an acceptable match for Ts(i), then it exits via the TEMPORARY SUCCESS exit to continue the matching search for Ts(i+1). At each recursive level, MATCH performs one of three specific actions: matching to nodes of Ts, matching to lines of Ts, and initial matching to new pattern components of Ts.

Matching of nodes

The fundamental operation performed by MATCH is the matching of the immediate neighborhood of a node of Gm to that of Ts. This matching must satisfy several constraints. It must be feasible, must satisfy certain angular conditions, and must not violate any prior matching assumptions. A matching is feasible if the degree of the node of Gm is greater than or equal to that of the node of Ts. This requirement, for example, results in termination of the matching of the map segment of Figure 8 at recursive stage 16, because the degree of node Ts(16) was greater than the degree of the corresponding node of Gm. A matching satisfies the necessary angular constraints if all internal angles of the planar graphs of Gm and Ts are sufficiently similar. It satisfies prior matching assumptions if the present matching attempt neither conflicts with previous matching attempts nor involves lines of Gm which are matched to other pattern symbology. The neighborhood of a node of Ts is considered to be fully matched when the node and its adjacent lines are matched to a node of Gm and some subset of its adjacent lines. For instance, if the conditions represented in Figure 10a hold upon a call of MATCH, then Figure 10b shows a suitable match of the neighborhoods (Figure 10: node neighborhoods before and after matching).

Matching of lines

In matching a line of Ts to elements of Gm, MATCH finds a path in Gm which corresponds to the line of Ts. This path may contain one or more elementary lines and may even contain breaks. The path must, however, satisfy minimal constraints. It must not cross over itself; no portion of the path other than endpoints may have been previously matched; no breaks may be adjacent; the implied endpoint matchings must be consistent with prior matchings; and finally, certain metric equivalencies must be observed. Typically these metric equivalencies need be no more complex than a rough correspondence of length and curvature between the line of Ts and the path within Gm. As an example of the matching of lines, consider Figure 11 (line regions before and after matching). If the conditions of Figure 11a hold upon a call of MATCH, then Figure 11b shows a suitable matching between the line of Ts and a path within Gm.
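Stripped of the node- and line-matching detail, the FINAL SUCCESS / TEMPORARY SUCCESS / ERROR RETURN control structure of MATCH is what would now be called recursive backtracking. A generic sketch (Python; the candidates function is a hypothetical abstraction of all the graph-theoretic and metric screening):

```python
def match(i, Ts, candidates, assignment=None):
    """Backtracking skeleton of MATCH.  Ts lists the spanning-tree
    elements Ts(1)..Ts(Ns); candidates(elem, assignment) yields the map
    elements of Gm that could match elem under the current partial
    assignment.  A completed assignment is the FINAL SUCCESS exit;
    returning None after exhausting possibilities is the ERROR RETURN,
    which sends level i-1 on to its next possibility; descending a
    level is the TEMPORARY SUCCESS exit."""
    assignment = assignment or {}
    if i == len(Ts):                   # Ts fully matched: FINAL SUCCESS
        return assignment
    for g in candidates(Ts[i], assignment):
        found = match(i + 1, Ts, candidates, {**assignment, Ts[i]: g})
        if found is not None:          # success propagated back up
            return found
    return None                        # possibilities exhausted: ERROR RETURN

# Toy usage: each map element may serve at most one pattern element.
def cands(elem, assignment):
    pool = {"t1": ["a", "b"], "t2": ["b"], "t3": ["a", "c"]}[elem]
    return [g for g in pool if g not in assignment.values()]

result = match(0, ["t1", "t2", "t3"], cands)
```

In this toy run the first choice for "t1" forces "t2" onto "b" and "t3" onto "c", exactly the kind of constraint propagation the recursive exits implement.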
Initial matching of components

Matching of nodes and lines of connected symbology is conducted by the tracking of connectivity via Ts. This technique may be extended to the matching of symbology S composed of disjoint components through inclusion of lines within Ts which link nodes of the various components of S. These lines may, for matching purposes, be treated as straight lines of Ts, thereby simplifying the matching process.

FINAL CLASSIFICATION

MAPPS has the capability for inclusion of a final classification routine (CLASS). When used, this routine serves to provide a final decision as to whether an isolated piece of symbology is a true instance of the symbology being sought. If the isolation is determined to be erroneous, then MATCH continues its search toward an alternative isolation. CLASS may be implemented by a variety of approaches, the best known and most appropriate of which is through use of discriminant functions. The power of its application can be dramatically enhanced through proper use of results from MATCH. For example, quite complex features can be devised for input to CLASS from the very detailed description of the isolated symbology produced by MATCH. As a further example, MATCH may be used to isolate new symbology and tentative classification from a map to form a training set for CLASS. Then, with or without a teacher, the discriminant function underlying CLASS can be perturbed toward a more optimal setting by any of several well-known algorithms.3

(Figure 12: test results. Figure 12a: the input map. Figure 12c: isolated highways.)

OUTPUT

The output routine (OUTPUT) takes the isolated symbology recognized by earlier routines (MATCH, CLASS, ...) and produces the final MAPPS output in accordance with a specific user query. This is accomplished by establishing a data structure in which data can be retrieved through use of relational pointers.
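Such relational-pointer retrieval might look roughly like this (Python; the relation names "contains" and "position" follow the text, while the entity names and data layout are invented for illustration):

```python
# Each relation maps an entity to the entities (or data) it points to.
contains = {
    "church":   ["church-1"],                     # class -> isolated instances
    "church-1": ["line-7", "node-3", "node-4"],   # instance -> lines and nodes
    "line-7":   [(57, 45), (57, 46), (58, 46)],   # line -> picture points
}
position = {"church-1": (57, 45)}                 # nominal symbol center

def relate(relation, entity):
    """Follow one relational pointer from an entity."""
    return relation.get(entity, [])

instances = relate(contains, "church")        # isolated symbols of the class
parts     = relate(contains, instances[0])    # their lines and nodes
points    = relate(contains, parts[0])        # the digitized points involved
```

Each successive call of the same relation descends one level of the description, from symbology class to instances to lines and nodes to raw picture points.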
Retrieval is effected by specification of the desired symbology class and by calling various relations. The relation "contains" may be used, for instance, to find the various isolated symbols belonging to a specified symbology class. Another call of "contains" will then result in presentation of all lines and nodes present in the specified symbology. Yet another call of "contains" will return the specific picture points involved. Alternatively, a call of "position" will return the nominal location of the center of each isolated symbology. (Figure 12b: the input map after line extraction. Figure 12d: isolated railways. Figure 12e: isolated roads.)

RESULTS OF TEST RUNS

MAPPS has been tested on several map regions. In each case CLASS was set to accept all isolations in order to most stringently test the operation of MATCH. Throughout all testing the results were highly satisfactory. Figure 12 presents the results for a representative run. The map region of Figure 12a was fed to the early stages of MAPPS, producing the preprocessed map of Figure 12b. This preprocessed map was then subjected to several searches for specified symbology, resulting in Figures 12c through 12k. In all but one case the recognition was conservative. Only in the case of Figure 12f was a false isolation made: an M was there recognized as an E. Had CLASS been implemented using character recognition techniques, this misrecognition could have been avoided. In those cases where recognition was incomplete, as for the highway of Figure 12c, isolation was terminated by MATCH due to mismatch of structure between the map and the symbology sought. Some overall statistics on the test run: MAPPS correctly found instances of 18 types of lineal and point symbologies.
These instances were formed from 5382 elementary lines. In addition, 7 incorrect instances were isolated, although in each case this could have been avoided by use of a proper classification structure within CLASS. (Figure 12f: isolated 'E's. Figure 12g: buildings. Figure 12h: benchmark symbols. Figure 12i: churches. Figure 12j: swamp symbols. Figure 12k: spring symbols.) Since minimization of run-time was of minor importance, the average test run for each symbology search of a map region took approximately 10 minutes from input film to output description. It is estimated that this could have been improved very significantly by various means; however, this was not a major goal at this stage of research.

CONCLUSION

This work has been an investigation into a broad class of conventionalized, two-dimensional, pictorial patterns: the symbology of maps. Important aspects of the problem involve line extraction, isolation under conditions of qualitatively-defined degradation, use of graph structures and matching techniques in isolation, and interactive recognition of geometrically variable symbology. A sophisticated approach to line extraction yielded a useful data base upon which to conduct symbology isolation and recognition. The use of graph structure and matching in symbology isolation proved very effective. Unexpectedly, it was found to be seldom necessary to resort to formal classification techniques in recognition of the isolated symbology. Such techniques could be incorporated as desired, resulting in a continued search for symbology in case of any mis-isolation. The program as a whole can be expanded to the recognition of a wide variety of graphical symbology.
In addition, the concepts involved can quite possibly be applied to the automated perception of gray-level scenes such as blood cells, aerial photographs, chromosomes, and target detection.

ACKNOWLEDGMENT

The authors would like to acknowledge the interest displayed by other members of the Center for Informatics Research in this and related research. This research has been sponsored in part by the Office of Naval Research under Contract No. N00014-68-A-0173-0001, NR 049-172.

REFERENCES

1 R H COFER Picture acquisition and graphical preprocessing system Proceedings of the Ninth Annual IEEE Region III Convention Charlottesville Virginia 1971
2 R H COFER J T TOU Preprocessing for pictorial pattern recognition Proceedings of the NATO Symposium on Artificial Intelligence Rome Italy 1971
3 J T TOU Engineering principles of pattern recognition Advances in Information Systems Science Vol 1 Plenum Press New York New York 1968
4 G SALTON Information organization and retrieval McGraw-Hill Book Company New York 1968

Computer generated optical sound tracks

by E. K. TUCKER, L. H. BAKER and D. C. BUCKNER
University of California
Los Alamos, New Mexico

INTRODUCTION

For several years various groups at the Los Alamos Scientific Laboratory have been using computer generated motion pictures as an output medium for large simulation and analysis codes.1,2,3 Typically, the numerical output from one simulation run is so large that conventional output media are ineffective. The time-variable medium of motion picture film is required to organize the results into a form that can be readily interpreted. But even this medium cannot always convey all of the information needed. Only a limited number of variables can be distinctly represented before the various representations begin to obscure or obliterate each other. Furthermore, the data presented usually must include a significant amount of explanatory material such as scaling factors, representation keys, and other interpretive aids. If a film is to have long-term usefulness to a number of people, this information must either be included on the film or in a separate writeup that accompanies the film.

In an effort to increase the effective information density of these films, a study was undertaken to determine the feasibility of producing optical sound tracks as well as pictorial images with a microfilm plotter. Some exploratory work done at the Sandia Laboratories, Albuquerque, New Mexico, suggested that this might provide a good solution to the problem.4 It has been demonstrated many times that a sound track facilitates the interpretation of visual presentations.5 However, from our standpoint, the addition of another channel for data presentation was as important as facilitating interpretation. Not only could a sound track present explanatory and narrative material efficiently and appealingly, it could also be used to represent additional data that might otherwise be lost. For example, it is always difficult to clearly represent the movement of many particles within a bounded three-dimensional space. If, however, the collisions of particles, either with each other or with the boundaries of the space, were represented by sounds, interpretation of results would be greatly facilitated. This is feasible only if the sound track is computer produced, not "dubbed in" after the fact.

It should be made clear at this point that it was not an objective of this project to have the computer create all of the waveforms represented on the sound track. What was required was that the computer be able to reproduce on an optical sound track any recorded audible sound, including voices or music. The waveforms that the computer would actually have to create could be limited to some of the sounds we wanted to use as data representations.

OPTICAL SOUND TRACKS

Sound is generated by a vibrating body which produces a series of alternating compressions and rarefactions of some medium, i.e., a wave.
As this series is propagated through the medium, particles of the medium are temporarily displaced by varying amounts. We shall speak of the magnitude and direction of this displacement as the instantaneous amplitude of the wave. If the variation of this amplitude can be described as a function of time, a complete description or encoding of the wave is obtained. Thus, a sound wave can be "stored" in various representations, as long as the representation fully describes the variation of amplitude with respect to time.

An optical sound track is one way of representing a sound. It consists of a photographic image which determines the amount of light that can pass through the track area of the film at a given point. As the film is pulled past the reader head, varying amounts of light pass through the film to strike a photocell, producing a proportionally varying electrical signal. A given change in signal amplitude can be produced at the photocell by varying either the area or the intensity of exposure of the sound track image.

Conventional sound tracks are produced by either of two methods. The variable area type of track is produced by having a beam of light of constant intensity pass through a slit of variable length to expose the film. In the variable intensity recording method, either the light's intensity or the slit width can be varied with the slit length held constant. Commercial sound tracks are produced by both methods. In both cases, the sound track image is produced on a special film that is moved past the stationary light source. Separate films of sound track and pictures are then reprinted onto a single film.

Sixteen-millimeter movies with sound have sprocket holes on only one edge. The sound track is located along the other edge of the film (see Figure 1). Such sound tracks are normally limited to reproducing sound with an upper frequency of 5000-6000 Hz.
This limitation is imposed by the resolution that can be obtained with relatively inexpensive lens systems, film and processing, and by the sound reproduction system of most 16 mm projectors.6

Figure 1-A computer generated optical sound track
Figure 2-Discrete sampling (time sampling the original signal; approximating the original signal from the samples)

INPUT SIGNALS

In order not to be limited to the use of computer created sounds alone, it was necessary to be able to store other complex audio signals, such as voices, in a form that could be manipulated by a digital computer. As discussed above, any audio signal can be completely described by noting the variation of the signal's amplitude as a function of time. Therefore, the data for a digital approximation of an audio signal can be obtained by periodically sampling the signal's amplitude (see Figure 2). The primary restriction associated with this approach requires that the sampling rate be at least twice the highest frequency contained in the signal.7 In effect, samples obtained at a relatively low sampling rate S from a wave containing relatively high frequencies f will create a spurious "foldover" wave of frequency S-f.

The input for our experimental film was recorded on standard 1/4-inch magnetic tape at a speed of 7 1/2 IPS. Frequencies greater than 8000 Hz were filtered out, and the resulting signal was digitized at a sampling rate of 25,000 samples/second. The digitizing was performed on an Astrodata 3906 analog-to-digital converter by the Data Engineering and Processing Division of Sandia Laboratories, Albuquerque. The digital output of this process was on standard 1/2-inch 7-track digital magnetic tape in a format compatible with a CDC 6600 computer. This digital information served as the audio input for the sound track plotting routine.
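The foldover effect can be illustrated numerically. Sampling a sine of frequency f above half the sampling rate S produces samples indistinguishable from a sine at the foldover frequency S-f (with inverted sign, since sin(2*pi*(S-a)*i/S) = -sin(2*pi*a*i/S) at the sample instants). A small sketch, with names of our own choosing:

```python
import math

def sample_sine(freq_hz, rate_hz, n):
    """Sample a unit-amplitude sine of the given frequency at the given rate."""
    return [math.sin(2 * math.pi * freq_hz * i / rate_hz) for i in range(n)]

S = 10_000      # sampling rate (Hz)
f = 7_000       # signal frequency above S/2, so it aliases
alias = S - f   # foldover frequency: 3000 Hz

direct = sample_sine(f, S, 50)
folded = sample_sine(alias, S, 50)
# At every sample instant, direct[i] == -folded[i]: the 7000 Hz tone is
# indistinguishable from a phase-inverted 3000 Hz tone after sampling.
```

This is why the experimental input was low-pass filtered at 8000 Hz before digitizing at 25,000 samples/second: the filter keeps all remaining signal content below half the sampling rate.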
PLOTTING THE SOUND TRACK

The sound track plotting routine accepts as input a series of discrete amplitudes which are then appropriately scaled and treated as lengths. These lengths are plotted as successive constant-intensity transverse lines in the sound track area of the film. When these lines are plotted close enough together, the result is an evenly exposed image whose width at any point is directly proportional to an instantaneous amplitude of the original audio signal (see Figure 1). Consequently, as this film is pulled past the reader head, the electrical signal produced at the photocell of the projector will approximate the wave form of the original audio signal.

The routine is written to produce one frame's sound track at a time. During this plotting, the film is stationary while the sound track image is produced line by line on the cathode ray tube of the microfilm plotter. The sound reproduction system of a motion picture projector is very sensitive to any gaps or irregularities in the sound track image. Plotting a sound track therefore requires very accurate film registration. Furthermore, the sound track image must be aligned in a perfectly vertical orientation. If either the registration or the vertical alignment is off, the track images for successive frames will not butt smoothly together and noise will be produced.

PLOTTER MODIFICATIONS

All of our early experimental films were produced on an SD 4020 microfilm printer/plotter. Three modifications had to be made to the 16 mm camera of this machine in order to make these films. These modifications do not affect any of the camera's normal functions. In the first modification, the Vought 16 mm camera had to be altered to accommodate single sprocketed 16 mm movie film. For this it was necessary to provide a single sprocketed pull-down assembly. This was accomplished by removing the sprocket teeth on one side of the existing double sprocket pull-down assembly.
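The amplitude-to-line-length mapping described under PLOTTING THE SOUND TRACK can be sketched as follows. The function and parameter names are ours, and the raster coordinates are illustrative, not the actual plotter addressing; silence is offset to half width so that negative excursions remain representable, as in a variable-area track.

```python
def track_lines(samples, max_width=100, x_edge=0, y_top=0, y_step=1):
    """Map signed sample amplitudes to transverse line lengths for one
    frame of variable-area sound track.  Each sample becomes one
    horizontal line whose length is proportional to the instantaneous
    amplitude, offset so that zero amplitude plots at half width."""
    peak = max(abs(s) for s in samples) or 1  # avoid dividing by zero
    lines = []
    for i, s in enumerate(samples):
        # scale amplitude into [0, max_width]; 0 amplitude -> mid width
        length = round((s / peak + 1) / 2 * max_width)
        y = y_top + i * y_step
        lines.append((x_edge, y, x_edge + length, y))  # (x1, y1, x2, y2)
    return lines
```

Plotted close enough together, these constant-intensity lines form the evenly exposed, variable-width image the projector's photocell reads back as the waveform.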
Next, it was necessary to replace the existing lens with a lens of the proper focal length to enable the camera to plot the sound track at the unsprocketed edge of the film. The lens used was a spare 50 mm lens which had previously been used on the 35 mm camera. With the existing physical mountings in the 4020, this 50 mm lens presents, at the film plane, an image size of approximately 17.5 X 17.5 mm. Thus, with proper raster addressing, a suitable 16 mm image and sound track may be plotted on film. (Increasing the image size in this fashion produces a loss of some effective resolution in the pictorial portion of the frame while the 50 mm lens is in use. This loss of resolution in the picture portion is not particularly penalizing in most applications.) Finally, it was necessary to expand the aperture both horizontally and vertically to allow proper positioning and abutment of the sound track on the film. By interchanging the new lens with the original lens, normal production can be resumed with no degradation caused by the enlarged aperture and single sprocketed pull-down. No other modifications were required on the SD 4020 in order to implement the sound track option.

The primary difficulty we encountered using the SD 4020 was that we could not get consistently accurate butting of consecutive frames. Therefore, the later films were plotted on an III FR-80, which has pin-registered film movement. In order to use this machine, the film transport had to be altered to accommodate single sprocketed film, and the aperture had to be enlarged. A software system tape was produced to allow the sound track image to be plotted at the unsprocketed edge of the film, with the pictorial images still plotted in the normal image space. The FR-80 also provides higher resolution capabilities, so that no loss of effective resolution is incurred when pictorial images and the sound track are plotted in one pass through the machine.
As was discussed earlier, optical sound tracks are usually limited to reproducing sound with an upper frequency of 5000-6000 Hz. Since motion picture film is projected at a rate of 24 frames/second, a minimum of 410 lines per frame are needed to represent such frequencies in the sound track. While we have made no quantitative tests to demonstrate the production of such frequencies, we would expect sufficient resolution to produce frequencies in or near this range with either of the plotters. Our applications so far have not needed the reproduction of sounds in this frequency range.

THE TRACK PLOTTING ROUTINE

The present sound track plotting routine was written with three primary objectives in mind. First, it was felt that it would be advantageous to be able to produce both pictorial imagery and the sound track in one pass through the plotter, with the synchronization of pictures and sound completely under software control. Second, the routine was written to allow the user maximum flexibility and control over his sound track "data files". Finally, the routine was designed to produce film that could be projected with any standard 16 mm projector.

One-pass synchronization

The sound track plotting routine is written to produce one frame's sound track at a time, under the control of any calling program. However, in a projector, the reader head for the sound track is not at the film gate; it is farther along the threading path. The film gate and the reader head are separated by 25 frames of film. Therefore, to synchronize picture and sound, a frame of sound track must lead its corresponding picture frame by this amount so that as a given frame of sound track arrives at the reader head, its corresponding pictorial frame is just reaching the film gate. In order to be able to generate both picture and sound in one pass through the plotter, it was necessary to build a buffer into the sound track plotting routine.
This buffer contains the plotting commands for 26 consecutive frames of film. In this way, a program plotting a pictorial frame still has access to the frame that should contain the sound track for the corresponding picture. The simultaneous treatment of pictorial plot commands puts the synchronization of pictures and sound completely under software control. Furthermore, this can be either the synchronization of sound with picture or the synchronization of picture with sound. This is an important distinction in some applications; the current picture being drawn can determine which sound is to be produced, or a given picture can be produced in response to the behavior of a given sound track wave.

Flexibility

The present routine will read from any number of different digital input files and can handle several files simultaneously. Thus, for example, if one wishes to have a background sound, such as music, from one file behind a narrative taken from another file, the routine will combine the two files into a single sound track. The calling routine can also control the relative amplitudes of the sounds. In this way, one input signal can be made louder or softer than another, or one signal can be faded out as another one fades in. Any input file can be started, stopped, restarted or rewound under the control of the calling program.

DEMONSTRATION FILMS

Several films with sound have been produced using the sound track plotting routine. Most of the visual portions were created with very simple animation techniques in order to emphasize the information content added by the sound track. The films review the techniques employed for the generation of a sound track. No attempts have been made to rigorously quantify the quality of the sounds produced, since no particular criterion of fidelity was set as an objective of the project.
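The 25-frame lead between reader head and film gate, and the per-file gain control used for mixing, can be sketched as follows. This is a simplified model of the ideas described above; the names and data structures are ours, not the original routine's.

```python
FRAME_LEAD = 25  # the reader head sits 25 frames ahead of the film gate

def interleave(picture_frames, sound_frames):
    """Pair each sound frame with the picture frame that must share its
    physical film frame: sound frame i is plotted alongside picture
    frame i - FRAME_LEAD, so the sound reaches the reader head just as
    its corresponding picture reaches the gate."""
    paired = []
    for i, sound in enumerate(sound_frames):
        pic_index = i - FRAME_LEAD
        picture = picture_frames[pic_index] if pic_index >= 0 else None
        paired.append((picture, sound))
    return paired

def mix(files, gains):
    """Combine several per-sample amplitude streams with per-file gains,
    e.g. quiet background music behind a louder narration."""
    n = min(len(f) for f in files)
    return [sum(g * f[i] for f, g in zip(files, gains)) for i in range(n)]
```

Fading one signal out as another fades in amounts to varying the gains over time rather than holding them constant.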
Furthermore, the sound systems of portable 16 mm projectors are not designed for high-fidelity sound reproduction, since the audio portion will always be overlaid by the noise of the projector itself. For our purposes it was enough to make purely subjective judgments of the general quality of the sounds produced.

SUMMARY

The ability to produce optical sound tracks, as well as pictorial imagery, on a microfilm plotter can add tremendous potential to computer generated movies. The sound medium can serve to enhance the visual presentation and can give another dimension of information content to the film. This potential cannot be fully exploited unless the sound track and the pictures can be plotted by the computer simultaneously. Under this condition, the input for the sound track can be treated by the computer as simply one more type of data in the plotting process. The input for the sound track plotting routine discussed in this report is obtained by digitizing any audio signal at a suitable sampling rate. This digital information can then be plotted on the film like any other data.

Very few hardware modifications were made to the plotter in order to produce sound tracks. The modifications that were made did not affect the plotter's other functions. The routine is written to give the user as much flexibility and control as possible in handling his sound track data files. Multiple files can be combined, and synchronization is under the control of the user's program.

It now appears that the production of computer generated optical sound tracks will prove to be cost effective as well as feasible. If so, this process could conveniently be used to add sound to any computer generated film.

ACKNOWLEDGMENTS

While many individuals have made significant contributions to this project, the authors would like to give particular thanks to Jerry Melendez of Group C-4 for many hours of help in program structuring and debugging.
The work on this project was performed under the auspices of the U. S. Atomic Energy Commission.

REFERENCES

1 L H BAKER J N SAVAGE E K TUCKER Managing unmanageable data Proceedings of the Tenth Meeting of UAIDE Los Angeles California pp 4-122 through 4-127 October 1971
2 L H BAKER B J DONHAM W S GREGORY E K TUCKER Computer movies for simulation of mechanical tests Proceedings of the Third International Symposium on Packaging and Transportation of Radioactive Materials Richland Washington Vol 2 pp 1028-1041 August 1971
3 Computer fluid dynamics 24-minute film prepared by the Los Alamos Scientific Laboratory No Y-204 1969
4 D ROBBINS Visual sound Proceedings of the Seventh Meeting of UAIDE San Francisco California pp 91-96 October 1968
5 W A WITTICH C F SCHULLER Audio visual materials Harper & Row Publishers, Inc. New York 1967
6 The Focal encyclopedia of film and television techniques Focal Press New York 1969
7 J R RAGAZZINI G F FRANKLIN Sampled-data control systems McGraw-Hill Book Company New York 1958

Simulating the visual environment in real-time via software

by RAYMOND S. BURNS
University of North Carolina
Chapel Hill, North Carolina

INTRODUCTION

Computer graphics has been seen since its inception1 as a means of simulating the visual environment. Ivan Sutherland's binocular CRTs were the first apparatus designed to place a viewing subject in a world generated by a computer. When the subject in Sutherland's apparatus turned his head, the computer generated new images in response, simulating what the subject would see if he really were in the 3-space which existed only in the computer's memory. This paper describes a system which is a practical extension of Sutherland's concept.

The problem of simulating the visual environment of the automobile driver has attracted a variety of partial solutions. Probably the most used technique is simple film projection. This technique requires only that a movie camera be trained on the highway from a moving vehicle as it maneuvers in a traffic situation. The resulting film is shown to subjects seated in detailed mock-ups of automobile interiors, who are directed to work the mock-up controls to "drive" the projected road. The illusion of reality breaks down, however, when the subject turns the steering wheel in an unexpected direction and the projected image continues on its predefined course. Mechanical linkages from the mock-up to the projector, which cause the projector to swing when the steering wheel is turned, have also been tried. But that technique still breaks down when the subject chooses a path basically different from the path taken by the vehicle with the movie camera. Such film simulators are termed "programmed". That is, what the subject sees is a function, not of his dynamic actions, but of the actions taken at the time the film was recorded. An "unprogrammed" simulator reverses this situation in that the image that the subject sees is determined only by his behavior in the mock-up.

Unprogrammed visual environment simulators have been built for studying driving behavior. The U. S. Public Health Service at the Injury Control Research Laboratory, Providence, Rhode Island, has constructed several examples of unprogrammed simulators. One of these features a model terrain board with miniature roads and buildings over which a television camera is moved through mechanical linkages to the steering wheel of an automobile mock-up. The television camera is oriented so that the subject is presented with a windshield view. This arrangement earns the "unprogrammed" label within the physical limits of the terrain board. In practice, however, its value as a research tool is limited to studying driver behavior at dusk, as the image presented to the subject is dim. Natural daylight illumination, even under cloudy conditions, is much brighter than the usual indoor illumination. Duplicating the natural daylight illumination over the surface of the whole terrain board was found to be impractical in terms of the heat produced and the current required by the necessary flood lamps.

Because of the difficulties and disadvantages of film- and terrain board-type simulators, some efforts in recent years have been directed toward constructing visual simulators based on computer-generated images. General Electric has developed a visual simulator for NASA, used for space rendezvous, docking and landing simulation, which embodies few compromises.2 The G. E. simulator output is generated in real time and displayed in color. However, from a cost standpoint, such a simulator is impractical for use as a highway visual simulator because the G. E. simulator was implemented to a large extent in hardware. Consequently, the search for a visual-environment simulator which could be implemented in software was initiated. A study investigating the feasibility of such a simulator was undertaken by the Highway Safety Research Center, Chapel Hill, North Carolina, an agency of the State of North Carolina. This study led to the development of the VES, for Visual Environment Simulator, a black-and-white approximation of the GE-NASA spaceflight simulator, adapted for highway environment simulation and implemented in software.

Figure 1-Mock-up of an automobile interior

VES DESIGN REQUIREMENTS

The requirements laid down by the Highway Safety Research Center were for a visual simulator that could be incorporated in a research device to totally simulate the driving experience to the subject. Not only was the visual environment to be simulated, but the auditory and kinesthetic environment as well. The subject was to be seated in a mock-up of an automobile interior, complete with steering wheel, brake and accelerator (see Figure 1). The kinesthetic environment was to be simulated by mounting the mock-up on a moveable platform equipped with hydraulic rams. Under computer control, the mock-up could be subjected to acceleration and deceleration forces, as well as pitch, yaw and roll. Similarly, a prerecorded sound track would be synchronized with the visual simulation to provide auditory feedback. To as great a degree as possible, the subject was to be isolated from the real environment and presented only with a carefully controlled simulated environment.

From the researcher's point of view, this simulation device should allow him to place a subject in the mock-up, present him with a realistic simulated environment and then study the subject's reactions. Further, the choice of reactions available to the subject should not be limited in any way. So, if the subject were to "drive" off the simulated road and through the side of a simulated building, the visual, kinetic and auditory feedback should realistically reflect his actions.

A visual simulator to provide the feedback described above must meet several requirements. To support the subject's unlimited alternatives, each image generated by the visual simulator must be determined only by the subject's inputs via the steering wheel, accelerator and brake, together with the subject's position in the simulated terrain. Therefore, the entire image representing the visual environment must be calculated in the time span separating subsequent images.

REALISM

The high premium placed on realism in the visual simulator implied that the time span between subsequent images would be short, comparable to the time span between movie or television frames. The realism requirement also made hidden surface removal mandatory. Transparent hills, cars and road signs were unacceptable if the illusion of reality were to be maintained. Further, television-type images were preferable to wire-frame drawings. If the images were to be of the wire-frame type, then objects would be represented by bright lines on the otherwise dark face of the CRT. For objects at close range, this representation presents few problems. But for objects at long range, the concentration of bright lines near the horizon would resemble a sunrise.

SYSTEM DESCRIPTION

The visual simulator software runs on a stand-alone IDIIOM-2 interactive graphics terminal consisting of a display processor, a VARIAN 620f mini-computer and a program function keyboard3 (see Figure 2). The display processor is itself a computer, reading and executing its program (called a display file) from the core of the mini-computer on a cycle-stealing basis. The display processor's instruction set is extensive, but the visual simulator uses only a few instructions. Those used are instructions to draw horizontal vectors at varying intensities at varying vertical positions on the screen. The display processor is very fast, drawing a full-screen (12") vector in about 20 microseconds. This speed allows a display file of seven thousand instructions to be executed quickly enough to effectively prevent image flicker at low light levels. The VARIAN 620f mini-computer is also fast. Its core has a 750 nanosecond cycle time, and most instructions require two cycles. Word size is 16 bits and core size is 16,384 words.
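The display file described above is simply a list of horizontal-vector drawing commands for the display processor. The sketch below is an illustrative model of building such a file; the instruction encoding is invented for illustration, not the terminal's actual format.

```python
def build_display_file(scanlines):
    """Build a display file from (y, x_start, x_end, intensity) spans.
    Each span becomes one horizontal-vector instruction; intensity is
    one of 8 levels (0-7), matching the up-to-8-intensity images the
    simulator draws."""
    display_file = []
    for y, x0, x1, level in scanlines:
        assert 0 <= level <= 7, "only 8 intensity levels are available"
        display_file.append({"op": "HVEC", "y": y, "x0": x0, "x1": x1,
                             "intensity": level})
    return display_file
```

In the real system the display processor executes such a file repeatedly out of the mini-computer's core, so regenerating the image means regenerating the file.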
The visual simulator (VES)· accepts a pre-defined data set which describes a plan view of the terrain through which travel is being simulated. The terrain data set consists of (x, y, z) triples which describe the vertices of polygons. At present, the YES input data set resides in the computer memory at all times. The main function of the VARIAN fixed-head disk mentioned above will be to store the YES input data set. In operation, the YES system accesses a portion of the input data set corresponding to the terrain which is "visible" to the subject as a function of his position ,............... 155 J\ F c 1~ • E • 1 A Figure 3-Diagram depicting subject's position (light triangle) moving through terrain data set versus data set moving past subject's position Display File NO Last Plan on Disk Figure 2-YES system block diagram in the simulated landscape. Then, the steering, brake and accelerator inputs from the mock-up are analyzed and used to compute a wire-frame type view of the terrain which would be visible through a car's windshield as a result of such steering, braking or accelerating.Next, the hidden surface removal routine (HSR) processes each polygon to determine which polygons "obscure" others and to remove the parts of each that are obscured. The output of HSR is then converted into a program (display file) to be executed by the display processor. The display processor executed this program to draw the horizontal vectors at up to 8 different intensities which make up the television-like final image. The subject's position (see figure 3) in the terrain plan view is represented by the light triangle. The dark triangle represents a fixed object in the terrain. If the terrain is established as the frame of reference, the subject'sposition moves across the terrain. But from the point of view of the subject, who is stationary, the terrain must move toward him. 
The current angular position of the mock-up steering wheel in radians, relative to a fixed heading, is found in variable ALPHA. AL-