ON-LINE COMPUTING SYSTEMS

EDITED BY ERIC BURGESS

DATA PROCESSING LIBRARY SERIES

Proceedings of the Symposium sponsored by The University of California at Los Angeles and Informatics Inc., February 2-4, 1965, Los Angeles, California

© AMERICAN DATA PROCESSING, INC., 1965
22nd Floor, Book Tower, Detroit, Michigan 48226
Library of Congress Catalog Card Number: 65-21221

Foreword

EARLY IN ITS HISTORY as a corporate entity, Informatics Inc. sponsored a symposium on disc files. The response to that symposium was beyond all expectations. More applicants were turned away than were able to register. Experience with that disc file symposium showed that much management time and effort must be devoted to the staging and programming of a successful presentation. Therefore, Informatics Inc. proposed to Engineering Extension of the University of California, Los Angeles, joint work on an On-Line Computing Systems Symposium to be held early in 1965. Acceptance of this proposal by UCLA meant that the full facilities, prestige, and know-how of a great university could be brought to work on this symposium. The results have fully vindicated the wisdom of this decision. The final registration of 768 shows the great current interest in on-line systems. This registration also exceeded all estimates. The cooperation of Informatics Inc., with its heavy, first-hand skill and interest in programming and specifying on-line systems, and the University of California, devoted through such organizations as Engineering Extension to bringing new and current fields of knowledge to people, provided an outstanding example of the benefits to be gained by education and industry working together.

Jackson W. Granholm
Sherman Oaks, California

Preface

ON-LINE DATA PROCESSING SYSTEMS have recently become of interest in digital computer applications. Developments in digital transmission, the availability of faster bulk storage devices, and the use of man/machine interface devices have stimulated a new kind of data processing. In this processing, information is entered into the system as it is generated. Outputs are requested as they are required. These inputs and outputs are occasioned by external stimuli, man or machine, to which the computer responds.

On-line computing systems include at least two important classes of systems. The first is one in which response times are measured in milliseconds. Such systems are automatic, and many of them are closed loop, since the timing requirements preclude the intervention of men. Examples are process control applications, military satellite control systems, and radar tracking and recording systems. The second important class includes computer systems to which several interrogation and display devices are connected, thus establishing man/machine communication. Examples are found in military command and control systems, space vehicle command and control systems, and various commercial systems.

The three-day symposium at the University of California Extension, Los Angeles, sponsored by the Department of Engineering and Informatics Inc., included discussion of both classes of on-line systems. In addition, it covered, with a considerable degree of thoroughness, the principles, disciplines, and practices which are applicable to on-line systems design, both in machinery and programming. The symposium was divided into six morning and afternoon sessions, each with a separate chairman. These sessions and their chairmen were:

Session I: Motivations. Chairman - Dr.
Gerald Estrin, Professor of Engineering, UCLA
Session II: Techniques. Chairman - Francis V. Wagner, Vice President, Plans and Programs, Informatics Inc.
Session III: Approaches. Chairman - Dr. Michel A. Melkanoff, Associate Professor of Engineering, UCLA
Session IV: Methods. Chairman - Jackson W. Granholm, Vice President, Technical Communications, Informatics Inc.
Session V: Applications. Chairman - Dr. Bertram Bussell, Assistant Professor of Engineering, UCLA
Session VI: Examples and Summary. Chairman - Irving Cohen, Vice President, Command and Control, Informatics Inc.

The proceedings are organized in parts to correspond with the sessions. The Welcome Address was given by Dr. Paul H. Sheats, Dean, University of California Extension. The Banquet Address was by Dr. Simon Ramo, Vice Chairman of the Board, Thompson Ramo Wooldridge Inc., and President, Bunker-Ramo Corporation. Dr. Walter F. Bauer, President, Informatics Inc., was Chairman of the Banquet, and Jackson W. Granholm, Vice President, Technical Communications, Informatics Inc., was Toastmaster.

The papers for the symposium were selected by an advisory board consisting of: Dr. G. Estrin, Dr. C. B. Tompkins and Dr. S. Houston, of UCLA, and Dr. W. F. Bauer, F. V. Wagner, and J. W. Granholm, of Informatics Inc. Secretarial assistance was given by Mrs. Betty Leventhal of UCLA and Mrs. Rose Marie Gonzales of Informatics Inc. Public relations were handled by Tom Kramer of UCLA, and Frank Crane and Robert Stone of Informatics Inc. The assistance of many other people at UCLA and Informatics Inc. who helped to make the symposium possible is also gratefully acknowledged.

Eric Burgess
Editor, Informatics Inc.

CONTENTS

Foreword - Jackson W. Granholm
Preface - Eric Burgess, Editor

Part I - Motivations
The Future of On-Line Systems - Dr. Ivan E. Sutherland
On-Line Systems: Their Characteristics and Motivations - Dr. Walter F. Bauer
Mathematical Techniques for On-Line Systems - Dr. C. B. Tompkins

Part II - Techniques
Multi-Computers Applied to On-Line Systems - Dr. Gene M. Amdahl
On-Line User Languages - Professor Joseph Weizenbaum

Part III - Approaches
On-Line CRT Displays: User Technology and Software - Werner L. Frank
Priority Interrupt Characteristics for On-Line Control - Emil R. Borgers
Group Communications in On-Line Systems - Arthur M. Rosenberg

Part IV - Methods
Message Switching Plus - Dr. Herbert F. Mitchell, Jr.
Graphical Communication in an On-Line System - Donn B. Parker

Part V - Applications
On-Line Scientific Application - Dr. David A. Pope
Structuring Compilers for On-Line Systems - Dr. R. B. Talmadge
The QUIKTRAN System - John H. Morrissey

Part VI - Examples and Summary
The PAT Language - Glen D. Johnson
An Example of Multi-Processor Organization - David V. Savidge
On-Line Computing Systems: A Summary - Dr. Harry D. Huskey

List of Attendees (Symposium on On-Line Computing Systems)

PART I - MOTIVATIONS
The Future of On-Line Systems
Dr. Ivan E. Sutherland*

*Director for Information Processing Techniques, Advanced Research Projects Agency, Department of Defense, The Pentagon, Washington, D.C.

THE SYSTEMS

THERE ARE SIX KINDS of on-line systems:

1. Systems for Process Control let factories manufacture products more cheaply. Process control systems do everything from simple feedback control to optimizing profits through linear programming. These process control systems have, and will have, a big effect on our domestic production capability.

2. Inquiry Systems permit people at many different locations to find out what is going on. Most familiar is the airline reservations system, but more such systems will come into use in the years ahead.

3. Specialized On-Line Systems perform particular complicated tasks, often military tasks. Industry is only just beginning to make use of specialized on-line systems for engineering and design.

4. On-Line Programming Systems put the raw power of a computer at the immediate disposal of a human user. Evidence of today's great interest in on-line programming systems is that more and more of them are being used.

5. On-Line Problem-Solving Systems will be required for doing even the simplest tasks without human help. Such systems will require the techniques of pattern recognition, process control, and heuristic programming, and will unite them meaningfully. It will be so difficult to do even the simplest tasks automatically that we will be busy with these tasks for some time to come.

6. On-Line Instrumentation will bring us better understanding of the interplay of the programs and data within the computer. Simple devices and programs to keep track, on-line, of what the computer does will bring us better understanding of what our information processing systems are actually doing.

THE FUTURE

I will try to divide the future into two categories: the immediate future and the distant future. During the immediate future, we can expect systems to come to fruition which we now know how to design and how to build. In talking about systems for the immediate future, we talk about systems which have some recognizable place in the current scheme of society. In the immediate future, there probably will not be any tremendous political upheaval or war, or natural catastrophe which would cause the entire technology of the world to take a new turn and, thus, render my predictions ridiculous.

But the future is long, probably longer than the past. Information processing is something we have proved can be done, and we are going to do more of it. I believe that, in principle, it is possible to make information processing systems which will do intellectual tasks that human beings cannot possibly hope to do. The creation of such systems is a challenge that society will accept. The only question is when? About when we can only speculate, because the future of any technology is so interwoven with the political, social, and scientific developments around it.

SYSTEMS OF THE FUTURE

1. Process Control Systems - There is no reason why Man should have to work for a living. Everyone recognizes the trend toward more and more leisure time, time in which our activities are not prescribed. Leisure time is not necessarily idle time; if we do nothing during leisure time the world will continue as before.
In the foreseeable future, process control and automation will make possible more leisure time for all of us. One of the major social issues yet to be faced is how to measure an individual's contribution in a leisure society. Today, we pay people for working, or for concentrating on a single assembly operation for a long and unpleasant period, or for being away from home, or for taking personal risk, or for artistry, for making or doing something "beautiful", or for inventiveness, for devising something which makes life more pleasant for other people. Or, we pay people for responsibility, for making decisions which will have to stand the test of history and expose the decider to historical recognition or historical contempt. In a leisure society, what would people be paid for? How would you recognize a person's contribution to society? If no one works, except when he wants to, or no one spends long onerous hours at menial labor, except if he wants to, how do we recognize a man's contribution to society? The long-term future of on-line systems for process control and automation rests in our ability to answer these questions.

2. Inquiry Systems - Today, you can find out from anywhere in the country what space is available on any airplane, almost instantly. But that is only the beginning. In the foreseeable future, we could automate all sorts of information retrieval, from isolated inventories to the entire content of the Library of Congress. An important part of an information retrieval system is its completeness. If I could reach 75 per cent of the technical literature and 99 per cent of the technical experts in a field through a certain information retrieval system, I would not need to keep a personal library. Finding out what had been done in a certain field would be a simple one-inquiry task. Just as the utility of the telephone system is that everyone has a telephone, so the full potentiality of an information retrieval system will come only when it contains all pertinent information and almost everyone interested uses it.

Today, our inquiry systems are systems of which we ask questions, and not systems which can phrase and ask questions of human beings. On-lineness is a two-way street, and our success in using it comes from our willingness to make systems which can ask us questions as well as give replies to questions asked. Think of Socrates teaching by merely asking questions! Automated libraries will be most useful when they "understand" the information stored in them. Suppose you wanted to find out some fairly obscure technical fact. You could go to the library and ask the sweet young librarian a technical question. The librarian may know all about the books in the library and where they are stored, and what the numbering systems mean, and the procedures for signing them out, but she has not the foggiest notion of the content of all these books, and certainly not of the book which will contain the information you want. But imagine asking a technical colleague the same question. Your colleague has at his command the content of the book which contains the facts you need. He knows nothing about libraries, or numbering systems, or information retrieval, or cataloging methods, but he does know the facts that you want. Inquiry systems, in the future, will become more and more able to understand and correlate the facts. Imagine, if you will, in the far, far distant future a computer which contains in its files all that has ever been written.
In its spare time, this computer mulls over these facts to understand the implications of them and to try to come to new conclusions. This computer knows, of course, the interests of all the people it serves, because it has records of all the questions they have ever asked it. Given a new question, this machine of the future can not only regurgitate the information which it contains but also can put the asker in communication with other people interested in that subject.

3. Elaborate Special Purpose On-Line Systems have been devised primarily for military purposes. In the immediate future, we can expect industry to start using specialized on-line systems for design and management. There are obvious benefits to automatically performing mathematical computations which quickly and accurately predict the strength and weakness of designs and plans. There are bigger benefits to using on-line systems as a communications medium between people. When one person designs or plans something, he has no communication problem with himself. He can write cryptic notes on pieces of paper and leave them scattered around his desk in a positional notation which only he need understand. He can look in the upper-right corner of his desk for that memo on what he did yesterday. His individual work will proceed at a certain pace. When more than one person gets in the act, however, a communication problem is created. If a design job is divided up between people, the separate parts have to mesh correctly. If it is possible for the separate actions of individuals to drastically affect the actions of other individuals, then an almost hopelessly confusing tangle is possible. Imagine us designing an aircraft. My responsibility is the electrical wiring; yours is the gas tanks. If either of us changes his mind about the position of our part of the design, the other must be informed as quickly and easily as possible. By merely providing up-to-date design information to each user, an on-line design system could be a big help.

We have only just begun to work with computers as communication media between people. Today, by linking remote stations, we can allow one person to "look over the shoulder" of another through a computer. We have yet to combine the functions of the design system and the inquiry system. The ability of many people in widely separated locations to know exactly what is going on has already proved practical in the airline reservation system. It must be included in our computer-assisted design systems.

4. Programming Systems - The biggest interest in on-line systems, judged at least by the noise people are making today, is in on-line programming systems. In the past two years, we have learned a great deal about how to get on-line service for computer users. We have a spectrum of on-line programming systems, from simple and fast to complex but slower. But we are only just learning how to time-share our computers. There is a great deal still to be learned about memory sharing and routine sharing. Such techniques will enable more than one user to share the capacity as well as the time of the computer.

Today's on-line debugging techniques are still rather crude. We can now communicate with a computer program in symbolic assembly language if we used such a simple language in writing the program in the first place. It is possible to insert break-points in the program, to stop it when certain conditions arise, and then to examine what went wrong.
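As a present-day illustration of that break-point idea (stopping a running program when a stated condition arises and then examining its state), here is a minimal sketch using Python's standard trace hook. The function names, the toy accumulate routine, and the break condition are hypothetical, chosen only to show the technique; they are not part of the original paper.

```python
import sys

def conditional_breakpoint(condition):
    """Build a trace function that halts the run and dumps local variables
    as soon as condition(frame_locals) becomes true."""
    def tracer(frame, event, arg):
        if event == "line" and condition(frame.f_locals):
            print(f"break-point hit in {frame.f_code.co_name} at line {frame.f_lineno}")
            for name, value in frame.f_locals.items():
                print(f"  {name} = {value!r}")   # examine what went wrong
            sys.settrace(None)                   # stop tracing
            raise RuntimeError("conditional break-point")
        return tracer                            # keep tracing this frame
    return tracer

def accumulate(values):                          # hypothetical program under test
    total = 0
    for v in values:
        total += v
    return total

# Stop the run the moment the running total goes negative.
sys.settrace(conditional_breakpoint(lambda loc: loc.get("total", 0) < 0))
try:
    accumulate([3, 5, -20, 7])
except RuntimeError:
    pass
finally:
    sys.settrace(None)
```

An on-line user at a console would see the dumped state immediately and could correct the program without waiting for a batch turnaround, which is the fluency the paper asks for in higher-level languages.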
We have not yet learned to communicate with similar fluency in any higher-level language. We have not yet built the systems, although we could within the next year, which would let us have the same fluency of on-line communication with programs written in higher-level languages.

At the moment, we are still adapting off-line techniques to our on-line systems. For instance, we still see remnants of the card image in on-line systems. If, in a truly on-line system, there is no need for punched cards, why maintain the card image? Of course, we maintain the card image because it is more economical to adapt our existing card-oriented programming systems to our new on-line techniques than to start afresh. Our progress in getting "on-line" can be measured by our success in abandoning entirely old concepts which do not contribute to an on-line system.

We are only just beginning to explore systems where the computer asks questions of the programmer to resolve ambiguities in what it is told. Imagine a situation five years from now when, given a problem to solve, I approach a machine and say, "I have a problem in numerical analysis to solve." The machine asks me a few questions about my problem and decides what the appropriate programming language is to use. I write a few expressions in the language, making, as usual, a few mistakes. The machine asks, in each case, what I really meant, perhaps giving me an interpretation of what I said in different terms. I recognize my mistakes and correct them immediately.

What languages are appropriate to on-line use of a computer? McCarthy claims that programming a computer in English is like flying an airplane with reins and spurs. But programming a computer in English is a much more reasonable proposition for me than programming it in Hindustani, just as flying an airplane with my hands and feet is a much more reasonable proposition for me than flying it with my elbows and knees, because using my hands and feet seems more "natural". To improve all our on-line systems, we need more and better languages of communication between the man and the machine which are "natural" in the sense that they are easy to use and fit the task. Why can't I write mathematical equations which look like mathematical equations and have the machine accept, compile and perform them? Why can't I describe network problems to the computer by means of the picture showing the network? Why can't I, in filter design, place poles and zeros on the complex plane? The answer in each case is: I can in principle, but not in practice. As yet, the techniques which let me do these things are not widely used.

The prospect of the next five years is exciting because there is so much that we now know can be done, so much that we even know how to do, so much that we can put into use by just taking the time and trouble to do so. The prospect of the next five years is exciting because we will be finding out which of the things that we know how to do are actually worth using, which are economically feasible, and which are truly useful.

5. On-Line Problem-Solving Systems - The time is ripe to collect the techniques of pattern recognition, process control, and heuristic programming together to gain a new capability. There are simple tasks to be done in places such as space, where humans cannot go or even communicate, which machines based on these three techniques could do automatically. In the near future, we can expect such machines, "automata" if you will, to come into experimental use.
The development of automata will be good for the contributing disciplines. Pattern recognition workers have taken little account of systems which, by acting, can gather additional information to clarify ambiguous patterns. Have you ever had to move your head to complete your inspection of something? Of course you have. Similarly, we must learn how to make computers actively seek information about their environments. In the context of visual pattern recognition, this implies "taking a better look." In the context of man-machine interaction, this implies that the machine might pose a question to the man.

Process control today is little removed from the servomechanism. While it is true that we control very complex processes, the rules used are relatively simple. We think naturally of assembly line balancing, to optimize profit. The processes controlled today are uniform. In fact, industries which deal in non-uniform products have had some difficulty in automating. For instance, an automated coal mining scheme failed because of the variation in the size of the coal seam. Automated shoemaking is made difficult because of the variability of leather. Pattern recognition and heuristic programming can contribute versatility to process control. Opening up this new area for application of heuristics will stimulate our heuristic techniques.

6. Instrumentation - On-lineness is a two-way street. Not only can we put computers on-line with human beings, but also we can put human beings on-line with computers. We can devise and build instrumentation to let a human see what is going on inside the computer. The information processing industry is uniquely wanting in good instrumentation; every other industry has meters, gauges, magnifiers, instruments to measure and record the performance of the machines appropriate to that industry. Think of a gasoline engine under test. The test stand bristles with devices to measure temperature, speed, vibration, fuel consumption, and so on. Civil engineers have even instrumented a huge block of concrete, a dam. Gauges embedded in the concrete measure strain, temperature and humidity deep within the structure. How else could you find out what the internal conditions of the structure are?

Now think of a computer program under test. We run several sample problems, and check the answers. We rarely bother even to measure the time it takes to run the program. Certainly, we do not bother to take statistics on the number of times the various program paths are taken. Yet, in the information processing industry we are uniquely able to make instruments out of the very same stuff, computer programs, out of which the device being tested is made. Some simple computer program instruments have been made. I have used a program which interprets the program under test and makes a plot of the memory address of the instruction being executed versus time.1 Such a plot shows the time the program spends doing its various jobs. In one case, it showed me an error which caused a loss of time in a program which nonetheless gave correct answers. At Stanford University, a program which plots the depth of a problem tree versus time was used to trace the operation of a Kalah-playing program.

*Numbers refer to the bibliography at the end of each paper.
Kinslow printed out a picture of which parts of memory were "occupied" as a function of time for his time-sharing system.2 The result shows clearly the small spaces which develop in memory and must remain unused because no program is short enough to fit into them. Project MAC is using a display to show the dynamic activity of jobs within its scheduling algorithm. Watching this display, one can see jobs moving to higher and lower priority queues as time passes. Such instrumentation is not in widespread use. We can and will develop instrumentation which will be automatically inserted at compile time. A user easily will be able to get a plot of the various running times of his program. Think of the thousands of dollars saved by tightening up that one most-used program loop. Instrumentation can identify which loop is the most used.

CONCLUSIONS

The future of on-line systems depends a great deal upon the future of off-line systems. There is a lot of talk these days about a semi-automated mathematical laboratory in which a mathematician could prove theorems that he could not prove without computer assistance. How about having the computer prove the theorems all by itself? Suppose the artificial intelligence people make a machine which can, in fact, prove new theorems all by itself. What then becomes of our semi-automated mathematical laboratory? It's useless. Suppose we finally write a computer program which is able to write computer programs. Suppose we could state our problems to a computer able to program itself to solve the problems. What then will become of the on-line programming system? It will be unnecessary.

Today, we are in a very exciting period when interest in on-line systems is very high. Our great surge of interest in on-line systems cannot last forever. What is next? What comes after the on-line systems? Perhaps we shall return to off-line systems as our capability grows to have machines become better able to do things all by themselves. Probably it takes a very large computer to prove useful mathematical theorems automatically. But it is nonetheless likely that we shall eventually build such a system.

In the past, there have been cycles in our interest in on-line systems. In the early days, on-line use of computers was common because no one knew anything else to do. Then there were the bleak years of insulation between users and computers to gain computing "efficiency." Now, we are again in an outburst of interest in on-line computer systems. In the future, also, there will be changes in the emphasis on on-line systems. In five years, on-line programming systems will be commonplace, and a conference on on-line systems would be out of place. Research interest in on-line systems will have faded, although application of them will still be widespread. Perhaps general-purpose automatic problem-solvers will come into use soon after that. If so, even the use of on-line programming systems may decrease. Eventually, the process control on-line studies and the automatic problem-solving work will come together to make automata. Computers will then be truly on-line with the physical world in the same sense that we human beings are on-line with the physical world. Once again, there will be a resurgence of interest in on-line systems.
What I am predicting is that today's interest in systems in which a man and a machine get together on-line will be replaced in the distant future by interest in systems in which a computer gets directly on-line with the real world, sensing and interacting with it directly through transducers. The "real world" with which such systems interact will include human beings, of course.

CHARGE

We are embarked on the greatest adventure of all time. We believe that human beings are individually valuable and have inalienable rights. We believe that human beings are not to be used as slaves. We must find something else to give us the freedom of action which we call leisure. We turn, of course, to the machine. It will do our work for us so that we may be free to do only things which we wish to do. We will be free to exercise our creative impulses.

REFERENCES

1. J. C. R. Licklider and Welden E. Clark, "On-Line Man-Computer Communications", AFIPS Spring Joint Computer Conference Proceedings, 1962.
2. Hollis A. Kinslow, "The Time-Sharing Monitor System", AFIPS Fall Joint Computer Conference Proceedings, Vol. 26, Part I, 1964.

On-Line Systems: Their Characteristics and Motivations
Dr. Walter F. Bauer*

*President, Informatics Inc.

INTRODUCTION

THE CURRENT CONSENSUS among computer professionals is that on-line applications represent the wave of the future. The existence of this symposium itself stems from that conviction. However, as with all new subjects, some important basic questions arise. It is well to contemplate what on-line computing is and why it is becoming so important.

First, some estimates and forecasts (Figure 1): on-line computing probably represents 1 per cent of the total computer activity in the country today. It will probably represent 50 per cent in five years. Within ten years it will probably represent nearly all computer activity. This symposium and our discussions come, then, just at the beginning of this new "revolution".

[Figure 1: On-Line Computing Growth in United States - on-line computing as a percentage of all computing activities, 1965 to 1975.]

Modern computers are about fifteen years of age. The computer profession has undergone much strife during its formative years but now has reached some degree of structure, standardization and predictable growth pattern. Everyone in the computer world knows what subroutines, assemblers and simulation programs are. There is even a rather universal acceptance of the difference between an assembler and a compiler. However, just as this status is being reached, interest has rapidly developed in drastically new approaches to computer use. We are now confronted with new words and techniques: time-sharing, real-time, on-line, and multi-programming. In view of this, it seems appropriate not only to define these terms, but also to discuss the total structure of on-line computing in an attempt to interrelate the various aspects of this new era. This is the major objective of this paper.

Another objective is to examine the motives of those advocates of on-line computer use. Is such use cheaper? What does it gain for the user? What is to be gained by on-line computing versus batch processing? But the prime question may well be whether on-line computing itself is basically new, or does it represent a natural extension of older techniques? It is interesting and instructive to trace the evolutionary paths which brought us to our present capability (or desire for capability) of on-line computing. Last, but not least, is the series of interesting questions dealing with techniques and technologies which inspired or were made necessary by on-line computing.
The implications to the user, the programmer and the machine designer are profound, but not unattainable. They should be spotlighted early to allow time for balanced development of all their facets.

DEFINITIONS AND STRUCTURE

First, it is the proper time for the computing field to rid itself of old-fashioned words and adopt more meaningful terms. The phrase "real-time" is itself a meaningless expression. This hyphenated expression was important a decade ago when a computer was lashed to instrumentation or tied closely to the outside world. The term was used to describe those tasks that needed to be locked or synchronized on a second or millisecond basis to some real-time occurrence. As applications of this type branched out, the term became more and more inappropriate. Question: is the SABRE system for airline reservations appropriately called a "real-time" system? The fact that lengthy papers1 have been written attempting to define real time is, in itself, ample evidence that the term is a misnomer; an application category and its description should be straightforward and simple.

The word on-line has been chosen as the title for this symposium. It is more meaningful than "real-time". It seems that definitions are long lasting and meaningful only if they are simple. With this in mind, the following definition of on-line computing is put forth for your consideration: "On-line computing is the efficient use of a computer in a system in which the computer interfaces with man or other machines to which it reacts in receiving and supplying information."

Let us not attempt to define on-line or real-time by some abstract reference to the passage of time or the urgency of the receipt of data, but rather, let us define it in terms of the environment of the computer system itself and the manner in which it is used. The above definition should withstand your scrutiny.

*Numbers refer to references at the end of the paper.

Analog/digital hybrid systems are on-line computing systems since the analog computer itself is a clever machine which, like man, receives and supplies information. An airline reservation system is on-line since the computer reacts to signals generated at the ticket office via an input device. Systems oriented towards scientific problem solving by use of a console are again on-line, since the console itself is the interface which, in turn, receives information from, and gives information to, the human who has a need to know.

The purist may argue that on-line computing then refers to all computer systems, since they must all have devices such as card readers and punches to give and receive information. We can avoid this weakness in the definition by insisting that interfaces with men and machines do not include conventional input/output equipment. We can also insist that the words "to which it reacts" rule out conventional input/output since, in those cases, the computer itself controls or drives the input or output process instead of reacting to it. In other words, for on-line systems, the computer is embedded in a system, and the part of the system outside the computer synchronizes the system. The signals to which the computer reacts are frequently random. This is in contra-distinction to those applications where the computer itself synchronizes input/output equipments.
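The distinction drawn here, that an on-line computer reacts to randomly timed signals from men or machines outside it rather than driving its own input/output schedule, can be sketched in modern terms. The following fragment is only an illustration of that idea; the device names, timings, and queue-based design are assumptions of this sketch, not anything specified in the paper.

```python
import queue
import random
import threading
import time

events = queue.Queue()

def external_device(name):
    """Hypothetical signal source standing in for a ticket-office console or an instrument."""
    for i in range(3):
        time.sleep(random.uniform(0.0, 0.2))   # signals arrive at random times
        events.put((name, f"request {i}"))
    events.put((name, None))                   # this source is finished

def on_line_computer(expected_sources):
    """React to each signal as it arrives; the outside world sets the pace."""
    finished = 0
    while finished < expected_sources:
        source, payload = events.get()         # block until something happens
        if payload is None:
            finished += 1
        else:
            print(f"reacting to {source}: {payload}")

sources = ["console-A", "instrument-B"]
threads = [threading.Thread(target=external_device, args=(s,)) for s in sources]
for t in threads:
    t.start()
on_line_computer(len(sources))
for t in threads:
    t.join()
```

The computer thread blocks until a signal arrives, so the external devices, standing in for consoles or instruments, synchronize the system rather than the computer driving its own input/output.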
Referring to Figure 2, it seems natural to divide on-line computing into two major areas: man/machine oriented applications, and instrumentation-oriented applications. In the instrumentation-oriented case, the computer is locked into instrumentation to which it reacts; in this case human participation is incidental. On the other hand, in the man/machine oriented case, instrumentation primarily enables man to "talk" to the system. The system is necessarily oriented to the console and to the men who operate it. We should hasten to add at this point that in creating definitions and meaningful structures, the obvious weakness is that many systems are not pure but, in fact, blended. In reality, larger systems are both instrumentation-oriented and man/machine-oriented. A large scale communication system, for example, will probably have an elaborate man/machine subsystem which allows extensive monitoring of message processing, or for human intervention for pathological cases which might arise.

[Figure 2: Two Major Areas of On-Line Computing - man/machine oriented applications (computer use, problem solving, programming) and instrumentation oriented applications (simulation-type, discrete).]

Instrumentation structured systems might further be broken down into "simulation" and "discrete" types. A simulation type is one which is closely synchronized by, or in concert with, events as they are happening. Now, with the discrete type, the system reacts to signals which are less frequent and are mostly random, such as those described by Poisson distributions. The latter are systems in which queues form and service may be relatively unpredictable, and may vary considerably relative to demand.

But we are here to examine the man/machine systems. Therefore, let us look with greater precision to applications and systems where the man is closely interacting and reacting with the system. Referring again to Figure 2, there seem to be three major areas for man/machine applications; these are problem solving, programming and computer use.

In problem solving, man wants to have the computer carry out complex processes whose parts are chosen and initiated by him. The computer is solving the problem in the sense of carrying out the detailed manipulations required. The man acts more as a control system monitor; he controls the pieces of computation. One of the best examples in problem solving applications is that developed by Culler and Fried.2 In their application, the computer can perform a wide variety of mathematical operations on a collection of points on an interval.

On-line programming is the second area identified. In this application, the man uses the computer to develop an end product: an object program which accomplishes a prescribed, known processing result. The programmer has used the computer to assist him in the process of selecting subroutines, preparing programming pieces for proper embedding into larger systems, for supplying data, for correcting the format of his instructions, or for examining the logic of his program structure. There seems to be little of this being done now as I have defined the problem here. Most on-line programming systems involve simulation of problem-oriented language statements, which comes under the heading of "computer use", as described in the following paragraph.

The third area of on-line applications is heavily oriented toward man. This refers simply to the use of the computer by the man. In this application the computer can be considered as his strong right arm.
The problem is solved by the man assisted by the machine. In input and output of data, for example, the computer is assisting the man in establishing formats, priorities, data locations, and so on. Another application is the question of a data base where the man is asking questions, and succeeding questions depend on previous answers. Still another application in this category is the control and monitoring of large scale systems.

While we are in the business of defining and naming processes, there is another term which needs attention. We have already dissected "real-time" and have concluded that it is an old-fashioned phrase which has little meaning in a modern sense. Another unfortunate phrase is "time-sharing". This phrase is usually used to describe the simultaneous use of a machine by a number of programmers or analysts. Many descriptions appear in the literature.3,4,5,6 However, in these systems it is not the sharing of the machine which is the most important. Rather, it is the orientation of the machine to the human. To be specific, time sharing comes into the picture because this is the only current efficient way of using a computer in close cooperation with a human, since the machine would not be used efficiently during the "head scratching" period of the operator, and in a "time shared" system, many operators can use it simultaneously. Thus idle time is reduced. Time sharing is a result of the fact that there is the man/machine orientation. To illustrate, the most important thing about an automobile is that it supplies transportation. The fact that it is efficient to have round wheels and a gasoline engine is overwhelmed by the transportation factor. And so with computers. Thus, the expression "time sharing" puts the emphasis on a secondary characteristic and is, therefore, not good terminology. I offer that "on-line" is a much more accurate and representative name.

MOTIVATIONS FOR ON-LINE SYSTEMS

In general, the motivation for on-line systems is to make the computer a more powerful tool. The computer is being made into a more powerful tool by making it more responsive to the user, more responsive especially in respect to the type of information obtained and the time required to get it. These systems greatly increase the efficiency of the user by giving him information which reduces red tape and mundane clerical operations. They bring the user closer to the data inside the computer and make it more accessible to him. They give the user only the information he needs, when he needs it. In other words, with on-line systems there are no lengthy outputs from which the operator must laboriously select the desired information. The on-line concept is the skeleton key to the files.

However, it behooves us to look carefully at on-line systems from the standpoint of the areas of assistance which the computer can provide to the user. If this is not done, the danger exists that we would expect too much from on-line systems; in our enthusiasm for all their merits we could create a considerable disenchantment among over-sold users. In the situation of man interacting and reacting with the cool gray computer, the appropriate question is, "How can the computer help the man?" The following four statements about this assistance should represent a mutually exclusive and mutually exhaustive set of assistance areas. They are:

1. Correct an input.
2. Accept a query or the specification of an operation, then perform the desired operation, which should provide the information the user needs to accomplish the next step.

3. Accept and appropriately handle information to enable the computer to interpret correctly future information which it may receive from the user, or

4. Direct, on a step-by-step basis, a procedure for information input and output.

Let us examine briefly each of these areas. To correct an input and to immediately signal the man in the loop for a correction or the need for a correction is an important time saving factor. Many needless hours of "turn around time" can be saved by finding out immediately that a transcription error exists. This is just one example.

One of the important aspects of computer use is in the area of computer aided processes. The man is carrying out a number of logical steps and needs the extra strength of the computer to help him in each of these steps. Because the specification for each step depends upon the results of the previous step, there is need for fast response. Consider, for example, a military commander asking questions about the status of enemy forces. It should be easy for you to imagine the series of interrelated interrogations.

While carrying out the procedures in many on-line systems, it frequently becomes obvious that certain procedures should be changed. For on-line problem solving, for example, if it is found in solving certain problems or in solving certain levels of problems that a procedure (e.g., multiply by cos X and integrate over the range 0 to 1) occurs frequently, it can be specified as a standard instruction, and the computer will react appropriately each time this instruction is received.

Computer-directed procedures are an important aspect of on-line computing and are little understood or appreciated. For example, they allow a complex query to be asked of a machine under the control of another machine and with the assistance of a third machine.

One thing should be borne in mind always about on-line systems. They increase the burden on the computer so that the efficiency of the man can be increased. We must realize always that a penalty or a price is extracted for each increment of increased human efficiency: more computer time, more complex computers, greater input/output equipment, and increased console costs. The tradeoffs undoubtedly favor ever increasing on-line capability. However, no system design should occur without recognition of the prices demanded.

COMPUTER-LED PROCEDURES

The importance of this area and its apparent lack of appreciation by computer users suggest more attention should be given to its definition and the benefits which can be derived. Consider, for example, the process of complex interrogation of a large data base. The computer must be an active participant in the process or the operation becomes unwieldy. Consider the functions as shown in the center panel of Figure 3. The user must perform the selection, but functions must be performed relating to the logic or syntax of the request and to the consultation of a dictionary or format specifications. These in turn require a look-up operation to files which provide the required data. In manual operation, as shown, the computer performs only the process of retrieval and presentation. Obtaining the procedure to be followed by consulting a comprehensive operator's handbook is left as a burdensome human-only operation. In the console-computer automated operation the only function left to the operator is that which must be left to him: the selection. The computer leads him by the hand, hopefully by the proper digits, down the rocky procedural path.

[Figure 3: Computer Interrogation Procedures - manual operation versus console-computer operation across the functions of selection, dictionary and format operations, logic-syntax operations, and retrieval and presentation.]

As a simple example of the principle espoused, imagine that a military commander wishes to have information about POL (petroleum, oil, and lubrication) resources and airfields in a certain section of the country. Also, suppose he wishes to have a list of airfields in five western states which have a POL availability of 80 per cent after an enemy attack. It is totally unacceptable and time consuming for him to make the request to a technician who would then transfer the information to an obscure code or punch it on a card. Rather, he makes the request to a staff officer who directly questions the machine.

Consider the following as a sample procedure. The staff officer may specify to the machine that he is interested in "installations" and chooses, from a list of installations which the machine gives him, the category
In the console-computer automated operation the only function left to the operator is that which must be left to him; the selection. The computer leads him by the hand, hopefully by the proper digits, down the rocky procedural path. As a simple example of the principle espoused, imagine that a military commander wishes to have information about POL (petroleum, oil, and lubrication) resources and airfields in a certain section of the country. Also, suppose he wishes to have a list of airfields in five western states which have a POL availability of 80 percent after an enemy attack. It is totally unacceptable and time consuming for him to make the request to a technician who would then transfer the information to an obscure code or punch it on a card. Rather, he makes the request to a staff officer who directly questions the machine. Consider the following as a sample pro... cedure. The staff officer may specify to the machine that he is interested in "installations" and chooses, from a list of installations which the machine gives him, the category MANUAL OPERATION CONSOLECOMPUTER OPERATION SELECTION MANUAL MANUAL COMPUTER DICTIONARY FORMAT OPERATIONS LOGIC-SYNTAX OPERATIONS DICTIONARY, CODE, LOGIC, SYNTAX FILES COMPUTER RETRIEVAL AND PRESENTATION FIGURE 3 COMPUTER INTERROGATION PROCEDURES "airfields". He then tells the machine he is interested in "resources" and the machine provides him with a list of resources from which to choose. A similar procedure takes place with respect to the geographic location. When he tells the machine that he is interested in an availability of 80 percent, the machine responds with a form to fill out. The officer enters the number 80 on the form and the list is printed out. The format of the request is natural. It is neither highly stylized, nor codified. The staff officer making the request and pushing the buttons is himself a military man who understands the commander's request and the reasons for it rather than the technical details of how to make the machine accept or respond to the request in its complex electronic way. The benefits of computer-lead procedures, however, are not limited to interrogations of the data base. The inverse procedure benefits equally man and machine. Consider the input of data which is relatively unstructured. The computer, of course, must finally accept and file the information in a highly structured form so that it may be retrieved efficiently. The computer can direct a procedure which allows the human to input the data in the order and in the form which the machine can accept. THE EVOLUTION OF ON-LINE SYSTEMS The advent of on-line computing systems has not blossomed suddenly, nor has it sprung full grown as a technical revolution. Rather, it can be viewed as a normally evolving ca- 19 CONVENTIONAL IN PUT / OUTPUT OPERATOR CONSOLE UTILITY AND ASSEMBLY PROGRAMS STYLIZED CONSOLE MONITOR SYSTEM OPERATION COMPILER AND ASSEMBLY SYSTEMS CONSOLE DIRECTED COMPUTER USE ON-LINE COMPUTER USE USER STATION QUERY LANGUAGE ON-LINE PROBLEM SOLVING ON-LINE PROGRAMMING FIGURE 4 EVOLUTION OF ON-LINE SYSTEMS pability. The capability and potential has been there and recognized for some time; it is the increased attention being given the techniques which is sudden and dramatic. To illustrate the evolutionary process, consider Figure 4. Portrayed there is capability, increasing from top to bottom accompanied by passage of time. 
In computer use in the early '50s there were the conventional input/output, the operator console, and the utility and assembly programs. People who used computers and sat at consoles for long hours could not help but conclude that there was a faster, better way to give and receive information from an operating computer. The SAGE system7 and the early output device using a cathode ray tube were examples. Also, operator console procedures became more sophisticated. Many system designers concluded that if you could give information at the console for diagnosing hardware failures, as the maintenance men did, then why could not the user give complex instructions to the machine dealing with the processes and procedures being carried out by the machine? These were illustrated in a number of systems which have been described in the literature.8 At the same time, strides were being made in non on-line areas with the development of new, advanced compiler and assembly systems in step with powerful computer-user languages.

System designers, naturally, borrowed heavily from the techniques being developed in stylized console and monitor system operation to develop console-directed computer use. In other words, instead of submitting information to the computer via punched cards and waiting for the results to appear on the printed page, these designers suggested keying data directly into the computer at a console or user station and receiving back almost immediately the information at either location. The computer actions being directed by the human then became very diverse and flexible. Many of these have been highlighted in the literature,9,10 especially those relating to exotic military systems.

About this time people took still a different, but related, approach: the implementation of query languages at user stations.11 The query language approach borrowed heavily from the technical developments of problem-oriented languages. The resulting query language was highly formatted, bearing many family resemblances to the conventional compiler languages, which were designed for immediate question and answer on-line use. These two developments gave rise to what seem to be the present three streams of effort: on-line computer use, on-line problem solving, and on-line programming.

PROGRAMMING STRUCTURE AND FUNCTIONS

There are five major parts to the programming system of an on-line system: the console and communication programs, the executive, the utility programs, the operating programs, and the data base. These are shown schematically in Figure 5 along with the information flow among them.

[Figure 5: Major Parts to On-Line Programming System - the console and communication program (data buffering, data formatting, operator logic controlling), the executive (scheduling, monitoring, controlling, synchronizing), the utilities (data moving, print-out controlling, auxiliary memory controlling), the operating program (production program operating), and the data base (storing program results or work in progress).]

The console and communication programs provide the linkage between the human and the system, with a display console as interface. In some cases where information is flowing to and from communication devices, programs to handle these functions would also come under this category. Three major functions are included: input/output data buffering, input/output data formatting, and operator logic controlling. Data which is carried into the system at the console must be temporarily buffered.
Frequently a number of characters are buffered until an "end of message" signal is provided, after which the total message is sent to the executive program. Also, format changes are frequently required to shape information to the form that the executive can accept. One of the newer concepts in operator consoles is represented by operator logic controlling. Under the topic of computer-led procedures described above, there is console procedure logic which is implemented by the console and communication programs. These programs lead the operator; the steps depend on the state of the system and the particular operation being performed. They may lead him by providing information on the next step, by informing him which next console steps are permissible, or by signaling him when a console step has been initiated which is not permissible. Properly designed, the console and communication programs are flexible and modular, and they can be modified easily to accommodate changing procedures.

The executive program provides the basic control for the system. It schedules all work to be performed, whether that work is to be provided by operating programs or by centralized input/output. It also monitors the entire operation and reacts to any system interrupt which signals the occurrence of an undesirable circumstance. For example, the executive may be alerted when a reserved portion of the memory is about to be used up. The executive also provides the functions of controlling the entire operation; that is, it initiates the various programming pieces and provides them with the operational parameters required to define a requested task. Last, but certainly not least, there is the job of synchronizing. In time-sharing, for example, each operator may receive computer time in cycles of, say, 200 millisecond increments, allotted to him by the executive in synchronizing the system operation.

The utility programs provide three basic functions: the movement of data within the system required by time sharing or pooled procedures, the controlling of the printout of information on a pooled basis, and the controlling of accesses to auxiliary memory. One of the most challenging problems in on-line systems is the correct design of utility programs, which can be considered as overhead programs initiated by the executive program. Data must be moved, for example, from working memory when the user or operator leaves his console. Data is constantly shuffled within the system to accommodate the various modes of system operation and the requirements placed on the system. Similarly, utility programs are necessary for controlling the printout of information when the printout facilities are to be shared by all of the users, as they usually are in an on-line system. Controlling the auxiliary memory is another utility type task since the bulk storage devices are usually shared by all.

The last two parts of the programming system are the operating programs and the data base, the latter referring to the storage of intermediate or final data. The data base communicates primarily with the operating program; for flexibility, there is a secondary link to utilities. The operating program, working through the executive program, may request utility programs to handle certain data in the data base.

Some of the general principles to be considered here are:

1. Utility programs exist for the benefit of each system user and for the system itself.
They are not considered part of the operating programs, but rather they are of a general utility nature which is called upon by the executive.

2. The executive is in reality the control device of the system. Nothing occurs automatically within the system without its being initiated and monitored by the executive program.

3. Although the data base is a passive program, it is included in this structure because the data base can itself be a computer program which is being operated upon by an operating program in an on-line programming configuration.

THE NEW TECHNOLOGY

On-line systems are giving rise to new hardware and software technology. Computers must necessarily be designed quite differently if they are to work efficiently in these kinds of systems. Also, there are a number of badly needed software and programming techniques to provide efficient and economical system implementation.

Some of the hardware aspects which need attention are as follows:

1. Memory protect hardware. This is a device which allows the currently operating program to be "locked out" of all but one part of the memory. The capability must be dynamic and under flexible control of the executive program, since the allowed operating areas of the memory change on a millisecond basis.

2. Inexpensive consoles. The computer industry must develop inexpensive consoles. A reasonable goal is that the console should sell for under $15,000, that it should have a cathode ray tube or something equivalent, that it should allow for some buffering of information, and that it should have a flexible and adaptable keyboard structure and status information display.

3. Multi-computers. Computers must be designed which allow the incremental addition of modular components, the use by many processors of high speed random access memory, and the use by many processors of peripheral and input/output equipment. This implies that high speed switching devices not now incorporated in conventional computers be developed and integrated with systems.

4. Improved input/output. The entire range of input/output parameters needs overhaul. For example, the random access memory device must be accessible through two to five channels. Channel logic and interrupt procedures must provide greater capability than they do on most present computers.

Of equal significance are some of the new software techniques which need attention:

1. Automatic segmentation. Since running programs will have access to only a portion of the memory, frequent "page turning" will be necessary as the program goes through its major operating pieces. Segmenting of the program can be very difficult if it must be done manually by the programmer. Assemblers or compilers with automatic segmentation or semi-automatic segmentation are needed.

2. Relocatable programs and automatic, dynamic relocation. There is the need to be able to produce relocatable programs, and to relocate these programs automatically and dynamically. A part of a program in auxiliary storage is seldom placed into the same spot in high speed memory from which it came, and the program segment is called into high speed memory under a wide variety of circumstances and conditions.

3. Computer use and programming modification languages. Entirely new languages are needed to allow flexible and powerful use of the computer from remote stations. A standardized set of macros is needed to provide the user with many of the functions he must perform very frequently. Two of the simplest examples are "begin" and "end" instructions.
These signal the system to make ready for the programmer's future activities which he may call upon the system to perform, and they signal the end of his activity to enable the system to begin to accommodate other activities.

4. Console utility programs. Much of the new procedure revolves around the display console. Programs and techniques are necessary to allow operator efficiency, and to allow the easy modification of programs.

5. Scheduling algorithms. Increased attention needs to be placed on the problem of techniques for scheduling the many users with their different priorities. Priorities, for example, can be assigned externally, or they can be assigned on a dynamic basis depending on how long the program has been in the system, or how much time remains before the deadline for results.

SUMMARY AND CONCLUSIONS

Virtually all computer applications will become on-line in the next ten years. "On-line" is much better terminology than either real-time or time-shared. On-line applications can be considered man/machine oriented or instrumentation oriented, the latter breaking down into three major on-line application areas: problem solving, programming, and computer use. It is important to understand how the computer can assist the man in on-line applications, and it is likewise important to realize that a price is paid in terms of computer time and programming complexity in gaining this user efficiency.

On-line computer developments are in reality normal evolutionary steps which developed from early console and monitor system operation and the initial SAGE-type display consoles. There exists a canonical structure for the programming system of on-line systems which consists of five major pieces: console and communication programs, executives, utilities, operating programs, and data base.

There are a number of challenging aspects in computer technology generated by on-line systems. Challenges in hardware design must result in hardware efficiencies to meet the new software techniques, and then there is the challenge of efficiently blending the "hard" and the "soft" for a truly efficient system. Respectful attention by all computer users to these essential factors or opinions is necessary for the new technology to keep pace with the demands and the potentials. Those computer professionals who are keenly aware of the problems and needs, and who continue offering improvements, will find the challenges stimulating and their efforts well rewarded.
Dr. C. B. Tompkins*

Mathematical Techniques for On-Line Systems†

*Director, Computing Facility and Professor of Mathematics, UCLA.
†The preparation of this paper was sponsored in part by the Office of Naval Research. Reproduction in whole or in part is permitted for any purpose of the United States Government.

INTRODUCTION

My PERSONAL INTEREST might be more nearly the study of mathematics and mathematical techniques from on-line systems than the title assigned me. However, the techniques which are currently usable for on-line systems are tools with which we shall expect to contribute to the development of these new mathematical ideas and techniques. I do not agree that "on-line" computation is almost synonymous with "multiple-console" concurrent computation. I shall consider problems which may impose small computing loads and small communications requirements and are, therefore, also suitable for multiple-user operation; but that will not be my aim.

One of the most famous multiple-user devices is the chalkboard (as it might be used in an old-fashioned way to teach arithmetic or algebra when a sizable fraction of a class is sent to the chalkboard to work). A pad of paper and a library can provide another multiple-user system, at least under communistic policies of using the paper, and under reasonable rules for circulation of the library volumes. Use of a single computer by several users concurrently is now one of the exciting sectors in which computers and computation are developing. To the extent that these applications may make an extremely powerful, extremely fast, and extremely economical analogue of a desk calculator, the techniques are roughly those of a desk calculator. The functioning is on-line, however, and such functioning is of interest here.

An on-line computation has one principal advantage over off-line computation: the advantage of being subject to exploitation through implicit decisions by the user. This is independent of the multiplicity of simultaneous uses. The classical view of automatic computation (prevalent in the "remote" past of, say, 1958) included a high evaluation of austerity. Thus, then (and still in some laboratories) efficient and effective use of the available equipment required that sizable bites of the calculation be described explicitly. This was so that every decision in the course of the calculation of each bite could be carried out by the automatic machine in accordance with an explicit rule furnished to it in the coding of the problem. If a process of sequential decision was contemplated, with decisions based on the calculations which had preceded the decision, and if the proprietor of the problem being processed planned to intervene personally at the time of decision, then either his intervention was seriously limited, or the problem would be ejected from the machine to await the decision (and to await the next period of machine operation at which it could be introduced, often a wait of a sizable fraction of a day).

Men pursuing their various goals normally find it necessary to assign various portions of their problems to subordinates in an organization.
The assignments normally are of two types: those which can be carried out mechanically without exercise of ingenuity (although a high degree of professional competency may be required), and those which call for judgment and authority and which carry more responsibility than that of blind but competent obedience. The first of these types of assignment might be compared to use of compilers and computers in the austere classical manner; the second is more like on-line computing. The fact that formulas cannot suffice for all the decisions made in the simplest of the pursuits of our lives might indicate that the inventive process is required in our attempts to solve our (formally) most difficult problems. Thus, I liken the console operator of an on-line system to a leader of immediately and competently responsive subjects in attempts to solve problems which require exercise of judgement by the leader from moment to moment, rather than from day to day. These subjects comprise the computer system involved in the on-line operation.

In all this, we must recognize that the more we can automate a desired operation, the more time we have available for obscure inventive processes. However, the formulation of a problem in terms of available procedures for automatic processing may impose uneconomical difficulties. The inventor may be able to describe his process of invention explicitly, but it seems likely that the energy and attention expended in this descriptive process will seriously interfere with, or even annihilate, his powers of invention. The question of on-line mathematics is, then, the question of determining when implicit observation, weighing, and exercise of judgement involving one or more highly competent scientists should be preferred to automatic processing of data, with all observation, weighing, and selection of sequence of operations left to automation operating under explicit rules.

THE MATHEMATICS OF GENERAL SHAPES AND FORMS

In seeking an answer to the question of on-line mathematics formulated above, we are naturally drawn to consider those qualities of concrete and abstract devices which are still efficiently recognized by and reacted to by humans. This immediately suggests "eyeballing," and this, in turn, brings to mind topological aspects of mathematics, again in the classical sense (which this time starts with Gauss and continues through many of the less abstract current contributions to topology). However, there are sensory devices which determine general forms and shapes which can be applied in non-topological ways. It seems to be well established that the eye can detect a straight line segment in a photograph while limited resolution and other imperfections effectively hide the segment from automatic recognition by instruments. The power of the human to hear through noise is equally remarkable. Patterns are recognized by humans of low intelligence, while humans of the highest intelligence find themselves unable to explain the process satisfactorily.
Finally, the artistic fitting of theory to fact with a feeling that "a slight modification right there should do the trick" is sometimes known as genius; here it is essential that the theorist be provided with data which makes him believe that the slight modification (or some modification) is needed, and that he be able to obtain experience which lets him "feel" rather than know unequivocally the proper direction and size of the modification.

To list all the interfaces which allow implicit judgement and implicit recognition to enter into mathematical calculations would be undesirable here; the list would be too lengthy and would force me to depart from my subject, and, besides, neither I nor anyone else could possibly present a complete list. We can only try to find the necessary interfaces and mathematics in an implicit way by on-line observation of candidates. Therefore, I shall turn to some examples of profitable on-line calculations.

THE ISOGRAPH AND ITS TOPOLOGICAL PRINCIPLES

I shall not develop the topological tools needed for calculation here, but I shall describe them as I go along. An elementary exposition of many of them can be found in my paper1 and the references in its bibliography. One fundamental problem which has always been present in algebra is the problem of finding the roots of a polynomial. An ingenious on-line device for using topological tools, properly supported electrically and electronically, has been described by R. L. Dietzold2. This was an analogue device.

First, I shall describe a proof of the fundamental theorem of algebra because the principles of this proof are pertinent here. In this proof, I shall denote the polynomial by P and its argument either by a number, a symbol z, or another symbol to be introduced later when the polynomial will be evaluated at all points on a circle. I shall assume that the highest power of the argument appearing in the polynomial is n, and that the coefficient of this highest power differs from zero. Such a polynomial is called a polynomial of n-th degree. The fundamental theorem of algebra is:

THEOREM. If P is a polynomial of n-th degree, where n ≥ 1, in a single complex variable, there is some argument value for which the value of the polynomial is 0.

A proof is along the following lines. First, examine the coefficient of the zero-th power of the argument in P. If this coefficient is 0, then the proposition is correct, for then P(0) = 0. If not, call this coefficient a_0. A polynomial is a continuous function of its complex argument to the complex plane, and hence for any argument value sufficiently close to 0, that is, for any z with |z| small enough, the value of the polynomial will lie as close to a_0 (in the complex plane) as any tolerance specified in advance. At first we specify this tolerance to be |a_0|/2. In particular, we take r_0 to be a positive number such that

    |P(z) - a_0| < |a_0|/2   whenever |z| ≤ r_0.

We now consider a circle in the complex plane containing all values of z with absolute value r, where r is any positive number to be specified. We denote by PR(r) the set of values taken by the polynomial as z takes on all values on the circle |z| = r. For r = r_0 a curve PR(r_0), restrained to lie close to a_0, is illustrated in Figure 1. The important point is that the locus PR(r_0) does not surround the point z = 0.
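The following short numerical sketch (mine, in present-day Python; it is not part of Tompkins' paper, and the function names are invented) traces the locus PR(r) and counts how many times it winds around the origin, which is the quantity the argument that follows turns on: for small r the locus stays near a_0 and winds zero times, while for large r it winds n times.

    import cmath
    import math

    def poly(z, coeffs):
        """Evaluate a polynomial with coeffs[k] the coefficient of z**k."""
        return sum(c * z**k for k, c in enumerate(coeffs))

    def winding_number_about_origin(coeffs, r, samples=2000):
        """Count how many times PR(r), the image of |z| = r under P, winds around 0."""
        total_turn = 0.0
        prev = poly(r, coeffs)                      # start at angle 0 on the circle
        for k in range(1, samples + 1):
            z = r * cmath.exp(2j * math.pi * k / samples)
            cur = poly(z, coeffs)
            total_turn += cmath.phase(cur / prev)   # change in argument, kept in (-pi, pi]
            prev = cur
        return round(total_turn / (2 * math.pi))

    if __name__ == "__main__":
        coeffs = [1.0, -0.5, 0.0, 2.0]                       # P(z) = 2z^3 - 0.5z + 1, so a_0 = 1, n = 3
        print(winding_number_about_origin(coeffs, 0.1))      # small circle: 0 (locus stays near a_0)
        print(winding_number_about_origin(coeffs, 10.0))     # large circle: 3 (locus winds n times)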
If we think of the circle |z| = r_0 being traversed by a moving point, and of a small being at z = 0 watching the image P(z) as z traverses this circle, then this being might have to move his head back and forth like a spectator at a tennis match. But when the transit of the circle has been finished, the being at the center will again look at the point on PR(r_0) where his vigil started, and his neck will be untwisted.

[Figure 1. The circle of points at distance |a_0|/2 from a_0; the curve inside this circle is the locus PR(r_0). Circular locus |z| = r_0, shown against the real and imaginary axes.]

I assert that it is a comparatively easy job to present PR(r) on the screen of a cathode ray tube for any fixed value of r in the interval (0,1] (that is, for 0 < r ≤ 1).

[Table (garbled in the source): feature comparison of commercial display consoles — a typewriter (for comparison), the Data Display dd80, General Dynamics SC 1090, Bunker Ramo BR85, an ITT Information Display Console, and IBM and Raytheon consoles — with columns for features and prices, and remarks such as "up to 64 displays per control unit at $25K extra," "has associated film recording capability," "joystick provides equivalent of light pen," "includes color presentation," and "buffer memory optional."]

[Table 3. Relationship of Console Features to Generic Man/Machine Operations — the body of the table is garbled in the source; the legible entries relate console features to functions such as "generate visual displays for storage," "send a message to other consoles or to the computer," and "initiate action of program," with a note that features can be implemented in hardware or software.]

It may be that the item selected requires further detailing. Hence, a second display automatically appears, forcing the operator to make a further choice, as in Figure 3. Actions for manipulating the "list" display require only the variable function keys, an alphanumeric display capability, and the light pen for selection. The selection of CORE MEMORY from the list in Figure 3 may force the format of Figure 4 on the CRT, requiring the indicated input from the operator. The symbol " " represents the "marker" and indicates the first entry point when the alphanumeric keyboard is used for data entry. This marker moves either under hardware or software control to the next succeeding underline as each character is entered. For this latter action of handling "format" displays, the marker and data entry keyboard are required, completing the explanation of features needed to accomplish the first function of Table 3.
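A toy model of the "list" and "format" console dialogue just described is sketched below in present-day Python (not part of the paper; the field names, prompts, and function names are invented). A light-pen pick from a list display forces a fill-in format display, and the marker conceptually advances to the next blank field as each entry is completed.

    DIAGNOSTIC_CHOICES = ["CORE MEMORY", "TAPES, CHANNEL A", "DRUM 1", "PRINTER 1"]

    FORMATS = {
        "CORE MEMORY": ["TEST LOCATION FROM", "TEST LOCATION TO", "BIT PATTERN (OCTAL)"],
    }

    def list_display(choices, picked_index):
        """Simulate a light-pen selection from a 'list' display."""
        return choices[picked_index]

    def format_display(fields, operator_entries):
        """Simulate a 'format' display: the marker visits each field in turn."""
        filled = {}
        for field, entry in zip(fields, operator_entries):
            filled[field] = entry          # marker moves on to the next underline after each entry
        return filled

    if __name__ == "__main__":
        choice = list_display(DIAGNOSTIC_CHOICES, 0)          # operator picks CORE MEMORY
        request = format_display(FORMATS[choice], ["0", "77777", "52525"])
        print(choice, request)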
[Figure 3. List of choices, "System Diagnostics": AVAILABLE SYSTEM DIAGNOSTICS (CHOOSE ONE) — CORE MEMORY; TAPES, CHANNEL A; TAPES, CHANNEL B; DRUM 1; DRUM 2; DISC 1; DISC 2; CARD READER; CARD PUNCH; PRINTER 1; PRINTER 2.]

[Figure 4. Format for entering requested information: CORE MEMORY DIAGNOSTICS — TEST LOCATION: ____ TO ____; BIT PATTERN (OCTAL): ____ (IF NONE STATED, STANDARD IS USED).]

APPLICATIONS

The on-line CRT display console is rapidly reaching a degree of acceptance in the commercial environment, as evidenced by the availability of display devices as standard peripherals to many of the newly announced computer systems. The development of application areas and user techniques has, however, lagged, so that there is still considerable potential user caution and hesitancy. Three major application areas have developed: demand query, monitoring, and analysis.

In the first, a service is provided to a person who wishes immediate information that must be extracted from a large mass of data and may require some rudimentary analysis or transformation before it is useful to him. Examples are: a manufacturing planner wants to know the status of a release order for certain parts; an investor wants to know the status of his account with his broker; and a policy holder wants to know how much he can borrow on his life insurance policy. Included as candidate applications for demand query are: reservations (travel, hotel, etc.), merchandising inventory, manufacturing inventory, manufacturing and production control, insurance policy information, credit information, bank or brokerage customer account information, and real estate information.

[Table 4. Display Console Operating Characteristics for Selected Applications — the body of the table is garbled in the source; its column groups cover man/computer interaction, input, output, and general characteristics.]

... Max(X_i, X_{i+1})F_{jk+1} for the m entities in the subrange X_i, X_{i+1} and for each i, i = 1, 2, ..., n-1. Note that if Max(X_i, X_{i+1})F_{jk} = Max(X_i, X_{i+1})F_{jk+1}, it must be at X_i or X_{i+1} for p > 2. If this is the case, say at X_i, then the ordering is based on F_{jk}(X_{i+1}) > F_{jk+1}(X_{i+1}). For p = 2, it is based on ...

6. Working from smallest to largest values of X_i and largest to smallest values of Y, compute contributions to the total area and first moment using the appropriate algebraic expressions which are based on the geometric property of each entity. Area contributions come only from alternate regions bounded by the X subrange boundaries and ordered entities in pairs. This correctly chooses the inside area contributions. The area and moment equations are

    Area = Σ_{i=1}^{n-1} ∫_{X_i}^{X_{i+1}} [F_{j1} - F_{j2} + F_{j3} - ... + F_{j,m-1} - F_{j,m}] dX + Σ_{i=1}^{L} A_{ci}

    Moment = Σ_{i=1}^{n-1} ∫_{X_i}^{X_{i+1}} X [F_{j1} - F_{j2} + F_{j3} - ... + F_{j,m-1} - F_{j,m}] dX + Σ_{i=1}^{L} X_{ci} A_{ci}

    X_c = Moment / Area

where X_c is the X coordinate of the centroid. (A sketch of this step appears after the list.)

7. Steps 3 through 6 are repeated, interchanging X and Y.
8. The centroid point is displayed contained in a small box, the X, Y values of the point are placed in the X, Y registers, and the properly scaled X-calculated area and Y-calculated area are stored in the second set of X, Y registers. "Reject/Accept" is put in the message register.

9. The user may push the accept button; the point is made a dot, the box and message disappear, and the program terminates. If the user presses the reject button instead, the point, box, and message disappear, but the contents of the X, Y registers are not changed, and the program terminates.

During steps 1 through 7 the message register contains "Computation Proceeding". If the parameters chosen are limited to point-area centroids instead of closed polyconics, then the program computes the centroid of the system of these centroid points.
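The sketch below (present-day Python, mine rather than the paper's; names are invented) shows only the step-6 arithmetic for one X subrange whose inside region is bounded by an ordered pair of entity functions, with numerical quadrature standing in for the paper's closed-form algebraic expressions per entity type. Optional point-area entities contribute the A_ci and X_ci·A_ci terms.

    def area_and_centroid_x(f_upper, f_lower, x_lo, x_hi, point_areas=(), steps=1000):
        """Return (area, x_centroid) for the region between f_upper and f_lower on
        [x_lo, x_hi], plus optional (x, area) point-area entities."""
        dx = (x_hi - x_lo) / steps
        area = 0.0
        moment = 0.0
        for k in range(steps):
            x = x_lo + (k + 0.5) * dx                 # midpoint rule
            strip = (f_upper(x) - f_lower(x)) * dx    # the alternating sum F_j1 - F_j2
            area += strip
            moment += x * strip
        for xc, ac in point_areas:                    # the A_ci and X_ci * A_ci contributions
            area += ac
            moment += xc * ac
        return area, moment / area

    if __name__ == "__main__":
        # unit square between y = 1 and y = 0 on [0, 1]: area 1, centroid at x = 0.5
        a, xc = area_and_centroid_x(lambda x: 1.0, lambda x: 0.0, 0.0, 1.0)
        print(round(a, 6), round(xc, 6))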
CONCLUSIONS

The major limitation of the graphic system is that the software would be highly complex and extensive, which, like APT systems, is costly to produce. Use of standard programming languages and isolation of required basic machine language subroutines will minimize this problem. The volume of data required to represent graphics in visual form is very high, requiring large, cheap storage systems. This is especially true when mathematical capabilities are implemented for processing of higher degree curves and three-dimensional curve and surface perspective projections into two dimensions in a general way. There is expected to be an extensive evolution of console and application program features as graphic systems come into general use. For this reason, modularity in software implementation and large amounts of financial resources will be needed. It is expected that the potential revolutionary effect of graphic systems on design activities will justify such expenditures.

Although this system as partially described here is idealized and not implemented, many of the features will be realized in systems being planned and in development at the Digigraphic Laboratories and other facilities of Control Data Corporation. It is believed that the goals of this graphic system are met to a great degree by the features described. It is a practical system in that all the features can be implemented within the current technological state. Powerful features are found in the macro capability, the data structure and grouping capabilities, and in the potential power of the application interface. Economy is found in the relatively simple console hardware and in emphasis on minimization of the computer time required. Sophistication is found in the use of order dependent operations, macro availability, and potentially in applications which can be easily included in the system. These last mentioned features make the system easy to use on its simplest level, but a challenge to the imaginative and strongly motivated person.

ACKNOWLEDGMENT

The ideas and basis for this system have come from many sources. Besides the authors in the Bibliography, the following people have significantly contributed: Thurber J. Moffett, Jack Antchagno, Joe Koenig, Robert Cushman, Henry Haig, Russel Peterson, James Thebodeau, Chester Small, Paul Downey, John T. Gilmore, Kenneth Gielow, Frank Greatorex, Edward Fitzgerald, Phillip Peterson, Richard Stewart, and Dan Paymer.

REFERENCES

1 E. L. Jacks, "A Laboratory for the Study of Graphical Man-Machine Communication", Fall Joint Computer Conference Proceedings, Oct. 1964.
2 M. R. Davis and T. O. Ellis, "The RAND Tablet: a Man-Machine Graphical Communication Device", Fall Joint Computer Conference Proceedings, Oct. 1964.
3 I. E. Sutherland, "Sketchpad, a Man-Machine Communication System", Spring Joint Computer Conference Proceedings, May 1963.
4 Norman H. Taylor, "A Fully Integrated Digitalgraphic Processor", IRE Professional Journal on Instrumentation, page 377, 1962, and private correspondence to Donn B. Parker.
5 R. Stotz, "Man-Machine Console Facilities for Computer-Aided Design", Spring Joint Computer Conference Proceedings, 1963.
6 B. M. Gurley and C. E. Woodward, "Light-Pen Links Computer to Operator", Electronics, Vol. 32, No. 47, November 20, 1959.
7 D. T. Ross and J. E. Rodriguez, "Theoretical Foundations for the Computer-Aided Design System", Spring Joint Computer Conference Proceedings, 1963.

APPLICATIONS

ON-LINE SCIENTIFIC APPLICATIONS—Dr. David A. Pope ... 102
STRUCTURING COMPILERS FOR ON-LINE SYSTEMS—Dr. R. B. Talmadge ... 105
THE QUIKTRAN SYSTEM—John H. Morrissey ... 116

Dr. David A. Pope*

On-Line Scientific Applications

*Chief of Programming, UCLA Computing Facility

CONCEPTS OF ON-LINE COMPUTING

ONE OF THE MOST INTERESTING recent developments in computing is the on-line concept of rapid interaction between the digital computer and the user. This development has immediate application in the area of scientific computing, particularly in those problems which might be characterized as research computations, rather than production computing. In the former type of problem, very often the specific algorithms to be used in the numerical solution are unknown, and a major part of the problem is to find a reasonable set of such algorithms and to demonstrate that they do, indeed, work for the specific problem. In this effort, the possibility of a dialogue between the user and the computer presents some real advantages.

An on-line computing system in this context might be characterized in the following way. Access to the computer should be immediate, at least on the user's time scale. In practice, this means a delay never exceeding a few seconds, and usually less than 0.1 second. The results of a computation should be available immediately, and in a form easy for a human user to comprehend. Finally, the programming should be easily modifiable so that changes in the algorithms can be made quickly, on the basis of intermediate results.

The computational requirements both for the problem being investigated and for the software package necessary to implement the on-line system mean that a digital computer of moderate to large size is involved in the system. This means that, for economic reasons, most on-line computing systems will need to have the large central computer shared by several users; thus some kind of time sharing scheme must be provided which does not conflict with the immediate availability requirement above. Therefore we are led to the concept of a central computer provided with a number of time shared user consoles, each console provided with convenient input-output devices.

THE CULLER-FRIED APPROACH TO ON-LINE COMPUTING

One such approach is the on-line system developed by G. J. Culler and B. D. Fried1. In this system the user is provided with two typewriter keyboards for input, and a storage CRT for output. The data is presented to the user and computed functionally.
That is, the basic unit which the user manipulates is a single real or complex valued function, which is represented in the computer by a pair of vectors of up to 125 points each. The output CRT displays a pair of vectors as a set of point pairs (X_j, Y_j), j = 1, 2, ..., 125, with the adjacent points (X_j, Y_j) and (X_{j+1}, Y_{j+1}) connected by a straight line. This gives the appearance on the CRT of a smooth curve, which may be interpreted as a real valued function, or as a curve in the complex plane.

One typewriter keyboard is used for the designation of storage addresses, each address being given by a number and a letter, such as 1A, 3F, etc. This keyboard is also used for typing alphanumeric information on the display CRT. The other keyboard is used to designate operations on the functions. On the basic levels, the computer can be operated as a glorified desk calculator. The arithmetic operations add, subtract, multiply, and divide are supplemented by the commoner mathematical functions such as square root, sine, cosine, logarithm, exponential, forward difference, and sum. Each operation key, when depressed, initiates a subroutine in the computer which performs the operation on the entire function. To supplement these, there are also operations to load and store functions, and to display one or more functions on the CRT. Thus the user can manipulate functions, displaying the results instantly whenever a display is wanted.

As an example of this, we may take the generation and display of the function y = e^(-x^2/2), for -1 ≤ x ≤ +1. The operation keys to be depressed would be: J-generate, square, divide, -2, exponential, store, 1, E, display, 1, A. The J-generate is an operation which generates the standard linear function y = x, for -1 ≤ x ≤ +1. This function is then squared, divided by the constant function -2 (entered by numerical keys on the operation keyboard), exponentiated, stored in location 1E, and then displayed on the CRT.

However, this system would be of limited usefulness without some method of conveniently building up more complex algorithms from the simple ones provided. This is done by using the concept of console programming. A special program key is provided which, when depressed, puts the computer in a program writing mode. While it is in this mode, a list is formed of any keys depressed, and this list is assigned to a chosen vacant key. Thereafter, whenever the chosen key is pushed, the computer automatically pushes the entire list of keys which was assigned to it. Furthermore, one key on the list may itself be a programmed key and call on a sublist of operations. This is done, of course, by automatically providing appropriate linkages between the basic subroutines. Thus, by nesting subprograms, a single key may be constructed to perform an algorithm of almost any complexity, involving perhaps thousands of operations. In this way, the full power of the digital computer can be utilized, and yet controlled and monitored in detail by the user. This idea also simplifies the modification of programs by the user, since any programmed key, being a subroutine, can be changed without modifying the program of which it is a part, and without influencing programmed keys which went to make it up. In this way, a user can explore his problem, making up and discarding programs, while watching the results of his computation on the CRT.
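A minimal sketch of this style of console programming follows (present-day Python, not from the paper; the key names and class layout are invented). Every key acts on a whole 125-point function, and a programmed key simply replays a recorded list of keys, which may itself contain programmed keys.

    import math

    N = 125  # the paper's functions are vectors of up to 125 points

    class Console:
        """Toy model of the Culler-Fried operator keyboard."""
        def __init__(self):
            self.acc = [0.0] * N                # the working function
            self.store = {}                     # lettered/numbered storage, e.g. "1E"
            self.programs = {}                  # programmed keys -> recorded key lists

        def press(self, key, arg=None):
            if key in self.programs:            # a programmed key replays its recorded list
                for k, a in self.programs[key]:
                    self.press(k, a)
            elif key == "J":                    # J-generate: y = x on [-1, +1]
                self.acc = [-1.0 + 2.0 * j / (N - 1) for j in range(N)]
            elif key == "square":
                self.acc = [y * y for y in self.acc]
            elif key == "divide":
                self.acc = [y / arg for y in self.acc]
            elif key == "exp":
                self.acc = [math.exp(y) for y in self.acc]
            elif key == "store":
                self.store[arg] = list(self.acc)
            else:
                raise ValueError("unknown key " + repr(key))

        def record(self, name, key_list):
            self.programs[name] = key_list      # assign a key list to a vacant key

    if __name__ == "__main__":
        c = Console()
        # one programmed key for the paper's example y = exp(-x**2 / 2), stored in 1E
        c.record("gauss", [("J", None), ("square", None), ("divide", -2.0), ("exp", None), ("store", "1E")])
        c.press("gauss")
        print(c.store["1E"][0], c.store["1E"][N // 2])   # endpoints about 0.6065, centre 1.0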
A computing algorithm will thus evolve which will solve his problem, or else perhaps enough will have been learned about its behavior so that it can be reformulated and a fresh attempt made.

EXISTING CULLER-FRIED ON-LINE SYSTEMS

The Culler-Fried system, as described above, was first developed at the TRW Canoga Park facility, and a two-station prototype system is now there, using one RW 400 computer for each console, with no time sharing. An advanced model is currently being delivered to TRW/STL in Redondo Beach, which will have four consoles, time sharing one RW 340 computer. Also, a Culler-Fried system similar to the prototype system is being developed at the UCLA Computing Facility, which uses the IBM 7094. At present the UCLA system has one console and ties up the 7094 completely, but in a few months it is planned that the Culler-Fried system, along with other on-line systems such as the PAT language, will share the 7094 memory together with standby 7094 problems, and the on-line computation will be done on an interrupt basis, but without the necessity of exchanging memory.

KINDS OF PROBLEMS AMENABLE TO THE CULLER-FRIED APPROACH

Various problems have been tried on the prototype system in Canoga Park for approximately two years. The problems for which this computing system is particularly suited may be characterized as "fixed point" problems in a function space. That is, a function f is thought of as an element or "point" in a set of functions, or function space. There is, in this type of problem, an operator T on the space, which maps the function space into itself, and the problem is to find a function f which is "fixed" under T; that is, f satisfies the equation T[f] = f.

A simple example of this kind of problem is the solution of an ordinary differential equation y' = f(x, y) with initial condition y(a) = b. If this is written in integral form, we have

    y(x) = b + ∫_a^x f(t, y(t)) dt

We identify the integral operator T as

    T[y] = b + ∫_a^x f(t, y(t)) dt

and we have the problem expressed in the fixed point form T[y] = y. The Picard iteration for the solution can then be written y_{n+1} = T[y_n], yielding successive guesses y_1, y_2, ... from an initial guess y_0. Other problems with essentially this same structure are found in partial differential equations, calculus of variations, integral equations, control theory, and other mathematical and physical problem areas.

Given such a formulation of a problem, the problem analyst uses his experience to set up algorithms for solving the fixed point problem, usually with some iterative scheme. It becomes immediately apparent during the computation whether or not the algorithm is converging and, if it is not, just where the difficulty lies. This information is then used by the analyst to revise the algorithm and try again. In practice it is found that a great deal of insight into his problem can be obtained by a skilled user. One of the most significant advantages of this type of on-line system, in fact, seems to be that the human user of the computer does not need to be a computer expert, or indeed to know very much about computers. What he does need to know is that area of mathematics and numerical analysis which is involved in his problem. The role of the computer is to do the tedious work, including the tedious programming bookkeeping, and free the problem originator to think about his problem.
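A small sketch of the Picard iteration on a sampled function, in the spirit of the console approach, is given below (present-day Python, mine rather than the paper's; the trapezoidal quadrature and the check problem y' = y, y(0) = 1 are my choices).

    import math

    def picard_step(f, a, b, xs, y):
        """Apply the integral operator T[y](x) = b + integral from a to x of f(t, y(t)) dt,
        using the trapezoidal rule on the sample points xs."""
        out = [b]
        integral = 0.0
        for i in range(1, len(xs)):
            h = xs[i] - xs[i - 1]
            integral += 0.5 * h * (f(xs[i - 1], y[i - 1]) + f(xs[i], y[i]))
            out.append(b + integral)
        return out

    if __name__ == "__main__":
        a, b = 0.0, 1.0
        xs = [a + i * 0.01 for i in range(101)]      # sample points on [0, 1]
        y = [b] * len(xs)                            # initial guess y_0(x) = b
        f = lambda t, yt: yt                         # the equation y' = y
        for _ in range(25):                          # successive guesses y_1, y_2, ...
            y = picard_step(f, a, b, xs, y)
        print(y[-1], math.e)                         # y(1) should approach e

Watching the successive iterates on a display is exactly the kind of convergence monitoring the text attributes to the on-line user.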
REFERENCE

1 G. J. Culler, B. D. Fried, and D. A. Pope, "The TRW Two-Station, On-Line Scientific Computer", TRW/STL Physical Research Division Report 8587-6002-RU-000, Vols. II, III, IV, July 1964.

Dr. R. B. Talmadge*

Structuring Compilers for On-Line Systems

*Manager, Experimental Systems Group, Los Angeles System R&D Department, IBM.

INTRODUCTION

RECENTLY a number of systems† have been produced for on-line operation of a computer job shop which have aimed at improving user productivity by allowing continuous man-machine interaction, while at the same time preserving compatibility with previous systems. Most of these have employed modified versions of compilers written for a batch processing environment. As might be expected, the internal techniques used for handling multiple input strings, especially resource sharing techniques such as multiprogramming and program commutation (time sharing), conflict with the single string orientation of the original implementation. In particular, the program segments themselves, and the intermediate data retained, are much too large, so that system response is degraded because of wasted blocks of core and excessive swap times. Most of the proposed remedies, such as in-core compilers and the use of read-only code, while salutary, apply equally well to any program operating within that environment.

†Three well-known examples are discussed in references 1, 5, and 8.

The question naturally arises as to whether distinctly different principles of organization, as contrasted to techniques of mechanization, are desirable. This in turn leads to consideration of the compiler as a system component, to its role in the relation between system and user, and to the question of how much compiler design is influenced by the process one wishes to optimize (in whatever sense that word is being used). It is the purpose of this paper to suggest that a critical examination of the overall objective of the compilation process leads to an organization founded on the requirements of providing optimal user/system interaction; that this structure is, to a large extent, independent of internal techniques; and that past experience will therefore prove a valuable guide to future development.

USER/SYSTEM INTERACTION

The basic starting point in this exploration is the functional relationship between user and system. This is outlined in Figure 1, which illustrates the paths of information flow and the functions involved. For the system, these functions are compilation and execution; for the user, formulation, modification, and checkout.

[Figure 1. User/System Interaction — information flows between the user functions (formulation, modification, checkout) and the system functions (compilation, execution).]

Interaction then occurs in the following way.

1. The user's original formulation of his problem in some source language, or combination of source languages, is entered into the system through the compiler.

2. At some point in the course of the compilation, misuse of the language is detected. Information is returned to the user which causes him to modify the original formulation.

3. When the modified formulation seems satisfactory to both user and system, the problem proceeds to execution. As a result, information is generated which leads the user to produce further modifications, and statements intended to help check out the program.

4. Eventually (it is hoped) execution is successful, and the program is considered operational.
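The interaction cycle just enumerated can be written down as a small driver loop; the following sketch (present-day Python, not from the paper; all function names and the stub behaviour are invented) is one way to make the four steps concrete.

    def interaction_loop(source, compile_fn, execute_fn, fix_fn, max_rounds=10):
        """Drive the formulate/compile/modify/execute/checkout cycle of Figure 1."""
        for _ in range(max_rounds):
            program, errors = compile_fn(source)
            if errors:                           # step 2: language misuse reported to the user
                source = fix_fn(source, errors)
                continue
            ok, findings = execute_fn(program)
            if ok:                               # step 4: execution successful, program operational
                return program
            source = fix_fn(source, findings)    # step 3: further modification and checkout statements
        raise RuntimeError("still not operational after max_rounds")

    if __name__ == "__main__":
        compile_fn = lambda src: (src, ["undefined symbol"] if "BUG" in src else [])
        execute_fn = lambda prog: ("LOOP" not in prog, ["wrong result"])
        fix_fn = lambda src, diags: src.replace("BUG", "").replace("LOOP", "")
        print(interaction_loop("X = 1 BUG LOOP", compile_fn, execute_fn, fix_fn))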
It is important to realize that this process is essentially the same irrespective of the number and timing of the interactions, and the rate of information flow in the data paths. The picture is not affected, for example, whether or not compilation proceeds to a nominal end before messages are conveyed to the user, or whether execution is interruptable immediately upon the occurrence of some unforeseen condition. Thus, although an optimal system involves some compromise between minimizing the number of iterations of the process, the time per iteration, and the cost of the equipment, there is firm reason to believe that certain structural principles remain invariant to any adjustment.

This discussion, of course, covers only a portion of the actual system. To the user, however, it is the most important part; indeed, it is virtually the only part. For the term compilation, as used here, represents the entire program preparation activity. Thus a compiler is a system processor, primarily a language processor, whose function is to turn the user-prepared formulation of his problem into a form suitable for use within the hardware-software complex. This is an enlargement, perhaps, of the traditional notion of compiler, but it is pertinent to the course of the discussion. As a complement to this, the term execution is used to represent the carrying out of the intent of the formulation; that is, the actual performance of the stated algorithms.

Thus the interface between compilation and user is fixed, as determined by external considerations; the interface between compilation and execution is variable. As will be seen later, adjustment of this interface can be used to improve the overall operating efficiency. To see how this is possible, to get an idea of the minimum point to which compilation should go and the maximum beyond which it should not proceed, it is necessary to make a closer examination of the user functions (Figure 1).

Formulation. Although the mechanics of formulation may be rather complicated, interface with the system reduces to expressing the problem in processing languages. For this discussion interest centers not in the details of a particular language, but in characteristics observable of programming language usage in general. The important point is that the number of problem oriented languages in use today is large, and the number of dialects within these languages is even larger. This trend is not likely to be reversed, however much one might wish to the contrary. Linguistic expression is rather personal, so that users tend to specialize in forms which suit their temperament as well as their particular class of problems. Furthermore, even if one were to produce a language capable of expressing all data processing activity, balkanization would occur; not especially out of pertinacity, but because user efficiency is enhanced by languages which permit simple and clear expressions of the problem. Therefore, it is really the responsibility of the compiling portion of the system to handle a variety of languages, and to do so in such a way as to permit easy intercourse between them. At the same time, it remains a fact that processing requirements are much the same for large classes of problems. Hence, underneath their obvious differences, problem oriented languages fall naturally into families whose members express virtually the same functional capability. An outstanding example of this is that class whose most well known members are FORTRAN, COBOL, and ALGOL.
Modification. There are two ways that modifications to existing programs are done in current systems, depending upon the facilities provided and the preference of the individual user. First, changes may be expressed in modification units which are independent of the content of the program. A common method, for example, is the ALTER mechanism for replacement, insertion, or deletion in card record files. Second, changes may be expressed in terms of the content of the program; that is, in units depending upon the formal syntax of the source language. For instance, a particular symbol in the program might be replaced, or a string of characters inserted at some arbitrary point. For maximum use and flexibility, the compiler must find a simple way of reconciling these two distinct types of procedure.

Checkout. Checkout embraces all activity required to obtain a correct program statement, from detection of misuse of the language to debugging the logic of the problem. The former is clearly the province of the compiler. As for the latter, there are two points of view. Some users regard debugging as a general problem which should be handled in a general way. If this principle is adopted, it leads to the formulation of a separate debugging language, and so to another language responsibility for the compiler. On the other hand, some prefer to use statements in the original source language, and hence to use one of the two modification methods to insert the debugging requests. In either case, the volatile nature of these changes requires that they be inserted at a point which is effectively beyond the permanent information retained by the system. Most important, however, is that changes, whether permanent modifications or temporary debugging statements, should be on an incremental basis; that is, should not require the entire resubmittal of the program. Reduction in the amount of superfluous data transmitted between elements of the complex is the biggest single factor in obtaining superior performance from both user and system.

FIRST PHASE OF COMPILATION

These considerations, which obviously apply to any system, lead naturally to the expectation that the convergence of function for a related family of languages can be profitably paralleled by a similar convergence in the compilation process; and hence, that the first phase of compilation should be the production of a standard program representation, free of external irregularities, and permitting easy incremental modification. This is, in fact, the method adopted in most recent compilers.* It results in a functional organization like that illustrated in Figure 2.

[Figure 2. First Phase of Compilation (Family of Languages) — source languages 1 through n pass through individual conversion processors to produce the primary representation; error messages return to the user; modification statements and debug statements enter through the modification and debug processors.]

1. The source languages of the family are treated by individual conversion processors to produce a primary (internal) representation of the program.

2. Errors detected by these processors are sent to the user by whatever means are natural for the system. In an off-line system, for example, they would be collected and dispatched as a group. In an on-line system, with the user at a responsive input device, they would be sent immediately, as individual messages leading to possible intervention.
3. In either case, the resulting modifications are handled by a modification processor which replaces individual statements in the primary representation.

4. Similarly, debugging statements which the user generates as a result of execution are passed through the debugging processor.

*Such as 7090/94 IBCBC (IBM), 1107 FORTRAN (Computer Sciences Corporation), and QUIKTRAN (IBM). For an excellent discussion of the primary representation used in the last of these, see reference 4.

Of course, the modification and debugging processors, which are shown in the diagram as separate from the conversion processors because they represent separate functions, would, in practice, be entirely absorbed in them. As for the conversion processors themselves, their functional capability is delimited by the properties and information desirable in the primary representation. Some of these can be fairly clearly established at this time.

First, since the representation is to be free of language errors, conversion must, at least, accomplish a complete lexical and syntactical analysis of the program. Second, as the representation contains all the symbolic information of the original, and serves as the basis for modifications, it must contain explicit markers for modification units and be structured so as to facilitate these changes. This puts an upper limit to the amount of processing which can be done in conversion. For, since even the simplest external change can have profound effects on the meaning of a program, it is exceedingly wasteful to attempt any instruction generation, or even to make tentative attempts at optimizing execution efficiency. At the same time, good practice dictates that as much information as will be useful should be collected from the program string as it is converted, and placed in a form most suitable for later processing. Hence the representation will consist of, and the conversion processors must produce, some combination of lists, tables, and formal expressions which display the explicit structure of the program (particularly loops and transfers), the usage of symbols, and the characteristics of the data; and facilitate the functions of optimization, instruction generation, and (as will be seen) direct execution.

Of the many advantages of this organization, the following are probably the most noteworthy.

1. Production of the primary representation effectively isolates the external world from the interior of the complex. This makes a substantial part of the system immune to language changes, emendations, and additions. It also permits the introduction of a new language into the family by simply constructing another conversion processor, most of whose pieces are already available.

2. Retention of the primary representation within the system as the permanent symbolic form of the user's program permits additional useful services to be supplied, as well as providing the user with the operational convenience of handling small volumes of data. Special system processors can be easily designed to exploit the precise, explicit information available. A natural one, for example, would be a flow charting program.

3. Data flow is, of course, drastically reduced. Further efficiency is obtained because recompilation starts from an advanced base: an appreciable fraction of the work of compilation is expended in the data gathering and syntactic analysis of conversion. This, in effect, gives a specific meaning to the somewhat loose term incremental compiling (and probably the only sensible one).
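To make the idea of statement-level incremental modification concrete, the following sketch (present-day Python, not from the paper; the marker scheme and operation names are invented) applies ALTER-style replace, insert, and delete requests against a primary representation held as an ordered list of (marker, statement) pairs.

    def apply_modifications(primary, mods):
        """primary: list of (marker, statement);
        mods: list of (op, marker, statement) with op in 'replace', 'insert_after', 'delete'."""
        result = list(primary)
        for op, marker, stmt in mods:
            idx = next(i for i, (m, _) in enumerate(result) if m == marker)
            if op == "replace":
                result[idx] = (marker, stmt)
            elif op == "insert_after":
                result.insert(idx + 1, (marker + ".1", stmt))   # derived marker for the new statement
            elif op == "delete":
                del result[idx]
        return result

    if __name__ == "__main__":
        primary = [("10", "ASSIGN A"), ("20", "ADD A B"), ("30", "PRINT B")]
        mods = [("replace", "20", "ADD A C"), ("insert_after", "20", "STORE C")]
        for marker, stmt in apply_modifications(primary, mods):
            print(marker, stmt)

Only the changed statements travel between user and system, which is the data-flow reduction the text emphasizes.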
There are thus persuasive reasons, dictated by general considerations of user convenience and overall efficiency, for adhering to the structure so far described. It is, therefore, of some interest to turn to the systems functions to see if these reasons are reinforced, and, if so, to determine how much can be deduced about the structure of the rest of the compiler.

DIRECT EXECUTION

With the basic statements of the program resident in the system in internal form, the next step is to consider the boundary separating compilation and execution. The primary factor influencing adjustment of this boundary is the desire to attain a favorable resolution of the inevitable conflict between service to the individual user and service to an entire group of users. Here, for the first time, a distinct difference becomes apparent between an off-line system and a responsive on-line system. In the latter case, there are a substantial number of programs for which the overall efficiency will be markedly improved by direct (interpretive) execution from the primary internal representation. This arises for the following reasons:

1. There are a number of jobs in any installation which are processed repeatedly by the compiler, and (partially) executed many times solely for the purpose of executing correctly once. Examples abound, but the most obvious ones are student problems at universities, and the desk calculator type so common in the open shop installations of the aerospace industry. On-line systems are, of course, aimed directly at this class of personal computing.* Direct execution can significantly reduce the total effort by bypassing much work that is ordinarily wasted in compiling, re-compiling, and executing incorrect instructions. For in this mode of operation, the immediate availability of debugging information, in fact the absolute necessity of supplying error information which the user could not have anticipated he would need, together with his presence on-line, cuts short the execution of most unwanted sequences.

2. One of the touted advantages of an on-line system is the use of the computer as an intelligence amplifier. In this form of operation the user designs his program as he goes along, presumably building on the results of previous (partial) executions to decide what to do next. Direct execution supplies a convenient, readily available tool for conversational interaction between program and user. Furthermore, even though such programs are undebugged almost all the time, the interpreter is in full control and so supplies automatic protection for other programs concurrent in the system. Errors which occur result in messages to the individual user without requiring system interruption. The resultant reduction in overhead tends to improve overall efficiency and maintain satisfactory response times.

*To use the apt classification of R. L. Patrick7.

None of this applies to a batch processing system, because the real advantages of the direct method (which still exist) are nullified by the temporal length of the communication paths, and the right granted every program to exclusive use of the machine. Nevertheless, however well suited direct execution is for some problems, there are others, more numerous in most installations, for which the necessity of repeated high level interpretation is a serious drawback.
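The protective, message-producing character of direct execution described under reasons 1 and 2 can be sketched very simply (present-day Python, mine rather than the paper's; the statement set and verb names are invented): the interpreter walks the primary representation, and a run-time fault becomes a message for the individual user rather than a system stop.

    def interpret(statements, console_log):
        """Execute (marker, verb, args) statements directly from an internal representation."""
        env = {}
        for marker, op, args in statements:
            try:
                if op == "SET":
                    env[args[0]] = float(args[1])
                elif op == "DIV":
                    env[args[0]] = env[args[1]] / env[args[2]]
                elif op == "PRINT":
                    console_log.append(args[0] + " = " + str(env[args[0]]))
                else:
                    raise ValueError("unknown verb " + op)
            except Exception as exc:                       # fault: report it, keep full control
                console_log.append("error at statement " + marker + ": " + str(exc))
                return env
        return env

    if __name__ == "__main__":
        log = []
        prog = [("10", "SET", ("X", "6")), ("20", "SET", ("Y", "0")),
                ("30", "DIV", ("Z", "X", "Y")), ("40", "PRINT", ("Z",))]
        interpret(prog, log)
        print("\n".join(log))   # the divide-by-zero becomes a user message, not a system interruption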
Even the least experienced user can, and will, write programs for which this would become a problem for him: programs, for example, with loops traversed many times, programs which once checked out are to be used repeatedly, or programs so large as to overstep reasonable limits for interpretation. These considerations are reflected in current responsive on-line systems, which exhibit a complete dichotomy of attitude. On the one hand, systems which encourage the user to build his programs on a piecemeal basis are dedicated to a particular language and are fully interpretive.* On the other hand, compatible systems provide for interpretation only as a user program, thereby severing any direct connection between this mode of operation and the system processors. The result is an appreciable drop in effectiveness, and hence an overall diminution of system utility. Therefore a system which is to be seriously regarded as general purpose must find a way to integrate both modes easily into a common framework.

*Of these, JOSS9 and QUIKTRAN3 are perhaps best known.

Figure 3 illustrates how this can be done. Starting from the primary representation, processing follows one of two paths: either to direct execution, or to compilation in the more traditional sense.

[Figure 3. Second Phase of Compilation — from the primary representation, one path leads through (weak) optimization to direct execution; the other leads through optimization, instruction generation (one generator per verb), first assembly, and final assembly to program text, which joins previous program text and passes to execution control and execution.]

The decision as to which path to follow might be left entirely to the user. More likely the total interest would be better served by having system control make the choice subject to general rules laid down by installation management. A simple rule, for example, which would serve many installations well, would be to go to direct execution with (pieces of) programs which are yet undebugged. More sophisticated rules depend upon how much the installation is willing to acknowledge direct execution as an important option in any well designed on-line system. An interesting observation on this point is that since many currently announced computers use read-only memory for control and instruction interpretation, a little care in the design of the primary representation would make it possible to do most, perhaps all, of the interpretation in the hardware. This is tantamount to direct execution of a symbolic program (hence the choice of a name), a subject of some interest today6. The gain in efficiency would greatly enlarge the class of programs for which direct execution is a distinct advantage.

SECOND PHASE OF COMPILATION

If the choice is to proceed to hardware execution, the second phase of compilation is entered. The object of this phase is to convert the primary representation to program text, which is the interface to system execution control, and hence to execution proper. Production of this text, the analogue of the relocatable code used in most current systems, is carried out by a process which separates naturally into the functions of optimization, instruction generation, and assembly (see Figure 3).

Optimization. The first step is to apply an optimization procedure to the primary representation, in order to improve the execution performance of the program. This involves such activities as flow analysis; noting where auxiliary calculations may be better performed, and where they may be suppressed; analysis of loop structure to determine positional indicator usage, common expression pre-calculation, and the possibility of loop collapse; all of which are done now in most compilers, though not, perhaps, at quite the same place.
That this is the proper place for such action is not hard to justify. It must occur following all modifications, since it is meaningless to optimize without knowledge of the entire context of the program; and it should precede any instruction generation, since preventive measures are always better than remedial ones. There is a further advantage in that it permits the easy mechanization of considerable latitude in the amount and type of optimization applied to any given program. A user, for example, might choose to skip all, or part, of the procedure if he has good reason to believe it will not notably improve his program. In this connection, the box marked "(weak) optimization" in Figure 3 merely signifies that some type of gross optimization, for example the loop collapsing analysis, might well be useful prior to direct execution. It should also be noticed that this step completes the catalogue of information desirable in the primary representation, and so serves in implementation for the final determination of the functions undertaken by the conversion processors.

Instruction Generation. The function of instruction generators is to interpret statements within the (optimized) context of the program, and the hardware to be used for execution, to decide which instructions are to be used. At this stage, the verbs of most procedural languages can be handled by autonomous processors. Within these processors most remaining decisions can be made by selecting one of several pre-planned possibilities, according to the descriptions of operands within the scope of individual operators. For example, if the expression A + B is to be calculated, the addition generator would examine the data descriptions of A and B: if they were floating point numbers, a simple floating addition sequence would be used; if fixed point, some preliminary scaling might be required, as well as the use of fixed point hardware operations. Thus the organization favors a high degree of modularity in the compiler; it also localizes treatment of most changes in the interpretation of a statement to an easily accessible place.
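The division of labour just described can be illustrated with a short sketch (present-day Python, not from the paper; the mnemonics and the scaling rule are invented): the addition "generator" only decides which instruction forms to use, from the operand descriptions, and "first assembly" substitutes the particular operands.

    def add_generator(desc_a, desc_b):
        """Return a list of instruction templates for A + B, chosen from the operand descriptions."""
        if desc_a["type"] == "float" and desc_b["type"] == "float":
            return ["FLOAD {a}", "FADD {b}"]
        if desc_a["type"] == "fixed" and desc_b["type"] == "fixed":
            shift = abs(desc_a["scale"] - desc_b["scale"])     # preliminary scaling for fixed point
            return ["LOAD {a}", "SHIFT " + str(shift), "ADD {b}"]
        raise NotImplementedError("mixed-mode addition not sketched here")

    def first_assembly(templates, a, b):
        """Substitute particular instances of the operands into the chosen instruction forms."""
        return [t.format(a=a, b=b) for t in templates]

    if __name__ == "__main__":
        templates = add_generator({"type": "fixed", "scale": 4}, {"type": "fixed", "scale": 2})
        print(first_assembly(templates, "A", "B"))   # ['LOAD A', 'SHIFT 2', 'ADD B']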
In the direct mode this function is replaced by execution. More precisely, first assembly is supplanted by a routine which carries out, or attempts to carry out, the intent of the instructions produced after the operand substitutions are made.

Final Assembly embraces preparation of the form required by execution control; and, perhaps, production of supplementary information for the user (such as a listing). Little can be said about the implementation of either of these functions since the processing is very much dependent on the form adopted for the program text, and the sophistication of the techniques used to take advantage of the available hardware. It should be noted, though, that the supplementary information, so common in off-line systems, is of no practical utility to an on-line user and might well be eliminated.

RELATION TO ON-LINE SYSTEMS

The organization thus described embraces all the functions assigned to the compilation process, so that the picture of compiler structure is essentially complete. Furthermore, the discussion has emphasized that the considerations used in developing this picture apply to most systems, being based on overall optimization of the user/system relation. But in actual practice the compiler must operate, and produce code which operates, consistently well within the internal conventions and techniques of a particular environment. Therefore, to verify that the description is sound, that the structure is not just suitable but desirable, it is necessary to consider requirements which are peculiar to dynamic environments, particularly on-line systems.

The salient feature of such systems is that they attempt to amplify the effective computing power of the hardware by the use of techniques which permit multiple concurrent users. Multiprogramming and multiprocessing, for example, exploit the possibilities of parallelism, while time sharing exploits the disparity in human and computer speeds. Much of the advantage gained by these techniques would be dissipated without effective resource allocation and minimization of system overhead. Segmentation of programs into fairly small pieces is by far the most significant factor in this effort, on both counts. First, it improves core utilization by reducing the occurrence of unused blocks. Second, it reduces system overhead because the presence of pieces of many programs in core at the same time has a double effect in diminishing the number of words swapped between core and backup storage.

Now, it has already been observed that the organization presented is favorable to fragmentation of the compiler itself. Moreover, the functional division simplifies use of techniques for dynamic reduction of the operating size. For example, the independence of individual instruction generators permits an implementation in which only those generators actually in use by any program need be brought into core, where they can remain until no longer needed. Further improvement can be obtained by designing the primary representation to separate the global information from the local, which reduces the amount of data that has to be swapped in compilation; and by limiting the size of any segment which is compiled down to program text, thus limiting the space needed for such data. Limitation of the size of segments is a technique practiced in many current systems. Its success depends upon having an execution control which can readily build, and efficiently operate, a program composed of smaller segments.
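The on-demand residency of instruction generators mentioned above can be illustrated with a small sketch; the names and the reference-counting policy are invented here and are not taken from any particular system.

    # Illustrative sketch: keep only the instruction generators actually in use
    # resident in core; release one when no active program still needs it.
    class GeneratorPool:
        def __init__(self, loader):
            self.loader = loader          # function: verb name -> generator routine
            self.resident = {}            # verb -> [routine, reference count]

        def acquire(self, verb):
            """Bring a generator into core on first use; count references."""
            if verb not in self.resident:
                self.resident[verb] = [self.loader(verb), 0]
            self.resident[verb][1] += 1
            return self.resident[verb][0]

        def release(self, verb):
            """Drop a generator once no program in process still references it."""
            entry = self.resident.get(verb)
            if entry:
                entry[1] -= 1
                if entry[1] <= 0:
                    del self.resident[verb]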
In attempting to do this for on-line systems, there is much to be learned from software solutions already in use. That subject, however, would need a lengthy discourse to do it justice. In this discussion, it is possible only to touch briefly upon a few topics of general interest.

First of all, program segmentation has its darker side. It increases the core management problem because of the appearance of fragments: small pieces left over in allocation which must somehow be collected into useful sizes. Similarly, program protection is more difficult, for occasions arise when non-contiguous segments of the same program must reside in core simultaneously. The difficulties of interprogram communication are also magnified: the smaller the pieces, the more likely references will be external to a given segment. The first two of these problems occur only in dynamic systems, while the last has been around for some time. In attacking it, current systems have developed program text and loading techniques which could be quite useful if properly converted to a dynamic environment. In those systems, program text consists of instruction strings in which the addresses are supplemented by relocation bits whose normal function is to indicate whether the address is constant or relative. For interprogram references, however, the encoding refers to a dictionary in which enough symbolic information is retained to identify the desired reference. It is the function of that portion of execution control called the loader to combine various segments into a single operating program, as desired by the user. In this process, the text is translated to specific core locations by interpretation of the relocation bits in conjunction with the combined dictionaries. In addition, for programs too large to fit into core, the user indicates how the program is to be prepared for retrieval of specified parts (called overlays) by execution control.

Now in on-line systems this pre-planned retrieval and pre-calculated translation is better done dynamically. Not only would this improve efficiency by fetching only those portions of a program actually needed during a particular execution, but also it would provide relief from the fragmentation problem by severing the ties to specific locations. But implementation of such dynamic control entirely by software involves considerable overhead during swaps, and in execution, so that some hardware assistance is required. Two approaches are currently in vogue. First, there are page schemes in which segmentation and retrieval are tied to core blocks of hardware-determined size and location. Second, there are address translation schemes in which the illusion of a contiguous program is obtained by comparing all effective addresses generated during execution with a translation dictionary to obtain a true address. Again the page concept is in evidence, since the translation shifts the low order bits of the address. Neither of these methods really eliminates much pre-planned effort on the part of the user or the system. Therefore it is surprising that there have been no serious attempts to embed the loading function into the hardware; that is, to design the CPU to execute program text directly, relocation bits, dictionary, and all. Such an approach, which requires no more hardware than others proposed, has several distinct advantages.

1. It is quite conservative of space. The relocation bits can be absorbed into the address without any practical limitation on segment size.
Furthermore, the dictionary is only as large as required by the given program.
2. It is efficient in execution. Address comparison occurs only for instructions which specifically request it.
3. Segmentation is performed according to the natural division of the user. The elimination of artificial fences in core relieves system processors, for one, of the problem of adjusting recalcitrant segments to prespecified block sizes [2].
4. Apart from the dictionary, the programs are absolutely location independent. If read-only code is used, very little additional software is needed to implement a scheme in which segments of operating programs are swapped in, but need never be swapped out.
5. Flags in the dictionary can be used to provide program protection on an individual basis, not tied to any particular core locations, and for any number of independent programs. Similarly, dictionary flags can be used to trigger a segment retrieval mechanism.

For these reasons, design of a program text in conjunction with hardware merits serious consideration. In this design, it is, of course, desirable to make use of negative as well as positive information gained from previous software experience. For example, the program text of several current systems contains a dictionary for debugging as well as a dictionary for interprogram reference. Because of the reduction in symbolic content, and because the nature of undebugged programs is such that one cannot anticipate what information will be needed, or when it should be obtained, considerable skill is required of the user to keep this dictionary of manageable size and still have it fulfill its purpose. Moreover, loading is complicated by having to undertake the additional functions of interpretation and insertion of debugging requests into the program prior to execution. It is operationally superior in all respects to eliminate the debugging dictionary and rely upon insertion of requests into the primary representation.

Objection might be raised that this is not suitable for code produced by assemblers. Such, however, is not the case. With the previous discussion as background, it is not difficult to quickly arrive at the functional organization of a processor for an assembly language family. Figure 4, for example, shows the plan of a macro assembler which would take source language through conversion and first assembly to a primary representation, and from there through final assembly to program text identical in form to that produced by any other system language processor.

[Figure 4: Assembly language processor. A two-pass macro processor carries source language through conversion, definition, and first assembly to the primary representation, from which final assembly produces program text.]

The primary representation, which is naturally quite different from that of a problem oriented language family, again serves as the resident symbolic form of the program, and as the point for incremental modifications. A significant feature of this structure is that instruction generation and first assembly occur prior to output of the primary representation. Hence the processing time from it to program text is substantially less than that of other families. Thus, user convenience, uniformity of procedure, and overall operating efficiency combine to form a powerful inducement for a return to completely symbolic debugging.

SUMMARY

But to pursue these ideas farther would necessitate a depth of detail far beyond the scope of this paper.
So, in conclusion, it is perhaps in order to summarize the main points of this discussion. First, consideration of the basic relationship between user and system, of the existence of many languages and the need for incremental changes, leads to a compiler whose first phase is dedicated to the production of a primary, internal, representation intended for residence in the system. This step is independent of any internal considerations. Second, because of rapid intercommunications, on-line systems are particularly suitable for direct, interpretative, program execution. The primary representation then provides ...

[Figure 8: Display statements]

[Figure 9: Program listing]

... under control of two different output formats. Next there is an INDEX cross reference listing of control flow and data usage. Finally, there is a CHECK for potential errors, followed by an AUDIT, a post-execution history of actual control flow and data usage. In the next example (Figure 9) the user performs a LIST of the program he just finished testing. A variation of this would enable him to punch a card deck ready for processing by any other FORTRAN compiler.

INTERNAL DESIGN

Introduction

Since detailed information on QUIKTRAN's internal design is available [8], only a brief outline is given here. The system is structured into two major sections (Figure 10). The process subsystem translates source statements to an equivalent intermediate representation that is then interpretively executed. The I/O subsystem controls the communications network and the swapping of programs between primary (e.g., core) and secondary (e.g., drum and disk) storage.

[Figure 10: Internal organization. The supervisor, scan, master interpreter, link, library subprograms, and program areas, with the logic flow and data flow between them.]

Records (Figure 11)

The processor translates all source statements into two types of internal records. The fixed length element records, corresponding to source data and procedural elements, contain the usual information collected in the early phases of all compilers (e.g., symbolic name, value attributes, and addresses). In addition, these records contain the value itself and list control information. The variable length statement records, in one-to-one correspondence with source statements, contain the source data in a form that is more compact (to save space) and more suitable for execution (to save time).

Lists

All records are organized by lists. Element records are chained onto one of 26 lists, each containing all elements with the same initial letter. This not only reduces symbol search time but also facilitates the generation of alphabetized memory dumps and cross reference listings.
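Purely as an illustration of the 26-list idea, and assuming nothing about QUIKTRAN's actual record layout, the following sketch chains element records by the first letter of the symbolic name, so that a search touches only names with the same initial letter and an alphabetized dump falls out naturally.

    # Illustrative sketch of chaining element records into 26 lists by initial
    # letter.  The record fields here are invented for the example.
    from string import ascii_uppercase

    chains = {letter: [] for letter in ascii_uppercase}   # one list per initial letter

    def enter(name, value=None):
        """Add an element record to the chain for its initial letter."""
        record = {"name": name, "value": value}
        chains[name[0].upper()].append(record)
        return record

    def lookup(name):
        """Search only the chain selected by the first letter of the name."""
        for record in chains[name[0].upper()]:
            if record["name"] == name:
                return record
        return None

    def alphabetized_dump():
        """Walk the chains in letter order to produce an alphabetized dump."""
        for letter in ascii_uppercase:
            for record in sorted(chains[letter], key=lambda r: r["name"]):
                print(record["name"], record["value"])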
All statement records are chained onto two lists: one organizing the statements in entry order; the other classifying statements by type. The former is used to determine proper ordering when reconstructing the source program and when executing the intermediate text; the latter facilitates interstatement error checking.

[Figure 11: Record formats. The element record (id, next address, size, indicators, dimension/common/equivalence address, symbolic name, numeric value) and the statement record (id, indicators, code, size, next entry address, label, next class address), illustrated for the statement C = A * B + C/SQRT(D).]

Addressing

Essentially three distinct types of addressing are employed: associative list searching for symbol matching, look-at addressing for mapping internal identifiers to relative locations, and implicit addressing (e.g., pushdowns) for storage of intermediate arithmetic values and for control of DO nesting.

Blocks (Figure 12)

Each user program block consists of two parts; the header contains the control, relocation, and addressing data; the body contains the intermixed element and statement records. Each time a program block is swapped into primary storage, parts of the header are processed to reflect storage relocation and to accumulate usage statistics.

[Figure 12: Program block. A header (element and statement control tables, address indexes, parameter and temporary stacks, and program identification) followed by a body containing the list of element and statement records and the common area.]

Checking

Checking is performed at four levels. First, composition checking for correct syntax within a statement (e.g., parenthesis imbalance). Second, consistency checking for proper association between statements (e.g., GO TO A FORMAT). Third, completeness checking for proper DO nesting, label referencing, and control flow. Fourth, limit checking for arithmetic spills, proper subscript values, and valid control branching.

A 7740, a separate communications computer, controls the network of remote terminals, performing such functions as terminal polling, code conversion, error checking, and buffering of messages awaiting computer attention.

Control Scheduling (Figure 13)

A section of the 7040 program continually samples a small in-core terminal status table to determine which user program will next receive service.

[Figure 13: Scheduling data. A program is evicted when (1) its allotted time interval has expired and a successor program has arrived, (2) input data is requested, (3) the output buffer is filled, or (4) an external subprogram is called.]

Logging (Figure 14)

Another 7040 control routine continually records usage statistics on magnetic tape for later off-line analysis and summarization.

[Figure 14: Logging. Recorded data includes system recovery and performance, device simulation, billing, language and procedure use, hardware/software tradeoffs, specification changes, revised training methods, response rate, error frequency, user evaluation, and productivity.]

EXPERIENCE

The system has been in experimental operation since mid-1963. Early human factor analyses led to some revision in system function and operating procedures [9]. Preliminary evaluation led to the decision in August 1964 to announce the system as a standard IBM program product to be available for customer use
in April, 1965. Further evaluation led to the December, 1964 announcement of Datacenter access to be available by mid-1965 from terminals installed on client premises.

Initial operating experiences can be partly summarized by the following observations. The remote user must have the ability not only to state his program but also to control its execution. Thus, many functions conventionally performed by the console operator or by monitor control cards must be identified, structured, and defined by simple, yet comprehensive, language commands.

The remote user is, more often than not, an engineer or scientist, not a professional programmer. Unnecessary sophistication (no matter how elegant) in the system language and operating procedures must be avoided. Remember too, the remote user does not usually have ready access to expert helpers and consultants.

The remote user must be given the impression that he is the only user. All possibility of interference by others must be prevented. The response rate should be reasonably uniform; uneven response fluctuations are particularly disturbing. In addition, some periodic terminal indication of action (e.g., console "blink") helps reassure the user that his program is running. Finally, there must be ways for the user and central operating personnel to communicate, both by transmitted messages and also by verbal exchanges.

The terminal itself is a potent training tool. The ability to proceed at one's own pace, coupled with the immediate detection of most programming errors, enables most people to start using a terminal with very little formal training. Conversational, source language techniques benefit amateur users relatively more than professional programmers, who already understand machine language and know how to debug effectively. Experienced programmers are strongly conditioned to traditional batch computing techniques and have considerable trouble in adapting to the conversational approach.

The types of problems run from remote stations differ from those entered at a conventional computer installation. There are many more relatively simple problems previously processed on desk calculators, slide rules, or small computers. Problems with large input/output volumes are obviously not suitable for remote operation. This is also true of many production type problems where object efficiency is important and there is little need for the injection of human insight, judgment, or experience. Conversely, the debugging of complex programs is greatly facilitated, not only in terms of elapsed time and expense, but also in terms of the user skill level.

Time sharing systems are very complex, difficult to develop, and challenging to debug. The system must be designed with these factors in mind. In particular, it is essential to provide means of measuring such factors as user and system response rates, use of language features, causes of errors, and equipment utilization factors. This data can be used to simulate alternative system configurations and scheduling algorithms and thereby lead to improved system performance. Analysis of this data is also essential to designing improvements in the language specifications, operating procedures, and training methods. Finally, the data provides a means of equitably allocating overall system expenses among the individual users.
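A minimal sketch of the kind of off-line summarization just described follows; the record fields are invented for illustration and do not reproduce the system's log format. Per-terminal usage records are aggregated and each user's share of total expense is taken in proportion to recorded use.

    # Illustrative only: aggregate usage-log records and allocate system expense
    # in proportion to each user's recorded processor time.  Field names invented.
    from collections import defaultdict

    def allocate_expenses(log_records, total_expense):
        """log_records: iterable of dicts with 'user' and 'cpu_seconds' keys."""
        usage = defaultdict(float)
        for rec in log_records:
            usage[rec["user"]] += rec["cpu_seconds"]
        total = sum(usage.values()) or 1.0
        return {user: total_expense * seconds / total
                for user, seconds in usage.items()}

    # Example: three logged sessions, $1,000 of monthly expense to allocate.
    log = [{"user": "A", "cpu_seconds": 90.0},
           {"user": "B", "cpu_seconds": 30.0},
           {"user": "A", "cpu_seconds": 60.0}]
    print(allocate_expenses(log, 1000.0))   # A bears five-sixths, B one-sixth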
Man-machine systems are no magical panacea. Properly applied, they can be very effective; improperly used, they prove to be a very expensive novelty.

EXTENSIONS

Although the QUIKTRAN system uses typewriter oriented consoles, other terminal equipment could be employed: dictation to a remote terminal operator, dialed input with audio response [10], or graphic input with visual displays [11]. System performance could be greatly enhanced by computer organizations that provide improved interrupt capability, object time address protection and relocation, larger primary storage, and secondary storage equipment with faster access times and data rates. System performance could also be improved by a program design that permits the intermixed generation of object code along with the interpretive intermediate representation. Execution efficiency of the intermediate code could be improved by incorporating some of the frequently used interpretive functions into micro-programmed logic contained in a read only storage [12]. It is also profitable to explore the translation of several different source languages into the same intermediate form. This would not only reduce implementation time and expense but would also serve as a useful vehicle for translation between related source languages [13].

From a marketing viewpoint, small systems [14, 15] like QUIKTRAN will undoubtedly be absorbed as subsets of larger time sharing systems [16, 17, 18]. However, it is also probable that the stand alone, shared system will continue to provide functional capabilities in the range between the desk calculator and the small computer. Initial use will include the areas of scientific computing, text editing, computer assisted instruction, and certain commercial applications.

CONCLUSIONS

The QUIKTRAN system demonstrates that a standard medium scale computer can be time shared to provide an economical, but not fully general, form of computer service. It is also indicative that effective remote computing requires not only different real time operating systems but also new approaches to translation (e.g., incremental) and debugging (e.g., source language) techniques. It appears that the system offers little, if any, economic improvement over conventional small computers if measured in established throughput units [19, 20]. However, there are significant advantages in terms of improved convenience, faster turnaround, higher manpower productivity, lower personnel skill levels, and greatly reduced total solution time and expense.

Most importantly, such systems not only permit old problems to be solved in new ways, but also enable new users to solve new problems in ways not hitherto possible. It is this factor that will lead to wide utilization of similar man-machine systems in a wide spectrum of applications.

ACKNOWLEDGMENTS

Many individuals contributed to the QUIKTRAN system. The following people made major contributions to its design and development: T. M. Dunn, J. M. Keller, E. C. Strum, and G. H. Yang.

REFERENCES

1. E. G. Andrews, "Telephone Switching and the Early Bell Lab Computers", Bell System Technical Journal, March 1963.
2. IBM 701 Speedcoding System, IBM Form Number 24-6059, 1953.
3. IBM 705 Print System, IBM Form Number 32-7855, 1957.
4. W. R. Brittenham, et al., "SALE-Simple Algebraic Language for Engineers", ACM Communications, October 1959.
5. Bendix Intercom 1000 System, Bendix Manual CB029, 1958.
6. A. Newell (Ed.), "Information Processing Language-V Manual", Prentice Hall, 1961.
7. IBM 7040/7044 Remote Computing System, IBM Form Number C28-6800, 1964.
8. J. M. Keller, E. C. Strum, and G. H. Yang, "Remote Computing-An Experimental System. Part 2: Internal Design", Proceedings of the Spring Joint Computer Conference, 1964.
9. T. M. Dunn and J. H. Morrissey, "Remote Computing-An Experimental System. Part 1: External Specifications", Proceedings of the Spring Joint Computer Conference, 1964.
10. T. Marill, D. Edwards, and W. Feurzeig, "DATADIAL: Two-Way Communications with Computers from Ordinary Dial Telephones", ACM Communications, October 1963.
11. G. J. Culler and R. W. Huff, "Solution of Non-Linear Integral Equations Using On-Line Computer Control", Proceedings of the Western Joint Computer Conference, 1962.
12. J. Anderson, "A Computer for Direct Execution of Algorithmic Languages", Proceedings of the Eastern Joint Computer Conference, 1961.
13. J. J. Allen, D. P. Moore, and H. P. Rogoway, "SHARE Internal Fortran Translator (SIFT)", Datamation, March 1963.
14. S. Boilen, E. Fredkin, J. C. R. Licklider, and J. McCarthy, "A Time Sharing Debugging System for a Small Computer", Proceedings of the Spring Joint Computer Conference, 1963.
15. J. C. Shaw, "JOSS: A Designer's View of an Experimental On-Line Computing System", Proceedings of the Fall Joint Computer Conference, 1964.
16. F. J. Corbato, M. Merwin-Daggett, and R. C. Daley, "An Experimental Time Sharing System", Proceedings of the Spring Joint Computer Conference, 1962.
17. E. G. Coffman, J. I. Schwartz, and C. Weissman, "A General-Purpose Time Sharing System", Proceedings of the Spring Joint Computer Conference, 1963.
18. H. A. Kinslow, "The Time Sharing Monitor System", Proceedings of the Fall Joint Computer Conference, 1964.
19. R. L. Patrick, "So You Want To Go On-Line", Datamation, October 1963.
20. "A Panel Discussion on Time Sharing", Datamation, November 1964.

EXAMPLES AND SUMMARY

THE PAT LANGUAGE-Glen D. Johnson ... 129
AN EXAMPLE OF MULTI-PROCESSOR ORGANIZATION-David V. Savidge ... 131
ON-LINE COMPUTING SYSTEMS: A SUMMARY-Dr. Harry D. Huskey ... 139
List of Attendees (SYMPOSIUM ON ON-LINE COMPUTING SYSTEMS) ... 145

Glen D. Johnson*

THE PAT LANGUAGE

*UCLA Computing Facility

THE SYSTEM to be described is an on-line interpreter for a structured, algebraic language. This interpreter is operating on the UCLA Computing Facility 7094, with the SWAC computer used to maintain a typewriter console. There is also a similar interpreter for the IBM 1620. The 1620 version was produced by Dr. H. Hellerman of the IBM Advanced Systems Development Division in Yorktown Heights, New York, where it has had several hundred hours of productive use.

The language is called the Personalized Array Translator, just PAT for short. The PAT language is a subset of the IVERSON language, which was designed by Dr. Kenneth Iverson of IBM as a mathematical description tool. The character set used by Iverson has been reduced to a manageable size in the PAT implementation.

A program in the PAT language operates on data which is highly structured. The basic data structure is considered to be a vector. Each symbolic name is specified by the system's operator to be either a scalar, a vector, or a matrix. Data names specified to be matrices or vectors also must have their maximum sizes specified. There are statements in the language to access and modify the sizes of variables.
Each variable is stored by the system as a vector, with matrices represented as a series of column vectors stored in the computer. Scalar data items are represented as vectors of unit length. Currently, the 7094 system allows Boolean and floating point data items. Floating point constants are represented as numbers with or without a decimal point. False is represented by 0, true by 1. A constant may appear anywhere that a variable is allowed.

PAT allows portions of data items to be operated on using a subscripting rule. All subscripting is zero-origin, with X0 as the first element of X. A single element may be selected from a vector by using one subscript, or from a matrix by using two subscripts. A vector, either row-wise or column-wise, may be selected from a matrix by giving one subscript with an indication that the other subscript is empty. For example, let S be a scalar, V a vector, and M a matrix. Then:

    VS is a scalar
    MS1,S2 is a scalar
    MS,. is a row vector
    M.,S is a column vector

(A "." indicates an empty position.)

Most statements are of the form:

    Variable = Expression

They have the conventional meaning: the value of the expression is computed and stored in the variable on the left. This meaning is extended by allowing variables to have more than one element. An expression is evaluated using the first element of every variable contained in it to form the first element of the resulting variable. The expression is then evaluated using the next element of each argument to form the next element of the result, and so on until the resulting variable is filled. For example, if

    X = 1, 2, 3, 5  and  Y = 7, 11, 13, 17

and the current size of Z is 4, after execution of

    Z=X+Y

the value of Z is

    8, 13, 16, 22

If the end of any variable in an expression is reached before the left part of the statement is filled, indexing on this variable is restarted at its first element. This may be formalized by considering each operand, having a length L, to be periodic of period L. The interpreter stores the first cycle of each variable and generates later cycles as copies of the first cycle.

There are three types of data combination operations allowed by the PAT system. They are binary operations, unary operations, and reduction operations. Binary operations are represented in infix form as an operator between two variables and include addition, subtraction, multiplication, division, maximum, minimum, the logical operations AND and OR, and relational operators.

BINARY STATEMENTS
    X=Y+Z
    X=Y-Z
    X=Y*Z
    X=Y@DIV Z
    X=Y@MIN Z
    X=Y@MAX Z
    X=Y@EXP Z
Note: Operation names are preceded by "@", and only the first character of the name is used.

Unary operations include trigonometric and logarithmic operations.

UNARY STATEMENTS
    X=@SINE Y
    X=@COS Y
    X=@ABS Y
    X=@LN Y

Reduction operations are binary operations across data structures. Summation is a specific case of reduction. In general, any binary operation o is applied to a vector or a matrix as:

    + / V     vector summation
    o / V     generalized reduction: V0 o V1 o V2 ... o Vn
    o / M     matrix row reduction
    o // M    matrix column reduction

For example, to compute the mean XB = (sum of the Xi) / N and the mean square deviation S = (sum of (Xi - XB)^2) / N, we write:

    XB=+ / X          (sum X over N)
    XB=XB @DIV N
    T =X-XB           (differences squared, summed over N)
    T =T * T
    S =+ / T
    S =S@DIV N

To read and print data for this, we write:

    @GET N
    @DIM X,N
    @D T,N
    @G X
    @TYPE XB
    @T S

A statement line may be labelled. If a line is given a name, the name starts in the first position of the line. Otherwise, the first position is blank.
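The element-by-element rule with cyclic reuse of short operands can be stated compactly; the following sketch is illustrative only and does not reproduce the PAT interpreter itself.

    # Illustrative sketch of PAT-style statement evaluation: the result is filled
    # element by element, and any operand shorter than the result is treated as
    # periodic, i.e. indexing restarts at its first element.
    def evaluate(op, x, y, result_size):
        return [op(x[i % len(x)], y[i % len(y)]) for i in range(result_size)]

    X = [1, 2, 3, 5]
    Y = [7, 11, 13, 17]
    print(evaluate(lambda a, b: a + b, X, Y, 4))      # [8, 13, 16, 22], as in the text
    print(evaluate(lambda a, b: a + b, X, [100], 4))  # a short operand is reused cyclically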
Normal sequencing of a program is from top to bottom, left to right. Sequencing may be changed, and explicit looping effected, with a compare statement:

    I=0
L   ZI,. = X.,I
    I = I+1
    @CMP I,N, L, L, 0

The console has a series of commands used to communicate with the program. These have a * in the first typed column of a line. They are:

    *R          Reset system
    **          Define symbols, followed by (name {Scalar | Vector | Matrix} {Floating | Boolean} [dim1] [dim2])
    *E [name]   Enter program statements (program statements)
    *X [name]   Execute program
    *D name     Display

David V. Savidge*

An Example of Multi-Processor Organization

*Manager, Product Line Planning, UNIVAC

INTRODUCTION

As the purpose of this meeting is the free exchange of ideas, it is necessary to establish the means of communication by defining the terms in this title. We hope to continue on-line throughout this discourse. The justification for the inclusion of this subject in this symposium is the fact that at some point in time someone will put together a system to handle a multiplicity of problems on-line. Before programs can be run on such a system they must be organized in such a manner as to facilitate their execution. By exploring ways to organize programs now, we will be better able to utilize the hardware when it becomes available.

DEFINITIONS

The kind of example we will show is that of a method. References to hardware will only be used to demonstrate feasibility and the fact that the ideas discussed are not novel. The intent of this paper is to show a method which presents some interesting possibilities for further exploration.

Before defining multi-processor it is necessary to accept a definition of a processor. We consider a processor to be an assembly of hardware which is capable of performing one or more arithmetic or logical functions in a specified manner. By this definition the word processor could describe a device which only performs addition. It could also be applied to the computing unit of the LARC. This leads to a definition of a multi-processor as being any mixture of processors which share one or more components such as memories, input devices or output devices. This permits us to include the MARK III and the Bell Labs MOD V [2] in the category of multi-processors, as well as the LARC itself. Each of the above contains two processors by our definition. A more complex array is presented by the ENIAC as described in Patent Number 3,120,606, granted Feb. 4, 1964. The ENIAC contained twenty accumulators. In addition to operating on two problems concurrently, the ENIAC could perform several operations at the same time on one problem. All of these systems are considered, in this paper, to be multi-processors.

Whenever two or more pieces of anything are put together they must be organized. Certain customs or environmental conditions often impose constraints on the organization. Even though the engine can be found in either the front or the back of an automobile, the driver's seat remains in the front. A horse is generally placed in front of the cart it is intended to pull. The element of direction or control is integral to any organization of pieces required to do work. This paper will describe how the control of a multi-processor can be set up to provide the response times needed in closed loop applications as well as the generalized treatment required in time sharing systems. We are not concerned with the capability of the individual processors but rather with the broad question of the organization of work to be performed by a multiplicity of processors.
We do not consider that the assignment of one procedure to one processor while other processors remain idle represents efficient utilization of the hardware.

The idea of a system comprising a multiplicity of processors seems to be a natural extension of the time sharing concept. Time sharing was the outgrowth of the imbalance between CPU and I/O device speeds. If the CPU was loafing, it was given more work to do. We now have the line capability to overload one CPU within an installation. The logical extension is to provide more than one CPU. The question immediately arises: what does one do if there is only one problem to be run at this time? Is it satisfactory to use only one of the CPUs which are in the system?

Various schemes have been proposed for the design of multi-processors of varying capabilities; the Holland system [3] and the Solomon computer [4] are examples of this. Concepts of control are being explored by many research groups. The application of NDRO memories is increasing. A similarity is noted between the use made of NDRO memories and the utilization of the function tables of the ENIAC [5]. The organization of the ENIAC permitted the programmer to organize the solution of a problem so that more than one arithmetic operation was being performed at one time. This was difficult, but was one way to shorten the execution time for a problem. We are faced, today, with the same logical problem as faced ENIAC programmers. The difference lies in the fact that we now have a variety of gear and a multiplicity of problems brought together in a multi-dimensional array.

It is our thesis that it is possible to automate the organization of a single procedure to maximize the utilization of multiple processors. Unless the organization of the procedure is performed according to a very rigid set of rules it will provide another source of subtle errors. While it is assumed that all parts of a stated procedure are interrelated within the total network, it does not follow that all steps must be performed in series. One of the ways in which processors gain speed is to overlap input, processing and output. We now want to extend this philosophy to the internal parts and determine the extent to which overlaps can occur within the process. Some equipment provides "look ahead" which permits the overlapping of the time of instructions which occur in series. This is accomplished within a single processor. When dealing with a multiplicity of processors similar functions could be performed in parallel.

[Figure 1: PERT network]

[Figure 2: PERT network arranged by time period]

This can be achieved at the software level by what we choose to call "plan ahead". The organization can be accomplished by the compilers and the executive routines. We must make several basic assumptions and accept certain definitions in order to develop a set of rules:

1. A procedure is defined as the collection of operations required to produce specified output data from specified input data.
2. A procedure generally consists of a set of subprocedures which are linked together in the form of a network.
3. An individual subprocedure can be defined as a completely self-contained process with a prescribed set of inputs and outputs.
4. The communication between one subprocedure and any others occurs only at the beginning and at the end of the subprocedure.
5. It is possible to depict the flow of data by means of a diagram which shows the interrelationship of all subprocedures within the procedure.
6. The flow diagram can indicate those subprocedures which could be executed in parallel.
PROCEDURES

Figure 1 and Figure 2 demonstrate an application of these definitions. Figure 1 is a PERT chart of a procedure [6]. Each of the 33 numbers represents a subprocedure. Figure 2 shows the same flow diagram arranged to indicate the parallelism possible. In Figure 2, the lines connecting the subprocedures have been identified with letters which we will use to denote the data (operands) flowing between the subprocedures. Twelve of the subprocedures fall into time periods by themselves (1, 2, 8, 19, 20, 21, 23, 26, 29, 30, 32, 33). Eighteen can be paired:

    6,7   9,12   10,13   11,14   15,17   16,18   22,31   24,25   27,28

One group could contain three (3, 4, 5). If we shifted either 3 or 4 to occur in the same time period as 8, a two-processor system could accomplish the entire procedure in twenty-two time periods. The reduction in time over a serial operation would be the sum of the times for the shorter of each of the eleven pairs of subprocedures. One of the processors would be available for the execution of another procedure during the eleven time periods when the illustrated procedure does not permit parallel execution of its subprocedures.

This example is used to indicate the method by which a reduction in elapsed time could be achieved. The evaluation of any saving requires additional specifications such as the time for each subprocedure. This is related to the problem and the hardware. The method is independent of the specifications of either the problem or the hardware. The first question to be resolved is whether the problem lends itself to parallel operation. The second is to measure the savings which could be achieved.

An improvement in running time can be achieved by proper pairing (in a two-processor system). Depending on how nearly the time requirements match, the pairing of 5 with 4 and 3 with 8 might be better than pairing 5 with 3 and 4 with 8. Similarly, 31 should be paired with an operation which requires more time than it does. This could be 22, 23, 26, 29, or 30. The rule is that a subprocedure which could be performed in one of several periods should be paired where possible with a subprocedure which requires more time than it does. If not possible, the longest fixed-position subprocedure should be used.

A further consideration in selecting time periods in a complete procedure is the storage of intermediate results. To shorten the period for storing intermediate results it might be better to perform subprocedure 3 in parallel with subprocedure 8, even though 3 is shorter than 5 and longer than 8. Such considerations are of value only where there are alternatives. The fact that alternatives do exist is evident from a visual examination of Figure 2. If the procedure were large, of several thousand subprocedures, a visual representation of the entire net would be very difficult to prepare. It would also be difficult to examine and analyze.

It is possible to represent the intelligence represented by Figure 2 in a form which can be used for processing by a computer. Table 1 contains this information. List I shows one entry for each operand-result relationship, arranged in order by subprocedure identification.
List II is the same data arranged in order by operand identifier. List III is the same data arranged in order by result identifier. As subprocedure 1 and subprocedure 33 do not each have an input and an output, they are not included in the lists.

[Table 1: Subprocedure entries for analysis. Each entry is an operand-result-subprocedure triple; List I is ordered by subprocedure (SP), List II by operand (O), and List III by result (R). Legend: SP, subprocedure; O, operand; R, result.]

By truncating List I at different points and rearranging the remainder into Lists II and III, a variety of combinations can be produced. Subprocedure 2 generates three table entries. Subprocedure 11 generates six entries, while subprocedure 16 generates only one entry. The rule is that each subprocedure generates a number of table entries equal to the product of the number of inputs times the number of outputs. Each entry must appear in each of the three tables. To facilitate scheduling, each entry should carry additional data concerning the facilities used by the subprocedure, the input and output volumes involved, and the time required. If this is done the complete network can be timed out, scheduled, and controlled for any combination of processors.

Before describing the techniques to be applied to the three lists and the results which can be secured, it is necessary to delimit the terms further.

1. An operand is all of the data which flows from one subprocedure, or an input, to another subprocedure, or an output, in one movement as one record or piece of intelligence. Thus it can be an element of data, such as a quantity to be added, or a set of data such as a stock record. It must be received from some point outside the subprocedure which operates on it.
2. Parameters which are used by a subprocedure are not considered to be operands within this definition. This does not preclude their use in arithmetic or logical operations within a subprocedure.
3. The definitions of individual subprocedures, operands and parameters are always unique within a given environment consisting of problems and hardware. The treatment of necessary prior conditions will be discussed later.
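Before turning to conditions, the list mechanics just described can be illustrated with a small sketch. The network fragment below is hypothetical (only subprocedure 2, with operand A and results B, C, D, is taken from the example above); the point is simply that a subprocedure with m inputs and n outputs contributes m times n entries, and that the three lists are the same entries sorted on different keys.

    # Illustrative sketch of Table 1's three lists, not the paper's card format.
    # Hypothetical fragment of the network: subprocedure -> (inputs, outputs).
    network = {
        2: (["A"], ["B", "C", "D"]),     # generates 3 entries
        3: (["B"], ["E"]),
        5: (["D"], ["H", "I"]),
    }

    entries = [(o, r, sp)
               for sp, (inputs, outputs) in network.items()
               for o in inputs
               for r in outputs]

    list_I   = sorted(entries, key=lambda e: e[2])   # by subprocedure
    list_II  = sorted(entries, key=lambda e: e[0])   # by operand
    list_III = sorted(entries, key=lambda e: e[1])   # by result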
We seldom go through all paths in all subprocedures for one record or piece of intelligence. Conditions which must be met before executing an individual path represent intelligence which is derived from the data processed. There are options in the way such conditions are treated, depending on the problem and the hardware. For this reason a discussion of their treatment is deferred.

ORGANIZING PROCEDURES

The technique outlined below (Steps 1 through 6) produces a list which represents the same time series as shown in Figure 2. If the operation is started with the final results (Steps 1 through 6A), a list is produced which shows the last possible time period by which subprocedures must be executed.

Step 1-Compare the Operand fields in List II with the Result fields in List III. Note four conditions:

a. An O field in List II does not match an R field in List III. Record the items on a list of Unmatched Operands:

    O   R   SP
    A   B   2
    A   C   2
    A   D   2

This condition must be noted for each relative time period when analyzing the subprocedures for first possible time period. It will be ignored when analyzing for the last possible time period.

b. An O field in List II does match an R field in List III. Record the items on a list of Matched Operands. This is a reduced List II.

c. An R field in List III does match an O field in List II. Record on a list of Matched Results. This is a reduced List III.

d. An R field in List III does not match an O field in List II. Record on a list of Unmatched Results:

    R    O    SP
    AR   AD   32
    AR   AQ   32

This condition will be ignored when analyzing for the first possible time period. It must be noted for each relative time period when analyzing for the last possible time period.

Series for first possible time period:

Step 2-Rearrange the data from Step 1a to read

    R   O   SP
    B   A   2
    C   A   2
    D   A   2

and sort on the R field.

Step 3-Delete the items from the list of Matched Results (Step 1c) which are identical to those from Step 2. If this deletion removes all references to the result identifiers, we state that these results can be secured in the first relative time period.

Step 4-Rearrange the data from Step 2 to read

    SP   O   R
    2    A   B
    2    A   C
    2    A   D

and sort on the SP field.

Step 5-Delete the items from List I which are identical to those from Step 4. If this deletion removes all references to the subprocedure identifiers, we state that these subprocedures can be accomplished in the first relative time period.

Step 6 and continuing-Repeat Steps 1 through 5 for successive time periods until Lists I, II and III are exhausted.

Series for last possible time period:

Step 2A-Rearrange the data from Step 1d to read

    O    R    SP
    AD   AR   32
    AQ   AR   32

and sort on the O field.

Step 3A-Delete the items from the list of Matched Operands (Step 1b) which are identical to those from Step 2A. If this deletion removes all references to the operand identifiers, we state that these operands must be used by the last relative time period.

Step 4A-Rearrange the data from Step 2A to read

    SP   O    R
    32   AD   AR
    32   AQ   AR

and sort on the SP field.

Step 5A-Delete the items from List I which are identical to those from Step 4A. If this deletion removes all references to the subprocedure identifiers, we state that these subprocedures must be accomplished by the last relative time period.

Step 6A and continuing-Repeat Steps 1, 2A through 5A for preceding time periods until Lists I, II and III are exhausted.
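The first-possible-time-period series amounts to repeatedly peeling off the subprocedures all of whose operands are already available. The sketch below is a set formulation of the sort, match and delete steps rather than the paper's card mechanics, and the network fragment is invented for illustration; it also raises an error for the closed-loop situation noted in the next paragraph.

    # Illustrative sketch of the "first possible time period" analysis: in each
    # relative time period, schedule every subprocedure whose operands are either
    # initial inputs or results already produced in an earlier period.
    def first_possible_periods(network):
        """network: {subprocedure: (list of input operands, list of output operands)}"""
        produced = set()                       # results produced so far
        all_results = {r for _, outs in network.values() for r in outs}
        # operands never produced by any subprocedure are the initial inputs (Step 1a)
        available = {o for ins, _ in network.values() for o in ins} - all_results
        remaining = dict(network)
        periods = []
        while remaining:
            ready = [sp for sp, (ins, _) in remaining.items()
                     if all(o in available or o in produced for o in ins)]
            if not ready:
                raise ValueError("closed loop in the network")
            periods.append(sorted(ready))
            for sp in ready:
                produced.update(remaining[sp][1])
                del remaining[sp]
        return periods

    # Hypothetical fragment: subprocedure 2 turns A into B, C, D; 3, 4, 5 consume them.
    net = {2: (["A"], ["B", "C", "D"]),
           3: (["B"], ["E"]),
           4: (["C"], ["F"]),
           5: (["D"], ["G", "H"]),
           6: (["E", "F"], ["I"])}
    print(first_possible_periods(net))   # [[2], [3, 4, 5], [6]]

The last-possible-period series is the same process run backwards from the final outputs, as in Steps 2A through 6A.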
Unusual conditions can be detected by this technique. The operands which are initial inputs, and the results which are final outputs, are identified as the lists secured in Steps 1a and 1d the first time. Any errors in their identification are corrected before performing the other steps. In addition, if items remain in Lists I, II and III and no items fall out in Step 1a or 1d, as the case may be, a closed loop exists which must be corrected.

Table 2 shows the results of this analysis by relative time period. The fact that subprocedures 3, 4 and 31 can each be scheduled in one of several relative time periods is evident. The selection of the best fit can be accomplished on a computer by applying the rules stated previously.

TABLE 2. ALLOCATION OF SUBPROCEDURES

    Relative Time    First Possible Execution    Last Possible Execution
    Period           of Subprocedure             of Subprocedure
    1                2                           2
    2                3, 4, 5                     5
    3                6, 7                        6, 7
    4                8                           8
    5                9, 12                       9, 12
    6                10, 13                      3, 4, 10, 13
    7                11, 14                      11, 14
    8                15, 17                      15, 17
    9                16, 18                      16, 18
    10               19                          19
    11               20                          20
    12               21                          21
    13               22, 31                      22
    14               23                          23
    15               24, 25                      24, 25
    16               26                          26
    17               27, 28                      27, 28
    18               29                          29
    19               30                          30, 31
    20               32                          32

One of the advantages which can be gained by using a multi-processor is the reduction in the time and effort required to store and retrieve intermediate results. The above technique organizes the subprocedures which produce results. For each relative time period, that subprocedure which requires the greatest amount of time can be identified. The complete chains of subprocedures which can be assigned in series to one processor can be identified. The exchange of data between processors can be scheduled to minimize memory requirements. These are positive benefits which can be achieved with this technique. It applies a modification of PERT and CPM to the organization of work for a computer.

CONDITIONS

The treatment of necessary prior conditions can be accomplished with the same technique. We need only regard a comparison, which determines the path to be followed between subprocedures, as generating data. For our purpose we treat intelligence about data in the same way we treat the data itself. It is only necessary to identify this intelligence in the same way we identify data and then proceed through the same steps. We can identify intelligence about data with the form: Operand 1, Operand 2, Value. Operand 1 and Operand 2 can be data fields or can be literals. Any data fields must be shown as an input to the subprocedure performing the comparison. The value field is considered necessary because we can never have only one such result coming from a subprocedure. The value field permits a verification that all conditions have been considered. The sum of all value fields must equal seven for each comparison.

[Table 3: Condition values on comparisons. Each relational condition (<, >, =, and their complements) is assigned a value from 1 to 6 such that a condition and its complement sum to seven.]

From the foregoing it is obvious that the simplest technique to use for comparisons which affect branching between subprocedures is to treat the comparisons as subprocedures by themselves. Each decision affecting branching would be a subprocedure. This leads us to an alternative method. The main flows of data in a multi-processor could be executed without regard to data dependent branching. All data dependent decisions could be made by a separate processor and the proper final results selected. By this method some operations would be performed which would prove useless. Depending on the environment this alternative might save considerable elapsed time.

COMPILING AND EXECUTING

We assume that the organization of the procedure will take place prior to the compilation of an object program.
The compiler should be able to provide, with the loadable program, all of the intelligence concerning the network, the time periods and the execution times for each subprocedure. In turn the executive or monitor routine must be able to treat each scheduled subprocedure in the same manner as it treats any interrupt from an outside source. A subprocedure would take on the priority of its governing procedure. In effect we are subdividing every piece of work to be done into the smallest practical unit. A subprocedure could be the inversion of a matrix or the comparison of two data fields. The compiler determines how the work can be subdivided and overlapped. The executive routine determines which component shall execute it and when.

HARDWARE

To provide complete flexibility in the hardware associated with the central memory, a binary addressing scheme seems to be the logical choice. It also seems desirable to provide one static register per processor with a bit size equal to the memory width. Each program would be assigned a base register which would contain a binary number, the starting bit address. Each processor can contain a one, two, or three bit multiplier (hardware) which would operate on the address portion or portions of an instruction before incrementing with the base register. If a processor contains only two static registers and a plugboard, all intelligence as to sequence must be within another processor, possibly the one which contains the executive routine.

CONCLUSION

We have described a technique to break down the solutions of problems to permit maximum parallel operation within one problem. We have not described in detail the hardware needed to execute programs in this manner. We consider that the hardware will be developed.

REFERENCES

1. Staff of Engineering Research Associates, Inc., High Speed Computing Devices, McGraw-Hill Book Co., Inc., New York, 1959, p. 185.
2. Ibid., p. 188.
3. J. H. Holland, "A Universal Computer Capable of Executing an Arbitrary Number of Sub-Programs Simultaneously", Proc. Eastern Joint Computer Conference, 1959.
4. D. L. Slotnick, W. C. Borck, and McReynolds, "The Solomon Computer", Proc. Fall Joint Computer Conference, 1962.
5. Martin H. Weik, "The ENIAC Story", Ordnance, American Ordnance Association, Washington, D. C., January-February 1961, pp. 571-575; also Ref. 1, pp. 194-197.
6. Reston M. Aras, ASCE, and Julius Surkis, "PERT and CPM Techniques in Project Management", Journal of the Construction Division, Proceedings of the American Society of Civil Engineers, Vol. 90, No. CO1, March 1964; reproduced in UNIVAC publication UP3952.

Dr. Harry D. Huskey*

On-Line Computing Systems: A Summary

*Professor of Electrical Engineering, University of California, Berkeley.

ON-LINE COMPUTING was defined by Walter Bauer as the efficient use of a computer in a system in which the computer interfaces with man or other machines, to which it reacts in receiving and supplying information. Reviewing the six kinds of on-line systems described by Ivan Sutherland, I would propose the following revision of the definition: "On-line computing is the use of a computer interfaced with man or machines, to which it reacts in receiving, processing, and transmitting information." I am most interested in the interface with man, and here the computer is on-line if the man awaits the computer's response, not having turned his attention to other matters. It is clear that during the era from 1949 until, perhaps, 1954, all automatic computers were essentially used in an on-line manner.
In other words the customer sat at the console, pushed buttons, and tried to interpret the displays (binary neon indicators) in terms of the expected progress of the problem. In 1954, with the delivery of large commercial computers (with more attention to actual dollar costs), it was no longer sensible to let a user flounder through a problem by pushing buttons and "thinking" at a console. Thus we have the era of BATCH PROCESSING. Now in 1965, using the experience gained in developing military systems, many efforts are underway to improve the communication between man and the computer. Obviously the man will make maximum progress on his problem if he can work on it with sustained attention. This implies the use of a personal console available to the man for relatively long periods of time. In most situations his optimum progress on his problem requires computing over very short intervals of time. Consequently, a well organized central computer capable of servicing many low cost stations is required.

ECONOMICS OF ON-LINE COMPUTING

At the University of California in Berkeley, 16,000 problem passes were made in the month of January, 1965, on the DCS (7040-7094) system, utilizing 57 percent of the available computer time. The average problem time was 1.56 minutes and the average equipment cost per problem-pass was $3.00. The cost of expendable supplies per problem-pass was about $0.30. The capital cost of a teletype style station ranges from $400 to $2,000, and communication costs range from $2.00 per month (for direct lines on the Berkeley campus) up to $4,300 per month for a leased private voice-grade line across the United States.

Since the teletype user can see the immediate effect of his editing of his program he will use substantially more central computer time. I estimate three times as much, which I think is a conservative figure. Certainly the use of the on-line stations reduces the use of cards and high speed printer supplies. Let us assume that this goes to zero and that the cost of paper and ribbons for teletypes can be ignored. If a teletype station is used 300 hours per month (14 hours per day for a five day week) and costs $150 per month (the price of a QUIKTRAN station; note that Morrissey quotes $1,000 per month per station as total cost), then the remote-station hardware cost per problem is $0.50 on the basis of one hour use. Central computer use per problem is probably up by a factor of three; e.g., instead of three compilations to debug a ten statement problem there are perhaps eight to ten passes. Therefore, the actual cost of doing a problem using remote stations on the basis described above is perhaps three times the cost by batch processing methods, primarily because of the extra problem-passes which can be easily taken from the console. Certainly reduction in cost of input-output equipment at the central computer, improved methods such as incremental compiling (statement by statement), and incentives to encourage non-wasteful use can improve the cost ratio. However, even under optimum arrangements the total problem cost for remote station computing appears to be perhaps twice that for batch processing.
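The arithmetic behind these estimates can be laid out explicitly. The sketch below simply recomputes the figures quoted above (equipment cost per pass, console cost per problem-hour, and a roughly threefold increase in passes); the choice of nine on-line passes is an assumed midpoint of the "eight to ten" quoted, not a figure from the text.

    # Recomputing the cost figures quoted in the text.
    batch_cost_per_pass = 3.00        # average equipment cost per problem-pass ($)
    console_rent = 150.00             # teletype-style station, $ per month
    console_hours = 300               # hours of use per month
    passes_batch = 3                  # e.g. three compilations to debug a problem
    passes_online = 9                 # assumed midpoint of "eight to ten passes"

    console_cost_per_hour = console_rent / console_hours           # 0.50
    online_cost = passes_online * batch_cost_per_pass + console_cost_per_hour
    batch_cost = passes_batch * batch_cost_per_pass
    print(console_cost_per_hour)                  # 0.5
    print(round(online_cost / batch_cost, 2))     # roughly 3, "perhaps three times"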
Those who argue that remote-station computing will be cheaper do so on the basis that (1) incremental compiling will permit clearing syntax errors in one pass (most current batch processing compilers could be much better designed from this point of view), and (2) fewer results will be printed, since the person with the problem knows he can easily get other results if needed. I am willing to admit that there will be a reduction in the printing of wrong results, but at reasonable traffic levels the person with a problem to be solved does not easily get other results. Consequently, when he thinks his program is correct (and he is an eternal optimist) he will ask (curbed only by hard cash economics) for extensive print-outs at the central computer facilities. It is unfortunate that the problems of saturation of the on-line console system were not discussed.

Whatever the true picture relative to the costs of batch processing versus console computing, there is no doubt whatsoever that from the point of view of elapsed time the console is far superior. As Weizenbaum says: Man is conserved, not the machine. On the other hand, the system designer has a tremendous task in discouraging the user from experimenting trivially just to see what wonderful things will be done for him. It is exactly this last point and the economics discussed above that lead us at Berkeley not to plan for consoles for undergraduate teaching purposes. In fact, these students do not even get normal turnaround. If their problems are left by a fixed time in the evening, the results will be back the next morning. This system permits the assignment of a substantial problem once a week, due one week later. At the same time we are installing remote consoles (1050's) which, during scheduled periods, provide QUIKTRAN, and at all other times permit communication with problems in process in the DCS system (problems running under IBSYS, the FORTRAN II monitor, and numerous special languages such as COBOL, COMIT, SNOBOL, IPL, NELIAC, etc.). Such facilities will be available to graduate students and staff but not in undergraduate teaching.

FUTURE OF ON-LINE COMPUTING

Walter Bauer has estimated that by 1975, 90 percent of computing will be done ON-LINE. I would like to examine this for a moment. Following Ivan Sutherland we can divide computing into the following categories:

1. Process control
2. Inquiry Stations
3. Specialized Systems (Engineering Design)
4. Instrumentation of on-line systems
5. Programming Systems
6. Problem Solving Systems

Categories one through four are intrinsically on-line. The tasks associated with five and six can be accomplished by batch processing or on-line techniques. My opinion is that (because of economics, and in line with the discussion of the preceding section) program debugging and problem solving will always be done both on-line and by batch processing. In these two areas the balance is anyone's guess; it depends strongly upon the relative accomplishments in the improvement of batch processing techniques and in the improvement of on-line techniques. My guess is that the division will be about 50%-50%. It is possible that in 1975 process control, inquiry stations, and on-line instrumentation will represent 80 percent of the total computing done, in which case Walter Bauer's estimate would be right. However, I believe that more than 10 percent of the total computing will still be batch processing.
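As a small illustration of how the 90 percent figure is composed, the sketch below (Python, added here only for illustration) follows the scenario just described: the intrinsically on-line categories grow to 80 percent of all computing, and programming and problem solving split evenly between on-line and batch use.

    # Hypothetical composition of the 1975 estimate discussed above; the 0.80
    # and 0.50 figures are the assumptions stated in the text, not measurements.
    intrinsically_online = 0.80             # categories 1-4: process control, inquiry, instrumentation
    remaining = 1.0 - intrinsically_online  # categories 5 and 6: programming and problem solving
    online_share_of_remaining = 0.50        # the "about 50%-50%" guess

    total_online = intrinsically_online + remaining * online_share_of_remaining
    total_batch = 1.0 - total_online
    print(total_online, total_batch)        # 0.9 and about 0.1, i.e. Bauer's 90 percent figure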
INSTRUMENTATION

The evidence of the lack of precise information in the two preceding sections supports the contention of Sutherland and Amdahl that much more instrumentation of on-line systems is needed, so that we know what is going on, what the typical user does, and what the variations are from the norms. It is only with this information that systems can be "trimmed" so as to optimize usefulness to the customer array.

TYPES OF COMPUTER USERS

Computer users can be classified into two groups: those who require hardware response before loss of attention occurs or frustration cancels any advantage, and those who only need results some time later (minutes, hours, or days), as they can, without ultimate loss, turn their attention to other tasks in the intervening period. The on-line user is thus distinguished from the batch processing user. It is also possible to classify users on another scale as follows:

1. The query user, who asks the machine (hardware and software) questions and expects answers in real-time (he awaits the answer without transferring attention to other matters).
2. The reaction user, who submits himself to the control of the machine. Usually the decisions involved depend upon an environment too complex and too quickly changing for a man (or team of men) to make the decisions. The answers to all questions are programmed a priori. An example is the abort program of project Mercury.
3. The heuristic user. A complex problem is under consideration, and the complete exploration of the solution tree is beyond the capability of the system (probably time-wise). Therefore, on some basis the solution tree is pruned by the man, and the machine moves him through the reduced tree.
4. The partnership. The partnership between man and the machine can range from a limited partnership, as represented by Morrissey's QUIKTRAN, to a more general partnership, represented by project MAC at MIT or the time-sharing system at SDC. The work on self-organizing systems is looking forward to even more interesting partnerships.

FAIL SAFE

On-line systems are still in their early development stage, but now that systems are beginning to work, I think that it is obvious that more attention should be paid to the fail-safe aspects of the problem. Topics to be considered here are memory protection or assignment, system-controlled input and output, and well designed user language features. The merits of simpler systems, like JOSS, should not be discounted. Walter Bauer made a significant point in this respect: the user should be led down a procedural path.

THE CULLER-FRIED APPROACH

The use of a storage tube simplifies the remote station device, placing it in the same cost area as sophisticated teletype-style stations. Although such stations are not as versatile as the dynamic or memory-buffered types, they nonetheless accomplish the most significant improvements in communication that display scopes represent. In fact, the comments of Pope and Tomkins support the old adage: a picture is better than a thousand words. As indicated by Pope, perhaps the most important aspect of such station systems is their aid in the development of the solution algorithm. In comparison, teletypewriter methods might be like trying to appreciate the art in the National Gallery by exploring it at night with a pencil-size flashlight.

ENGINEERING DESIGN WITH DISPLAYS

The use of display consoles in engineering design seems to me (an outsider) to be pushing the state of the art a bit.
My feeling is that here is a powerful technique with almost unlimited potentialities and, therefore, I strongly support all the work in the field. However, at this early stage in the development I feel that at best it is a clumsy tool. Therefore, we should be extremely careful about raising false hopes. On the other hand, the very complete presentation of Donn Parker shows a tremendous effort in this direction. And whatever the relative efficiencies of this tool, I "drool" when thinking about the possibility of using the system. In fact, I wonder about incentives to minimize the "playing around" of the operator.

LANGUAGES

As illustrated in the MAC system and in Donn Parker's system, we are going to see many problem-oriented language systems for use with remote stations. So far, many of these are adaptations of batch processing languages, but time will see the development of systems optimized for remote on-line use. In the area of procedure-oriented languages, QUIKTRAN is an adaptation of the batch processing language FORTRAN. In reviewing Johnson's PAT language one wonders if it could not have been developed with a much closer relationship to either ALGOL or NPL. For language exploration this is not too important. However, if anyone is developing a language which he expects to be widely used, then he should base it on either ALGOL or NPL if possible.

SUMMARY

I am impressed by the large show of interest in the subject, and I hope this is symptomatic of a hard look at the problems of improving batch processing systems and, more importantly, at the greater variety of tools made available by improved man-machine interface equipment (hardware and software). I feel that the culmination of the developments described in this set of papers marks the first real step in improving the use of the computer as a research and development tool. In other words, making the computer available to an individual on a more or less continuous basis is the first significant step in improved use of computing equipment since the first automatic computer was put into operation in 1949.

List of Attendees
Symposium on On-Line Computing Systems

NAME Robert Abbott Milton B. Adams NAME AFFILIATION Lawrence Rad. Lab. Stanford Res. Inst. Gale R. Aguilar K. D. Allen Robert H. Allen James R. Ameling Jeffry Amsbaugh Beckman Instruments Technical Operations Sun Oil Martin Anderson Sypko Andreae Frank B. Andrews, Jr. G. W. Armerding W. D. Armour Lawrence Rad. Lab. NCR Company RAND Corp. IBM Wendell Arntzen Lester G. Arnold Eastman Kodak Co. J. R. Ashcraft H. L. Asser UNIVAC Sandia Corp. IBM Marc Auslander IBM-Boston Herbert F. Ayres Morgan Guaranty Trust C. L. Baker Frank R. Baldwin Berton E. BarkerHugo H. Barlow, Jr. Sheridan F. Barre A. Lewis Bastian, Jr. K. E. Batcher Edgar A. Bates Robert S. Bauder Fred W. Bauer T. J. Beatson Dr. D. C. Beaumariage Theodore S. Beck James O. Berish Marvin Berlin Martin F. Berman U. Berman P. Berning Marvin N. Bernstein IBM Gen. Instrument Corp. Natl. Cash Register Co. IBM Perkin-Elmer Corp. TRW Space Tech. Lab. Litton Industries Mort Bernstein Hilmer W. Besel L. P. Best Thomas P. Bianco Lyle P. Bickley Eugene P. Binnall Dr. Garrit A. Blaauw Max Blakely Hawley Blanchard La Sierra College Lockheed Calif. Co. IBM IBM Lawrence Rad. Lab. IBM Data Systems Div. Boeing Co. Planning Res. Corp. R. J. Blanken Bunker-Ramo Corp. IBM Martha Bleier System Dev. Corp. Sandia Corp. Aerojet-General Corp. Natl. Cash Register Co. IBM Goodyear Aerospace Corp. Stromberg Carlson Corp. Security 1st Natl.
Bank Burroughs Corp. R. E. Bleier Melvin H. Blitz System Dev. Corp. Sylvania Elec. Systems Joseph Blum Brooke W. Boering Joe Bogar Robert G. Bolman Talman Federal Savings Friden, Inc. Bunker-Ramo Corp. Robert Bomeisler Elaine R. Bond IBM-Boston Richard C. Bond Natl. Cash Register Co. R. U. Bonnar General Electric Co. J. M. Bookston Bunker-Ramo Corp. J. C. Borgstrom UNIVAC A. A. Borsei John D. Beierle J. F. Bost Alan Bell Edward C. Boycks Robert W. Bemer UNIVAC Douglas N. Brainard J. Philip Benkard IBM J. Russ Bennett IBM Digitronics Corp. RAND Corp. Bob Bearden George C. Beason John R. Bennett Robert Bernard Boeing Company James Ashbaugh James Bennett Evelyn Berezin Moore Business Forms AFFILIATION Burroughs Corp. Daniel Brayton W. J. Brian Shell Development General Motors UNIVAC Beckman Instruments, Inc. Electro-Mech. Res., Inc. Raytheon Computer Lawrence Rad. Lab. Sanders Assoc., Inc. Control Data Corp. 145 Harold L. Brint Sandia Corp. Wm. Broderick General Electric Co. Charles Brodnax Texas Instruments, Inc. J. H. Brown J. Reese Brown, Jr. Robert J. Brown Ross H. Brown Amcel Buchanan Capt. Thomas K. Burgess David L. Bussard E. Daniel Butler E. D. Callender Carlo Paul Calo S. H. Cameron Richard G. Canning F. C. Carlin Charles E. Carlson Charles H. Carman Herbert F. Congram Mitre Corp. George E. Comstock Natl. Cash Register Co. Univ. of Calif. Tinker AFB USAF Office of Sci. Res. Hughes-Fullerton Ernst and Ernst Aerospace Corp. USN Sta., BI. 796, Washington I. I. T. Canning Publications, Inc. Lockheed Calif. Co. N. Y. St. Dept. Public Works U. S. Dept. of Interior J. E. Carrico Union Bk. Compo & Servo Ctr. Keith Carter USAF Arthur F. Casey C. T. Casale Stromberg Carlson Control Data Corp. Perry Cassimus Natl. Cash Register Co. David Caulkins Caulkins Assoc. James V. Cellini Herb Chalmers Carl Chamberlin S. H. Chasen Tien Chi Chen Leonard G. Chesler USAF ITT-DISD IBM Lockheed Georgia Co. IBM RAND Corp. IBM D. L. Clark Charles A. Clark David L. Clark Donald J. Clark Charles Corderman E. A. Corl Charles F. Corley W. L. Coultas Daniel L. Covill Daniel E. Cowgill E. C. Craddock UICol. Albert Crawford Charles Crawshaw James J. Crockett Timothy Cronin Holiday Inns of America General Electric Co. Friden, Inc. IBM Dev. Lab. US Army Eng. Div., So. Pac. Scientific Eng. Inst. R.C.A. IBM Shell Oil Co. Burroughs Corp. Research Analysis Corp. Aerojet-General Corp. General Electric Co. USN USMC F. C. Cunningham Wm. S. Currie D. J. Dantine Howard Davis George W. Dawson R. L. Day George DeFlorio James W. Dening Donald E. Denk R. P. D'Evelyn Richard Devereaux Robert J. Dolan John P. Dolch Texaco, Inc. Clark Equipment Co. USAF IBM Natl. Cash Register Co. System Dev. Corp. Dept. Hlth. Educ. & Welfare Pacific Missile Range Autonetics, Div. N. Am. Av. IBM Clerk, Bd. Supervisors University of Iowa Terence P. Donohue Control Data Corp. 7st Natl. Bk. of Chicago State of Calif.-DWR Richard Dorrance United Res. Servo Corp. IBM Roger R. Dougan Westinghouse Elec. Corp. MIT Lincoln Lab. James O. Drake Arthur D. Little, Inc. Walter F. Dudziak General Electric Co. Douglas M. Duffy C. I. A. Bell Telephone United Res. Servo Corp. Dorothea S. Clarke General Electric Co. Donald M. Clarry Technical Operations D. B. Earl Daniel P. Clayton Bell Telephone Labs. Tom S. Eason Mesa Scientific Corp. U.S. Govt. Patricia Eddy System Dev. Corp. Richard Cleaveland Theodore M. Dunn Wm. Clelland III USN Ord. Test Sta. H. P. Edmundson C. T. Clingen General Electric Co. Charles M. Edwards Richard Clippinger W. R. Coates E. G. 
Coffman Honeywell Inc. AT&T Co. System Dev. Corp. R. Wade Cole R. E. Edwards John B. Eichler Stanford University Control Data Corp. Bendix Corp. General Electric Co. ITT Research Inst. Vernon Eisenbraun J. Eliades Victor Colburn 146 Robert P. Cook Richard Dooley Lloyd Christianson George J. Christy Carl J. Conti Owens-Illinois Fletcher Donaldson Clayton Chisum Duane M. Christ Boyd F. Connell White Sands Mis. Range Burroughs Corp. McDonnell Automation Ctr. Donald Colvin Jack Connolly Robert J. Broen Charles Broman B. D. Collins B. Elliot Aerospace Corp. TRW Space Tech. Lab. R. Ellison Herman Englander U. S. Steel US Navy Electronics Lab. R. Gillespie William English Robert Gilmour Barry Epstein R. G. Glaser Fred Erman Edward Golden Nathan Estersohn R. E. Etheridge Bob O. Evans Bendix-Pacific Div. General Electric Co. IBM Phillip Goldstein D. R. Goodchild Harry S. Goples George F. Fagan The MITRE Corp. Harold Gottheim H. M. Farmer Aerospace Corp. Andrew M. Gould Wm. A. Farrand Autonetics C. C. Farrington TRW Space Tech. Labs. Alex G. Fedoroff Grumman & Assoc., Inc. C. Nick Felfe B. Fenton Harold D. Feldman Marion L. Fickett Kent Gould Dale Green Electro-Optical Systems I. C. I. (New York) Inc. T. H. Green, Jr. UNIVAC Inst. for Def. Analyses Walter W. Flood Larry Forrest Ed Forsberg Wm. Fortenberry Carl Forth K. Scott Foster R. S. Fox Leonard Greenberg Bell Telephone Labs. Douglas Aircraft Co. RAND Corp. Herbert L. Gross Natt. Cash Register Co. Stanford University Harry Grossman Security 1st Natl. Bank Robert Grummett Natl. Cash Register Co. IBM R. C. A. Eastman Kodak Co. Walter Fredrickson J. A. French Robert Gunderson Matrix Corp. Lothar F. Haas, Jr. Natl. Cash Register Co. Wm. Hagerbaumer Electronic Assoc. Inc. Continental Oil Co. NASA-Marshall Lockheed Calif. Co. Honeywell General Electric Co. Vern E. Hakola A. Halenbeck Iver C. Hansen IBM Autonetics, Div. N. Amer. Av. Radiation, Inc. J. Harrington City & County, San Francisco Natl. Cash Register Co. Dr. J. Harriman General Electric Co. Don F. Harroff Research Labs, GMC H. P. Hart Melpac Inc. No. Amer. Avia. S&ID A. H. Hassan Erwin A. Hauck Lewis Gallenson System Dev. Corp. J. D. Hawkins UNIVAC Space Tech. Lab. Robert F. Hays Halroyd Haywood IBM Research Stanley Hawkinson George Hazelworth Herbert Getreu IBM Kenneth R. Gielow Lockheed Burroughs Electro-Data Lockheed Calif. Co. Clark Hayes General Electric Co. William B. Gibson Natl. Cash Register Co. Dean A. Hatfield MTS Bell Tel. Labs. Rexall Drug Co. Hughes Aircraft Thomas E. Hassing L. E. Gallaher H. Gelernter Burroughs Electro-Data D. L. Harmer W. H. Fuhr C. L. Gerberich Aerospace Corp. L. B. Hamilton, Jr. United Air Lines US Navy Electronics Lab. Alan E. Geiger Touche, Ross, Bailey, Smart Norton P. Halber Warren B. Harding Dr. H. Friedman J. E. Garvin Natl. Cash Register Co. White Sands Mis. Range W. H. Frye Richard J. Garcia Applied Logic Corp. Frank Gummersall W. Fred Frey Heinz Gabloffsky James R. Guard Univ. of Michigan S. J. Fox James W. Fraher Shell Oil Co. T. S. Greenwood Harold Paul Gross George W. Fleming Friden, Inc. Douglas Aircraft MSSD Bonner & Moore Assoc. UNIVAC Merrill M. Flood Control Data Corp. N. Y. St. Dept. Public Works Tulane University Robert C. Fife F. P. Fisher Rexall Drug Co. Imperial Chem. Ind., Ltd. Kellogg Company E. Griesheimer A. M. Fleishman McKinsey & Co., Inc. P. A. Greco NCR-ED Donald D. Fisher Stanford Research Inst. Charles Grayson Boris Field Sidney I. Firstman Control Data Corp. John T. Gilmore Bunker-Ramo Corp. Security 1st Natt. Bank James L. 
Heard No. Amer. Aviation Frank R. Heath Westinghouse Elec. Corp. R. C. Heath IBM 147 W. J. Heffner General Electric Co. L. B. Heggie Armin W. Helz Prof. Hetherington US Geological Survey University of Kansas NASA Munson Hinman Verlin Hoberecht Donald Hodges G. E. Hoernes IBM Argonne Natl. Lab. IBM D. J. Hallinger Stromberg-Carlson Robert R. Hohl Naval Res. Lab. No. Amer. Aviation Dr. Julian Kately Arthur Kaupe, Jr. Michigan State Univ. USAF Westinghouse Res. Lab. Bob Kawaguchi L.A. Dept. W & P C. S. Kazek, Jr. Los Alamos Scien. Lab. Roy Keir J. P. Kelly Mary Anne Kelley Neal M. Kendall Beckman Instruments IBM Brookhaven Natl. Lab. Natl. Cash Register Co. MTS-Bell Labs. Natl. Cash Register Co. Douglas T. Kielty Burroughs Electro-Data Fred Holloway Lawrence Rod. Lab. Daniel B. Killeen Tulane University John Holloway Douglas Aircraft Co. Robert Hollitch liT Research Inst. Thomas Holloran Samuel Hoover IBM Wm. J. Hoskins Douglas Aircraft Co. F. J. Killian G. W. Kimble Paul D. King H. A. Kinslow Donald Houck United Res. Servo Corp. Elliot B. Kleiman Richard Hovey Booz-Atlan Applied Res. John R. Knight R. C. Howard Giannini Controls Corp. Fred Koepping Robert Hsu Natl. Bureau of Stdrds. George M. Kohn Arnold H. Hubert Willis Hudson Larry D. Hughes D. Hull G. O. Hummel Roger L. Hummel Frank J. Hundt Norman L. Hunt Bernard Hurley Computer Usage Co. Control Data Corp. Computer Applications Chrysler Corp. No. Amer. Aviation Rexatl Drug Co. Natl. Cash Register Co. IBM RCA Daniel T. Hurley Thomas Jackson W. J. Kosinski Roger P. Kovach Dr. M. Krichevsky Dean C. Kriebel Norman L. Krueder Gene Kucinkas Akio Kumamoto Francis Kurriss Wm. J. LaBelle Jack G. lanotti F.lvie Alexander Korwek J. M. Kusmiss Pauline Hutter TRW Space Tech. Lab. Southwest Res. Inst. R. P~ Lagerstrom John A. Lamkin S. L. Jamison Charles Lander LtJCol. W. Jarrell Gerald Landsman E. A. Jennings M. K. John N. E. Johnson USAF Academy UNIVAC U. S. Steel Corporation Lockheed Calif. Co. G. L. Lane Robert E. Laws Bennett S. Lebow Isabel F. Johnston Sidney Lechter Robert Johnston David Legge Northrop Corp. TRW Space Tech. Labs. UNIVAC IBM Martin IBM UNIVAC IBM Bacchus Works General Electric Naval Supply Center NIDR, NIH Sun Oil Burroughs Electro-Data Foxboro Co. Data Processing Douglas Aircraft MSSD IBM Security 1st Natl. Bank Stanford University Datatrol Corp. NASA Motorola Sandia Corp. U.S. Steel Corp. DA, DCSLOG Data Proc. Div. NASA IBM Union Carbide Gary W. Jones Burroughs Corp. Ronald P. Leinius Glyn H. Jones Burroughs Corp. Leon Leskowitz U.S. Army Elec. Labs. Paul Leslie Westinghouse Electric Frank G. Jordan Earl C. Joseph K. R. Joseph Theodore Kallner Stanley L. Kameny 148 David Katch S. V. Khoury Dean A. Holdiman Control Data Corp. Western Union Tel. Co. Lt. H. Kaufman Bill Herr E. D. Hildreth H. Kanner J. D. Kassan IBM UNIVAC NASA IBM System Dev. Corp. Donald L. Lewis Frank J. Lewis IBM Radiation, Inc. Neil Lewis S. H. Lewis Aerospace Corp. Arlyn G. Liddell D. C. Lincicome Neil Lincoln Boeing Co. Stanford Research Services United Research Services G. J. Liviakis IBM Beckman Instruments, Inc. Richard Loewe Aeronutronic Div. Edward Loges Booz-Allan Applied Res. David H. Long Paul J. Long E. B. Loop David F. McAvinn J. S. McBirney Douglas Aircraft Co. Foxboro Co. Shell Oil Patrick McClung Andrew T. Ling Peter Linz M. D. Mayer USAF Holiday Inns of America C. C. McClurkin George McCormick Gerald McFadden Bonner and Moore Assoc. Aerojet-General Corp. IBM L. E. McHenry C. R. McKelvey John B. McLean Sandia Corp. Rome Air Dev. 
Center Union Oil Co. of Calif. Harvey McMains Amer. Tel. & Tel. Co. Floyd Looschen Burroughs Electro-Data Claude McMillan Michigan State University Jack A. Lord Gen. Tel. Co. of Calif. Torben Meisling Stanford Research Inst. University of Po. David Meldrum United Res. Serv. Corp. John F. Lubin V. H. Lucke Donald Lumbard Dr. R. F. Lyjak H. Lykken Hugh J. Lynch John C. Lynn J. D. Lyon Alan C. MacDonald General Electric Co. Natl. Cash Register Co. University of Michigan Minn.-Honeywell Reg. Co. Natl. Cash Register Co. NASA Sandia Corp. IBM Wellen B. MacLean John B. MacLeod Patrick McGovern Thos. McKinney Harold Malliot Donald Machen Charles Mackenzie F. B. Mackenzie Dr. Dale Madden R. A. Ma"et Wm. Marchman Lowell G. Market N. A. Marte"otto Wren Middlebrook Barrett Miller Naval Supply Center IBM Union Carbide Corp. Warren Milroy U.S. Navy Electronics Lab. NASA Lester Mintzer Interstate Elec. Corp. Ronald Miranda Tech. Operations Res. USN USNOTS James Misho Stanford Research Inst. Charles Missler Ford Motor Lawrence Rod. Lab. Don E. Mitche" AT&T IBM Burroughs Corp. IBM John Mitchell Rick M. Moore Stanford Research Inst. United Res. Servo Corp. John C. Morgan Bunker-Ramo Corp. Maynard Morris IBM Automatic Elec. Co. J. P. Morrison IBM Autonetics, No. Amer. Av. Arthur Moskin U.S. Navy Dept. Kern A. Moulton J. R. Mue"er United Aircraft Corp. George Mulho"and Douglas Aircraft Co. Wm. Murray John J. Manyak John Marez No. Amer. Aviation Natl. Cash Register Co. Stanford Research Inst. E. L. Manderfield R. B. Mapes Bart Michielsen Stephen Miller Allan Mandelin Wilbur Mann Karl Meyer Douglas Aircraft Co. University of Michigan John F. Miller J. L. Maddox R. J. Maguire Don W. Mercer John F. Meyer Inland Steel Co. Imperial Chem. Ind., Ltd. Minn.-Honeywell Reg. Co. Roger A. MacGowan A. Maclean W. R. Melton Si r J. S. Menteth Joseph P. Murphy IBM U.S. Navy Electronics Lab. UNIVAC Bell Telephone Co. J. P. Stevens & Co., Inc. IBM Assoc. Universities, Inc. IBM Tech. Operations, Inc. Henry F. Nanjo S. A. Nastt'o IBM Michael R. Nekara IBM J. O. Neuhaus Control Data Corp. Milton Martenson Dept. of Defense R. A. Nichols Autonetics, N. Amer. Av. Norman Marshall Bendix-Pacific Div. Ralph Niemi U.S. Navy Elec. Lab. Maj. Wm. Marsland Samuel Matsa Jesse Maury Justin N. Mead Seiler Research Lab. IBM W. E. Niemond General Precision Albert Noble, Jr. NASA-Greenbelt Jerre D. Noe Stanford Research Dow Chemical R. V. Norvill Sandia Corp. 149 R. W. Notto Thomas Nourse UNIVAC Foxboro Co. Wm. H. Ninke Bell Telephone Labs. Clark Oliphint Burroughs Electro-Data Glenn A. Oliver Ron Olsen Joseph Olsztyn James J. O'Neill H. N. Oppenheimer Robert C. Oram James O'Reilly Dr. R. Orensteen Vance E. Page D. J. Pagelkopf Maxwell O. Paley United Research Services General Motors CBS Labs. IBM Assoc. Universities, Inc. IBM Pacific Tel. and Tel. Control Data Corp. IBM Oscar M. Palos Alexander Papp Gabriel A. Pall Taylor Township IBM N. C. Panella R. A. Paris Wm. W. Parker R. Pash Lockheed Burroughs Corp. Minneapolis-Honeywell Frank Patton Lawrence Radiation Lab. P. F. Paul, Jr. Autonetics, N. Amer. Avia. C. R. Pearson 1. P. Stevens & Co., Inc. Ray S. Peck Gordon Pelton IBM Lockheed Missiles John Pendleton Douglas Aircraft MSSD James W. Perry University of Arizona W. M. Perry Eugene A. Peters Kenneth Pergande IBM No. Amer. Aviation Natl. Cash Register Co. Bernard Peters Walter N. Phillips Wm. S. Pickrell Bunker-Ramo Corp. P. E. Piechocki Richfield Oil Corp. Charles L. Pierce M. A. Pilla Thomas G. Pine Elliot Pinson Mandrel Ind. Inc. 
Stanford Research Inst. Bell Telephone Digital Dev. Corp. Bell Telephone Lab. B. W. Pogorelsky Natl. Cash Register Co. Steven T. Polyak Inst. for Def. Analyses A. G. Pontius Ted Poole Ivan J. Popeioy Fred Pradko IBM Scantlin Elec., Inc. General Electric Co. U.S. Army L. I. Press Eugene R. Puckett Thomas M. Putnam P. H. Pyle F. I. Quinlan 150 University of Calif. Tide.water Oil Co. Douglas Aircraft Co. Peter M. Rakich R. W. Ralston Walter Ramshaw Russell W. Ranshaw Wayne F. Rayfield Donald E. Rea Daryl Reagan Ottis W. Rechard Samuel C. Reese Otto Reichardt A. Reichenthal Roger L. Reifel Harry Reinstein Dr. W. Rheinboldt V. Thomas Rhyne Dr. C. N. Rice John E. Rideout A. O. Ridgway Thomas L. Ringer John F. Riordan Dan Robbins Lt. K. Robbins C. R. Roberts W. C. Rockefeller C. E. Rodemann LeRoy J. Rodgers George F. Roe C. N. Rollinger W. E. Rood Henrik M. Roos Herb Rosenheck Arthur V. Rubino Harvey Rubinstein Joseph Rue Linus F. Ruffing Arno.ldRuskin Richard Russell R. J. Ruud Joseph Ryan J. J. Salyers Harold Sackman E. J. Samuelli Lt. Col. H. Sanders Margo A. Sass Dr. H. Sassenfeld Kirk Sattley Marshall Savage Don Savitt David Saylor Ronald Schauer Jerold Schleicher So. Permanente Serv., Inc. Amer. Tel. and Tel. Co. United Aircraft Corp. University of Pittsburgh University of Wisconsin General Dynamics Stanford University N.S.F. Honeywell, Inc. Raytheon Computer Hughes Aircraft General Dynamics/ Astro. IBM University of Maryland NASA-Langley Eli Lilly & Co. IBM IBM IBM University of Michigan Benson Lehner No. Amer. Aviation IBM No. Amer. Aviation The Boeing Co. Whirlpool Res. Labs. U.S. Steel Naval Supply Center Hoffman Electronics Advanced Scientific Inst. No. Amer. Avia. C.I.A. Harvey Mudd College IBM IBM USAF Natl. Cash Register Co. System Dev. Corp. Kennecott Copper Corp. USAF Off. of Naval Res. General Electric Co. Computer Associates, Inc. Universal City Studios Hughes Aircraft Automatic Electric Labs. Union Bank R. G. Schluter George Schmidt L. A. Schmittroth Space Technology Labs. USAF MA & MC PPCO James Spitze Donald R. Spivey IBM Conrad R. Springer IBM Robert Schnuck IBM Jon S. Squire Vernon D. Schrag IBM Howard R. Stagner Donald P. Schultze N.S. Navy Dept. Friden Inc. Richard H. Stark Westinghouse Elec. Corp. Washington State Univ. Earl Stearns Dave Schumacher USAF G. P. Schumacher U.S. Navy Elec. Lab. Betsy Schumacker IBM F. G. Stockton Shell Dev. Co. Federal Reserve System David R. Stolz American Tel. and Tel. M. H. Schwartz Lee Stephens Friden, Inc. D. R. Strickland Robert Schwartz Raytheon Company Walter Schwartz MITRE Corp. Donald M. Styer Union Carbide Corp. IBM George R. Sugar Natl. Bureau of Stdrds. V. J. Scimone Sheldon Simmons Electronic Specialty Co. Dan W. Scott Richard B. Scott NASA John Sutherland IBM General Electric Co. Dr. G. H. Swift Bruce Sypkens Sears Roebuck Wm. A. Sensiba Western Electric Co. Lt. E. A. Schmidt USAF A. M. Shalloway Natl. Radio Astr. Obs. Wm. C. Shannon Crocker-Citizens Natl. Bk. Theodore Shapin Beckman Instruments Richard S. S'harp Christopher Shaw J. C. Shaw Burroughs Electro-Data System Dev. Corp. RAND Corp. Regis Shinger James L. Shira H. L. Shoemaker United Aircraft Corp. Bunker-Ramo Corp. Glenn W. Shook Dept. of Defense AI Shore Burroughs Corp. W. L. Sibley RAND Corp. Jeffrey Tai G. Platt Talcott Mary Tate Joseph E. Taylor Norman H. Taylor Richard Tedrick Kas Terhorst David J. Thomas G. A. Thompson K. L. Thompson R. Tink C. E. Skidmore Westinghouse Electric Dolan H. Toth Clark Equipment Co. R. E. Trainer Robert Skoog Lawrence Radiation Lab. 
N. E. Truitt IBM H. S. Tsou R. J. Smith Control Data Corp. R. D. Smith R.C.A. Robert W. Smith R. L. Snyder Glenn D. Sorensen Branko Soucek Arthur Speckhard Monroe Spierer Robert J. Spinrad A. J. Trangle Control Data Corp. Natl. Cash Register Co. IBM Western Union Tel. Co. Honeywell E.D.P. Brookhaven Nat/. Lab. Bellcomm, Inc. Technical Operations, Inc. Brookhaven Nat/. Lab. Honeywell, Inc. Control Data Corp. NASA Burroughs Corp. Control Data Corp. Don Thiede N. J. Skoner Earle Smith USAEL, Ft. Monmouth, N. J. William Thelsner Dale Tolin Melpar, Inc. Raytheon Co. Douglas Aircraft Co. N. Ralph Tipaldi Blanchard Smith Natl. Cash Register Co. Thomas Theberge H. W. Siegmann CarlO. Smarling IBM Dept. of Defense Robert G. Tantzen Lt. R. Singleton Derrel L. Siais General Motors Res. Lab. General Electric Co. J. Sedgewick Wm. S. Selers R. J. Sullivan Audrey Summers Harry Schwartz J. C. Tu S. T. Tuttle Donald L. Trask Fred Tremaine Rein Turn H. F. Tweeden Gordon T. Uber B. A. Udovin R. H. Udy L. L. Van Oosten R. Van Buelow IBM MITRE Honeywell, Inc. State Govt. Nat/. Cash Register Co. Perkin-Elmer Corp. Phillips Petroleum Co. Control Data Nortronics UNIV AC-Div. Sperry Rand Shell Dev. Co. General Precision, Inc. Kellogg Co. Lockheed Mis. & Space Co. County of Sacramento RAND Corp. IBM Lockheed Bunker-Ramo Corp. Aero;et-General Corp. Allstate Ins. Co. System Dev. Corp. 151 T. A. Van Wormer E. R. Vance Howard Vann R. S. Vatilla A. H. Vorhaus Paul L. Voss J. W. Wager Univ. of Pittsburgh C. A. Williams USAF E. R. Williams Goodyear Aerospace Corp. System Dev. Corp. Robert Williams Raymond Wilser Hughes Aircraft Co. RAND Corp. F. W. Wallace Citrus College John Walley Hughes-Fullerton W. C. Walter Northrop Corp. B. P. Warner Edgerton, Germeshausen & Grier Gordon M. Watson Paul V. Webb Harlan Webster Gary L. Weeks J. H. Wegstein Dr. B. Weiss Eric A. Weiss R. C. Wheeler IBM Computer Usage Co. Phillips Petroleum Co. Douglas Aircraft Co. Kennecott Cooper Corp. Natl. Bureau of Stds. Johns Hopkins University Sun Oil Co. Airborne Instr. Lab. Carl L. White Richard A. White W. Y. Whittemore H. R. Widmann J. B. Wiener Richard B. Wilke 152 R. C. Wilson William Wilson James E. Wollum Vincent Wollum Pearson L. Wood H. D. Woods Thomas F. Woods W. E. Woods Stanford Research Inst. Natl. Inst. of Health Inst. for Def. Analyses General Electric Co. General Tel. Co. General Electric Co. IBM Bunker-Ramo Corp. Collins Radio Co. IBM General Electric Co. IBM University of Michigan Cutter Lab. Burroughs Electro-Data Archer-Daniels-Midland Co. IBM H. D. Woods & Associates NASA -H ouston Computer Control Co. L. H. Woodward Stanley H. Woster William Wright H. Wyle Tak Yamashita Georgiana Yang Space Tech. Labs. General Electric Co. Autonetics, Div. N. Am. Avia. Bunker-Ramo Corp. IBM Norman Yarosh Thomas L. Yates William Whitacre Dr. Oliver Whitby Mary Williamson Navy Electronics Lab. R. A. Wagner E. Wasserman W. Wilkinson General Electric Co. Robert C. Yens A. W. Yonda M. G. Young V. J. Zapotocky D. C. Zatyko Oregon State Univ. MITRE RCA Shell Dev. Co. General Electric Co. SPO James R. Ziegler Natl. Cash Register Co. Charles A. Zuroff Brookhaven Natl. Lab.