
LEGAL NOTICE
When U.S. Government drawings, specifications or other data are used for any
purpose other than a definitely related government procurement operation, the
government thereby incurs no responsibility nor any obligation whatsoever; and
the fact that the government may have formulated, furnished, or in any way supplied the said drawings, specifications, or other data is not to be regarded by
implication or otherwise as in any manner licensing the holder or any other person
or conveying any rights or permission to manufacture, use, or sell any patented
invention that may in any way be related thereto.

OTHER NOTICES

Do not return this copy.

Retain or destroy.

ESD-TR-66-637

(FINAL REPORT)

DEVELOPMENT AND EVALUATION OF SELF-INSTRUCTIONAL TEXTS
AND AN OPERATIONAL SPECIFICATION FOR COMPUTER DIRECTED
TRAINING IN INTERMEDIATE QUERY LANGUAGE, MODEL II,
FOR SYSTEM 473L, UNITED STATES AIR FORCE HEADQUARTERS

October 1966

Doris Clapp Slough
David P. Yens
Judi L. Northrup
Harris H. Shettel

DECISION SCIENCES LABORATORY
ELECTRONIC SYSTEMS DIVISION
AIR FORCE SYSTEMS COMMAND
UNITED STATES AIR FORCE
L.G. Hanscom Field, Bedford, Massachusetts

Distribution of this document is unlimited.

Project: 7682
Task: 768204

(Prepared under Contract No. AF 19(628)-2935 by the American Institutes for Research, Pittsburgh, Pennsylvania.)

FOREWORD
One of the research goals of the Decision Sciences Laboratory
is the development of design principles for automated training subsystems which could be built into future Information Systems. Such
subsystems would provide Information Systems with the capability of
automatically training their own operators. The need for such on-the-job training capability has already become apparent. To be able
to design such a capability requires first the solution of many conceptual and experimental problems.
Task 768204, Automated Training for Information Systems, under
Project 7682, Man-Computer Information Processing, was established to
formulate and answer some of these questions.
This report is one in a planned series supporting Task 768204.
The project was undertaken by Decision Sciences Laboratory in support
of the 473L Systems Program Office. Dr. Sylvia R. Mayer of Decision
Sciences Laboratory served as Air Force Task Scientist and contract
monitor.
Contract support was provided by the American Institutes for
Research, Pittsburgh, Pennsylvania, under Contract No. AF 19(628)-2935,
with Mr. Harris H. Shettel, principal investigator; Mrs. Doris Clapp
Slough, project director; Mr. David P. Yens; and Miss Judi L. Northrup.
The technical guidance and support provided by Lt. Colonel Wood
Ellis of the USAF Command Post are gratefully acknowledged. Thanks
are also extended to the personnel of the USAF Command Post who served
in the field tests of this course.
In ensuring the development of the operational specification for
computer directed training in accordance with approved 473L standards,
the A.I.R. project staff had the extremely valuable guidance of
Dr. William F. Bennett, Mr. Marvin L. Chenevert, and Mr. Jack D.
Schiff of the Federal Systems Division of International Business
Machines, Inc. This is gratefully acknowledged. In addition, helpful
contributions to the training design for the computer directed training
package were made by Lt. Colonel Wood Ellis, Dr. Sylvia R. Mayer,
Mr. Jack D. Schiff, and Mr. Marvin L. Chenevert.
This technical report has been reviewed and is approved.

DONALD W. CONNOLLY
Project Officer
Decision Sciences Laboratory

ROY MORGAN
Colonel, USAF
Director, Decision Sciences Laboratory

ABSTRACT
This report summarizes the development and evaluation of a
programed, self-instructional course for on-the-job training of
Air Staff personnel in the use of Intermediate Query Language,
Model II. This is an information retrieval language used with
the computer based, Air Force command and control system, System
473L. In addition, it describes a computer directed training
capability that was designed specifically to use System 473L itself
to effectively and efficiently provide training in Query Language.
The report describes the need for on-the-job training and the
rationale for a computer directed training capability to provide
this training. It describes the development of the programed text,
the text itself, and the effectiveness of the text materials based
on tryout data. Finally, a description of the proposed computer
directed training course is given, with emphasis on the training
design. The 473L System configuration using the AN/FYQ-11 computer,
towards which this study was oriented, will not be implemented for
the Headquarters U. S. Air Force Command and Control System. However, this design study for the training subsystem may be of interest to researchers on computer-directed instructional systems.


TABLE OF CONTENTS

FOREWORD
ABSTRACT

SECTION I. INTRODUCTION
1.1 Background
1.2 The Air Force Command and Control System (473L)
1.3 The Training Problem
1.4 The Overall Training Strategy
1.4.1 The Need for a Programed, Self-Instructional Text
1.4.2 The Need for an Operational Specification for Computer Directed Training

SECTION II. THE DEVELOPMENT OF THE PROGRAMED TEXT
2.1 The Selection of Course Objectives and Content
2.2 The Selection of Specific Training Strategies
2.2.1 Trainee Characteristics
2.2.2 The Course Objectives
2.2.3 Characteristics of the Course Content
2.3 Description of the Final Program
2.3.1 The Preprogram
2.3.2 The Textual Program on Query Language
2.3.2.1 Number of Frames
2.3.2.2 Content of Frames
2.3.2.3 Reference Materials
2.3.2.4 Program Format
2.3.3 Computer Exercises
2.3.4 Test
2.4 The Drafting of Frames, Initial Program Tryouts, Technical Review, and Revisions
2.4.1 Subjects
2.4.2 Results
2.5 Field Tryout and Revision
2.5.1 Design
2.5.2 Subjects
2.5.3 Administration of the Field Tryout
2.5.4 Results of the Field Tryout
2.5.4.1 Completion Times for the Programed Materials
2.5.4.2 Scores on the Final Posttest
2.5.5 Revisions After the Field Tryout
2.6 Discussion and Recommendations

SECTION III. DEVELOPMENT OF THE OPERATIONAL SPECIFICATION FOR COMPUTER DIRECTED TRAINING
3.1 Introduction
3.2 Development of the Operational Specification
3.3 Description of the Operational Specification
3.3.1 The Training Sequence Logic
3.3.1.1 Complexity of the Training Design
3.3.1.2 Provision for Adaptation of the Capability
3.3.1.3 The Training Sequence Logic
3.3.2 Operating Procedures and the Procedural Flow Diagram
3.4 Discussion and Recommendations

REFERENCES

APPENDICES
A - Number of Frames in Each Volume of the Program
B - Final Test
C - Final Test Scoring and Answer Key
D - Sequence and Contents of the Computer Directed Training Sets

TABLES
1 - Average Error Rates on Draft Volumes Pertaining to QL that Were Used in the Preliminary Tryouts
2 - Percent Error for Each Trainee on the Program and Percent Correct on the Test for Each of the Preliminary Tryouts
3 - Background Data for Trainees Participating in the Field Tryout
4 - Program Completion Time for Each Trainee
5 - Percent Correct Scores on Final Test for Individual Trainees in the Field Tryout
6 - Design for Computer-Directed Training Overlay Process Step Key Functions

FIGURES
1 - Schematic Diagram of Planned Information Flow in the Design for a Second-Generation 473L System
2 - A Description of the Proposed Procedural Steps Required to Retrieve Data for a Specified Problem Using Query Language
3 - A Partial Listing and Description of the Data Files
4 - An Incomplete Table with Typical Values
5 - An Example of How Information is to be Retrieved from the Data Files Using Query Language
6 - Formats and Functions of SUM in the Output Selector
7 - Examples of Input and Output Formats for SUM
8 - The Training Sequence Logic Flow Diagram for Computer Directed Training

Editor's Note

The reader should note that the 473L System configuration using the AN/FYQ-11 computer, towards which this study was oriented, will not be implemented for the Headquarters U. S. Air Force Command and Control System.

This report is presented since the general design of the training subsystem could serve as a model for other computer-directed training courses in other computer-based military information systems.

Section I
INTRODUCTION
1.1 BACKGROUND

Query Language is a constrained version of the English
language which has been developed as a mode of man-machine
communication in the Air Force Command and Control System,
System 473L (this system is described in section 1.2). Query
Language has evolved over a period of time to accommodate
projected changes in the system hardware (viz., from the IBM
1410 to the Librascope 3055), and to permit problem solutions
for additional areas of resource management. This evolution
involved three successive versions of the language, in the following order: OTC Query Language; Query Language Model I;
and Query Language Model II. The modifications in the language
were substantial enough so that personnel trained in the use
of the Query Language designed for the Operational Training
Capability (OTC) phase of System 473L could not easily transition to the use of the Query Language designed for the Model II
phase of System 473L. In addition, appropriate training was
needed for new personnel. Therefore, a new training package
was requested to teach the version of Query Language appropriate
to the Model II phase of System 473L. It was further requested
that a detailed description be developed for a computer directed
training capability, in accord with 473L standards, so that the
Air Force could evaluate the feasibility of implementing such a
capability.
This report describes the development of a programed, self-instructional course designed for on-the-job training of System 473L users in Intermediate Query Language, Model II.¹
In addition, it describes the development of an operational specification for a computer directed training (CDT) capability designed
specifically to use the 473L System itself to effectively and
efficiently train Air Staff personnel in the use of Intermediate
Query Language, Model II. The proposed computer directed
training capability has further significance in its potential,
with appropriate adaptations, for providing training in other
uses of System 473L. In addition, the general design of the
training course set forth in the operational specification,
i.e., the training sequence logic, could serve as a model for
similar computer directed training courses in other computer
based military systems.
1.2 THE AIR FORCE COMMAND AND CONTROL SYSTEM (473L)

System 473L is an information processing and retrieval
system located in the Air Force Command Post at the Pentagon.
An integral part of the system is a large capacity computer.
¹The scope of Intermediate Query Language, Model II, is defined in Chapter 3 of 473L-OS-40: Operational Specification for Query Language, Model II, dated 13 April 1965, Unclassified.

The system is designed for use by Air Staff personnel in
solving USAF resource management problems. There are two
basic methods of communication between the System 473L
operator and the data processing subsystem: 1) operational capability
overlays and 2) Query Language. With the overlay capabilities
the operator is guided in selecting and making his input but
his retrieval of information is restricted to a set of previously specified outputs. With Query Language, the operator's
input is not guided, but his retrieval of information is not as
restricted. Thus, Query Language is a vital complement to the
overlay method of communication with System 473L.
1.3 THE TRAINING PROBLEM

In developing an overall training strategy, consideration
was given to two interacting aspects of the training problem:
the training task itself and the trainees.
The use of System 473L is not restricted to a small number
of system operators; it is intended that this system, including
Query Language, be used by a large number of Air Staff personnel.
These constituted the potential trainees for this course.
It was specified that the training course should teach
only Intermediate Query Language, since it includes those
aspects of the language that are most commonly used by Air
Staff personnel. The more advanced uses of Query Language are
restricted primarily to programmer use (e.g., for maintaining
and updating the data base), and thus these uses were not considered appropriate for the proposed course.
Proficient use of Intermediate Query Language requires a
knowledge of both the structure and contents of the system
data base and the rules regulating the use of vocabulary,
syntax, grammar, and punctuation of Query Language. In addition, since Query Language must be entered on the System 473L
integrated console, the user must ultimately become proficient
in the operation of this console. Thus, while training on the
basic use of the console is not an integral part of this training course, it was considered desirable to incorporate practice
on the operations that are most frequently used to input Query
Language.
1.4 THE TRAINING STRATEGY

Due to the dynamic structure and large size of the data
base for System 473L, the dynamic nature of Query Language
itself, the frequent turnover of Air Staff personnel, and the
vagaries of their normal and emergency duty schedules, effective and efficient training of Air Staff personnel as proficient users of Query Language requires an on-the-job training
capability.

1.4.1 The Need for a Programed, Self-Instructional Text. The need for maximum flexibility in the required training schedule to adapt to the needs of individual trainees, in terms of both individual learning rates and individual schedules of Air Force duties, suggested the desirability of developing programed, self-instructional materials that could be used for on-the-job training.
1.4.2 The Need for an Operational Specification for Computer
Directed Training. Since System 473L is an information processing
system, it is capable of implementing both training and evaluation
functions. The desirability of using the system itself as an
adaptive training device, in conjunction with programed training
materials, was suggested by several considerations:
1) The design for the second-generation System 473L has certain
physical characteristics that make it especially amenable to performing training and evaluation functions. The computer's integrated
console contains a keyboard to permit operator inputs and an overlay
board to permit access to the internal program. The overlay is
especially useful, since a special panel can be designed to provide
the trainee with direct access to and control over the computer
directed training functions. The computer can be programmed to
evaluate trainee inputs. In addition, the integrated console has a
cathode ray tube output that permits alphanumeric displays. Such
displays can be used to present both instructional (cue) and evaluative (test) materials. Further, the system permits time-sharing,
so that more than one training station could be made available at
any one time;
2) Letting the trainee use the system itself for training
would, it is believed, make the training materials inherently more
interesting and, as a result, increase the trainee's motivation to
complete the course. It should also enhance the trainee's proficiency in performing specific system operations. Enhanced facility
in operating the system would probably increase the frequency with
which trainees would use the system in the future;
3) The design for the System 473L computer permits more
flexibility than conventional training materials in adapting the
content and sequence of instruction to fit individual needs.
Remedial sequences and content can be contingent on an evaluation of a series of criterion responses (a brief illustrative sketch of such a branching loop follows this list);
4) In addition, a computer can be more flexibly responsive
than a text in complying with trainee requests for particular
training materials. That is, a computer is more adaptable to
learner-controlled instruction. This is a significant advantage
not only for the training phase of a course, but more especially

for proficiency maintenance training, when the trainee is more
capable of self-evaluation and determination of necessary remedial
materials;
5) Not only can knowledge of results be immediate, but
analysis-feedback can be provided to the trainee based on his
specific past performance;
6) Due to the computer's planned capability for rapid data analysis and its large storage capacity, detailed recording and
analysis of trainee responses and, therefore, increased accuracy
in assessing trainee proficiency and determining needs for specific
revisions in the training materials, is more feasible than with
conventional materials;

7) The potential high-speed printout capability provides automatic records of student progress;
8) A further, potentially significant advantage of a computer
directed training capability is that, with some adaptations, it
could also be used to provide training in other uses of System
473L.
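
To make points 3) through 6) concrete, here is a minimal, illustrative sketch of a branching training loop: present a frame, evaluate the response against a criterion answer, branch to remedial frames after an error, and keep a detailed record of every response. The sketch is in Python, which of course is not part of the 1966 473L design, and every name in it (Frame, run_training, the sample frames and answers) is a hypothetical stand-in for the training sequence logic described in the operational specification.

from dataclasses import dataclass, field

@dataclass
class Frame:
    prompt: str                                   # stimulus shown to the trainee
    expected: str                                 # criterion response
    remedial: list = field(default_factory=list)  # remedial frames shown after an error

def run_training(frames, get_response, log):
    # Present frames in order; branch to remedial frames when the trainee's
    # response does not match the criterion response, and record everything.
    for frame in frames:
        answer = get_response(frame.prompt)
        correct = answer.strip().upper() == frame.expected.upper()
        log.append((frame.prompt, answer, correct))
        if not correct:
            for remedial in frame.remedial:       # remedial sequence contingent on the error
                r = get_response(remedial.prompt)
                log.append((remedial.prompt, r, r.strip().upper() == remedial.expected.upper()))
    return log

# Hypothetical usage with two frames on the qualifier and canned responses.
frames = [
    Frame("Complete: RETRIEVE AIRFIELDS ____ COUNTRY = USA ...", "WITH",
          remedial=[Frame("The word that introduces the qualifier is ____.", "WITH")]),
    Frame("Complete: ... THEN ____ AFLD NAME", "LIST"),
]
canned = iter(["WITH", "LIST"])
record = run_training(frames, get_response=lambda prompt: next(canned), log=[])

The record produced by such a loop is the kind of data that would support the analysis-feedback, proficiency assessment, and revision needs described in points 5) and 6).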
Pretest data from the tryouts conducted with the programed
text for OTC Query Language² indicated that the potential trainees
are relatively homogeneous in terms of their prior knowledge of
Query Language, e.g., on the pretest used for the initial tryout,
no trainee scored higher than three per cent. Therefore, computer directed training was considered most desirable for the
final phases of training, when there would be a greater need
for instructional flexibility due to increased variability in
trainee proficiency. However, it was felt that the increased
motivation for training and the projected increase in future
use of the system due to increased operator proficiency would
justify the additional expenditure necessary to provide computer
direction for all phases of training, provided that this training
capability would not impede the operational capabilities of
System 473L.
In view of the above advantages of a computer directed
training capability, the Air Force decided that a detailed
operational specification should be developed to describe such
a capability and this operational specification should then be
evaluated to assess the feasibility of implementation in terms
of the impact on System 473L operational capabilities and the
scope of programming required for this capability. Since the
need for the programed, self-instructional text was immediate
and the feasibility of computer directed training was uncertain,
²A description of the development and evaluation of the programed text for OTC Query Language is presented in reference 1. The text itself is presented in reference 2.

the Air Force indicated that the programed instructional text materials should be developed first and that they should constitute a self-sufficient package, not dependent on any aspect of
the computer directed training package.

Section II
THE DEVELOPMENT OF THE PROGRAMED TEXT³
The development of the programed text involved several successive phases of work: the selection of course objectives and
content; the selection of training strategies; the drafting of
the programed materials, and initial program tryouts and revisions; and the field tryout and revision. These phases are
described below. In addition, this section describes the final
program and supplementary materials, and evaluates the effectiveness of the text.
2.1 THE SELECTION OF COURSE OBJECTIVES AND CONTENT

The primary objective of the programed instructional text is
to enable trainees to write statements in Intermediate Query
Language, Model II, that could be used to retrieve information
needed to solve moderately difficult resource management problems,
stated in English. Subcriterion objectives that were prerequisites
to the primary objective included an orientation to the 473L System,
and knowledge of the organization of information in the data files
in computer storage.
Before preparing the draft instructional program, it was
necessary to define the specific content of the materials that
would most effectively and efficiently reach the established
terminal objectives. Then, appropriate training strategies were
selected on a topical basis, according to their appropriateness
for each topic. Since Query Language transitioned from Model I
to Model II during the period of development, the original outline
and training strategies were specific to Query Language, Model I;
as Query Language evolved, changes were made in the program to
correspond to the changes in content. Thus, revisions during
program development required not just changes in training strategy
but also substantial changes in program content.
2.2 THE SELECTION OF TRAINING STRATEGIES

Training strategies are based on the interaction of three
factors: trainee characteristics, the course objectives, and
specific characteristics of the course content.
2.2.1 Trainee Characteristics. The following profile describes pertinent characteristics of the population of new trainees who will use this course: most trainees will have reached a minimum educational level of 12 years of public schooling; they will have received a 10-hour indoctrination course on the 473L System, including a one-hour orientation lecture on the use of Query Language; they will have received no formal training in the use of Query Language; they will have already acquired limited proficiency in using the integrated console in the overlay mode of operation.

Since initial variability of potential trainees in terms of their prior knowledge of Query Language is negligible, it did not appear urgent for the content of instruction to be varied for individual trainees. That is, there seemed to be no need to use special techniques such as branching or gating to accommodate for wide differences in previous knowledge.

In writing the instructional materials, an attempt was made to gear the reading difficulty to the ninth grade level in order to facilitate training efficiency. This was done by exerting control over sentence length; checking the level of infrequently used words and, whenever possible, replacing those whose level was unusually high; and, on a more empirical basis, by revising those steps which appeared difficult in terms of initial tryout data.⁴

³The programed text for Query Language, Model II, is presented in reference 3.

⁴The grade level of words was determined by referencing Thorndike, E. L. & Lorge, I. The teacher's word book of 30,000 words. New York: Columbia University, 1959.
2.2.2 The Course Objectives. The primary objective of this
course, as mentioned previously, is to develop proficiency in
writing Query Language statements for a representative set of
problem types. This required the development of verbal, conceptual skills. The instructional programing techniques selected
as most appropriate were the use of small steps, careful sequencing, overt written responding (with responses building in
complexity as more material is learned), immediate confirmation,
and self-pacing. The selection of this methodology was based on
previous experience at the Institutes with its application to
similar training problems.
2.2.3 Characteristics of the Course Content. The selection of
an overall programing methodology was primarily dependent on
trainee characteristics and the terminal objectives. More specific training strategies were selected on the basis of the
characteristics of the subject matter.
One of the strategies contingent on the course content was
the extent to which the data in computer storage should be sampled
by the training problems. The stored information in System 473L
is extensive and AF documents are available to aid the operator
in determining the location and format of the data to be accessed.
Further, some of the data files have varying frequency of use; an
individual file might be used rarely by some Air Force personnel
and very frequently by others, depending on the primary function of the particular user group (e.g., logistics, personnel, etc.).
Therefore, it was decided that while basic training should exhaust
the possible capabilities and variations in format of Intermediate
Query Language, this training could not feasibly be applied to an
exhaustive sample of the actual data in computer storage. Therefore, the range of data for which the trainees were to write
Query Language statements during the training course was not
intended to be exhaustive but to constitute a representative
sample of the types of information in computer storage. This
approach was expected to minimize training time by reducing the
redundancy inherent in an approach that sampled all data. To
provide for effective transfer of training for the retrieval of
types of data not specifically covered in the training course,
the trainees were to receive extensive practice in the use of
the AF documents specifying the location and format of data.
The sequencing of training steps was dependent on the
intrinsic interaction among the items to be covered. For example,
understanding the structure of a Query Language statement requires
knowledge of the organization of information in computer storage.
Therefore, the structure of the computer files was taught early in
the course. As in this case, whenever possible sequencing was
designed to optimize the amount of positive transfer from one concept to another.
2.3 DESCRIPTION OF THE FINAL PROGRAM

2.3.1 The Preprogram. A preprogram of 34 frames was developed to
familiarize the trainees with the characteristics of the textual
program and take the place of an initial orientation lecture on
self-instructional programing. This preprogram is designed to be
taken prior to the programed books on Query Language and would
enable trainees to start the Query Language textual program without any additional assistance. Thus, the preprogram reduces to a
bare minimum the need to utilize the time of training personnel.
2.3.2 The Textual Program on Query Language.

2.3.2.1 Number of frames. Appendix A shows the number of frames
in each volume of the programed text on Query Language. The total
number of frames, counting the preprogram, is 1448.
2.3.2.2 Content of frames. The program is cumulative in that the
ability to write Query Language statements depends on a number of
subcriterion skills, and, therefore, the subcriterion skills were
developed first.
The dependence of the criterion upon subcriterion skills is
reflected by the sequence of topics in the program. Thus, the
initial sections of the program develop the subcriterion skills:
successively, 133 frames are devoted to an orientation to the
473L System; 43 frames cover the organization of information in
the data files; 21 frames cover the selection of the file from
which the desired data might be obtained; and 51 frames cover the

selection of the attributes and values that need to be specified;
the major portion of the program, consisting of 977 frames, provides intensive development of the ability to write Query Language
statements. The last three volumes, consisting of 189 frames,
give extensive guidance and practice in the use of the Air Force
Data Control Manuals, so that the trainee can learn to write
"complex" Query Language statements, which require great facility
in working with a number of data files, and so that the trainee
can become proficient in using these manuals and be able, in the
future, to readily access data from files that he has not previously worked with or from files that he is not extensively
familiar with.
The strategy used in developing proficiency in writing Query
Language statements is to require that trainees learn the functional correspondence between the rules of format and the appropriate computer operations. An important factor in this development of proficiency is the ability to understand the organization
of the data files. The importance of this factor is illustrated
by the frames below, excerpted from Volumes XI and XII. For
illustrative purposes, the frame answers are also shown here,
enclosed in [ ] and/or underlined where appropriate.
Volume XI: MIN and MAX Functions

MIN and MAX as Values

1. Another function which Query Language can perform is to select entries with maximum and minimum values of a particular attribute. To find an airfield with the longest runway, you would want to select the entry with the (minimum/maximum) [maximum] value for RNWY LENGTH.

3. Which of the two entries below will be selected? (Airfield name) [Hobson]
... WITH RNWY LENGTH = MAX ...

AFLD NAME      HOBSON   CHECKER
RNWY LENGTH    15000    11500
RNWY WIDTH     150      200

5. When MAX or MIN are used, the entry must meet all other qualifications first. In the following example, Hobson has the longest runway. However, it does not qualify because it is not in the [United States]. The entry that will be selected is [Dow].
... WITH COUNTRY = USA, RNWY LENGTH = MAX ...

AFLD NAME      HOBSON   DOBSON   HOWE     DOW
COUNTRY        CANADA   USA      FRANCE   USA
RNWY LENGTH    13700    9500     9000     10000

14. If there is a tie for the minimum or maximum value, you can select between those entries by specifying a further minimum or maximum modifier. A second modifier will be in effect only if there is a tie resulting from the first modifier. In the following example, circle the entries which will be selected by the first modifier. Of the circled entries, which one will be selected by the second modifier? Place a check above that entry.
... WITH RNWY LENGTH = MAX, RNWY WIDTH = MIN ...

AFLD NAME      ABLE    BAKER   CHARLIE   DOBBS   HIBBLER
RNWY LENGTH    10000   12000   12500     10000   12500
RNWY WIDTH     150     200     130       250     200

[CHARLIE and HIBBLER are circled by the first modifier; CHARLIE is checked as the entry selected by the second modifier.]

MIN and MAX as Computed Attributes

22.
UNIT      UNIT LOC   MDS     ACFT RDY   CRWS RDY
131FIS    AMARILLO   F100F   19         16

Suppose you want to know whether the 131st FIS at AMARILLO has enough crews to man the operationally ready aircraft. You can answer this by determining which of two attributes has a minimum value; the two attributes to be compared are [ACFT RDY] and [CRWS RDY] (either order).

23.
UNIT      UNIT LOC   MDS     ACFT RDY   CRWS RDY
131FIS    AMARILLO   F100F   19         16

The values of ACFT RDY and CRWS RDY may be compared within each column, or entry. Therefore, the minimum (or maximum) of these values may be determined for each column, or [entry].

24. For a given set of alternate attributes, to specify the one with the maximum value, you would use [MAX] (attribute, attribute, ...); to specify the one with the minimum value you would use [MIN] (attribute, attribute, ...).

27. For all tanker aircraft, we want to know the MDS, and the AVG TKOFF WT or the AVG LAND WT, whichever is lower. Complete the Query.
RETRIEVE ACFT CHAR WITH ACFT CAT = TANKER THEN LIST MDS, [MIN (AVG TKOFF WT, AVG LAND WT)]
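
The frames above describe two distinct uses of MIN and MAX: as a value in the qualifier, which selects the qualifying entry with the extreme value of an attribute (a second MIN or MAX modifier being used only to break a tie), and as a computed attribute in the selector, which compares attributes within a single entry. The short Python sketch below is not part of the course materials; it merely restates those rules, with hypothetical helper names and data, to make the selection logic explicit.

def select_extreme(entries, qualifiers, modifiers):
    # 'attribute = MAX/MIN' qualification: an entry must meet all other
    # qualifications first; each further modifier only breaks a tie left
    # by the preceding one.
    candidates = [e for e in entries
                  if all(e[attr] == val for attr, val in qualifiers.items())]
    for attr, which in modifiers:          # e.g. [("RNWY LENGTH", "MAX"), ("RNWY WIDTH", "MIN")]
        pick = max if which == "MAX" else min
        best = pick(e[attr] for e in candidates)
        candidates = [e for e in candidates if e[attr] == best]
        if len(candidates) == 1:
            break
    return candidates

airfields = [
    {"AFLD NAME": "ABLE",    "RNWY LENGTH": 10000, "RNWY WIDTH": 150},
    {"AFLD NAME": "BAKER",   "RNWY LENGTH": 12000, "RNWY WIDTH": 200},
    {"AFLD NAME": "CHARLIE", "RNWY LENGTH": 12500, "RNWY WIDTH": 130},
    {"AFLD NAME": "DOBBS",   "RNWY LENGTH": 10000, "RNWY WIDTH": 250},
    {"AFLD NAME": "HIBBLER", "RNWY LENGTH": 12500, "RNWY WIDTH": 200},
]
# ... WITH RNWY LENGTH = MAX, RNWY WIDTH = MIN ... selects CHARLIE (frame 14).
print(select_extreme(airfields, {}, [("RNWY LENGTH", "MAX"), ("RNWY WIDTH", "MIN")]))

# MIN as a computed attribute compares attributes within one entry (frames 22-27):
unit = {"UNIT": "131FIS", "ACFT RDY": 19, "CRWS RDY": 16}
print(min(unit["ACFT RDY"], unit["CRWS RDY"]))   # MIN (ACFT RDY, CRWS RDY) gives 16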

Volume XII: GREATEST and LEAST

2. When an attribute may have more than one value per entry, and the values are numeric, we may not want to print out all the values. We may want to print out only the value which is greatest or, on the opposite extreme, the [least (smallest, etc.)].

5. When an attribute has more than one numeric value per entry, you can write a QL statement to find the [greatest] or [least] value for that entry (either order).

8. In the table shown below, for which attribute could you find the greatest or least value? [STAGE CRWS RQ]

[Table: an excerpt from the PLAN IDENT file with the attributes PLAN IDENT, BASIC PLAN, MDS, and STAGE CRWS RQ; the entries 1234A, 1234B, 1235A, and 1235B each carry several MDS values (F100A, F100C, F100D, B47E, etc.) with one STAGE CRWS RQ value (20, 15, 31, 10, 16, 14, 21, 25) per MDS.]

33. While attribute = MAX is designed to select the one entry that has the maximum value for the specified attribute, attribute = GREATEST selects one value of the specified attribute for each [entry], and a corresponding [value] for every other attribute that belongs to the same [subset].

37. We want to know the plan identifications and all the sustaining phase data available pertaining to the sustaining MDS's with the largest number of total flying hours for each entry in the PLAN IDENT file. Write the Query. Obtain a display in any format.
[RETRIEVE PLAN IDENT WITH TOTAL FLY HRS = GREATEST THEN LIST PLAN IDENT, SUSTAIN MDS, TOTAL FLY HRS, TOT ELAPSED TIME]
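
Where attribute = MAX selects one entry from the whole file, attribute = GREATEST works within each entry of a file whose attributes may carry several values per entry: it keeps the greatest value and the corresponding values of the other attributes in the same subset. The Python sketch below is only an illustration of that distinction; the data are hypothetical and merely shaped like the PLAN IDENT excerpt in frame 8.

def greatest_per_entry(entries, target, companions):
    # For each entry, keep the greatest value of `target` together with the
    # companion values that belong to the same subset (same index position).
    results = []
    for entry in entries:
        i = max(range(len(entry[target])), key=lambda k: entry[target][k])
        row = {attr: val for attr, val in entry.items() if not isinstance(val, list)}
        row[target] = entry[target][i]
        for attr in companions:
            row[attr] = entry[attr][i]
        results.append(row)
    return results

# Hypothetical PLAN IDENT entries; each lists several sustaining MDS values
# with one TOTAL FLY HRS value per MDS subset.
plans = [
    {"PLAN IDENT": "1234A", "SUSTAIN MDS": ["F100A", "F100C"], "TOTAL FLY HRS": [200, 350]},
    {"PLAN IDENT": "1234B", "SUSTAIN MDS": ["B47E", "F100D"],  "TOTAL FLY HRS": [500, 120]},
]
# ... WITH TOTAL FLY HRS = GREATEST THEN LIST PLAN IDENT, SUSTAIN MDS, TOTAL FLY HRS
for row in greatest_per_entry(plans, "TOTAL FLY HRS", ["SUSTAIN MDS"]):
    print(row)
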
2.3.2.3 Reference materials. Aside from the illustrations which are presented on individual frames and the AF documents specifying the location and format of data, the textual program utilizes 36 exhibits external to the programed books. Of these, 35 are bound together in one volume and are used to display diagrams, models, descriptions of data files, etc., that are frequently referenced by the program to avoid unnecessary repetition of material on individual frames. An additional exhibit that is bound separately consists of a series of review panels. The major types of exhibits and their use are described below.

a) Schematic Diagram of Information Flow in the 473L System. This diagram, shown in Figure 1, was prepared as an aid in developing an understanding of the 473L System, in which Query Language is a major mode of data retrieval. This diagram shows the possible flows of information in the 473L System when Query Language is used for data retrieval: it shows the types of input components, the flow of input through the computer to the storage files, and the flow of output data to the output component. This aid is used exclusively in the first part of the program, which presents a brief orientation to the 473L System.

b) Illustrations of Various System Components. These exhibits are intended to facilitate the trainee's orientation to the 473L System.

c) A Description of the Procedural Steps Required to Retrieve Data for a Specified Problem Using Query Language. This exhibit, shown in Figure 2, gives a brief, step-by-step description of the overall process of problem solution, including an initial problem, the operator's actions, the operation of the computer, and the output of data.

d) A Description of the Data Files. This exhibit is used to help orient the trainee to the names of the data files and their general contents. An excerpt from this exhibit is given in Figure 3.

e) Charts Showing the Conceptual Categorization of the Data Files. These charts group the data files according to subject matter (e.g., airfield information, personnel information, etc.) and according to the level of data description (e.g., narrative descriptions, detailed characteristics, index data, etc.). These are used to help the trainee become proficient in accessing the proper file.

f) Sample Tables with an Incomplete Listing of Attributes and Typical Values. These tables are used in the program to develop the trainee's concept of the organization of data stored in the data files. One such table is shown in Figure 4. Shorter excerpts from tables are frequently shown on individual frames.

g) An Example of How Information Is Retrieved from the Data Files Using Query Language. This exhibit, shown in Figure 5, illustrates the overall retrieval process by presenting: a sample set of files, including an excerpt (with hypothetical values) from the file from which information is to be retrieved for the solution of a problem; and, the identification of each element of a Query Language statement with the successive selection from these computer storage files of a particular file, particular columns of data, and the desired output data.

h) The Basic Output Directors. This exhibit shows the basic output directors, the output device for each, and the format of output data.

i) Examples of Corrective Action Following Error Messages. As part of the initial orientation to the 473L System, an exhibit is used to illustrate the type of display on the integrated console's CRT display screen for several different types of error, and the action that should be taken in each case to correct the erroneous Query Language statement.

j) Rules, Formats, Examples of Input and Output, and Other Teaching Aids for Special Query Language Functions (such as GCD, SUM, titles, sorts, special directors using the SAVE table, CHECK, and Complex Queries). These exhibits are used to help the trainee learn and discriminate between the different formats and functions of each Query Language function. Examples of these exhibits are Figures 6 and 7.

k) Review Panels. These are intended to be used for a cumulative review of the first 70% of the program (through Volume XIX). They summarize the information taught about the sequencing and functions of the basic Query Language elements, such as the file indicator, the qualifier, etc.; the organization of data in the data files; the output devices and formats; and, the function, format, rules for use, and relation to the other parts of a statement, of specific parts or types of the basic elements, such as attributes, special kinds of attributes (e.g., SUM and GCD), values, etc.

[Figure: a block diagram showing QL entries flowing among the input components (punched cards, the I/O typewriter, other consoles, and the integrated console (IC) [AN/FYA-2] with its control keys and lights, process step keys, and alphanumeric (A/N) keyboard), the L-3055 computer (AN/FYQ-11), and the output components (high-speed printer, multicolor display screen, CRT display screen, and console printer).]

Fig. 1. Schematic diagram of planned information flow in the design for a second-generation 473L System.

Step 1. We define the problem. In this example, we need to find out the names of all airfields with runways of at least 10,000 ft. This information is stored somewhere in the 3055 data files (millions of items of information, similar to a large library, revised every day). All we need to do is get it out.

Step 2. The operator selects a program (RETRIEVE) and the correct file (AIRFIELDS) from a description of the data base in specially prepared AF documents.

Step 3. The operator selects the correct attribute names (RNWY LENGTH and AFLD NAME) from a description of the data base.

Step 4. The operator writes a Query Language statement.
RETRIEVE AIRFIELDS WITH RNWY LENGTH > 10000 THEN LISTV AFLD NAME, RNWY LENGTH

Step 5. The operator types in the QL entry on the Integrated Console, which sends it to the computer.

Step 6. The computer selects the RETRIEVE program and starts processing the QL statement.

Step 7. The RETRIEVE program selects the correct file (AIRFIELDS), the correct attributes, the correct qualifying columns, and the correct output attributes specified by the selector.

Step 8. The computer sends the output to the selected device (LISTV specifies the CRT display screen of the IC).

Step 9. The desired data is outputted in the desired format (LISTV specifies vertical format).

Fig. 2. A description of the proposed procedural steps required to retrieve data for a specified problem using Query Language.
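
As a compact restatement of Steps 1 through 9, the sketch below mimics what the RETRIEVE program is described as doing: it takes the AIRFIELDS file, keeps the entries that satisfy the qualifier RNWY LENGTH > 10000, and outputs the attributes named in the selector in a vertical (LISTV) layout. The sketch is in Python, the file contents are invented for illustration, and none of the helper names belong to the 473L software.

# Hypothetical excerpt from the AIRFIELDS file; each dictionary is one entry.
AIRFIELDS = [
    {"AFLD NAME": "ANDREWS", "RNWY LENGTH": 12000},
    {"AFLD NAME": "LOGAN",   "RNWY LENGTH": 10000},
    {"AFLD NAME": "SHAW",    "RNWY LENGTH": 8500},
    {"AFLD NAME": "MACDILL", "RNWY LENGTH": 13000},
]

def retrieve_listv(file_entries, qualifier, selector):
    # RETRIEVE <file> WITH <qualifier> THEN LISTV <selector>:
    # keep the entries that satisfy the qualifier, then output only the
    # selected attributes, one attribute per line (vertical format).
    for entry in (e for e in file_entries if qualifier(e)):
        for attr in selector:
            print(f"{attr}: {entry[attr]}")
        print()

# RETRIEVE AIRFIELDS WITH RNWY LENGTH > 10000 THEN LISTV AFLD NAME, RNWY LENGTH
retrieve_listv(AIRFIELDS, lambda e: e["RNWY LENGTH"] > 10000, ["AFLD NAME", "RNWY LENGTH"])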

ACFT CHAR (Aircraft Characteristics): Physical, operational, and logistic information on USAF and civilian reserve aircraft.

ACFT MSL MAINT (Aircraft/Missile Maintenance): Schedules for normal and compressed rates of maintenance in a depot for each MDS, and the origin and destination unit of the aircraft.

ACFT PLAN FAX (Aircraft Planning Factors): Factors used in solving operational problems dealing with aircraft/airfield compatibility and traffic compatibility, and the planning for contingencies that involve aircraft and war materiel items.

AFLD CLIM INDEX (Airfield Climatology Reel Index File): Location of current and backup tape reels, and the reel number in which data for a given country is located.

AFLD CLIM REEL (Airfield Climatology Reel File): Climatic data such as temperature, precipitation, and frequency of occurrence of specific ceiling and visibility conditions by month for approximately 3500 airfields.

AIRFIELD INDEX (Airfield Tape Reel Index File): Location of current and backup tape reels, and the reel number in which data for a given country is located.

AIRFIELDS (Airfield Disk File): Basic, frequently used data concerning airfields (runway data only for longest runway).

AIRFIELD CLIM (Airfield Climatology File): Monthly meteorological data on airfields in the disk and reel airfield files.

AIRFIELD REEL (Airfield Reel File): All data concerning airfields, including climatology data.

AIRMEN AFSC (Airmen AFSC (Air Force Specialty Code) File): A breakdown of the AFSCs, with the meanings of each part of the code.

Fig. 3. A partial listing and description of the data files.

[Figure: an excerpt from the AIRFIELD REEL* file showing typical values for the airfields ANDREWS, LOGAN, and MACDILL under the attributes AFLD NAME, COUNTRY, STATE, AVGAS CAP**, RNWY LENGTH 1-4, RNWY WIDTH 1-4, RNWY SPC 1-4 (e.g., CONCRT, ASPHALT), and NAVAID FACILITY (e.g., TACAN, GCA, ILS, AIR/GND, TOWER).]

*One difference between the AIRFIELD REEL file and the AIRFIELDS file is that the AIRFIELDS file is on disk and contains runway data for only the longest runway, but the AIRFIELD REEL file is on tape and contains data for all runways.

**Values for AVGAS CAP are in thousands of gallons (K GAL).

Fig. 4. An incomplete table with typical values.
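
The table above illustrates attributes that take several values per entry (RNWY LENGTH 1 through 4, RNWY WIDTH 1 through 4, and so on), which is the data organization that the GREATEST and LEAST functions discussed earlier operate on. Purely as an illustration of that organization, and with invented values, one AIRFIELD REEL entry and its single-runway counterpart in the AIRFIELDS disk file could be represented in Python as follows.

# Hypothetical AIRFIELD REEL entry: per-runway attributes are held as lists.
andrews_reel = {
    "AFLD NAME": "ANDREWS",
    "COUNTRY": "USA",
    "STATE": "MD",
    "AVGAS CAP": 200,                        # thousands of gallons (K GAL)
    "RNWY LENGTH": [12000, 11500, 9500],     # one value per runway (illustrative)
    "RNWY WIDTH":  [230, 200, 190],
    "RNWY SPC":    ["CONCRT", "CONCRT", "ASPHALT"],
}

# The AIRFIELDS disk file keeps runway data only for the longest runway,
# so the same airfield carries single values there.
longest = max(range(len(andrews_reel["RNWY LENGTH"])),
              key=lambda i: andrews_reel["RNWY LENGTH"][i])
andrews_disk = {
    "AFLD NAME": andrews_reel["AFLD NAME"],
    "RNWY LENGTH": andrews_reel["RNWY LENGTH"][longest],
    "RNWY WIDTH": andrews_reel["RNWY WIDTH"][longest],
}
print(andrews_disk)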

[Figure: a worked example tracing each element of a Query Language statement (RETRIEVE AIRFIELDS WITH COUNTRY = USA, ELEV < 100 THEN LIST AFLD NAME, ELEV) through the data base: the program name selects the RETRIEVE program, the file name selects the AIRFIELDS file from the file names in the data base, the qualifier keeps the qualifying entries (LOGAN and SHAW in this example), and the selector determines the attributes output (AFLD NAME and ELEV).]

Fig. 5. An example of how information is to be retrieved from the data files using Query Language.

a. Simple form: one SUM control attribute and one summed attribute.
   Format:   ... SUM BY SUM control attribute (summed attribute) ...
   Example:  ... SUM BY UNIT (ACFT POSS) ...
   Function: Will sum values of the summed attribute for each value of the SUM control attribute.

b. General form: multiple SUM control attributes and summed attributes.
   Format:   ... SUM BY SUM control attribute BY SUM control attribute BY SUM control attribute (summed attribute, summed attribute, ...) ...
   Example:  ... SUM BY UNIT BY MDS (ACFT RDY, CRWS RDY) ...
   Function: Will sum values of each summed attribute for every combination of values of the SUM control attributes.

c. Short form (no SUM function in the qualifier).
   Format:   ... SUM (summed attribute, summed attribute, ...) ...
   Example:  ... SUM (ACFT RDY, ACFT POSS) ...
   Function: Will provide a total sum (over all entries that qualify).

d. Short form if a SUM function has been specified in the qualifier.
   Format:   ... SUM ...
   Example:  ... LISTH UNIT, UNIT LOC, SUM ...
   Function: Outputs sums already computed in the qualifier.

Fig. 6. Formats and functions of SUM in the output selector.

Excerpt from the MATERIEL STATUS file (two entries):

Entry 1: POB (C) = PARKERSBURG; ITEM (C) = CHAFF RR131; STORAGE LOC (S) = ARNOLD, MULDOON, VICTORY; SL ASSET = 5000, 7000, 3000
Entry 2: POB (C) = SAMPSON; ITEM (C) = CHAFF RR131; STORAGE LOC (S) = ARNOLD, VICTORY; SL ASSET = 4000, 7000

1. RETRIEVE MATERIEL STATUS WITH ITEM = CHAFF RR131, STORAGE LOC = VICTORY THEN LIST SL ASSET
   output: SL ASSET

2. RETRIEVE MATERIEL STATUS WITH ITEM = CHAFF RR131, STORAGE LOC = VICTORY THEN LIST SUM (SL ASSET)
   output: SUM SL ASSET

3. RETRIEVE MATERIEL STATUS WITH ITEM = CHAFF RR131 THEN LIST SUM BY STORAGE LOC (SL ASSET)
   output: STORAGE LOC, SUM SL ASSET

Fig. 7. Examples of input and output formats for SUM.
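
The three queries in Figure 7 exercise the SUM forms listed in Figure 6: a plain LIST of the qualifying values, a total SUM over all qualifying entries, and SUM BY, which produces one sum per value of the control attribute. The Python sketch below is only a rough illustration of that grouping behavior; the data mirror the MATERIEL STATUS excerpt but are invented, and none of the code belongs to System 473L.

from collections import defaultdict

# Hypothetical MATERIEL STATUS data, flattened to one storage location per row.
MATERIEL_STATUS = [
    {"ITEM": "CHAFF RR131", "STORAGE LOC": "ARNOLD",  "SL ASSET": 5000},
    {"ITEM": "CHAFF RR131", "STORAGE LOC": "MULDOON", "SL ASSET": 7000},
    {"ITEM": "CHAFF RR131", "STORAGE LOC": "VICTORY", "SL ASSET": 3000},
    {"ITEM": "CHAFF RR131", "STORAGE LOC": "ARNOLD",  "SL ASSET": 4000},
    {"ITEM": "CHAFF RR131", "STORAGE LOC": "VICTORY", "SL ASSET": 7000},
]

def total_sum(rows, summed):
    # Short form: ... SUM (summed attribute) ... gives one total over all
    # qualifying entries.
    return sum(r[summed] for r in rows)

def sum_by(rows, controls, summed):
    # Simple/general form: ... SUM BY control [BY control ...] (summed) ...
    # gives one sum per combination of control-attribute values.
    groups = defaultdict(int)
    for r in rows:
        groups[tuple(r[c] for c in controls)] += r[summed]
    return dict(groups)

qualified = [r for r in MATERIEL_STATUS if r["ITEM"] == "CHAFF RR131"]
# ... THEN LIST SUM (SL ASSET): total over all qualifying entries.
print(total_sum(qualified, "SL ASSET"))
# ... THEN LIST SUM BY STORAGE LOC (SL ASSET): one sum per storage location.
print(sum_by(qualified, ["STORAGE LOC"], "SL ASSET"))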

2.3.2.4 Program format. Each "frame" consists of two parts, a
stimulus panel and an answer panel. A stimulus panel consists of
written material, including at least one incomplete statement;
missing words or phrases are indicated by blanks. Answer panels
contain the missing words or phrases. In some cases, where this
appeared appropriate, the missing words or phrases are presented
in their appropriate context (from the stimulus panel) on the
answer panel.
Stimulus panels, and likewise response panels, are presented
three to a page, on sequential pages of bound volumes. Frames
that relate to a relatively discrete topic are bound together;
thus, separate topics are usually denoted by separate volumes.
The sequence of stimulus and answer panels is shown in the schematic
on the following page.
2.3.3 Computer Exercises. Although developing proficiency in
using the integrated console is not a primary task of the programed text, it was suggested that some experience in actually
using the console to retrieve data with Query Language statements
would have a motivating effect. In addition, the exercises are to be
used to permit trainees to become adept at interpreting and
reacting to Error Messages related to Query Language inputs, displayed on the CRT display screen.
The computer exercises are designated for use at points
scattered throughout the course. The instructions for these
exercises are bound together in an Exercise Book.
2.3.4 Test. Two tests were developed for use in evaluating the
Query Language course. One was used during the preliminary tryouts to test the first 70% of the program. The other test is the
final posttest, which is cumulative over the entire content of
the program; a copy of this test is shown in Appendix B. A copy
of the final test answer and scoring key is in Appendix C.
The final posttest is designed primarily to evaluate proficiency in writing Query Language statements; it requires the
trainee to write a representative sample of the types of Query
Language statements taught throughout the program. This test
was used at the completion of the programed course to evaluate
the overall effectiveness of the course, and to indicate necessary specific revisions in the program. In the future, this
test can be used to evaluate the proficiency of individual
trainees. The almost exclusive emphasis in the final test on
proficiency in writing Query Language statements reflects both
the importance of this objective and the fact that the level of
proficiency in writing Query Language statements also reflects,
in most cases, the level of proficiency in the subcriterion
behaviors.


2.4 THE DRAFTING OF FRAMES, INITIAL PROGRAM TRYOUTS, TECHNICAL
REVIEW, AND REVISIONS
Draft frames were based on the initial specification of
course objectives, content, and training strategies. As successive portions of the program were completed, they were given preliminary tryouts and a review for technical accuracy, and revised
on the basis of the tryout data and technical comments. Due to
the evolving nature of Query Language, some changes in frames
were also necessitated by changes in the language itself.
Prior to the field tryout on the entire text, there were two
preliminary tryouts and revisions. The volumes evaluated in the
first tryout constituted about 38% of the final text. The volumes
in the second tryout included the volumes used in the first tryout
plus additional materials; the total number of volumes in the
second tryout amounted to about 70% of the final text. The two
preliminary tryouts and revisions are described below.
2.4.1 Subjects. All of the preliminary tryouts were conducted
at the Pentagon with a small number of Air Force personnel who
were typical of the target population. Four trainees participated
in the first tryout. In the second tryout, these trainees took
only the materials that had been drafted subsequent to the first
tryout; two new trainees took all of the materials that had been
drafted up to this point (approximately 70% of the final program).
2.4.2 Results. Since the number of subjects for the preliminary
tryouts was small, the results were used for the very limited
purpose of revising the programed text. Generalizations to other
programed materials are not appropriate.
Percent correct pretest scores for individual trainees were low: the median score for the six trainees participating in the two preliminary tryouts was 2.9%; the range was 0.5% to 10.0%.

For the preliminary tryouts, the average error rate on each draft volume of the programed text (not counting Volume I, the preprogram, for which the error rate was negligible) is given in Table 1 below. The error rates ranged from 3.65% to 24.2%. The median error rate for the draft volumes was 8.5%. As shown, only two out of 14 volumes had an error rate exceeding 10%.

Volumes II through IX were used in draft form in the first tryout; the average error rate on this portion of the program was 8.3%. In addition to the revised Volumes II-IX, Volumes X through XIV were used in draft version in the second tryout; the average error rate on draft Volumes X-XIV was 7.6%.

TABLE 1
Average Error Rates on Draft QL Volumes That Were Used in the Preliminary Tryouts

First Tryout                      Second Tryout
Volume   Average Error Rate       Volume   Average Error Rate
II       6.6%                     X        8.8%
III      8.5                      XI       8.1
IV       23.4                     XII      4.7
V        9.1                      XIII     7.0
VI       9.6                      XIV      8.7
VII      3.65
VIII     24.2
IX       5.2

Individual error scores on the first 70% of the draft program
and percent correct posttest scores on the test covering this portion of the program are given in Table 2 below for each preliminary
tryout.
TABLE 2
Percent Error for Each Trainee on the Program and Percent Correct on the Test for Each of the Preliminary Tryouts

Tryout 1         S1      S2      S3      S4
Draft Program    9.9%    6.0%    9.9%    7.5%
Test             96%     95%     90%     79%

Tryout 2         S5      S6
Draft Program    2.8%    6.2%
Test             92%     88%

The average posttest score for the first tryout was 90% and likewise for the second tryout. In view of the high posttest scores and the fairly low error rates on even the draft volumes -- with a couple of exceptions -- and considering the apparent lack of relationship between program error rates and proficiency on the posttest, revisions were mostly based on the technical review and the critical comments of the trainees.

2.5 FIELD TRYOUT AND REVISION

2.5.1 Design. While the scale of the field test was restricted
by the limited availability of personnel representative of the
target population, the scale was sufficient since the field test
was used exclusively for very specific evaluative and diagnostic
purposes: it was used to evaluate the effectiveness of the programed course in terms of the specified objectives, and to diagnose areas of Query Language that trainees had difficulty in
learning, for which revisions were required in the training materials.
As previously mentioned, the primary objective of the program
is to enable Air Staff personnel to write Query Language statements
for the solution of moderately difficult, specified problems.
Evaluation of the program's effectiveness in achieving this objective was based exclusively on the trainees' proficiency on the
criterion test, which emphasized ability to write Query Language
statements.
2.5.2 Subjects. As shown in Table 3 below, seven of the
eight trainees had taken the earlier programed course on OTC
Query Language, which was the first operational version of Query
Language; five had had prior training on either Model I or
Model II Query Language; and, three of the trainees had
actually participated in the preliminary tryouts with the draft
materials covering approximately 70% of the program on Intermediate Query Language, Model II.
TABLE 3
Background Data for Trainees Participating in the Field Tryout

Trainee   Previous Training   Previous Training in either   Participated in the Preliminary Tryouts on
          in OTC QL           Model I or Model II QL        the Draft Program for Model II QL
S1        Yes                 Yes                           Yes
S2        Yes                 Yes                           Yes
S3        Yes                 Yes                           Yes
S4        Yes                 Yes                           No
S5        Yes                 Yes                           No
S6        Yes                 No                            No
S7        Yes                 No                            No
S8        No                  No                            No

Since the structure of Query Language changed greatly from OTC to Model II, the previous training of some subjects in OTC Query was not seen as a serious problem; in fact, in transitioning from OTC Query Language to Query Language, Model II, it is problematic whether transfer of training was positive or negative. In either event, there was an overriding, pragmatic justification for the selection of these particular trainees to participate in the field tryout: the tryout was utilized by the Air Force as an opportunity to meet current training needs.

All of the trainees completed the programed course on Intermediate Query Language, Model II, but only five took the final test. These were the trainees identified in Table 3 as S2, S3, S5, S7, and S8.

2.5.3 Administration of the Field Tryout. The tryout was administered by Air Force personnel. Aside from an initial briefing,
direct monitoring was negligible. Work on the course was done at
the Pentagon, on a flexible schedule. Other Air Force duties
were interpolated, as required, by each individual. Interspersed
activities consisted of the trainees' normal duties at the
Pentagon in their regular section. Trainees worked on the course
an average of 6-1/2 hours per day. Total training time varied
from a minimum of five days to a maximum of eight days. Since
there was no significant relationship between program error rates
and test proficiency in the preliminary tryouts of the program,
and since the collection and analysis of program error data is
very time-consuming, it was not considered worthwhile (and it
was definitely not expedient) to collect program error data.
The only data collected from the field tryout were program completion times and percent correct scores on the final test.
After the instructional program was completed, each trainee
was given a posttest designed to evaluate his proficiency in
writing Query Language statements. As mentioned previously, a
copy of this test appears in Appendix B.
2.5.4 Results of the Field Tryout.

2.5.4.1 Completion times for the programed materials. Since
interruption of progress on the program to perform other duties
was intermittent, recording exact times would have been burdensome. Therefore, the average number of hours per day was estimated, at 6-1/2 hours, and the completion time for each trainee
was recorded in terms of the total number of days required to
finish the program. The completion time for each individual
trainee is shown in Table 4 below. The average number of days
for completion was 6.3 days; at 6-1/2 hours per day, this meant
an average of roughly 41 hours to complete the program.
TABLE 4
Program Completion Time for Each Trainee

Trainee    Time

S1         6-1/2 days
S2         6 days
S3         7 days
S4         5 days
S5         7 days
S6         5 days
S7         6 days
S8         8 days

2.5.4.2 Scores on the final posttest. As mentioned previously,
the final test was taken by only five of the trainees participating in the field tryout. Scores for individual trainees on the
final test are shown in Table 5.
TABLE 5

Percent Correct Scores on Final Test for Individual
Trainees Participating in the Field Tryout

Trainee    Score

S2         94.6%
S3         46.5%
S7         86.8%
S5         72.4%
S8         80.2%

The average percent correct score on the final test was 76.1%.
The median was 80%, with scores ranging from 46.5% to 94.6%. Thus,
scores on the field tryout final test were lower than those obtained
on the posttest covering 70% of the program, which was used in the
preliminary tryouts. The most probable explanation for this is
twofold: 1) the last 30% of the program had not previously been
tried out; and, 2) the last 30% of the program was intrinsically
more difficult, overall, than the first 70%, and the final test
was correspondingly⁵ more difficult than the posttest used in the
preliminary tryouts.
2.5.5 Revisions After the Field Tryout. Since time was a critical
factor in completing the revisions, revisions were restricted to
those based on data from the final test.
The first step in revising the program was to make a few
additional technical revisions based on information obtained from
Air Force experts on Query Language. Further revisions in the
program were based on errors made on the final test. For each
item missed on the test, corresponding topical sequences were
identified. Appropriate revisions were made in these sequences.
The revision based on test errors was considered of critical
importance and ample time was devoted to this work in order to
adequately revise for all recurrent test errors.
⁵As seen in Table 3, trainees S2 and S3 had previously participated
in the preliminary tryouts of the draft program; their scores on
the test covering the first 70% of the entire program were 90%
and 79%, respectively. The substantial drop in S3's score would
seem to indicate that the final test was substantially more
difficult (for him) than the posttest used in the preliminary
tryouts.


2.6

DISCUSSION AND RECOMMENDATIONS

Generally, the results of the field tryout indicated that the
programed course can train to a satisfactory level of proficiency
over a period of about six days, using a massed training schedule
of roughly 6-1/2 hours per day. Past experience with a similar
programed course on OTC Query Language indicated that a training
schedule requiring less time per day achieves a slightly higher
level of proficiency.⁶
Thus, if the immediate need for personnel
trained in Query Language is not especially urgent, a spaced
schedule of about two to four hours per day would be most desirable. But if immediate training is critical, a massed schedule
could be used with only a small penalty expected in terms of the
final proficiency level.
For the implementation of this course as a training device,
it is recommended that the final test be retained for evaluative
and motivational purposes. It is also recommended that the integrated console exercises be used to demonstrate data retrieval
for a variety of Queries. While these demonstration exercises
would be on the computer, they would not be computer directed.
If it is possible to provide the adaptive, computer directed
training course described in the next section, this would be
recommended in preference to the programed text.

⁶A massed practice group (approximately seven hours per day) scored
84% on the posttest and had an average error rate of 7.8%. A
spaced practice group (approximately two hours per day) scored
90% on the posttest and had an average error rate of 5.0%. More
detailed data are available in reference 1.

Section III
DEVELOPMENT OF THE OPERATIONAL SPECIFICATION
FOR COMPUTER DIRECTED TRAINING
3.1

INTRODUCTION

The rationale for using computer directed training in Query
Language was described in Section 1.4.2 of this report. The
terminal product of this phase of the project is the Operational
Specification for Computer Directed Training in Intermediate
Query Language, Model II, for System 473L, February 1966.⁷ This
operational specification completely describes the training design and the necessary operating procedures for the proposed
computer directed training capability. A description of the
scope of programming required for the implementation of this
capability, and the impact it will have on System 473L operational capabilities is contained in a report prepared by the
Federal Systems Division of International Business Machines, Inc.;
this report is entitled Computer Directed Training: System 473L
Query Language, April 1966.⁸
The general conclusion stated in
this document, as a result of programming analysis, is that the
computer directed training capability would be compatible with
System 473L equipment and programming subsystems, can be utilized
in a manner similar to that of existing operational capabilities
using an overlay, would have little impact on storage requirements for data and programs, and would have no impact on the
simultaneous utilization of existing operational capabilities.
The training design and operating procedures were developed
with specific reference to the Librascope 3055 computer and
Intermediate Query Language, Model II, as they are intended for
use in System 473L. However, the specific design described by
the operational specification may be used as a model for the
development of similar training programs in other command and
control systems.
3.2

DEVELOPMENT OF THE OPERATIONAL SPECIFICATION

Each capability of System 473L is described in detail by an
operational specification. The general contents and format of a
System 473L operational specification are predefined. To ensure
that the operational specification for the proposed training
capability would meet approved 473L standards, and to provide
guidance regarding the feasibility of proposed training features,
the developmental process required close interaction between the
AIR project staff and technical experts on System 473L from the
Federal Systems Division of International Business Machines, Inc.
⁷This document is reference 4.
⁸This document is reference 5.

The operational specification evolved through predesignated
stages, which required the development, revision, and integration of three successive parts: the training design, or training
sequence logic; the procedural flow diagram; and the specification of operating procedures.
3.3

DESCRIPTION OF THE OPERATIONAL SPECIFICATION

The following sections briefly describe the training design
and the operating procedures for the computer directed training
capability. More detailed information is available from reference
4.
3.3.1

The Training Sequence Logic

3.3.1.1 Complexity of the training design. An ideal design for
computer directed training would permit maximum interaction
between the computer and the individual trainee, including complete analysis of responses and computer generation of all evaluations, directions, problems for individual trainees, and responses
to student-generated requests, with all communications in more or
less unrestricted English. However, this is not feasible for use
with an operational system. In developing the training sequence
logic, there was a need to limit the complexity of the training
design in order to optimize the feasibility of implementation in
terms of a) the need to minimize any possible conflict with
other System 473L operational capabilities; b) the cost of developing a computer program to implement the proposed logic; and,
c) the cost of training itself -- this would also increase with
increasing complexity of the training design.
The major training restrictions imposed to increase the
feasibility of computer implementation and reduce cost were:
1) computer analysis of trainee responses on designated
parts of QL statements, as opposed to analysis of complete QL statements.
2) the use of "canned" answers (vs computer-generated
answers) to problems requiring computer analysis, so
that analysis of a trainee's response could be accomplished by a simple matching process.
3) the use of fixed formats for training materials and
problems, so that content -- but not format -- changes
could be made without a programming change.
4) the use of only one major level of remedial training
after each evaluative problem section. Thus, no
remedial loop is used to correct errors on a remedial
sequence itself.


5) the use of a limited number of evaluation ratings to
evaluate a trainee's performance and assign remedial
materials. However, the error criteria for the
various ratings can be stored in such a way that
changes in these criteria would be relatively easy
to make if experience so dictates.
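
The restrictions on response analysis and evaluation lend themselves to a very simple mechanization. The sketch below, in Python, is only an illustration of restrictions 2 and 5 as described above; the canned answers, the error thresholds, and all names in it are assumptions made for the example and are not taken from the operational specification.

    # Hypothetical sketch: trainee responses for designated parts of a QL
    # statement are checked by simple matching against pre-stored ("canned")
    # answers, and the error count is mapped to a stored evaluation rating.
    # The answers and thresholds below are illustrative only.

    CANNED_ANSWERS = {
        "output director": "PRINTH",
        "first modifier": "COUNTRY = CANADA",
    }

    RATING_CRITERIA = [(0, "GOOD"), (2, "AVERAGE")]   # (maximum errors, rating); assumed values

    def evaluate(responses):
        """Count mismatches against the canned answers and assign a rating."""
        errors = sum(1 for part, expected in CANNED_ANSWERS.items()
                     if responses.get(part, "").strip().upper() != expected)
        for max_errors, rating in RATING_CRITERIA:
            if errors <= max_errors:
                return errors, rating
        return errors, "POOR"

Because the criteria are held as data rather than written into the matching routine, a change such as tightening the AVERAGE threshold is a data change only, which is the point of restriction 5.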
3.3.1.2 Provision for adaptation of the capability. Several
provisions were made to permit adaptation of the capability for
anticipated future changes in the training design and Query
Language, and to permit training in an entirely different subject. To achieve maximum flexibility without any programming
change, the provisions for adaptation are through data maintenance. Some of the major areas in which data maintenance
changes may be made are: the content of any cue -- i.e., the
content for any unit of presentation to the trainee; the number
of cues within a set; the criteria by which trainee errors are
evaluated; and, the remedial continuations for trainee errors.
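
A minimal sketch of the same idea in Python follows. It is illustrative only; the table names, keys, and sample entries are assumptions, but the four areas shown are the data-maintenance areas listed in the paragraph above.

    # Everything an instructor may need to change -- cue content, the number of
    # cues in a set, the error criteria, and the remedial continuations -- is
    # held as data, so it can be revised without any programming change.
    TRAINING_DATA = {
        "cue content":            {"Q9-F-01": "sample training text for a cue"},   # assumed key and text
        "cues per set":           {"F": 40, "G": 35},                               # assumed counts
        "error criteria":         {"GOOD": 0, "AVERAGE": 2},                        # maximum errors per rating (assumed)
        "remedial continuations": {("F", "POOR"): "TEXT excerpt for set F"},        # assumed entry
    }

    def data_maintenance_update(area, key, value):
        """A data-maintenance change: no reprogramming is required."""
        TRAINING_DATA[area][key] = value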
3.3.1.3 The training sequence logic. The proposed computer
directed training capability has three basic uses for training
in Query Language, Model II: 1) to train personnel in the use
of Intermediate Query Language, Model II; 2) to provide performance data for the trainees and supervisory personnel; and, 3) to
provide proficiency maintenance training for trainees who have
previously completed this or some other training program in
Query Language, Model II.
Salient features of the training sequence logic, designed
to optimize training effectiveness and efficiency within the
bounds of computer feasibility and cost, include the following:
presentation of training materials on the console display screen;
computer evaluation of trainee performance based on the number
and kind of errors and pre-stored evaluation ratings; computer
determination of training areas requiring remedial work for
each trainee; computer determination of the specific remedial
sequence in each area that is appropriate to the kind and number
of trainee errors; and, periodic provision of optional remedial
work for trainee selection.
The training course designed for initial use has a built-in
research design on the relative effectiveness of forcing trainees
to take remedial work appropriate to the number and kind of their
errors, in addition to periodic provision of optional remedial
work (called Research Group "A"), as opposed to the provision of
optional remedial work only (called Research Group "B"). This
comparison would provide some of the information needed to make
revisions in the training logic that would improve the overall
effectiveness and efficiency of the computer directed training
course.
Except for the remedial materials, whose presentation is
contingent on trainee errors and/or selection, the sequence of
training materials is fixed. The initial training sets provide


a basic orientation to the use of the instructional materials, the
use of the console, the data files accessed by Query Language, the
overall structure of Query Language statements, and the use of the
Air Force Manuals that describe the general nature and contents of
each data base file and the vocabulary used to reference the file
contents. Subsequent sets provide extensive and cumulative
training and evaluation on the use of Query Language to retrieve
desired information from the data files. The Query Language
topics covered by the computer directed training course are designed to progress in the order specified by Appendix D of this
report.
Any deviation from the main training path for the computer
directed training course is contingent on the trainee's performance and/or his own optional selection of remedial materials.
The main training path and the remedial branches are shown in
the Training Sequence Logic Flow Diagram, Figure 8. Explanatory
notes for this logic on a general level are given in the paragraphs below.⁹
The Training Sequence Logic Flow Diagram shows the main
training sequence for all cues (individual units of presentation
to the operator), points requiring a decision by the operator,
points requiring computer evaluation of a trainee's performance,
points requiring computer determination of any necessary branching, all remedial training paths, and points at which data printouts occur.
To facilitate understanding of the Training Sequence Logic
Flow Diagram, a condensed outline of the basic training sequence
and remedial branches is presented below.
Basic Training Sequence

    X SETS (5 sets):    LX - PX - Eval
    Y SETS (20 sets):   LY - PY - Eval - PEY* - Free-Choice Review
        *PEY not available after Posttest PY

Remedial Branches for Excess Errors

    X SETS:  a fixed sequence from the last LX (one sequence,
             regardless of the type of errors)
    Y SETS:  one or more remedial sequences appropriate to 1) the
             particular areas in which errors were made and 2) the
             number of errors made

⁹For a complete explanation of all points on this diagram, the
reader is referred to reference 4, the Operational Specification
for Computer Directed Training in Intermediate Query Language,
Model II, for System 473L.


NOTES: The cues used by this capability may be divided into two
major classes: (1) the cues that provide a basic foundation for
the course but do not teach Query Language; these cues are divided
into sets called "X" SETS; and (2) the subsequent cues that cover
all aspects of intermediate QL; these cues are divided into sets
called "Y" SETS.
1) The X SETS cover basic, non-QL materials:
a) the use of the instructional materials
b) the use of the console and the CDT overlay
c) a basic introduction to the files and structure of QL
d) the use of the manuals covering the QL data base and
system vocabulary
2) The Y SETS cover all training and evaluation materials
provided on the use of Query Language -- i.e., all materials not covered by X SETS
3) For any X SET:
LX = training sequence -- a section within the X SET
PX = series of problems on the LX -- a section within
the X SET
Eval = evaluation of errors on the PX section
4) For any Y SET:
LY = training sequence -- a section within the Y SET
PY = series of problems on one or more of the preceding
LY's -- a section within the Y SET
Eval = evaluation of errors on each area of QL subjected
to analysis by the last PY section
PEY = series of free-form practice exercises (which the
trainee answers with complete QL statements and on
which he receives feedback). This is a section
within each of the Y SETS except the last -- this
is not available after the posttest
Free-Choice Review = at end of each Y SET, the trainee
has the option of taking review on any area subjected to analysis by the last PY section
5) Throughout the entire program there are four general
types of sets -- the basic, non-QL X SETS and three
kinds of QL Y SETS. These sets are described below:
a) 5 basic, non-QL X SETS
b) 13 INDEPENDENT Y SETS for each of which the PY section tests only the materials covered by the LY section in the same set
c) 6 CUMULATIVE Y SETS, used at appropriate points, in
which the PY section tests all materials covered after
the last CUMULATIVE Y SET
d) one POSTTEST Y SET: this is the last Y SET, for which
the LY section reviews all materials covered in the
program and the PY section is the POSTTEST, which tests
all the materials covered over all of the Y SETS in the
program.
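
The set structure summarized in these notes can be restated compactly as data. The Python sketch below is only a restatement for illustration; the names are assumptions, and the counts are those given above.

    # Section make-up of the two kinds of sets, and the number of sets of each
    # functional type in the course (the PEY section is not available after the
    # Posttest PY).
    X_SET_SECTIONS = ["LX", "PX", "Eval"]
    Y_SET_SECTIONS = ["LY", "PY", "Eval", "PEY", "Free-Choice Review"]

    SET_COUNTS = {
        "non-QL PRETEST (X)":  5,
        "QL INDEPENDENT (Y)": 13,
        "QL CUMULATIVE (Y)":   6,
        "QL POSTTEST (Y)":     1,
    }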

An understanding of the Training Sequence Logic Flow Diagram
will also be enhanced by an understanding of cue types. There
are two general types of cues used by this capability:
1) Instructional cues. These cues are used primarily to
instruct the operator (trainee) in the proper procedure for
making a transition from one point in the training sequence to
another. These instructional cues usually indicate the available
options for continuing, and in some cases they provide feedback
to the trainee regarding his performance on the last cue or series
of cues. On the Training Sequence Logic Flow Diagram, most of the
instructional cues are represented by a cue number, e.g., Q1,
enclosed in a circle, and a brief description of the cue's function. Instructional cues that provide an introduction to a
sequence of learning cues are specified to the right of the
bracket enclosing the series. For example, on page 1 of Figure 8,
Q7 is an instructional cue that precedes a series of learning cues
(which are, in this case, Q9's).
2) Learning cues. These cues are sequenced so that the
trainee will learn the desired criterion behaviors, e.g., how to
write a SUM function. On the Training Sequence Logic Flow Diagram, learning cues are differentiated according to their training function and according to their format.
According to its training function, which may change from
one part of the training sequence to another, a learning cue is
identified as one of the following types: LX, PX, RX, LY, PY,
PEY, Text, CUM TEXT, RY, CUM REVIEW, PRACTICE Y, and CUM PRACT
PROB. For example, on page 1 of Figure 8, the first series of
learning cues is represented by (LX-1).
According to its format, a learning cue is identified as
one of the following types: Q8, Q9, Q11, Q19, and Q19A. For
example, on page 1 of Figure 8, the first series of learning
cues is identified as a series of Q9's preceded by an instructional cue (Q7).
The different types of training functions for learning cues
will now be explained. The learning cues used by this capability
are logically and sequentially divided into two major groups:
1) The non-QL learning cues. These cues provide a basic
foundation for the course but do not teach
Query Language itself.
They are divided into the sets called "X" SETS, which precede
all other learning cue sets used by this capability. Thus, on
page 1 of Figure 8, the first series of learning cues have an X
subscript, which identifies them as non-QL learning cues.
2) The QL learning cues. These cues provide trainees with
all training and evaluation materials that are needed in order to
learn to use Intermediate Query Language, Model II; these cues


are divided into the sets called "Y" SETS. Thus, on page 2 of
Figure 8, the first series of learning cues have a Y subscript,
which identifies them as QL learning cues.
In each non-QL X SET, learning cues may be further subgrouped
according to their sequence and function. There are two major
sections of learning cues in each X SET:
1) An LX section. This is a sequence of cues (Q9's) used
primarily for training. Each LX section has two subsections:
a) the initial series that provides basic training, and b) the
final series that reviews the information taught in the first
section.
2) A PX section. This is a sequence of cues (Q8's) used
primarily for evaluation of the trainee's proficiency on the
information covered in the preceding LX section.
In each X SET, these two sections are followed by computer
error-analysis of the trainee's responses on the PX cues and, if
necessary, appropriate remedial work.
Learning cues may also be subgrouped in each Y SET according
to their sequence and function. There are three major sections
of learning cues in each Y SET:
1) An LY section. This is a sequence of cues (Q9's) used
primarily for training. Each LY section has three subsections,
in this general order: a) the basic training sequence, called
a TEXT sequence; b) the sequence that reviews the information
taught by the TEXT sequence -- this is called a REVIEW sequence;
and c) a series of cues that emphasize the development of QL
statements for specified data retrieval problems. This series
is intended to give the trainee practice in using the QL elements taught in the TEXT and REVIEW sequences. This is called
a PRACTICE PROBLEM sequence.
2) A PY section. This is a series of cues (Qll's) used
primarily for evaluation of the trainee's proficiency on the
information in one or more of the preceding LY's. The complexity of the computer error-analysis on each problem is
minimized by the restriction of error-analysis to specified
parts of the appropriate QL statement.
In each Y SET, these two sections are followed by computer
error-analysis of the trainee's responses on the PY cues and, if
necessary, appropriate remedial work. There are three levels of
remedial work, appropriate to three evaluation-ratings, GOOD,
AVERAGE and POOR. In general, remedial work for a rating of
POOR is excerpted from a TEXT subsection of an LY section;
remedial work for a rating of AVERAGE is excerpted from a
REVIEW subsection of an LY section; and, remedial work for a
rating of GOOD is excerpted from a PRACTICE PROBLEM subsection
of an LY section. Since remedial work In an area is forced on


a trainee in Experimental Group A if his PY errors on that area
were excessive, any remedial sequence taken at this point is
called a FORCED REMEDIAL sequence.
3) Following the error-analysis and any necessary remedial
work for a Y SET is a PEY section. This is a series of cues
(Q19's and Q19A's) that give the trainee practice in developing
complete QL statements for specified problems, with feedback to
the trainee indicating the correct QL statement for each problem
presented. This section is not used for computer evaluation; it
is only used for a trainee's self-evaluation. Since the trainee's
answers in this section are not evaluated by the computer, the
answer format used by the trainee does not constitute a problem
for error-analysis. Therefore, the trainee types the entire QL
statement as his answer, not just the restricted excerpts required
for his answers in the PY section. Since these cues provide practice in writing QL statements and the answer formats are not
artificially restricted by the length of the answer required,
these cues are called free-form practice exercises.
For each Y SET, after these three sections are complete,
the trainee is given the option of taking remedial work in one
or more areas for which the trainee's responses in the last PY
section were evaluated. Since remedial work at this point is
taken only by free-choice and consists of REVIEW materials from
one of the last LY sections, a remedial sequence that the trainee
chooses to take at this point is called FREE-CHOICE REVIEW. As
noted earlier, this is the only remedial work available to Experimental Group B.
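
Taken together, the rating levels and the two experimental conditions define a small branching rule. The Python sketch below is an illustration only; the function and variable names and the "excessive errors" threshold are assumptions, while the rating-to-subsection mapping and the Group A / Group B distinction follow the description above.

    REMEDIAL_SOURCE = {            # rating for an area -> LY subsection excerpted
        "POOR": "TEXT",
        "AVERAGE": "REVIEW",
        "GOOD": "PRACTICE PROBLEM",
    }

    def forced_remedials(area_ratings, area_errors, group, excessive_at=1):
        """FORCED REMEDIAL sequences after a PY evaluation (Group A only).

        Group B receives no forced remedials; its only remedial work is the
        FREE-CHOICE REVIEW offered at the end of each Y SET. The threshold
        excessive_at stands in for the pre-stored error criteria (assumed).
        """
        if group == "B":
            return []
        return [(area, REMEDIAL_SOURCE[rating])
                for area, rating in area_ratings.items()
                if area_errors.get(area, 0) >= excessive_at]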
As mentioned previously, the X and Y SETS are different in
terms of their position in the overall training sequence and their
general training functions: X SETS are placed first and emphasize
non-QL topics; Y SETS are last and emphasize the use of QL elements. Cue sets may also be differentiated in terms of their
general training function and the overall scope of their training-and-evaluation materials. In terms of function and scope, there
are four types of sets:
All of the X SETS are of one type:
1) Non-QL PRETEST sets. These precede the QL materials
that are tested by the final posttest; in each set, the PX section tests only the materials covered by the LX section in the
same set.
The Y SETS include three different types of sets:
2) QL, INDEPENDENT sets. In each set, the LY section teaches
new materials, and the PY section tests only the materials covered
by the LY section in the same set.


3) QL, CUMULATIVE sets. In each set, the LY section reviews
the materials covered in the INDEPENDENT sets that followed the
last CUMULATIVE set, and the PY section tests all materials covered after the last CUMULATIVE Y SET.
4) The QL, POSTTEST set. In this set, the LY section reviews
the materials covered throughout all of the sets. The PY section
is the posttest, which tests all materials covered over all of
the other Y SETS in the program. It is, in effect, the end-of-course criterion test.
The entire program consists of the following sets, in the
order described:
1) five non-QL, PRETEST X SETS
2) a series of 13 QL INDEPENDENT Y SETS, with six QL
CUMULATIVE Y SETS interpolated at appropriate points
in the overall sequence
3) one QL, POSTTEST Y SET.


[Figure 8. Training Sequence Logic Flow Diagram (multi-page); first page of Table 6.]

TABLE 6 (cont.)
(Page 2 of 3)

Key Designation       Function

(cont.)               ... Remedial Continuations. When this key is pressed,
                      printout #7 will be printed out on the line printer.

CUE FILE              This key signals the program that for updating
MATRIX                purposes, the instructor wants to obtain P/05,
                      Cues (the contents for all cues). When this key
                      is pressed, printout #8 will be printed out on
                      the line printer.

L16
CRITERIA              This key signals the program that for updating
MATRIX                purposes, the instructor wants to obtain P/06,
                      Criteria. When this key is pressed, printout #6
                      will be printed out on the line printer.

L17                   This key signals the program that for updating
FLEX                  purposes, the instructor wants to obtain P/07,
COURSE                FLEX COURSE, which specifies the number of
                      PRETEST (X) SETS, the number of POSTTEST problems, and the number of subcategories. When
                      this key is pressed, printout #9 will be printed
                      out on the line printer.


TABLE 6 (cont.)
(Page 3 of 3)

Key Designation       Function

L25                   This key signals the program that either the
CONTINUE              operator does not wish to exercise any special
                      training options available at this time or
                      none are available. When this key is pressed,
                      the next cue that is available in queue (without branching out to pick up cues that are not
                      presently stacked up) will appear on the display screen. The type of cue that is presented
                      will vary from one training point to another.

L27                   This key signals the program that the operator
SKIP                  does not wish to complete the remedial sequence
                      on which he is now working. When this key is
                      pressed, the trainee will be sent to the same
                      continuation point that he would have reached
                      had he completed the remedial sequence: if the
                      trainee is still in training on the CDT program
                      and he has not yet taken the posttest, or if he
                      is a proficiency maintenance trainee, Q20 will
                      appear on the display screen; if the trainee has
                      just completed the CDT program including the
                      posttest, Q23 will appear on the display screen.

L29                   This key signals the program that the trainee
TERMINATE             is going to stop work on the CDT program at the
TRAINING              present time. When this key is pressed, Q32
                      will appear on the display screen, and the
                      storage of data for a trainee on the work done
                      so far on a set will continue until storage is
                      complete.
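
As an illustration of how an operator's key presses might be routed to these actions, the following Python sketch is offered. It is not part of the 473L design; the Console class, the parameter names, and the cue value passed in are assumptions made for the example, while the three key behaviors follow Table 6.

    class Console:
        """Stand-in for the 473L display; only used so the example can run."""
        def show(self, cue):
            print("DISPLAY:", cue)

    def handle_key(key, display, finished_posttest, next_cue):
        if key == "CONTINUE":                # L25: present the next cue already in queue
            display.show(next_cue)
        elif key == "SKIP":                  # L27: abandon the current remedial sequence
            display.show("Q23" if finished_posttest else "Q20")
        elif key == "TERMINATE TRAINING":    # L29: stop work; Q32 is shown while data
            display.show("Q32")              # storage for the set runs to completion

    handle_key("SKIP", Console(), finished_posttest=False, next_cue="Q9")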

A diagram of the training sequence in terms of the presentation of specific cues and capability options is given in the
Procedural Flow Diagram for the Computer Directed Training Capability. This may be referenced in the Operational Specification
for Computer Directed Training in Intermediate Query Language,
Model II, for System 473L.¹⁰

¹⁰See reference 4.


3.4

DISCUSSION AND RECOMMENDATIONS

The techniques for a computer directed training program
briefly described here, and more fully described in the
Operational Specification, are sufficiently general as to be
applicable to other programs of training in artificial languages
that are relatively fixed in format and vocabulary (such as
computer languages). However, it is recognized that other computer systems may not have available the type of equipment for
which this program was designed. The only major feature of the
instructional system that may not be commonly available on other
systems is the overlay keyboard. The overlay system on the
integrated console was not originally designed to perform an
instructional function, but it provides an excellent opportunity
for permitting limited control of the instructional process by
the trainee and increases the scope of interaction at the computer interface. However, this overlay keyboard is not essential for a computer directed training capability. The functions
performed with the overlay could be simulated by outputs on a
display screen or typewriter providing a trainee with choices
that would have been indicated on the overlay. Although this
method may be less economical, it would still be functional.
It is felt that the major advantages of the CDT instructional system are the periodic analysis of the trainee's input
and the selection of appropriate remedial materials based on
this analysis, the extensive capabilities for output data, and
the introduction of some learner control over the selection of
training materials. The latter feature would be especially
valuable for a proficiency maintenance training sequence, where
individual needs would vary greatly.
Proficiency maintenance is a particularly critical training
problem with command and control systems where there is a need
for highly proficient personnel at all times but where there is
little opportunity for actual practice in real or simulated
situations. It would be possible to expand the computer directed
training course described here to provide frequent realistic
combat exercises and to provide, on the basis of an analysis of
errors made, a series of training exercises uniquely designed
to build proficiency in weak areas. This approach could also
be generalized to other command and control systems where proficiency must always be at a maximum but where actual practice
may be at a minimum.
Looking further into the future, it would be possible to
build into a CDT program appropriate statistical tests that
would determine the effectiveness of the program as measured
against appropriate criteria of terminal performance. With
initial entrance data (e.g., initial aptitude, IQ, etc.) and
performance data collected at appropriate points throughout the
program, statistical computations could be performed after an
appropriate N of trainees had been obtained. The results of
such an analysis may suggest changes in the program, e.g., a


new organization of the materials and/or a shift in the
parameters used for deciding on the frequency and nature of
the correctional training sequences. Adjustments of this
sort could be accomplished automatically and internally by
the computer itself. Such a training system would be truly
adaptive in that it would adjust its own training strategy
based on the measured success of the program being used.
For example, in the program described in this report, the
two experimental groups (A and B) could be compared in an
analysis of variance design and if one group was always
inferior, that group could be discarded, or, if a significant
interaction was found between the experimental group and some
other variable, e.g., some background factor, a decision logic
could be internally prepared for deciding who got the "A" condition and who got the "B" condition. For example, perhaps it
would be shown by the analysis that trainees with previous
experience with computer languages should be given the program
allowing self choice (i.e., optional remedial sequences),
whereas those trainees with no previous experience in the
field should be given the "lock-step" program (i.e., the program with forced remedial sequences). Entering trainees, in
this case, would be asked to indicate this information and
would be assigned by the computer to the appropriate program.
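
A decision rule of the sort being described could be as simple as the following Python sketch. The rule shown is only the illustrative case given in the text (previous experience with computer languages); the function name and the way the background information is represented are assumptions.

    def assign_group(has_computer_language_experience):
        """Route an entering trainee to an experimental condition.

        "B" = optional (free-choice) remedial work only; "A" = forced remedial
        sequences in addition to the optional work.
        """
        return "B" if has_computer_language_experience else "A"

In the adaptive system sketched in this section, the rule itself -- which background factor to ask about and where to draw the line -- would be derived and revised by the program from the accumulating performance data rather than fixed in advance.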
Other relevant "questions" of this kind could be "asked"
in an effort to devise CDT systems that achieve a truly
optimal relationship between level of proficiency and time in
training.¹¹ These approaches are within the present "state-of-the-art."
They are expensive and they are yet to be proven by
the test of practical experience. In spite of the fact that
computer-directed training is still in an experimental, neophyte
phase, the military may provide the only feasible environment
in which the effectiveness of such approaches can be demonstrated.
The existence and availability of large capacity computers, the
number of men to be trained, the complexity of the subject
matter, the difficulty of obtaining qualified instructors, and
the critical nature of the training requirements, all exist in
the context of military training as perhaps in no other sector
of the training world. As has happened in so many other areas,
a significant general advance in the technology of training
could be initiated in support of military requirements. The
impact of such an advance on both military and civilian training
would be hard to overestimate. The implementation of the CDT
program described in this report would be an important step in
this direction. Further refinements of the approach described
here, applications to other military training problems, and
inclusion of more sophisticated strategies such as have been
suggested in this section of the report, would represent a program of both short- and long-term value to military and civilian
training".
¹¹See reference 6.

REFERENCES
1.  Clapp, Doris J., Yens, D. P., Shettel, H. H., & Mayer, Sylvia R.
    Development and evaluation of a self-instructional course in the
    operational training capability Query Language for System 473L,
    U. S. Air Force Headquarters. Bedford, Mass.: Air Force Electronic
    Systems Division, Decision Sciences Laboratory, 1964. (Tech. Doc.
    Report No. ESD-TDR-64-562)

2.  Self-instructional course in OTC Query Language, Volumes I-XXIII.
    Bedford, Mass.: Air Force Electronic Systems Division, Decision
    Sciences Laboratory, 1964. (Tech. Doc. Report No. ESD-TDR-64-443)

3.  Self-instructional course in Mod II Query Language, Volumes I-XXIX.
    Bedford, Mass.: Air Force Electronic Systems Division, Decision
    Sciences Laboratory, 1966. (Tech. Report No. ESD-TR-66-513)

4.  Clapp, Doris J., Shettel, H. H., & Mayer, Sylvia R. Operational
    specification for computer directed training in Intermediate Query
    Language, Model II, for System 473L, U. S. Air Force Headquarters.
    Bedford, Mass.: Air Force Electronic Systems Division, Decision
    Sciences Laboratory, 1966. (Tech. Report No. ESD-TR-65-252)

5.  Schiff, J. D., Chenevert, M. L., & Bennett, W. F. Computer directed
    training: System 473L Query Language. Bedford, Mass.: Air Force
    Electronic Systems Division, Decision Sciences Laboratory, 1966.
    (Tech. Report No. ESD-TR-66-261)

6.  Shettel, H. H., & Yens, D. P. Developing a computer-directed
    program to teach a computer language. National Society for
    Programmed Instruction Journal, 1966, 5, No. 1.

APPENDIX A
Number of Frames in Each Volume of the Program¹²

Volume                                                                 Number of
Number     Section of Program                                          Frames

First Tryout
   I       Preprogram                                                     34
  II       General Orientation                                           133
 III       Organization of Information in the Data Files                  43
  IV       Selection of the Proper File Indicator                         21
   V       Selection of the Correct Attributes and Values
           Using the Data File Contents Manual                            51
  VI       Writing Simple Query Language Statements -
           The Simple Qualifier                                            88
 VII       Writing Simple Query Language Statements -
           Writing the Output Director and Output Selector                55
VIII       Writing Query Language Functions - Special
           Features of the Qualifier                                       67

Second Tryout
  IX       Writing Query Language Functions - Special
           Features of the Selector                                        64
   X       GCD Functions                                                   64
  XI       MIN and MAX Functions                                          137
 XII       GREATEST and LEAST                                              51
XIII       Simple SUMS                                                    118
 XIV       More on SUM                                                     62
  XV       Titles                                                          18
 XVI       Sorting                                                         58
XVII       Compound Qualifiers                                             46
XVIII      SAVE Procedures                                                 68
 XIX       CHECK                                                            8
  XX       Review and Practice of II-XIX                                   73
 XXI       Use of the Data Control Manuals and Complex Queries             98
XXII       Complex Queries - Computed Attributes and Special Uses          74
XXIII      Final Practice                                                  17

           TOTAL NUMBER                                                  1448

¹²There were no frames in Volumes XXIV through XXIX; these
volumes contained reference or test materials only.

APPENDIX B
Final Test*
ALL MANUALS, EXHIBITS, AND REVIEW PANELS MAY BE USED.
A.

General Information
1.

For what purpose is the 473L system designed?

2.

Determine what file indicator you would use to obtain
the following information (you may use Exhibits 2N,
3A and 3B):
a. Specific data concerning current materiel inventory
b. The types of aircraft possessed by TAC
c. Detailed characteristics of tactical aircraft
d. The selection of a specific plan that fits certain
requirements

3.

Label the following outputs with the appropriate output
director.

     UNIT        MDS
     23FIS       F86A
                 F100P
     113FIS      F104B
                 F100B
                 F101C
                 F105D

     AFLD NAME   STATE   RNWY LENGTH
     ANDREWS     MD      12000
                         11000
                         9000
     AMARILLO    TEX     13500
                         10000
                         9500
                         8000
     TRUAX       WIS     9000
                         8500
                         7000

4.

In the Query below, use brackets and labels to indicate:
a. The program indicator
b. The file indicator
c. The qualifier
d. The output director
e. The output selector
f. The qualifier conjunction
g. The output director conjunction
h. Modifiers
i. Attributes
j. Comparators
k. Values

RETRIEVE PLAN IDENT WITH PLAN CAT = EX, VUN PLAN IND = NO
THEN PRINT TYPE OP, PLAN IDENT, START DATE ¬

*To save space, answer blanks are omitted in this reproduction
of the test.

APPENDIX B (cont.)
5.

Identify with arrows the points at which CHECK could be
inserted in the following QL statement.

RETRIEVE AIRFIELDS WITH COUNTRY = CANADA
THEN PRINT AND COUNT AFLD NAME *AFLDS IN CANADA* ¬

6.

Match the following:
AND        a. commas between modifiers
OR         b. semicolons between modifier sets
7.

Select the correct statements about SUM.
a. Only one SUM expression may be used in a QL qualifier.
b. Only one SUM expression may be used in a QL selector.
c. If more than one SUM expression is used in a QL
statement, all SUM expressions must have (or represent) the same set of SUM control attributes.
d. If more than one SUM expression is used in a QL
statement, all SUM expressions must have (or represent) the same set of summed attributes.
e. A SUM function used in one subordinate query of a
complex Query Language statement automatically applies
to all subordinate Queries in the statement.
f. A SUM function used in one modifier set of a compound
qualifier automatically applies to all modifier sets
of the compound qualifier.
g. The only SUM expression that may be retained is SUM,
referencing a SUM already computed in the qualifier.

B.

Problems

Write the appropriate Query Language statements for the problems
below (you may refer to any manuals or Exhibits).
1.

Write a Query to print out the entire UNIT/LOC STRNGTH file,
sorted in descending alphabetic order according to the name
of the installation. Specify also that the following SUMS
are to be printed out:
a. for each LSA, the number of U. S. citizens.
b. for the entire file, the total number of U. S. citizens.

2.

You are checking the data files to determine whether some
information has been omitted. Determine which planned
operating bases have no values for ITEM in the files.
Request a horizontal printout.

3.

Obtain the plan idents of all exercise plans that require
C119A aircraft and are scheduled to start on or between
1 June 1966 and 15 July 1966. Use a horizontal display.


APPENDIX B (cont.)
4.

Store a Query which will print out the number of authorized
weapons systems for a force (UNIT, UNIT LOC, MDS and COMD)
to be specified at a later time. Leave room for other
attributes in the selector.

5.

Write a Query to display the numbers of any SAVEd statements pertaining to F105G entries in FORCE STATUS.

6.

Make the necessary entries to complete this Query for
Barclay Air Force Base. Add the title "Barclay Status
Data" to the output.
RETRIEVE MATERIEL STATUS WITH POB = '1' THEN PRINT ALL"1
/INCOMPLETE QUERY - TO COMPLETE TYPE INSERT NUMBER AND =
SYMBOL FOLLOWED BY:
ADDITIONAL ATTRIBUTES
ADDITIONAL VALUES
OTHER PORTIONS OF QUERY STATEMENT AS DESIRED.
START FOR EACH NUMBER IN STATEMENT OR ERASE COMPLETE
.INSERT NUMBER (E.G., '3'). TYPE EOM AFTER LAST INSERT.
>CHECK PUNCTUATION AND FORMAT BEFORE PRESSING ENTER.
(1 = •

7.

Write a Query to select either airfields in Canada that
are no more than 500 nautical miles away from Hill and
can land or depart (whichever is lower) at least 10 aircraft per hour under IFR conditions or airfields in the
United States that are no more than 350 nautical miles
from KANSAS CITY and have the above arrival/departure
restrictions. Specify a horizontal printout of airfield
names, the country, the GCD from both airfields, and the
number of aircraft that can be landed or departed
(labeled MIN ARR/DEP).

8.

Write a Query to print out the names and GCD's from
Bailey of all airfields that are no further from Bailey
than Peterson is from Wallace. Eliminate Bailey,
Peterson, and Wallace from being selected.

9.

Write a Query to find the LOG ABSTRACTS annex to plan
32111 and print out the annex plan ident and the abstract.
Also, a count of the number of and the plan idents of
other plans using this annex is desired. Specify a
printout.

10.

Write a Query to select the medical plan(s) with the
fewest total number of patients per day to be moved from
the collect bases. Specify a vertical display of the
plan ident, the total number of patients per day to be
moved, and all strategic data related to the route segment requiring the fewest number of AME crews per day
at the staging base.

APPENDIX B (cont.)
11.

Write a Query to find the closest airfield to SEMHOI
(excluding SEMHOI) that is a POB for at least 10,000
pounds of gaseous nitrogen. Obtain a vertical display
of the airfield name, the GCD from SEMHOI, and the
amount of gaseous nitrogen intended for use there.

12.

Write a Query to find those FORCE STATUS entries with
minimum ready weapons systems of at least 10. Entitle
the minimum RDY WP SYS. Specify a vertical display of
the force and the weapons system.

13.

Write a Query to find the total amount of jet fuel stored
at Parkersburg (regardless of the POB of the fuel) and
those airfields in logistics subarea 3A capable of storing
at least that much jet fuel. Specify a horizontal listing
of the amount of jet fuel stored at Parkersburg, the names
of the qualifying airfields, their GCD's from Parkersburg,
and their jet fuel capacities.

14.

Write a Query to display the number of airfields in the
United States having all navigational aids either available or estimated and having at least 1 of the lighting
facilities either available or estimated.

15.

Write a Query to find all plan idents, the POB's, the
MDS's, and the number of airmen deployed from a POB and
MDS set (such as F105G's at Bailey) when at least 1000
airmen are to be deployed from a set and the POB is in
the United States or when at least 1000 men are to be
deployed from a set and the airmen are from an RFG unit.
Specify a sort in ascending order according to MDS.

16.

Write a Query to find the minimum authorized weapons
system of B52A's for the 81BW unit of TAC at Parkersburg
(entitled WPSYS AUTH) and those plans which require no
more crewed B52A's than the authorized weapons system.
Specify a horizontal display of the weapons system, and
the plan idents of qualifying plans.

APPENDIX C
Final Test Scoring and Answer Key

Scoring of QL statements on the posttest is based on:

1.  Program director — 1 point
2.  File indicator — 1 point
3.  WITH — 1 point
4.  Each modifier — 1 point
5.  THEN — 1 point
6.  Output director — 1 point
7.  Each phrase in selector* — 1 point
8.  Punctuation and spelling of entire QL statement — 1 point

*A phrase may consist of a single attribute, a computed attribute, an
attribute = INCR/DECR, or a SAVE instruction. For example,

...THEN LIST AND COUNT AFLD NAME, GCD, GCD (BEALE),
SUM AND TOTAL BY AFLD NAME (RNWY LENGTH = INCR), MIN (MAX IFR ARR,
MAX IFR DEP), TITLE = MIN TRAFF, *CALIFORNIA AIRFIELD INFO*
SAVE BY SMITH 1966 ¬

would be worth 12 points.
Since most Query Language problems can be solved in more than one way,
it is important that the test be scored by someone knowledgeable in Query
Language. Each Query should first be assigned a number of possible points,
according to how the student attempted to solve the problem, and then points
subtracted for the items missed. For example, if a student wrote the Query

RETRIEVE AIRFIELDS WITH AFLD NAME = PETERSON THEN RETAIN GCD (WALLACE):
RETRIEVE AIRFIELDS WITH AFLD NAME ≠ BAILEY AND PETERSON AND WALLACE,
GCD (BAILEY < [R1, GCD (WALLACE), OR]) THEN PRINT AFLD NAME, GCD (BAILEY) ¬

for question number 8, he would receive 17 possible points and lose two:
1) for using AND's in the AFLD NAME ≠ modifier, and 2) he would lose his
punctuation and spelling point for using parentheses around WALLACE in the
value [R1, GCD WALLACE, OR].

It will be necessary, for each student, to total the possible points and
the correct points and compute his percentage.
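
The arithmetic of this procedure is simple enough to state as a short Python sketch; the function names are assumptions, and the figures in the example are the question 8 case just described (17 possible points, 2 lost).

    def percent_for_query(possible_points, points_lost):
        """Score one Query: possible points minus points lost, as a percentage."""
        return 100.0 * (possible_points - points_lost) / possible_points

    def overall_percent(per_query):
        """per_query is a list of (possible points, correct points) pairs."""
        possible = sum(p for p, c in per_query)
        correct = sum(c for p, c in per_query)
        return 100.0 * correct / possible

    print(round(percent_for_query(17, 2), 1))   # 88.2 for the example above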

APPENDIX C (cont.)

TEST ANSWER KEY

A.  General Information                                          Points

1.  provide information needed for decision-making                  1

2.  a. MATERIEL STATUS                                               1
    b. FORCE STATUS                                                  1
    c. TACTICAL CHAR                                                 1
    d. PLAN IDENT                                                    1

3.  LISTH OR PRINTH OR HORIZONTAL                                    1
    LISTV OR PRINTV OR VERTICAL                                      1

4.  In the Query below, use brackets and labels to indicate:        11

    RETRIEVE PLAN IDENT WITH PLAN CAT = EX, VUN PLAN IND = NO
    THEN PRINT TYPE OP, PLAN IDENT, START DATE ¬

    a. The program indicator         RETRIEVE
    b. The file indicator            PLAN IDENT
    c. The qualifier                 PLAN CAT = EX, VUN PLAN IND = NO
    d. The output director           PRINT
    e. The output selector           TYPE OP, PLAN IDENT, START DATE
    f. The qualifier conjunction     the comma between the two modifiers
    g. The output director conjunction
    h. Modifiers                     PLAN CAT = EX; VUN PLAN IND = NO
    i. Attributes                    PLAN CAT, VUN PLAN IND, TYPE OP,
                                     PLAN IDENT, START DATE
    j. Comparators                   =
    k. Values                        EX, NO

5.  RETRIEVE AIRFIELDS WITH COUNTRY = CANADA                          6
    THEN PRINT AND COUNT AFLD NAME *AFLDS IN CANADA* ¬

APPENDIX C (cont.)
                                                                 Points
6.  a                                                                1
    b                                                                1

7.  a, b, c, d, e, f, g                                              1 each

B.  Problems
1.

RETRIEVE UNIT/LOC STRNGTH THEN PRINT INSTALLATION = DECR, ALL,
SUM AND TOTAL BY LSA (TOTAL US CIVILS)-i

8

2.

RETRIEVE MATERIEL STATUS WITH ITEM = BLANK THEN PRINTH POB ~i

8

3.

RETRIEVE PLAN IDENT WITH PLAN CAT = EX, MDS = C119A, START
DATE>01JUN66 AND < 15JUL66 THEN LISTH PLAN IDENT 1

10

4.

RETRIEVE FORCE STATUS WITH UNIT = ?, UNIT LOC = ?, MDS = ?,
COMD = ? THEN PRINT MIN (CRWS AUTH, ACFT AUTH), ? SAVE ¬

13

5.

FIND FORCE STATUS, F105G ¬

4

6.

RETRIEVE MATERIEL STATUS WITH POB = '1' THEN PRINT '2' H

2

(1 = 'BARCLAY 2 = 'ALL *BARCLAY STATUS DATA*' -\

2

7.

RETRIEVE AIRFIELDS WITH COUNTRY = CANADA, GCD (HILL < 500),
MIN (MAX IFR ARR, MAX IFR DEP) > 10, TITLE = MIN ARR/DEP;
COUNTRY = USA, MIN (MAX IFR ARR, MAX IFR DEP) > 10, TITLE =
MIN ARR/DEP, GCD (KANSAS CITY < 350) THEN PRINTH AFLD NAME,
COUNTRY, GCD, MIN ARR/DEP ¬ (OR PRINTH AFLD NAME, SAME ¬)

18

8.

RETRIEVE AIRFIELDS WITH AFLD NAME ≠ BAILEY OR PETERSON OR
WALLACE, GCD (BAILEY < GCD (PETERSON, WALLACE)) THEN PRINT
AFLD NAME, GCD ¬

9

APPENDIX C (cont.)
Points
9.

RETRIEVE PLAN IDENT WITH PLAN IDENT = 32111 THEN RETAIN
LOG ABST IDENT: RETRIEVE LOG ABSTRACTS WITH PLAN IDENT =
[Rl, LOG ABST IDENT, OR] THEN PRINT PLAN IDENT, LOG ABSTR:
RETRIEVE PLAN IDENT WITH LOG ABST IDENT = [Rl, LOG ABST
IDENT, OR] THEN PRINT AND COUNT PLAN IDENT ~\

24

10.

RETRIEVE MED PLAN RQMT WITH SUM BY PLAN IDENT (COLLECT CSU
RQMT = MIN), STAGE AME RQMT = LEAST THEN LISTV SUM, RTE ID =
GROUP ¬

10

11.

RETRIEVE MATERIEL STATUS WITH POB ≠ SEMHOI, ITEM = NITRO GAS,    28
TOTAL ASSETS > 10000 THEN RETAIN POB: RETRIEVE AIRFIELDS WITH
AFLD NAME = [Rl, POB, OR], GCD (SEMHOI = MIN) THEN RETAIN AND
LISTV AFLD NAME, GCD: RETRIEVE MATERIEL STATUS WITH ITEM =
NITRO GAS, POB = [R2, AFLD NAME, OR] THEN LISTV TOTAL ASSETS 1

12.

RETRIEVE FORCE STATUS WITH MIN (CRWS RDY, ACFT RDY) > 10, TITLE 13
= RDY WP SYS THEN LISTV COMD, UNIT, UNIT LOC, MDS, RDY WP SYS ~i

13.

RETRIEVE MATERIEL STATUS WITH ITEM = JET FUEL, STORAGE LOC =
20
PARKERSBURG THEN RETAIN AND LISTH SUM (SL ASSET): RETRIEVE
AIRFIELDS WITH LSA = 3A, JET FUEL CAP> [Rl, SUM SL ASSET, AND]
THEN LISTH AFLD NAME, GCD (PARKERSBURG), JET FUEL CAP 1

14.

RETRIEVE AIRFIELDS WITH COUNTRY = USA, NAVAID FACILITY = TOWER
AND APPROACH AND VOR AND TACAN AND VORTAC AND RBN AND AIR/GND
AND DF AND GCA AND ILS, LTG FACILITY = ANY THEN COUNT -|

15.

RETRIEVE PERS DEPLOY STAT WITH SUM BY MDS BY POB (TOTAL ORG-AMN 11
>1000), CRTY/STATE = USA; ORG KIND = RFG THEN LIST SUM = INCR,
PLAN IDENT ~»

16.

RETRIEVE FORCE STATUS WITH COMD = TAC, UNIT = 81BW, MDS = B52A, 21
UNIT LOC = PARKERSBURG THEN RETAIN AND LISTH MIN (ACFT AUTH,
CRWS AUTH), TITLE = WPSYS AUTH: RETRIEVE PLAN IDENT WITH MDS =
B52A, CRWD ACFT RQ < [Rl, WPSYS AUTH, AND] THEN LISTH PLAN IDENT1


APPENDIX D
Sequence and Contents of the Computer Directed Training Sets
SET             CONTENTS

X SETS
    A           A brief introduction to the use of the console
                and the CDT overlay; use of the instructional
                materials; use of the CDT Exhibit Book.
    B           More detailed coverage on the use of the console
                and the CDT overlay.
    C           The computer component and the input and output
                devices of the 473L System; the file structure
                and the types of data storage utilized by System
                473L; basic methods of data retrieval with System
                473L; more on the use of the console and the CDT
                overlay.
    D           Basic elements of a QL statement; introduction to
                Error Messages; use of the AF manuals covering the
                QL data base and system vocabulary; more on the
                use of the CDT Exhibit Book.
    E           More on the use of the AF manuals covering the QL
                data base and system vocabulary; more on the use
                of the CDT Exhibit Book; more on the use of the
                console and the CDT overlay.

INDEP Y SETS
    F           The Simple Qualifier
    G           Basic instruction on writing the Output Director
                and Output Selector

CUM Y SET
    H           Review of F and G

INDEP Y SETS
    I           Special features of the qualifier
    J           Special features of the selector

CUM Y SET
    K           Review of I and J

INDEP Y SETS
    L           GCD functions
    M           MIN and MAX functions

CUM Y SET
    N           Review of L and M

INDEP Y SETS
    O           More on MIN and MAX functions
    P           GREATEST and LEAST, and Review of MIN and MAX

CUM Y SET
    Q           Review of O and P
