File No. S360-24
Order No. GC28-6399-2

Systems Reference Library

IBM OS Full American National Standard COBOL
Compiler and Library, Version 2
Programmer's Guide

Program Numbers 360S-CB-545
                360S-LM-546

This publication describes how to compile an
American National Standard COBOL (X3.23-1968)
program using Version 2 of the OS Full American
National Standard COBOL compiler. It also
discusses how to link-edit or load and execute the
program under control of the operating system.
There is a description of the output of each of
these steps, i.e., compile, link-edit, load, and
execute. In addition, there is an explanation of
the features of the compiler and available options
of the operating system. Note that American
National Standard COBOL was formerly known as USA
Standard COBOL.


Third Edition (July 1972)

This is a major revision of, and makes obsolete, GC28-6399-1 and
Technical Newsletters GN28-0422, GN28-0437, and GN28-0473.

This edition corresponds to Release 21.6 of the IBM Operating System.

Changes are continually made to the specifications herein; any such
changes will be reported in subsequent revisions or Technical
Newsletters. Before using this publication in connection with the
operation of IBM systems, refer to the latest SRL Newsletter, Order
No. GN20-0360, for editions that are applicable and current.

Requests for copies of IBM publications should be made to your IBM
representative or to the IBM branch office serving your locality.

A form for readers' comments is provided at the back of this
publication. If the form has been removed, comments may be addressed to
IBM Corporation, Programming Publications, 1271 Avenue of the Americas,
New York, New York 10020. Comments become the property of IBM.

© Copyright International Business Machines Corporation 1969, 1971, 1972

PREFACE

The purpose of this publication is to enable programmers to compile, linkage edit, and execute, or compile and load, American National Standard COBOL compiler Version 2 programs under control of the IBM System/360 Operating System. The COBOL language is described in the publication IBM OS Full American National Standard COBOL, Order No. GC28-6396, which is a corequisite to this publication.

Programmers who are familiar with the operating system and wish to know how to run COBOL programs should read "Job Control Statements" and "Data Set Requirements" under "Job Control Procedures," and "Output." These chapters provide information about the preparation of COBOL programs for processing by the operating system.

Programmers who are unfamiliar with the concepts of the operating system should read "Introduction," "Job Control Procedures," "Checklist for Job Control Procedures," and "Using Cataloged Procedures" in addition to the sections listed above.

The chapters "Program Checkout" and "Programming Techniques" are of special interest, since they contain information about debugging and efficient programming. Other chapters discuss optional features of the language and the operating system. Some chapters include introductory information about features of the operating system that are described in detail in other publications.

The machine configuration required for system operations is described in the chapter "Machine Considerations."

Wider and more detailed discussions of the operating system are given in the following publications:

IBM System/360 Operating System: Concepts and Facilities, Order No. GC28-6535
IBM System/360 Operating System: Job Control Language Charts, Order No. GC28-6632
IBM System/360 Operating System: System Programmer's Guide, Order No. GC28-6550
IBM System/360 Operating System: Supervisor Services, Order No. GC28-6646
IBM System/360 Operating System: Data Management Services, Order No. GC26-3746
IBM System/360 Operating System: Supervisor and Data Management Macro Instructions, Order No. GC28-6647
IBM System/360 Operating System: Sort/Merge, Order No. GC28-6543
IBM System/360 Operating System: Utilities, Order No. GC28-6586
IBM System/360 Operating System: System Generation, Order No. GC28-6554
IBM System/360 Operating System: Programmer's Guide to Debugging, Order No. GC28-6670
IBM System/360 Operating System: Storage Estimates, Order No. GC28-6551

CONTENTS

INTRODUCTION ... 13
  Executing a COBOL Program ... 13
    Compilation ... 13
    Linkage Editing ... 14
    Loading ... 14
    Execution ... 14
  Operating System Environments ... 14
    Multiprogramming With a Fixed Number of Tasks ... 14
    Multiprogramming With a Variable Number of Tasks ... 14

JOB CONTROL PROCEDURES ... 15
  Control Statements ... 17
    Job Management ... 17
    Preparing Control Statements ... 17
      Name Field ... 18
      Operation Field ... 18
      Operand Field ... 18
      Comments Field ... 19
    Conventions for Character Delimiters ... 19
    Rules for Continuing Control Statements ... 19
    Notation for Describing Job Control Statements ... 20
  JOB Statement ... 21
    Identifying the Job (jobname) ... 21
    JOB Parameters ... 22
      Supplying Job Accounting Information ... 22
      Identifying the Programmer ... 22
      Displaying All Control Statements, Allocation, and Termination Messages (MSGLEVEL) ... 22
      Specifying Conditions for Job Termination (COND) ... 23
      Requesting Restart for a Job (RD) ... 23
      Resubmitting a Job for Restart (RESTART) ... 24
    Priority Scheduling Job Parameters ... 25
      Setting Job Time Limits (TIME) ... 25
      Assigning a Job Class (CLASS) ... 25
      Assigning Job Priority (PRTY) ... 25
      Requesting a Message Class (MSGCLASS) ... 26
      Specifying Main Storage Requirements for a Job (REGION) ... 26
      Holding a Job for Later Execution ... 27
      Specifying Additional Storage (ROLL) ... 27
  EXEC Statement ... 27
    Identifying the Step (stepname) ... 28
    Positional Parameters ... 28
      Identifying the Program (PGM) or Procedure (PROC) ... 28
    Keyword Parameters ... 31
      Specifying Job Step Accounting Information (ACCT) ... 31
      Specifying Conditions for Bypassing or Executing the Job Step (COND) ... 31

      Passing Information to the Processing Program (PARM) ... 33
        Options for the Compiler ... 34
        Options for the Linkage Editor ... 36
        Options for the Loader ... 36
      Requesting Restart for a Job Step (RD) ... 38
    Priority Scheduling EXEC Parameters ... 39
      Establishing a Dispatching Priority (DPRTY) ... 39
      Setting Job Step Time Limits (TIME) ... 39
      Specifying Main Storage Requirements for a Job Step (REGION) ... 40
      Specifying Additional Main Storage for a Job Step (ROLL) ... 41
  DD Statement ... 41
    Additional DD Statement Facilities ... 55
      JOBLIB and STEPLIB DD Statements ... 55
      SYSABEND and SYSUDUMP DD Statements ... 56
  PROC Statement ... 56
  PEND Statement ... 56
  Command Statement ... 56
  Delimiter Statement ... 56
  Null Statement ... 56
  Comment Statement ... 56
  Data Set Requirements ... 56
    Compiler ... 56
      SYSUT1, SYSUT2, SYSUT3, SYSUT4 ... 57
      SYSIN ... 57
      SYSPRINT ... 57
      SYSPUNCH ... 57
      SYSLIN ... 58
      SYSLIB ... 58
    Linkage Editor ... 59
      SYSLIN ... 59
      SYSPRINT ... 60
      SYSLMOD ... 60
      SYSUT1 ... 61
      SYSLIB ... 61
      User-Specified Data Sets ... 61
    Loader ... 61
      SYSLIN ... 61
      SYSLIB ... 61
      SYSLOUT ... 62
    Execution Time Data Sets ... 62
      DISPLAY Statement ... 62
      ACCEPT Statement ... 63
      EXHIBIT or TRACE Statement ... 63
      Abnormal Termination Dump ... 63

USER FILE PROCESSING ... 64
  User-Defined Files ... 64
    File Names and Data Set Names ... 64
    Specifying Information About a File ... 65
  File Processing Techniques ... 65
    Data Set Organization ... 65
    Accessing a Standard Sequential File ... 66
  Direct File Processing ... 71
    Dummy and Capacity Records ... 74
    Sequential Creation of Direct Data Set ... 74

    Random Creation of a Direct Data Set ... 77
    Sequential Reading of Direct Data Sets ... 78
    Random Reading, Updating, and Adding to Direct Data Sets ... 78
    Multivolume Data Sets ... 79
    File Organization Field of the System-Name ... 80
    Randomizing Techniques ... 83
  Relative File Processing ... 91
    Sequential Creation ... 92
    Sequential Reading ... 93
    Random Access ... 93
  Indexed File Processing ... 100
    Indexes ... 101
    Indexed File Areas ... 103
    Creating Indexed Files ... 104
    Reading or Updating Indexed Files Sequentially ... 108
    Accessing an Indexed File Randomly ... 110

USING THE DD STATEMENT ... 112
  Creating a Data Set ... 112
    Creating Unit Record Data Sets ... 113
    Creating Data Sets on Magnetic Tape ... 113
    Creating Sequential (BSAM or QSAM) Data Sets on Mass Storage Devices ... 114
    Creating Direct (BDAM) Data Sets ... 115
    Creating Indexed (BISAM and QISAM) Data Sets ... 115
    Creating Data Sets in the Output Stream ... 115
    Examples of DD Statements Used To Create Data Sets ... 116
  Retrieving Previously Created Data Sets ... 119
    Retrieving Cataloged Data Sets ... 119
    Retrieving Noncataloged (KEEP) Data Sets ... 120
    Retrieving Passed Data Sets ... 120
    Extending Data Sets With Additional Output ... 120
    Retrieving Data Through an Input Stream ... 120
    Examples of DD Statements Used To Retrieve Data Sets ... 122
  DD Statements That Specify Unit Record Devices ... 123
  Cataloging a Data Set ... 123
  Generation Data Groups ... 123
  Naming Data Sets ... 124
Additional File Processing Information ... 124
  Data Control Block ... 124
    Overriding DCB Fields ... 125
    Identifying DCB Information ... 125
  Error Processing for COBOL Files ... 125
    System Error Recovery ... 125
    INVALID KEY Option ... 126
    USE AFTER ERROR Option ... 126
  Volume Labeling ... 129
    Standard Label Format ... 130
    Standard Label Processing ... 130
    Standard User Labels ... 130
    User Label Totaling ... 131
    Nonstandard Label Format ... 131
    Nonstandard Label Processing ... 131
    User Label Procedure ... 132

RECORD FORMATS ... 134
  Fixed-Length (Format F) Records ... 134
  Unspecified (Format U) Records ... 135
  Variable-Length (Format V) Records ... 135
    APPLY WRITE-ONLY Clause ... 138
  Spanned (Format S) Records ... 138
    S-Mode Capabilities ... 139
    Sequential S-Mode Files (QSAM) for Tape or Mass Storage Devices ... 140
      Source Language Considerations ... 140
      Processing Sequential S-Mode Files (QSAM) ... 141
    Directly Organized S-Mode Files (BDAM and BSAM) ... 142
      Source Language Considerations ... 142
      Processing Directly Organized S-Mode Files (BDAM and BSAM) ... 143
  OCCURS Clause with the DEPENDING ON Option ... 144

OUTPUT ... 147
  Compiler Output ... 147
  Object Module ... 153
  Linkage Editor Output ... 153
    Comments on the Module Map and Cross-Reference List ... 157
    Linkage Editor Messages ... 157
  Loader Output ... 157
  COBOL Load Module Execution Output ... 157
  Requests for Output ... 160
  Operator Messages ... 160
  System Output ... 160

PROGRAM CHECKOUT ... 161
  Debugging Language ... 161
    Following the Flow of Control ... 161
    Displaying Data Values During Execution ... 162
  Testing a Program Selectively ... 163
  Testing Changes and Additions to Programs ... 164
  Dumps ... 164
    Errors That Can Cause a Dump ... 165
    Input/Output Errors ... 165
    Errors Caused by Invalid Data ... 165
    Other Errors ... 166
    Completion Codes ... 167
    Finding Location of Program Interruption in COBOL Source Program Using the Condensed Listing ... 169
    Using the Abnormal Termination Dump ... 169
    Finding Data Records in an Abnormal Termination Dump ... 175
    Locating Data Areas for Spanned Records ... 182
  Incomplete Abnormal Termination ... 183
  Scratching Data Sets ... 184

PROGRAMMING TECHNIQUES ... 185
  General Considerations ... 185
    Spacing the Source Program Listing ... 185
  Environment Division ... 185
    APPLY WRITE-ONLY Clause ... 185
    QSAM Spanned Records ... 185
    APPLY RECORD-OVERFLOW Clause ... 185
    APPLY CORE-INDEX Clause ... 185
    BDAM-W File Organization ... 185
• .185

Data Division • • • • • •
• • • 186
Overall Considerations.
• .186
Prefixes. • • • •
• .186
Level Numbers • • • •
• .186
File Section. • • • • • • • • •
.186
RECORD CONTAINS Clause • • • • • • • 186
Working-Storage section • • • • • • • 187
Separate Modules. • •
• • • 187
Locating the Working-Storage
Section in Dumps
• • • 187
Data Description. •
• .187
REDEFINES Clause.
• • • 187
PICTURE Clause.
• • • 188
USAGE Clause. • •
• .190
SYNCHRONIZED Clause
• • • 191
Special considerations for DISPLAY
and COMPUTATIONAL Fields.
• •• 191
Data Formats in the Computer • • • • . 192
Procedure Division • • • • • • • • • • • 194
Modularizing The Procedure Division .194
Main-Line Routine
• • • 194
Processing Subroutines.
.194
Input/Output Subroutines.
• • • 194
Intermediate Results.
• .194
Intermediate Results and Binary
Data Items. • • ••
• • • 195
Intermediate Results and COBOL
Library Subroutines
.195
Intermediate Results Greater than
30 Digits • • • • •
.195
Intermediate Results and
Floating-Point Data Items
.195
Intermediate Results and the ON
SIZE ERROR Option
• • • 195
Verbs • • • • • • •
• .195
ACCEPT Statement.
• • • • • • • 195
CLOSE Statement
• • • 195
COMPUTE Statement
• • • • • 196
IF Statement • •
• • • •
.196
MOVE Statement.
• • • • • 196
NOTE Statement •
• • • • • • • 196
OPEN Statement
• • 196
PERFORM Verb. •
• • • 196
READ INTO and WRITE FROM Options • • 197
TRANSFORM Statement
• • 197
Using The Report Writer Feature
• • 197
REPORT Clause in FD
• • 197
Summing Technique
• • 197
Use of SUM • • • • • •
• .198
SUM Routines • • • • • • • • • • • • 198
Output Line Overlay
• • 199
Page Breaks
• • • • • 199
WITH CODE Clause. • •
• .200
Control Footings and Page Format • • 201
Floating First Detail Rule
• • • 201
Report Writer Routines • • • • • • • 202
Table Handling Considerations
•• 202
Subscripts. • • •
• .202
Index-Names
.202
Index Data Items.
.202
OCCURS Clause
.202
DEPENDING ON Option • • • • • • • • 202
SET Statement
• • • 203
SEARCH Statement.
• • • 205
Building Tables
.207
CALLING AND CALLED PROGRAMS
Specifying Linkage • • • • •

• • 208
•• 208

    Linkage in a Calling COBOL Program ... 208
    Linkage in a Called COBOL Program ... 208
    Correspondence of Identifiers in Calling and Called Programs ... 209
    Linkage in a Calling or Called Assembler-Language Program ... 209
      Conventions Used in a Calling Assembler-Language Program ... 209
      Conventions Used in a Called Assembler-Language Program ... 210
      File-Name and Procedure-Name Arguments ... 211
    Communication with Other Languages ... 211
  Linkage Editing Programs ... 212
    Specifying Primary Input ... 213
    Specifying Additional Input ... 217
      INCLUDE Statement ... 217
      LIBRARY Statement ... 217
    Linkage Editor Processing ... 217
      Example of Linkage Editor Processing ... 218
    Overlay Structures ... 218
      Considerations for Overlay ... 218
      Linkage Editing with Preplanned Overlay ... 218
      Dynamic Overlay Technique ... 219
  Loading Programs ... 220
    Specifying Primary Input ... 220
    Specifying Additional Input ... 220

LIBRARIES ... 221
  Kinds of Libraries ... 221
    Libraries Provided by the System ... 221
      Link Library ... 221
      Procedure Library ... 222
      Sort Library ... 222
      COBOL Subroutine Library ... 222
    Libraries Created by the User ... 222
      Automatic Call Library ... 223
      COBOL Copy Library ... 223
        Entering Source Statements ... 223
        Updating Source Statements ... 224
        Retrieving Source Statements ... 224
        COPY Statement ... 224
        BASIS Card ... 225
      Job Library ... 226
        Additional Input to Linkage Editor ... 227
  Creating and Changing Libraries ... 227
USING THE CATALOGED PROCEDURES ... 228
  Calling Cataloged Procedures ... 228
    Data Sets Produced by Cataloged Procedures ... 228
  Types of Cataloged Procedures ... 229
    Programmer-Written Cataloged Procedures ... 229
      Testing Programmer-Written Procedures ... 229
      Adding Procedures to the Procedure Library ... 229
    IBM-Supplied Cataloged Procedures ... 230
      Procedure Naming Conventions ... 231
      Step Names in Procedures ... 231
      Unit Names in Procedures ... 231
      Data Set Names in Procedures ... 231
      COBUC Procedure ... 231
      COBUCL Procedure ... 231
      COBULG Procedure ... 232
      COBUCLG Procedure ... 233
      COBUCG Procedure ... 233
  Modifying Existing Cataloged Procedures ... 234
    Overriding and Adding to Cataloged Procedures ... 234
      Overriding and Adding to EXEC Statements ... 234
      Examples of Overriding and Adding to EXEC Statements ... 234
    Testing a Procedure as an In-Stream Procedure ... 235
      Overriding and Adding to DD Statements ... 236
      Examples of Overriding and Adding to DD Statements ... 236
    Using the DDNAME Parameter ... 238
      Examples of Using the DDNAME Parameter ... 238

USING THE SORT FEATURE ... 241
  Sort DD Statements ... 241
    Sort Input DD Statements ... 241
    Sort Output DD Statements ... 241
    Sort Work DD Statements ... 241
    SORTWKnn Data Set Considerations ... 241
    Input DD Statement ... 242
    Output DD Statement ... 242
    SORTWKnn DD Statements ... 242
    Additional DD Statements ... 242
  Sharing Devices Between Tape Data Sets ... 243
  Using More Than One SORT Statement in a Job ... 243
  SORT Program Example ... 243
  Cataloging SORT DD Statements ... 243
  SORT Diagnostic Messages ... 244
  Linkage with the SORT/MERGE Program ... 244
    Completion Codes ... 244
  Locating Sort Record Fields ... 244
  Locating Last Record Released to Sort by an Input Procedure ... 245
  Sort/Merge Checkpoint/Restart ... 245
  Efficient Program Use ... 245
    Data Set Size ... 245
    Main Storage Requirements ... 245
    Defining Variable-Length Records ... 246
    Sorting Variable-Length Records ... 246

USE OF SEGMENTATION FEATURE ... 248
  Using the PERFORM Statement in a Segmented Program ... 249
  Operation ... 249
  Compiler Output ... 250
  Job Control Considerations ... 250

USING THE CHECKPOINT/RESTART FEATURE ... 256
  Taking a Checkpoint ... 256
    Checkpoint Methods ... 256
    DD Statement Formats ... 256
    Designing a Checkpoint ... 258
    Messages Generated During Checkpoint ... 258
  Restarting a Program ... 258
    RD Parameter ... 258
    Automatic Restart ... 259
    Deferred Restart ... 259
  Checkpoint/Restart Data Sets ... 260

MACHINE CONSIDERATIONS ... 262
  Minimum Machine Requirements for the COBOL Compiler ... 262
  Multiprogramming with a Variable Number of Tasks (MVT) ... 262
    REGION Parameter ... 262
    Intermediate Data Sets Under MVT ... 263
  Execution Time Considerations ... 264
  Sort Feature Considerations ... 264

APPENDIX A: SAMPLE PROGRAM OUTPUT ... 265

APPENDIX B: COBOL LIBRARY SUBROUTINES ... 277
  COBOL Library Conversion Subroutines ... 277
  COBOL Library Arithmetic Subroutines ... 277
  COBOL Library Input/Output Subroutines ... 277
    DISPLAY, TRACE, and EXHIBIT Subroutine (ILBODSP0) ... 277
    ACCEPT Subroutine (ILBOACP0) ... 277
    BSAM Subroutine (ILBOSAM0) ... 279
    Error Intercept Subroutine (ILBOERR0) ... 280
    Printer Overflow Subroutine (ILBOPTV0) ... 280
    Printer Spacing Subroutine (ILBOSPA0) ... 280
  Sort Feature Subroutine (ILBOSRT0) ... 280
  COBOL Library Subroutines ... 280
    COMPARE Subroutine (ILBOVCO0) ... 280
    MOVE Subroutine (ILBOVMO0 and ILBOVMO1) ... 280
    TRANSFORM Subroutine (ILBOVTR0) ... 280
    Class Test Subroutine (ILBOCLS0) ... 280
    Segmentation Subroutine (ILBOSGM0) ... 280
    SEARCH Subroutine (ILBOSCH0) ... 280
    STOP RUN Subroutine (ILBOSTP0) ... 281
    Date Subroutine (ILBODTE0) ... 281
    Compare Figurative Constant Greater Than One Character Subroutine (ILBOIVL0) ... 281
    MOVE Data-name, Literal, or Figurative Constant Subroutine (ILBOANE0) ... 281
    MOVE Figurative Constant of More Than One Character Subroutine (ILBOANF0) ... 281
    Checkpoint Subroutine (ILBOCKP0) ... 281

APPENDIX C: FIELDS OF THE DATA CONTROL BLOCK ... 283

APPENDIX D: COMPILER OPTIMIZATION ... 289
  Block Size for Compiler Data Sets ... 289
  How Buffer Space Is Allocated to Buffers ... 289

APPENDIX E: INVOCATION OF THE COBOL COMPILER AND COBOL COMPILED PROGRAMS ... 291
  Invoking the COBOL Compiler ... 291
  Invoking COBOL Compiled Programs ... 292

APPENDIX F: SOURCE PROGRAM SIZE CONSIDERATIONS ... 293
  Compiler Capacity ... 293
    Minimum Configuration Source Program Size ... 293
    Effective Storage Considerations ... 293
  Linkage Editor Capacity ... 294

APPENDIX G: INPUT/OUTPUT ERROR CONDITIONS ... 297
  Standard Sequential, Direct, and Relative File Processing Technique (Sequential Access) ... 297
  Direct and Relative File Processing Technique (Random Access) ... 297
  Indexed File Processing Technique


In order to allocate this additional space to a job step, another job step may have to be rolled out, i.e., temporarily transferred to secondary storage. When x is replaced with YES, each of the programmer's job steps can be rolled out; when x is replaced with NO, the job steps cannot be rolled out. When y is replaced with YES, each job step can cause rollout; when y is replaced with NO, the job steps cannot cause rollout. If additional main storage is required for the job's steps, YES must be specified for y. If this parameter is omitted, ROLL=(YES,NO) is assumed. ROLL parameters can also be coded in EXEC statements, but are superseded by a ROLL parameter coded in the JOB statement.
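As an illustrative sketch (the jobname and accounting fields are invented for this example), a JOB statement that allows its own steps to be rolled out but prevents them from causing the rollout of other steps might be coded:

```jcl
//EXAMPLE  JOB  729,SMITH,ROLL=(YES,NO)
```

Here x is YES (the steps can be rolled out) and y is NO (they cannot force another step out of main storage).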

The EXEC statement defines a job step and calls for its execution. It contains the following information:

1. The name of a load module or the name of a cataloged procedure that contains the name of a load module that is to be executed. The load module can be the COBOL compiler, the linkage editor, the loader, or any COBOL program in load module form.

2. Accounting information for this job step.

3. Conditions for bypassing the execution of this job step.

4. For priority scheduling systems: computing time for a job step or cataloged procedure step, and main storage region size.
Job Control Procedures

21

5. Compiler, linkage editor, or loader options chosen for the job step.

Figure 5 is the general format of the EXEC statement.
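For instance (the step name and the particular option list are invented for illustration), an EXEC statement that invokes the COBOL compiler and passes it options through the PARM keyword parameter might be coded:

```jcl
//COB  EXEC  PGM=IKFCBL00,PARM='DECK,NOLOAD'
```

The positional parameter PGM=IKFCBL00 names the load module to be executed; the keyword parameters supply the information listed in items 2 through 5 above.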

Identifying the Step (stepname)

The stepname identifies a job step within a job. It must satisfy the positional, length, and content requirements for a name field. The programmer must specify a stepname if later control statements refer to the step or if the step is going to be part of a cataloged procedure. Each stepname in a job or procedure must be unique.

POSITIONAL PARAMETERS

Identifying the Program (PGM) or Procedure (PROC)

The EXEC statement identifies the program to be executed in the job step with the PGM parameter. To specify the COBOL compiler, code the positional parameter in the first position of the operand field of the EXEC statement.

r-----------------------------------------,
|PGM=IKFCBL00                             |
L-----------------------------------------J

It indicates that the COBOL compiler is the processing program to be executed in the job step.

To specify the linkage editor, code the positional parameter in the first position of the operand field of the EXEC statement.

r-----------------------------------------,
|PGM=IEWL                                 |
L-----------------------------------------J

This indicates that the linkage editor is the processing program to be executed in the job step.

The PGM parameter depends upon the type of library in which the program resides. If the job step uses a cataloged procedure, the EXEC statement identifies it with the PROC parameter, in place of the PGM parameter.

1. Temporary libraries are temporary partitioned data sets created to store a program until it is used in a later job step of the same job. This type of library is particularly useful for storing the program output of a linkage editor run until it is executed in a later job step. To execute a program from a temporary library, code the positional parameter in the first position of the operand field of the EXEC statement.

r-----------------------------------------,
|PGM=*.stepname.ddname                    |
L-----------------------------------------J

The asterisk (*) indicates the current job step. Replace the terms stepname and ddname with the names of the job step and the DD statement, respectively, in which the temporary library is created.

If the temporary library is created in a cataloged procedure step, in order to call it in a later job step outside the procedure, give both the name of the job step that calls the procedure and the procedure stepname by coding the positional parameter in the first position of the operand field of the EXEC statement.

r-----------------------------------------,
|PGM=*.stepname.procstepname.ddname       |
L-----------------------------------------J

2. The system library is a partitioned data set named SYS1.LINKLIB that contains nonresident control program routines and processor programs. To execute a program that resides in the system library, code the positional parameter in the first position of the operand field.

r-----------------------------------------,
|PGM=progname                             |
L-----------------------------------------J

Replace the term progname with the member name or alias associated with this program. This same parameter can be used to execute a program that resides in a private library. Private libraries are made available to a job with a special DD statement (see "Additional DD Statement Facilities").

3. Instead of executing a particular program, a job step may use a cataloged procedure. A cataloged procedure can contain control statements for several steps, each of which executes a particular program. Cataloged procedures are members of a library named SYS1.PROCLIB. To request a cataloged procedure, code the positional parameter in the first position of the operand field of the EXEC statement.

r-----------------------------------------,
|PROC=procname                            |
L-----------------------------------------J

Replace the term procname with the unqualified name of the cataloged procedure (see "Using the DD Statement" for a discussion of qualified names).

Notes:
• If the information specified is normally delimited by parentheses, but contains blanks, parentheses, or equal signs, it must be delimited by single quotation marks instead of parentheses.
• A procedure may be tested before it is placed in the procedure library by converting it into an in-stream procedure and placing it within the job step itself. In-stream procedures are discussed in the section "Testing a Procedure as an In-Stream Procedure" in the chapter "Using the Cataloged Procedures."
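To illustrate case 1 above (the data set, step, and member names are invented for this sketch), a linkage editor step can store its output load module in a temporary library, and a later step can execute that module by referring back to the SYSLMOD DD statement of the earlier step:

```jcl
//LKED    EXEC  PGM=IEWL
//SYSLMOD  DD   DSNAME=&&GODATA(RUN),DISP=(NEW,PASS),
//              UNIT=SYSDA,SPACE=(TRK,(10,5,1))
//*  ... other linkage editor DD statements ...
//GO      EXEC  PGM=*.LKED.SYSLMOD
```

The temporary library &&GODATA exists only for the duration of the job; PGM=*.LKED.SYSLMOD tells the system to execute the member written by the step named LKED.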

Name           Operation   Operand
//[stepname]1  EXEC        Positional Parameters (code one of the following):
                             PGM=progname
                             PGM=*.stepname.ddname
                             PGM=*.stepname.procstep.ddname
                             PROC=procname
                             procname
                           Keyword Parameters:
                             [{ACCT|ACCT.procstep}2=(accounting-information)3,4,5]
                             [{COND|COND.procstep}2=((code,operator[,stepname[.procstep]])...)6,7]
                             [{PARM|PARM.procstep}2=(option[,option]...)3,8,9]
                             [{TIME|TIME.procstep}2=(minutes,seconds)]
                             [{REGION|REGION.procstep}2=nnnnnxK[,nnnnnyK]]
                             [{ROLL|ROLL.procstep}2=(x,y)]
                             [{RD|RD.procstep}2=request]
                             [{DPRTY|DPRTY.procstep}2=(value1,value2)]

1 Stepname is required when information from this control statement is referred to in a later job step.
2 If this format is selected, it may be repeated in the EXEC statement once for each step in the cataloged procedure.
3 If the information specified contains any special characters except hyphens, it must be delimited by single quotation marks instead of parentheses.
4 If accounting-information contains any special characters except hyphens, it must be delimited by single quotation marks.
5 The maximum number of characters allowed between the delimiting quotation marks or parentheses is 142.
6 The maximum number of repetitions allowed is 7.
7 If only one test is specified, the outer pair of parentheses may be omitted.
8 If the only special character contained in the value is a comma, the value may be enclosed in quotation marks.
9 The maximum number of characters allowed between the delimiting quotation marks or parentheses is 100.

Figure 5. EXEC statement

the preceding job steps abnormally
terminated.

KEYWORD PARAMETERS
Specifying Job step Accounting Information
(ACCT)
When executing a multistep job, or a job
that uses cataloged procedures, the
programmer can use this parameter so that
jobsteps are charged to separate accounting
areas. To specify items of accounting
information to the installation accounting
routines for this job step, code the
keyword parameter in the operand field of
the EXEC statement.

r-----------------------------------------,

Il _________________________________________
ACCT=(accounting information)
JI

Replace the term "accounting information"
with one or more subparameters separated by
commas. If both the JOB and EXEC
statements contain accounting information,
the installation accounting routines decide
how the accounting information shall be
used for the job step.
To pass accounting information to a step
within a cataloged procedure, code the
keyword parameter in the operand field of
the EXEC statement.

    ACCT.procstep=(accounting information)

Procstep is the name of the step in the cataloged procedure. This specification overrides the ACCT parameter in the named procedure step, if one is present.

Specifying Conditions for Bypassing or Executing the Job Step (COND)

The execution of certain job steps is based on the success or failure of preceding steps. The COND parameter provides the means to:

• Make as many as eight tests on return codes issued by preceding job steps or cataloged procedure steps that completed normally. If any one of the tests is satisfied, the job step is bypassed.

• Specify that the job step is to be executed even if one or more of the preceding job steps abnormally terminated, or only if one or more of the preceding job steps abnormally terminated.

To specify conditions for bypassing a job step, code the keyword parameter in the operand field of the EXEC statement.

    COND=((code,operator[,stepname]),...,(code,operator[,stepname]))

The term "code" may be replaced by a decimal numeral to be compared with the job step return code. The return codes for both the compiler and the linkage editor are:

00  Normal conclusion

04  Warning messages have been listed, but program is executable.

08  Error messages have been listed; execution may fail.

12  Severe errors have occurred; execution is impossible.

16  Terminal errors have occurred; execution of the processor has been terminated.

The compiler issues a return code of 16 when any of the following is detected:

• BASIS member-name is specified and no member of that name is found
• COPY member-name is specified and no SYSLIB statement is included
• Required device not available
• Not enough core storage is available for the tables required for compilation
• A table exceeded its maximum size
• A permanent input/output error has been encountered on an external device

The return codes have a correlation with the severity level of the error messages. With linkage editor messages, for example, the rightmost digit of the message number states the severity level; this number is multiplied by 4 to get the appropriate return code. With the COBOL compiler, 04, 08, 12, and 16 correspond to the severity flags W, C, E, and D, respectively.
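For example, a link-edit step can be bypassed whenever the compile step ends with a return code of 8 or higher (C-level errors or worse). A minimal sketch follows; the step names and the compiler program name are illustrative, and IEWL names the linkage editor:

```jcl
//COB  EXEC PGM=IKFCBL00,PARM=(LOAD)
//LKED EXEC PGM=IEWL,COND=(8,LE,COB)
```

The test reads "if 8 is less than or equal to the return code issued by COB, bypass this step."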
Job Control Procedures

The term "operator" specifies the test to be made of the relation between the programmer-specified code and the job step return code. Replace the term operator with one of the following:

GT  (greater than)
GE  (greater than or equal to)
EQ  (equal to)
LT  (less than)
LE  (less than or equal to)
NE  (not equal to)

The term "stepname" identifies the previously executed job step that issued the return code to be tested; replace it with the name of that preceding job step. If stepname is not specified, code is compared to the return codes issued by all preceding steps in the job.
If the programmer codes

    COND=((4,GT,STEP1),(8,EQ,STEP2))

the statement is interpreted as: "If 4 is greater than the return code issued by STEP1, or if STEP2 issues a return code of 8, this job step is bypassed."

Notes:
• If only one test is made, the programmer need not code the outer parentheses, e.g., COND=(12,EQ,STEPX).
• If each return code test is made on all preceding steps, the programmer need not code the term stepname, e.g., COND=((4,GT),(8,EQ)).
• When the return code is issued by a cataloged procedure step, the programmer may want to test it in a later job step outside of the procedure. In order to test it, give both the name of the job step that calls the procedure and the procedure stepname, e.g., COND=((code,operator,stepname.procstep),...).

Abnormal termination of a job step normally causes subsequent steps to be bypassed and the job to be terminated. By means of the COND parameter, however, the programmer can specify execution of a job step after one or more preceding job steps have abnormally terminated. For the COND parameter, a job step is considered to terminate abnormally if a failure occurs within the user's program once it has received control. (If a job step is abnormally terminated during scheduling because of failures such as job control language errors or inability to allocate space, the remainder of the job steps are bypassed, whether or not a condition for executing a later job step was specified.)

To specify the condition for executing a job step, code the keyword parameter in the operand field of the EXEC statement.

    COND=({EVEN|ONLY})

The EVEN and ONLY subparameters are mutually exclusive. The subparameter selected can be coded in combination with up to seven return code tests, and can appear before, between, or after return code tests, e.g.,

    COND=(EVEN,(4,GT,STEP3))
    COND=((8,GE,STEP1),(16,GE),ONLY)

The EVEN subparameter causes the step to be executed even when one or more of the preceding job steps have abnormally terminated. However, if any return code tests specified in this job step are satisfied, the step is bypassed. The ONLY subparameter causes the step to be executed only when one or more of the preceding job steps have abnormally terminated. However, if any return code tests specified in this job step are satisfied, the step is bypassed.

When a job step abnormally terminates, the COND parameter on the EXEC statement of the next step is scanned for the EVEN or ONLY subparameter. If neither is specified, the job step is bypassed and the EXEC statement of the next step is scanned for the EVEN or ONLY subparameter. If EVEN or ONLY is specified, return code tests, if any, are made on all previous steps specified that executed and did not abnormally terminate. If any one of these tests is satisfied, the step is bypassed. Otherwise, the job step is executed.

If the programmer codes

    COND=EVEN

the statement is interpreted as: "Execute this step even if one or more of the preceding steps abnormally terminated during execution." If COND=ONLY is coded, it is interpreted as: "Execute this step only if one or more of the preceding steps abnormally terminated during execution."

If the COND parameter is omitted, no return code tests are made and the step will be bypassed when any of the preceding job steps abnormally terminate.

Notes:
• When a job step that contains the EVEN or ONLY subparameter refers to a data set that was to be created or cataloged in a preceding step, the data set will not exist if the step creating it was bypassed.
• When a job step that contains the EVEN or ONLY subparameter refers to a data set that was to be created or cataloged in a preceding step, the data set may be incomplete if the step creating it abnormally terminated.
• When the job step uses a cataloged procedure, the programmer can establish return code tests and the EVEN or ONLY subparameter for a procedure step by including, as part of the keyword COND, the procedure stepname, e.g., COND.procstepname. This specification overrides the COND parameter in the named procedure step if one is present. The programmer can code as many parameters of this form as there are steps in the cataloged procedure.
• To establish one set of return code tests and the EVEN or ONLY subparameter for all steps in a procedure, code the COND parameter without a procedure stepname. This specification replaces all COND parameters in the procedure if any are present.

Job steps following a step that abnormally terminates are normally bypassed. If a job step is to be executed even if a preceding step abnormally terminates, specify this condition, along with up to seven return code tests:

    //STEP3 EXEC PGM=CONVERT,                                      X
    //           COND=(EVEN,(4,EQ,STEP1)),...

Here, the step is executed if the return code test is not satisfied, even if one or more of the preceding job steps abnormally terminated. If a job step is to execute only when one or more of the preceding steps abnormally terminate, replace EVEN in the above example with ONLY.

If the EXEC statement calls a cataloged procedure, the programmer can establish return code tests and the EVEN or ONLY subparameter for a procedure step by coding the COND parameter followed by the name of the procedure step to which it applies:

    //STEP4 EXEC ANALYSIS,                                         X
    //           COND.REDUCE=((16,EQ,STEP4.LOOKUP),ONLY),...

Here, the cataloged procedure step named REDUCE will be executed only if a preceding job step has abnormally terminated and the procedure step named LOOKUP does not issue a return code of 16. The programmer can code as many COND parameters of this type as there are steps in the procedure.

Passing Information to the Processing Program (PARM)

For processing programs that require control information at the time they are executed, the EXEC statement provides the PARM parameter. To pass information to the program, code the keyword parameter in the operand field.

    PARM=(option[,option]...)

This will pass options to the compiler, linkage editor, loader, or object program when any one of them is called by the PGM parameter in the EXEC statement, or to the first step in a cataloged procedure.

To pass options to a compiler, the linkage editor, loader, or the execution step within the named cataloged procedure step, code the keyword parameter in the operand field.

    PARM.procstep=(option[,option]...)

Any PARM parameter already appearing in the procedure step is deleted, and the PARM parameter that is passed to the procedure step is inserted.

A maximum of 100 characters may be written between the parentheses or single quotation marks that enclose the list of options. The COBOL compiler selects the valid options of the PARM field for processing by looking for three significant characters of each key option word. When the keyword is identified, it is checked for the presence or absence of the prefix NO, as appropriate. The programmer can make the most efficient use of the option field by using the significant characters instead of the entire option. Table 2 lists the significant characters for each option (see "Options for the Compiler" for an explanation of each).
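Because only the significant characters are inspected, either of the following EXEC statements (the step name, compiler program name, and options are chosen for illustration) requests the same processing:

```jcl
//COB EXEC PGM=IKFCBL00,PARM=(NODECK,LOAD,SUPMAP)
//COB EXEC PGM=IKFCBL00,PARM=(NODEC,LOA,SUP)
```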
Table 2. Significant Characters for Various Options

    Option      Significant Characters
    LINECNT     CNT
    SEQ         SEQ
    FLAGE(W)    LAG, LAGW
    SIZE        SIZ
    BUF         BUF
    SOURCE      SOU
    DECK        DEC
    LOAD        LOA
    SPACE       ACE
    DMAP        DMA
    PMAP        PMA
    SUPMAP      SUP
    CLIST       CLI
    TRUNC       TRU
    APOST       APO
    QUOTE       QUO
    XREF        XRE
    LIB         LIB
    VERB        VER
    ZWB         ZWB

Note: The SIZE and BUF compile-time parameters can be given in multiples of K, where K = 1024 decimal bytes. For example, 80K is 81,920 decimal bytes.

Options for the Compiler
The IBM-supplied default options
indicated by an underscore in the following
discussion can be changed within each
installation at system generation time.
The format of the PARM parameter is
illustrated in Figure 6.
Note:
• When a subparameter contains an equal
sign, the entire information field of
the PARM parameter must be enclosed by
single quotation marks instead of
parentheses, e.g.,
PARM='SIZE=160000,PMAP'.

SOURCE
NOSOURCE
indicates whether or not the source
module is to be listed.
CLIST
NOCLIST
indicates whether or not a condensed
listing is to be produced.
If
specified, the procedure portion of
the listing will contain generated
card numbers, verb references, and the
location of the first generated
instruction for each verb. CLIST and
PMAP are mutually exclusive options.
Note:
In nonsegmented programs, verbs are
listed in source order.
In segmented
programs, verbs are listed in source order
within each segment, with the root segment
last.
If the VERB option is specified, the
verb-name is printed out. Otherwise, only the verb number (VERB1, VERB2, and so on) is printed out.
DMAP
NODMAP
indicates whether or not a glossary is to be listed.
PMAP
NOPMAP
indicates whether or not register
assignments, global tables, literal
pools and an assembler language
expansion of the source modules are to
be listed.
CLIST and PMAP are
mutually exclusive options.
SIZE=yyyyyyy
indicates the amount of main storage, in bytes, available for compilation (see "Machine Considerations").

BUF=yyyyyy
indicates the amount of main storage to be allocated to buffers. If both SIZE and BUF are specified, the amount allocated to buffers is included in the amount of main storage available for compilation (see "Appendix D: Compiler Optimization" for information about how buffer size is determined).

LIB
NOLIB
indicates that BASIS and/or COPY statements are in the source program. If either COPY or BASIS is present, LIB must be in effect. If COPY and/or BASIS statements are not present, use of the NOLIB option yields more efficient compiler processing.

VERB
NOVERB
indicates whether procedure-names and verb-names are to be listed with the associated code on the object-program listing. VERB has meaning only if PMAP or CLIST is in effect.

LOAD
NOLOAD
indicates whether or not the object module is to be placed on a mass storage device or a tape volume so that the module can be used as input to the linkage editor. If the LOAD option is used, a SYSLIN DD statement must be specified.

DECK
NODECK
indicates whether or not the object module is to be punched. If the DECK option is used, a SYSPUNCH DD statement must be specified.

SEQ
NOSEQ
indicates whether or not the compiler
is to check the sequence of the source
module statements. If the statements
are not in sequence, a message is
printed.

LINECNT=nn
indicates the number of lines to be
printed on each page of the
compilation source card listing.
The
number specified by nn must be a
2-digit integer from 01 to 99.
If the
LINECNT option is omitted, 60 lines
are printed on each page of the output
listing. The first three lines of the
output listing are for the compiler
headings.
(For example, if nn=55 is
specified, then 52 lines are printed
on each page of the output listing.)

FLAGW
FLAGE
indicates the type of messages that
are to be listed for the compilation.
FLAGW indicates that all warning and
diagnostic messages are to be listed.
FLAGE indicates that all diagnostic
messages are to be listed, but the
warning messages are not to be listed.

SUPMAP
NOSUPMAP
indicates whether or not the object code listing, object module, and link edit decks are to be suppressed if an E-level message is generated by the compiler.

SPACE1
SPACE2
SPACE3
indicates the type of spacing that is
to be used on the source card listing
generated when SOURCE is specified.
SPACE1 specifies single spacing,
SPACE2 specifies double spacing, and
SPACE3 specifies triple spacing.
TRUNC
NOTRUNC
is an option that applies only to
COMPUTATIONAL receiving fields in MOVE
statements and arithmetic expressions.
If TRUNC is specified, extra code is
generated to truncate the final
intermediate result of the arithmetic
expression, or the sending field in
the MOVE statement, to the number of
digits specified in the PICTURE clause
of the COMPUTATIONAL receiving field.
If NOTRUNC is specified, the compiler
assumes that the data being
manipulated conforms to PICTURE and
USAGE specifications. The compiler
then generates code to manipulate the
data based on the size of the field in
core (halfword, etc.).
TRUNC conforms
to the American National Standard,
while NOTRUNC leads to more efficient
processing. This will occasionally
cause dissimilar results for various
sending fields because of the
different code generated to perform
the operation.
QUOTE
APOST
indicates to the compiler that either the double quote (") or the apostrophe (') is acceptable as the character to delineate literals and to use that character in the generation of figurative constants.
XREF
NOXREF
indicates whether or not a cross-reference listing is produced.
If
XREF is specified, an unsorted listing
is produced with data-names and
procedure names appearing in two parts
in the order in which they are
referenced.
Use of the XREF option
considerably increases compile time.
NOXREF will suppress any
cross-reference listing.
ZWB
NOZWB
indicates whether or not the compiler
generates code to strip the sign from
a signed external decimal field when
comparing this field to an
alphanumeric field.
If ZWB is specified, the signed external decimal field is moved to an intermediate field, in which its sign is removed, before it is compared to the alphanumeric field.
Note: The default option cannot be
changed at system generation time.

For examples of what the SOURCE, PMAP, DMAP, and SEQ options produce, see "Output."

Options for the Linkage Editor

MAP
indicates that a map of the load module is to be listed. If MAP is specified, XREF cannot be specified, but both can be omitted.

XREF
indicates that a cross-reference list and a module map are to be listed. If XREF is specified, MAP cannot be specified.

LIST
indicates that any linkage editor control statements associated with the job step are to be listed.

OVLY
indicates that the load module is to be in the format of an overlay structure. This option is required when the COBOL Segmentation feature is used.

The format of the PARM parameter is illustrated in Figure 6. For examples of what the MAP, XREF, and LIST options produce, see "Output." Linkage editor control statements and overlay structures are explained in "Calling and Called Programs." There are other PARM options for linkage editor processing that describe additional processing options and special attributes of the load module (see the publication IBM System/360 Operating System: Linkage Editor and Loader).

Options for the Loader

MAP
NOMAP
indicates whether or not a map of the loaded module is to be produced that lists external names and their absolute addresses on the SYSPRINT data set. If the SYSPRINT DD statement is not used in the input deck, this option is ignored. An example of a module map is shown in "Output."

RES
NORES
indicates whether or not an automatic search of the link pack area queue is to be made. This search is always made after processing the primary input (SYSLIN), and before searching the SYSLIB data set. When the RES option is specified, the CALL option is automatically set.

CALL
NOCALL (NCAL)
indicates whether or not an automatic search of the SYSLIB data set is to be made. If the SYSLIB DD statement is not used in the input deck, this option is ignored. The NOCALL option causes an automatic NORES.
Compiler:

  {PARM|PARM.procstep}=([SIZE=yyyyyyy][,BUF=yyyyyy][,SOURCE|,NOSOURCE]
        [,DMAP|,NODMAP][,PMAP|,NOPMAP][,SUPMAP|,NOSUPMAP][,LOAD|,NOLOAD]
        [,DECK|,NODECK][,SEQ|,NOSEQ][,TRUNC|,NOTRUNC][,CLIST|,NOCLIST]
        [,FLAGW|,FLAGE][,SPACE1|,SPACE2|,SPACE3][,LIB|,NOLIB]
        [,VERB|,NOVERB][,ZWB|,NOZWB][,LINECNT=nn][,XREF|,NOXREF]
        [,QUOTE|,APOST])  (1,2,3)

Linkage Editor:

  {PARM|PARM.procstep}=([{MAP|XREF}][,LIST][,LET][,OVLY])

Loader:

  {PARM|PARM.procstep}=([MAP|NOMAP][,RES|,NORES][,CALL|,NOCALL]
        [,LET|,NOLET][,SIZE=100K|,SIZE=size][,EP=name][,PRINT|,NOPRINT])

1 If the information specified contains any special characters, it must be delimited by single quotation marks instead of parentheses.
2 If the only special character contained in the value is a comma, the value may be enclosed in parentheses or quotation marks.
3 The maximum number of characters allowed between the delimiting quotation marks or parentheses is 100.

Figure 6. Compiler, Linkage Editor, and Loader PARM options

LET
NOLET
indicates whether or not the loader will try to execute the object program when a severity level 2 error condition is found.

SIZE=100K
SIZE=size
specifies the size, in bytes, of
dynamic main storage that can be used
by the loader. This storage must be
large enough to accommodate the object
program.

EP=name
specifies the external name to be
assigned as the entry point of the
loaded program.
PRINT
NOPRINT
indicates whether or not diagnostic
messages are to be produced on the
SYSLOUT data set.
The format of the PARM parameter is
illustrated in Figure 6. The default
options, indicated by an underscore, can be
changed at system generation with the
LOADER macro instruction.
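A loader step that combines several of these options might be coded as follows (the step name and the entry-point name BEGIN are hypothetical; single quotation marks are required because the SIZE and EP subparameters contain equal signs):

```jcl
//GO EXEC PGM=LOADER,PARM='MAP,NOCALL,SIZE=120K,EP=BEGIN'
```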

Requesting Restart for a Job Step (RD)
The restart facilities can be used in
order to minimize the time lost in
reprocessing a job that abnormally
terminates.
These facilities permit the
automatic restart of jobs that were
abnormally terminated during execution.
The programmer uses this parameter to
tell the operating system:
(1) whether or
not to take checkpoints during execution of
a program, and (2) whether or not to
restart a program that has been
interrupted.
A checkpoint is taken by periodically
recording the contents of storage and
registers during execution of a program.
The RERUN clause in the COBOL language
facilitates taking checkpoint readings.
Checkpoints are recorded onto a checkpoint
data set.
Execution of a job can be automatically
restarted at the beginning of a job step
that abnormally terminated (step restart)
or within the step (checkpoint restart).
In order for checkpoint restart to occur, a
checkpoint must have been taken in the
processing program prior to abnormal
termination. The RD parameter specifies
that step restart can occur or that the
action of the CHKPT macro instruction is to
be suppressed.
To request that step restart be
permitted or to request that the action of
the CHKPT macro instruction be suppressed
in a particular step, code the keyword
parameter in the operand field of the EXEC
statement.

    RD=request

Replace the word "request" with:

R   -- to permit automatic step restart. The programmer must specify at least one RERUN clause in order to take checkpoints.

NC  -- to suppress the action of the CHKPT macro instruction and to prevent automatic restart. No checkpoints are taken; no RERUN clause in the COBOL program is necessary.

NR  -- to request that the CHKPT macro instruction be allowed to establish a checkpoint, but to prevent automatic restart. The programmer must specify at least one RERUN clause in order to take checkpoints.

RNC -- to permit step restart and to suppress the action of the CHKPT macro instruction. No checkpoints are taken; no RERUN clause in the COBOL program is necessary.
Each request is described in greater detail
in the following paragraphs.
RD=R: If the processing programs used by
this step do not include a RERUN statement,
RD=R allows execution to be resumed at the
beginning of this step if it abnormally
terminates. If any of these programs do
include one or more CHKPT macro
instructions (through the use of the RERUN
clause), step restart can occur if this
step abnormally terminates before execution
of a CHKPT macro instruction; thereafter,
checkpoint restart can occur.
RD=NC or RD=RNC: RD=NC or RD=RNC should be specified to suppress the action of all
CHKPT macro instructions included in the
programs used by this step. When RD=NC is
specified, neither step restart nor
checkpoint restart can occur. When RD=RNC
is specified, step restart can occur.
RD=NR: RD=NR permits a CHKPT macro
instruction to establish a checkpoint, but
does not permit automatic restarts.
However, a resubmitted job could have
execution start at a specific checkpoint.
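For example, the following EXEC statement (the step and program names are hypothetical) lets CHKPT checkpoints be written but suppresses automatic restart, so that a restart could later be requested on a resubmitted job:

```jcl
//STEP2 EXEC PGM=PAYROLL,RD=NR
```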
Before automatic step restart occurs,
all data sets in the restart step with a
status of OLD or MOD, and all data sets
being passed to steps following the restart
step, are kept. All data sets in the
restart step with a status of NEW are
deleted.
Before automatic checkpoint
restart occurs, all data sets currently in
use by the job are kept.
If the RD parameter is omitted and no
CHKPT macro instructions are executed,
automatic restart cannot occur. If the RD
parameter is omitted but one or more CHKPT
macro instructions are executed, automatic
checkpoint restart can occur.

• If the RD parameter is specified on the
JOB statement, RD parameters on the
job's EXEC statements are ignored.
• When using a system with MVT or MFT, restart can occur only if MSGLEVEL=1 is coded on the JOB statement.

• If step restart is requested for this
step, assign the step a unique step
name.
• When this job step uses a cataloged
procedure, make restart request for a
single procedure step by including, as
part of the RD parameter, the procedure
stepname, i.e., RD.procstepname. This
specification overrides the RD
parameter in the named procedure step
if one is present.
Code as many
parameters of this form as there are
steps in the cataloged procedure.
• To specify a restart request for an
entire cataloged procedure, code the RD
parameter without a procedure stepname.
This specification overrides all RD
parameters in the procedure if any are
present.
• If no RERUN clause is specified in the
user's program, no checkpoints are
written, regardless of the disposition
of the RD parameter.
Reference:
• For detailed information on the
checkpoint/restart facilities, refer to
the publication IBM System/360
Operating System:
Supervisor Services.

Priority Scheduling EXEC Parameters

Establishing a Dispatching Priority (DPRTY)
(MVT only)

The DPRTY parameter allows the programmer to assign to a job step a dispatching priority different from the priority of the job. The dispatching priority determines in what sequence tasks use main storage and computing time. To assign a dispatching priority to a job step, code the keyword parameter in the operand field of the EXEC statement.

    DPRTY=(value 1,value 2)

Both "value 1" and "value 2" should be replaced with a number from 0 through 15. "Value 1" represents an internal priority value. "Value 2" added to "value 1" represents the dispatching priority. The higher numbers represent higher priorities. A default value of 0 is assumed if no number is assigned to "value 1." A default value of 11 is assumed if no number is assigned to "value 2."

Notes:
• Whenever possible, avoid assigning a number of 15 to "value 1." This number is used for certain system tasks.
• If "value 1" is omitted, the comma must be coded before "value 2" to indicate the absence of "value 1," e.g., DPRTY=(,14).
• If "value 2" is omitted, the parentheses need not be coded, e.g., DPRTY=12.
• On an MVT system with time-slicing facilities, the DPRTY parameter can be used to make a job step part of a group of job steps to be time-sliced. The priorities of the time-sliced groups are selected at system generation. To cause the job step to be time-sliced, assign to "value 1" a number that corresponds to a priority number selected for time-slicing. "Value 2" is either omitted or assigned a value of 11.
• When the step uses a cataloged procedure, a dispatching priority can be assigned to a single procedure step by including the procedure step name in the DPRTY parameter, i.e., DPRTY.procstepname=(value 1,value 2). This parameter may be used for each step in the cataloged procedure.
• To assign a single dispatching priority to an entire cataloged procedure, code the DPRTY parameter without a procedure step name. This specification overrides all DPRTY parameters in the procedure if there are any.
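For example (the step and program names are hypothetical), the first statement below assigns an internal priority value of 13 and takes the default "value 2" of 11; the second assigns both values explicitly:

```jcl
//SORT1  EXEC PGM=UPDATE,DPRTY=13
//MERGE1 EXEC PGM=UPDATE,DPRTY=(13,9)
```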

Setting Job Step Time Limits (TIME)

To assign a limit to the computing time used by a single job step, a cataloged procedure, or a cataloged procedure step, code the keyword parameter in the operand field of the EXEC statement.

    TIME=(minutes,seconds)

Such an assignment is useful in a
multiprogramming environment where more
than one job has access to the computing
system.
Minutes and seconds represent the
maximum number of minutes and seconds
allotted for execution of the job step.

Notes:
• If the job step requires use of the system for 24 hours (1440 minutes) or longer, the programmer should specify TIME=1440. Using this number suppresses timing. The number of seconds cannot exceed 59.
• If the time limit is given in minutes
only, the parentheses need not be
coded; e.g., TIME=5.
• If the time limit is given in seconds,
the comma must be coded to indicate the
absence of minutes; e.g., TIME=(,45).
• When the job step uses a cataloged
procedure, a time limit for a single
procedure step can be set by qualifying
the keyword TIME with the procedure
step name; i.e., TIME.procstep=
(minutes, seconds).
This specification
overrides the TIME parameter in the
named procedure step if one is present.
As many parameters of this form can be
coded as there are steps in the
cataloged procedure.
• To set a time limit for an entire
procedure, the TIME keyword is left
unqualified. This specification
overrides all TIME parameters in the
procedure if any are present.
• If this parameter is omitted, the
standard job step time limit is
assigned.
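For example (the step names are hypothetical, and COBUCLG stands for a cataloged procedure containing a link-edit step named LKED), the first statement below limits a step to 2 minutes 30 seconds; the second limits only the procedure step LKED to 1 minute:

```jcl
//STEP1 EXEC PGM=UPDATE,TIME=(2,30)
//STEP2 EXEC COBUCLG,TIME.LKED=1
```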

Specifying Main Storage Requirements for a
Job Step (REGION)
(MVT only)
The REGION parameter permits the
programmer to specify the size of the main
storage region to be allocated to the
associated job step. The REGION parameter
specifies:
• The maximum amount of main storage to
be allocated to the job. This amount
must include the size of those
components required by the user's
program that are not resident in main
storage.
• The amount of main storage to be
allocated to the job, and the storage
hierarchy or hierarchies in which the
space is to be allocated. This request
should be made only if main storage
hierarchy support has been specified
during system generation.
If an IBM
2361 Core Storage, Model 1 or 2, is
present in the system, processor
storage is referred to as hierarchy 0

and 2361 Core Storage is referred to as
hierarchy 1. If 2361 Core Storage is
not present but main storage hierarchy
support was specified in system
generation, a two-part region is
established in processor storage when a
region is defined to exist in two
hierarchies. The two parts are not
necessarily contiguous.
To specify a region size, code the
keyword parameter in the operand field of
the EXEC statement.

    REGION=(nnnnnxK[,nnnnnyK])

To request the maximum amount of main
storage required by the job, replace the
term "nnnnnx" with the maximum number of
contiguous 1024-byte areas allocated to the
job step, e.g., REGION=52K.
This number
can range from one to five digits and must
not exceed 16383.
To request a region size and the
hierarchy desired, the term nnnnnx is
replaced with the number of contiguous
1024-byte areas to be allocated to the job
in hierarchy 0; the term nnnnny is replaced
with the number of contiguous 1024-byte
areas to be allocated in hierarchy 1, e.g.,
REGION=(60K,200K). When only processor
storage is used to include hierarchies 0
and 1, the combined values of nnnnnx and
nnnnny cannot exceed 16383.
If 2361 Core Storage is present, nnnnnx cannot exceed 16383 and, for a 2361 Model 1, nnnnny cannot exceed 1024, or 2048 for a 2361 Model 2. Each value specified should be an even number. (If an odd number is specified, the system treats it as the next higher even number.) If storage is requested only in hierarchy 1, a comma must be coded to indicate the absence of the first subparameter, e.g., REGION=(,200K).
If
storage is requested only in hierarchy 0,
or if hierarchy support is not present, the
parentheses need not be coded, e.g.,
REGION=70K.
If the REGION parameter is omitted or if
a region size smaller than the default
region size is requested, it is assumed
that the default value is that established
by the input reader procedure.

Notes:
• Region sizes for each job step can be
coded by specifying the REGION
parameter in the EXEC statement for
each job step. However, if a REGION
parameter is present in the JOB
statement, it overrides REGION
parameters in EXEC statements.
• If main storage hierarchy support is
not included but regions are requested
in both hierarchies, the region sizes
are combined and an attempt is made to
allocate a single region from processor
storage.
If a region is requested
entirely from hierarchy 1, an attempt
is made to allocate the region from
processor storage.
• For information on storage requirements
to be considered when specifying a
region size, see the publication IBM
System/360 Operating System: Storage
Estimates.
Specifying Additional Main Storage for a
Job Step (ROLL) (MVT only)
To allocate additional main storage to a
job step whose own region does not contain
any more available space, code the keyword
parameter in the operand field of the EXEC
statement.

r-----------------------------------------,
|  ROLL=(x,y)                             |
L-----------------------------------------J

In order to allocate this additional space
to a job step, another job step may have to
be rolled out, i.e., temporarily
transferred to secondary storage. When x
is replaced with YES, the job step can be
rolled out; when x is replaced with NO, the
job step cannot be rolled out. When y is
replaced with YES, the job step can cause
rollout; when y is replaced with NO, the
job step cannot cause rollout.
(If
additional main storage is required for the
job step, YES must be specified for y.)
If
this parameter is omitted, ROLL=(YES,NO) is
assumed.
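For example (the program name is hypothetical), the following EXEC statement permits the step both to be rolled out and to cause rollout of another job step:

    //STEP1  EXEC  PGM=MYPROG,ROLL=(YES,YES)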

• If the ROLL parameter is specified in
the JOB statement, the ROLL parameter
in the EXEC statements is ignored.
• When a job step uses a cataloged
procedure, it can be indicated whether
or not a single procedure step has the
ability to be rolled out and to cause
rollout of another job step. To
indicate this, the procedure stepname,
i.e., ROLL.procstepname, is included as
part of the ROLL parameter. This
specification overrides the ROLL
parameter in the named procedure step,
if one is present. As many parameters
of this form can be coded as there are
steps in the cataloged procedure.
• To indicate whether or not all of the
steps of a cataloged procedure have the
ability to be rolled out and to cause
rollout of other job steps, the ROLL
parameter can be coded without a
procedure stepname. This specification
overrides all ROLL parameters in the
procedure, if any are present.
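As an illustration (procedure and step names are hypothetical), the following statement overrides the ROLL parameter in procedure step GO only, permitting that step to cause rollout but not to be rolled out:

    //STEPA  EXEC  COBPROC,ROLL.GO=(NO,YES)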

DD STATEMENT
The data definition (DD) statement
identifies each data set that is to be used
in a job step, and it furnishes information
about the data set. The DD statement
specifies input/output facilities required
for using the data set; it also establishes
a logical relationship between the data set
and input/output references in the program
named in the EXEC statement for the job
step.
Figure 7 is a general format of the DD
statement.
Parameters used most frequently for
COBOL programs are discussed in detail.
The other parameters (e.g., SEP and AFF)
are mentioned briefly. For further
information, see the publication IBM
System/360 Operating System: Job Control
Language Reference.

Job Control Procedures


r------------------------T-----------T--------------------------------------------------,
|Name                    |Operation  |Operand                                           |
~------------------------+-----------+--------------------------------------------------~
|//ddname 1              |DD         |(see Operand 2)                                   |
|//procstep.ddname       |           |                                                  |
L------------------------+-----------+--------------------------------------------------J

r---------------------------------------------------------------------------------------,
|Operand 2                                                                              |
~---------------------------------------------------------------------------------------~
|Positional Parameters 3                                                                |
|  [* | DATA | DUMMY]                                                                   |
|Keyword Parameters 4 5                                                                 |
|  [DDNAME=ddname]                                                                      |
|  [{DSNAME | DSN}={dsname | dsname(element) | *.ddname | *.stepname.ddname |           |
|      *.stepname.procstep.ddname | &&name | &&name(element)}] 6 11                     |
|  [SEP=(subparameter-list) 7 | AFF=ddname] 10                                          |
|  [UNIT=(name[,{n | P}][,DEFER][,SEP=(list of up to 8 ddnames)]) 8 |                   |
|      UNIT=(AFF=ddname)]                                                               |
|  [SPACE=({TRK | CYL | average-record-length},                                         |
|      (primary-quantity[,secondary-quantity][,directory- or index-quantity])           |
|      [,RLSE][,MXIG | ,ALX | ,CONTIG][,ROUND])] 10 12                                  |
|  [SPACE=(ABSTR,(quantity,beginning-address[,directory- or index-quantity]))]          |
|  [SPLIT=(n,{CYL | average-record-length},                                             |
|      (primary-quantity[,secondary-quantity]))]                                        |
|  [SUBALLOC=({TRK | CYL | average-record-length},                                      |
|      (primary-quantity[,secondary-quantity][,directory-quantity]),                    |
|      {ddname | stepname.ddname | stepname.procstep.ddname})]                          |
L---------------------------------------------------------------------------------------J

Figure 7. The DD statement (Part 1 of 2)

r---------------------------------------------------------------------------------------,
|Operand 2 (continued)                                                                  |
~---------------------------------------------------------------------------------------~
|  [VOLUME=(subparameter-list) 6 9 | VOLUME=REF={dsname | *.ddname |                    |
|      *.stepname.ddname | *.stepname.procstep.ddname}]                                 |
|  [LABEL=([data-set-sequence-number][,{SL | NSL | SUL}][,PASSWORD]                     |
|      [,EXPDT=yyddd | ,RETPD=xxxx])]                                                   |
|  [DISP=([NEW | OLD | SHR | MOD]                                                       |
|      [,DELETE | ,KEEP | ,PASS | ,CATLG | ,UNCATLG]                                    |
|      [,DELETE | ,KEEP | ,CATLG | ,UNCATLG])]                                          |
|  [SYSOUT=classname]                                                                   |
|  [SYSOUT=(x[,program-name][,form-no.])]                                               |
~---------------------------------------------------------------------------------------~
| 1 The name field must be blank when concatenating data sets.                          |
| 2 All parameters are optional to allow a programmer flexibility in the use of the DD  |
|   statement; however, a DD statement with a blank operand field is meaningless.       |
| 3 If the positional parameter is specified, keyword parameters other than DCB cannot  |
|   be specified.                                                                       |
| 4 If subparameter-list consists of only one subparameter and no leading comma         |
|   (indicating the omission of a positional subparameter) is required, the delimiting  |
|   parentheses may be omitted.                                                         |
| 5 If subparameter-list is omitted, the entire parameter must be omitted.              |
| 6 See "User-Defined Files" for the applicable subparameters.                          |
| 7 See the publication IBM System/360 Operating System: Job Control Language           |
|   Reference.                                                                          |
| 8 If only name is specified, the delimiting parentheses may be omitted.               |
| 9 If only one volume-serial-number is specified, the delimiting parentheses may be    |
|   omitted.                                                                            |
|10 The SEP and AFF parameters should not be confused with the SEP and AFF              |
|   subparameters of the UNIT parameter.                                                |
|11 The value specified may contain special characters if the value is enclosed in      |
|   apostrophes. If the only special character used is the hyphen, the value need not   |
|   be enclosed in apostrophes. If DSNAME is a qualified name, it may contain periods   |
|   without being enclosed in apostrophes.                                              |
|12 The unit address may contain a slash, and the unit type number may contain a        |
|   hyphen, without being enclosed in apostrophes, e.g., UNIT=293/5, UNIT=2400-2.       |
L---------------------------------------------------------------------------------------J

Figure 7. The DD statement (Part 2 of 2)

Name Field
ddname (Identifying the DD Statement)
is used:
• To identify data sets defined by
this DD statement to the compiler or
linkage editor (see "Compiler Data
Set Requirements" and "Linkage
Editor Data Set Requirements").
• To relate the data sets defined in
this DD statement to a file
described in a COBOL source program
(see "User-Defined Files").
• To identify this DD statement to
other control statements in the
input stream.
procstep.ddname
is used to alter or add DD statements
in cataloged procedures. The step in
the cataloged procedure is identified
by procstep. The ddname identifies
either one of the following:
• A DD statement in the cataloged
procedure that is to be modified by
the DD statement in the input
stream.
• A DD statement that is to be added
to the DD statement in the procedure
step.
Operand Field

*

(Defining Data in an Input Stream)
indicates that data immediately
follows this DD statement in the input
stream. This parameter is used to
specify a source deck or data in the
input stream.
If the EXEC statement
specifies execution of a program, only
one data set may be placed in the
input stream. The end of the data set
must be indicated by a delimiter
statement. The data cannot contain //
or /* in the first two characters of
any record.
The DD * statement must
be the last DD statement of the job
step.
In MVT, for a step with a
single input stream data set, DD * and
a /* statement are not required.
The
system will supply both if missing.
The default DDNAME will be SYSIN.
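For example (the data set names are hypothetical), a source deck can be placed in the input stream as follows:

    //SYSIN    DD  *
       (source deck or data)
    /*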

DATA (Defining Data in an Input Stream)
also indicates a source deck or data
in the input stream. If the EXEC
statement specifies execution of a
program, only one data set may be
placed in the input stream.
The end
of the data set must be indicated by a
delimiter statement. The data cannot
contain /* in the first two characters
of any record.
The DD DATA statement
44

must be the last DD statement of the
job step. // may appear in the first
and second positions in the record,
for example, when the data consists of
control statements of a procedure that
is to be cataloged.
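For example (ddname hypothetical), control statements that themselves begin with // can be placed in the input stream as follows:

    //SYSIN    DD  DATA
       (data, which may contain // in the first two positions)
    /*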
DUMMY (Bypassing Input/Output Operations on
the Data Set)
allows the user's processing program
to operate without performing
input/output operations on the data
set. The DUMMY parameter is valid
only for sequential data sets to which
reference is made by the basic
sequential or queued sequential file
processing techniques.
If the DUMMY
parameter is specified, a read request
results in an end of data set exit. A
write request is recognized, but no
data is transmitted.
No device
allocation, external storage
allocation, or cataloging takes place
for dummy data sets.
Note: For a file defined in a COBOL source
program, this operand may not be specified
for a file opened OUTPUT. Any reference to
the record area of a file opened OUTPUT
with DD DUMMY will result in unpredictable
results.
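For example (ddname hypothetical), the following statement causes the first READ request against the associated input file to take the end-of-data-set exit, with no device or external storage allocation:

    //INFILE   DD  DUMMY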

In multiprogramming environments, data
in the input stream is temporarily
transferred to a direct-access device for
later high-speed retrieval.
Normally, the
reader procedure assigns a blocking factor
for the data when it is placed on the
direct-access device. The programmer may
assign his own values through use of the
BLKSIZE parameter of the DCB parameter.
He
may also indicate the number of buffers to
be assigned to transmitting the data,
through use of the BUFNO parameter.
For
example, he may assign the following:
DCB=(BLKSIZE=800,BUFNO=2)
If the programmer omits these parameters or
assigns values greater than the capacity of
the input reader, it is assumed that the
established default values for the reader
are in effect.
DDNAME Parameter

or records
(average-record-length, expressed as a
decimal number).
In addition, the
ABSTR subparameter indicates that the
allocated space is to begin at a
specific track address.
If the
specified tracks are already allocated
to another data set, they will not be
reallocated to this data set.

Note: For indexed data sets, only the
CYL or ABSTR subparameter is
permitted. When an indexed data set
is defined by more than one DD
statement, all must specify either CYL
or ABSTR; if some statements contain
CYL and others ABSTR, the job will be
abnormally terminated.
(primary-quantity[,secondary-quantityl
[,directory- or index-quantity])
specifies the amount of space to be
allocated for the data set. The
primary quantity indicates the number
of records, tracks, or cylinders to be
allocated when the job step begins.
For indexed data sets, this
subparameter specifies the number of
cylinders for the prime, overflow, or
index area (see "Execution Time Data
Set Requirements"). The secondary
quantity indicates how much additional
space is to be allocated each time
previously allocated space is
exhausted. This subparameter must not
be specified when defining an indexed
data set.
If a secondary quantity is
specified for a sequential data set,
the program may receive control when
additional space cannot be allocated
to write a record.
The directory
quantity is used when initially
creating a partitioned data set (PDS),

and it specifies the number of
256-byte records to be reserved for
the directory of the PDS. It can also
specify the number of cylinders to be
allocated for an index area embedded
within the prime area when a new
indexed data set is being defined (see
the publication IBM System/360
Operating System: Job Control
Language Reference).
Note: The directory contains the name
and the relative position, within the
data set, for each member of a
partitioned data set.
The name
requires 8 bytes, the location 4
bytes. Up to 62 additional bytes can
be used for additional information.
For a directory of a partitioned data
set that contains load modules, the
minimum directory requirement for each
member is 34 bytes.

RLSE
indicates that all unused external
storage assigned to this data set is
to be released when processing of the
data set is completed.
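For example (the quantities are hypothetical), the following requests ten primary cylinders, secondary extents of two cylinders, and release of unused space when the data set is closed:

    SPACE=(CYL,(10,2),RLSE)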
{MXIG  }
{ALX   }
{CONTIG}
qualifies the request for the space to
be allocated to the data set. MXIG
requests the largest single block of
storage that is greater than or equal
to the space requested in the primary
quantity. ALX requests the allocation
of additional tracks in the volume.
The operating system will allocate
tracks in up to five blocks of
storage, each block equal to or
greater than the primary quantity.
CONTIG requests that the space
indicated in the primary quantity be
contiguous.
If this subparameter is not
specified, or if any option cannot be
fulfilled, the operating system
attempts to assign contiguous space.
If there is not enough contiguous
space, up to five noncontiguous areas
are allocated.
ROUND
indicates that allocation of space for
the specified number of records is to
begin and end on a cylinder boundary.
It can be used only when average
record length is specified as the
first subparameter.
quantity
specifies the number of tracks to be
allocated. For an indexed data set,
this quantity must be equivalent to an
integral number of cylinders; it
specifies the space for the prime,
overflow, or index area (see
"Execution Time Data Set
Requirements").
beginning address
specifies the relative number of the
track desired, where the first track
of a volume is defined as 0. (Track 0
cannot be requested.) The number is
automatically converted to an address
based on the particular device
assigned. For an indexed data set
this number must indicate the
beginning of a cylinder.
directory quantity
defines the number of 256-byte records
to be allocated for the directory of a
new partitioned data set. It also
specifies the number of tracks to be
allocated for an index area embedded
within the prime area when a new
indexed data set is being defined. In
the latter case, the number of tracks
must be equivalent to an integral
number of cylinders (see the
publication IBM System/360 Operating
System: Job Control Language
Reference).

SPLIT Parameter (Allocating Mass Storage
Space)

is specified when other data sets in
the job step require space in the same
mass storage volume, and the user
wishes to minimize access-arm movement
by sharing cylinders with the other
data sets. The device is then said to
be operating in a split cylinder mode.
In this mode, two or more data sets
are stored so that portions of each
occupy tracks within every allocated
cylinder.
Note: SPLIT should not be used when
one of the data sets is an indexed
data set.
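For example (the values are hypothetical), the following requests that this data set use ten tracks of each shared cylinder, with a primary allocation of 20 cylinders and secondary extents of 5:

    SPLIT=(10,CYL,(20,5))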
SPLIT Subparameters:
n
indicates the number of tracks per
cylinder to be used for this data set
if CYL is specified. If the average
record length is specified, n is the
percentage of the tracks per cylinder
to be used for this data set.
{CYL}
{average-record-length}
indicates the units in which the space
requirements are expressed in the next
subparameter. The units may be
cylinders (CYL) or records
(average-record-length).
By specifying certain ddnames, the
programmer can request the operating system
to perform additional functions. The
operating system recognizes these
special-purpose ddnames:
• JOBLIB and STEPLIB to identify private
user libraries
• SYSABEND and SYSUDUMP to identify data
sets on which a dump may be written

JOBLIB AND STEPLIB DD STATEMENTS
The JOBLIB and STEPLIB DD statements are
used to concatenate a user's private
library with the system library
(SYS1.LINKLIB). Use of JOBLIB results in
the system library being combined with the
private library for the duration of a job;
use of STEPLIB, for the duration of a job
step. During execution, the library
indicated in these statements is scanned
for a module before the system library is
searched.
The JOBLIB DD statement must appear
immediately after the JOB statement, and
its operand field must contain at least the
DSNAME and DISP parameters. The STEPLIB
statement overrides the JOBLIB statement if
both are present in a job step.

SYSABEND AND SYSUDUMP DD STATEMENTS
The ddnames SYSABEND or SYSUDUMP
identify a data set on which an abnormal
termination dump may be written. The dump
is provided for job steps subject to
abnormal termination.
The SYSABEND DD statement is used when
the programmer wishes to include in his
dump the problem program storage area, the
system nucleus, and the trace table if the
trace table option had been requested at
system generation time.
The SYSUDUMP DD statement is used when
the programmer wishes to include only the
problem program storage area.
The programmer may route the dump
directly to an output writer by specifying
the SYSOUT parameter on the DD statement.
In a multiprogramming environment, the
programmer may also define the intermediate
direct-access device by specifying the UNIT
and SPACE parameters.
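As an illustration (the library name and output class are hypothetical), a private library and a dump data set might be specified as follows:

    //JOBLIB   DD  DSNAME=USERLIB,DISP=SHR
         ...
    //SYSUDUMP DD  SYSOUT=A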

PROC STATEMENT
The PROC statement may appear as the
first control statement in a cataloged
procedure and must appear as the first
control statement in an in-stream
procedure. The PROC statement must contain
the term PROC in its operation field. For
a cataloged procedure, the PROC statement
assigns default values to symbolic
parameters defined in the procedure; its
operand field must contain symbolic
parameters and their default values. The
PROC statement marks the beginning of an
in-stream procedure; its operand may
contain symbolic parameters and their
default values.

PEND STATEMENT
The PEND statement must appear as the
last control statement in an in-stream
procedure and marks the end of the
in-stream procedure. It must contain the
term PEND in the operation field. The PEND
statement is not used for cataloged
procedures. For further information about
in-stream procedures refer to the topic
"Testing a Procedure as an In-Stream
Procedure" in "Using the Cataloged
Procedures."

DELIMITER STATEMENT
The delimiter statement marks the end of
a data set in the input stream. The
identifying characters /* must be coded
into columns 1 and 2; the other fields are
left blank. Comments are coded as
necessary.
Note: When using a system with MFT or MVT,
the end of a data set need not be marked in
an input stream that is defined by a DD *
statement.

NULL STATEMENT
The null statement is used to mark the
end of certain jobs in an input stream. If
the last DD statement in a job defines data
in an input stream, the null statement
should be used to mark the end of the job
so that the card reader is effectively
closed. The identifying characters // are
coded into columns 1 and 2, and all
remaining columns are left blank.

COMMAND STATEMENT
The operator issues commands to the
system via the console or a command
statement in the input stream. Issuing
commands through the input stream should be
avoided, however, since commands are
executed as they are read and may not be
synchronized with execution of job steps.
Command statements must appear immediately
before a JOB statement, an EXEC statement,
a null statement, or another command
statement.
The command statement contains
identifying characters (//) in columns 1
and 2, a blank name field, a command, and,
in most cases, an operand field. The
operand field specifies the job name, unit
name, or other information being
considered.
Note: A command statement cannot be
continued; it must be coded on one card or
card image.

COMMENT STATEMENT
The comment statement is used to enter
any information considered helpful by the
programmer. It may be inserted anywhere in
the job control statement stream after the
JOB statement. (The comment statement
contains a slash in columns 1 and 2, and an
asterisk in column 3. The remainder of the
card contains comments.) Comments are
coded in columns 4 through 80, but a
comment may not be continued onto another
statement.
When the comment statement is printed on
an output listing, it is identified by the
appearance of asterisks in columns 1
through 3.

DATA SET REQUIREMENTS
COMPILER
Nine data sets may be defined for a
compilation job step: six of these (SYSUT1,
SYSUT2, SYSUT3, SYSUT4, SYSIN, and
SYSPRINT) are required. The other three
data sets (SYSLIN, SYSPUNCH, and SYSLIB)
are optional.


For compiler data sets other than
utility data sets, a logical record size
can be specified by using the LRECL and
BLKSIZE subparameters of the DCB parameter.
The values specified must be permissible
for the device on which the data set
resides. LRECL equals the logical record
size, and BLKSIZE equals LRECL multiplied
by B, where B is equal to the blocking
factor. If this information is not
specified in the DD statement, it is
assumed that the logical record sizes for
the unblocked data sets have the following
default values:
Unblocked          Default
Data Set           Value (bytes)
SYSIN              80
SYSLIN             80
SYSPUNCH           80
SYSLIB             80
SYSPRINT           121
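For example (the output class and blocking factor are hypothetical), a blocking factor of 5 for SYSPUNCH could be requested as follows, with BLKSIZE equal to LRECL multiplied by the blocking factor:

    //SYSPUNCH DD  SYSOUT=B,DCB=(LRECL=80,BLKSIZE=400)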

When using the SYSUT1, SYSUT2,
SYSUT3, SYSUT4, SYSPRINT, SYSPUNCH, or
SYSLIN data sets, the following should be
considered:
If the primary space allocate.
must not be written.
DISP
The first subparameter must be OLD.
The second subparameter cannot be
CATLG or UNCATLG (see "Cataloging
Files" above for more information on
cataloging indexed files).
Note: For further information about
indexed parameters, see "DD Statement
Requirements for Indexed Files" in
"Creating Indexed Files."

Only one DD statement is needed to
specify an existing file if all of the
areas are on one volume. The following is
an example of a DD statement that can be
used when processing a single-volume QISAM
file.

//ddname   DD  DSNAME=dsname,              X
//             DCB=(DSORG=IS,...),         X
//             UNIT=unit,DISP=OLD

Further details about DD statements for
existing single-volume and multivolume
indexed files can be found in the
publication IBM System/360 Operating
System: Job Control Language Reference.
Note: Figure 28 shows the parameters that
may be used in a DD statement when
processing indexed files opened as INPUT or
I-O. Additional information about indexed
file structure is contained in the
publication IBM System/360 Operating
System: Data Management Services.

Reorganizing Files: As new records are
added to an indexed file, chains of records
may be created in the overflow area if one
exists. The access time for retrieving
records in an overflow area is greater than
that required for retrieving records in the
prime area.
Input/output performance is,
therefore, sharply reduced when many
overflow records develop.
For this reason,
an indexed file can be reorganized as soon
as the need becomes evident. The system
maintains a set of statistics to assist the
programmer when reorganization is desired.
These statistics are maintained as fields
of the file's data control block. They are
made available when APPLY REORG-CRITERIA is
specified.
If these statistics are
desired, OPTCD=R must have been included in
the OPTCD subparameter of the DCB
parameter in each of the DD statements when
the file was created. Additional
information about reorganizing files is
contained in the publication IBM System/360
Operating System: Data Management
Services.

Sequential Retrieval Using the START
Statement: For indexed INPUT and I-O
files, retrieval starts with the first
nondummy record in the file.
If the
programmer wishes to begin processing at a
point other than the beginning of the file,
he can do so through the use of the START
verb. When the START statement is used,
the retrieval starts sequentially from the
record specified in the NOMINAL KEY.
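As a sketch (the file and key names follow the sample program shown later in this section; the key value and routine names are hypothetical), positioning could be established before sequential retrieval as follows:

    MOVE 00100 TO NOM-KEY.
    START INDEXED-FILE
        INVALID KEY GO TO KEY-ERROR.
    READ INDEXED-FILE AT END GO TO EOF-ROUTINE.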
Note: If SETL is to be issued from a
user-written assembler language program
against a QISAM file opened by a COBOL
program, either a null START statement
which has never been branched to should
appear in the COBOL program, or an
assembler language program should be called
before the file is opened. This program
must set the MACRF field of the DCB to
ensure loading of the SETL and ESETL
routines.

User File Processing

Delete Option. In order to keep the
number of records in the overflow area
to a minimum, and to eliminate
unnecessary records, an existing record
may be marked for deletion. This is
done by moving the figurative constant
HIGH-VALUE into the first character
position of the record. The record is
not physically deleted unless it is
forced off its prime track by the
insertion of a new record (see "Using
the WRITE Statement" in "Accessing an
Indexed File Randomly"), or if the file
is reorganized. Records marked for
deletion may be replaced (using BISAM)
by new records containing equivalent
keys. Execution of the READ statement
in QISAM does not make available a
record marked for deletion, whether the
record has been physically deleted or
not. Dummy records and deletion are
discussed further in "Accessing an
Indexed File Randomly."

r--------T--------------------------------,
|ddname  |ddname used only for first DD   |
|        |statement of each file          |
~--------+--------------------------------~
|DSNAME  |dsname                          |
|        |Note: Element subparameter must |
|        |not be used.                    |
~--------+--------------------------------~
|Device  |Mass storage required           |
~--------+--------------------------------~
|UNIT    |Applicable subparameter         |
|        |Note: Not needed if file is     |
|        |cataloged.                      |
~--------+--------------------------------~
|SEP, AFF|Restricted; see "Job Control    |
|        |Procedures"                     |
~--------+--------------------------------~
|VOLUME  |Applicable subparameters        |
~--------+--------------------------------~
|LABEL   |SL                              |
~--------+--------------------------------~
|SPACE   |Not applicable                  |
~--------+--------------------------------~
|SUBALLOC|Not applicable                  |
~--------+--------------------------------~
|SPLIT   |Not applicable                  |
~--------+--------------------------------~
|DISP    |OLD1 [,KEEP | ,PASS | ,DELETE]  |
~--------+--------------------------------~
|DCB     |Required: DSORG=IS              |
|        |Optional: BUFNO=xxx (not allowed|
|        |for BISAM)                      |
~--------+--------------------------------~
|1CATLG and UNCATLG not permitted         |
L-----------------------------------------J

Figure 28. DD Statement Parameters
Applicable to Indexed Files Opened
as INPUT or I-O

COBOL Considerations: When processing an
already existing file with QISAM, the
following COBOL programming considerations
should be noted:
• RECORD KEY Clause. The RECORD KEY
clause in the SELECT sentence of the
Environment Division is required, just
as it is when creating the file. Note
other record key considerations under
"Accessing an Indexed File Randomly."


The file processing technique used for
random retrieval of a logical record, the
random updating of a logical record, and/or
the random insertion of a record is BISAM
(Basic Indexed Sequential Access Method).
When accessing an indexed file randomly,
both NOMINAL KEY and RECORD KEY must be
specified. The format of the NOMINAL KEY
is described briefly below:

r-----------------------------------------,
|                 Format                  |
~-----------------------------------------~
|  NOMINAL KEY IS data-name               |
L-----------------------------------------J

Data-name may be any fixed-length
Working Storage item from 1 through 255
bytes in length. If it is part of a
logical record, it must be at a fixed
displacement from the beginning of that
record description (see the publication IBM
System/360 Operating System: Full American
National Standard COBOL for additional
information).

Since a RECORD KEY is used to identify a
record to the system, the record keys
associated with the logical records of the
file may be thought of as a table of
arguments. When a record is read or
written, the contents of NOMINAL KEY are
used as a search argument that is compared
to the record keys of the file.

The following example illustrates the
use of the NOMINAL KEY clause.
ENVIRONMENT DIVISION.

    NOMINAL KEY IS NOM-KEY
    RECORD KEY IS REC-KEY.

DATA DIVISION.
FILE SECTION.
FD  INDEXED-FILE
    LABEL RECORDS ARE STANDARD.
01  REC-1.
    02  DELETE-CODE  PIC X.
    02  REC-KEY      PIC 9(5).

WORKING-STORAGE SECTION.
77  NOM-KEY  PIC 9(5).
Because of their complementary use of
the indexed file organization, much of the
information discussed above for QISAM also
applies to BISAM. Differences are noted
below.
Using the WRITE Statement: The programmer
can use the WRITE statement to add a new
record into an indexed file. The record is
added on the basis of the value specified
in the NOMINAL KEY. The contents of the
NOMINAL KEY are used to locate the two
records in the file between which the new
record is to be inserted. The records
sought are those that have values less than
and greater than the values in the nominal
key field. Two methods can be used to add
records.
In the first method, the key to be added
is a new key value. The record is inserted
in place so that the sequence of the keys
is maintained. If an overflow area exists,
the insertion may cause records to be
forced off the prime track into the
overflow area. Dummy records forced off
the track in this way are physically
deleted and are not written in the overflow
area.
In the second method, the key of the
record to be added has the same value as
that of a known dummy record. If the dummy
record has not been physically deleted, it
is replaced by the new record. If it has
been physically deleted, the record is
inserted as though it had a new key value.
If the key of the record to be added has
the same value as a record other than a
dummy record, an INVALID KEY condition will
result.

Note:
• Records with a key higher (or lower)
than the current highest (or lowest)
key of the file may be added.
• Whenever a WRITE statement is executed,
the contents of RECORD KEY and NOMINAL
KEY must be identical. Except in the
case of dummy records, this value must
be unique in the file.
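For example (using the names from the sample program above; the key value and routine name are hypothetical), a record might be added as follows, with identical values in both keys:

    MOVE 10500 TO NOM-KEY.
    MOVE 10500 TO REC-KEY.
    WRITE REC-1 INVALID KEY GO TO KEY-ERROR.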

Using the REWRITE Statement: If a record
is to be updated, the indexed file should
be opened as 1-0 and the REWRITE statement
should be used. All REWRITE statements
must be preceded by a READ statement.
However, a READ statement can be followed
by either a WRITE, REWRITE, or another
READ.
Note: Whenever a REWRITE statement is
executed, the value contained in NOMINAL
KEY and RECORD KEY must be identical.
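A sketch of a random update (using the names from the sample program above; the key value and routine names are hypothetical), with the required READ preceding the REWRITE:

    MOVE 10500 TO NOM-KEY.
    READ INDEXED-FILE
        INVALID KEY GO TO NOT-FOUND.
    REWRITE REC-1
        INVALID KEY GO TO KEY-ERROR.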
Using the READ Statement: Records are
retrieved on the basis of the value
specified in the NOMINAL KEY. If the key
of a record marked for deletion is
specified and the record has not been
physically deleted, it will be produced.
If the record has been physically deleted,
the READ statement will cause an INVALID
KEY condition and control will go to the
INVALID KEY routine if specified.
Note: Although the RECORD KEY clause must
be specified, no value need be moved to the
record key field before the execution of
the READ statement. The search for the
desired record is based on the contents of
NOMINAL KEY.
COBOL Considerations: When processing an
indexed file randomly, the following COBOL
programming considerations should be noted:
• RECORD KEY Clause and NOMINAL KEY
Clause. The RECORD KEY and NOMINAL KEY
clauses in the SELECT sentence of the
Environment Division are required. The
RECORD KEY clause is used to specify
the location of the key within the
record itself. The NOMINAL KEY is used
as a search argument to locate the
proper record, and must not be defined
within the file being processed. Note
that since a RECORD KEY is defined
within a record, the contents of RECORD
KEY are not available after a WRITE
statement has been executed for that
record.
User File Processing

111

Page of GC28-6399-2, Revised 4/15/73, by TNL GN28-1038

Table 17.  Indexed File Processing on Mass Storage Devices
r-----------T----------T---------T-----------T-----------------T-----------,
|Data       |Access    |KEY      |OPEN       |Access           |CLOSE      |
|Management |Method    |Clauses  |Statement  |Verbs            |Statement  |
|Technique  |          |         |           |                 |           |
+-----------+----------+---------+-----------+-----------------+-----------+
|QISAM      |SEQUENTIAL|RECORD   |INPUT      |READ [INTO]      |[WITH LOCK]|
|           |          |NOMINAL  |           |  AT END         |           |
|           |          |         |           |START            |           |
|           |          |         |           |  INVALID KEY    |           |
|           |          |         +-----------+-----------------+           |
|           |          |         |OUTPUT     |WRITE [FROM]     |           |
|           |          |         |           |  INVALID KEY    |           |
|           |          |         +-----------+-----------------+           |
|           |          |         |I-O        |READ [INTO]      |           |
|           |          |         |           |  AT END         |           |
|           |          |         |           |START            |           |
|           |          |         |           |  INVALID KEY    |           |
|           |          |         |           |REWRITE [FROM]   |           |
|           |          |         |           |  INVALID KEY    |           |
+-----------+----------+---------+-----------+-----------------+-----------+
|BISAM      |RANDOM    |RECORD   |INPUT      |READ [INTO]      |[WITH LOCK]|
|           |          |NOMINAL  |           |  INVALID KEY    |           |
|           |          |         +-----------+-----------------+           |
|           |          |         |I-O        |READ [INTO]      |           |
|           |          |         |           |  INVALID KEY    |           |
|           |          |         |           |WRITE [FROM]     |           |
|           |          |         |           |  INVALID KEY    |           |
|           |          |         |           |REWRITE [FROM]   |           |
|           |          |         |           |  INVALID KEY    |           |
L-----------+----------+---------+-----------+-----------------+-----------J

• TRACK-AREA Clause. Specifying this
clause results in a considerable
improvement in efficiency when a record
is added to the file. If a record is
added and the TRACK-AREA clause was not
specified for the file, the contents of
the NOMINAL KEY field are unpredictable
after the WRITE statement is executed.
In this case, the key must be
reinitialized before the next WRITE
statement is executed. Even if
TRACK-AREA is specified, if the addition
of a record causes another record to be
bumped off the track into the overflow
area, the contents of the NOMINAL KEY
are unpredictable after a WRITE.
• APPLY REORG-CRITERIA Clause. If the
OPTCD=R parameter was specified on the
DD card for an indexed file when it was
created, the APPLY REORG-CRITERIA
clause can be used to obtain the
reorganization statistics when the file
is closed. These statistics are moved
from the data control block to the
identifier specified in the clause when
a CLOSE statement is executed for the
file.
• APPLY CORE-INDEX Clause. This clause
specifies that the highest level index
will reside in core storage during
input/output operations. Otherwise,
the index will be searched on the
volume, and processing time will be
longer.
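The APPLY REORG-CRITERIA bullet above depends on OPTCD=R having been coded in the DCB parameter when the file was created. A minimal sketch of such a DD card follows; the data set name, unit, volume, and space figures are hypothetical and not taken from this manual:

```
//MASTER   DD DSNAME=ISFILE,DISP=(NEW,CATLG),UNIT=2314,        X
//            VOLUME=SER=111111,SPACE=(CYL,(10),,CONTIG),      X
//            DCB=(DSORG=IS,OPTCD=R)
```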

• Required and optional COBOL statements
are summarized in Table 17.

USING THE DD STATEMENT

Each data set that is defined by a DD
statement is either to be created, or has
been previously created and is to be
retrieved. In either case, the data set
must have a disposition; for example, if
the data set is being created, the
disposition must indicate whether the data
set is to be cataloged, kept, or deleted.
Other DD parameters may simply indicate
that the data set is in the input stream
or that ultimately the data set is to be
printed or punched.

The following sections summarize the DD
statement parameters and show examples for
various uses of the DD statement. These
sections include information about
cataloging data sets and creating or
referring to generation data groups;
examples of cataloged data sets and
partitioned data sets are included. For
additional information about partitioned
data sets see "Libraries." Also see
"Appendix I: Checklist for Job Control
Procedures" for additional examples of the
DD statement used in job control
procedures.

CREATING A DATA SET

When creating a data set, the programmer
ordinarily will be concerned with the
following parameters:

1.  The data set name (DSNAME) parameter,
    which assigns a name to the data set
    being created.

2.  The unit (UNIT) parameter, which
    allows the programmer to state the
    type and quantity of input/output
    devices to be allocated for the data
    set.

3.  The volume (VOLUME) parameter, which
    allows specification of the volume on
    which the data set is to reside. This
    parameter also gives instructions to
    the system about volume mounting.

4.  The space (SPACE), split cylinder
    (SPLIT), and suballocation (SUBALLOC)
    parameters, for mass storage devices
    only, which permit the specification
    of the type and amount of space
    required to accommodate the data set.

5.  The label (LABEL) parameter, which
    specifies the type and some of the
    contents of the label associated with
    the data set.

6.  The disposition (DISP) parameter,
    which indicates what is to be done
    with the data set by the system when
    the job step is completed.

7.  The DCB parameter, which allows the
    programmer to specify additional
    information to complete the DCB
    associated with the data set (see
    "User-Defined Files"). This allows
    additional information to be specified
    at execution time to complete the DCB
    constructed by the compiler for a data
    set defined in the source program.

Figure 29 shows the subparameters that
are frequently used in creating data sets.
Additional subparameters are discussed in
"Job Control Procedures."

Creating Unit Record Data Sets

Data sets whose destination is a printer
or card punch are created with the DD
statement parameters UNIT and DCB.

UNIT: Required. Code unit information
using the 3-digit address (e.g., UNIT=00E),
the type (e.g., UNIT=1403), or the
system-generated group name (e.g.,
UNIT=PRINTER).

DCB: Required only if the data control
block is not completed in the processing
program. Valid DCB subparameters are
listed in "Appendix C: Fields of the Data
Control Block."

Creating Data Sets on Magnetic Tape

Tape data sets are created using
combinations of the DD statement parameters
UNIT, LABEL, DSNAME, DCB, VOLUME, and DISP.

UNIT: Required, except when volumes are
requested using VOLUME=REF. A unit can be
assigned by specifying its address, type,
or group name, or by requesting unit
affinity with an earlier data set.
Multiple output units and deferred volume
mounting can also be requested with this
parameter.

LABEL: Required when the tape has user
labels or does not have standard labels,
and when the data set does not reside first
on the reel. It is also used to assign a
retention period and password protection.

DSNAME: Required for data sets that are to
be cataloged or used by a later job.

DCB: Required only when data control block
information cannot be specified in COBOL.
Usually, such attributes as the logical
record length (LRECL) and buffering
technique (BFTEK) will have been specified
in the processing program. Other
attributes, such as the OPTCD field and the
tape recording technique (TRTCH), are more
appropriately specified in the DD
statement. Valid DCB subparameters are
listed in "Appendix C: Fields of the Data
Control Block."

VOLUME: Optional. This parameter is used
to request specific volumes. If VOLUME=REF
is specified, and the existing data sets on
the specified volume(s) are to be saved,
indicate the data set sequence number in
the LABEL parameter.

DISP: Required for data sets that are to
be cataloged, passed, or kept. The
programmer can specify conditional
disposition as the third term in the DISP
parameter to indicate how the data set is
to be treated if the job step abnormally
terminates.
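The magnetic tape parameters above can be combined on a single DD statement. The following sketch is illustrative only; the data set name, volume serial number, and retention period are hypothetical:

```
//TAPEOUT  DD DSNAME=WEEKLY.SALES,UNIT=2400,                   X
//            VOLUME=SER=001234,LABEL=(,SL,RETPD=30),          X
//            DISP=(NEW,KEEP)
```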

r---------------------------------------------------------------------------------------,
| {DSNAME|DSN}={dsname|dsname(element)|&&name|&&name(element)}                          |
|                                                                                       |
| UNIT=(name[,n])                                                                       |
|                                                                                       |
| {VOLUME|VOL}=([PRIVATE][,RETAIN][,volume-sequence-number][,volume-count]              |
|      [,SER=(volume-serial-number[,volume-serial-number]...)                           |
|      |,REF={dsname|*.ddname|*.stepname.ddname|*.stepname.procstep.ddname}])           |
|                                                                                       |
| SPACE=({TRK|CYL|average-record-length},                                               |
|        (primary-quantity[,secondary-quantity][,directory-quantity]))                  |
|                                                                                       |
| SPLIT=(n,{CYL|average-record-length},(primary-quantity[,secondary-quantity]))         |
|                                                                                       |
| LABEL=([data-set-sequence-number],{NL|SL}[,EXPDT=yyddd|,RETPD=xxxx])                  |
|                                                                                       |
| DISP=([NEW][,DELETE|,KEEP|,PASS|,CATLG][,DELETE|,KEEP|,CATLG])                        |
|                                                                                       |
| DCB=(subparameter-list)                                                               |
L---------------------------------------------------------------------------------------J

Figure 29.  DD Statement Parameters Frequently Used in Creating Data Sets

Creating Sequential (BSAM or QSAM) Data
Sets on Mass Storage Devices

Sequential data sets are created using
combinations of the DD statement
parameters UNIT, DSNAME, VOLUME, LABEL,
DISP, DCB, and one of the space allocation
parameters SPACE, SPLIT, or SUBALLOC.

UNIT: Required, except when volumes are
requested using VOLUME=REF or space is
allocated using SPLIT or SUBALLOC. Assign
a unit by specifying its address, type, or
group name, or by requesting unit affinity.

DSNAME: Required for all but temporary
data sets.
LABEL: Required to specify label type and
to assign a retention period or password
protection.
DCB: Required only when data control block
information is not completely specified in
the processing program. Usually, such
attributes as the logical record length
(LRECL) and buffering technique (BFTEK)
will have been specified in the processing
program. Other attributes, such as the
OPTCD field, are more appropriately
specified in the DD statement. Valid DCB

subparameters are listed in "Appendix C:
Fields of the Data Control Block."
VOLUME: Optional. This parameter requests
specific volumes (SER and REF), specific
volumes when the data set resides on more
than one volume (seq #), multiple
nonspecific volumes (volcount), private
volumes (PRIVATE), or private volumes that
are to remain mounted until the end of the
job (RETAIN).
DISP: Required for data sets that are to
be cataloged, passed, or kept. The
programmer can specify conditional
disposition as the third term in the DISP
parameter to indicate how the data set is
to be treated if the job step abnormally
terminates.
SPACE, SPLIT, SUBALLOC: One of these is
required for all new mass storage data
sets.
Creating Direct (BDAM) Data Sets
Direct (BDAM) data sets are created
using the same subset of DD statement
parameters as sequential data sets, with
the exception of the SPLIT parameter.
Valid DCB subparameters for BDAM data sets
are listed in "Appendix C: Fields of the
Data Control Block."

Creating Indexed (BISAM and QISAM) Data
Sets

Indexed (ISAM) data sets are created
using combinations of the DD statement
parameters UNIT, DSNAME, VOLUME, LABEL,
DISP, DCB, and SPACE. The ISAM data sets
occupy three areas of storage: an index
area that contains master and cylinder
indexes, a prime area that contains the
data records and track indexes, and an
optional overflow area to hold additional
records when the prime area is exhausted.
Detailed information on creating and
retrieving indexed data sets is presented
in "Appendix H: Creating and Retrieving
Indexed Sequential Data Sets."
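As a minimal sketch of such a statement (the data set name, unit, volume, and space figures are hypothetical; Appendix H gives the authoritative forms), a single-area allocation might let the indexes be built within the prime area:

```
//ISFILE   DD DSNAME=PARTS.MASTER,DISP=(NEW,KEEP),UNIT=2314,   X
//            VOLUME=SER=222222,SPACE=(CYL,(10),,CONTIG),      X
//            DCB=DSORG=IS
```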

Creating Data Sets in the Output Stream
New data sets can be written on a system
output device in much the same way as
messages. When using a sequential
scheduler, a data set is directed to the
output stream with the SYSOUT and DCB
parameters.

SYSOUT: Required. The output class
through which the data set is routed must
be specified. Output classes are
identified by a single alphanumeric
character.
DCB: Required only if complete data
control block information has not been
specified in the processing program.

When using a priority scheduler, data
sets are not routed directly to a system
output device. They are stored by the
processing program on an intermediate mass
storage device and later written on a
system output device. In addition to the
SYSOUT and DCB parameters, DD statements
defining a data set of this type can also
contain UNIT and SPACE parameters. All
other parameters must be absent.
SYSOUT: Required. The output class
through which the data set is routed must
be specified. Output classes are
identified by a single alphanumeric
character. (Do not use classes 0 through 9
except in cases where the other classes are
not sufficient.)

DCB: Required only if complete data
control block information has not been
specified in the processing program. Data
control block information is used when the
data set is written on an intermediate mass
storage volume and read by the output
writer. However, the output writer's own
DCB attributes are used when the data set
is written on the system output device.
Valid DCB parameters are listed in
"Appendix C: Fields of the Data Control
Block."
UNIT: Optional. An intermediate mass
storage device is assigned if UNIT is
specified. A default device is assigned if
this parameter is omitted.
SPACE: Optional. Estimate the amount of
mass storage space required. A default
estimate is assumed if this parameter is
omitted.
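Under a priority scheduler, then, a printer-bound data set might be defined as in the following sketch; the output class, unit name, space figures, and DCB attributes are illustrative assumptions, not requirements:

```
//SYSPRINT DD SYSOUT=A,UNIT=SYSDA,SPACE=(TRK,(10,5)),          X
//            DCB=(RECFM=FA,LRECL=121,BLKSIZE=121)
```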
Note: When a Direct SYSOUT Writer is used,
the priority scheduler functions as a
sequential scheduler. The SYSOUT data sets
of the particular output class from any of
the eligible job classes are not stored on
an intermediate storage device, but are
written directly to the system output
device. When Direct SYSOUT Writer is used,
all the parameters on the DD card are
ignored. For detailed information on
Direct SYSOUT Writer, see the publication
IBM System/360 Operating System:
Operator's Reference, Order No. GC28-6691.


Examples of DD statements Used To Create Data Sets
The following examples show various ways of specifying DD statements
for data sets that are to be created.
In general, the number of
parameters and subparameters that are specified depend on the
disposition of the data set at the end of the job step.
If a data set
is used only in the job step in which it is created and is deleted at
the end of the job step, a minimum number of parameters are required.
However, if the data set is to be cataloged, more parameters should be
specified.

Example 1:  Creating a data set for the current job step only.

//SYSUT1   DD UNIT=SYSDA,SPACE=(TRK,(50,10))

This example shows the basic required DD statement for creating and
storing a data set on a mass storage device. The UNIT parameter is
required unless the unit information is available from another source.
If the data set were to be stored on a unit record or a tape device, the
SPACE parameter would not be needed.
The operating system assigns a
temporary data set name and assumes a disposition of (NEW, DELETE).
Example 2:  Creating a data set that is used only for the current job.

//SYSLIN   DD DSNAME=&&TEMP,DISP=(MOD,PASS),UNIT=SYSSQ,        X
//            SPACE=(TRK,(50))

This example shows a DD statement that creates a data set for use in
more than one step of a job. The system assigns a unique symbol for the
name, and this same symbol is substituted for each recurrence of the
&&TEMP name within the job. The data set is allocated space on any
available mass storage or tape device.
If a tape device is selected,
the SPACE parameter is ignored. The disposition specifies that the data
set is either new or is to be added to (MOD), and is to be passed to the
next job step (PASS). This DD statement can be used for specifying the
data set that is created as output from the compiler and that is to be
used as input to the linkage editor. By specifying MOD, separately
compiled object modules can be placed in sequence in the same data set.
Note: If MOD is specified for a data set that does not already exist,
the job may be abnormally terminated when a volume reference name, a
volume serial number, or the disposition CATLG is specified or when the
dsname is indicated by a backwards reference.
Example 3:  Creating a data set that is to be kept but not cataloged.

//TEMPFILE DD DSN=FILEA,DISP=(,KEEP),SPACE=(TRK,(30,10)),      X
//            UNIT=DIRECT,VOL=(,RETAIN,SER=AA70)

The example shows a DD statement that creates a data set that is kept
but not cataloged. The data set name is FILEA. The disposition (,KEEP)
specifies that the data set is being created in this job step and is to
be kept. It is kept until a disposition of DELETE is specified on
another DD statement. The KEEP parameter implies that the volume is to
be treated as private.
Private implies that the volume is unloaded at
the end of the job step but because RETAIN is specified, the volume is
to remain mounted until the end of the job unless another reference to
it is encountered.
The DIRECT parameter is a hypothetical device class,
containing only mass storage devices.
The volume with serial number
AA70, mounted on a device in this class, is assigned to the data set.

Space for the data set is allocated as specified in the SPACE parameter.
The data set has standard labels since it is on a mass storage volume.
If the volume serial number were not specified in the foregoing
example, the system would allocate space in an available nonprivate
volume.
Because KEEP is specified, the volume becomes private.
(Another data set cannot be stored on a private volume unless its volume
serial number is specified or affinity with a data set on the volume is
stated.) The volume serial number of the volume assigned, if
applicable, is included in the disposition message for the data set.
Disposition messages are messages from the job scheduler, generated at
the end of the job step.

Example 4:  Creating a data set and cataloging it.

//DDNAMEA  DD DSNAME=INVENT.PARTS,DISP=(NEW,CATLG),            X
//            LABEL=(,EXPDT=69031),UNIT=DACLASS,               X
//            VOLUME=(,REF=*.STEP1.DD1),                       X
//            SPACE=(CYL,(5,1),,CONTIG)

This example shows a DD statement that creates a data set named
INVENT.PARTS and catalogs it in the previously created system catalog.
The data set is to occupy the same volume as the data set referred to in
the DD statement named DD1 occurring in the job step named STEP1. The
UNIT parameter is ignored since REF is specified. Five cylinders are
allocated to the data set, and if this space is exhausted, more space is
allocated, one cylinder at a time. The five cylinders are to be
contiguous. The disposition (CATLG) implies that the volume is to be
private. INVENT.PARTS is to have standard labels. The expiration
date is the 31st day of 1969.

Example 5:  Adding a member to a previously created library.

//SYSLMOD  DD DSNAME=SYS1.LINKLIB(INVENT),DISP=OLD

This DD statement adds a member named INVENT to the link library
(SYS1.LINKLIB). When a member is added to a previously created data
set, OLD should be specified. The member INVENT takes on the
disposition of the library.

Example 6:  Creating a library and its first member.

//SYSLMOD  DD DSNAME=USERLIB(MYPROG),DISP=(,CATLG),            X
//            SPACE=(TRK,(50,30,3)),UNIT=2311,VOLUME=SER=111111

This DD statement creates a library, USERLIB, and places a member,
MYPROG, in it. The disposition (,CATLG) indicates that the data set is
being created in this job step (NEW is the default condition for the
DISP parameter and is indicated by the comma) and is to be cataloged.
The data set is to have standard labels. Space is allocated for the
data set in a volume on a mass storage device that is an IBM 2311 unit.
Initially, 50 tracks are allocated to the data set, but when this space
is exhausted, more tracks are added, 30 at a time. The SPACE parameter
must be specified when the library is created, and it must include
allocation of space for the directory. SPACE cannot be specified when
new members are added. If additional space is required when new members
are added, the secondary allocation, if specified, will be used. Three
256-byte records are to be used for the directory. The volume serial
number of the volume on which the library is to reside is 111111.

Example 7:  Replacing a member of an existing library.

//SYSLMOD  DD DSNAME=MYLIB(CASE3),DISP=OLD

This DD statement replaces the member named CASE3 with a new member
with the same name. If the named member does not exist in the library,
the member is added as a new member. In the foregoing example, the
library is cataloged.
Example 8:  Creating and adding a member to a library used only for the
current job.

//SYSLMOD  DD DSNAME=&&USERLIB(MYPROG),DISP=(,PASS),UNIT=SYSDA, X
//            SPACE=(TRK,(50,,1))

This DD statement creates and adds a member to a temporary library.
It is similar to the DD statement shown in Example 6, except that a
temporary name is used and the data set is neither cataloged nor kept
but is simply passed to the next job step. Since the data set is to be
used only for this one job, it is not necessary to specify VOLUME and
LABEL information. This statement can be used for a linkage edit job
step in which the module is to be passed to the next step.

Note:  If DISP=(,DELETE) is specified for a library, the entire library
will be deleted.


RETRIEVING PREVIOUSLY CREATED DATA SETS
The parameters that must be specified in
a DD statement to retrieve a previously
created data set depend on the information
that is available to the system about the
data set. For example,
1.  If a data set on a magnetic-tape or
    mass storage volume was created and
    cataloged in a previous job or job
    step, all information for the data
    set, such as volume, space, etc., is
    stored in the catalog and data set
    label. This information need not be
    repeated. Only the dsname and
    disposition parameters need be
    specified.

2.  If the data set was created and kept
    in a previous job but has not been
    cataloged, information concerning the
    data set, such as space, record
    format, etc., is stored in the data
    set label. However, the unit and
    volume information must be specified
    unless available elsewhere.

3.  If the data set was created in the
    current job step, or in a previous job
    step in the current job, the
    information in the previous DD
    statement is available to the system
    and is accessible by referring to the
    previous DD statement. Only the
    dsname and disposition parameters need
    be specified.

r-----------------------------------------,
| {DSNAME|DSN}={dsname                    |
|              |dsname(element)           |
|              |*.ddname                  |
|              |*.stepname.ddname         |
|              |&&name                    |
|              |&&name(element)}          |
|                                         |
| UNIT=(name[,n])                         |
|                                         |
| DCB=(subparameter-list)                 |
|                                         |
| DISP=({OLD|SHR|MOD}                     |
|       [,DELETE|,KEEP|,PASS|,CATLG       |
|        |,UNCATLG]                       |
|       [,DELETE|,KEEP|,CATLG|,UNCATLG])  |
|                                         |
| LABEL=(subparameter-list)               |
|                                         |
| {VOLUME|VOL}=(subparameter-list)        |
L-----------------------------------------J

Figure 30.  Parameters Frequently Used in
            Retrieving Previously Created
            Data Sets

Retrieving Cataloged Data Sets

DSNAME: Required. The data set must be
identified by its cataloged name. If the
catalog contains more than one index level,
the data set name must be fully qualified.

DISP: Required. The status of the data
set must be given, and an indication made
as to how it is to be treated after its
use; for example, whether it is to be kept
(KEEP) or deleted (DELETE).

Retrieving Noncataloged (KEEP) Data Sets
Input data sets that were assigned a
disposition of KEEP are retrieved by their
tabulated name and location, using the DD
statement parameters DSNAME, UNIT, VOLUME,
DISP, LABEL, and DCB.
DSNAME: Required. The data set must be
identified by the name assigned to it when
it was created.
UNIT: Required, unless VOLUME=REF is used.
The unit must be identified by its address,
type, or group name.
If the data set
requires more than one unit, give the
number of units. Deferred volume mounting
and unit separation can be requested with
this parameter.

VOLUME: Required. The volume(s) must be
identified with serial numbers or, if the
data set was retrieved earlier in the same
job, with VOLUME=REF. If the volume is to
be PRIVATE, it must be so designated. If a
private volume is to remain mounted until a
later job step uses it, RETAIN should be
designated.

DISP: Required. The status (OLD or SHR)
of the data set must be given and an
indication made as to how it is to be
treated after its use. The programmer can
specify conditional disposition as the
third term in the DISP parameter to
indicate how the data set is to be treated
if the job step abnormally terminates.

LABEL: Required if the data set does not
have a standard label. If the data set
resides with others on tape, its sequence
number must be given.

DCB: Required for all indexed data sets.
Otherwise, required only if complete data
control block information is not supplied
by the processing program and the data set
label. To save recoding time, copy DCB
attributes from an existing DCB parameter,
and modify them if necessary. Valid DCB
subparameters are listed in Appendix C.

Retrieving Passed Data Sets

Input data sets used in a previous job
step and passed are retrieved using the DD
statement parameters DSNAME, DISP, and
UNIT. The data set's unit type, volume
location, and label information remain
available to the system from the original
DD statement.

DSNAME: Required. The original data set
must be identified by either its name or
the DD statement reference term
*.stepname.ddname. If the original DD
statement occurs in a cataloged procedure,
the procedure stepname must be included in
the reference term.

DISP: Required. The data set must be
identified as OLD, and an indication made
as to how it is to be treated after its
use. The programmer can specify
conditional disposition as the third term
in the DISP parameter to indicate how the
data set is to be treated if the job step
abnormally terminates.

UNIT: Required only if more than one unit
is allocated to the data set.

Extending Data Sets With Additional Output

A processing program can extend an
existing data set by adding records to it
instead of reading it as input. Such a
data set is retrieved using the same
subsets of DD statement parameters
described under the preceding three topics,
depending on whether it was cataloged,
kept, or passed when created. In each
case, however, the DISP parameter must
indicate a status of MOD. When MOD is
specified, the system positions the
appropriate read/write head after the last
record in the data set. If a disposition
of CATLG for an extended data set that is
already cataloged is indicated, the system
updates the catalog to reflect any new
volumes caused by the extension.

When extending a multivolume data set
where the number of volumes might exceed
the number of units used, the programmer
should either specify a volume count or
deferred mounting as part of the volume
information. This ensures data set
extension to new volumes.
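Because extension hinges on a status of MOD, a brief sketch may help; the data set name and volume count below are hypothetical (the volume count occupies the fourth positional subparameter of VOLUME):

```
//EXTEND   DD DSNAME=WEEKLY.SALES,DISP=(MOD,KEEP),             X
//            VOLUME=(,,,3),UNIT=2400
```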

Retrieving Data Through an Input Stream

Data sets in the form of decks of cards
or groups of card images can be introduced
to the system through an input stream by
interspersing them with control statements.
To define a data set in the input stream,
mark the beginning of the data set with a
DD statement and the end with a delimiter
statement. The DD statement must contain
one of the parameters * or DATA. Use DATA
if the data set contains job control
statements and an * if it does not. Two
DCB subparameters can also be coded when
defining a data set in the input stream.
In systems with MFT or MVT, data in the
input stream is temporarily transferred to
a mass storage device. The DCB
subparameters BLKSIZE and BUFNO allow
blocking of this data as it is placed on
the mass storage device.

When using a sequential scheduler:

• The input stream must be on a card
  reader or magnetic tape.

• Each job step and procedure step can be
  associated with only one data set in
  the input stream.

• The DD statement must be the last in
  the job step or procedure step.

• The records must be unblocked, and
  80 characters in length.

• The characters in the records must be
  coded in BCD or EBCDIC.

When using a priority scheduler:

• The input stream can be on any device
  supported by QSAM.

• Each job step and procedure step can be
  associated with several data sets in an
  input stream. All such data sets
  except the first in the job must be
  preceded by DD * or DD DATA statements.

• The characters in the records must be
  coded in BCD or EBCDIC.
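The delimiter convention described above can be sketched as follows; the card images are placeholders, and the BLKSIZE and BUFNO values shown are illustrative assumptions:

```
//SYSIN    DD *,DCB=(BLKSIZE=800,BUFNO=2)
   (80-character card images)
/*
```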


Examples of DD statements Used To Retrieve Data Sets

Example 1:  Retrieving a cataloged data set.

//CALC     DD DSNAME=PROCESS,DISP=(OLD,PASS,KEEP)

This DD statement retrieves a cataloged data set named PROCESS. No
UNIT or VOLUME information is needed. Since PASS is specified, the
volume on which the data set is written is retained at the end of the
job step. PASS implies that a later job step will refer to the data
set. The last step in the job referring to the data set should specify
the final disposition. If no other DD statement refers to the data set,
it is assumed that the status of the data set is as it existed before
this job. In the event of an abnormal termination, the KEEP disposition
explicitly states the disposition of the data set.
Example 2:  Retrieving a data set that was kept but not cataloged.

//TEMPFILE DD DSNAME=FILEA,UNIT=DIRECT,VOLUME=SER=AA70,DISP=OLD

This DD statement retrieves a kept data set named FILEA. (This data
set is created by the DD statement shown in Example 3 for creating data
sets.) The data set resides on a device in a hypothetical device class,
DIRECT. The volume serial number is AA70.
Example 3:  Referring to a data set in a previous job step.

//SAMPLE   JOB
//STEP1    EXEC PGM=IKFCBL00,PARM=DECK
//SYSLIN   DD  DSNAME=ALPHA,DISP=(NEW,PASS),UNIT=SYSSQ
//STEP2    EXEC PGM=IEWL
//SYSLIN   DD  DSNAME=*.STEP1.SYSLIN,DISP=(OLD,DELETE)

The DD statement SYSLIN in STEP2 refers to the data set defined in
the DD statement SYSLIN in STEP1.
Example 4:  Retrieving a member of a library.

//BANKING  DD DSNAME=PAYROLL(HOURLY),DISP=OLD

The DD statement retrieves a member, HOURLY, from a cataloged
library, PAYROLL.


DD STATEMENTS THAT SPECIFY UNIT RECORD
DEVICES

A DD statement may simply indicate that
data follows in the input stream or that
the data set is to be punched or printed.
Figure 31 shows the parameters of special
interest for these purposes.

r-----------------------------------------,
| {*|DATA}                                |
|                                         |
| SYSOUT=A                                |
|                                         |
| UNIT=name                               |
|                                         |
| DCB=(subparameters)                     |
+-----------------------------------------+
| Note: The DCB parameter can be          |
| specified, where permissible, for data  |
| sets on unit record devices. For        |
| example, it can be specified for        |
| compiler data sets (other than SYSUT1,  |
| SYSUT2, SYSUT3, and SYSUT4) and data    |
| sets specified by the DD statements     |
| required for the ACCEPT and DISPLAY     |
| statements, when any of these data sets |
| are assigned to unit record devices.    |
L-----------------------------------------J

Figure 31.  Parameters Used To Specify
            Unit Record Devices

Example 1:  Specifying data in the card
reader.

//SYSIN    DD *

The asterisk indicates that data follows
in the input stream. This statement must
be the last DD statement for the job step.
The data must be followed by a delimiter
statement.

Example 2:  Specifying a printer data set.

//SYSPRINT DD SYSOUT=A

SYSOUT is the system output parameter; A
is the standard device class for printer
data sets.

Example 3:  Specifying a card punch.

//SYSPUNCH DD SYSOUT=B

B is the standard device class for punch
devices.

CATALOGING A DATA SET

A data set is cataloged whenever CATLG
is specified in the DISP parameter of the
DD statement that creates or uses it. This
means that the name and volume
identification for the data set are placed
in a system index called the catalog. (See
"Processing with QISAM" in the section
"Execution Time Data Set Requirements" for
information about cataloging indexed data
sets.) The information stored in the
catalog is always available to the system;
consequently, only the data set name and
disposition need be specified in subsequent
DD statements that retrieve the data set.
See Example 4 in "Creating Data Sets," and
Example 1 in "Retrieving Data Sets."

If DELETE is specified for a cataloged
data set, any reference to the data set in
the catalog is deleted unless the DD
statement containing DELETE retrieves the
data set in some way other than by using
the catalog. If UNCATLG is specified for a
cataloged data set, only the reference in
the catalog is deleted; the data set itself
is not deleted.

Note: A "cataloged data set" should not be
confused with a "cataloged procedure" (see
"Using the Cataloged Procedures").
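As a small illustration of the UNCATLG disposition discussed above, the following sketch reuses the INVENT.PARTS name from Example 4 of "Creating Data Sets"; the ddname is hypothetical:

```
//OLDFILE  DD DSNAME=INVENT.PARTS,DISP=(OLD,UNCATLG)
```

The catalog entry is removed at the end of the step, but the data set itself remains on its volume.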

It is sometimes convenient to save data
sets as elements or generations of a
generation data group
(DSNAME=dsname(element)). A generation
data group is a collection of successive
historically related data sets.
Identification of data sets that are
elements of a generation data group is
based upon the time the data set is added
as an element. That is, a generation
number is attached to the generation data
group name to refer to a particular
element. The name of each element is the
same, but the generation number changes as
elements are added or deleted. The most
recent element is 0, the element added
previous to 0 is -1, the element added
previous to -1 is -2, etc. A generation
data group must always be cataloged.

For example, a data group named PAYROLL might be used for a weekly payroll. The elements of the group are:

   PAYROLL(0)
   PAYROLL(-1)
   PAYROLL(-2)

where PAYROLL(0) is the data set that contains the information for the most current weekly payroll and is the most recent addition to the group.

When a new element is added, it is called element(+n), where n is an integer greater than 0. For example, when adding a new element to the weekly payroll, the DD statement defines the data set to be added as PAYROLL(+1); at the end of the job the system changes its name to PAYROLL(0). The element that was PAYROLL(0) at the beginning of the job becomes PAYROLL(-1) at the end of the job, and so on.

If more than one element is being added in the same job, the first is given the number (+1), the next (+2), and so on.
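As an illustration (the ddname, device, and volume serial are invented, and the installation may require additional parameters, such as DCB information, for the generation data group), a new generation of the weekly payroll might be added with a DD statement such as:

   //NEWPAY   DD  DSNAME=PAYROLL(+1),DISP=(NEW,CATLG),
   //             UNIT=2400,VOLUME=SER=WK0001,LABEL=(,SL)

At the end of the job the system catalogs this data set as PAYROLL(0), and the generation numbers of the older elements are adjusted accordingly.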

NAMING DATA SETS

Each data set must be given a name. The name can consist of alphanumeric characters and the special characters, hyphen and the +0 (12-0 multipunch). The first character of the name must be alphabetic. The name can be assigned by the system, it can be given a temporary name, or it can be given a user-assigned name. If no name is specified on the DD statement that creates the data set, the system assigns to the data set a unique name for the job step. If a data set is used only for the duration of one job, it can be given a temporary name (DSNAME=&&name). If a data set is to be kept but not cataloged, it can be given a simple name. If the data set is to be cataloged, it should be given a fully qualified data set name. The fully qualified data set name is a series of one or more simple names joined together so that each represents a level of qualification. For example, the data set name DEPT999.SMITH.DATA3 is composed of three simple names that are separated by periods to indicate a hierarchy of names. Starting from the left, each simple name indicates an index or directory within which the next simple name is a unique entry. The rightmost name identifies the actual location of the data set.

Each simple name consists of one to eight characters, the first of which must be alphabetic. The special character period (.) separates simple names from each other. Including all simple names and periods, the length of a data set name must not exceed 44 characters. Thus, a maximum of 21 qualification levels is possible for a data set name.

Programmers should not use fully qualified data set names that begin with the letters SYS and that also have a P as the nineteenth character of the name. Under certain conditions, data sets with the above characteristics will be deleted.

The following topics are discussed in this section: the data control block, error processing for COBOL files, and volume and data set labels. More information about input/output processing is contained in the publication IBM System/360 Operating System: Data Management Services.
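For example, a work file needed only for the duration of one job might be defined with a temporary name (the ddname and space figures here are illustrative):

   //WORKFILE DD  DSNAME=&&TEMP,UNIT=2311,
   //             SPACE=(TRK,(10,5)),DISP=(NEW,DELETE)

The system generates a unique internal name from &&TEMP for the job, and the data set is deleted when it is no longer needed.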

DATA CONTROL BLOCK
Each data set is described to the
operating system by a data control block
(DCB). A data control block consists of a
group of contiguous fields that provide
information about the data set to the
system for scheduling and executing
input/output operations. The fields
describe the characteristics of the data
set (e.g., data set organization) and its
processing requirements (e.g., whether the
data set is to be read or written).
The
COBOL compiler creates a skeleton DCB for
each data set and inserts pertinent
information specified in the Environment
Division, FD entry, and input/output
statements in the source program. The DCB
for each file is part of the object module
that is generated. Subsequently, other
sources can be used to enter information
into the data control block fields.
The
process of filling in the data control
block is completed at execution time.
Additional information that completes
the DCB at execution time may come from the
DD statement for the data set and, in
certain instances, from the data set label
when the file is opened.

Overriding DCB Fields
Once a field in the DCB is filled in by
the COBOL compiler, it cannot be overridden
by a DD statement or a data set label.
For
example, if the buffering factor for a data
set is specified in the COBOL source
program by the RESERVE clause, it cannot be
overridden by a DD statement.
In the same
way, information from the DD statement
cannot be overridden by information
included in the data set label.

Identifying DCB Information
The links between the DCB, DD statement,
data set label, and input/output statements
are the filename, the system name in the
ASSIGN clause of the SELECT statement, the
ddname of the system-name, and the dsname
(Figure 32).

1.  The filename specified in the SELECT statement and in the FD entry of the COBOL source program is the name associated with the DCB.

2.  Part of the system-name specified in the ASSIGN clause of the source program is the ddname link to the DD statement. This name is placed in the DCB.

3.  The dsname specified in the DD statement is the link to the physical data set.
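These links can be illustrated as follows (the file, data set, and ddname INDATA are invented for this sketch). In the COBOL source program:

       SELECT INFILE ASSIGN TO UT-S-INDATA.

The ddname portion of the system-name is INDATA, so the job step must include a DD statement with that name:

   //INDATA   DD  DSNAME=DEPT999.SMITH.DATA3,DISP=OLD

The filename INFILE identifies the DCB, the ddname INDATA links the DCB to the DD statement, and the dsname links the DD statement to the physical data set.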

The fields of the data control block are
described in the tables in Appendix C.
They identify those fields for which
information must be supplied by the source

program, by a DD statement, or by the data
set label.
For further information about
the data control block, see the discussion
of the DCB macro instruction for the
appropriate file processing technique in
the publication IBM System/360 Operating System: Data Management Services.

ERROR PROCESSING FOR COBOL FILES

During the processing of a COBOL file,
data transmission to or from an
input/output device may not be successful
the first time it is attempted.
If it is
not successful, standard error recovery
routines, provided by the operating system,
attempt to clear the failure and allow the
program to continue uninterrupted.
If an input/output error cannot be
corrected by the system, an abnormal
termination (ABEND) of the program may
occur unless the programmer has specified
some means of error analysis.
Error
processing routines initiated by the
programmer are discussed in the following
paragraphs, and in "Appendix G:
Input/Output Error Conditions."
For sequential files, the programmer can
specify a DD statement option (EROPT) that
specifies the type of action to be taken by
the system if an error occurs.
This option
can be specified whether or not a
declarative is written.
If a declarative
is specified, the DD statement option is
executed when a normal exit is taken from
the declarative.
See "Accessing a Standard
Sequential File" for further information.

Figure 32. Links between the SELECT Statement, the DD Statement, the Data Set Label, and the Input/Output Statements


INVALID KEY Option
INVALID KEY errors may occur for files
accessed randomly, or for output files
accessed sequentially. A test to determine
these errors may be made by using the
INVALID KEY option of the READ, WRITE,
REWRITE, or START verb.
Note: Secondary space allocation must be
specified when the INVALID KEY option is
used in a WRITE statement for QSAM and
BSAM.
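A test of this kind might be written as follows (the record and procedure names are invented for this sketch):

       WRITE PAY-RECORD INVALID KEY
           GO TO NO-MORE-SPACE.

If the record cannot be written, control passes to the procedure NO-MORE-SPACE instead of the program terminating abnormally.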

USE AFTER ERROR Option
The programmer may specify the USE AFTER
ERROR option in the declarative section of
the Procedure Division to determine the
type of the input/output error. With the
USE AFTER ERROR option, the programmer can
pass control to an error-processing routine
to investigate the nature of the error.
If
the GIVING option of the USE AFTER ERROR
declarative is specified, data-name-1 will
contain information about the error
condition. Data-name-2, if specified, will
contain the block in error if the last
input/output operation was a read.
If the
file was opened as output, data-name-2 in
the GIVING option cannot be referenced.
Data-name-2 of the GIVING option
contains valid data only if data was
actually transferred on the last
input/output operation. For example, if
the declarative is entered after execution
of a START verb for a QISAM file on which
no INVALID KEY option was present, an
attempt to access data-name-2 results in an
abnormal termination, because no transfer
of data has taken place. If data-name-2 is
specified in other than the Linkage
Section, an abnormal termination will occur
on entry to the USE ERROR declarative, if
the declarative is invoked by an I/O
request other than a READ (or any READ
error in which no data transfer has taken
place). Therefore, data-name-2 should be
specified in the Linkage section, and
should be referred to only if the error is
a READ error in which data transfer took
place. This can be determined by examining
data-name-1.
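A declarative of this kind might be sketched as follows (the file and data names are invented; ERR-BLOCK, corresponding to data-name-2, is assumed to be defined in the Linkage Section as recommended above):

       PROCEDURE DIVISION.
       DECLARATIVES.
       ERROR-CHECK SECTION.
           USE AFTER STANDARD ERROR PROCEDURE ON INFILE
               GIVING ERR-INFO, ERR-BLOCK.
       ERROR-TEST.
           ... examine ERR-INFO to determine the error type ...
       END DECLARATIVES.

On a normal exit from the section, control returns to the system.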
Either or both the INVALID KEY clause
and the USE AFTER ERROR declarative may be
specified for a file.
If both have been
specified and an INVALID KEY error occurs,
the imperative-statement specified in the
INVALID KEY option will be executed. If
both have been specified and any other type
of input/output error occurs the USE AFTER
ERROR declarative will be entered.
For a
file other than standard sequential, if an
I/O error occurs which is not INVALID KEY,
and the USE ERROR declarative is not active
for the file, the execution will be
terminated in a manner equivalent to a STOP
RUN. However, if such an error occurs in a
sort, input, or output procedure, the
execution will abnormally terminate.
Table 18 is a generalized summary of the means
available for recovery from an invalid key
condition or an input/output error.
Table
19 lists the error processing facilities
available for each type of file
organization. The following discussion
summarizes the action taken by each
facility for each type.
For further
information on the USE AFTER ERROR option,
see the publication IBM System/360
Operating System: Full American National
Standard COBOL.
STANDARD SEQUENTIAL
• Operating System: If the error cannot
be corrected (read only), the program
will ABEND in the absence of a DD
statement option, USE AFTER STANDARD
ERROR declarative, or INVALID KEY
option.
If both the DD statement
option and USE section are specified,
the control program will execute the
USE declarative first and then the DD
option if normal exit is taken from the
declarative section.
If no EROPT
subparameter is indicated, or if ABE is
specified and a USE AFTER STANDARD
ERROR declarative exists, the
declarative will receive control.
After a normal exit, the job will
abnormally terminate.
• DD Statement Option: The EROPT
subparameter in the DCB parameter
specifies one of three actions: accept
the error block (ACC), skip the error
block (SKP), or terminate the job
(ABE).
• INVALID KEY: A transfer of control to
the procedure indicated in the INVALID
KEY phrase occurs if additional space
cannot be allocated to write the record
requested. This condition occurs when
either no more space is available or 16
extents have already been allocated on
the last volume assigned to the data
set. The transfer of control occurs
only if a secondary-quantity is
specified in the DD statement SPACE,
SPLIT, or SUBALLOC parameter. If no
secondary-quantity is specified, the
primary-quantity is assumed to be the
exact amount of space required for the
data set, and any attempt to write a
record beyond the storage allocated

causes the program to end abnormally.
When an INVALID KEY error occurs, the
file can be closed so that it may
subsequently be reopened for retrieval
as INPUT or I-O.
• USE AFTER STANDARD ERROR: The
programmer may specify this option in
order to display the cause of the
error. Control goes to the declarative
section; the programmer can then
display a message indicating the error
and execute his DD statement option on
a normal exit from the declarative
section.
INDEXED (RANDOM)
• INVALID KEY: If the error is caused by
an invalid key, recovery is possible.
If the error is not an invalid key and
the USE AFTER ERROR option is not
specified, the program is terminated.
• USE AFTER STANDARD ERROR: Control goes
to the declarative section. The
programmer can check the error type in
the section by specifying data-name-1
in the GIVING option. If the error is
caused by a key error or the "no space
found" condition, recovery is possible.
On a READ error, the block can be
skipped by executing additional READ
statements.
If the error persists
(more bad READ statements than the
blocking factor), processing is limited
to a CLOSE statement. Any other error
cannot be corrected. The program may
continue executing, but processing of
the file is limited to CLOSE. If the
programmer closes the file, he may do
so in either the declarative section or
in the main body of his program.
INDEXED (SEQUENTIAL)
A. WRITE (load mode)
• Operating System: If the error
cannot be corrected, the program
will ABEND unless an error
processing option is specified.
• INVALID KEY: If the error is caused
by an invalid key, recovery is
possible.
(The programmer may
attempt to reconstruct the key and
retry the operation, or may bypass
the error record.)
• USE AFTER STANDARD ERROR: Control
goes to the declarative section.
The programmer can check the error
type in the section by specifying
data-name-1 in the GIVING option.
If the error is the result of a key
error, recovery is possible.
If the
error is not a key error, the error
cannot be corrected.
The program
may continue executing, but
processing of the file is limited to
CLOSE.
If the programmer closes the
file, he may do so in either the
declarative section or in the main
body of his program.

Table 18. Recovery from an Invalid Key Condition or from an Input/Output Error

Table 19. Input/Output Error Processing Facilities

Notes:
1. Holds only for WRITE.
2. Error cannot be caused by an invalid key.
3. No system error processing facility is available. If errors occur, they are ignored and processing continues, unless a programmer-specified error processing routine is specified.

B. READ, REWRITE (scan mode)

   • Operating System: If the error cannot be corrected, the program will ABEND unless an error processing option is specified.

   • INVALID KEY: The error cannot be caused by an invalid key. A source program coding error is implied and a compiler diagnostic message is generated.

   • USE AFTER STANDARD ERROR: The programmer may specify this option in order to display the cause of the error. Control goes to the declarative section. The programmer can check the error type in the section by specifying data-name-1 in the GIVING option. Since the error cannot be caused by an invalid key, processing of the file is limited to CLOSE. If the programmer elects to close the file, he may do so in either the declarative section or in the main body of his program.

DIRECT or RELATIVE (RANDOM)

• Operating System: If the error cannot be corrected, the program will ABEND unless an error processing option is specified.

• INVALID KEY: If the error is caused by an invalid key, recovery is possible.

• USE AFTER STANDARD ERROR: Control goes to the declarative section. The programmer can check the error type in the section by specifying data-name-1 in the GIVING option. If the error is the result of a key error or the "no space found within the search limit" condition, recovery is possible. Any other error cannot be corrected. The program may continue executing, but processing of the file is limited to CLOSE. If the programmer closes the file, he may do so in either the declarative section or in the main body of his program.

DIRECT or RELATIVE (SEQUENTIAL)
• Operating System: If no error
processing option is specified, a
message is written to the console
providing identification of the file
and type of input/output error. Then
control is returned to the system.
For
sequential data sets, if EROPT has SKP
or ACC (as specified in the JCL for the
data set), an ABEND will not occur and
processing will continue.
• INVALID KEY: A transfer of control to the procedure indicated in the INVALID KEY phrase occurs if additional space cannot be allocated to write the record requested. This condition occurs when either no more space is available or 16 extents have already been allocated on the last volume assigned to the data set. The transfer of control occurs only if a secondary-quantity is specified in the DD statement SPACE, SPLIT, or SUBALLOC parameter. If no secondary-quantity is specified, the primary-quantity is assumed to be the exact amount of space required for the data set, and any attempt to write a record beyond the storage allocated causes the program to end abnormally. When an INVALID KEY error occurs, the file can be closed so that it may subsequently be reopened for retrieval as INPUT or I-O.

• USE AFTER STANDARD ERROR: The programmer may specify this option in order to display the cause of the error. Control goes to the declarative section. The programmer can check the error type in the section by specifying data-name-1 in the GIVING option. If the error is not the result of an invalid key, processing of the file is limited to CLOSE. If the programmer elects to close the file, he may do so in either the declarative section or in the main body of his program.
Notes: The user should consider the following points when a relatively large number of INVALID KEY exits or declarative sequences (with GO TO exits) are to be executed:

1.  The distinction between error processing via an error declarative and the INVALID KEY clause. When an input/output operation is requested, a storage area (called an input/output block, or IOB) is allocated until the request is satisfied (or, in the event of an error, until return from the user-provided error-handling routine). If the error declarative is used, a normal exit from the declarative returns control to the system and frees the IOB. When the INVALID KEY routine is used, however, the system does not regain control, and the IOB is not freed.

2.  The error declarative dynamically allocates storage for a register save area upon entry. If a GO TO statement is used to exit from the declarative, neither this save area nor the IOB is freed.

To make the maximum space available to other users, the programmer should rely on the declarative as much as possible, taking a normal exit from it. Otherwise, it is recommended that the programmer specify a larger region.

VOLUME LABELING
Various groups of labels may be used in
secondary storage to identify magnetic-tape
and mass storage volumes, as well as the
data sets they contain. The labels are
used to locate the data sets and are
identified and verified by label processing
routines of the operating system.
There are two different kinds of labels,
standard and nonstandard.
Magnetic tape
volumes can have standard or nonstandard
labels, or they can be unlabeled. The
type(s) of label processing for tape
volumes to be supported by an installation
is selected during the system generation
process. Mass storage volumes are
supported with standard labels only.
Standard labels consist of volume labels
and groups of data set labels.
The volume
label group precedes or follows data on the
volume; it identifies and describes the
volume. The data set label groups precede
and follow each data set on the volume, and
identify and describe the data set.
• The data set labels that precede the
data set are called header labels.
• The data set labels that follow the
data set are called trailer labels.
They are almost identical to the header
labels.
• The data set label groups can
optionally include standard user labels
except for ISAM files.
• The volume label groups can optionally
include standard user labels for QSAM
files.
Nonstandard labels can have any format
and are processed by routines provided by

the programmer. Unlabeled volumes contain
only data sets and tapemarks.
In the job
control statements, a DD statement must be
provided for each data set to be processed.
The LABEL parameter of the DD statement is
used to describe the data set's labels.
Specific information about the contents
and physical location of labels is
contained in the publications IBM System/360 Operating System: Data Management Services and IBM System/360 Operating System: Tape Labels, Order No. GC28-6680.

STANDARD LABEL FORMAT
Standard labels are 80-character records that are recorded in EBCDIC and odd parity on 9-track tape, or in BCD and even parity on 7-track tape. The first four characters are always used to identify the labels. These identifiers are:

   VOL1            volume label
   HDR1 and HDR2   data set header labels
   EOV1 and EOV2   data set trailer labels (end-of-volume)
   EOF1 and EOF2   data set trailer labels (end-of-data set)
   UHL1 to UHL8    user header labels
   UTL1 to UTL8    user trailer labels

The format of the mass storage volume
label group is the same as the format of
the tape volume label group, except one of
the data set labels of the initial volume
label consists of the data set control
block (DSCB). The DSCB appears in the
volume table of contents (VTOC) and
contains the equivalent of the tape data
set header and trailer information, in
addition to space allocation and other
control information.

STANDARD LABEL PROCESSING
Standard label processing as performed
by the system consists of the following
basic functions:
• Checking the labels on input data sets
to ensure that the correct volume is
mounted, and to identify, describe, and
protect the data set being processed.
• Checking the existing labels on output
data sets to ensure that the correct
volume is mounted and to prevent
overwriting of vital data.
• Creating and writing new labels on
output data sets.
When a data set is opened for input, the
volume label and the header labels are
processed.
For an input end-of-data
condition, the trailer labels are processed
when a CLOSE statement is executed.
For an
input end-of-volume condition, the trailer
labels on the current volume are processed,
and then the volume label and header labels
on the next volume are processed.
When a data set is opened for output,
the existing volume label and HDRl label
are checked, and new header labels are
written.
For an output end-of-volume
condition, trailer labels are written on
the current volume, the existing volume
labels and header labels on the next volume
are checked, and then new header labels are
written on the next volume.
When an output
data set is closed, trailer labels are
written.

STANDARD USER LABELS
Standard user labels contain
user-specified information about the
associated data set. User labels are
optional within the standard label groups.
The format used for user header labels (UHL1-8) and user trailer labels (UTL1-8) consists of a label 80 characters in length recorded in EBCDIC on 9-track tape units, or in BCD on 7-track tape units. The first three bytes consist of the characters that identify the label: UHL for a user header label (at the beginning of a data set) or UTL for a user trailer label (at the end-of-volume or end-of-data set). The next byte contains the relative position of this label within a set of labels of the same type and can be any number from 1 through 8. The remaining 76 bytes consist of user-specified information.
User labels are generally created,
examined, or updated when the beginning or
end of a data set or volume (reel) is
reached.
User labels are applicable for
sequential, direct, and relative data sets.
For sequentially processed data sets, end
or beginning of volume exits are allowed
(i.e., "intermediate" trailers and headers
may be created or examined).
For direct or
relative data sets, user label routines
will be given control only during OPEN or
CLOSE condition for a file opened as INPUT, OUTPUT, or I-O. Trailer labels for files opened as INPUT or I-O are processed when a CLOSE statement is executed for the file that has reached an AT END condition.
Thus, for standard sequential data sets,

the user may create, examine, or update up
to eight header labels and eight trailer
labels on each volume of the data set,
whereas for direct or relative data sets
the user may create, examine, or update up
to eight header labels during OPEN and up
to eight trailer labels during CLOSE.
Note
that these labels reside on the initial
volume of a multi-volume data set.
This
volume must be mounted at CLOSE if trailer
labels are to be created, examined, or
updated.
When standard user label processing is
desired, the user must specify the label
type of the standard and user labels (SUL)
on the DD statement that describes the
data set. For mass storage volumes,
specification of a LABEL subparameter of
SUL results in a separate track being
allocated for use as a user-label track
when the data set is created.
This
additional track is allocated at initial
allocation and for sequential data sets at
end-of-volume (volume switch) time.
The
user-label track (one per volume of a
sequential data set) will contain both user
header and user trailer labels.
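For example, standard and user label processing for a mass storage data set might be requested as follows (the ddname, data set name, volume serial, and space figures are invented for this sketch):

   //MASTER   DD  DSNAME=DEPT999.MASTER,DISP=(NEW,CATLG),
   //             UNIT=2311,VOLUME=SER=222222,
   //             SPACE=(TRK,(50,10)),LABEL=(,SUL)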

User Label Totaling (BSAM and QSAM only)

When creating or processing a data set with user labels on a sequential file, the programmer may develop control totals to obtain exact information about each volume of the data set. This information can be stored in his user labels. For example, a control total accumulated as the data set is created can be stored in a user label and later compared with a total accumulated while processing a volume. The user totaling facility enables the programmer to synchronize the control data that he has created while processing a data set with records physically written on a volume. In this way, he can tell exactly what records were written. This information can also be used for accurately labeling tape reels (i.e., assigning physical adhesive labels).

To request this option, specify OPTCD=T in the DCB parameter of the DD statement. The user's TOTALING area, where control data is accumulated, is provided by the user. In this area, the user can store information on each record he writes. When an input/output operation is scheduled, the control program sets up a user TOTALED save area that preserves an image of the information in the user's TOTALING area. When the output USE LABEL declarative is entered, the values accumulated in the user's TOTALING area corresponding to the last record actually written on the volume are stored in the TOTALED area. These values can be included in user labels.

When using this facility for an output data set (i.e., when creating the data set), the programmer must update his control data in the TOTALING area prior to issuing a WRITE instruction. When subsequently using this data set for input, the programmer can accumulate the same information as each record is read. These values can be compared with the ones previously stored in the user label when the records were created.

Variable-length records with APPLY WRITE-ONLY or records with SAME RECORD AREA specified require special considerations when using the TOTALING option. Since the control program determines whether a variable-length record will fit in a buffer after a WRITE instruction has been issued, the values accumulated may include one more record than is actually written on the volume. In this case, the programmer must update his TOTALING area after issuing a WRITE instruction.

User label totaling is not available with S-mode records.

For further information on user label totaling, see the publication IBM System/360 Operating System: Full American National Standard COBOL.
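User totaling might be requested on the DD statement for the file as follows (the ddname, data set name, and volume serial are invented; LABEL=(,SUL) is shown because the accumulated totals are stored in user labels):

   //TOTFILE  DD  DSNAME=DEPT999.DETAIL,DISP=(NEW,KEEP),
   //             UNIT=2400,VOLUME=SER=TAPE03,
   //             LABEL=(,SUL),DCB=OPTCD=T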

NONSTANDARD LABEL FORMAT
Nonstandard labels do not conform to the
standard label formats.
They are designed
by programmers and are written and
processed by programmers.
Nonstandard
labels can be any length less than 4096
bytes. There are no requirements as to the
length, format, contents, and number of
nonstandard labels, except that the first
record on the volume cannot be a standard
volume label.
In other words, the first
record cannot be 80 characters in length
with the identifier VOL1 as its first four
characters.

NONSTANDARD LABEL PROCESSING
To use nonstandard labels (NSL), the
programmer must:
• Create nonstandard label processing
routines for input header labels, input
trailer labels, output header labels,
and output trailer labels.
• Insert these routines into the
operating system as part of the SVC
library (SYS1.SVCLIB).

• Code NSL in the LABEL parameter of the
DD statement at execution time.
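A DD statement for a tape data set with nonstandard labels might therefore be coded as follows (the ddname, data set name, and volume serial are invented for this sketch):

   //TAPEIN   DD  DSNAME=OLDFILE,DISP=OLD,UNIT=2400,
   //             VOLUME=SER=NS0001,LABEL=(,NSL)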
The system verifies that the tape has a
nonstandard label.
Then if NSL is
specified in the LABEL parameter, it loads
the appropriate NSL routines into transient
areas.
These NSL routines are entered at
OPEN, CLOSE, and END-OF-VOLUME conditions
by the respective executors.

of data set positioning. These routines
may communicate at the LABEL source level
with USE BEFORE LABEL PROCEDURE
declaratives by means of linkage described
under "User Label Procedures."

USER LABEL PROCEDURE
For a data set opened as output, the NSL
routines entered include:
• At OPEN time, a header routine to check
the old header and/or create the new
header;
• At CLOSE time, a trailer-creation
routine;
• At EOV time, a trailer-creation routine
and a header routine.
For a data set opened as input essentially
the same types of routines are required.
Note: The NSL routines must observe the
following conventions:
1.

Follow Type-IV SVC routine
conventions.

2.

Use GETMAIN and FREEMAIN for work
areas.

3.

Be reentrant load modules of 1024
bytes each.

4.

Use EXCP for I/O operations and XCTL
for passing control among load modules
and then returning to the I/O-support
routines.

5.

6.

Begin with the letters NSL if the
system branches to them directly.
(Other user-written modules having to
do with nonstandard labels must begin
with the letters IGC.)
Have as their entry points the first
byte in each load module.

In addition, the NSL routines must write
their own tapemarks, do all I/O operations
necessary (via EXCP), determine when all
labels have been processed, and take care
132

The USE ... LABEL PROCEDURE statement
provides the user with label handling
procedures at the COBOL source level to
handle nonstandard or user labels. The
BEFORE option indicates processing of
nonstandard labels. The AFTER option
indicates processing of standard user
labels. The labels must be listed as
data-names in the LABEL RECORDS clause in
the File Description entry for the file.
When the file is opened as input, the label
is read in and control is passed to the USE
declarative if a USE ... LABEL PROCEDURE is
specified for the OPEN option or for the
file.

If the file is opened as output, a
buffer area for the label is provided and
control is passed to the USE declarative if
a USE ... LABEL PROCEDURE is specified for
the OPEN option or for the file. For files
opened as INPUT or I-O, control is passed
to the USE declarative to process trailer
labels when a CLOSE statement is executed
for the file that has reached the AT END
condition. A more detailed discussion of
the USE ... LABEL PROCEDURE statement is
contained in the publication IBM System/360
Operating System: American National
Standard COBOL.
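A minimal sketch of such a declarative follows; the file-name TAPE-IN, the label record NSTD-HDR, and its fields are hypothetical names chosen for illustration, not part of any system convention:

```cobol
       DATA DIVISION.
       FILE SECTION.
       FD  TAPE-IN
           LABEL RECORDS ARE NSTD-HDR.
       01  TAPE-REC             PIC X(80).
       01  NSTD-HDR.
           05  NSTD-ID          PIC X(4).
           05  NSTD-SERIAL      PIC X(6).
           05  FILLER           PIC X(70).
       PROCEDURE DIVISION.
       DECLARATIVES.
       HDR-CHECK SECTION.
           USE BEFORE BEGINNING LABEL PROCEDURE ON TAPE-IN.
       HDR-TEST.
*          REJECT THE VOLUME IF THE LABEL IDENTIFIER
*          ('NHDR' IS AN ASSUMED INSTALLATION VALUE)
*          IS NOT THE ONE EXPECTED.
           IF NSTD-ID NOT = 'NHDR'
               MOVE 1 TO LABEL-RETURN.
       END DECLARATIVES.
```

The BEFORE BEGINNING options select nonstandard header labels; setting the LABEL-RETURN special register to a nonzero value signals that an incorrect volume was mounted, as described under "User Label Procedures."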

One of the concerns of the programmer is
linkage between the nonstandard label SVC
routine and the USE BEFORE LABEL PROCEDURE
section. Other problems related to writing
nonstandard label SVC routines are
discussed in the publication IBM System/360
Operating System: System Programmer's
Guide.

When the nonstandard label SVC routine
has determined that a particular DCB has
nonstandard labels, the nonstandard label
routine must inspect the DCB exit list for
an active entry to ensure that there is a
USE BEFORE ••• LABEL section for this DCB and
for that type of label processing. The DCB
field EXLST contains a pointer to this exit
list. An active entry is defined as a
1-byte code other than X'OO' or X'80'
followed by a 3-byte address of the
appropriate label section (Figure 33).

r-----T-----------------------------------,
|Code |             Exit List             |
~-----+-----------------------------------~
|  1  | (USE section for header labels)   |
|  2  | (USE section for trailer labels)  |
~-----+-----------------------------------~
| Note: Code 1 is set to X'01' indicating |
| INPUT, or X'02' indicating OUTPUT.      |
| Code 2 is set to X'03' indicating       |
| INPUT, or X'04' indicating OUTPUT.      |
L-----------------------------------------J

Figure 33. Exit List Codes

Once the nonstandard label SVC routine
has verified that the exit list contains an
appropriate active entry, it must pass the
address of a parameter list in register 1.

The parameter list (Figure 34) must have
the following format.

r--------T-------------T------------------,
|        |   1 byte    |     3 bytes      |
~--------+-------------+------------------~
| Byte 0 |             | A(label buffer)  |
| Byte 4 | Flag byte   | A(DCB)           |
| Byte 8 | Error flag  |                  |
L--------------------------------------- -J

Figure 34. Parameter List Formats

r-------------T-----------T---------------,
|Routine Type |Return Code|Applicable Note|
~-------------+-----------+---------------~
|Input header |     0     |       1       |
|and/or       |     4     |       2       |
|trailer      |    16     |       3       |
~-------------+-----------+---------------~
|Output header|     4     |       1       |
|and/or       |     8     |       2       |
|trailer      |           |               |
~-------------+-----------+---------------~
|Update header|     8     |       1       |
|and/or       |    12     |       2       |
|trailer      |    16     |       3       |
~-------------+-----------+---------------~
|Notes:                                   |
|1. For output mode, the label is written |
|   or rewritten. For input mode, normal  |
|   processing is resumed; any additional |
|   user labels are ignored.              |
|2. Another label is read (for input      |
|   mode) and control is returned to the  |
|   USE BEFORE LABEL PROCEDURE section.   |
|   For output mode, the labels should be |
|   written and control should be         |
|   returned to the USE BEFORE LABEL      |
|   PROCEDURE section. When control is    |
|   returned to the nondeclarative        |
|   portion, either normal processing     |
|   will continue or the label section    |
|   will be re-entered, depending on      |
|   whether the return code is 4 or 8.    |
|3. A return code of 16 indicates that    |
|   the USE BEFORE LABEL PROCEDURE        |
|   section has determined that an        |
|   incorrect volume was mounted. When    |
|   LABEL-RETURN is set to a nonzero      |
|   value, the return code is set to 16.  |
L-----------------------------------------J

Figure 35. Label Routine Return Codes
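At the COBOL source level, the return codes of note 2 correspond to how the label declarative is left: falling through the end of the section resumes normal processing, while GO TO MORE-LABELS requests that another label be read (or written). The following sketch assumes hypothetical names (TAPE-IN, NSTD-ID, TRAILER-COUNT) and assumed label-identifier values:

```cobol
       DECLARATIVES.
       TRL-CHECK SECTION.
           USE BEFORE ENDING LABEL PROCEDURE ON TAPE-IN.
       TRL-TEST.
*          'NTRL' AND 'NEOF' ARE ASSUMED INSTALLATION
*          VALUES FOR TRAILER AND END-OF-FILE LABELS.
*          GO TO MORE-LABELS ASKS FOR ANOTHER LABEL.
           IF NSTD-ID = 'NTRL'
               ADD 1 TO TRAILER-COUNT
               GO TO MORE-LABELS.
*          ANY OTHER UNEXPECTED LABEL IS TREATED AS AN
*          INCORRECT VOLUME (RETURN CODE 16).
           IF NSTD-ID NOT = 'NEOF'
               MOVE 1 TO LABEL-RETURN.
       END DECLARATIVES.
```

Exiting the section normally with LABEL-RETURN still zero yields the normal-processing return; a nonzero LABEL-RETURN produces return code 16, as described in note 3.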

The A 
