Avaya Media Processing Server Series System
Reference Manual
(Software Release 2.1)

Avaya Business Communications Manager
Release 6.0
Document Status: Standard
Document Number: P0602477
Document Version: 3.1.12
Date: June 2010

© 2010 Avaya Inc.
All Rights Reserved.
Notices
While reasonable efforts have been made to ensure that the information in this document is complete and accurate at the time of printing,
Avaya assumes no liability for any errors. Avaya reserves the right to make changes and corrections to the information in this document
without the obligation to notify any person or organization of such changes.
Documentation disclaimer
Avaya shall not be responsible for any modifications, additions, or deletions to the original published version of this documentation
unless such modifications, additions, or deletions were performed by Avaya. End User agree to indemnify and hold harmless Avaya,
Avaya’s agents, servants and employees against all claims, lawsuits, demands and judgments arising out of, or in connection with,
subsequent modifications, additions or deletions to this documentation, to the extent made by End User.
Link disclaimer
Avaya is not responsible for the contents or reliability of any linked Web sites referenced within this site or documentation(s) provided by
Avaya. Avaya is not responsible for the accuracy of any information, statement or content provided on these sites and does not
necessarily endorse the products, services, or information described or offered within them. Avaya does not guarantee that these links will
work all the time and has no control over the availability of the linked pages.
Warranty
Avaya provides a limited warranty on this product. Refer to your sales agreement to establish the terms of the limited warranty. In
addition, Avaya’s standard warranty language, as well as information regarding support for this product, while under warranty, is
available to Avaya customers and other parties through the Avaya Support Web site: http://www.avaya.com/support
Please note that if you acquired the product from an authorized reseller, the warranty is provided to you by said reseller and not by Avaya.
Licenses
THE SOFTWARE LICENSE TERMS AVAILABLE ON THE AVAYA WEBSITE, HTTP://SUPPORT.AVAYA.COM/LICENSEINFO/
ARE APPLICABLE TO ANYONE WHO DOWNLOADS, USES AND/OR INSTALLS AVAYA SOFTWARE, PURCHASED FROM
AVAYA INC., ANY AVAYA AFFILIATE, OR AN AUTHORIZED AVAYA RESELLER (AS APPLICABLE) UNDER A
COMMERCIAL AGREEMENT WITH AVAYA OR AN AUTHORIZED AVAYA RESELLER. UNLESS OTHERWISE AGREED TO
BY AVAYA IN WRITING, AVAYA DOES NOT EXTEND THIS LICENSE IF THE SOFTWARE WAS OBTAINED FROM ANYONE
OTHER THAN AVAYA, AN AVAYA AFFILIATE OR AN AVAYA AUTHORIZED RESELLER, AND AVAYA RESERVES THE
RIGHT TO TAKE LEGAL ACTION AGAINST YOU AND ANYONE ELSE USING OR SELLING THE SOFTWARE WITHOUT A
LICENSE. BY INSTALLING, DOWNLOADING OR USING THE SOFTWARE, OR AUTHORIZING OTHERS TO DO SO, YOU,
ON BEHALF OF YOURSELF AND THE ENTITY FOR WHOM YOU ARE INSTALLING, DOWNLOADING OR USING THE
SOFTWARE (HEREINAFTER REFERRED TO INTERCHANGEABLY AS "YOU" AND "END USER"), AGREE TO THESE
TERMS AND CONDITIONS AND CREATE A BINDING CONTRACT BETWEEN YOU AND AVAYA INC. OR THE
APPLICABLE AVAYA AFFILIATE ("AVAYA").
Copyright
Except where expressly stated otherwise, no use should be made of the Documentation(s) and Product(s) provided by Avaya. All content
in this documentation(s) and the product(s) provided by Avaya including the selection, arrangement and design of the content is owned
either by Avaya or its licensors and is protected by copyright and other intellectual property laws including the sui generis rights relating
to the protection of databases. You may not modify, copy, reproduce, republish, upload, post, transmit or distribute in any way any
content, in whole or in part, including any code and software. Unauthorized reproduction, transmission, dissemination, storage, and or
use without the express written consent of Avaya can be a criminal, as well as a civil offense under the applicable law.
Third Party Components
Certain software programs or portions thereof included in the Product may contain software distributed under third party agreements
("Third Party Components"), which may contain terms that expand or limit rights to use certain portions of the Product ("Third Party
Terms"). Information regarding distributed Linux OS source code (for those Products that have distributed the Linux OS source code),
and identifying the copyright holders of the Third Party Components and the Third Party Terms that apply to them is available on the
Avaya Support Web site: http://support.avaya.com/Copyright.
Trademarks
The trademarks, logos and service marks ("Marks") displayed in this site, the documentation(s) and product(s) provided by Avaya are the
registered or unregistered Marks of Avaya, its affiliates, or other third parties. Users are not permitted to use such Marks without prior
written consent from Avaya or such third party which may own the Mark. Nothing contained in this site, the documentation(s) and
product(s) should be construed as granting, by implication, estoppel, or otherwise, any license or right in and to the Marks without the
express written permission of Avaya or the applicable third party. Avaya is a registered trademark of Avaya Inc. All non-Avaya
trademarks are the property of their respective owners.
Downloading documents
For the most current versions of documentation, see the Avaya Support Web site: http://www.avaya.com/support
Contact Avaya Support
Avaya provides a telephone number for you to use to report problems or to ask questions about your product. The support telephone
number is 1-800-242-2121 in the United States. For additional support telephone numbers, see the Avaya Web site: http://www.avaya.com/support

Table of Contents
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Intended Audience . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
How to Use This Manual . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Organization of This Manual . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Conventions Used in This Manual . . . . . . . . . . . . . . . . . . . . . . . . 13
Solaris and Windows 2000 Conventions . . . . . . . . . . . . . . . . . . . 15
Trademark Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

Avaya MPS Architectural Overview . . . . . . . . . . . . . . . . . . 17
Overview of the Avaya Media Processing Server (MPS) System 18
System Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Hardware Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Front Control Panel (FCP) . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Variable Resource Chassis (VRC) . . . . . . . . . . . . . . . . . . . . 22
Power Supplies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
VRC Rear Panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Drive Bays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Application Processor . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Network Interface Controller (NIC) or Hub-NIC . . . . . . 27
Telephony Media Server (TMS). . . . . . . . . . . . . . . . . . . . . . 28
Phone Line Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Multiple DSP Module (MDM) . . . . . . . . . . . . . . . . . . . . 31
System LAN Interface . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Field Programmable Gate Arrays (FPGA) and the Boot ROM . . . . . . . 32
TelCo Connector Panel (TCCP) . . . . . . . . . . . . . . . . . . . . . 33
Software Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Software Environment . . . . . . . . . . . . . . . . . . . . . . . . . . 35
ASE Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
ASE/VOS Integration Layer . . . . . . . . . . . . . . . . . . . . . . 39
VOS Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
System Utilities and Software . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
alarm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
dlog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
dlt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
PeriProducer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
PeriReporter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
PeriStudio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
PeriView . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
PeriWeb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
vsh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60


Base System Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Base System Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
System Startup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Solaris Startup/Shutdown . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Windows Startup/Shutdown . . . . . . . . . . . . . . . . . . . . . . . . . 69
SRP (Startup and Recovery Process) . . . . . . . . . . . . . . . . . . 70
Manually Starting and Stopping SRP . . . . . . . . . . . . . . . 70
VPS Topology Database Server (VTDB) . . . . . . . . . . . . 71
Restart of Abnormally Terminated Programs . . . . . . . . . 72
Communication with VOS Processes . . . . . . . . . . . . . . . 72
SRP Configuration Command Line Arguments . . . . . . . 74
VSH Shell Commands . . . . . . . . . . . . . . . . . . . . . . . . . . 75
SRP Status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Call Control Manager (CCM/CCMA) . . . . . . . . . . . . . . . . . 82
Startup Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
The hosts File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
User Configuration Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
The .xhtrahostsrc File. . . . . . . . . . . . . . . . . . . . . . . . . 86
The MPSHOME Directory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
The MPSHOME/common Directory . . . . . . . . . . . . . . . . . . . . . . 88
The MPSHOME/common/etc Directory . . . . . . . . . . . . . . 88
The srp.cfg File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
The vpshosts File . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
The compgroups File . . . . . . . . . . . . . . . . . . . . . . . . . 95
The gen.cfg File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
The global_users.cfg File . . . . . . . . . . . . . . . . . . 98
The alarmd.cfg and alarmf.cfg Files . . . . . . . . . 99
The pmgr.cfg File . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
The periview.cfg File . . . . . . . . . . . . . . . . . . . . . . 102
The MPSHOME/common/etc/tms Directory . . . . . . . . 103
The sys.cfg File . . . . . . . . . . . . . . . . . . . . . . . . . . 103
The tms.cfg File . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
Protocol Configuration Files . . . . . . . . . . . . . . . . . . . . . . . 123
The $MPSHOME/packages Directory . . . . . . . . . . . . . . 125
%MPSHOME%\PERIase - /opt/vps/PERIase. . 127
The /etc/ase.conf file . . . . . . . . . . . . . . . . . . . . . 127
The /etc/services File . . . . . . . . . . . . . . . . . . . . . 129
%MPSHOME%\PERIbrdge - /opt/vps/PERIbrdge 132
%MPSHOME%\PERIdist - /opt/vps/PERIdist. 133
%MPSHOME%\PERIglobl - /opt/vps/PERIglobl 133
%MPSHOME%\PERIview - /opt/vps/PERIview. 134
%MPSHOME%\PERIplic - /opt/vps/PERIplic. 134
%MPSHOME%\PERItms - /opt/vps/PERItms. . 134
The /cfg/atm_triplets.cfg File . . . . . . . . . . . 135
The /cfg/ps_triplets.cfg File . . . . . . . . . . . . 136

The /cfg/tms_triplets.cfg File . . . . . . . . . . . 136
%MPSHOME%\PERImps - /opt/vps/PERImps . . 137
The MPSHOME/tmscommN Directory. . . . . . . . . . . . . . . . 138
MPS 500 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
MPS 1000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
The MPSHOME/mpsN Directory . . . . . . . . . . . . . . . . . . . . 139
The MPSHOME/mpsN/apps Directory . . . . . . . . . . . 140
The MPSHOME/mpsN/etc Directory . . . . . . . . . . . . . 142
VMM Configuration Files . . . . . . . . . . . . . . . . . . . . . . . . . 144
The vmm.cfg File . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
The vmm-mmf.cfg File . . . . . . . . . . . . . . . . . . . . . . . 146
ASE Configuration Files. . . . . . . . . . . . . . . . . . . . . . . . . . . 148
The ase.cfg File . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
The aseLines.cfg File . . . . . . . . . . . . . . . . . . . . . . 149
CCM Configuration Files . . . . . . . . . . . . . . . . . . . . . . . . . . 151
The ccm_phoneline.cfg File . . . . . . . . . . . . . . . . 151
The ccm_admin.cfg File . . . . . . . . . . . . . . . . . . . . . 155
TCAD Configuration Files . . . . . . . . . . . . . . . . . . . . . . . . . 157
The tcad-tms.cfg File . . . . . . . . . . . . . . . . . . . . . . 157
The tcad.cfg File . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
TRIP Configuration Files . . . . . . . . . . . . . . . . . . . . . . . . . . 159
The trip.cfg File . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
TMS Watchdog Functions . . . . . . . . . . . . . . . . . . . . . . . . . 160

Common Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
Multi-Media Format Files (MMFs) . . . . . . . . . . . . . . . . . . . . . . 164
How to Create an MMF File. . . . . . . . . . . . . . . . . . . . . . . . 164
Vocabulary MMF Files vs. CMR MMF Files . . . . . . . . . . 165
Activating MMF Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
Delimited and Partial Loading . . . . . . . . . . . . . . . . . . . 168
Audio Playback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
Custom Loading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
Using Hash Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
System MMF Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
Application-Specific MMF Files . . . . . . . . . . . . . . . . . 174
Default Vocabulary and Record MMF Files . . . . . . . . 175
Diagnostics and Reports . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
Synchronizing MMF Files Across Nodes. . . . . . . . . . . . . . 177
ZAP and MMF files on the MPS . . . . . . . . . . . . . . . . . 177
MMF Abbreviated Content (MAC) File . . . . . . . . . . . . 178
Basic Implementation (Low Volume/Traffic) . . . . . . . 178
Advanced Implementation (High Volume/Traffic) . . . 181
Updating a Specific Element . . . . . . . . . . . . . . . . . . . . 185
Exception Processing . . . . . . . . . . . . . . . . . . . . . . . . . . 187
Log Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188

Synchronization (ZAP) Command Summary . . . . . . . . 191
Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
Call Simulator Facility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
VEMUL Script Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
Script Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
Configuration Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . 196
Script Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
Statements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
Primitives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
Phone Line Behavior During Simulation . . . . . . . . . . . . . . 199
Call Simulator Conditions and Usage. . . . . . . . . . . . . . . . . 199
Command Line Interface . . . . . . . . . . . . . . . . . . . . . . . . . . 200
Example Call Simulation Script Files. . . . . . . . . . . . . . . . . 202
Alarm Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
Filtering Precepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
Command Line Interface . . . . . . . . . . . . . . . . . . . . . . . . . . 205
alarmf Command Line Options . . . . . . . . . . . . . . . . 206
Notation Functionality . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Logical Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
Action Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
Filtering Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
Interapplication/Host Service Daemon Data Exchange . . . . . . . 215
VMST (VMS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Starting Under SRP . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
PeriPro Interaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
Arguments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
Examples: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
VTCPD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
Single Connection to Host . . . . . . . . . . . . . . . . . . . . . . 221
Multiple Connections to Multiple Hosts . . . . . . . . . . . . 221
One Connection Per Line . . . . . . . . . . . . . . . . . . . . . . . 222
Multiple VTCPD Daemons . . . . . . . . . . . . . . . . . . . . . 222
Host Connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
Attaching to VMST . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
Message Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
Message Identification (ID) . . . . . . . . . . . . . . . . . . . . . 231
Connection Capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
Application-Host Interaction Configuration Options . . 234
Queuing Requests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
Monitoring Host Connections . . . . . . . . . . . . . . . . . . . . 238
Backup LAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
VFTPD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
Specifying a Port . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
Automatic Startup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
Automatic FTP Logins . . . . . . . . . . . . . . . . . . . . . . . . . 241
Identifying the Configured Host Computers . . . . . . . . 242

Configuration Procedures and Considerations . . . . . . . . . 243
Making Changes to an Existing System . . . . . . . . . . . . . . . . . . 244
Adding Spans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
Modifying the Span Resource Set . . . . . . . . . . . . . . . . . . . 244
Changing Pool/Class Names . . . . . . . . . . . . . . . . . . . . . . . 245
Renumbering a Component . . . . . . . . . . . . . . . . . . . . . . . . 245
Renaming a Solaris MPS Node . . . . . . . . . . . . . . . . . . . . . 246
Renaming a Windows MPS Node . . . . . . . . . . . . . . . . . . . 247
Introducing a New Node. . . . . . . . . . . . . . . . . . . . . . . . . . . 248
Enabling Statistics Collection. . . . . . . . . . . . . . . . . . . . . . . 249
Debug Terminal Connection . . . . . . . . . . . . . . . . . . . . . . . 250
Connection Using a Dumb Terminal or PC . . . . . . . . . 250
Connection from the System Console . . . . . . . . . . . . . 250
Verifying/Modifying Boot ROM Settings . . . . . . . . . . . . . . . . 252
DCC Boot ROM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
TMS Boot ROM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
NIC Boot ROM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
Resetting the NIC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
TMS Computer Telephony (CT) Bus Clocking . . . . . . . . . . . . 265
N+1 Redundancy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
Sample MPS 1000 N+1 Redundancy System Configuration 267
TRIP Failback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
Directory Layout on a Secondary (Backup) Node . . . . . . . 269
Least Cost Routing Daemon . . . . . . . . . . . . . . . . . . . . . . . . 271
Redundancy Configuration Daemon (RCD). . . . . . . . . . . . 271
The Failover/Failback Process . . . . . . . . . . . . . . . . . . . . . . 273
Installation and Configuration . . . . . . . . . . . . . . . . . . . . . . 274
Create the Secondary Node . . . . . . . . . . . . . . . . . . . . . . 274
TMSCOMM Component Configuration . . . . . . . . . . . 274
Edit the vpshosts File . . . . . . . . . . . . . . . . . . . . . . . . . . 275
Edit the tms.cfg File . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
Edit TRIP and RCD Configuration Files . . . . . . . . . . . 276
Edit the gen.cfg file . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
PMGR configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
Media Directories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
First Startup After Configuration . . . . . . . . . . . . . . . . . . . . 280
Verifying N+1 Functionality . . . . . . . . . . . . . . . . . . . . . . . 283
Failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
Failback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
Speech Server Resources in N+1 Redundancy. . . . . . . . . . 285
Pool Manager (PMGR) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
Resource Object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
Allocation/Deallocation . . . . . . . . . . . . . . . . . . . . . . . . 288
Pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289

Resource Identifier/String . . . . . . . . . . . . . . . . . . . . . . . 289
Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
Port Service States . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
Network Failure Detection (Pinging) . . . . . . . . . . . . . . 291
Database Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
Platform Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
Starting a Reader . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
Starting a Writer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
Database Format Conversion . . . . . . . . . . . . . . . . . . . . . . . 293
Reader/Writer Synchronization . . . . . . . . . . . . . . . . . . . . . 293
File Size Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
Call Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
Listening to Calls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294

Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
Antivirus Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
Secure Shell . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301


Preface


Scope
The Avaya Media Processing Server Series System Reference Manual details the
procedures and parameters for configuring the Avaya Media Processing Server (MPS)
Series system for online operation in a variety of telephony environments. In addition,
this manual provides configuration parameters and basic file information for elements
common to all MPS systems within the network. Note, however, that although there are two
basic products available in the MPS system - a single rack-mounted version and a
cabinet-enclosed network configuration that relies on the MPS 1000 - this manual deals
almost exclusively with the latter.
In addition to this document, the Avaya Media Processing Server Series System
Operator’s Guide may be particularly helpful. It provides a road map through the
major functions in the daily operation and monitoring of the MPS system. For a list of
other user manuals, see the Reference Material link in PeriDoc.

Intended Audience
This manual is intended for the persons who will be configuring the MPS for a
specific site and/or maintaining it from a particular perspective. The reader should be
familiar with telecommunications and computer equipment, their functions, and
associated terminology. In addition, the reader must be familiar with the
characteristics of the specific installation site, including site-specific power systems,
computer systems, peripheral components, and telephone networks.
Some of the material covered here involves the configuration of basic and critical
MPS parameters. Small inaccuracies in the configuration of these parameters can
impede system performance. Individuals without highly specialized knowledge in this
area should not attempt to change the defaults.
This guide assumes that the user has completed an on-site system familiarization
training program conducted as part of the initial system installation. Basic knowledge
of the Solaris and/or Windows 2000 operating system(s) is also assumed.

How to Use This Manual
This manual uses many standard terms relating to computer system and software
application functions. However, it contains some terminology that can only be
explained in the context of the MPS system. Refer to the Glossary of Avaya Media
Processing Server Series Terminology for definitions of product specific terms.
It is not essential that this document be read cover-to-cover, as its entire contents are
not universally applicable to all MPS environments. It is essential, however, that there
is a clear understanding of exactly what information pertains to your environment and
that you can identify, locate, and apply the information documented in this manual.
Later, you can use the Table of Contents to locate topics of interest for reference and
review.
If you are reading this document online, use the hypertext links to quickly locate
related topics. Click once with your mouse while positioned with your cursor over the
hypertext link. Click on any point in a Table of Contents entry to move to that topic.
Click on the page number of any Index entry to access that topic page. Use the
hyperlinks at the top and bottom of each HTML “page” to help you navigate the
documentation. Pass your cursor over the Avaya Globemark to display the title,
software release, publication number, document release, and release date for the
HTML manual you are using.
For additional related information, use the Reference Material link in PeriDoc. To
familiarize yourself with various specialized textual references within the manual, see
Conventions Used in This Manual on page 13.
Periphonics is now part of Avaya. The name Periphonics, and variations thereof,
appears in this manual only where it is referred to in a product (for example, a
PeriProducer application, the PERImps package, the perirev command, etc.).


Organization of This Manual
This document is designed to identify the procedures and configuration parameters
required for successful MPS operations. It provides an overview of the MPS system
and proceeds to document both basic and common system parameters. The following
passages provide an overview of the information contained in each area of this
manual.
Chapter 1 - Avaya Media Processing Server Series Architectural Overview
Provides a description of the MPS system and an overview of its hardware
and software. Diagrams and describes the MPS structure, its software
processes, and identifies other system utilities.
Chapter 2 - Base System Configuration
Describes and diagrams the system directory structure and startup and
shutdown, delineates the Startup and Recovery Process (SRP), and details
MPSHOME and all required configuration files.
Chapter 3 - Common Configuration
Documents the facilities available on all (common) MPS platforms. Details
MultiMedia Format (MMF) file creation and utilization. Also covers call
simulation, alarm filtering, and exchange of data between applications, hosts,
and MPS.
Chapter 4 - Configuration Procedures and Considerations
Contains common procedures and comprehensive considerations for
modifying existing systems and adding features.
Appendix A - Process and Utility Command Summary
Lists commands for some of the processes and utilities most commonly
interacted with in the MPS system. Provides brief definitions for each and
links to more detailed information.
Appendix B - Avaya MPS Specifications
Contains physical, electrical, environmental, and interface specifications for
the MPS.

Conventions Used in This Manual
This manual uses different fonts and symbols to differentiate between document
elements and types of information. These conventions are summarized in the
following table.

Conventions Used in This Manual (Sheet 1 of 2)

Normal text
    Normal text font is used for most of the document.

important term
    The Italics font is used to introduce new terms, to highlight meaningful
    words or phrases, or to distinguish specific terms from nearby text.

system command
    This font indicates a system command and/or its arguments. Such keywords
    are to be entered exactly as shown (i.e., users are not to fill in their
    own values).

command, condition, and alarm
    Command, Condition, and Alarm references appear on the screen in magenta
    text and reference the Command Reference Manual, the PeriProducer User’s
    Guide, or the Alarm Reference Manual, respectively. Refer to these
    documents for detailed information about Commands, Conditions, and Alarms.

file name / directory
    This font is used for highlighting the names of disk directories, files,
    and extensions for file names. It is also used to show displays on
    text-based screens (e.g., to show the contents of a file).

on-screen field
    This font is used for field labels, on-screen menu buttons, and action
    buttons.

<key>
    A term that appears within angled brackets denotes a terminal keyboard
    key, a telephone keypad button, or a system mouse button.

Book Reference
    This font indicates the names of other publications referenced within
    the document.

cross reference
    A cross reference appears on the screen in blue text. Click on the cross
    reference to access the referenced location. A cross reference that
    refers to a section name accesses the first page of that section.

(Note icon)
    The Note icon identifies notes, important facts, and other keys to
    understanding.

(Caution icon)
    The Caution icon identifies procedures or events that require special
    attention. The icon indicates a warning that serious problems may arise
    if the stated instructions are improperly followed.


Conventions Used in This Manual (Sheet 2 of 2)

(Flying Window icon)
    The flying Window icon identifies procedures or events that apply to the
    Windows 2000 operating system only. (1)

(Solaris icon)
    The Solaris icon identifies procedures or events that apply to the
    Solaris operating system only. (2)

1. Windows 2000 and the flying Window logo are either trademarks or registered
   trademarks of the Microsoft Corporation.
2. Solaris is a trademark or registered trademark of Sun Microsystems, Inc. in
   the United States and other countries.

Solaris and Windows 2000 Conventions
This manual depicts examples (command line syntax, configuration files, and screen
shots) in Solaris format. In certain instances, Windows 2000 specific commands,
procedures, or screen shots are shown where required. The following table lists
examples of general operating system conventions to keep in mind when using this
manual with either the Solaris or Windows 2000 operating system.

                Solaris                       Windows 2000

Environment     $MPSHOME                      %MPSHOME%

Paths           $MPSHOME/common/etc           %MPSHOME%\common\etc

Command         <command> &                   start /b <command>
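As a minimal sketch of these conventions in practice (some_command is a placeholder for any MPS command, not an actual utility name):

    # Solaris (Bourne shell): environment variable, path, and background-command syntax
    echo $MPSHOME                  # print the MPS home directory
    ls $MPSHOME/common/etc         # list the common configuration directory
    some_command &                 # a trailing "&" runs the command in the background

    rem Windows 2000 (cmd.exe) equivalents
    echo %MPSHOME%
    dir %MPSHOME%\common\etc
    rem "start /b" runs the command in the background without opening a new window
    start /b some_command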

Trademark Conventions
The following trademark information is presented here and applies throughout for
third party products discussed within this manual. Trademarking information is not
repeated hereafter.
Solaris is a trademark or registered trademark of Sun Microsystems, Inc. in the United
States and other countries.
Microsoft, Windows, Windows 2000, Internet Explorer, and the Flying Windows logo
are either trademarks or registered trademarks of Microsoft Corporation.
Netscape® and Netscape Navigator® are registered trademarks of Netscape
Communications Corporation in the United States and other countries. Netscape's
logos and Netscape product and service names are also trademarks of Netscape
Communications Corporation, which may be registered in other countries.


Avaya MPS Architectural Overview
This chapter covers:
1. Overview of the Avaya Media Processing Server Series System
2. System Architecture
3. System Utilities and Software


Overview of the Avaya Media Processing Server System
The Avaya Media Processing Server (MPS) Series products comprise hardware and
software to create a call and web-based processing environment. These systems
integrate the call processing environment with speech, telephony, data
communications, and transaction processing functions. The platform is based on the
Avaya Telephony Media Server (TMS) which provides high phone port densities and
increased user flexibility and extensibility. The basic TMS assembly provides
resources for telephony media management including switching/bridging, digital
signal processing, voice and data memory, and network interfaces. A variety of
interactive voice processing applications are accommodated, from simple information
delivery services to complex multimedia (voice/fax/data/web) call processing
implementations with local databases, multiple services, and transaction processing
functions.
The MPS system supports a wide selection of telephony and host computer
connectivity interfaces for easy integration into an existing data
processing/communications environment. It also includes a set of easy-to-use,
object-oriented Graphical User Interface (GUI) tools. These tools are used for:
•	application and vocabulary development
•	system configuration, control, and monitoring
•	collection and reporting of statistical data
•	access to on-line documentation and its concurrent implementations

The application development environment provides a totally graphical environment
for the entire application life cycle, and also allows applications typically written
for phone lines to be ported to Internet-based Web usage. The PeriProducer GUI is the
recommended tool for application development. The PeriWeb package allows these
phone-line applications to be run as interactive World Wide Web applications.
The MPS systems employ industry standards and distributed processing in an open
architecture, allowing plug-in integration of future technological developments. In
addition, networking elements of the MPS support multiple LAN/WAN interfaces,
providing an environment ready for distributed computing.
This chapter of the Avaya Media Processing Server Series System Reference Manual
presents an overall view of the MPS hardware and software, describes the software
processes responsible for operations, and provides a series of diagrams that illustrate
both hardware and software relationships.
Base System Configuration on page 64 documents the process of getting the MPS
system up and running, identifies the individual configuration files, details some of
the newer processes, and describes the directory structure of the operating
environment and predefined environment variables.

System Architecture
The MPS family is designed with a flexible hardware and software architecture that is
highly scalable. System models range from small (48 ports) to large networked
configurations of tens of thousands of ports. The same basic hardware and software
components are used for all configurations. Individual systems usually vary only in
application/transaction processor performance, capacity for additional ports (TMS’),
and optional feature software/hardware (for example, Call Progress Detection, Speech
Recognition, or Caller Message Recording).
Architecture of the MPS is based on a Sun Microsystems SPARC system processor
running the Solaris operating system or an Intel processor running Windows 2000.
The system processor is connected to one or more Telephony Media Servers (TMS).
The TMS is a flexible platform that provides switching, bridging, programmable
resources, memory, and network interfaces to execute a comprehensive set of
telephony and media functions.
Each MPS system consists of a Solaris or Windows host node running OS and MPS
software, and one or more TMS’ responsible for the bulk of the actual telephony
processing. One TMS is required for each MPS defined on the node. A multiple node
configuration is referred to as the MPS Network. The following diagrams illustrate the
two basic products available in the MPS system: a single rack-mounted version,
known as the MPS100, which is available on the Windows platform only, and a
cabinet enclosed networked configuration which relies on the MPS1000 and is
available on both the Windows and Solaris platforms. Typically, the MPS100 contains
only 2 spans (though it may contain up to 8) and only 1 Digital Communications
Controller (DCC) card, and does not support bridging outside the TMS. Conversely,
the MPS1000 is the high-capacity model, with 4 TMS’ per chassis and up to 4 chassis
per cabinet. It can support up to ten thousand ports and can bridge any two ports
regardless of which chassis each port resides in. This
manual deals almost exclusively with the MPS1000.
The flexibility inherent in the product line allows the MPS networks to incorporate
numerous different designs. For additional information and configurations, see the
Avaya Media Processing Server Series 1000 Transition Guide. For information on
using the MPS, see the Avaya System Operator’s Guide.
Though the Avaya Media Processing Server Series 1000 Transition Guide is typically
used by those migrating from a previous version of our transaction processing
systems, it also contains information of interest to those new to the product line. Such
information should be used in that context only.


[Figure: Single Media Processing Server 100 and Basic Media Processing Server 1000 Network. The MPS 100 is a single Windows-based system running the ASE and VOS software layers over a TMS; the MPS 1000 network comprises multiple MPS systems distributed across MPS Node A and MPS Node B, each MPS running the ASE and VOS layers over a TMS.]

Hardware Overview
Typical system hardware includes a SPARC (Solaris) or Intel (Windows)
application/transaction processor and related computer components (such as hard
drive and RAM) and TMS hardware, including storage for speech and data files, a
telephone interface card, network interface cards, power supplies, and various voice
processing modules. The major hardware components that make up the MPS1000 are
shown in the following illustration (MPS100 information is contained in a separate
manual). Each of these is further dissected and discussed in the paragraphs that
follow. See the Avaya Media Processing Server Series System Operator’s Guide
regarding details on system monitoring and control and specific analysis of panel
switches and LEDs.

[Figure: MPS 1000 cabinet, front and rear views. Labeled components include the Front Control Panel (FCP), the Application Processors, the Variable Resource Chassis (VRCs) populated with Telephony Media Server (TMS) assemblies, the Asynchronous Transfer Mode (ATM) fiber optic switch, the network (Ethernet) switch, the TelCo Connector Panels (TCCP), and the rear of the VRCs and Application Processors.]


For detailed information on the physical, electrical, environmental, and interface
specifications of the Avaya Media Processing Server (MPS) Series, please refer to the
MPS Specifications chapter in the Avaya MPS Hardware Installation and
Maintenance manual.

Front Control Panel (FCP)
One FCP is present for each VRC in the system. The FCP provides separate power
controls and status indicators for each TMS (by chassis slot).

[Figure: FCP Front View. For each chassis slot, the panel provides POWER ON, TEST, NORMAL, MINOR ALARM, and MAJOR ALARM indicators, an ON/OFF switch, and a RESET control.]

Variable Resource Chassis (VRC)
The VRC is a versatile chassis assembly that is used in several Avaya product lines.
The VRC has four front and two rear plug-in slots, and contains:
•	Up to four TMS assemblies
•	One or two application processor board(s) (rear; not present if rack-mounted application processor(s) are used)
•	Two Network Interface Controllers (NICs) or one Hub-NIC
•	Up to six power supplies, one for each populated slot
•	Two available drive bays


[Figure: VRC Front View, populated with four TMS assemblies (slots 1 through 4).]

The VRC backplane is located midway between the front and rear of the chassis. The
backplane contains connectors for the modules that plug into each slot, front and back.
The backplane provides connections for:
•	Inter-module signals
•	Power from the power supplies to the module slots
•	A Time Division Multiplexing (TDM) bus for PCM (voice/audio) communications between the TMS assemblies
•	Clocking signals for the TDM bus


[Figure: VRC Rear View. Shown are the power supplies for slots 1 through 6, the VRC rear panel, the alternate Application Processor location (slot 5), the Application Processor (slot 6, used when a rack-mounted application processor is not present), the Hub-NIC or the NIC pair (primary in logical slot 7, secondary in logical slot 8), and the two drive bays.]

In multiple chassis and cabinet systems, some VRCs do not contain all the assemblies
listed above.
Power Supplies
Each slot in the VRC has a separate power supply dedicated to it. The power supplies
are identical and can be installed in any of the six locations for a slot that requires
power. The slot that each power supply is associated with is indicated on the decals on
the drive bay doors. There is no dedicated power supply for the NIC slot.


[Figure: Power supply status indicators: +3.3V, +5V, +12V, -12V, and MISMATCH.]

VRC Rear Panel
The rear panel of the VRC contains indicators, switches, and connectors for
maintenance, configuration, and connection to other system components. The power
switches for slots 5 and 6 are also located here, as well as the chassis ID wheel.
[Figure: VRC rear panel. Connectors and controls include MC1 IN and MC1 OUT, external clock A and B, major/minor alarm relay contacts (NC/C/NO), external sensor inputs A through D, the CSL (console) connector, a TEST switch, the chassis ID wheel, the slot 5 and slot 6 power switches (S5, S6), NIC status indicators (PWR ON, NORMAL, MIN ALARM, MAJ ALARM), and the ENET-A and ENET-B connectors.]

Drive Bays
These bays contain the slots for and physical location of the system hard drives when
VRC-mounted application processors are used. Generally one drive is present per
processor, but additional drives may be added if system performance requires them.
Application Processor
In VRC-mounted configurations, the application processor is a “stripped down”
version of a Solaris or Windows computer: it contains the CPU, memory, and printed
circuit boards needed for both standard OS functions as well as basic MPS1000
transaction processing. One application processor is present per VRC in slot 6, but if
the VRC is populated with multiple TMS’ (which may in turn contain more than one
phone line interface card) and large numbers of spans, system performance may be
degraded and require the addition of another processor.
In typical rack-mounted configurations, there is one application processor per VRC,
and they are mounted at the bottom of the cabinet. This application processor is
similar in makeup to a typical Solaris or Windows computer. In either form, an
additional application processor may be added where dual redundancy is desired.

Network Interface Controller (NIC) or Hub-NIC
Each VRC in the system contains either two NICs (primary and secondary) or a single
Hub-NIC. The Hub-NIC plugs into the NIC slot in back of the VRC, and contains two
network hubs for the chassis Ethernet. It is generally used only in single chassis
systems. In multiple chassis systems, two NICs are used. In this case a midplane board
is installed over the backplane connector of the NIC slot, effectively splitting the slot
and providing separate connectors for each NIC. The two connectors on the midplane
board are logically assigned to slot 7 (primary) and slot 8 (secondary) for addressing.
The NICs have additional functionality such as system monitor capabilities, watchdog
timer, and alarm drivers, and can interface from the intra-chassis Pulse Code
Modulation (PCM) highways to a fiber optic Asynchronous Transfer Mode (ATM)
switching fabric. The NICs receive power from any installed power supply that is on.
[Figure: NIC and Hub-NIC assemblies.]


Telephony Media Server (TMS)
The TMS is the core functional module of the Avaya Media Processing Server (MPS)
Series system. It provides a versatile platform architecture for a broad range of
telephony functions with potential for future enhancement. The basic TMS assembly
consists of a motherboard and mounting plate containing front panel connectors and
indicators.
[Figure: TMS Assembly Front View. Shown are slots 1 through 4, the HUB A and HUB B connectors, span status indicators 0 through 15, and the AUDIO and CONSOLE connectors.]

The TMS motherboard provides most essential functions for telephony and telephony
media management, including network and backplane bus interfaces, local memory,
digital signal processors, tone generators, local oscillators, and Phase-Lock Loop
(PLL) for Computer Telephony (CT) bus synchronization with other TMS’ and the
chassis. The motherboard contains a riser board that allows up to four additional
modules to be plugged in. The TMS motherboard also contains six Digital Signal
Processors (DSPs) which can be configured for communications protocols and to
provide resources.
Phone Line Interface
A TMS contains at least one phone line interface card, which can be a single Digital
Communications Controller (DCC) (see page 29) or up to three Analog Line Interface
(ALI) cards (see page 30); a second DCC will be present if Voice over Internet Protocol
(VoIP) is installed. Though digital and analog line interfaces cannot be combined in
the same TMS, multiple TMS systems can contain any combination of digital and
analog lines in the VRC. Any line can be either incoming or outgoing, and all ports are
nonblocking (i.e., any port can be bridged to any other port). The TMS can also be
populated with a Multiple DSP Module (MDM) (see page 31), in one or more of the
remaining open slots. Although the motherboard has local digital signal processors,
the MDM provides additional resources for systems that require them.

A single TMS can support up to eight digital T1 (24 channels/span for a total of 192
lines) or E1 (30 channels/span for a total of 240 lines) spans by using an individual
DCC to connect to the Public Switched Telephone Network (PSTN). If some of the
lines are used exclusively for IVR resources, one or more spans may be dedicated.
Spans dedicated as such are connected directly in clear channel protocol. Supported
digital protocols include in-band T1/E1 and out-of-band SS7 and ISDN.
In addition a TMS can support up to 72 analog lines by using three ALI boards (24
lines per ALI). The standard analog interface supports common two-wire loop-start
circuits.
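As a quick check of the capacity figures quoted above, the line totals follow directly from the per-span and per-card numbers; a small sketch using the standard expr utility:

    # Worked port-capacity arithmetic for a single TMS
    expr 8 \* 24    # eight T1 spans x 24 channels per span = 192 digital lines
    expr 8 \* 30    # eight E1 spans x 30 channels per span = 240 digital lines
    expr 3 \* 24    # three ALI cards x 24 lines per card = 72 analog lines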
Information on configuration and application of phone line protocols and interfaces
can be found in the Avaya Media Processing Server Series Telephony Reference
Manual.
Digital Communications Controller (DCC)
The DCC provides the digital phone line interfaces for the system. It can be plugged
into any of the four slots of the TMS. The DCC is dedicated for either a T1 or E1
system, and connects to the PSTN via an RJ48M connector (up to eight spans).
The DCC is also capable of interfacing with a telephony network using VoIP. A
DCC-VoIP has no telephony connector on the front panel. Only one DCC is typically
installed in the TMS, unless the system is also using VoIP, in which case the
DCC-VoIP will also be installed. The DCC cannot be combined with an ALI in the
same TMS.
A serial console connector is provided for diagnostic purposes and for verifying and
configuring the boot ROM (see Verifying/Modifying Boot ROM Settings on page 252
for details). Other connectors and indicators are provided on the DCC front panel but
are reserved for future enhancement.
[Figure: DCC Front View. Shown are the console connector, the RJ48M connector, and additional connectors reserved for future enhancement.]

Analog Line Interface (ALI)
The ALI provides a phone line interface to the system for up to 24 analog phone lines.
It connects to the PSTN via an RJ21X connector on the front panel. The standard
analog interface supports common two-wire loop-start circuits. There are no other
connectors or indicators on the front of the ALI.
Up to four ALIs can be installed in a TMS, although three is typical since one of the
four TMS slots is usually occupied by an MDM. ALIs cannot be combined with a
DCC in the same TMS.

[Figure: ALI Front View, showing the RJ21X connector.]

Multiple DSP Module (MDM)
A resource must be available on the system for an application to use it. If the resident
DSPs are fully allocated to resources or protocols, capacity for more resources can be
added by installing a Multiple DSP Module (MDM) in an open TMS slot and loading
the image definitions for the resources required. These resources are in addition to the
MPS resource itself. Examples of TMS supported resources are:
•	Player (ply) - Vocabularies or audio data can be played from local memory on the TMS motherboard.
•	DTMF Receiver (dtmf) and Call Progress Detection (cpd) - Phone line events such as touch-tone entry, hook-flash, dial tone, busy signals, etc. can be detected.
•	Tone Generator (tgen) - In lieu of playing tones as vocabularies, DTMF and other tones can be generated.
•	R1 Transmit (r1tx), R1 Receive (r1rx), and R2 (r2) - Tone generators and detectors to support R1 and R2 protocols.

The MDM contains 12 DSPs for configuration of additional resources. There are no
indicators or connectors on the front panel of the MDM. The only visible indication
that an MDM is installed in a TMS slot (versus a blank), is the presence of bend tabs
near the center of the front bracket that secure it to the MDM circuit board.

[Figure: MDM Front View.]

Configuration of resources and protocols is covered in Base System Configuration on
page 64.

System LAN Interface
The TMS interfaces with the system Local Area Network (LAN) via Ethernets using
TCP/IP. The chassis Ethernet is connected via the VRC backplane to separate hubs on
the chassis NIC or Hub-NIC (see VRC Rear View on page 24). If there is a failure on
the master Ethernet (controlled by the first NIC), the secondary NIC takes control of
all Ethernet A, system clocking, and ATM functions. The switchover is virtually
instantaneous and the inherent error correction of TCP/IP prevents loss of data.
The redundant Ethernet is only for backup of the primary Ethernet. Ethernet A
is the ONLY Ethernet supported between the chassis and the Application
Processor. There is no support for dual redundant Ethernet.
Field Programmable Gate Arrays (FPGA) and the Boot ROM
The TMS and the modules that plug into it (i.e., DCC, MDM, and ALI) contain
FPGAs. An FPGA is a generic microchip that has no inherent functionality. It
contains arrays of generic logic elements (e.g., gates) that are software configurable.
The software that configures the FPGA is called an image, and the image typically
commands the FPGA to assume the functionality of a designed logic circuit. A
hardware architecture based on FPGAs is very powerful and flexible because:
• A greater degree of complex logic functionality can be achieved in a relatively smaller board space with fewer circuit components than if dedicated circuit components and hard board wiring were used. This also provides greater circuit reliability.
• Functionality can be enhanced without hardware redesign or even removal and replacement. Upgrades can be done in the field by loading a new image definition.

FPGAs are dynamic devices in that they do not retain their image definition when
power is removed. The image definition for each device is loaded from an image
definition file (*.idf) during the system boot sequence. The TMS contains a boot
ROM that statically stores the names of the .idf files for the devices contained on its
motherboard and the modules that are plugged in.
Whenever a new system is installed, has components added or replaced, or the system
is upgraded, the boot ROM should be verified and, if necessary, modified by Certified
Avaya Support Personnel. Details concerning boot ROM verification can be found at
Verifying/Modifying Boot ROM Settings on page 252.

TelCo Connector Panel (TCCP)
The TCCP provides a built-in platform for connecting to the Public Switched
Telephone Network (PSTN) and for conveniently breaking out and looping-back
spans for monitoring or off-line testing. One TCCP can support up to four TMSs and
can be configured with RJ48M or RJ48C connectors for each TMS.
TCCP with RJ48M Interfaces

TCCP with RJ48C Interfaces

TCCP Rear View (figure): J1 connects to TMS 3, J2 connects to TMS 2, J3 connects to TMS 1, and J4 connects to TMS 4.
The TCCP is connected to each TMS from the corresponding connector on the TCCP
back panel by a direct feed RJ48M cable. In TCCP equipped systems, PSTN
connections are made at the TCCP using the RJ48M or RJ48C connectors on the front
of the panel. A pair of bantam jacks (SND and RCV) is provided for each span
connected to the TCCP. The bantam jacks are resistor isolated and can be used for
monitoring only. The bantam jacks cannot be used to create span loop-back
connections. Loop-back connections for testing purposes can be made between TMSs
or spans using special crossover cables. For details, see the Avaya Media Processing
Server Series 1000 Transition Guide.


Software Overview
The following illustration shows the functional arrangement of the ASE and VOS
processes for MPS release 1.x. Though many of the processes are similar to those of
release 5.x, there are several new and revised processes, all of which are described in
the paragraphs that follow.

ASE and VOS Process Arrangement (figure): each application (PPro) instance runs under its own VENGINE; VENGINE, VMST, and VSUPD make up the ASE processes, which connect through the ASE/VOS integration layer (VAMP) to the VOS process group (one per TMS: CCM, CCMA, COMMGR with its host protocol connection, VSTAT, TCAD, VMM, and TRIP) and to the common processes (one each per node: SRP, CONFIGD, CONOUT, CONSOLED, ALARMD, and PMGR); TRIP connects to the LRM, ADSM, and SIM ports of the TMS and to the TRIPs of other VOS process groups (if present), while the tmscomm process NCD (one per node) connects to the master and slave NIC pairs.

Software Environment
The MPS software components are categorized into two process groups: VOS (Voice
Operating Software) and ASE (Application Services Environment).
The VOS software process group comprises the main system software required to run
the MPS system. The ASE software process group contains the software required to
develop and execute applications.
VOS and ASE software processes have been designed to operate in an open systems
Solaris or Windows environment. All speech, telephony, and communications
functions run under Solaris or Windows, and require no additional device drivers.
VOS uses the standard Solaris or Windows file system for managing all speech/fax
data. A set of GUI tools provides for application development and system
management.
Some VOS and ASE software processes are common to all MPS components defined
on a specific host node; these are located in the GEN subcomponent of the common
component on that node (and defined in the file
$MPSHOME/common/etc/gen.cfg). Other VOS processes are unique to each
defined MPS component, and are part of the VOS subcomponent of the MPS
component (and defined in $MPSHOME/mpsN/etc/vos.cfg). The NCD process,
on the other hand, is part of the VOS subcomponent of the tmscomm component
(and defined in $MPSHOME/tmscommN/etc/vos.cfg). This TMS-specific
process requires one instance per node; other common processes likewise require only a single instance per node. Processes that are unique to each component require an instance of each process for every MPS component defined on the node. When uncommented in their respective gen.cfg or
vos.cfg files, these processes are started by the Startup and Recovery Process
(SRP). (For a more comprehensive discussion about SRP, see SRP (Startup and
Recovery Process) on page 70.)
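To make the startup mechanism concrete, the sketch below shows how an SRP startup entry might be enabled or disabled. This is an illustrative assumption only: the field layout is inferred from the single ALARMD entry reproduced later in this chapter, and the '#' comment convention is assumed; the gen.cfg and vos.cfg files shipped with the system are the authoritative reference.

# Hypothetical excerpt from $MPSHOME/common/etc/gen.cfg. The field layout is
# assumed from the ALARMD entry shown later in this chapter, and '#' is assumed
# to mark a comment line.
# Uncommented entry: SRP starts ALARMD at system startup.
alarmd - - 1 0 "alarmd"
# Commented-out entry: SRP does not start VSUPD.
# vsupd - - 1 0 "vsupd"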
Individual applications are executed by means of a separate instance of the ASE
process VENGINE for each instance of the application’s execution. There are three
major types of applications:
• Call processing applications are assigned to physical phone lines. A separate instance of both the application and the VENGINE process is required for each physical phone line to which the application is assigned.
• Web-based applications extend application services to Web browsers by means of PeriWeb. They also require a separate instance of VENGINE for each instance of the application.
• Administrative applications perform system maintenance functions and support the call processing applications. They are not assigned to physical phone lines. However, they also require a separate instance of VENGINE for each instance of the application.

Applications can communicate with each other by means of shared memory or
message passing.

ASE Processes
The Application Services Environment (ASE) process group is comprised of software
required to develop and execute applications. ASE processes include:
VENGINE - The application execution process. One VENGINE process is required for each MPS application (call processing, web based, and administrative).
VMST - VENGINE Message Server - Extended. Manages MPS messages related to VENGINE applications. This process also can be used to bridge messages in a multi-MPS environment.
VSUPD - Collects application-specific statistics (as opposed to system statistics).

• VMST and VSUPD are node-specific processes and require only one occurrence of the process for each host node regardless of the number of components defined on the node.
• VENGINE is an application-specific process. One occurrence of VENGINE must execute for each application assigned to an MPS line.

VENGINE
VENGINE is the application-specific ASE software process. It is responsible for the
execution of each occurrence of an application that is assigned to an MPS. One
VENGINE process is required to execute for each occurrence of a call processing,
web based, or administrative application. Administrative applications are not
associated with physical phone lines and perform system maintenance operations and
support call processing applications.
Additionally, VENGINE is used to execute all or part of an application while it is
under development. It can run all or part of the application so that the logic paths and
application functions may be tested.
VENGINE is located in $MPSHOME/PERIase/bin on Solaris systems and
%MPSHOME%\bin on Windows systems, and can be initiated from the command
line or by starting an application with the PeriView Assign/(Re)Start Lines tool (see
the PeriView Reference Manual for more information on the latter). Applications that
require ASE processes are located in the $MPSHOME/mpsN/apps directory. For
additional information about these applications, see The MPSHOME/mpsN/apps
Directory on page 140. VENGINE makes connections to both these applications and
VMST. For additional information on VENGINE, see the PeriProducer User’s
Guide.
VMST
VMST (VENGINE Message Server - Extended) is an ASE software process that
performs message server functions for VENGINE. It funnels VOS messages that have
been translated by VAMP to VENGINE processes and service daemons. VMST
interprets and supports all pre-existing VMS options, allowing scripts incorporating
them to continue functioning under the present release without any modifications.
The advent of the TMS brings about an increase in the number of lines supportable on
a single platform, as well as an increase in potential message traffic. In order to handle
the increase in addressable lines, this modified version of VMS was created
(previously, VMS addressing was limited to a one-to-one correspondence of VMS to
CPS/VPS). Though VMST can still act on behalf of a single MPS, VMST can also
address the new paradigm by supporting many real or virtual MPS’ in a single process
(the VMST process assumes the functions of one or more VMS’ running on the same
node). In addition, VMST:
• eliminates traffic between VMS’, since all messages are now passed between threads inside the VMST process.
• supports interapplication communications between the MPS systems (the MPS system to which an application directs a message must be directly connected to the MPS running the application). Inter-VMST traffic is supported as described in Interapplication/Host Service Daemon Data Exchange on page 215.
• supports automatic detection of lost TCP/IP connections (pinging)

The VMST process is located in $MPSHOME/PERIase/bin (Solaris) or
%MPSHOME%\bin (Windows). When used with a single MPS, VMST is started by
SRP through the $MPSHOME/mpsN/etc/ase.cfg file. When used with multiple
MPS’ (whether real or virtual), it is started through the
$MPSHOME/common/etc/gen.cfg file. In addition to VENGINE and VAMP,
VMST makes connections to the VSUPD processes.
VMST is aliased as vms in its SRP startup files, but should not be confused with
previous (“non-extended”) versions of VMS.

VSUPD
VSUPD is the ASE software process that is responsible for collecting application-specific statistics. VSUPD is a node-specific process; thus, one instance of this process is required for each node regardless of the number of MPS components assigned to the node.
This process need not be run unless application statistics have to be collected and
reported.
Each node collects statistics at 15-minute intervals for all applications executing on all
MPS’ on the node and stores them in the ASEHOME/stats directory. On systems
with remote nodes, statistics for the four previous 15-minute periods are collected
hourly from all other nodes by the one designated for MPS network statistical
collection and reporting and transferred to that node’s ASEHOME/stats directory.
VSUPD supports an optional command line argument, -w, which specifies the maximum amount of time to wait for phone line responses.
PeriReporter, in conjunction with the individual call processing applications, is used
to define the statistical events to be collected and to create and generate reports. For
information about PeriReporter, see the PeriReporter User’s Guide.
VSUPD is started by SRP through the $MPSHOME/common/etc/gen.cfg file and
located in $MPSHOME/PERIase/bin on Solaris systems and %MPSHOME%\bin
on Windows systems. It makes its connections to VMST.
System statistics are collected by the VSTAT process on a per-MPS basis. For
information about the VSTAT process, see VSTAT on page 50.

ASE/VOS Integration Layer
This layer is used to convert and translate messages from the applications to the VOS
processes. For PeriProducer applications, this layer communicates with the ASE
processes, which in turn communicate with the applications themselves. The Vengine
Application Management Process (VAMP) is an interface between the Application
Services Environment (ASE) and the Voice Operating Software (VOS).
The VAMP services application requests:
• Consolidate information (lines, resources, etc.) for applications
• Consolidate information for commands issued by applications
• Control line bridging based on Call Progress Detection information
• Process resource control commands which may be directed to different
resource providers and have different formats
VOS Processes
The Voice Operating Software (VOS) process group is comprised of the main system
software required to run the MPS system. VOS processes can be common (only one
instance required per node) or MPS-specific (one instance required per MPS
component). This software group consists of the following independently running processes:
ALARMD (Alarm Daemon) - Collects alarm messages, writes them to the alarm log, and forwards them to any running alarm viewers.
CCM (Call Control Manager) - The primary interface between VAMP and the VOS services. Provides request synchronization and resource management.
COMMGR (Communications Manager) - Manages external host communications.
CONFIGD (Configuration Daemon) - System wide configuration process.
CONOUT (Console Output Process) - Relays output from VOS processes to the system console.
CONSOLED (Console Daemon) - Takes messages that would normally appear on the system console and displays them in the alarm viewers. (Solaris only.)
NCD (Network Interface Controller Daemon) - Controls interconnections between multiple TMS platforms attached to the NIC card.
nriod - Daemon responsible for remote input/output.
PMGR (Pool Manager) - Provides resource management, including resource allocation, resource deallocation, and keeping track of resource allocation statistics.
rpc.riod - Daemon responsible for remote input/output (Solaris backward compatibility only).
TCAD (TMS Configuration & Alarm Daemon) - Provides loading, configuration, and alarm functions for TMS.
TRIP (TMS Routing Interface Process) - Acts as a router between the VOS and TMS.
VMM (Voice Memory Manager) - Provides media management services for the VOS.
VSTAT (VPS Statistics Manager) - Provides system (as opposed to application) statistics consolidation and reporting.

ALARMD
ALARMD resides in the GEN subcomponent of the common component. It is
responsible for collecting alarm messages, writing them to the alarm log, and
forwarding alarms to the MPS alarm viewers. The alarm logs are located in the
directory $MPSHOME/common/log in the format
alarm...log, with backup files being
appended with the .bak extension.
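As a quick check from an ordinary operating-system shell (standard commands, not Avaya tools; the exact log file names vary by site), the current and backup alarm logs can be listed as follows:

On Solaris:
  ls -l $MPSHOME/common/log/alarm*
On Windows:
  dir %MPSHOME%\common\log\alarm*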
To avoid problems with memory exhaustion and the ALARMD daemon growing out
of bounds, alarms can be suppressed from being logged to disk or being transmitted to
the viewers (see Alarm Filtering on page 203). The daemon accepts commands either
dynamically during run-time or statically from its configuration file during startup.
ALARMD associations:
• Connections: All processes which generate alarms
• Location: $MPSHOME/bin or %MPSHOME%\bin
• Configuration File: $MPSHOME/common/etc/alarmd.cfg
• SRP Startup File: $MPSHOME/common/etc/gen.cfg
The alarmd.cfg file only exists on systems where alarm filtering is instituted at
startup (see The alarmd.cfg and alarmf.cfg Files on page 99).
The alarm command may be used to display the text of alarms that are broadcast from ALARMD in a command or VSH window. The PeriView Alarm Viewer is the GUI tool that may be used to select and display this same alarm information.

See the PeriView Reference Manual for additional information about the Alarm
Viewer and the alarm information that may be obtained with this tool.
You can configure ALARMD to display the year in the timestamps that are added to entries written to the alarm log files (such as info, warning, alarm, and app). By default, the year is not displayed in the timestamp.
The optimum way to enable the display of the year in the timestamps of alarm log
entries is to start ALARMD with the command line option -y. This can be done by
modifying the COMMAND LINE field for ALARMD in the
$MPSHOME/common/etc/gen.cfg configuration file to include the -y
command line option. The entry for ALARMD in that file would appear as follows
(note that the quotation marks are required):
alarmd - - 1 0 "alarmd -y"
An alternate method of enabling the display of the year in the timestamps of alarm log
file entries is to add either of the following lines to the ALARMD configuration file
$MPSHOME/common/etc/alarmd.cfg:
alarmd showyear on
alarmd showyear 1
Displaying the year in the timestamps of alarm log file entries can be enabled or
disabled after ALARMD starts by using VSH to issue the showyear console option
with an appropriate argument to ALARMD.
For example, to enable the display of the year in the timestamps of alarm log file
entries, issue either of the following commands at a vsh prompt:
alarmd showyear on
alarmd showyear 1
To disable the display of the year in timestamps of alarm log file entries, issue either
of the following commands at a vsh prompt:
alarmd showyear off
alarmd showyear 0
If you want to display the year in the timestamps of alarm log file entries, Avaya recommends using the -y command line option in $MPSHOME/common/etc/gen.cfg to ensure that the year appears in the timestamp of every alarm written to the log file. If you use either of the other options described above, alarms generated early in the bootup sequence may not display the year in their timestamps.
For additional information about the alarm facility, see System Utilities and
Software on page 51. alarm is located in $MPSHOME/bin on Solaris systems or
%MPSHOME%\bin on Windows systems.

CCM
CCM resides in the VOS subcomponent of the MPS component. Two CCM processes
will exist in the VOS subcomponent: CCM and CCMA. CCM manages and controls
phone lines and all resources required for interacting with the phone line (caller).
CCMA provides administrative services only, and does not provide phone line related
services (i.e., outdial, call transfer, etc.). Configuration is accomplished in one of two
ways: process wide or line/application specific. Process wide configuration is set up in
ccm_phoneline.cfg (for CCM) or ccm_admin.cfg (for CCMA).
Line/application specific configuration is achieved by the application by setting up its
required configuration when it binds with CCM/CCMA.
The CCM process is primarily responsible for:
• managing the phone line state dynamic
• allocating and deallocating internal and external resources, as well as administering the former
• command queue management and synchronization
• element name parsing for play, record and delete requests
• servicing audio play and record requests
• data input management (touch-tones, user edit sequences, etc)
• third party call control (conference management)
• maintaining call statistics

The CCMA process is primarily responsible for:
• command queue management and synchronization
• element name parsing for delete and element conversion requests
• MMF event reporting
• maintaining statistics

The VSH interface provides the ability to send commands to CCM. For a list of these
commands, see the CCM Commands section in the Avaya Media Processing Server
Series Command Reference Manual.
CCM associations:
• Connections: VAMP, NCD, TRIP, TCAD, VMM, PMGR
• Location: $MPSHOME/bin or %MPSHOME%\bin
• Configuration Files:
• For CCM: $MPSHOME/mpsN/etc/ccm_phoneline.cfg
• For CCMA: $MPSHOME/mpsN/etc/ccm_admin.cfg
• SRP Startup File: $MPSHOME/mpsN/etc/vos.cfg

COMMGR
COMMGR resides in the VOS subcomponent of the MPS component and provides
transaction processing services for the VOS. It enables application programs to
communicate with external host computers using a variety of protocols. Though
functionally equivalent to pre-existing versions, the release 1.0 COMMGR no longer
requires that Virtual Terminals (VTs) be mapped to phone lines.
The commgr.cfg file defines the configuration parameters required to communicate
with most external hosts. For more information, see The commgr.cfg File on page
144.
Host communications functions and protocols are documented in the Avaya Media
Processing Server Series Communications Reference Manual.
COMMGR associations:
• Connections: Protocol server processes
• Location: $MPSHOME/bin or %MPSHOME%\bin
• Configuration File: $MPSHOME/mpsN/etc/commgr.cfg
• SRP Startup File: $MPSHOME/mpsN/etc/vos.cfg
CONFIGD
CONFIGD is the system wide configuration process. It reads configuration files on
behalf of a process and sends this configuration information to the process.

Caution: Online reconfiguration must only take place when the system is idle (no applications
are attached). Unexpected behavior will result if the system is not idle during an
online reconfiguration.
CONFIGD associations:
• Connections: All VOS processes
• Location: $MPSHOME/bin or %MPSHOME%\bin
• Configuration File: Not applicable
• SRP Startup File: $MPSHOME/common/etc/gen.cfg

CONOUT
CONOUT is the VOS process that is responsible for providing output to the system
console. On Windows this provides output to the window in which SRP is started. It
receives display data from the VOS processes and routes it to the system console.
CONOUT associations:
• Connections: Any VOS process sending info to the system console
• Location: $MPSHOME/bin or %MPSHOME%\bin
• Configuration File: Not applicable
• SRP Startup File: $MPSHOME/common/etc/gen.cfg
CONSOLED
CONSOLED takes messages that would normally appear on the system console and
displays them in an alarm viewer. These messages include:
• system messages
• Zero Administration for Prompts (ZAP) synchronization status alarms

System messages can be generated by the MPS system or the operating system itself.
CONSOLED associations:
• Connections: Any process sending info to the system console
• Location: $MPSHOME/bin
• Configuration File: Not applicable
• SRP Startup File: $MPSHOME/common/etc/gen.cfg

NCD
NCD is comprised of three distinct logical entities: bridge control; Phase-Lock Loop
(PLL) control; and a VSH interface to the NIC board itself. As part of the tmscomm
component process group, one instance of NCD exists on a node containing TMS’. It
interfaces with the TRIP and CCM processes in each VOS on the node, and with
embedded processes running on the two chassis NICs (i.e., master, slave).
The NCD Bridge Control Process (NCD BCP) provides a common interface to
support bridging between Resource Sets (RSETs) on or between TMS’. NCD BCP
orchestrates the setup and teardown of the various bridging configurations supported
by the TMS and NIC architecture. NCD BCP also has the ability to construct bridges
between a pair of TMS’ where the connections are physically hardwired (on a
Hub-NIC card), or locked on a Time Space Switch (TSS) on the NIC.
The NCD PLL process provides configuration and control of the timing and clock
sources on and between TMS’ in a common chassis. NCD PLL is primarily used in
small systems that do not have a NIC to provide these functions.
The NCD VSH interface provides the ability to send simple configuration commands
to the NIC as well as query the current configuration. For a list of these commands,
see the NCD Commands section in the Avaya Media Processing Server Series
Command Reference Manual.
NCD associations:
• Connections: TRIP (local and remote)
• Location: $MPSHOME/bin or %MPSHOME%\bin
• Configuration File: $MPSHOME/common/etc/tms/tms.cfg
• SRP Startup File: $MPSHOME/tmscommN/etc/vos.cfg

nriod
The nriod file provides information and access to MPS files for remote PeriView
processes in both the Solaris and Windows environments. nriod is a system daemon
and, as such, only one instance of this process is required for each node.
nriod associations:
• Connections: Any process communicating with the PeriView Task Scheduler
• Location: $MPSHOME/bin or %MPSHOME%\bin
• Configuration File: Not applicable
• SRP Startup File: $MPSHOME/common/etc/gen.cfg

PMGR
PMGR provides pooled resource management of all resources from Resource
Provider (RP) processes running on the local node. An example of an RP is the CCM
process, which provides lines as resources. An RP registers its resources with PMGR
upon initialization. A registered resource can also be pooled applications (used for call
handoff, for instance). As applications request resources, PMGR allocates the
resources, keeps track of applications and their resources, maintains statistics, and
deallocates resources as necessary.
If PMGR cannot allocate a resource locally, it forwards the request to a remote
instance of PMGR; the specific instance is determined through round-robin
availability. If there are no remote PMGRs available, the request fails. If PMGR dies, it releases all resources that have been allocated. If an RP dies, it must reconnect to PMGR to reregister its resources. If an application dies, its allocated resources remain with it: after the application restarts, it queries PMGR for a list of resources currently allocated to it, and may then use these resources or free them if no longer needed.
PMGR associations:
• Connections: Any process that provides resources (RP), applications
• Location: $MPSHOME/bin or %MPSHOME%\bin
• Configuration File: $MPSHOME/common/etc/pmgr.cfg
• SRP Startup File: $MPSHOME/common/etc/gen.cfg
The VSH interface also provides the ability to send commands to PMGR. For a list of
these commands, see the PMGR Commands section in the Avaya Media Processing
Server Series Command Reference Manual.

rpc.riod
The rpc.riod file provides information and access to MPS files for remote
PeriView processes in the SPARC/Solaris environment. rpc.riod is a system
daemon and, as such, only one instance of this process is required for each node.
This file is maintained for backward compatibility for systems running pre-5.4
software. nriod on page 45 is now included with the system to provide Solaris and
Windows functionality.
rpc.riod associations:
• Connections: Any process communicating with the PeriView Task Scheduler
• Location: $MPSHOME/bin
• Configuration File: Not applicable
• SRP Startup File: $MPSHOME/common/etc/gen.cfg
TCAD
TCAD resides in the VOS subcomponent of the MPS component. It provides both

alarm and diagnostic services for the TMS hardware and loading and configuration
services for the VOS. This includes:
• loading and configuration of all TMS devices
• a listing of TMS internal resources to the VOS
• alarm generation on behalf of TMS devices by translating TMS alarm code to the correct alarm format used by the alarm daemon (see ALARMD on page 40).
• diagnostics (System Performance Integrity Tests) which provide information about any device in the TMS. TCAD allows other processes to request information about any device (i.e., request telephony span status).
• logging capabilities for the hardware
• statistics and internal information about TMS devices

TCAD communicates with the TMS via TRIP. This includes sending loading and
configuration messages through the Load Resource Management (LRM) port and
sending and receiving alarm messages via the Alarm, Diagnostic, and Statistics
Management (ADSM) port.
User interface with TCAD is via a VSH command line, which provides the ability to
send commands to TCAD.
TCAD associations:
• Connections: TRIP, ALARMD, VMM, PMGR, and configuration files
• Location: $MPSHOME/bin or %MPSHOME%\bin
• Configuration File: $MPSHOME/mpsN/etc/tcad.cfg
• SRP Startup File: $MPSHOME/mpsN/etc/vos.cfg

TRIP
TRIP resides in the VOS subcomponent of the MPS component. It is responsible for
routing messages between the front end (VOS) and back end (TMS) over the TCP/IP
connection. TRIP communicates directly with the LRM, ADSM, and Call SIMulator
(SIM) ports of the TMS. TRIP is also responsible for providing the IP and port
number of the TMS connected to a VOS. The calling process must identify the
particular port on the TMS that it is interested in.
The VSH interface provides the ability to send commands to TRIP.
TRIP associations:
• Connections: CCM, VMM, TCAD, NCD, and the LRM, ADSM, and SIM
ports of the TMS
• Location: $MPSHOME/bin or %MPSHOME%\bin
• Configuration File: $MPSHOME/mpsN/etc/trip.cfg
• SRP Startup File: $MPSHOME/mpsN/etc/vos.cfg

VMM
VMM resides in the VOS subcomponent of the MPS component and provides media
management services for the VOS. When VMM starts it connects to TCAD, TRIP and
the VMEM port of the TMS. Once VMM detects that TCAD has configured the TMS,
VMM loads the Voice Data Memory (VDM).
The startup time for VMM is minimal and does not delay speak/record requests unless the system is under heavy load. In the case of a record request under heavy load, the TMS buffers the data destined for VMM. Because pending input/output (I/O) requests are queued, VMM is capable of servicing all other requests that arrive while prior I/O requests are awaiting completion, eliminating direct impact on other lines.
The VMM process is primarily responsible for:
• loading and managing VDM
• loading and managing media MMF files both system wide and application specific (playback and record)
• creating and managing hash tables of element names
• performing hash lookups on behalf of CCM
• performing on-line updates and deletes
• receiving data for ethernet based Caller Message Recording (CMR)
• maintaining maximum workload constraints and related queuing of pending I/O operations
• maintaining media access related statistics (reference counts and cache hits, for example)

The VSH interface provides the ability to send commands to VMM. For a list of these
commands, see the VMM Commands section in the Avaya Media Processing Server
Series Command Reference Manual.
VMM associations:
• Connections: CCM, TRIP, TCAD, and the VMEM port of the TMS
• Location: $MPSHOME/bin or %MPSHOME%\bin
• Configuration Files: $MPSHOME/mpsN/etc/vmm.cfg and $MPSHOME/mpsN/etc/vmm-mmf.cfg
• SRP Startup File: $MPSHOME/mpsN/etc/vos.cfg

VSTAT
VSTAT is the VOS software process that is responsible for collecting host, phone line
and span system statistics. It resides in the VOS subcomponent of the MPS
component.
Statistics are collected at each host node in 15-minute intervals and stored in the
MPSHOME/mpsN/stats directory. Statistics for the four previous 15-minute
periods are collected hourly by the node designated for MPS network statistical
collection and reporting, converted to binary files, and moved to the
ASEHOME/stats directory of that node. The same process occurs on single-node
systems.
System statistics are collected by the VSTAT process and application statistics are
collected by the VSUPD process. VSUPD is a member of the ASE software process
group (see VSUPD on page 38). PeriReporter is used to create and generate reports
based on these statistics. For information about PeriReporter, see the PeriReporter
User’s Guide.
• VSTAT commands are intended to be issued by the Solaris cron or Windows scheduling facility and not at the VSH command line.
VSTAT associations:
• Connections: All processes which generate alarms
• Location: $MPSHOME/bin or %MPSHOME%\bin
• Configuration File: Not applicable
• SRP Startup File: $MPSHOME/mpsN/etc/vos.cfg

System Utilities and Software
In addition to the previously defined software processes, an array of system utilities
and graphical tools is available to the MPS system operator and network
administrator. These include:
alarm - Textually displays alarms that were processed by the alarm daemon (see ALARMD on page 40). PeriView’s Alarm Viewer may be used to display this same information in a GUI format.
dlog - Generic Debug-Logging. (1) An interface that provides additional command options to multiple VOS processes.
dlt - Diagnostics, Logging, and Tracing (Daemon). Provides these capabilities for the TMS. (1) Also used when executing call simulations (see Call Simulator Facility on page 195).
log - Textually displays low-level system process messages used for diagnostic purposes.
PeriProducer - Used to create and edit Avaya applications in a GUI environment.
PeriReporter - Collects, stores, and reports statistical data for the MPS network.
PeriStudio - Used to create and edit MMF files.
PeriView - A suite of GUI tools used to control and administer the MPS network. Included in this set of tools are: the PeriView Launcher, Application Manager, Activity Monitor, Alarm Viewer, File Transfer tool (2), Task Scheduler (2), SPIN (1), PeriReporter Tools, PeriStudio, PeriProducer, PeriWWWord, PeriSQL, and PeriDoc.
PeriWeb - Used to create web-based applications and to extend typical IVR applications to the Internet.
vsh - Text command shell interface utility. Up to 256 VSH windows may be active at any one time.

(1) Intended for use only by Certified Avaya Support Personnel.
(2) Not available at present on Windows.

alarm
alarm is the text-based utility used to display the alarms that are broadcast by
ALARMD, the alarm daemon. alarm is a non-interactive application that simply
displays the alarm message text received from the ALARMD process running on the
MPS node with which alarm is currently associated. This translation facility uses the

alarm database to convert system and user-created messages to the proper format that
may then be displayed and logged. If alarm filtering has been implemented through
ALARMD, then alarm only receives those alarms that pass the filter (ALARMF filtering has no effect on it, since alarm “attaches” directly to ALARMD).
Alternatively, the Alarm Viewer may be used to display this same alarm information.
The Alarm Viewer is a GUI tool accessible by means of the PeriView Launcher. Refer
to the PeriView Reference Manual for additional information.
If the alarm process is unable to establish an IPC connection to ALARMD, it will
periodically retry the connection until it succeeds. This functionality permits the
Alarm Viewer to be invoked before starting the MPS system itself and allows for any
startup messages to be viewed. Consequently, the Alarm Viewer for systems equipped
with a graphics-capable console is invoked as part of the normal startup process
providing for the automatic display of alarms (including normal startup messages) as
they are generated during this period of time. See the Avaya Media Processing Server
Series System Operator’s Guide for information on system startup and monitoring.

dlog
Debug logging is typically used by Certified Avaya Support Personnel. It is not
frequently necessary to interact with dlog from an end-user’s perspective.
Although DLOG is not process-specific, a process name must be specified to invoke
any of the commands. The processes that are configured to use DLOG options include
CCM/CCMA, COMMGR, VAMP, PMGR, TCAD, TRIP, and VMM. The process
name is substituted for the standalone dlog string in the command line options.
The VSH interface provides the ability to interact with these processes. For a list of
these commands, see the DLOG Commands section in the Avaya Media Processing
Server Series Command Reference Manual.

dlt
The DLT process provides:
• diagnostics (system performance integrity tests) which provide information about any device in the TMS. DLT allows other processes to request information about any device (i.e., request telephony span status)
• logging capabilities for the hardware (including line-based logging)
• statistics and internal information about TMS devices
• an interface for call simulation

DLT is used primarily by Certified Avaya Support Personnel and programmers. To
initiate the DLT process, open a command window on the node you wish to monitor
and enter the dlt command. Connections to TRIP and TCAD are attempted: if
these connections are successful, the dlt prompt appears in the command line. For
a list of these commands, see the DLT Commands section in the Avaya Media
Processing Server Series Command Reference Manual.

log
log is the text-based utility used to display messages sent between MPS processes. It
monitors message traffic among selected VOS processes and is used for diagnostic
purposes. This utility has a command line user interface.
log is an interactive application. It accepts commands from the terminal, maintains a
history event list similar to that maintained by VSH (the MPS shell used for user
interaction with VOS processes), and allows for simplified command entry and
editing. For additional information refer to this manual’s section about vsh on page
60.
log accepts the same command line options defined for any VOS process. These
options may be used to determine the MPS with which log communicates and the
method by which the messages are to be displayed. Further, a command line option
may be used to determine the status of active logging requests when the log utility
loses the IPC connection to the remote process responsible for implementing those
logging requests. The utility is also able to log messages between processes that are
not registered with SRP.


PeriProducer
PeriProducer is the software tool used to create, maintain, and test interactive
applications for MPS systems in a GUI environment. It also provides a graphical
application generation and testing environment that supports all aspects of an
application’s life cycle.
These applications are invoked by means of the Application Manager tool
(APPMAN) accessible through PeriView. Generally, an MPS system runs multiple
lines concurrently, and these lines are used to run different applications or multiple
instances of the same application. For additional information about APPMAN see the
PeriView Reference Manual.
The following is a list of the major functions that are available for processing caller
transactions. An application can use some or all of these features:
• speaking to callers in recorded and/or synthesized speech
• accepting input from the caller using touch tone, speech recognition, or speech recording
• concurrently interfacing to multiple hosts
• processing information via computation
• accessing local files and databases
• sending or receiving a fax
• controlling phone lines
• processing exceptions
• recording caller messages

Generally, PeriProducer should be run on a separate development workstation. Should
it be necessary to run it on a workstation that is also actively processing phone calls,
PeriProducer should be used only during times of low system activity. Processing-intensive activities (e.g., testing logic paths, implementing resource usage, etc.) may
impact the overall performance of the MPS system.
PeriProducer provides features that are used to verify the performance and
functionality of an application either before or after it is placed into a production
environment. While under development, application execution is accurately
simulated within the PeriProducer software environment on the development
workstation. A set of diagnostic functions allows the developer to view the internal
workings of an application during the simulation.
When assigned to a line and started, the processing of an application is managed by
the ASE VENGINE process (see VENGINE on page 36). VENGINE is also used
while developing an application to execute all or part of the application so that the
logic paths and application functions can be tested.
For additional information about using PeriProducer to create and maintain
applications designed to execute in the MPS environment, refer to the PeriProducer
User’s Guide.

PeriReporter
PeriReporter is the tool used for collecting, storing, and reporting statistical data for
the MPS network. It allows a point-and-click specification of multiple report formats
for each statistics record type. A report is viewed as a set of columns, with each
column representing an application or system-defined statistical counter. Each row of
cells corresponds to a time interval recorded in a statistics file.
PeriReporter consists of three tools:
PeriConsolidator - This program gathers all system and application statistics and consolidates them into 15 minute, hourly, daily, weekly, monthly and yearly files. PeriConsolidator is configured in the crontab (1) and set to run at a convenient time once a day, preferably when the MPS system load is relatively light (a sketch of such a crontab entry follows this list).
PeriDefiner - This program is a graphical utility which is used to set up the contents and the display of a specific report. After a report definition is created and saved it can be generated via the PeriReporter component of the tool.
PeriReporter - This program is a graphical utility which is used to generate reports. The report (created in PeriDefiner) must be specified, along with the date and the consolidation type, after which it can be generated and printed.

(1) Functionality similar to crontab has been added to the Windows operating system through the Avaya software installation.

The PeriReporter tool typically resides only on the node that is designated as the site
for statistical collection and reporting. Therefore, in a multi-node environment, the
PeriReporter tool only displays and is available on the statistics node.
For more information on using PeriReporter Tools and configuring it for use in single
and multi-node environments, see the PeriReporter User’s Guide.


PeriStudio
PeriStudio is a software tool used to create, manage, and edit audio elements for MPS
systems. Audio elements serve a variety of purposes in the voice processing
environment, including providing verbal information, messages, voice recordings,
touch-tones for phone line control, sound effects, music, etc. In the PeriStudio editor,
audio elements may be initially recorded, as well as edited in any way germane to
audio processing (e.g., volume levels, frequency range, duration of silent periods,
etc.). Included with the tool is a GUI-based audio (MMF file) editor, file management
and interchange facilities, and advanced audio signal processing capabilities.
Primarily, PeriStudio is used for:
• recording audio from a variety of sources (microphone, tape, line source, and other audio data format files).
• playing back recorded vocabulary elements for audible verification.
• editing all or portions of the recorded data (cut, paste, delete, scale length, etc.).
• importing and exporting audio items from or to other multimedia format files.
• performing advanced audio signal processing (equalization, normalization, mixing, filtering, etc.) of recorded elements to improve the sound quality.
• performing batch editing and processing on multiple elements in a single operation for obtaining consistent vocabularies as well as saving time.

Support is provided for both digital and analog environments, and digital and analog
elements may be stored in the same multi-media (vocabulary) file. Audio files created
in other software environments may also be imported into PeriStudio.
In order to provide a complete audio processing environment, an audio cassette tape
player, an external speaker and a telephone handset are recommended. The cassette
player is used to input recordings of speech to be digitized and processed for use on an
MPS system. The telephone handset is used to verify the speech quality of audio
elements as heard by system callers. The handset can also be used to record new
speech elements directly to the editor. The external speaker is useful during editing
and any subsequent audio processing operations to determine the effect of signal
modifications made by the user.
Generally, PeriStudio should be run on a separate development workstation. Should it
be necessary to run it on a workstation that is also actively processing phone calls,
PeriStudio should be used only during times of low system activity. Processing-intensive activities (e.g., digitizing elements, adjusting their lengths, etc.) may impact
the overall performance of the MPS system.
For additional information about using PeriStudio to create, edit and manage audio
elements in the MPS environment, refer to the PeriStudio User’s Guide.

PeriView
PeriView provides a suite of self-contained graphical tools used for MPS system
administration, operation, and control. PeriView also provides access to several other
distinct applications. Each tool is invoked independently and displays its own tool
subset.
The Launcher is PeriView’s main administrative tool. It provides a palette from which
to select the various tools and applications. For a detailed description of PeriView and
the use of its tool set, refer to the PeriView Reference Manual. For information on the
daily activities typically conducted with PeriView, see the Avaya Media Processing
Server Series System Operator’s Guide.
PeriView Launcher - The PeriView Launcher is used to define the MPS network’s composite entities, to graphically portray its hierarchical tree structure, and to launch other PeriView tools.
Application Manager - The Application Manager (APPMAN) is used to associate applications with phone ports. Using APPMAN, you may invoke and terminate applications, associate and disassociate them from phone ports, configure application run-time environments and line start order, and access supporting application maintenance functions. MPS component and application status can also be elicited from this tool.
Activity Monitor - The Activity Monitor is used to monitor the states of phone line activity and linked applications within the network. Activity is depicted by a set of graphs in near real time. Host and span status may also be monitored from this tool.
Alarm Viewer - The Alarm Viewer is used to view live and logged alarms. A filtering mechanism provides for selectively displaying alarms based on specified criteria in the viewer. A logging facility provides for the creation of user-defined history-oriented Alarm Log Files.
File Transfer - The File Transfer tool is used to copy files across the MPS network. Transfer capability provides for movement of a single file, a group of files, or a subdirectory tree structure. This tool is not available on the Windows operating system.
Task Scheduler - The Task Scheduler tool provides a mechanism for defining and scheduling processes that are to be performed as either a single occurrence or on a recurrent basis. This tool is not available on the Windows operating system.
SPIN - SPIN (System Performance and INtegrity monitor) is a diagnostic tool used to monitor interprocess and intercard communications to facilitate the identification of potential problems on MPS systems. SPIN is intended for use primarily by Certified Avaya Personnel.
PeriReporter - PeriReporter provides statistics and reports management functions for the MPS network. It generates predefined reports and collects and reports user-defined application statistics. (For additional information, see PeriReporter on page 55.)
PeriStudio - PeriStudio is used on MPS and stand-alone workstations to develop and edit vocabulary and sound files for voice applications. (For additional information, see PeriStudio on page 56.)
PeriProducer - PeriProducer is used on Avaya Media Processing Server (MPS) Series and stand-alone workstations to create and support interactive applications. (For additional information, see PeriProducer on page 54.)
PeriWWWord - Use PeriWWWord, the PeriWeb HTML Dictionary Editor, to create and maintain dictionaries (directory structures containing the HTML fragments) of Words (HTML fragments) and their HTML definitions (HTML tags) for PeriWeb applications. Available as part of PeriWeb (see below) on Solaris platforms only.
PeriSQL - PeriSQL is used to create, modify, and execute Structured Query Language (SQL) SELECT commands through a graphical interface. PeriSQL can be used as a stand-alone utility or with the PeriProducer SQL block.

PeriWeb
PeriWeb is used to both build new applications to take advantage of the Web, and also
extend existing IVR applications to the Internet user community. While IVR
applications use the telephone as the primary input/output device, World Wide Web
(WWW) browsers can provide an alternate visual interface for many types of
transaction oriented applications. PeriWeb software facilitates this access mode with
minimum changes. A user of a WWW browser initiates a “call” to an application by
clicking a hypertext link. PeriWeb “answers” the call and routes it to the proper
application. The application normally responds with a request to generate a greeting,
but PeriWeb translates this into a dynamic hypertext document and sends it to the
browser (caller). The user enters responses through forms or image maps, and
PeriWeb delivers these responses back to the application.
Standard PeriPro IVR applications connect callers to MPS systems, where recorded
voice prompts guide them to make service selections and enter information using
touch tones or spoken words. The MPS responds to the caller using recorded prompts,
generated speech, or fax output, as appropriate. For existing Avaya customers with
IVR applications, PeriWeb software provides Internet access with minimal changes to
the application programs. This leverages existing investment in application logic and
host/database connectivity and transaction processing.
For customers with existing PeriProducer applications, PeriWeb adds:
• access to the World Wide Web
• an environment that does not require application logic changes for access to
basic features (that is, IVR supported interactive transactions using a Web
browser)
• enhanced Web presentation without changes to application logic and
processing
In summary, its features allow PeriWeb to:
• co-exist with standard WWW servers, such as HTTPD, without relying upon them
• incorporate network-level security based on WWW encryption and
authentication standards
• support standard HTML tagging formats created with a text editor or Web
publishing tool
• perform Web transactions directly from the internet or through a relay server
• support the Keep-Alive feature of the HTTP/1.1 protocol
• support the PUT method for publishing new HTML pages
• support standard and extended log file generation
• enable Web-aware applications for enhanced presentation on the World Wide
Web using Web-oriented features
• support multiple languages for interaction content
• support Java based applications for browsers with Java capability
For information concerning PeriWeb details, see the PeriWeb User’s Guide. For
information on performing PeriPro IVR programming, see the PeriProducer User’s
Guide.


vsh
vsh is a text-based command shell which provides access to MPS processes. For both
Windows and Solaris, vsh is modeled after the Solaris csh in regard to input
processing and history, variable, and command substitutions. vsh may be invoked
from any command line. Up to 256 MPS shells may be in use at one time.
If only one component is configured in the vpshosts file for the node on which
vsh is initiated, the default MPS shell prompt indicates the current component type
and component number (that is, the component that is local to the node) as well as the
node from which the tool was launched. If more than one component is configured for
the node, a component list displays showing all components configured in the
vpshosts file for that node, including those that are remote to the node (if any).
Select a component by entering its corresponding number.
If vsh is invoked on a Speech Server node, the component list always displays first, regardless of the contents of the vpshosts file.
To display a list of components configured for a node, enter the comp command at
any time. This command identifies the currently configured components along with
their status. “Local” indicates the component is connected to the present node.
“Remote” indicates the component is connected to another node in the network.
Select a component by entering its corresponding number (“common” is not a
selectable component entry).

vsh/comp Commands Example


Any native Solaris or Windows commands entered in an MPS shell are issued to the
local node regardless of the current component. For example, if the current
component is mps1 and dire09 is the name of the current node, but the MPS shell
were launched on node tmsi03, ls lists the files in the directory on tmsi03, not on
dire09. To identify the local node when connected to a component remote to that
node, enter the hostname command at the prompt.
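The short session sketch below restates this behavior; the vsh> prompt is only a placeholder (the real MPS shell prompt shows the current component and node, as described above), and the indented parenthetical lines are descriptive notes rather than command output:

vsh> hostname
  (reports tmsi03, the node where the shell was launched)
vsh> ls
  (lists the files in the current directory on tmsi03, not on dire09)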
See The vpshosts File on page 93 for information about this configuration file. See
the Avaya Media Processing Server Series System Operator’s Guide for additional
information on command line interaction and control.


Base System
Configuration
This chapter covers:
1. Base System Configuration
2. System Startup
3. User Configuration Files
4. The MPSHOME Directory


Base System Configuration
The Avaya Media Processing Server (MPS) series system setup procedures involve
installing and configuring the operating system and proprietary system software. The
installation includes system facilities, and preconfigured root, administrative, and
user accounts. The accounts are set up to run the operating system and define any
required shell variables.
The software installation procedure creates the MPS Series home directory and places
all files into the subdirectories under it. The MPSHOME variable is used to identify the
home directory, and is set by default to /opt/vps for Solaris systems and
%MPSHOME%\PERIase on Windows systems.
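For example, on a Solaris node the location can be confirmed from a shell; the value shown is simply the documented default:

echo $MPSHOME
/opt/vps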
During system initialization, the various MPS processes reference configuration files
for site-specific setup. Files that are common to all defined MPS systems are located
in the directory path $MPSHOME/common (%MPSHOME%\common). Files that are
specific to an MPS are located in their own directories under $MPSHOME/mpsN
(%MPSHOME%\mpsN), where N indicates the particular numeric designation of the
MPS. On Solaris systems, the files that comprise the software package release are
stored in $MPSHOME/packages (symbolic links to these packages also exist
directly under the MPSHOME directory); on Windows systems, these files are stored in the
MPS home directory. Not all packages and files exist on all systems; this chapter deals
with those found in most basic MPS designs.
See the Avaya Media Processing Server Series System Operator’s Guide for a more
detailed discussion of the directory structure. See Installing Avaya Solaris Software on
the Avaya Media Processing Server Series and Installing Avaya Windows Software on
the Avaya Media Processing Server Series for matters regarding package installations.

System Startup
When started, the MPS software sets several system-wide parameters and commences
the Startup and Recovery Process (SRP).
For information about configuration and administration files common to all MPS
systems defined on a node, see The MPSHOME/common/etc Directory on page 88.
For information about the component-specific configuration and administration files
for each MPS defined on a node, see The MPSHOME/mpsN/etc Directory on
page 142. Information regarding TMS-specific processes can be found at The
MPSHOME/tmscommN Directory on page 138.
The startup files described in the following table are discussed further later in this
chapter:
Startup File / Description

S20vps.startup
    Script that executes when the Solaris node boots. It is installed by the PERImps package. This script sets several Solaris Environment Variables and starts SRP (the Startup and Recovery Process) (see page 70). This file is stored in the /etc/rc3.d directory. See Manually Starting and Stopping SRP on page 70 for more information about this script.

S30peri.plic
    Script that executes upon Solaris node bootup and starts the Avaya license server. Licenses are required for some Avaya packages to run. This file is installed by the PERIplic package in the /etc/rc3.d directory. For additional information on Avaya licensing and this file, see %MPSHOME%\PERIplic (/opt/vps/PERIplic) on page 134 and the Avaya Packages Install Guides.

vpsrc.sh / vpsrc.csh
    Define the MPS Solaris Environment Variables used by the Solaris shells sh and csh. These files perform the same function, but each is specific to its shell type. The files are stored in the /etc directory.

perirc.sh / perirc.csh
    The perirc.csh and perirc.sh files reside in the $MPSHOME/PERI/etc directory. They contain the default environment variables that are common to the package. Do not edit these files! They are subject to software updates by Avaya. If a customer site must add to or modify environment variables, set the site-specific environment variables in the siterc.csh and siterc.sh files.
    The vpsrc.csh and vpsrc.sh files are responsible for executing the perirc.csh and perirc.sh files, which contain the environment variables specific to the products that are installed on a node.


siterc.sh / siterc.csh
    The siterc.csh and siterc.sh files are designed to contain site-specific environment variables. When these files exist on an MPS node, they reside in the directory path $MPSHOME/common/etc. These files do not have to exist, and they can exist and be empty. If they do not exist, create them before entering site-specific environment variables; if they do exist, edit them to include the site-specific environment variables.
    The vpsrc.csh and vpsrc.sh files on the MPS node are responsible for executing the siterc.csh and siterc.sh files (if they exist). The values of the environment variables set in these files take precedence over the default values set in the perirc.csh and perirc.sh files.

hosts
    Defines all systems associated with a particular MPS. The node names identified in all other configuration files must be included in this file. On Solaris systems, this file is stored in the /etc directory. On Windows systems, it is stored in the directory \Winnt\System32\drivers\etc. (See The hosts File on page 83.)
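As a minimal sketch of how a site-specific variable might be added to the siterc files described above (the variable name MYSITE_DATA and its value are purely hypothetical):

# siterc.sh (Bourne shell syntax)
MYSITE_DATA=/export/site/data; export MYSITE_DATA

# siterc.csh (C shell syntax)
setenv MYSITE_DATA /export/site/data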

Solaris Startup/Shutdown
When a Solaris system boots, it executes various scripts that bring the system up. The
system software is started at run level 3 by means of the S20vps.startup script
file. The licensing mechanism is started by the S30peri.plic script, also at this
level.
Avaya has altered the reboot command to first perform a controlled shutdown and
then bring the system back up gracefully. A message displays indicating that the original
Solaris reboot command has been renamed to reboot.orig.
You can “flush” the memory on your system before rebooting by entering the reset
command from the ROM prompt. This ensures there are no processes still in memory
prior to the system coming back up.
The halt command has also been modified by Avaya to perform a controlled
shutdown by taking down system processes and functions in the proper sequence and
timing. If the halt command has been executed and the system does not respond,
execute the halt.orig command instead.
The table that follows contains detailed Solaris and MPS startup and shutdown
configuration information. For complete instructions on starting and stopping a
node/software/system, see the Avaya Media Processing Server Series System
Operator’s Guide.


System Initialization and Run States

Run level 0 (scripts: /etc/rc0.d; run control file: /sbin/rc0)
    Init state: Power-down state. Type: Powerdown.
    Use this level: To shut down the operating system so that it is safe to turn off power to the system.
    Functional summary: Stops system services and daemons; terminates all running processes; unmounts all file systems.

Run level 1 (scripts: /etc/rc1.d; run control file: /sbin/rc1)
    Init state: Administrative state. Type: Single user.
    Use this level: To access all available file systems with user logins allowed.
    Functional summary: Stops system services and daemons; terminates all running processes; unmounts all file systems; brings the system up in single-user mode.

Run level 2 (scripts: /etc/rc2.d; run control file: /sbin/rc2)
    Init state: Multiuser state. Type: Multiuser.
    Use this level: For normal operations. Multiple users can access the system and the entire file system. All daemons are running except for the NFS server daemons.
    Functional summary: Expanded functionality; see the note that follows this table for details.

Run level 3 (scripts: /etc/rc3.d; run control file: /sbin/rc3)
    Init state: Multiuser with NFS resources shared and Peri software. Type: Multiuser.
    Use this level: For normal operations with NFS resource-sharing available and to initiate any Avaya software startups.
    Functional summary: Cleans up sharetab; starts nfsd; starts mountd; if the system is a boot server, starts applicable services; starts snmpdx (if PERIsnmp is not installed).

Run level 4 (scripts: /etc/rc4.d; run control file: /sbin/rc4)
    Init state: Alternative multiuser state.
    This level is currently unavailable.

Run level 5 (scripts: /etc/rc5.d; run control file: /sbin/rc5)
    Init state: Power-down state. Type: Powerdown.
    Use this level: To shut down the operating system so that it is safe to turn off power to the system. If possible, automatically turn off system power on systems that support this feature.
    Functional summary: Runs the /etc/rc0.d/K* scripts to kill all active processes and unmount the file systems.

Run level 6 (scripts: /etc/rc6.d; run control file: /sbin/rc6)
    Init state: Reboot state. Type: Reboot.
    Use this level: To shut down the system to run level 0, and then reboot to multiuser state (or whatever level is the default - normally 3 - in the inittab file).
    Functional summary: Runs the /etc/rc0.d/K* scripts to kill all active processes and unmount the file systems.

Run level S or s (scripts: /etc/rcS.d; run control file: /sbin/rcS)
    Init state: Single-user state. Type: Single-user.
    Use this level: To run as a single user with all file systems mounted and accessible.
    Functional summary: Establishes a minimal network; mounts /usr, if necessary; sets the system name; checks the root (/) and /usr file systems; mounts pseudo file systems (/proc and /dev/fd); rebuilds the device entries for reconfiguration boots; checks and mounts other file systems to be mounted in single-user mode.

Note (run level 2 expanded functionality): Mounts all local file systems; enables disk quotas if at least one file system was mounted with the quota option; saves editor temporary files in /usr/preserve; removes any files in the /tmp directory; configures system accounting and default router; sets NIS domain and ifconfig netmask; reboots the system from the installation media or a boot server if either /.PREINSTALL or /AUTOINSTALL exists; starts various daemons and services; mounts all NFS entries.

Windows Startup/Shutdown
The Avaya Startup Service is installed with the PERIglobl package. During
bootup, the services manager loads the Avaya Startup Service, along with other
required subsystems.
The Avaya Startup Service reads a file named vpsboot.cfg from the system's
\winnt directory. The format of the file is as follows:
• A '#' character introduces a comment until the end-of-line.
• Each line of text is considered to be a self-contained command line suitable
for starting an application.
• The program being invoked must support the insert @term@ -X, which is the
termination synchronization mutex. The process polls this mutex, and when it is
signaled, the process exits. The mutex is signaled when the service is stopped.
Significant events are logged to the file vpsboot.log in the system's \winnt directory.
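As a rough sketch only of the format described above (the program path shown is hypothetical, and the exact placement of the @term@ insert on a real command line should be taken from a working configuration):

# vpsboot.cfg - each non-comment line is a self-contained command line
C:\custom\bin\mymonitor.exe @term@    # hypothetical program that supports the @term@ insert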
The following information is for use by Certified Avaya personnel only:
• If the service is stopped and started from the Services entry in the Control Panel, it again attempts to execute any commands listed in its configuration file.
• The command line option show (entered via the Control Panel — Services) allows the window associated with the started commands to be visible.
• The general mechanism for preventing Avaya software from starting at boot time is as follows:
  1. Access administrative privileges.
  2. Choose Control Panel — Services.
  3. Select Avaya Startup Service and click on the Startup button.
  4. In the new popup, change the radio box setting from Automatic to Manual.
  When the system is restarted, the Avaya software does not start. To restore automatic startup, follow the same procedure and restore the Automatic setting.

For Windows systems, the following services used in MPS operations are started at
boot time. Each service is installed by the indicated package.

Service                                 Installation Package
Avaya Startup Service                   PERIglobl
Avaya RSH Daemon                        PERIgrs
NuTCracker Service                      PERIgrs
Avaya License Service                   PERIplic
Avaya VPS Resources SNMP Daemon         PERIsnmp
SNMP EMANATE Adapter for Windows        PERIsnmp
SNMP EMANATE Master Agent               PERIsnmp
PeriWeb                                 PERIpweb

SRP (Startup and Recovery Process)
SRP (the Startup and Recovery Process) is the parent of all MPS software processes. It
is responsible for starting and stopping most other software processes, and for polling
them to ensure proper operation. It also restarts abnormally terminated programs.
One instance of SRP runs on each MPS node to control the systems associated with
that node. As SRP finishes starting on each node, an informational alarm message is
generated indicating that the system is running.
SRP has its own configuration file that provides for control of some internal functions.
For information about this file, see The srp.cfg File on page 89.
Each MPS node contains two classes of software processes, each of which has its own
set of configuration files processed by SRP:
• The VOS (Voice Operating Software) process group comprises the core system software for running the MPS system (see VOS Processes on page 39).
• The ASE (Application Services Environment) process group comprises the software that executes call processing and administrative applications (see ASE Processes on page 36).
In addition to controlling processes specific to each MPS system, SRP manages a
common MPS (that is, a virtual MPS), which is used to run processes requiring only one
instance per node. This includes system daemons, such as ALARMD.

Currently, SRP is capable of starting up to approximately 300 applications.
Manually Starting and Stopping SRP
Normally, SRP is automatically started at boot time. If SRP has been stopped, it can
be manually restarted.
If it is necessary to control the starting and stopping of SRP, it is first necessary to
disable the operations of the S20vps.startup script. To do this, become root
user and place an empty file with the name novps in the $MPSHOME directory.
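For example, on a Solaris node this can be done as follows (a minimal sketch, assuming MPSHOME is set in the root user's environment):

su -                      # become the root user
touch $MPSHOME/novps      # the presence of this empty file disables S20vps.startup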
To manually start SRP on Solaris systems, execute the following command:
/etc/rc3.d/S20vps.startup start
Starts the MPS system software. This command can be used to restart SRP.

To shut down the MPS software, execute the following command:
/etc/rc3.d/S20vps.startup stop
Stops the MPS software without stopping the Solaris software.

!  Do not use the Solaris kill command to stop SRP!

To manually start SRP on Windows systems, follow the menu path Start—Settings—Control Panel—Services—Avaya Startup Service—Start.
To shut down the MPS software, follow the menu path Start—Settings—Control Panel—Services—Avaya Startup Service—Stop. You must have
administrative permissions to perform these actions.

!  Do not use the Windows Task Manager to kill SRP!

VPS Topology Database Server (VTDB)
Many processes require information about available MPS systems and the processes
running on each node. This information is collected via the VPS Topology Database
(VTDB), which is used internally to store information about the MPS network.
The default well-known port used by other processes for SRP interaction on any node
is 5999. The default port used by the VTDB library for SRP interaction is 5998. These
default ports are intended to suit most configurations, and in most cases, these
numbers should not be modified. To override these defaults, appropriate
specifications must be made in the Solaris /etc/services or the
Winnt\system32\drivers\etc\services file on Windows.
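As a hedged sketch only: an override would add entries of the usual services-file form shown below. The service names used here (srp and vtdb) are illustrative assumptions, not names confirmed by this manual; the port values shown are simply the documented defaults.

srp     5999/tcp    # well-known port other processes use for SRP interaction
vtdb    5998/tcp    # port used by the VTDB library for SRP interaction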
If changes are made to any port entries in these files, SRP must be stopped and
restarted for the changes to take effect (see Manually Starting and Stopping SRP on
page 70).

Restart of Abnormally Terminated Programs
SRP can restart programs that have either terminated abnormally or exhibited faulty
operation. Abnormal termination is detected on Solaris systems via the SIGCHLD
signal, or by proxy messages from remote copies of SRP that received a SIGCHLD
signal. On Windows, a separate thread is started for each child process that SRP starts.
This thread blocks while monitoring the process handle of the child process; when the
kernel signals that handle to indicate that the child process has terminated, the
thread initiates the same child-termination processing that SRP performs under
the Solaris SIGCHLD signal handler. In either case, SRP restarts the process.
If the problem process is in the VOS software process group, a synchronization
phase is entered. That is, all other processes in the VOS process group are notified
that a process has terminated and should reset as if they were being started for the first
time. SRP restarts the process that exited, and all processes in the VOS software
process group are then allowed to begin operation.
Faulty operation is detected by means of the ping messages that SRP sends to
processes in the VOS group. If successive ping messages fail to generate replies, SRP
considers the process to be in an abnormal state and kills it. At that point, the system
behaves as if the process exited abnormally.

Communication with VOS Processes
For Solaris-based systems, multicast pinging is available as a subsystem within the
IPC library. The implementation of multicast pinging is similar to that of unicast
IPC-connection pinging, except that a ping transmission interval may be specified.
All pinging configuration is done on the SRP side; the VOS processes that receive pings
cannot be configured for these actions. (This is handled within callbacks defined by the IPC
library.)
For Windows systems, only unicast pinging is available.
On Solaris systems, unicast or multicast pinging can be performed by any process
whenever it is necessary to ping remote connections. The unicast method should be
used when pinging a single remote connection or a small number of remote
connections. Multicast pinging should be employed when there is a need to ping many
remote connections.

The following are the SRP configuration parameters used to configure multicast
pinging:

Multicast Group IP
    Internet Protocol address used for multicasting. The specified value must be in standard Internet dotted-decimal notation, and must be greater than or equal to 224.0.1.0 and less than or equal to 239.255.255.255. The IPC subsystem defines 225.0.0.1 as the default.
    SRP command line: -x mpip=<IP address>
    srp.cfg: MPip=<IP address>

Multicast Group port
    IPC port used for multicasting. The specified value must be greater than or equal to 1025 and less than or equal to 65535. The IPC subsystem defines 5996 as the default.
    SRP command line: -x mpport=<#>
    srp.cfg: MPport=<#>

Multicast period
    Time period between data transmissions. This value is specified in milliseconds and must be greater than the value given by the macro ITM_RESOLUTION_MS as defined in the ipcdefs.h file (this value is set to 10). The IPC subsystem defines 15000 as the default (that is, a transmission period of 15 seconds).
    SRP command line: -x mpperiod=<#>
    srp.cfg: MPperiod=<#>
    VSH console option (to SRP): srp ipctimeout mping=<#>
        This method should only be used when pinging is not currently active (that is, if SRP was started with either a -p or a -zp command line argument, or pinging was turned off via a -ping=off console option while SRP was running).

Maximum outstanding requests
    Maximum number of unanswered ping requests to listener processes before the SRP server is notified of the fault. The specified value must be greater than 0. The IPC subsystem defines 3 as the default.
    SRP command line: -x mpmaxout=<#>
    srp.cfg: MPmaxout=<#>
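For illustration, the corresponding srp.cfg entries might look like the following (the values shown are simply the documented defaults):

MPip=225.0.0.1
MPport=5996
MPperiod=15000
MPmaxout=3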

SRP Configuration Command Line Arguments
The SRP command line arguments are described below. Command line options for
SRP are not typically used, since SRP is started automatically at bootup. However,
command line options override the corresponding options in the
MPSHOME/common/etc/srp.cfg file.


srp  [-a] [-c] [-d] [-e] [-f ] [-g <#>] [-h] [-i ] [-j ] [-k <#>] [-l] [-n] [-p] [-q <#>] [-r <#>] [-s <#>] [-t <#>] [-u <#>] [-v <#>] [{-y|-z}[deklnprstTx]]

-a        Sets the aseLines startup delay in seconds. Default is 3.
-c        Truncates the log file.
-d        Generates debugging output to the console. (This is the same as the -yd option.)
-e        Enables extended logging. (This is the same as the -ye option.)
-f        Sets the default VOS priority class. Currently not supported on Windows. The setting should not be changed on Solaris.
-g <#>    Size of the swap space low-water mark in megabytes.
-h        Displays command line options.
-i        Default ASE application priority. Currently not supported on Windows. The setting should not be changed on Solaris.
-j        SRP priority. Currently not supported on Windows. The setting should not be changed on Solaris.
-k <#>    Size of the swap space high-water mark in megabytes.
-l        Disables logging. (This is the same as the -zl option.)
-n        Disables restarting VOS processes after termination. (This is the same as the -zn option.) This is primarily used for diagnostics and debugging.
-p        Disables pinging. (This is the same as the -zp option.)
-q <#>    Number of seconds for the runaway period. Default is 600.
-r <#>    Number of times that a process can restart (after exiting abnormally) within the runaway period set by the -q option. After the process has restarted the specified number of times within the given runaway period, no more restarts are attempted. Default is 3.
-s <#>    Log file size limit. The default maximum size is 5000000 bytes.
-t <#>    Proxy timeout. Times proxy messages, and determines the frequency of ping messages, between (remote) instances of SRP.
-u <#>    Disk low-water mark, specified in megabytes.
-v <#>    Disk high-water mark, specified in megabytes.


-y[deklnprstTxy]
-z[deklnprstTxy]
    Enables (-y) or disables (-z) the following functions:
    d => debugging
    e => extended logging
    k => killAll protocol
    l => logging
    n => VOS process restarting
    p => pinging
    r => registry debugging
    s => state change logging
    t => timestamping of external debugging (output of -d or -yd)
    T => extended timestamping wherever timestamping is performed (that is, through -yt, log file entries, or state change logging); extended timestamping indicates milliseconds in addition to the existing month/day/hour/minutes/seconds
    x => generating alarms for processes that exit normally

-y y
    If you start SRP with the -y y option/argument pair, the timestamps of entries made into the srp.log and srp_state.log files will contain the year. If the year is enabled in the timestamp and the timestamp is enabled by the -yt option/argument pair, the year also appears in the timestamps that are added to debug output sent to the console and vsh.
    You can permanently enable the year in the timestamp by doing one of the following:
    • Add the following entry to $MPSHOME/common/etc/srp.cfg:
      showYearInTimestamp=on
    • Modify the line in the /etc/rc3.d/S20vps.startup file that starts SRP to add the -y y command line option. For example, change the line
      cd ${VPSHOME}; srp >/dev/null 2>&1 &
      to
      cd ${VPSHOME}; srp -y y >/dev/null 2>&1 &

VSH Shell Commands
Once SRP is running, the VSH interface can be used to send commands that display
status information or affect the current state of the system. To send commands to
individual MPS systems, they must be sent through SRP.
To facilitate this, SRP supports a syntax construction that allows multiple commands

to be specified in a single entry intended for one or more MPS systems. Therefore, it is
important that the particular component intended to receive a given command be
clearly specified on the command line.
In general, the syntax of the command line takes the form of the name of the category
for which the command is intended, followed by a pound symbol (#), the component
type, a period, and the component number to which the command is being issued. For
example, vos#mps3 refers to the VOS software process group on MPS number 3.
This information is preceded by the srp command and followed by an argument:
thus, a complete command example based on the above is srp vos#mps3
-status.
The component IP address can be substituted for the node name (identifier) when
issuing SRP commands.

The syntax and argument format for a VSH SRP command are shown below:

srp  obj -arg[=val] [obj -arg[=val] [obj ...]]

obj
    An object (that is, a command destination) controlled by SRP, optionally specified with a component and node identifier. Any unrecognized command is compared to the process names in the applicable vos.cfg, ase.cfg, or gen.cfg file for a match. An object can be any of the following specifications:

    componentX
        Component. Includes (typically for MPS systems) common, oscar, mps, and tmscomm, or compX generically. X is a component specification; if it is not included, it is assumed that the component is the one on which vsh is logged in. A command issued with this object returns all instances of the argument applicable to the component only.

    subcomponentX
        Includes vos, ase, gen, and hardware. X is a component specification; if it is not included, it is assumed that the component is the one on which vsh is logged in. A command issued with this object returns all instances of the argument applicable to the subcomponent only.

    component spec subset
        A subset of a standard component specification in the general form <comp><num>.<subcomp>/<ipaddr>, where <comp> is any of the objects given in componentX, <num> is a component number, <subcomp> is any of those shown in subcomponentX, and <ipaddr> is a dotted-decimal IP address.

    subcomponent spec
        A subcomponent specification in the general form <subcomp>.<num>, where <subcomp> is any of those shown in subcomponentX and <num> is an associated component number.

    process
        A subset of a full thread specification starting with a process name, in the form <process>(<group>){<svctype>:<identifier>}, where <process> is a VOS, ASE, or GEN process name; <group> is a Group Name (intended to allow a process, such as a daemon, to segregate the processes that were connected to it and treat a specific group of them in the same way); <svctype> is a Service Type (for example, CCM provides a service of managing phone lines, and its Service Type is SVCTYPE_PHONE, defined as "phone"); and <identifier> is an identifier or list of identifiers corresponding to <svctype> (in this instance, phone lines are associated with CCM, so the <identifier> would be any applicable phone line number, and the pairing would be, for example, {phone:1}).

    app
        The set of lines associated with the applications bound to the current MPS. Except for the "Line" commands, the remaining arguments affect all applications on the system.

    none
        The command is intended for SRP itself.

-arg[=val]
    SRP arguments always begin with a dash ("-"), and arguments that take values must use the format -arg=val (rather than -arg val), because an arg specified without a dash prefix is interpreted as a new (unknown) command, and a val not prefaced with an equal sign is also treated this way.
    The list of arguments that SRP recognizes for each of the command destinations is as follows. Note that if an argument is sent to a group object, it affects all lower-level objects belonging to the named object. For example, sending -kill to the vos object kills all VOS processes.

    The following arguments are available to all destination objects:
        status      Displays current information about the named object. See SRP Status on page 81.
        ping        Toggles the ping flag for the named object. Takes a value equal to a process name or, for the app object, a line number.

    The following argument is valid for all destination objects except for components which support the hardware subcomponent and which have a target of "hardware" (or, for legacy instances, "cps"):
        kill        Kills the named object.

    The following arguments are valid for all destination objects except for components which support the hardware subcomponent and which have a target of "hardware" (or, for legacy instances, "cps"); where the target is SRP itself; or where no target is specified:
        stop        Stops the specified object (no restart).
        start       Starts the specified object.

    The following argument is available only to the objects mps, common, and comp:
        alarm       Causes SRP to generate a test alarm message to the alarm daemon, with the target object as the source component of the alarm.

    The following argument is valid for all destination objects except for components which support the vos subcomponent and which have a target of a VOS process; components which support the ase subcomponent and which have a target of an ASE process; and components which support the gen subcomponent and which have a target of a GEN process:
        gstatus     Similar to the status command but displays information about the process groups as a whole instead of about individual processes. (See SRP Status on page 81.)

    The following argument is available to these destination objects only: mps, common, comp; and components which support the vos subcomponent and which have a target of vos or a VOS process:
        reboot      Completely shuts down the process or group and restarts it with commensurate reinitialization.

    The following argument is available to these destination objects only: components which support the vos subcomponent and which have a target of vos; and components which support the gen subcomponent and which have a target of gen:
        restart     Similar to performing the stop and start arguments.

    The following arguments are available only to the destination object of components which support the ase subcomponent and which have a target of app:
        startLine
        stopLine
        killLine    Starts/stops/kills the application assigned to the line specified by a value equal to its line number. Stopping an application puts it into an EXITED state; killing an application stops it and then restarts it.


Examples:

srp vos#mps1 -kill
    Forcibly terminates all VOS processes on MPS number 1.

srp vos#mps1 -status ase#mps2 -gstatus
    Sends the status command to the VOS software process group on MPS number 1 and the group status (gstatus) command to the ASE process group on MPS number 2. (See SRP Status on page 81 for sample output from the status commands.)

srp app -killLine=111
    Stops and then restarts the application assigned to line 111 of the MPS associated with the VSH command line.

You can use a console option to enable displaying the year in the timestamps of
entries made to the srp.log and srp_state.log files. To add the year, do one
of the following:
• Add the following entry to the $MPSHOME/common/etc/srp.cfg file:
  showYearInTimestamp=on
• Issue the following command at a vsh prompt:
  vsh {1} -> srp -showYearInTimestamp=on
If you want to disable displaying the year in the timestamp, issue the following
command:
  vsh {2} -> srp -showYearInTimestamp=off
To see a full list of options available to SRP, enter srp -options at a vsh
command line.
Because unrecognized names are compared to the MPS and process names in the
vos.cfg, ase.cfg, and gen.cfg files, SRP substitutes known values from the
current vsh component. For example, if vsh is logged on to the common component
on tms2639, the command srp gen.0 -status is the same as the command
srp gen#common.0/tms2639 -status: thus, the former can be used as a
shorthand version of SRP commands.

SRP Status
The following example of the SRP status command shows information from all
MPS systems and components associated with node tms1000. The gstatus
report produces a summarized version of the status report and includes any remote
components defined for the node (in this case MPS number 1 on node xtc9).


Call Control Manager (CCM/CCMA)
Startup parameters for CCM can be specified as command line options in the
MPSHOME/mpsN/etc/vos.cfg file for the component CCM controls (see The
vos.cfg File on page 143). These options apply to the current instance of CCM, and
cannot be overridden directly from a command/shell line. If the parameters to CCM
need to be changed, the system must be stopped, the vos.cfg file edited, and the
system restarted. Configuration options available to CCM and CCMA are contained in
The ccm_phoneline.cfg File on page 151 and The ccm_admin.cfg File on
page 155, respectively.
The command line options for CCM are shown below:

ccm  [-c <class>] [-d <debug objects>] [-s <service IDs/lines>]

-c <class>
    Specifies whether CCM provides administrative (admin) or Telephony & Media Service (tms) services. The default for this option is tms.

-d <debug objects>
    Enables debugging from startup. All debugging is written to the default file $MPSHOME/common/log/ccm.dlog. The following debug objects are supported: LINE, ERROR, STARTUP, ALL.

-s <service IDs/lines>
    Specifies the service IDs/lines that CCM controls. This option is only required when the class is tms; it is ignored for the admin class. This option has no default.

The -d option should only be used to enable debugging of errors that happen before
the system is up (i.e., before being able to enable debugging via vsh). The -d option
is typically used for debugging administrative application bind issues in CCM.
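For illustration only (the exact vos.cfg entries for a given system should be left as installed), the two classes described above could appear on a CCM command line roughly as follows; the -s value here is a placeholder, not a real service list:

ccm -c admin                       # administrative CCM instance (CCMA)
ccm -c tms -s <service IDs/lines>  # telephony instance controlling the listed services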

Startup Files
The hosts File
The hosts file associates network host names with their IP addresses. Also,
optionally, an alias can be included in these name-number definitions. The first line
of the file contains the internal loopback address for pinging the local node. The
section that follows this can be edited to add or delete other nodes recognized by the
present one. You must be root user or have administrative privileges to edit the file.
The subsequent sections of the file contain chassis numbering and LAN information.
Each node contains entries for the hostname vps-to-dtc, tmsN, and nicN
(where N denotes a specific TMS or NIC number). These "N" numbers are the only
items that may be altered in this section of the hosts file. The IP addresses of these
entries must not be edited by the user.
In this file, the term dtc is equivalent to the term TMS in release 1.X MPS terminology.
The final section of the file contains diagnostic PPP (Point-to-Point Protocol)
communication addresses. The entries for ppp-DialIn and ppp-DialOut also
must not be altered.
For Solaris systems, this file is stored in the /etc directory. For Windows systems, it
is stored in C:\Winnt\system32\drivers\etc.

Example: hosts
127.0.0.1        localhost
#
# use www.nodomain.nocom line for systems not in a domain
# ctx servers to tms resource cards, private LAN
#
10.10.160.62     tms1000      loghost
10.10.160.42     is7502
10.10.160.3      vas1001
10.10.160.104    periblast
192.84.160.78    cowbird
192.84.161.17    pc105r
#
#192.168.101.200 scn1         scn1-to-tms    loghost www.nodomain.nocom
192.168.101.201  scn2         scn2-to-tms
192.168.101.202  scn3         scn3-to-tms
#
# If the VPS/is is connected to a network, change
# the above IP address and name as desired.
# When changing the VPS-is nodename, change all occurrences of
# VPS-is in this file. Remember to update /etc/ethers also
#
# tms resource cards, private LAN
#
192.168.101.1    vps-to-dtc   ctx-to-dtc
192.168.101.2    tms11
192.168.101.3    tms3
192.168.101.4    tms4
192.168.101.7    nic1
#
# IP Addresses associated with ctx chassis nbr 2
192.168.101.11   tms5
192.168.101.12   tms6
192.168.101.13   tms7
192.168.101.14   tms8
192.168.101.17   nic3
#
# IP Addresses associated with ctx chassis nbr 3
192.168.101.21   tms9
192.168.101.22   tms10
192.168.101.23   tms11
192.168.101.24   tms12
192.168.101.27   nic5
#
# IP Addresses associated with ctx chassis nbr 1 qfe ports
192.168.102.1    scn1qfe0
192.168.103.1    scn1qfe1
192.168.104.1    scn1qfe2
192.168.105.1    scn1qfe3
#
# IP Addresses associated with ctx chassis nbr 2 qfe ports
192.168.110.1    scn2qfe0
192.168.111.1    scn2qfe1
192.168.112.1    scn2qfe2
192.168.113.1    scn2qfe3
#
# IP Addresses associated with ctx chassis nbr 3 qfe ports
192.168.118.1    scn3qfe0
192.168.119.1    scn3qfe1
192.168.120.1    scn3qfe2
192.168.121.1    scn3qfe3
#
#
192.84.100.1     ppp-DialIn
192.84.100.2     ppp-DialOut


Entry / Description

localhost
    Internal loopback address for pinging the same machine.

loghost
    Local machine name (tms1000 in this example) precedes this entry, which in turn is preceded by its IP address.

vps-to-dtc, tmsN, nicN, ppp-DialIn, ppp-DialOut
    Internal LAN designations. Do not edit these lines.

scnNqfeX
    IP addresses associated with TMS chassis number N and QFE port numbers represented by X. Do not edit these lines.


User Configuration Files
The .xhtrahostsrc File
The $HOME/.xhtrahostsrc file lists the names of nodes where user access may
be required. A node should be listed in this file if pertinent status information may be
required of it and the node is not already included in the vpshosts file. The
.xhtrahostsrc file identifies any nodes, other than those defined in the
vpshosts file, that are to be displayed in the PeriView tree. An example of a node
you may want to add to this file is a PeriView Workstation node. To implement this
functionality, the .xhtrahostsrc file needs to reside in the $HOME directory of the
user that launched the PeriView tool ($HOME/.xhtrahostsrc).
To display nodes in the tree that are not identified in the vpshosts file, create this
file and place it in the user's home directory. Entries in this file must follow this
format:

<node name> <yes | no>

One of the keywords yes or no must appear after each node name, following a space
or tab. This indicates whether or not SRP is configured to run on the node. The state
of the node displays in PeriView’s tree only if SRP is configured as yes. Only one
node is allowed per line.
The following is an example of this file:
Example: .xhtrahostsrc
$1
#
kiblet yes
sheltimo yes
frankie no

In this example, all three nodes appear in PeriView’s tree when it is expanded, but
only kiblet and sheltimo display their states. Node frankie always appears
black (state unknown) because SRP is not configured to run there.
The first line in this file must contain only the string "$1". In some circumstances,
this must be added manually.
For more information on this file and the states of nodes as displayed in PeriView,
please see the PeriView Reference Manual.

The MPSHOME Directory
The MPS system installation process creates a home directory and several
subdirectories beneath it. On Solaris systems, this is represented as $MPSHOME
(/opt/vps by default). On Windows systems, this is indicated as %MPSHOME%.
See the Avaya Packages Install Guides and the Avaya Media Processing Server Series
System Operator’s Guide for more information about the home and subdirectories.
The relevant subdirectories (from a configuration standpoint) are identified in the
following table, and described in greater detail later in this chapter.
MPSHOME Directory / Description

common
    Contains files common to all MPS components associated with a particular node. (See The MPSHOME/common Directory on page 88 for more information.)

packages
    Contains the actual released software and sample configuration files. This directory is referenced by means of symbolic links in /opt/vps in the format PERIxxx (where xxx represents a package acronym). (See The $MPSHOME/packages Directory on page 125 for more information.)

PERIxxx
    Individual packages of actual released software and configuration files. These packages are located directly under %MPSHOME%. Use the Table of Contents to locate each package by name.

tmscommN
    Contains files used for bridging between and within MPS components. (See The MPSHOME/tmscommN Directory on page 138 for more information.)

mpsN
    Contains files unique to each MPS, where N denotes the particular MPS number. One mpsN directory exists for each MPS defined on the node with which it is associated. (See The MPSHOME/mpsN Directory on page 139 for more information.)
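As a rough illustration of a default Solaris layout (the particular component numbers and the package name shown are examples only and vary by installation):

/opt/vps/common                              # files common to all components on the node
/opt/vps/mps1                                # component-specific files for MPS number 1
/opt/vps/tmscomm1                            # bridging files for the tmscomm component
/opt/vps/packages                            # released software packages
/opt/vps/PERIglobl -> packages/PERIglobl     # symbolic link to an installed package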

On Solaris systems, if the defaults are not used, only the symbolic links to the Avaya
packages exist in /opt/vps.

On Windows systems, if the defaults are not used, the specified target directory
contains an Avaya subdirectory with the common and mpsN component directories,
the distribution directory, and the bin executables directory.


The MPSHOME/common Directory
The $MPSHOME/common (%MPSHOME%\common) directory contains files common
to all MPS components on a node. The subdirectories of relevance under common
are described in the following table.
MPSHOME/common Directory / Contents

etc
    Configuration, administration, and alarm database files. Contains a subdirectory structure of files that are generated from within PeriView and are common to all defined MPS components.

log
    Log files common to all defined MPS components. These files include filexfer.log, sched.log, *.dlog, alarm*.log, srp.log, and srp_state.log.

The MPSHOME/common/etc Directory
The $MPSHOME/common/etc (%MPSHOME%\common\etc) directory contains
configuration and administration files common to all MPS components associated
with the node. These files are used during system startup and are also responsible for
ensuring the continual operation of the MPS system. This directory also contains the
PeriView configuration and administration files.
These files are identified in the following table and further described in the passages
that come afterward. Subdirectories used for the purpose of containing files generated
by PeriView are also generic to the entire MPS system, and are described in the
following table. For information about the files in these subdirectories, refer to the
PeriView Reference Manual.
MPSHOME/common/etc File/Subdirectory / Description

srp.cfg
    Defines the configuration parameters for the Startup and Recovery Process (SRP).

vpshosts
    Lists all components known to the local node and the nodes to which those components are configured.

compgroups
    Allows modification of the default node for any process group listed in the vpshosts file.

gen.cfg
    Lists ancillary Solaris processes started at boot time.

global_users.cfg
    Lists the user names who have global view privileges in the PeriView GUI applications.

alarmd.cfg
    Defines filter set files to be loaded and processed upon system startup for this daemon. If no such files exist, or they are not to be started automatically, then the alarmd.cfg file is not present.

alarmf.cfg
    Defines filter set files to be loaded and processed upon system startup for this daemon. If no such files exist, or they are not to be started automatically, then the alarmf.cfg file is not present.

pmgr.cfg
    Defines pools to which resources are allocated and configures resources that belong to that pool. Also enables/disables debug logging.

periview.cfg
    Defines configuration parameters for PeriView.

/tms
    Contains configuration files copied over from the PERItms package. These include the .cfg, sys.cfg, and tms.cfg files.

/ents
    Contains the names of domains created by the PeriView Launcher.

/grps
    Contains the names of groups created by the PeriView Launcher.

/snps
    Contains the names of snapshots created by the PeriView Launcher.

/packages
    Contains the names of File Transfer Packages created by the PeriView Launcher.

/images
    Image files for PeriView and its tools.

The srp.cfg File
SRP, the process that spawns all other processes in the MPS system, has its own
configuration file, srp.cfg, which allows control of certain internal parameters.
This file is stored in the $MPSHOME/common/etc directory for Solaris systems,
and in the %MPSHOME%\common\etc directory on Windows-based systems.
As included in the system software, this file contains only comments that explain the
syntax of the available parameters. If this file does not exist at the time of system
startup, or if there are no actual commands, all parameters are assigned default values.
Detailed descriptions of these parameters are provided in the table SRP
Configuration Variables on page 90.
When a new srp.cfg file is installed, it does not overwrite an existing one. This
allows modifications in the older file to be retained. The modifications, if any, must
be manually added to the new file, and then the new file must be copied to the
common/etc directory.


Example: srp.cfg
# Note that options in this file will be overridden by command line options to srp
#
# vosProcRestart    = 1 (default) - restart procs that terminate
#                     = 0 - do not restart procs that terminate
# vosKillAll        = 1 (default) - kill all procs if one terminates
#                     = 0 - use MT_RESTART protocol if a proc terminates
# vosFlushQueue     = 1 (default) - flush queues for VOS procs
#                     = 0 - do not flush queues for VOS procs
# alarmOnExit       = 1 procs that exit should generate an alarm
#                     = 0 (default) procs that exit should not generate an alarm
# maxLogSize        = maximum-size-of-log-file (bytes) (default=1000000)
# defAseAppPri      = default-ase-apps-priority (default=0)
# srpPri            = srps-priority (default=55)
# vosPriClass       = default-vos-process-priority (default=3)
# runawayLimit      = number-restarts-allowed-in-runaway-period (default=3)
# runawayPeriod     = time-before-allow-more-SIGCHLDs (seconds) (default=600)
# proxyTimeout      = timeout-for-proxy-messages (seconds) (default=30)
# ping              = 1 (default) pinging on
#                     = 0 pinging off
# cdebug            = 1 debugging on
#                     = 0 debugging off
# log               = 1 logging on
#                     = 0 logging off
# elog              = 1 extended logging on
#                     = 0 extended logging off
# swapLWM           = swap low water mark
# swapHWM           = swap high water mark
# diskLWM           = disk low water mark
# diskHWM           = disk high water mark
# statelog          = 1 (default) state logging on
#                     = 0 state logging off
# MPip              = multicast-group-IP (default set by IPC="225.0.0.1")
# MPport            = multicast-group-port (default set by IPC=5996)
# MPperiod          = multicast pinging period (default set by IPC=15000ms)
# MPmaxout          = maximum outstanding multicast ping responses (default=3)
# aseLineStartDelay = delay between startup of last ASE process and first ASELINES
#                     process (default=2s; specified in seconds)
# regdisp           = display format for "registry" and "lookup" commands
#                     = v (default) for a vertically-oriented listing
#                     = h (old style) for a horizontally-oriented listing

SRP Configuration Variables

vosProcRestart
    Enables or disables the automatic restarting of terminated VOS processes. If this parameter is set to 1 (the default), restarting is enabled. If it is set to 0, terminated processes are not restarted. This should be modified only by Certified Avaya personnel.

vosKillAll
    Informs SRP whether it should invoke the normal restart synchronization method for subcomponent processes, or whether it should kill and restart all VOS processes in the event that any one process dies. If this variable is set to 1 (the default), all processes are forced to terminate. If it is set to 0, RESET messages are used to synchronize VOS processes. Some software products (like MTS) need the RESET protocol instead.

vosFlushQueue
    Sets IPC message queue flushing. This is the same as the IPC -Q command line option. 0 means the queue does not get flushed. 1 (the default) allows flushing. This clears all transmit queues upon receipt of an MT_RESET message from SRP (used during group resynchronization when vosKillAll is not enabled).

alarmOnExit
    Enables or disables alarm generation for processes (including applications) that exit normally. The default is 0 (alarms are not generated for normal terminations). 1 allows alarms to be generated.

maxLogSize
    Specifies the maximum size (in bytes) of the SRP log files. The default size is 1 MB.

defAseAppPri, srpPri, vosPriClass
    Determine the usage of real-time priorities. These settings should not be changed.

runawayLimit, runawayPeriod
    Limit the number of times a process can exit abnormally within a specified period before further attempts to restart it are aborted. This is useful for avoiding infinite restarts of processes that cannot run properly because external intervention is required (for example, malfunctioning hardware or poorly made configuration files). The defaults are 3 times within 600 seconds (10 minutes).

proxyTimeout
    Times proxy messages, and determines the frequency of ping messages, between (remote) instances of SRP. Default is 30 seconds.

ping
    Enables or disables ping message exchange between SRP and other VOS processes. 1 (enabled) is the default; 0 disables this function.

cdebug
    Enables or disables external logging (debugging). 1 (on) is the default; 0 (off) disables this function.

log
    Enables or disables logging to the file srp.log. 1 (on) is the default; 0 (off) disables this function.

elog
    Enables or disables extended logging to the file srp.log. 0 (off) is the default; 1 (on) enables this function.

swapLWM
    Sets the swap space low watermark. When the current swap space resource use reaches the high watermark, SRP generates an alarm. If the swap space usage drops below this low watermark level, SRP generates another alarm. When an argument is supplied, it specifies the low watermark alarm threshold as a percentage.

swapHWM
    Same as swapLWM, but for the high watermark.

diskLWM
    Same as swapLWM, but for the current disk resource.

diskHWM
    Same as diskLWM, but for the current disk resource’s high watermark.

statelog
    Enables or disables state change logging for all SRP object state changes in the file srp_state.log. SRP object logging is enabled (1) by default; 0 disables state logging.

MPip
    Specifies the multicast group IP address. The value supplied must be in standard Internet dotted-decimal notation, and within the range 224.0.1.0 through 239.255.255.255, inclusive. The default is 225.0.0.1.

MPport
    Specifies the multicast group port number for IPC. The value supplied must be within the range 1025 through 65535, inclusive. The default is 5996.

MPperiod
    Specifies the multicast period, in milliseconds, between transmissions. This value must be greater than 10 ms. The default is 15000, which provides a transmission period of 15 seconds.

MPmaxout
    Specifies the maximum number of outstanding ping responses from a listener process before the SRP server is notified of the fault. The value supplied must be greater than 0. The default value is 3.

aseLineStartDelay
    The time, in seconds, between the final ASE process entering the RUNNING state and the spawning of the first ASELINE process as defined through the aseLines.cfg file (the default is 2 seconds).

regdisp
    Formats the output of the registry and lookup commands to be either horizontally (h) or vertically (v) displayed. The default is v (vertical).

The vpshosts File
After the srp.cfg file is read, the vpshosts file is processed. This file is stored in
the $MPSHOME/common/etc directory for Solaris systems, and in the
%MPSHOME%\common\etc directory on Windows systems.
The vpshosts file lists all components configured for the MPS network. Each
component is identified by its component number, the name of the node where it
resides, and the component’s type. It is required that this file exist on each node in the
network. Typically, the file’s contents are the same across all nodes; however, this
will vary in instances where additional component information is desired on a
particular node.
The vpshosts file is created/updated on a node, automatically, during the system
installation procedure. The file only needs to be edited to include components that
have not been installed on the node and reside on other nodes in the network. By
default, a node is only aware of those components (in the MPS network) that are
explicitly defined in its vpshosts file. You must edit a node’s vpshosts file to
make the node aware of components that are installed on a different node.
A node name specified as a dash (-) implies the local node. For each component
defined for the local node in the vpshosts file, a corresponding directory must exist
in the $MPSHOME directory for Solaris systems, and in the %MPSHOME% directory on
Windows systems. For example, if four MPS components are defined in the
vpshosts file, the following subdirectories must exist: $MPSHOME/mps1
(%MPSHOME%\mps1), mps2, mps3, and mps4. They may be renumbered, if
desired. If the MPS components are renumbered, the node must be rebooted in order
for the changes to take effect. The file also contains an entry for the tmscomm
component.
The following is an example of the vpshosts file:
Example: vpshosts
$1
#
# vpshosts
#
# This file was automatically generated by vhman.
# Wed Apr 26 19:16:25 2000
#
# COMP    NODE       TYPE
  110     -          mps
  1       -          tmscomm
  56      tms3003    mps

The first line in this file must contain only the string "$1". If this line is missing, it
must be added manually.

The vpshosts file is copied over from the MPSHOME/PERIglobl/etc
directory and updated by means of the vhman command, issued from the command
line. The vhman command can also be used to add or delete components from an
existing vpshosts file. In general, there is no need to execute this command
because the system comes preconfigured from the factory.
vhman  [-c <#>] [-t <type>] [-h <host>] [-H <host>] [-a | -d] [-q] [-n] [-f]

-c <#>
    Numeric designation of the component.

-t <type>
    Type of component. Valid values include mps, tmscomm, oscar, ctx, and mts.

-h <host>
    Host name associated with the component entry. A dash ("-") specifies the local host (which is the default).

-H <host>
    The host name of the vpshosts file to change. The default is to assume the local host. This option allows you to change a vpshosts file remote to the node vhman is being run from.

-a
    Adds the specified component to the vpshosts file.

-d
    Deletes the specified component from the vpshosts file.

-q
    Quiet mode. In this mode, vhman does not display status or error messages.

-n
    Disables display of the vpshosts column headings.

-f
    Forces the current vpshosts file to be the latest version.

The vhman functionality can be executed in a GUI environment by using the
xvhman tool.
PeriView needs to be configured with the information for all nodes that it is to control.
This command would be issued on a PeriView node for the purpose of reconfiguring
node names and component numbers. If the node configuration is changed, PeriView
must be restarted.
For specific information about the vpshosts file (including editing and updating)
and xvhman, refer to the PeriView Reference Manual.

The compgroups File
The compgroups file allows any of the groups (subcomponents) of any of the
components listed in the vpshosts file to reside on a node different from the node
hosting the component. If an entry in the compgroups file exists, it changes the
meaning of the entry in the vpshosts file to the specified value. For example, if the
vpshosts file has mpsX configured on nodeY, the compgroups file allows, for
instance, the vos subcomponent of mpsX to reside on nodeZ instead of on nodeY.
Otherwise, this file typically contains only descriptive comments. This functionality is
rarely used.
During installation on a Solaris system, this file is copied over from the
$MPSHOME/PERIglobl/etc directory. The file is stored in the
$MPSHOME/common/etc directory for Solaris systems, and in the
%MPSHOME%\PERIglobl\etc directory on Windows systems.
The following is an example of the compgroups file:
Example: compgroups
#
# Example compgroups file.
#
# Proc group can be one of VOS, ASE, HARDWARE, GEN.
#
# PROCGRP      ALTHOST
VOS            WWWW
ASE            XXXX
HARDWARE       YYYY
GEN            ZZZZ

A different default host can be specified for any process group. If an entry for a
particular group is missing, or if the file itself is missing, the default meaning of "-"
(local host) is used.
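Tying this back to the earlier scenario (mpsX hosted on nodeY, with its VOS group moved to nodeZ), the corresponding entry in nodeY's compgroups file would be a single line of the form shown above; nodeZ here is, of course, just the placeholder name used in that scenario:

# PROCGRP      ALTHOST
VOS            nodeZ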

The gen.cfg File
The gen.cfg file lists ancillary software processes that are to be started upon system
initialization. These are commands and custom software that SRP must monitor.
Processes in this file are common to all components on a node and require only one
instance per node. If you add any additional (user-defined) processes, be sure they
meet these criteria.
This file is stored in the $MPSHOME/common/etc directory for Solaris systems,
and in the %MPSHOME%\common\etc directory on Windows systems.
During installation, this file is copied over from the $MPSHOME/PERIglobl/etc
or %MPSHOME%\PERIglobl\etc directory. The file, as used by
SRP, is read from common/etc.
The following is an example of this file on a Windows system. The Solaris version of
the file follows immediately thereafter:
Example: gen.cfg
$3
#
# Example gen.cfg file.
#
# All executables listed in this file should support the
# Windows convention for srp-triggered termination. If you do not
# know what this means, please do not add any entries to this file.
#
# NAME          NODE  PORT  is-VOS-CLASS  PRI  COMMAND LINE
#
alarmd          -     -     1             0    alarmd
alarmf          -     -     1             0    alarmf
configd         -     -     1             0    configd
conout          -     -     1             0    conout
nriod           -     -     1             0    nriod
screendaemon    -     -     0             0    screendaemon
pmgr            -     -     1             0    pmgr
#vsupd          -     -     0             0    vsupd
#periweb        -     -     0             0    periweb
#proxy          -     -     0             0    "proxy -S ccss -L cons -l info -k 10 -n"
pbootpd         -     -     0             0    pbootpd
ptftpd          -     -     0             0    ptftpd
psched          -     -     0             0    "psched -run"
cclpd           -     -     1             0    cclpd


Example: gen.cfg
$3
#
# Example gen.cfg file.
#
# NAME          NODE  PORT  is-VOS-CLASS  PRI  COMMAND LINE
#
alarmd          -     -     1             0    alarmd
alarmf          -     -     1             0    alarmf
configd         -     -     1             0    configd
conout          -     -     1             0    conout
rpc.riod        -     -     0             0    rpc.riod
nriod           -     -     1             0    nriod
#screendaemon   -     -     0             0    screendaemon
consoled        -     -     0             0    consoled
pmgr            -     -     1             0    pmgr
#vsupd          -     -     0             0    vsupd
#periweb        -     -     0             0    periweb
#proxy          -     -     0             0    "proxy -S ccss -L cons -l info -k 10 -n"

Field Name

Description

NAME

Shorthand notation by which that process is known to SRP, vsh,
and any other process that attempts to connect to it by name
(essentially the process' well-known system name).

NODE

Node name the process is running on. A dash (-) indicates the
local node.

PORT

Specifies the well-known port the process uses for IPC
communication with other processes. If a dash is present, it
indicates that the system fills in the port value at run time. A static
port number only needs to be assigned for those processes that do
not register with SRP, and must not conflict with the port numbers
configured in the Solaris /etc/services file.

is-VOS-CLASS

Indicates whether or not the process uses IPC (1 is yes, 0 is no). By
default, any processes listed in older versions of gen.cfg are
classified as not using IPC (set to 0).

PRI

Real-time (RT) priority. This field is currently not used on Windows.
A 0 indicates that the process should be run under the time-sharing
priority class.


COMMAND LINE

Actual command line (binary) to be executed. Command line
arguments can be specified if the command and all arguments are
enclosed in quotes (see proxy in examples above). The normal
shell backslash escape mode ("\") may be used to embed quotes
in the command line. A command with a path component with a
leading slash is assumed to be a full path designation and SRP
makes no other attempt to locate the program. If the command
path doesn’t begin with a slash, SRP uses the (system) PATH
environment variable to locate the item. Avaya package
installations add the various binary location paths to this
environment variable during their executions.

The first line in a gen.cfg file must contain only the string "$3". In some
circumstances, this must be added manually.

For Windows systems, only certified programs may be added to the gen.cfg file.
Consult your system administrator before adding program names to this file.
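
For illustration only, a user-defined entry for a hypothetical monitored process (the name mydaemon and its path are placeholders, not Avaya software) would follow the same column layout, with quotes enclosing any command line arguments:

# NAME         NODE  PORT  is-VOS-CLASS  PRI  COMMAND LINE
mydaemon       -     -     0             0    "/opt/custom/bin/mydaemon -verbose"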
The global_users.cfg File
The global_users.cfg file lists the users who have global view privileges in
PeriView’s APPMAN and Monitor tools. On Solaris systems, this file can only be
modified by a user with root privileges. On Windows systems, this file should only
be edited by users with administrative privileges.
This file is stored in the $MPSHOME/common/etc directory on Solaris systems, and
in the %MPSHOME%\common\etc directory on Windows systems. The following is
an example of this file:
Example: global_users.cfg
#
# global_users.cfg
#
# The user names in this file will have global view privileges.
#
# format:
#    globalUser=username
#
globalUser=peri

For specific information about PeriView and data views, refer to the PeriView
Reference Manual.

The alarmd.cfg and alarmf.cfg Files
These files contain a reference to any filter set file that is to be instituted upon system
startup. Filter sets are used to limit the types and number of alarms that are passed by
the daemons for eventual display by the alarm viewers, or to initiate some other action
in response to receiving alarms that satisfy certain criteria. For additional information,
see Alarm Filtering on page 203. The addflt command is used to enable a filter
set file; the clearflt command to disable it. References in these configuration
files must include the full path name to the filter set file unless it resides in the
MPSHOME/common/etc subdirectory. In that case the name of the file itself is
sufficient. In the example below, the filter set file filter_set.flt exists in
/home/peri. Only one filter set file may be active at a time. This file needs to be
created only on systems that take advantage of alarm filter sets at startup, and it exists only on such systems.
Example: alarm*.cfg
#
addflt /home/peri/filter_set.flt

Filter sets, though standard ASCII files, should be appended with the .flt
extension.
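
Because a filter set file residing in MPSHOME/common/etc can be referenced by name alone, an equivalent entry for a file stored there (the file name is illustrative) would simply be:

addflt filter_set.flt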

The pmgr.cfg File
This file sets parameters used by Avaya’s Pool Manager process. The Pool Manager
provides resource management of all registered resources on the local node (a
registered resource can also be a pool of resources). During installation, this file is
copied over from the $MPSHOME/PERIglobl/etc
or %MPSHOME%\PERIglobl\commonetc\etc directory.
Basic descriptions and formats of file entries are given immediately preceding the
actual data to which they apply, and are relatively self-explanatory. The following is
an example of the default file installed with the system. See the table that follows for a
more detailed explanation of each entry.
Example: pmgr.cfg
#
# Configuration file for PMGR process
#
#
# Enables debugging to a file
#
dlogDbgOn FILE,ERR
#dlogDbgOn FILE,GEN

.
.
.
#
# Defines a new pool called 'poolname'.
#
#defpool poolname
#
defpool line.in
defpool line.out
#
# Configures the resources that belongs in each pool
#
cfgrsrc line.in,phone.1-24.vps.*
cfgrsrc line.out,phone.25-48.vps.*

In theory any dlog command that supports the debug objects ERR and GEN can be
entered in the configuration file. In practice, only those commands in the following
table are. Though these commands are shown in this document prefaced with pmgr,
the actual configuration file entry can be entered without the acronym.


Field Name

Description

dlogDbgOn

Enables debugging to a file for errors only (ERR, the default) or all
output (GEN) for this process. This file is located in
MPSHOME/common/log as pmgr.dlog by default. The file
name/location can be changed via the pmgr dlogfilename
command; the default size of 100k can be changed through the
pmgr dlogfilesize command.
Debug output can also be sent to a capture buffer, but should not be
sent to STDERR, which is intended for Certified Avaya personnel
only.

defpool

A descriptive character string which identifies a particular pool of
resources. This string must never start with the @ character due to
an application programming conflict. The default values for this file
are line.in (all inbound lines) and line.out (all outbound lines).

cfgrsrc

Defines the resources that make up each pool identified by the
defpool entry above. The general format for a resource
configuration is cfgrsrc <pool name>,<resource>.<instance(s)>.<component type>.<component #>
(this last entry typically being a number). Wildcards (*) may be used for the
resource instance and component ID. By default the line.in pool
contains phone line numbers 1 through 24 on any MPS on the node;
line.out maintains the same configuration for lines 25 through
48. These values should be adjusted to fit the number of lines,
MPSs, and expected call usage on each system.

The only other command that might typically be set in this file is
pmgr allocRetry. This configures the number of allocation retries that
should be made before sending a failure back to the application if an allocation fails.
The default value is 3.
For details on this and any other PMGR command, see the PMGR Commands section
in the Avaya Media Processing Server Series Command Reference Manual.
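
As an illustration only (the pool split and retry count are hypothetical, not recommendations), a node with 96 lines per MPS that reserves the last span for outbound calls, and that retries failed allocations five times, might contain entries such as:

defpool line.in
defpool line.out
cfgrsrc line.in,phone.1-72.vps.*
cfgrsrc line.out,phone.73-96.vps.*
allocRetry 5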

The periview.cfg File
The periview.cfg file defines configuration parameters for the PeriView
Launcher, which is PeriView’s main administrative tool. This file is stored in the
$MPSHOME/common/etc (%MPSHOME%\common\etc) directory, and is not
typically edited by the user. Such editing may impede or halt operation of the
PeriView GUI. For specific information about PeriView, refer to the PeriView
Reference Manual. The following is an example of this file:
Example: periview.cfg
#    This file contains the configuration information for the
#    periview launcher pertaining to the applications it should
#    launch, and the images and menu strings used to designate them.
#
#    Format:
#    Menu string    Image host    Image path/file    Command
#                   Send tree data  Send ipc data  Send view data  Send login data
#
#    where 'Menu string'      is a quote enclosed string for the
#                             launch menu.
#          'Image host'       is the host where the image 'Image
#                             path/file' is located.
#                             '-' will indicate the current host.
#          'Image path/file'  is the path and filename of the
#                             image. If no path is given, the path
#                             $MPSHOME/common/etc/images is assumed.
#                             The file must be in xpm format.
#          'Send tree data'   is 'yes' if the tree's data should
#                             be sent to the application, and 'no'
#                             otherwise.
#          'Send ipc data'    is 'yes' if the ipc timeout value
#                             should be sent to the application,
#                             and no otherwise.
#          'Send view data'   is 'yes' if the view value should
#                             be sent to the application, and no
#                             otherwise.
#          'Send login data'  is 'yes' if the login should be sent
#                             to the application, and no otherwise.
#
#-----------------------------------------------------------------------------
"Application Manager..."   - appman.64xpm      appman         yes yes yes yes
"Activity Monitor..."      - monitor.64xpm     monitor        yes yes yes yes
"Alarm Viewer..."          - alrmview.64xpm    alrmview       no  no  no  no
"File Transfer..."         - filexfer.64xpm    filexfer       no  no  no  no
"Task Scheduler..."        - sched.64xpm       peri_schedule  no  no  no  no
"SPIN..."                  - spin.64xpm        spin           yes yes yes yes
"PeriReporter Tools..."    - prpttools.64xpm   PrptLaunch     no  no  no  no
"Peri Studio..."           - peristudio.64xpm  peristudio     no  no  no  no
"Peri Producer..."         - pproi.64xpm       peripro        no  no  no  no
"PeriWWWord..."            - periwwword.64xpm  pwwword        no  no  no  no
"PeriSQL..."               - perisql.64xpm     perisql        no  no  no  no
"Online Documentation..."  - onlinedoc.64xpm   peridoc.bat    no  no  no  no

The Windows version of the periview.cfg file does not contain entries for
"File Transfer", "Task Scheduler", or "PeriWWWord".

The MPSHOME/common/etc/tms Directory
This directory contains the configuration files installed with and copied over from the
PERItms package (see %MPSHOME%\PERItms - /opt/vps/PERItms on page
134). Included are several protocol configuration files referenced by the tms.cfg
file. These protocol configuration (*_proto.cfg) files are not detailed in this manual but instead can be
found in the Avaya Media Processing Server Series Telephony Reference Manual.
This directory is referenced by the system for the files to process during configuration.
The sys.cfg File
This file specifies parameters used by Avaya’s Server Address Resolution Protocol
(SARP). This protocol is used by software on MPS nodes to resolve internet addresses
for connecting to TMS’ and NICs.
A copy of the default sys.cfg file is maintained in the
MPSHOME/PERItms/site-cfg subdirectory. The system reads and processes
the sys.cfg file located in the MPSHOME/common/etc/tms subdirectory.
Any customizing or changes should be made to this file: if it is necessary to revert to a
"clean" version of the file, copy the sys.cfg file in the /site-cfg subdirectory
to the /tms subdirectory, then proceed to make modifications as required.
Basic descriptions and formats of file entries are given immediately preceding the
actual data to which they apply, and are relatively self-explanatory. See the table that
follows for a more detailed explanation of each.
Example: sys.cfg

Sheet 1 of 2

#
# File for configuring Periphonic's server address resoulution protocol (SARP)
# used by software on CTX nodes to resolve Internet Addresses for connecting
# to DTCs and Network Interface Cards (NIC's).
#
#
# Port Number
#
LRMPORT   30000
ADSMPORT  30001
SIMPORT   30002
VMEMPORT  30003
SARPPORT  30010


Example: sys.cfg

Sheet 2 of 2

# EnetA broadCastIP
# Synopsis:
#    Specify the broadccast IP address of the network connected
#    from this host to ENET-A of the DTC's and NICs.
#    Default is 192.168.101.255
#
#    broadcastIP = broadcast IP address from
#
ENET-A 192.168.101.255

# iRepeat n
# Synopsis:
#    Specify the interval for repeating UDP SARP broadcasts
#    while in the initial period (iPeriod) occurring after
#    a ctx node start up. After the initial period expires
#    broadcasts will be repeated at the repeat interval.
#    Default initial repeat is 10 seconds.
#
#    n = number of seconds between broadcasts
#
iRepeat 10

# iPeriod n
# Synopsis:
#    Specify the duration of the initial period which occurs after
#    ctx node start up. During this period UDP SARP broadcasts will
#    be repeated at the iRepeat rate on the networks listed above.
#    After expiration of the initial period, broadcasts will repeat
#    at the repeat interval. Default iPeriod is 600 seconds (10 minutes).
#
#    n = number of seconds of broadcasting every iRepeat seconds
#
iPeriod 600

# repeat n
# Synopsis:
#    Specify the interval for repeating UDP SARP broadcasts
#    after expiration of the initial period (iPeriod).
#    Default repeat interval is 60 seconds.
#
#    n = number of seconds between broadcasts
#
repeat 60

In this file, the term dtc is the same as TMS of release 1.X MPS terminology.


Field Name

Description

Port Number

Numbers assigned for the Load Resource Management
(LRM), Alarm, Diagnostic, and Statistics Management
(ADSM), Call SIMulator (SIM), Voice Memory (VMEM), and
SARP ports.

EnetA broadCastIP

Specifies the broadcast IP address of the network connected
from the host node to ENET-A of the TMS’ and NICs. The
default address is 192.168.101.255.

iRepeat n

Specifies the interval in n seconds for repeating UDP SARP
broadcasts while in the initial period (iPeriod - see next).
After the initial period expires broadcasts are iterated at the
repeat interval. The default initial repeat is 10 seconds.

iPeriod n

Specifies the duration in n seconds of the initial period. This
is the period of time which occurs after an MPS node start
up. During this period UDP SARP broadcasts are repeated
at the iRepeat rate (see above) on the networks listed at
EnetA broadCastIP. After expiration of the initial period,
broadcasts are iterated at the repeat interval (see next).
The default iPeriod is 600 seconds (10 minutes).

repeat n

Specifies the duration in n seconds of the interval for
repeating UDP SARP broadcasts after expiration of the
initial period (iPeriod - see above). The default repeat
interval is 60 seconds.
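
As an illustration only (the values are arbitrary and not recommendations), a site that wants more frequent SARP broadcasts during startup and a slower steady-state rate could set the following in the MPSHOME/common/etc/tms/sys.cfg file:

iRepeat 5
iPeriod 300
repeat 90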

The tms.cfg File
The sections that follow contain printouts from a sample tms.cfg file, divided into
sections. They are presented in the same order of appearance as in tms.cfg. In
addition to the configuration settings, the file contains narrative descriptions
(comments) explaining the purpose of the configuration variables and settings in that
section. This document provides more detailed explanations and references between
sections to show relationship of entries throughout the file.
System Description Section
The system description section ([SYSTEM]) contains definitions for resource set
profiles (RSET_PROFILE) and system parameters (PARAM SYS_).
The first uncommented line in the section contains the string [SYSTEM] to indicate
the section. The uncommented lines that follow contain the RSet profile definitions
(one per line). Each RSet profile definition contains the string RSET_PROFILE=
followed by the RSet profile name, the anchor resource class name, and additional
resource class names, if any. The RSet profile name is also used in the Resource Set
Table Section on page 118. The anchor is the resource to which all other resources in
the RSet are connected. The anchor will typically be a system device such as a phone
line; however, any resource can be the anchor. The class name (see TMS Resource
Definition Section on page 108) is always followed by :1 (additional numbers are
reserved for future enhancement).

System Parameters
System parameters are used to override the default settings in the TMS. A comment is
usually used to describe the effects of the parameter setting, followed by the
uncommented line that assigns the value to the parameter. The statement line has the
syntax PARAM SYS_ = . The PARAM string is a
keyword to indicate that a parameter is to be set. The SYS string indicates it is a
system parameter (other types of parameters can be specified).
The system parameter SYS_outdial_method defines the resources the TMS uses
when generating tones. Generated tones include DTMF, FAX, and Call Progress. The
two options available are player (OUTDIAL_PLY) and tone generator
(OUTDIAL_TGEN). The default is OUTDIAL_PLY.
The system parameter SYS_coding_law is used to set the system (backplane)
coding law. The default is ulaw. If SYS_coding_law has been changed from the
default, it needs to be set to the network law (ulaw or alaw).
The system parameter SYS_NETCODINGLAW sets the network coding law. This
parameter is used to override the default setting of the ulaw or alaw encoding on the
DCC. By default, the DCC coding law is set based on the Phone Line Interface (PLI)
on the TMS. For an E1 card, the default is alaw and for T1 card, the default is ulaw.
To override the defaults, set SYS_NETCODINGLAW to either ulaw or alaw based

on the site requirements. If this parameter is set incorrectly, it can affect audio quality.
For more information about SYS_coding_law, SYS_NETCODINGLAW and
audio quality issues, refer to the section MPS 2.1 Audio Click Prevention in the
document Avaya Media Processing Server Series Speech Server 6.0.1 Reference
Guide.
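
For illustration only (whether these values suit a given site depends on its trunking), an E1 site that keeps ulaw on the backplane while presenting alaw to the network would add parameter lines of the form:

PARAM SYS_coding_law = ulaw
PARAM SYS_NETCODINGLAW = alaw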

System Description Section

;%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
; FILENAME :: $Id: tms-mps1000.cfg,v 1.1.2.7 2002/03/13 15:02:50 clnroom Exp $
;%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
;******************************************************************
;
;  S Y S T E M   D E S C R I P T I O N   S E C T I O N
;
;******************************************************************
;
; This section specifies the system param definition area. It
; describes the resource set profiles and system parameters such
; as law to use for the MPS system
;
[SYSTEM]
; rset profiles are defined here and referenced in the line definition section
; defined below. These profiles specify how to build an rset and what resources
; are to be added.
;
; The command format is :
;
; RSET_PROFILE =  default:1  ...
;
;
RSET_PROFILE = MPSLine LINE:1 player:1 dtmf:1    ; Inbound rset profile
;
; System Parameters
;
; The following section contains system parameters. Any parameter defined here will
override
; its hard coded default value in the TMS.
;
; Coding Law for the System.
;
PARAM SYS_coding_law = ulaw
; define law of box
;
; Should outdialing try to use a player or a tone generator first.
;
PARAM SYS_outdial_method = OUTDIAL_PLY

TMS Resource Definition Section
The TMS resource definition section starts with resource configurations
([RSRC_CONFIG]). Each configuration is defined by a configuration name
(CONFIG_NAME) and a set of resource classes to load ([CLASS]). The resource
classes define the resources that will be loaded for the respective CONFIG_NAME.
There can be multiple resource configurations. Each one must have a unique name
and contain at least one class definition, but can contain more.
The first part of each resource configuration starts with an uncommented line
containing the string [RSRC_CONFIG] to indicate the section. The next
uncommented line contains the string CONFIG_NAME = . The
 value is an arbitrary name, but note that this name is also used in
the DTC Map Section on page 112.
Each resource class definition starts with a line containing [CLASS], followed by
separate lines containing COUNT = , CLASS_NAME =
, and CDF = .
Default settings for the class can be set under Default Params. The keyword
PARAM is followed by the parameter and value.

System hardware limitations should be considered when configuring the COUNT
value in the [CLASS] definition section. See Resource Limitations on page 111.

TMS Resource Definition Section

;******************************************************************
;
;  T M S   R S R C   D E F I N I T I O N   S E C T I O N
;
;******************************************************************
;
;  R S R C   C O N F I G U R A T I O N
;
; The following section defines the configurations that may be used
; for a tms in this MPS system. The configuration is referenced by
; the configuration name.
;
; [RSRC_CONFIG]
; CONFIG_NAME = BasicConfig
;
; This section is used to define the resources that should be loaded.
; (players, recorders, asr, fax) for this configuration. This will
; allow additional dtmf,cpd,tgen,r2 resources to be loaded as well
; the ones specified in the proto.cfg files.
;
; This section specifies the configuration definition file (CDF) to use
; and the class name (optional) to assign to the created line resources.
; Count specifies the number of resources requested to be loaded
; configuration. This number will be checked against the number of
; licences available in order to load this system.
;
; If the class name is specified here it will override any class name
; specified in the CDF file.
;
; Parameters specified here will override any parameters specified
; in CDF file.
;
; Mode definition is done at this level. Each set of configuration parameters
; specifies the values to set the paramters to for mode 0. This mode is used as the
; default mode for a resource. When specified here it will override
; the system defaults for the resources created.
;
; There may be more than one of class of resource loaded for this configuration
; and there will be one of these class definitions for each type loaded.
;
; Each section will start with [CLASS]
;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;


TMS Resource Definition Section (Continued):

[RSRC_CONFIG]
CONFIG_NAME = BasicConfig
[CLASS]
COUNT = 210                          ; number of DTMF resources to load
CLASS_NAME = dtmf                    ; 210 / 30 per dsp = 7 Dsp
CDF = dtmf.cdf
[CLASS]
COUNT = 64                           ; number of tone generators to load
CLASS_NAME = tgen                    ; 64 / 32 per dsp = 2 Dsp
CDF = tgen_us.cdf
[CLASS]
COUNT = 60                           ; number of CPD resource to load
CLASS_NAME = cpd                     ; 60 / 30 per dsp = 2 Dsp
CDF = cpd_us.cdf
[CLASS]
COUNT = 210                          ; number of players to load
CLASS_NAME = player                  ; 210 / 30 per dsp = 7 Dsp
CDF = okiply.cdf
[CLASS]
COUNT = 0                            ; number of recorders to load
CLASS_NAME = oki_recorder            ; 0 / 20 per dsp = 0 Dsp
CDF = okirec.cdf
[CLASS]
COUNT = 0                            ; number of recorders to load
CLASS_NAME = pcm_recorder            ; 0 / 15 per dsp = 0 Dsp
CDF = pcmrec.cdf
[CLASS]
COUNT = 0                            ; number of recorders to load
CLASS_NAME = pcm_fulldup_rec         ; 0 / 15 per dsp = 0 Dsp
CDF = pcmrec2.cdf
[CLASS]
COUNT = 0                            ; number of recorders to load
CLASS_NAME = oki_fulldup_rec         ; 0 / 15 per dsp = 0 Dsp
CDF = okirec2.cdf
[CLASS]
COUNT = 0                            ; number of conference PORTS to load
CLASS_NAME = conference;
CDF = conf.cdf                       ; 0 / 16 ports per dsp = 0 Dsp

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

Resource Limitations
There are hardware limitations to the classes of resources and the quantity (count) that
can be loaded, based on the number of Digital Signal Processors (DSP) in the system.
The TMS motherboard contains six DSPs and each MDM installed contains 12
additional DSPs. The limitations are generally not a factor; however, they need to be
considered when configuring the system. Configuring unnecessary resource classes
and counts can degrade system performance and occupy DSPs that are needed for
other resources.
Consider all sources of resource class configuration. In addition to the common
tms.cfg file, resources can be loaded as part of a phone line protocol. (See Protocol
Configuration Files on page 123.) The *_proto.cfg file contains the same
[CLASS] definition section as the tms.cfg file.
Protocols are assigned on a per span basis in the protocol package definitions section.
(See Line Card Protocol Package Definitions on page 114.) The number entered in
the COUNT statement in the [CLASS] definition section means that number of
resources will be loaded for each span the protocol is assigned to, in addition to those
defined in the tms.cfg file.
Example:
• A COUNT of 30 tgen resource classes is entered in the tms.cfg file.
• A COUNT of 24 tgen resource classes is entered in the
att_winkstart_proto.cfg file.
• The att_winkstart_proto.cfg is assigned to four spans in the protocol
package definition section.
A total count of 126 tone generator resources will be loaded.
The following table lists the available protocols and the resource classes that comprise
the protocol.

                                               Resource Classes
Protocol          Configuration File           r1tx  r1rx  dtmf  tgen  ply  cpd  r2
                  (*_proto.cfg)
ATT Winkstart     att_winkstart_proto.cfg      no    no    yes1  yes2  no2  no3  no
Feature Group D   fgd_eana_proto.cfg           yes   yes   no    no    no   no3  no
CB Ground Start   cb_grndstart_proto.cfg       no    no    yes1  yes2  no2  no3  no
CB Loop Start     cb_loopstart_proto.cfg       (TBD)
Net5 ISDN         isdn_net5_proto.cfg          no    no    no    yes2  no2  no3  no
National ISDN     isdn_national_proto.cfg      no    no    no    yes2  no2  no3  no
R2 Saudi          r2_saudi_proto.cfg           no    no    no    yes2  no2  no3  yes

1. The dtmf resource is required only if DNIS collection is enabled.
2. Either a tgen or a player resource may be used for outdialing and generating call progress.
3. The cpd resource is optional.


The following table lists the quantity of each resource that can be loaded per DSP.
The Class Name column contains the exact string that should be entered in the
CLASS_NAME statement. The Configuration Definition File column contains the
name that should be entered in the CDF line. The Count/DSP is the number of that
resource a DSP can provide. Each resource loaded occupies a DSP, whether there is
only one instance, or up to the limit a DSP can handle. If the count exceeds the limit
by only one, another DSP will be loaded to handle the instance, and that DSP will not
be available for other resources.

Class Name         Configuration Definition File (*.cdf)   Count/DSP
dtmf               dtmfrx.cdf                              30
tgen               tgen_us.cdf                             32
tgen               tgen_uk.cdf                             32
r1_mf_rx           mfr1rx.cdf                              30
r1_mf_tx           mfr1tx.cdf                              32
oki_player         okiply.cdf                              30
oki_recorder       okirec.cdf                              20
oki_fulldup_rec    okirec2.cdf                             16
pcm_player         pcmply.cdf                              30
pcm_recorder       pcmrec.cdf                              30
pcm_fulldup_rec    pcmrec2.cdf                             16
cpd                cpd_us.cdf                              30
cpd                cpd_uk.cdf                              30
r2                 r2.cdf                                  12

In the preceding example, 126 tgen resources are loaded by the configuration.
Since a DSP can provide up to 32 tone generators, four DSPs are occupied by tgen
as a result of those configuration entries.
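
Applying the same arithmetic to the BasicConfig sample shown earlier (the counts below come from that sample, not from a recommended configuration):

dtmf      210 resources / 30 per DSP = 7 DSPs
tgen       64 resources / 32 per DSP = 2 DSPs
cpd        60 resources / 30 per DSP = 2 DSPs
player    210 resources / 30 per DSP = 7 DSPs

The total of 18 DSPs fits a TMS motherboard (6 DSPs) with one MDM installed (12 additional DSPs).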
DTC Map Section
The term DTC stands for Digital Trunk Controller. It is synonymous with TMS.
The MPS relative configuration begins with the DTC map section. The DTC map
section ([DTCMAP]) is used to define the physical location of each TMS in the MPS
by its chassis and backplane slot (BPS) position, and the primary and secondary VOS
subcomponents to which they are assigned ([BIND]).


On the back of each VRC, there is a number selector that defines the number of the
chassis. (See VRC Rear Panel on page 26.) Ensure that these are uniquely set (starting
at 0) for each chassis in the system. This number corresponds to the chassis number to
be used in the [DTCMAP] section of the tms.cfg file.
A logical TMS number (TMS1, TMS2, etc.) is assigned to each TMS in the system.
Each TMS must have a primary VOS subcomponent bound to it. Typically, TMS1 is
bound to VOS1, TMS2 is bound to VOS2, and so on. If a redundant or backup MPS
node is used in the system, the MPS components on that node are also aliased to
secondary VOSs. Typically TMS1 is bound to VOS101, TMS2 is bound to VOS102,
and so on.
Under the Config column is the configuration name ([CONFIG_NAME]) for each
TMS. This defines the configuration definition to use for each TMS. (See TMS
Resource Definition Section on page 108.)
There should always be uncommented BIND statements for each NIC in the chassis.
Only the Chassis Num and Chassis Slot (i.e., 7 and 8) should be entered, with
the remaining columns each containing a dash (-). If the chassis contains a Hub-NIC,
there is no need for NIC bind statements, or they can be commented out.


DTC Map Section

;******************************************************************
;******************************************************************
; MPS relative configuration starts here
;******************************************************************
;******************************************************************
;
; This section will defines the TMS's in the MPS system.
; It assigns each TMS to a primary and secondary controlling MPS.
; The bound MPS will load and configure the associated
; TMS as a result of the following bind commands.
; TMS number must be from 1 to max TMS number.
;
;******************************************************************
[DTCMAP]
;----------------------------------------------------------------------;
;      Chassis  Backplane   TMS  Primary    Secondary
;      Num      Slot (BPS)  Num  VOS Comp#  VOS Comp#  Config
;----------------------------------------------------------------------
BIND   1        1           1    1          -          BasicConfig
BIND   1        2           2    2          -          BasicConfig
BIND   1        3           3    3          -          BasicConfig
BIND   1        4           4    4          -          BasicConfig
BIND   1        7           -    -          -          -
BIND   1        8           -    -          -          -
BIND   2        1           5    5          -          BasicConfig
BIND   2        2           6    6          -          BasicConfig
BIND   2        3           7    7          -          BasicConfig
BIND   2        4           8    8          -          BasicConfig
BIND   2        7           -    -          -          -
BIND   2        8           -    -          -          -
;

Line Card Protocol Package Definitions
A protocol must be specified for each span in the system. One LOAD statement is used
to specify the protocol for each span, and spans cannot be split (i.e., protocols cannot
be specified for individual lines). The LOAD statements are entered in tabular format
under the commented headings. The fields are described below.
• The TMS Num field contains the unique TMS number specified under the
  DTC Map Section on page 112.
• The PLI Slot is the slot containing the phone line card (DCC or ALI).
• The Span Num is the span this LOAD statement is assigning the protocol to.
• The svc-type field is required and the string is ISDN, SS7, or CAS.
• The MpsNum, Outline, and Pool/Class fields are used for
  configuration of legacy systems that are not otherwise configured to use the
  Pool Manager. (See Pool Manager (PMGR) on page 288.) These fields shall
  each contain a dash (-) for all spans on the MPS.
• The Protocol Pkg field contains the name of the protocol configuration
  file (*_proto.cfg).

Line Card Protocol Package Definitions

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;
; This section is used to define the protocol packages to load
; to the line cards in this MPS system. For DCC cards there is a
; protocol package specified for span of the DCC card.
;
; TMS num - tms number (from [DTCMAP] section above)
;
; PLI slot - slot line card is plugged into on the TMS
;
; Span Num - this is the span number for a DCC card. For an
; analog card this is not applicable and a dsh will appear there
;
; Service type - ascii string that is returned to app in
; responses to GetInCall and Get OutLine containers
;
; mpsNum - the Vps number, if applicable, that these lines are
; plugged into to and the associated lines.
;
; Outline - The user specifies the lines of the span/card that are
; outbound lines via the following specification:
;
;    <s>[-<e>] | * | -
; where:
;    s = start line number
;    e = end line number
;    * = all lines
;    - = no lines
;
; The above specification references lines s to e (inclusive) relative
; to span span_num.
;
; The user may specify that all lines for a particular span be placed
; in a pool by use of *.
;
; If the card is an ALI card the span number will be "-"
;
; Pool/class - the class name to use when creating the resource
; pool.
;
; Protocol package is file having TMS resources needed to support
; the requested protocol.
;


Line Card Protocol Package Definitions (Continued):

;---------------------------------------------------------------------------------------------;
;     TMS  PLI   Span  svc-type  MpsNum  Outline  Pool/class  Protocol
;     Num  Slot  Num                                          Pkg
;---------------------------------------------------------------------------------------------
LOAD  1    4     1     CAS       -       -        -           att_winkstart_proto.cfg
LOAD  1    4     2     CAS       -       -        -           att_winkstart_proto.cfg
LOAD  1    4     3     CAS       -       -        -           att_winkstart_proto.cfg
LOAD  1    4     4     CAS       -       -        -           att_winkstart_proto.cfg
LOAD  1    4     5     CAS       -       -        -           att_winkstart_proto.cfg
LOAD  1    4     6     CAS       -       -        -           att_winkstart_proto.cfg
LOAD  1    4     7     CAS       -       -        -           att_winkstart_proto.cfg
LOAD  1    4     8     CAS       -       -        -           att_winkstart_proto.cfg

LOAD  2    4     1     CAS       -       -        -           att_winkstart_proto.cfg
LOAD  2    4     2     CAS       -       -        -           att_winkstart_proto.cfg
LOAD  2    4     3     CAS       -       -        -           att_winkstart_proto.cfg
LOAD  2    4     4     CAS       -       -        -           att_winkstart_proto.cfg
LOAD  2    4     5     CAS       -       -        -           att_winkstart_proto.cfg
LOAD  2    4     6     CAS       -       -        -           att_winkstart_proto.cfg
LOAD  2    4     7     CAS       -       -        -           att_winkstart_proto.cfg
LOAD  2    4     8     CAS       -       -        -           att_winkstart_proto.cfg

MPS Line Definition Section
The [VPS_LINE_DEF] section is used to map physical TMS lines to logical line
numbers. Generally, one LINE statement is used to map each physical span in the
system. The LINE statements are entered in tabular format under the commented
headings. The fields are described below.
• The MPS from:to field contains the range of logical line numbers mapped
  to the lines of the physical span.
• The TMS Num field contains the unique logical TMS number defined under
  DTC Map Section on page 112.
• The PLI Slot Num field contains the TMS slot number (1-4) of the line
  card (DCC or ALI) where the physical span resides.
• The Span:channel field contains the span number on the DCC, a colon
  (:), and the starting channel number (always 1).


MPS Line Definition Section

;  M P S   L I N E   D E F I N I T I O N   S E C T I O N
;          R s e t   C r e a t i o n
;
;  This section maps the controlling VPS's lines to the physical
;  lines on the associtated TMS. This causes the creation of
;  rsets - one for each line mapped. The entries are specified as follows:
;  lines        : These are the VPS line numbers that are to be mapped.
;  TMS Num      : This is the TMS number.
;  PLI Slot Num : This is the slot on the TMS which references the card.
;  Span         : This is the span number for a DCC card. If an analog
;                 card then this is not applicable.
;  channel      : This is the start channel (instance) for the mapping.
;
[VPS_LINE_DEF]
;-------------------------------------------;
;  MPS      TMS  PLI Slot  Span:channel
;  from:to  Num  Num
;-------------------------------------------;
;
;  MAP TMS 1 lines
;
LINE  1:24     1  4  1:1
LINE  25:48    1  4  2:1
LINE  49:72    1  4  3:1
LINE  73:96    1  4  4:1
LINE  97:120   1  4  5:1
LINE  121:144  1  4  6:1
LINE  145:168  1  4  7:1
LINE  169:192  1  4  8:1

LINE  1:24     2  4  1:1
LINE  25:48    2  4  2:1
LINE  49:72    2  4  3:1
LINE  73:96    2  4  4:1
LINE  97:120   2  4  5:1
LINE  121:144  2  4  6:1
LINE  145:168  2  4  7:1
LINE  169:192  2  4  8:1

Resource Set Table Section
The [RSET_TABLE] section is used to create custom resource sets for individual
lines or a range of lines. If this section is defined, it will use the specified
RSET_PROFILE for creating the RSet for that line or range of lines. The RSET
statements are entered in tabular format under the commented headings. The fields
are described below.
• The MPS Line Num contains the range of logical line numbers that were
  mapped in the MPS Line Definition Section on page 116.
• The TMS Num field contains the unique logical TMS number defined under
  DTC Map Section on page 112.
• The Rset_Profile Name field contains the name used in the
  RSET_PROFILE definition under System Description Section on page 106.

RSET Table Section

; this is a custom configuration table - if these are not specified here
; then the default rset profiles will be used and the lines above will
; be used to build the rsets.
;
[RSET_TABLE]
;---------------------------------------------------------------------;
;  MPS Line  TMS  Rset_Profile
;  Num       Num  Name
;---------------------------------------------------------------------
RSET =  1:192  1  MPSLine
RSET =  1:192  2  MPSLine

Synclist Configuration Section
The synclist section is used to define the source(s) of timing and synchronization for
the computer telephony (CT) bus on the local node. It is a prioritized list for
maintaining CT bus operation using the failure redundancy features inherent in the
MPS architecture. (See TMS Computer Telephony (CT) Bus Clocking on page 265.)

Synclist Section

; SYNCLIST SECTION
;
; This section is to specify the SYNCLIST for Reference Source A and
; Reference Source B.
;
; Each line can specify the sync list for a particular BPS (Back plane slot )
; The order in which the sync list is specified will be the order in which
; the TMSs will try to synchronize with the network.
; For example, if the sync has to be obtained from span 5 and then span 2, then
; the REF_SRC line should specify span 5 before span 2 in the list.
;
; NOTE: The Sync List for a particular Reference Source should all be on the
; SAME CHASSIS. It can exist on more than one BPS, but the order is important.
;
; In HUBNIC (NICLESS) MODE, if the span list is specified on more than one BPS,
; only the list specified on the first BPS is used. All others are ignored.
; Also, if both Ref Source A and Ref Source B are being specified in this mode,
; they have to be on different BPS as a TMS cannot drive both the ref sources.
; It can either drive RefSrc A or RefSrc B.
;
; Format of the sync command line
;
; [SYNC_LISTS]
; REF_SRC   A/B   Chassis   Bps   Sync S:C:D-Range
; REF_SRC   A     1         1     4:0:1-5
; REF_SRC   B     2         1     4:0:1 4:0:2
;
;
[SYNC_LISTS]
;-------------------------------------------------------------;
;  RefSrc  Ch  BPS  SpansList
;-------------------------------------------------------------
REF_SRC  A  1  1  4:0:1-8
REF_SRC  A  1  2  4:0:1-8
REF_SRC  A  1  3  4:0:1-8
REF_SRC  A  1  4  4:0:1-8
REF_SRC  B  2  1  4:0:1-8
REF_SRC  B  2  2  4:0:1-8
REF_SRC  B  2  3  4:0:1-8
REF_SRC  B  2  4  4:0:1-8

The comments contained in the synclist section provide some recommendations and
guidelines for configuring the synclist. The following is an expanded explanation.
The first uncommented line in this section contains the string [SYNC_LISTS], to
define the section to the startup scripts. Each subsequent (uncommented) line will
define the prioritized list of clocking sources to use. The entries are in tabular format
and the required fields on each line are:
• The string REF_SRC
• The reference source being defined (i.e. A or B)
• The chassis number
• The backplane slot (BPS)
• The TMS slot number (or DCC), card, and device number delimited by colons
(:).
For example, in the preceding sample, the first uncommented line after the
[SYNC_LISTS] line is
REF_SRC   A   1   1   4:0:1 4:0:3-8 4:0:2

This configuration states that the first timing source to be used for REFCLK_A
resides on chassis 1, BPS 1, slot 4, card 0, device 1. The slot is the slot number as
labeled on the front of the TMS. The card number is always 0 (additional numbers are
reserved for future enhancement). The device number is the span on the DCC. A
range of devices, or spans, can also be specified as shown in the second field
(4:0:3-8).
If the current clock source becomes disabled for any reason, the selection process
starts at the beginning of the list to obtain a valid source, rather than proceeding to the
next specified source in the list. For example, if the source of REFCLK_A is
currently span 8 on DCC4, the clock selection process starts checking at DCC4, span
1 (first on the list) and runs through the list, instead of going directly to DCC4 span 2
(last on the list).
Although there are no absolute limitations or rules to building the synclist, there are
recommendations for achieving the best degree of failure redundancy.
• In a multiple chassis system, the sources of REFCLK_A and REFCLK_B
should be obtained from different chassis.
• In a single chassis system, the sources of REFCLK_A and REFCLK_B
should be obtained from different BPSs (TMSs).
• A separate REF_SRC line should be used to define the list of sources from
each chassis BPS.
• All available clock sources should be listed.
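
For example (an illustrative layout only, not taken from the preceding sample), a single-chassis system with TMS cards in BPS 1 and BPS 2 could follow these recommendations by drawing REFCLK_A from the first TMS and REFCLK_B from the second:

REF_SRC  A  1  1  4:0:1-8
REF_SRC  B  1  2  4:0:1-8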

For analog systems (and for testing digital systems for which there is no operating
span available or connected), the sync clocks are obtained from oscillators on the
TMS mother board(s). (See TMS Computer Telephony (CT) Bus Clocking on page
265.) To specify a local oscillator from a TMS, enter “-1” in the
<slot>:<card>:<device> fields, as follows:
REF_SRC   A   3   1   -1:-1:-1

The above statement specifies the source of REFCLK_A as the local oscillator on
chassis 3, BPS1 (TMS in chassis 3, slot 1).


tms.cfg Major Section Functional Summary
tms.cfg File Field          Description

[SYSTEM]

Resource sets (singularly, rset) are defined here and referenced
later in the file at the [RSET_TABLE] (see below) of the Line
Definition Section. The procedure to build an rset is
included in the header information. A resource set is a group of
parameters that can later be referenced by name.

System Parameters

A group of parameters applied to the TMS as a whole and which
override same such hard coded values in the TMS.

[RSRC_CONFIG]

Defines the configurations, referenced by name, that can be used
in a TMS on the MPS component. The parameters for this
definition are explained in the lines of the file that follow, and
defined in the [CLASS] section (see next table entry).

[CLASS]

Defines the parameters for each [RSRC_CONFIG] to use (see
above). This includes the resources to be loaded (in addition to
those defined in the *_proto.cfg files), each of which has a
corresponding [CLASS] definition. Included in this definition are
the number of specified resources to be loaded; the configuration
file (*.cdf) to use; and an optional class name for reference.
Parameters specified in [CLASS] override those of the *.cdf
file.

[DTCMAP]

Defines the actual TMS system configuration parameters. This
information is made up of three major subsections: BIND;
DCCLOAD (currently not supported); and LOAD. The format and
architecture of each is explicitly spelled out in the contents of the
file immediately preceding each subsection definition table.

[VPS_LINE_DEF]

Maps the MPS lines to the physical lines (spans) on the TMS.
Definition table contents are spelled out immediately preceding
the section to which they apply.

[RSET_TABLE]

Defines custom configurations, and references the [SYSTEM]
section where rsets were built earlier. If no table is present
rsets are built using default profiles in conjunction with the
information provided at [VPS_LINE_DEF] (see above).

[SYNC_LISTS]

Specifies the order in which the TMS’ attempt synchronization
with the network. The fashion in which the sync list is ordered
determines the execution of this process. The sync list for a
particular reference source must include entries for the same
chassis only.

Protocol Configuration Files
Protocol configuration is defined by a protocol configuration (*_proto.cfg) file. One of these files is
required for each protocol. A protocol is assigned to any number of spans (not
individual lines) via the Line Card Protocol Package Definitions on page 114. The
*_proto.cfg file name is used in the Protocol Pkg field of that section of the
tms.cfg file.
Each protocol configuration file contains two sections:
• The [SPAN_CLASS] section defines the resource set for the entire span(s)
  that will use this protocol. The value of COUNT should be the number of lines
  in a span (i.e., 24 for T1 or 30 for E1).
• The [CLASS] section specifies the resource class(es) to be used to
  implement the protocol. These are resource classes that are always used with
  this protocol. This section is configured the same way as the [CLASS]
  definition section of the tms.cfg file. (See TMS Resource Definition
  Section on page 108.) The value of COUNT should be the number of lines in a
  span (i.e., 24 for T1 or 30 for E1).

System hardware limitations should be considered when configuring the COUNT
value in the [CLASS] definition section. See Resource Limitations on page 111.


;=================================================================
; proto.cfg
;=================================================================
;
; This file is used to define the set of resources required to load
; in order to perform a particular protocol.
;
; S P A N C L A S S
; the span class is a special class of resource for the proto.cfg file.
; it specifies the information used to load the span. If more than
; one span class section is specified the first one found will be used
; and subsequent specifications will be ignored
[SPAN_CLASS]
COUNT = 24
; number of resources of this class to load
CDF = ISDN.cdf
; block = TIM, CPD, DTMF, TGEN
;
;******************************************************************
; R E S O U R C E
C L A S S D E F I N I T I O N
; This section is used to define the protocol specific resources that
; should be loaded. This will allow additional dtmf,cpd,tgen,
; r2 resources to be loaded as well the ones specified in the proto.def
; files.
; This section specifies the configuration definition file (CDF) to use
; and the class name (optional) to assign to the created line resources.
; If the class name is specified here it will override any class name
; specified in the CDF file.
; Parameters specified here will override any parameters specified
; in CDF file.
; Mode definition is done at this level. Each set of configuration parameters
; specifies the values to set the paramters to for mode 0. This mode is used as the
; default mode for a resource. When specified here it will override
; the system defaults for the resources created.
; There will be one of these sections for each required resource.
;******************************************************************
;[CLASS]
;COUNT = <count>
; number of resources of this class to load
;CLASS_NAME = <class name>
; class name to use for this resource.
;CDF = <name>_<type>.CDF
; block = TIM, CPD, DTMF, TGEN
;
;

The $MPSHOME/packages Directory
(This section applies to Solaris systems only)
This directory contains the actual installed Avaya software packages and default
configuration files. The subdirectory naming conventions and subdirectories located
in this directory are listed in the following table. In a typical configuration, not all
subdirectories are present. Only the packages with configuration issues not covered in
a user’s manual are presented here. For a list of manuals, please use the Reference
Material link available in PeriDoc.
The X-convention represents the numerical version of a package software release.

Avaya Software Packages
Symbolic
Link in
/opt/vps

As Found in
$MPSHOME/packages

Contents

PERIase

aseX.X.X

Directories and files specific to ASE.

PERIbrdge

brdgeX.X.X

Directories and files used for bridging calls in the
system.

PERIcmpat

cmpatX.X.X

Shared libraries only necessary for PeriView
release 5.X and MPS release 1.X compatibility.

PERIdist

distX.X.X

Used in distributing information from a source
location to destination nodes in the MPS
network; installs the web server and Perl scripts
used for this and by PeriDoc, maintains related
log files, installs a file compression utility.

PERIdocb

docbX.X.X

Software in support of PeriDoc, the
comprehensive resource used to access Avaya
on-line reference material and
documentation.

PERIfw

fwX.X.X

Installs system library that enables platform-independent process execution.

PERIgase

gaseX.X.X

Global ASE shared libraries only used between
release 5.X and MPS release 1.X.

PERIglobl

globlX.X.X

Current globally accessed directories and files
including libraries and binaries used by all other
packages.

PERIhostp

hostpX.X.X

Directories and files used in communicating with
host computers. Protocol files are not detailed in
this manual but instead can be found in the
Avaya Media Processing Server Series
COMMGR Reference Manual.

PERImps

mpsX.X.X

Directories and files used by MPS processes and
utilities.


Avaya Software Packages
Symbolic
Link in
/opt/vps

As Found in
$MPSHOME/packages

PERIview

periviewX.X.X

Directories and files used by PeriView and its
tools.

PERIperl

perlX.X.X

Integrates the Perl programming language into
the Avaya software suite. Also sets the
environment variable for MPSHOME.

PERIplic

plicX.X.X

Directories used in Avaya package licensing.

PERItms

tmsX.X.X

Directories and files used by TMS processes and
utilities.


For the sake of clarity and discussion, and as highly recommended by Avaya, this section
uses $MPSHOME as the default root directory for the packages subdirectory.
However, it is important to note that during installation, a user can elect to specify a
directory name of their own choosing. If a user-specified distribution directory other
than /opt/vps has been chosen, the released software packages reside in
/distdir/packages, where distdir is the name of the user-specified
distribution directory.
The subdirectories in $MPSHOME/common and $MPSHOME/mpsN look to the files
located in this directory by means of symbolic links. This provides for control over the
released software version used by the MPS system. If a user-specified distribution
directory other than /opt/vps has been chosen, the symbolic links follow the path
/distdir/packages/version, where distdir is the name of the
user-specified distribution directory and version is a version of any Avaya software
package installed on that system. The symbolic links themselves always exist in
/opt/vps.
See the Avaya Media Processing Server Series Solaris System Operator’s Guide for
more information about these subdirectories.

%MPSHOME%\PERIase - /opt/vps/PERIase
This directory contains the Application Services Environment (ASE) software. ASE is
the runtime environment for PeriProducer. By default, the system sets the ASEHOME
variable to /opt/vps/PERIase on Solaris systems and to %MPSHOME%\PERIase on Windows systems.
The stats directory holds the application statistics, collected globally by the
VSUPD process for all MPS components defined on a node. The configuration files of
concern are both located in the PERIase/etc subdirectory.
The /etc/ase.conf file
This file has entries in the form of name: value and specifies where some
commonly referenced ASE directories are located. It also defines the shared memory
configuration. Currently the following named file entries are used for establishing
directory relationships:
ase.conf File Field

Description

MasterDBase

The location of the database master file.

LinkDir

Default location for LINK programs.

StatsDir

Location of applications raw statistics files generated by
VSUPD.

CopyDir

Location of statistics folders stubs.

WebRoot

Reserved for future enhancement.

AseCoreDir

Location for vcore files generated by VENGINE if it core
dumps.

VexLinkDirs

Default location for vexlink linking.

To prevent problems caused by a modem connection loss, amu redirection to a device
will not work unless the ase.conf file enables this functionality by setting the
AmuRedir variable to tty. This can be accomplished by uncommenting the
applicable line of the file.


Example: ase.conf
# Location of ASE elements in the format
#
element: dir
# Full path can be defined using HOME and other env variables
#
MasterDBase:${ASEHOME}/etc/VASDBlist
LinkDir: ${ASEHOME}/link
StatsDir:${ASEHOME}/stats
CopyDir: ${ASEHOME}/copy
WebRoot: ${ASEHOME}/web
AseCoreDir:${ASEHOME}/tmp
VexLinkDirs:.;${ASEHOME}/link
#AmuRedir:tty
#
# Shared Memory variables
# SharedMemory ---> Shared memory directory (file-based SM)
# ShMemorySegments ---> Maximum number of shared memory segments
# ShMemoryUpperLevelItems ---> Maximum number of Upper Level Items
# ShMemoryRequests ---> Maximum number of diff requests ( DELAY, WAIT ) and SETs
#
ShMemorySegments:          99
ShMemoryUpperLevelItems:   150
ShMemoryRequests:          2048
#
# Only for file-based shared memory
SharedMemory:              ${ASEHOME}/shmem
ConstSharedMemory:         ${ASEHOME}/shmem
#

Shared memory configuration is established in the second half of the file. The
particulars for these entries are delineated in the section prior to their definitions. By
default, shared memory is generated to support no more than 99 segments (shared
folders in PeriPro), 150 total 01 level items in Linkage Section (e.g. 99 original 01
level items + 51 01 level items with REDEFINES clauses), and 2048 total outstanding
DELAY with REQID and WAIT requests. The default configuration can be changed
by modifying these variables and restarting the system.
If the number of segments is increased, the entries
set shmsys:shminfo_shmseg=101
set shmsys:shminfo_shmmni=101
in the file /etc/system also must be changed commensurately.
If the file contains the uncommented entry SharedMemory: dirname, an
application’s shared memory is implemented on top of files residing in the directory
specified by dirname. If the entry does not exist, is commented out, or the directory
cannot be accessed, the application’s shared memory is implemented on top of the
Solaris shared memory facility. On Windows machines, shared memory always
resides on top of the file system.

File Based Shared Memory (FBSM) is not transient by default (as Solaris shared
memory is) and is not removed or cleared when all applications exit or when the
machine reboots (unless the FBSM is located in /tmp on Solaris). VENGINE's
command line option -K is ignored for FBSM.
Use the following vassm utility option to remove a specified shared memory item:
vassm -c -i <item name>
Normally, all Constant Shared Memory (CoSM) items reside in the directory specified
by the SharedMemory entry in this file. However, they can be placed into a separate
directory if the ase.conf file has the second entry ConstSharedMemory:
constdir, where constdir represents this separate directory.
While the SharedMemory dirname entry must always reside on a local partition
and have read/write permissions, the constdir directory may be located on a
mounted read-only file system. This allows CoSM items to be shared over a
distributed environment. Note that all CoSM initialization must be performed on the
machine where the files are physically located.
Both the dirname and constdir directories can reside on a shared file system.
ASE relies on certain keywords defined in this file. However, a developer can add any
arbitrary name:value pair to the file and extract it in an application by using the
get-configuration CALL function. For more information on using CALL
functions, and for information on vexlinking, see the PeriProducer User’s Guide. To
find out more about Shared Memory, see the PeriView Reference Manual.
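For example, a sketch with a hypothetical pair (the name and path below are
illustrative only): adding the line

    ReportDir:${ASEHOME}/reports

to ase.conf makes the value available to an application through the
get-configuration CALL function.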
The /etc/services File
The services file contains a list of processes (i.e., the services), some of which
may be accessed by call processing or administrative applications. The file defines
the port and protocol associated with each service and is used by the ASE
(Application Services Environment) software process group.
VMST is aliased as vms in this file, but should not be confused with previous (“non-extended”) versions of VMS.
The following are sample excerpts of this file, followed by an explanatory table:


Example: services
# Service        Port(s)    Protocol
#
###########################################
# Attention :
#       ports here are NOT TCP/IP ports,
#       but rather are 'handles' to VMS/vvpethers !!
#
#       All ports have to be in increasing order and by default must
#       be less than 509. If more handles are required, use the
#       'vmst -p xxxx' command line option to increase the limit.
##########################################
vms              1-10
periweb          11-14      linfo rissue fissue
#pweb            12-14      linfo
periq            16         linfo
htmls            17         linfo
bankcore         18-20      linfo
sbrm             21-40      ping
commdaemon       41-60
vsupd            65
tcap             68         kick
ccss             69         kick
xsp              70         vps
amu              81-85
#
# PeriPro related services
#
peripro          101-110
vemul            111-120
timedaemon       121-130
screendaemon     131
#
vtcpd            132-135    linfo
#
#
# CTI/CTAP Related Services
ctapsrp          200
cti              201-205
csrouter         206-210
sntry            211-215
#
# Oracle daemon
#
sqlclnt          221-230
#
# Custom made services
#
# starting from 241
test             240


Example: services
# Service        Port(s)    Protocol
#
###########################################
# Attention :
#       ports here are NOT TCP/IP ports,
#       but rather are 'handles' to VMS/vvpethers !!
#
#       All ports have to be in increasing order and by default must
#       be less than 509. If more handles are required, use the
#       'vmst -p xxxx' command line option to increase the limit.
##########################################
vms              1-10
periweb          11         linfo rissue fissue
htmls            12-15      linfo
periq            16-17      linfo
bankcore         18-20      linfo
sbrm             21-22      ping
commdaemon       23-24
jsb              61-64      linfo rissue fissue
vsupd            65
sip              67         kick
tcap             68         kick
ccss             69         kick
xsp              70         vps
clipsr2          71-75      linfo rissue
amu              81-85
#
# PeriPro related services
#
peripro          101-110
vemul            111-120
timedaemon       121-130
screendaemon     131
#
vtcpd            132-145    linfo
#
#
#
# CSS 4.0.0 Related Services (201-220)
cti              201-205
# IPML/ICM (CSVAPI and HDX) IVR SCCS (RSM)
csrsm            206-210
#
# Screen-Pop to TAPI Server M1/DMS
cstapi           211-215
#
# IVR.DLL Interface for SCCS Connected to DMS100
cstapisccs       216-220
#
# CSS 3.3.1 Resources not supported in CSS 4.0
#ctapsrp         200
#csrouter        206-210
#sntry           211-215
#
#
# Oracle and other dbase
#
sqlclnt          221-237
corbaclnt        238        linfo rissue fissue
dcomclnt         239        linfo rissue fissue
#
# Custom made services
#
# starting from 241
test             254


Variable    Description

Service     Specifies the process name.

Port(s)     Identifies the system ports from which each process may be accessed.
            The port numbers represent internal handles to the VMST/VVPETHER
            processes (i.e., they are not TCP/IP ports). The numbers must be less
            than 509, must be unique in this file, and must not conflict with the
            port numbers configured in the vos.cfg and the Solaris /etc/services
            files. The entries should be specified in increasing order of the port
            numbers. If this file is changed, all instances of VMST/VVPETHER and
            the services must be restarted.

Protocol    Defines the protocol to use when accessing each process.

%MPSHOME%\PERIbrdge - /opt/vps/PERIbrdge
The PERIbrdge package is responsible for building the tmscomm component (see
The MPSHOME/tmscommN Directory on page 138) and installing the Network
Interface Controller Daemon (NCD) process (see NCD on page 45). The vos.cfg
SRP startup file located in the etc subdirectory is copied to the
MPSHOME/tmscommN/etc directory, where it is processed during system startup.
Make production changes to that file only. Should the need arise to revert to a "clean"
(original) version, copy (do not cut) the file from this package to the
MPSHOME/tmscommN/etc directory.
The actual process executable file and the script used to create the tmscomm
component are located under MPSHOME/bin. Do not make any changes to these
files.

%MPSHOME%\PERIdist - /opt/vps/PERIdist
The PERIdist package contains utilities used for distributing information from a
source location to destination nodes in the MPS network and processing Speech
Server related log files. It also contains Perl scripts and a web server used by PeriDoc,
the comprehensive online reference material and documentation resource. By default,
the system sets the PERIDISTHOME variable to /opt/vps/PERIdist on Solaris
systems and %MPSHOME%\PERIdist for Windows.
The apache directory contains files related to the web server. These files typically
should not be edited. However, to specify which nodes are allowed to distribute files
to the corresponding destination (PERIdist) node after the initial installation, edit
the apache/conf/httpd.conf file. Details on this step are contained in the
Avaya Packages Install Guides.
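As a hedged illustration only (the node names, path, and directives below are
assumptions; the shipped httpd.conf and the Avaya Packages Install Guides define the
actual entries), an Apache access-control stanza limiting distribution access to two
nodes might look like:

    <Directory "/opt/vps/PERIdist">
        Order deny,allow
        Deny from all
        Allow from mpsnode1.example.com mpsnode2.example.com
    </Directory>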
The \etc subdirectory of this package contains configuration files that determine the
location to which Speech Server related log files are written. It also contains utility
configuration files. Information on these files is contained in the speech recognition
resource guides listed at the Reference Material link in PeriDoc.

%MPSHOME%\PERIglobl - /opt/vps/PERIglobl
This directory contains globally accessed software used by all other packages.
On Solaris, this package’s /etc subdirectory contains configuration files
copied to $MPSHOME/common/etc during system setup (see The
MPSHOME/common/etc Directory on page 88). On Windows, these files, with the
exception of compgroups, are located in the \commonetc\etc subdirectory
of this package and copied to %MPSHOME%\common\etc. The compgroups file
resides in the Windows \etc subdirectory of the package and does not get copied
over. The configuration files included are:
• compgroups
• comptypes.cfg
• gen.cfg
• pmgr.cfg
• srp.cfg
• vpshosts
These files should be used as backups for their deployed versions which were copied
to the MPSHOME/common/etc directory. Make production changes to those files
only: should the need arise to revert to a "clean" (original) version, copy (do not cut)
the file from this package to the MPSHOME/common/etc directory.
The misc subdirectory of the PERIglobl package contains the alarm.uts file.
This file contains the information for all predefined alarms in the system and is used to
build the alarm database in MPSHOME/common/etc. This file should not be edited:
to add or delete alarms from the database, use the PeriView Alarm Manager (see the
PeriView Reference Manual for more information).


%MPSHOME%\PERIview - /opt/vps/PERIview
This directory contains the software used to run PeriView, the suite of graphical tools
used for MPS system administration, operation, and control. This package’s etc
subdirectory contains configuration files copied to MPSHOME/common/etc during
system setup (see The MPSHOME/common/etc Directory on page 88). The
configuration files included are:
• global_users.cfg
• periview.cfg
• ptlimages.cfg
These files should be used as backups for their deployed versions which were copied
to the MPSHOME/common/etc directory. Make production changes to those files
only: should the need arise to revert to a "clean" (original) version, copy (do not cut)
the file from this package to the MPSHOME/common/etc directory.
Complete information concerning PeriView can be found in the PeriView Reference
Manual.

%MPSHOME%\PERIplic - /opt/vps/PERIplic
This directory contains the software necessary to run licensed Avaya packages. By
default, the system sets the plicHOME variable to %MPSHOME%\PERIplic for
Windows (no such variable is set on Solaris). Though there is no configuration file
directly related to PERIplic, certain options can be invoked when running the license
server (plicd). For details on this licensing mechanism, including file locations and
options, see the Avaya Packages Install Guides.

%MPSHOME%\PERItms - /opt/vps/PERItms
This directory contains the software used by the Telephony Media Server (TMS) for
system and parameter configuration. The PERItms subdirectory structure is described
in the following table. Only those subdirectories directly related to configuration
issues in the context of this manual are included. Some of the files identified in the
PERItms/cfg subdirectory (%MPSHOME%\PERItms\cfg and
opt/vps/PERItms/cfg) and %MPSHOME%\PERItms\images directory are
documented in the sections that follow this one. Those files located in the
PERItms/site-cfg subdirectory (%MPSHOME%\PERItms\site-cfg and
opt/vps/PERItms/site-cfg) are discussed in more detail at The
MPSHOME/common/etc/tms Directory on page 103.


PERItms
Subdirectory    Contents

cfg             Protocol configuration and definition files, as well as system level
                TMS software and hardware configuration files. Protocol files are
                discussed in the Media Processing Server Series Telephony Reference
                Manual. The remaining files include the ali_triplets.cfg,
                atm_drv_triplets.cfg, atm_triplets.cfg, cardtypes.cfg,
                pcm_triplets.cfg, ps_triplets.cfg, scsi_triplets.cfg,
                and tms_triplets.cfg files.

cfg             Contains the protocol Configuration Definition Files (*.cdf) as well
                as the cardtypes.cfg file. CDF files are generally not modified
                during the life cycle of the system and therefore are not discussed
                further in this manual.

images          Contains the same files as the Solaris package /cfg directory with
                the exception of the cardtypes.cfg file and CDF files (see Windows
                entry immediately above). Otherwise, information in the first column
                of this table applies.

etc             Parameter and text string data files.

site-cfg        Protocol configuration files and TMS system configuration files
                copied to The MPSHOME/common/etc/tms Directory (see page 103).

The cardtypes.cfg file and the majority of the *_triplets.cfg files are configured
during manufacture and installation, and do not typically require editing. A few of the
files, identified below, contain parameter reset variables that stipulate the interval at
which the associated hardware resets itself when required. The defaults are normally
adequate for most installations but can be changed if needed. If settings are changed,
the system must be restarted for them to take effect.
The /cfg/atm_triplets.cfg File
The following is a copy of the atm_triplets.cfg file installed by default on
the system.
Example: atm_triplets.cfg
;
;
PARAM Reset_Time = 10
;

The /cfg/ps_triplets.cfg File
The following is a copy of the ps_triplets.cfg file installed by default on the
system.
Example: ps_triplets.cfg
;
;
PARAM Reset_Time = 10
PARAM Latch1_Write = 0xff
PARAM Latch2_Write = 0
PARAM Volt1_ConvFactor = 23444
PARAM Volt2_ConvFactor = 23444
PARAM Volt3_ConvFactor = 58668
PARAM Volt4_ConvFactor = 60445
PARAM Curr1_ConvFactor = 81949
PARAM Curr2_ConvFactor = 145859
PARAM Curr3_ConvFactor = 28549
PARAM Curr4_ConvFactor = 8037
;
;

The /cfg/tms_triplets.cfg File
The following is a copy of the tms_triplets.cfg file installed by default on
the system.
Example: tms_triplets.cfg
;
;
PARAM Reset_Time = 10
;

%MPSHOME%\PERImps - /opt/vps/PERImps
This directory contains the software used on each MPS component in a system. The
package contains configuration files copied to MPSHOME/mpsN/etc during system
setup (see The MPSHOME/mpsN Directory on page 139). On Solaris systems,
initialization and startup files are also included. These files should be used as backups
for their deployed (copied) versions. Make production changes to those files only.
Should the need arise to revert to a "clean" (original) version, copy (do not cut) the file
from this package to the applicable directories as outlined in the following table.

PERImps
Subdirectory    Contents

etc             Sample configuration files copied to The MPSHOME/mpsN/etc
                Directory (see page 142).

componentetc    Sample configuration files copied to The MPSHOME/mpsN/etc
                Directory (see page 142).

misc            S20vps.startup file which is copied to the /etc/rc3.d directory
                (see S20vps.startup on page 65). Also includes the peri.cshrc
                and peri.vshrc shell initialization files and peri.* user files,
                copied to /home/peri (the default $HOME variable for user peri)
                minus the peri prefix. See Installing Avaya Software on a Solaris
                Platform for more information on these files.


The MPSHOME/tmscommN Directory
The MPSHOME/tmscommN directory contains files used for bridging within and
between MPS components. This component is built by, and files copied from, the
PERIbrdge package (see %MPSHOME%\PERIbrdge - /opt/vps/PERIbrdge on
page 132).
The following guidelines apply when configuring the tmscomm component.
MPS 500
All nodes (including the secondary if one is available) must have one odd numbered
tmscomm component (for example: 1, 3, 5, 7).
Check the vpshosts file to ensure that there is an entry for the new component.
MPS 1000
There can be, at most, two tmscomm components within an MPS 1000 cluster and
these must be paired and reside on two separate application processor nodes.
For N+1 configurations, it is not necessary that a tmscomm component reside in the
secondary node as long as a tmscomm pair exists in the cluster. A tmscomm pair is
defined as an odd/even numbered pair of tmscomm components. That is, if there is a
tmscomm1 present in the cluster, then tmscomm2 must be present. Similarly (for
example), tmscomm3 (odd) pairs with tmscomm4 (even). Also, as an example,
tmscomm2 and tmscomm3 is not considered a valid pair of tmscomm components.
Check the vpshosts file on all application processors within this MPS 1000 cluster to
ensure that there is an entry for the new component(s).
The /etc subdirectory of the tmscomm component contains the vos.cfg file
used to commence the NCD process at system startup (for additional information
about this process, see NCD on page 45). A copy of the vos.cfg file follows: for
an explanation of its format, see The vos.cfg File on page 143.
Example: vos.cfg
#
# Example vos.cfg file.
#
# NAME    HOST    PORT    PRI    COMMAND LINE
ncd       -       -       0      ncd

The MPSHOME/mpsN Directory
The $MPSHOME/mpsN (%MPSHOME%\mpsN) directory path contains configuration
and operations files specific to a single MPS component. The letter N denotes a
number that identifies an MPS component associated with the node. The number
assigned to an MPS can be obtained by issuing the VSH command comp.
One mpsN directory path exists for each MPS defined on the node in The vpshosts
File (see page 93). For example, if four such components (numbered 1 through 4 in
this example) are listed in the file, the following directories exist on the node:
$MPSHOME/mps1 (%MPSHOME%\mps1), $MPSHOME/mps2
(%MPSHOME%\mps2), $MPSHOME/mps3 (%MPSHOME%\mps3), and
$MPSHOME/mps4 (%MPSHOME%\mps4).
The files identified in the $MPSHOME/mpsN/etc (%MPSHOME%\mpsN\etc) and
$MPSHOME/mpsN/apps (%MPSHOME%\mpsN\apps) directories are documented
following this table.
MPSHOME/mpsN
Directory    Description

apps         Contains call processing and administrative application executable
             (*.vex) and configuration (*.acfg) files. Executable and configuration
             files used by VENGINE for all call processing and administration
             applications are copied to this directory by means of PeriView’s
             Application Manager—Assign/(Re)Start Lines—Assign process.
             Application executable files are identified as *.vex.
             Application configuration files are identified as *.acfg and defined
             in the aseLines.cfg file when an application is assigned to a
             location.
             Shared Libraries, identified as *.so, are defined for an application
             by means of PeriView’s Application Manager—Configure Application—
             Shared Libraries Option. Shared Libraries are copied to the apps/lib
             directory by means of PeriView’s Application Manager—Assign/(Re)Start
             Lines—Assign process when the application for which they have been
             configured is assigned.
             For additional information, see The MPSHOME/mpsN/apps Directory on
             page 140.

etc          Configuration and administration files. Files include:
             vos.cfg
             commgr.cfg
             vmm.cfg
             vmm-mmf.cfg
             ase.cfg
             aseLines.cfg
             ccm_phoneline.cfg
             ccm_admin.cfg
             tcad-tms.cfg
             tcad.cfg
             trip.cfg

The MPSHOME/mpsN/apps Directory
The $MPSHOME/mpsN/apps (%MPSHOME%\mpsN\apps) directory contains
MPS application executable files (*.vex) and configuration files (*.acfg).
The executable and configuration files used by VENGINE for all call processing and
administrative applications are copied to this directory by means of PeriView’s
Application Manager—Assign/(Re)Start Lines—Assign process. For a complete
description of this process, refer to the PeriView Reference Manual.
The following are application file types:
MPSHOME/mpsN/apps
File         Description

*.vex        Call processing or administrative application’s executable file.
             This file is copied to this directory when the application is
             assigned to a phone line.

*.acfg       Application configuration file. This file is copied to this
             directory when the application is assigned to a phone line.

lib/*.so     Shared library file configured for an application. Shared
             libraries for these applications are located in the subdirectory
             apps/lib. These files also are copied during the
             Assign/(Re)Start Lines process. However, shared libraries
             must be defined for the application by the Application
             Manager—Configure Application—Shared Libraries option
             prior to being assigned along with the application.

Each application has a single configuration file that defines the application’s
configuration parameters. This allows applications executing on multiple lines to use
the same configuration options.
An application’s configuration file allows configuration options to be specified
outside of the actual application itself. By editing the configuration file, the
application can execute with a different set of parameters than those that would have
been hard coded otherwise. In this way, an application can remain unaltered
regardless of the configuration parameters with which it is executing. For example,
by modifying the spoken language parameter in the configuration file, the application
stays the same but the spoken language used changes. Changes to application
configuration files only affect applications assigned after the changes are made.
Applications that were assigned/started before the modifications must be terminated
and unassigned with the PeriView Terminate/Un-Assign Lines tool, then reassigned and
restarted, for the changes to take effect.

The file extension (*.acfg) is appended to the application name when the file is
defined with the Application Manager—Configure Applications tool. If a
configuration file is not defined before an application is assigned, a default file is
created automatically during the Assign process. When the application is assigned, it
gets appended to the list in the aseLines.cfg file (see The aseLines.cfg File
on page 149). This list may be reordered with the Application Manager—Line Start
Order During Reboot tool.
The aseLines.cfg file looks for applications in the directory
$MPSHOME/mpsN/apps (%MPSHOME%\mpsN\apps). There are a number of
variables that may be defined within an application’s configuration (*.acfg) file,
some of which are illustrated in the following example (typical applications may have
more or fewer).
Example: .acfg
#
# This file is automatically generated by the application
# manager. Do not edit.
#
type=peripro
softTerm=600
args="-r -l 500 -k 600 -M 0 -h -D 128 -t 60 -o \"Please hold on.\"1
-C libpbisSYS.so -B /usr/lib/libc.so -B /usr/lib/ld.so "
interp=vengine
env=""
appmanFlags="-C is1:/home/peri/SHARED_LIBS_TEST/libpbisP.so -C
is1:/home/peri/SHARED_LIBS_TEST/libpbisSYS.so "

The file configured by means of PeriView’s Application Manager Configure
Application Tool (or created on Assign) is automatically generated and should not be
edited manually. For additional information about application management, refer to
the PeriView Reference Manual. For information about application development, see
the PeriProducer User’s Guide.

The MPSHOME/mpsN/etc Directory
The $MPSHOME/mpsN/etc (%MPSHOME%\mpsN\etc) directory contains files for
defining SRP, system, and application configuration parameters which may be unique
for each MPS component. These files are identified in the following table and further
described thereafter.
MPSHOME/mpsN/etc
File                 Description

vos.cfg              Identifies the processes that run in the MPS VOS process
                     group.

commgr.cfg           Configuration parameters required to manage external host
                     communications.

vmm.cfg              Configuration parameters for the Voice Memory Manager (VMM).

vmm-mmf.cfg          Multi-Media Format files (MMFs) to be activated during
                     system startup and related performance parameters.

ase.cfg              Lists processes that will be running in the ASE process
                     group.

aseLines.cfg         Lists applications running on the specified MPS and the
                     physical phone lines on which they are running.

ccm_admin.cfg        Stipulates phone line and service parameters for
                     administrative applications.

ccm_phoneline.cfg    Stipulates phone line state and service parameter values.

tcad-tms.cfg         Configuration and startup parameters for the TMS.

tcad.cfg             TMS healthcheck and debug options.

trip.cfg             Process alarm and debug parameters.

The vos.cfg File
The vos.cfg file identifies the VOS processes that run on the MPS. This file is
stored in the $MPSHOME/mpsN/etc (%MPSHOME%\mpsN\etc) directory, and is
used by SRP to start MPS-specific processes during system startup. The following is
an example of this file:
Example: vos.cfg
#
# Example vos.cfg file.
#
# NAME        HOST   PORT   PRI   COMMAND LINE
trip          -      -      0     trip
tcad          -      -      0     tcad
vmm           -      -      0     vmm
ccma          -      -      0     "ccm -c admin"
ccm           -      -      0     "ccm -c tms -s 1-48"
commgr        -      -      0     commgr
vstat         -      -      0     vstat
#
# Uncomment the appropriate host protocol entries
#
#atte         -      -      0     atte
#vpstn3270    -      -      0     vpstn3270
#appc_cm      -      -      0     appc_cm
#cca_mgr      -      -      0     cca_mgr
#cca_serv     -      -      0     cca_serv
#geotel       -      -      0     geotel
#pos_serv     -      -      0     pos_serv

Variable        Description

NAME            Shorthand notation by which that process is known to SRP, vsh, and
                any other process that attempts to connect to it by name
                (essentially the process' well-known system name).

HOST            Allows the process to be started on a remote node. A dash ("-")
                specifies the local node.

PORT            Specifies the well-known port the process uses for IPC
                communication with other processes. If a dash is present, it
                indicates that the system fills in the port value at run time. A
                static port number only needs to be assigned for those processes
                that do not register with SRP, and must not conflict with the port
                numbers configured in the Solaris /etc/services file.

PRI             Real-time (RT) priority. This field is currently not used on
                Windows. A 0 indicates that the process should be run under the
                time-sharing priority class.

COMMAND LINE    Actual command line (binary) to be executed. Command line
                arguments can be specified if the command and all arguments are
                enclosed in quotes (see the quoted ccm entries in the example
                above). The normal shell backslash escape ("\") may be used to
                embed quotes in the command line. A command with a path component
                with a leading slash is assumed to be a full path designation and
                SRP makes no other attempt to locate the program. If the command
                path doesn’t begin with a slash, SRP uses the (system) PATH
                environment variable to locate the item. Avaya package
                installations add the various binary location paths to this
                environment variable during their executions.

The commgr.cfg File
The COMMGR (Communications Manager) is the VOS software process that
provides external host management functions. This process is a generic application
interface to communication services independent of protocol. For information about
the COMMGR process and related COMMGR commands, see COMMGR on page
43.
The commgr.cfg file defines configuration parameters for the COMMGR process.
It is stored in the directory $MPSHOME/mpsN/etc (%MPSHOME%\mpsN\etc).
For more information and protocol-specific examples, refer to the COMMGR
Reference Manual.

VMM Configuration Files
VMM is responsible for many of the speech recording and playback functions in the
MPS system. VMM provides run-time services for application-controlled playback
and recording of speech elements. There are two VMM configuration files, which are
stored in the $MPSHOME/mpsN/etc (%MPSHOME%\mpsN\etc) directory:
vmm.cfg and vmm-mmf.cfg. For information about the VMM process, see VMM
on page 49.
The vmm.cfg File
The vmm.cfg file defines configuration parameters for the Voice Memory Manager.
Any configuration option available to VMM can be entered here and processed for
VMM on system startup; however, options to VMM entered at a system console
override those provided in this file. For the default file, basic descriptions and formats
of file entries are given immediately preceding the actual data to which they apply,
and are relatively self-explanatory. The options contained in the default file can only
be issued through the file and not from the command line. The following example is
the basic default file provided with the system.


Example: vmm.cfg
#
# Example vmm.cfg file.
#
# Note: For all available configuration options see documentation
#       for VMM.
#
#
# numcachethd <num>
#
#       Set the number of cache channel management threads to
#       be used by VMM's cache management thread for loading
#       audio data into voice data memory (VDM).
#
#       <num> is the number of cache channel management
#       threads to be started.
#
#       default = 1
numcachethd 8
#
# tonetable <MMF name>
#
#       Specifies the MMF containing the tone table to be used
#       to generate tones when not using a hardware based tone
#       generator.
#
#       <MMF name> must be a full path (.mmd or .mmi
#       extensions are not needed but will be accepted).
#
#       default = no tone table will be loaded.
#
tonetable /mmf/peri/dtmf
#
# vdmmaxlock <percentage>
#
#       Specifies the maximum amount of VDM to use for locking
#       elements. Care must be taken when modifying this parameter
#       to ensure that there is enough VDM available to page in
#       data that is not locked as needed.
#
#       <percentage> must be a whole percentage from 0-100.
#
#       default = 50
#
vdmmaxlock 50

For a full list of commands and options available to VMM, see the VMM Commands
section in the Avaya Media Processing Server Series Command Reference Manual.

The vmm-mmf.cfg File
The vmm-mmf.cfg file identifies performance parameters related to MMF files.
Any configuration option available to VMM in relation to MMF file processing is
entered here and operated on upon system startup. However, options to VMM entered
at a system console override those provided in this file. Basic descriptions and formats
of file entries are given immediately preceding the actual data to which they apply,
and are relatively self-explanatory. Uncomment a line to activate that option
(commented items depict the default value). Starting with MPS 2.1, MMF files are
loaded automatically when placed in the appropriate (sub)directory in
$MEDIAFILEHOME. Loading MMFs in vmm-mmf.cfg is still supported but not
recommended.
The following example is the basic default file provided with the system.
Example: vmm-mmf.cfg

#
# Example vmm-mmf.cfg file.
#
# Note: For all available console options see documentation
#       for VMM.
#
#
#
# loadall <on/off>
#
#       When set to on VMM will attempt to lock all elements
#       into VDM upon activation on a first come, first serve
#       basis. When set to off, only elements flagged for
#       locking into VDM will be locked into VDM (again on a
#       first come, first serve basis).
#
#       default = off
#
# Note: The loadall setting can be changed before and after
#       mmfload commands. This allows some MMFs to have all
#       elements locked in VDM while others have no elements
#       locked in VDM.
#
#loadall on
#
# preload <seconds>
#
#       Specifies the number of seconds of audio data to load
#       for each element that is locked in VDM.
#
#       default = all
#
# Note: This option can be used to lock "more" elements in
#       VDM by only locking the first n seconds of each element.
#       If the remaining data for the element is needed it will
#       be paged in as needed by VMM. As with the loadall option,
#       preload can be changed before and after mmfload command.
#
# preload 2
#
# mmfload <MMF name> <APPName>
#
#       Activates <MMF name> for the application <APPName>.
#       If APPName is not specified the MMF will be activated
#       system wide (in the system hash table).
#
#       <MMF name> must be a full path (.mmd or .mmi
#       extensions are not needed but will be accepted).
#
#       <APPName> may be either "system" or the name of some
#       application. If it is not specified "system" will be
#       used by default.
#

If all of the elements do not fit into Voice Data Memory (VDM) when you load
(activate) the vocabulary file, the system does not have enough voice memory for the
complete set. To alleviate this situation, perform the following steps:
• Remove any elements not used by applications. See the PeriStudio User’s Guide
for more information on this procedure.
• Set the vmm loadall command to off. This allows only elements with a
lock flag set to be loaded into VDM (limits total number of elements loaded).
• Set the vmm preload option to accommodate the loadall command (see
previous bullet). If loadall is off, set preload to all. If loadall is
on, the number of seconds to preload into VDM should be kept small if you are
encountering this condition.
If this file is modified, VMM must be stopped and restarted for the changes to take
effect.
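For example, a minimal vmm-mmf.cfg fragment along these lines (a sketch using only
the options documented above) keeps VDM usage down by locking nothing but flagged
elements and loading those elements in full:

    # lock only elements whose lock flag is set, not every element
    loadall off
    # with loadall off, load the locked elements completely
    preload all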
For a full list of commands and options available to VMM, see the VMM Commands
section in the Avaya Media Processing Server Series Command Reference Manual.


ASE Configuration Files
The ase.cfg File
The ase.cfg file identifies the names of processes that are associated with the
Application Services Environment (see ASE Processes on page 36). If processes are
intended to be run on nodes other than the one containing this file, this is to be
indicated for each process in the HOST column. Otherwise, a dash in this column
indicates the local node.
VMST is aliased as vms in its SRP startup files, but should not be confused with
previous (“non-extended”) versions of VMS.
Example: ase.cfg
$1
#
# Example ase.cfg file.
#
# NAME   HOST   PRI   COMMAND LINE
vms      -      0     vms      (Solaris entry)
vmst     -      0     vmst     (NT entry)

The string "$1" must be the only entry on the first line of this file. If the line does not
exist, it must be added manually.


Field Name    Description

NAME          Shorthand notation by which that process is known to SRP, vsh, and
              any other process that attempts to connect to it by name (essentially
              the process' well-known system name).

HOST          Lists host node used for command and application processing. If
              processes are to run locally only, this column contains a dash.

PRI           Real time priority. Currently not supported on Windows. A value of 0
              in this column both forces the use of the time-sharing class (in case
              it was set to something else) and sets the numeric priority value to
              the default base priority for the class. This setting should not be
              changed on Solaris systems.

COMMAND       Actual command line (binary) to be executed. Command line arguments
              can be specified if the command and all arguments are enclosed in
              quotes (see the quoted ccm entries in the vos.cfg example). The
              normal shell backslash escape ("\") may be used to embed quotes in
              the command line. A command with a path component with a leading
              slash is assumed to be a full path designation and SRP makes no other
              attempt to locate the program. If the command path doesn’t begin with
              a slash, SRP uses the (system) PATH environment variable to locate
              the item. Avaya package installations add the various binary location
              paths to this environment variable during their executions.

The aseLines.cfg File
The aseLines.cfg file identifies the applications to be run on the specified MPS
and the physical phone lines to which they have been assigned. This file is stored in
the $MPSHOME/mpsN/etc (%MPSHOME%\mpsN\etc) directory.
Application and location information is added to this file by means of PeriView’s
Application Manager—Assign/(Re)Start Lines Tool—Assign process. Each time an
application is assigned, it gets appended to the end of this list: conversely, when an
application is unassigned with the Terminate/Un-Assign Lines tool, its entry is
removed. This list drives the display in these tools. When the tool is launched, the
order of the applications reflects the order of this list. The list can be reordered with
the Application Manager—Line Start Order During Reboot tool: when it is, the
aseLines.cfg file reflects the new order. For information about these procedures,
see the PeriView Reference Manual.
See the table following the sample file for explanations about each entry.
Example: aseLines.cfg
$1
#
# aseLines.cfg: line --> application database
#
# This file was generated by SRP on:
# Wed Apr 19 11:01:24 2000
#
#
# Line   Node      Application   User
#
1        womquat   ECVinyl       jdg
2        womquat   FabPoos       jdg
3        womquat   TestRun       peri
4        womquat   EskiePix      jftdel
39       womquat   OutOfLuck     peri
26       womquat   OutOfLuck     peri
77       womquat   ECVinyl       jdg
18       womquat   ECVinyl       jdg

Field Name     Description

Line           Numeric designation of the MPS phone line to which the application
               is assigned.

Node           Node name on which the line is configured and the application
               executes.

Application    Name of application assigned to the line. The application’s
               configuration (*.acfg) and executable (*.vex) files are located in
               the $MPSHOME/mpsN/apps (%MPSHOME%\mpsN\apps) directory of the MPS.

User           Name of user who assigned the application to the line. This
               information is used by the Application Manager for security
               purposes. For detailed information concerning user security, see
               the PeriView Reference Manual.

The string "$1" must be the only entry on the first line of this file. If the line does not
exist, it must be added manually.
In the example file above, user jdg assigned the application named ECVinyl to
line 1 of node womquat first. This user assigned the same application to lines 77
and 18, in that order, but only after lines 2, 3, 4, 39, and 26 had applications
assigned to them. Though lines 2, 3, and 4 had different applications assigned to them
by different users, they were assigned to those lines sequentially. Conversely, user
peri assigned the application OutOfLuck to line number 39 before assigning it to
the lower numbered line 26. Thus, this order was established in the aseLines.cfg
file, and is carried over as the default view in the applicable PeriView Application
Manager tools.

CCM Configuration Files
The ccm_phoneline.cfg File
The ccm_phoneline.cfg file stipulates phone line state and service parameter
values. It is stored in the $MPSHOME/mpsN/etc (%MPSHOME%\mpsN\etc)
directory.
Any phone line configuration option available to CCM is entered here and processed
for CCM on system startup. However, options to CCM entered at a system console
override those provided in this file. Basic descriptions and formats of file entries are
given immediately preceding the actual data to which they apply, and are relatively
self-explanatory. Uncomment a line to activate that option (commented items depict
the default value). The following example is the basic default file provided with the
system.
Example: ccm_phoneline.cfg
#
# $Id: ccm_phoneline.cfg,v 1.9 2002/02/19 20:58:02 russg Exp $
#
# Example ccm_phoneline.cfg file.
#
# Note that options in this file will be overridden by
# console options to ccm
#
#
# defLineState <state>
#
#   Set the default state for the phone line. The phone line
#   will enter this state at startup, and whenever a call disconnects.
#   The available values for <state> are BUSY and NOANSWER.
#
#   Default = BUSY
#
# defLineState NoAnswer
#
# setEditSeq <user seq.>=<seq. str>[,DETECTALWAYS][,ENABLE][,KEEPTERM]
#
#   user seq. : any of the following:
#       US0        - User edit sequence #0
#       US1        - User edit sequence #1
#       US2        - User edit sequence #2
#       US3        - User edit sequence #3
#       USDEL      - Delete (empties DTMF buffer) user edit sequence
#                    Removes all digits from the digit buffer. This edit
#                    sequence is active only when a request for digits is
#                    pending.
#       USBKSP     - Backspace user edit sequence
#                    Removes the last DTMF digit from the input buffer. This
#                    edit sequence is active only when a request for digits is
#                    pending.
#       USTERMCHAR - Input Termination user edit sequence
#                    Causes DTMF input to complete. This edit sequence is
#                    active only when a request for digits is pending.
#
#   seq. str : A sequence of 0 to 4 characters from the following character set:
#              {0,1,2,3,4,5,6,7,8,9,*,#}
#
#   NOTE: '=' Results in the configuration being cleared out
#
#   DETECTALWAYS : If this argument is provided then the detection of this edit
#                  sequence will occur (if enabled) whether or not a mx_ReceiveDTMF()
#                  is pending. When this argument is not provided then detection will
#                  only occur (if enabled) when an mx_ReceiveDTMF is pending.
#
#                  The exception to this are the USDEL, USBKSP and USTERMCHAR user
#                  edit sequences to which this argument doesn't apply (i.e., USDEL
#                  and USBKSP are always active and USTERMCHAR is only active when a
#                  mx_ReceiveDTMF() request is in progress).
#
#   ENABLE       : If this argument is provided then the edit sequence will be
#                  enabled. If this argument is NOT provided then the edit sequence
#                  will be disabled.
#
#   KEEPTERM     : This argument is only valid when programming the sequence
#                  USTERMCHAR. This argument causes the USTERMCHAR edit sequence to
#                  be retained and returned in the event
#                  MX_EVENT_RECEIVE_DTMF_COMPLETE. If this argument is not provided
#                  when the USTERMCHAR edit sequence is programmed then the edit
#                  sequence will be removed from the digit buffer and will NOT be
#                  returned in the event MX_EVENT_RECEIVE_DTMF_COMPLETE.
#
#   Configures a user edit sequence. User edit sequences are used for detecting
#   special caller keystrokes (touch tone input). This gives the caller and
#   application additional control over DTMF input.
#
#   NOTE: The special case where NO user sequence is provided results in
#         the user edit sequence being cleared.
#
#   Default = There are NO default edit sequences; they must be configured and
#             enabled if they are to be used.
#
# setEditSeq ....
setEditSeq 'USBKSP=*0*'
setEditSeq 'USDEL=*#'
setEditSeq 'USTERMCHAR=#,enable'
#
# enEditSeq <user seq.>[,<user seq.>...]
#
#   user seq. : any of the following:
#       US0        - User edit sequence #0
#       US1        - User edit sequence #1
#       US2        - User edit sequence #2
#       US3        - User edit sequence #3
#       USDEL      - Delete user edit sequence
#       USBKSP     - Backspace user edit sequence
#       USTERMCHAR - Input Termination user edit sequence
#
#   Enables all of the edit sequences specified in the argument list.
#
#   Default = All edit sequences are disabled by default.
#
# enEditSeq ....
#
# maxCacheLoadSize <max size in kilobytes>
#
#   Sets the maximum number of pages in a single cache load request.
#   Max Pages = (max size in kilobytes) / (size of a single VDM page in kilobytes)
#
#   Range of values is 2, 3, 4, ..., 100.
#   Default = 32
#
# maxCacheLoadSize 32
#
# setSvcParam <param>=<value>
#
#   Sets a service parameter for CCM/TMS.
#
#   Available parameters
#   ====================
#
#   cpdMinSil       >= 3000ms and <= 86400000ms
#       Sets the minimum amount of silence required for the CPD resource to generate
#       a SILENCE event.
#
#   defLineState    Busy or NoAnswer (Default Busy)
#       Sets the default state between calls for CCM and the phone line resource.
#
#   discguard       5s - 10m, or 0 to disable (Default 5m)
#       The maximum time CCM will wait for all outstanding responses to be received
#       before it will force the disconnect sequence to complete
#
#   discstrip       >= 0ms (Default 0ms)
#       Sets the amount of data to strip from the end of a recording that is
#       terminated by the caller hanging up (disconnect).
#
#   dtmfguard       ON or OFF (Default OFF)
#       Used to turn touch tone validation in TMS on or off.
#
#   dtmftonedur     40ms - 2040ms (Default 40ms)
#       The minimum duration a touch-tone must exist for before the TMS considers
#       it to be valid.
#
#   first           2s - 86400s (Default 10s)
#       Sets the maximum amount of time allowed for a caller to enter the first
#       touch-tone in an input sequence (first character timeout).
#
#   firstsil        0ms - 20400ms (Default 0ms)
#       Sets the amount of silence required to abort a record on first silence
#       detection (before voice starts). This parameter only applies to synchronous
#       recordings.
#
#   inter           2s - 86400s (Default 10s)
#       Sets the maximum amount of time allowed for a caller to pause in-between
#       touch-tones in a multiple tone input sequence.
#
#   intersil        0ms - 20400ms (Default 0ms)
#       Sets the amount of silence required to automatically abort a recording at
#       the end of voice. This parameter only applies to synchronous recordings
#
#   pickup          1s - 86400s (Default 30s)
#       Sets the guard timer for answering a call originated by the TMS.
#
#   rsrcallocguard  0s - 86400s (Default is 1s)
#       Specifies the time that TMS should wait for a resource to become available
#       during a request by CCM to add a resource to its RSET.
#
#   silstrip        0ms - 20400ms (Default 0ms)
#       Sets the minimum amount of silence required before the DSP will start
#       stripping the silence from the recording. This parameter only applies
#       to synchronous recordings
#
#   silthresh       0 - 63750 (Default 32)
#       Sets the minimum amount of noise needed to distinguish between silence and
#       non-silence for a recorder. This parameter only applies to synchronous
#       recordings.
#
#   totalcall       1s - 254h (Default 10m)
#       Total call guard timer which is started when the connect event is sent to MX.
#
#   ttstrip         >= 0ms (Default 100ms)
#       The number of milliseconds of data to strip from the end of a recording
#       that is terminated by a touch tone. This parameter only applies to
#       synchronous recordings
##
# setsvcparam dtmfguard=on
#
# recmode <mode>
#
#   Sets the mode of recording that will be used (i.e., Disk based or Network based).
#   This parameter affects both synchronous and asynchronous recordings.
#   Available values for <mode> are DISK or NETWORK.
#   Default = NETWORK
#
#   NOTE: This parameter can not be set from vsh console or by the application,
#         it can only be set during configuration.
#
# recmode NETWORK

For a full list of commands and options available to CCM, see the CCM Commands
section in the Avaya Media Processing Server Series Command Reference Manual.

The ccm_admin.cfg File
The ccm_admin.cfg file stipulates service parameter values for administrative
lines to which administrative applications are assigned. This file is stored in the
%MPSHOME%\mpsN\etc directory.
Any configuration option available to an administrative CCM (ccma - see The
vos.cfg File on page 143) is entered here and processed for this instance of CCM
on system startup. However, options to CCM entered at a system console override
those provided in this file. Basic descriptions and formats of file entries are given
immediately preceding the actual data to which they apply. Uncomment a line to
activate that option. For a full list of commands and options available to CCM, see the
CCM Commands section in the Avaya Media Processing Server Series Command
Reference Manual. The following example is the basic default file provided with the
system.
Example: ccm_admin.cfg
#
# $Id: ccm_admin.cfg,v 1.4 2002/02/19 20:58:02 russg Exp $
#
# Example ccm_admin.cfg file.
#
# Note that options in this file will be overridden by
# console options to ccm
#
#
# maxCacheLoadSize <max size in kilobytes>
#
#   Sets the maximum number of pages in a single cache load request.
#   Max Pages = (max size in kilobytes) / (size of a single VDM page in kilobytes)
#
#   Range of values is 2, 3, 4, ..., 100.
#   Default = 32
#
# maxCacheLoadSize 32
#
# setSvcParam <param>=<value>
#
#   Sets a service parameter for CCM/TMS.
#
#   Available parameters
#   ====================
#
#   discguard       5s - 10m, or 0 to disable (Default 5m)
#       The maximum time CCM will wait for all outstanding responses to be received
#       before it will force the reset/disconnect sequence to complete
#
#   firstsil        0ms - 20400ms (Default 0ms)
#       Sets the amount of silence required to abort a record on first silence
#       detection (before voice starts). This parameter only applies to synchronous
#       recordings.
#
#   intersil        0ms - 20400ms (Default 0ms)
#       Sets the amount of silence required to automatically abort a recording at
#       the end of voice. This parameter only applies to synchronous recordings
#
#   rsrcallocguard  0s - 86400s (Default is 1s)
#       Specifies the time that TMS should wait for a resource to become available
#       during a request by CCM to add a resource to its RSET.
#
#   silstrip        0ms - 20400ms (Default 0ms)
#       Sets the minimum amount of silence required before the DSP will start
#       stripping the silence from the recording. This parameter only applies
#       to synchronous recordings
#
#   silthresh       0 - 63750 (Default 32)
#       Sets the minimum amount of noise needed to distinguish between silence and
#       non-silence for a recorder. This parameter only applies to synchronous
#       recordings.
#
#   ttstrip         >= 0ms (Default 100ms)
#       The number of milliseconds of data to strip from the end of a recording
#       that is terminated by a touch tone. This parameter only applies to
#       synchronous recordings
#
# setsvcparam discguard=5m
#
# recmode <mode>
#
#   Sets the mode of recording that will be used (i.e., Disk based or Network based).
#   This parameter affects both synchronous and asynchronous recordings.
#   Available values for <mode> are DISK or NETWORK.
#   Default = NETWORK
#
#   NOTE: This parameter can not be set from vsh console or by the application,
#         it can only be set during configuration.
#
# recmode NETWORK

TCAD Configuration Files
The tcad-tms.cfg File
The tcad-tms.cfg file stipulates configuration and startup parameters for the
TMS. Basic descriptions and formats of file entries are given immediately preceding
the actual data to which they apply, and are relatively self-explanatory. Uncomment a
line to activate that option (commented items depict the default value). The following
example is the basic default file provided with the system.
Example: tcad-tms.cfg
#
# Example tcad-tms.cfg file
#
#
# tms-cfg-timeout n
#
# Synopsis:
#     Set the maximum amount of time (in seconds) to wait
#     for a response for a single signal sent to TMS.
#     n = seconds, 0 disables timeout.
#
#     Default = 300
#
#tms-cfg-timeout 60
#
#tms-cfg-start
#
# Synopsis:
#     Uses tms_AcceptConfig to try to start config. If
#     request is rejected by TMS, load aborts, otherwise
#     system state will be set to 'config'.
#
tms-cfg-start
#
#syssetparams '<id> <val>'
#
# Synopsis:
#     Sets one system parameter.
#     id  = parameter id
#     val = a uint specifying the value
#
#
# Start Loading/Configuring the TMS hardware
#
ldr-start
#
# Notify tcad that load of TMS is complete
#
tms-cfg-done

The tcad.cfg File
The tcad.cfg file stipulates TMS debug options. It is stored in the
$MPSHOME/mpsN/etc (%MPSHOME%\mpsN\etc) directory.
Basic descriptions and formats of file entries are given immediately preceding the
actual data to which they apply, and are relatively self-explanatory. Uncomment a line
to activate that option (commented items depict the default value). Options to TCAD
entered at a system console override those provided in this file. The following
example is the basic default file provided with the system.
Example: tcad.cfg
#
# Example tcad.cfg file.
#
# Note that options in this file will be overridden by
# console options to tcad
#
#
# dlogDbgOn <dest>,<class>
#
#     Enable tcad debug output and redirect it to stdout
#     To redirect it to a file rename stdout to file
#
#dlogDbgOn stdout,general

!   The STDOUT and STDERR destination objects are for debugging purposes only
    and should not be used.

TRIP Configuration Files
The trip.cfg File
The trip.cfg file stipulates process alarm, healthcheck, and debug parameters. It is
stored in the $MPSHOME/mpsN/etc (%MPSHOME%\mpsN\etc) directory.
Basic descriptions and formats of file entries are given immediately preceding the
actual data to which they apply, and are relatively self-explanatory. Uncomment a line
to activate that option (commented items depict the default value). Options to TRIP
entered at a system console override those provided in this file. The following
example is the basic default file provided with the system.

Example: trip.cfg
#
# Example trip.cfg file.
#
# Note that options in this file will be overridden by
# console options to trip
#
#
#
#hc-interval n
#
# Synopsis:
#     TMS system internal health check interval, in seconds.
#     n = seconds, range 0..TBD. Value of 0 disables health check.
#
#     Default = 2 (every 2 seconds).
#
hc-interval 15
#
#hc-miss-cnt-max n
#
# Synopsis:
#     TMS Health check miss count maximum
#     n = count allowed to be missed, range 0..100
#
#     Default = 5.
#
hc-miss-cnt-max 3
#
#secondary-vos-ctrl-delay n
#
# Synopsis:
#     A secondary TRIP will delay attempting to get control of the
#     TMS by the amount of seconds specified by this parameter.
#     n = seconds, range 0..TBD.
#
#     Default = 300 (5 min)
secondary-vos-ctrl-delay 300
#
# defaultroute procName
#
#     Set the default route for asynchronous TMS messages
#     (i.e., alarms) to the specified process.
#     procName = name of process to route asynchronous TMS messages to
#
#     Default = tcad
#
#
# defaultroute tcad
#
# dlogDbgOn <dest>,<class>
#
#     Enable tcad debug output and redirect it to stdout
#     To redirect it to a file rename stdout to file
#
#dlogDbgOn stdout,general

!   The STDOUT and STDERR destination objects are for debugging purposes only
    and should not be used.

TMS Watchdog Functions
All traffic between an MPS node and its associated TMS systems is sent through
TRIP. After the connection between a TMS and TRIP is established, TRIP regularly
sends the TMS a ping message, which resets the watchdog timer in the Network
Interface Card (NIC). If the watchdog timer expires (that is, it is not reset because
of a system failure), the TMS can reboot the host node. Similarly, if TRIP fails to
receive a reply from the NIC, it can reset the TMS.
The hc-interval entry in this file indicates the time interval, in seconds, for
TRIP to ping the TMS. The hc-miss-cnt-max entry stipulates the number of
missed pings allowed before TRIP reboots the TMS. Both of these settings are used in
conjunction with the watchdog timer in the NIC card (see Network Interface
Controller (NIC) or Hub-NIC on page 27).
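As a rough illustration using the example file above: with hc-interval 15 and
hc-miss-cnt-max 3, TRIP can miss at most three 15-second ping cycles, so a TMS that
stops responding would be reset after approximately 3 x 15 = 45 seconds (illustrative
arithmetic only, ignoring processing and transport delays).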


Common Configuration

This chapter covers:
1. Multi-Media Format Files (MMFs)
2. Call Simulator Facility
3. Alarm Filtering
4. Interapplication/Host Service Daemon Data Exchange


Multi-Media Format Files (MMFs)
A Multi-Media Format (MMF) file contains audio elements (vocabulary and Caller
Message Recording [CMR]) and/or fax data. An individual message in an MMF file
is called an element. One MMF file will normally contain many elements. A single
MMF actually consists of two files:
• The index file consists of element names, sizes and other attributes organized
by means of Element Access Pointers (or EAPs). The index file has a .mmi
extension.
• The data file contains audio data (audio, fax, Telecommunications Device for
the Deaf [TDD] tones, etc.) and has a .mmd extension.

Anatomy of an MMF File (figure): the element index file Vocab.mmi holds
EAP#1, EAP#2, EAP#3, and so on, each pointing into the audio data file
Vocab.mmd.

As the figure indicates, each entry in Vocab.mmi points to audio message data in
Vocab.mmd. Together they constitute an MMF element: an index/recording pair of data
entries. This scheme allows MMF elements to be accessed randomly (in any order).

How to Create an MMF File
In order to use MMF vocabulary files, empty MMF files must be created into which
vocabulary elements or recorded messages can be stored. This is accomplished with
the mkmf command or with PeriStudio. For information on how to create an MMF
file with PeriStudio, refer to the PeriStudio User’s Guide.

Vocabulary MMF Files vs. CMR MMF Files
Applications use MMF files as vocabularies to output named elements over the
telephone lines. To make recordings from callers, applications use MMF files that are
designated for use with the CMR (Caller Message Recording) feature. Although it is
possible to both record into and play back from a single MMF file, separate files are
generally used for vocabulary and CMR functions.
For more information on CMR, see the Avaya Media Processing Server Series Caller
Message Recording (CMR) Feature Documentation. The sections that follow
concentrate on using MMF files for vocabularies.
Vocabulary vs. CMR MMF Functionality (figure): on the vocabulary side, an MMF
vocabulary file is created, recorded on disk, and loaded to voice memory, and its
elements (1. Greeting, 2. Your Balance, 3. Mortgage, 4. One hundred in the vocab
example) speak to the caller; on the CMR side, an MMF CMR file is created, system
and file parameters are set, and the caller speaks to the system.

For the purpose of providing voice output over telephone lines, MMF files are played
from cache memory.


Activating MMF Files
When an MMF file is activated, its element names are loaded into system memory for
fast lookup. The recorded data of the elements are also loaded into Voice Data
Memory (VDM).
VMM automatically loads all MMFs placed in the $MEDIAFILEHOME directory
structure. The directory structure is configured according to component and the
function of the MMF:
$MEDIAFILEHOME
    /mpsX
        /system
            /digitTable
            /default
            /record
        /app_nameX
            /default
            /record
    /mpsY
    .
    .
    .

•   $MEDIAFILEHOME - The root media file directory (typically /mmf/peri)

•   /mpsN - The component subdirectory. There is one subdirectory for each
    component installed on the system.

•   /system - Contains files for MMFs used system wide. There should be only
    one system subdirectory per component. Any MMFs placed in this
    subdirectory are available to all applications.

•   /appname - Contains files for MMFs used by a specific application. There
    should be an application-specific subdirectory for each application that
    uses an application-specific MMF. Any MMFs placed in this subdirectory
    are available only to the specific application.

•   /record - Contains the MMF used as the default record (CMR) MMF for the
    system/application. This should contain only one MMF.

•   /default - Contains the MMF used as the default play MMF for the
    system/application. This should contain only one MMF.

For example, the component mps1 uses library.mmf as the default system play MMF and messages.mmf as the default system record MMF. An application named banking uses an application-specific MMF, banking.mmf, as its default play MMF. The numset.mmf file, placed directly under the system directory, is available to all applications. The following are the directory locations for the MMFs:
$MEDIAFILEHOME/mps1/system/default/library.mmf
$MEDIAFILEHOME/mps1/system/record/messages.mmf
$MEDIAFILEHOME/mps1/system/numset.mmf
$MEDIAFILEHOME/mps1/banking/default/banking.mmf
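A shell sketch of staging these files by hand; mkdir and cp are standard commands, but the source directory /export/mmfbuild is purely illustrative:

mkdir -p $MEDIAFILEHOME/mps1/system/default
mkdir -p $MEDIAFILEHOME/mps1/system/record
mkdir -p $MEDIAFILEHOME/mps1/banking/default
# copy each MMF (its index/data file pair) into place; source paths are examples only
cp /export/mmfbuild/library.mm* $MEDIAFILEHOME/mps1/system/default/
cp /export/mmfbuild/messages.mm* $MEDIAFILEHOME/mps1/system/record/
cp /export/mmfbuild/numset.mm* $MEDIAFILEHOME/mps1/system/
cp /export/mmfbuild/banking.mm* $MEDIAFILEHOME/mps1/banking/default/

VMM then picks these files up through its automatic load of the $MEDIAFILEHOME structure, as described under Activating MMF Files above.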
Loading MMFs using the vmm mmfload command in the vmm-mmf.cfg file is still supported but not recommended. Using mmfload in vmm-mmf.cfg instead of using the $MEDIAFILEHOME directory structure/VMM automatic load function can create problems in N+1 redundancy systems. mmfload commands in vmm-mmf.cfg are processed after VMM finishes loading all other MMFs in the $MEDIAFILEHOME directory structure.
Both mmfload and mmfunload (unload MMFs) commands may be issued from the
VSH command line while the system is running. For a step-by-step procedure of how
to activate and deactivate MMF files, see the Avaya Media Processing Server Series
System Operator’s Guide.
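For example, an MMF can be loaded on the fly from the VSH command line; the path shown is illustrative:

vmm mmfload /mmf/peri/mps1/banking/default/banking.mmf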
You can also set applications up with their own dedicated MMF files for recording
and speech playback. (See Application-Specific MMF Files on page 174.)

Delimited and Partial Loading
By default, the system loads each element’s full name into system memory (as
opposed to voice memory). Complete element name loading may cause complications
if system memory is limited. There are two methods of conserving system memory
when loading elements, which are set using the vmm nload command in the vmm.cfg
configuration file (see The vmm.cfg File on page 144):
• Delimited loading loads element names up to a special delimiter character
(the semicolon ";"). When creating elements in PeriStudio, assign names to
the elements such that they contain the delimiter character. Then place the
command vmm nload del into the vmm.cfg configuration file.
• Partial loading saves memory by only loading a certain number of characters
of each element name, but this increases the possibility of a name conflict.
To specify partial loading, set nload to the number of characters to load
from the element names. For example, to load only the first 10 characters, put
the following line in the vmm.cfg configuration file: vmm nload 10
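For example, a vmm.cfg might carry one of the following lines (not both):

vmm nload del
vmm nload 10

The first enables delimited loading; the second loads only the first 10 characters of each element name.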
The table below compares the two methods of loading element names into memory:
• Partial loading has been set to the first three characters.
• Delimited loading always uses the semicolon (";") as the delimiting
character.

Delimited vs. Partial Loading of Vocabulary Labels

Element name      Partial loading   Delimited loading
Greeting          Gre               Greeting
Yes               Yes               Yes
No                No                No
P123;Mortgage     P12               P123
P456;Check        P45               P456
Breakaway         Bre               Breakaway
Thank;you;for     Tha               Thank
BlahBlahBlah      Bla               BlahBlahBlah

Once the system is initialized, the value for nload cannot be modified.

Note: VMM allows identical element names after the names are truncated. However, only the element that was loaded first will be accessed when referenced. To avoid this problem, make sure that all element names will be unique after the partial or delimited loading you selected. (See the above table.)
Audio Playback
By default, VMM does not attempt to load all vocabulary elements into VDM. If "loadall on" is specified in the vmm-mmf.cfg file, VMM attempts to load all vocabulary items into VDM. Elements not loaded into audio memory are cached in and out of memory as necessary.
Proper setting of vdmmaxlock is important to ensure there is enough VDM reserved for caching. If the size of the activated MMF elements exceeds available voice memory (VDM becomes depleted), an alarm is generated. If some VDM is later freed by deactivating one or more MMF files, the MMF files still to be loaded must be deactivated and then reactivated in order to use the newly available memory. The following parameters directly affect VDM performance: pagesize, vdmmaxlock, and preload are set in the vmm.cfg file (see The vmm.cfg File on page 144); loadall is set in the vmm-mmf.cfg file (see The vmm-mmf.cfg File on page 146). If changes are made to these entries, VMM must be stopped and then restarted for the changes to take effect. For information on stopping and starting VMM, see the Avaya Media Processing Server Series System Operator's Guide.

Configuration Parameters for Voice Data Memory Management

pagesize
    The size, in kilobytes, to use for a single segment of VDM. A value that is too large means that more memory is taken from VDM. Decreasing the value makes more efficient use of VDM (less wasted space) but uses more system memory. The default value is 8 KB. Typically, the defaults should be used. Changes to this parameter should be considered in the context of the value of vdmmaxlock (see below).

vdmmaxlock
    Specifies the maximum amount of VDM, as a percentage, to use for locking elements. This option is used to ensure there is sufficient VDM available for the VMM caching mechanism to function efficiently. Unless good reason exists otherwise, the default value of 50% should be used. Increasing this value makes audio element access quicker, but reduces the VDM available for caching audio data not locked in VDM; decreasing this value has the opposite effect. Changes to this parameter should be considered in the context of the value of pagesize (see above).

preload
    Specifies the number of seconds of audio to load into VDM prior to an element's initial usage. This option is used in conjunction with loadall (see below). If loadall is turned on, VMM attempts to preload audio data for each element; if off, VMM makes the attempt only for locked elements (see "Custom Loading" on page 171 for more information). If set in the vmm-mmf.cfg file, it should precede any mmfload commands. The default value is all (all audio data loaded into VDM).

loadall
    Determines whether VMM should load and lock all elements into VDM when activating MMF files. This option is used in conjunction with preload (see above). When loadall is off (the default), only elements with the lock flag set are loaded into memory. When loadall is on, VMM attempts to load all elements into VDM, regardless of their lock flag status (see "Custom Loading" on page 171 for more information). If set in the vmm-mmf.cfg file, it should precede any preload and mmfload commands.
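A hedged sketch of how these parameters might appear, using the documented defaults; the exact keyword syntax should be confirmed against the vmm.cfg and vmm-mmf.cfg files shipped with your release:

In vmm.cfg:
vmm pagesize 8
vmm vdmmaxlock 50
vmm preload all

In vmm-mmf.cfg (before any mmfload commands):
loadall off

Here 8 is kilobytes per VDM segment, 50 is the percentage of VDM reserved for locked elements, all preloads each element in full, and loadall off restricts loading to lock-flagged elements; all four values shown are the defaults described above.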

The following formula should be used to determine the maximum safe setting for vdmmaxlock. Note that this calculates the maximum safe setting, not the optimal setting, which depends upon which vocabulary items are spoken and how frequently. Exceeding the value determined by this calculation may result in the system failing to play an item.

maxvdmmaxlock = 100 - ((2 × maxCacheLoadSize × numberLines) / pageSize) / numberCachePages

where:
    maxCacheLoadSize = the value of ccm maxcacheloadsize
    numberLines      = the number of lines in the system
    pageSize         = the value of vmm pagesize
    numberCachePages = the "Number of pages in cache" value returned by vmm cachestatus

Custom Loading
If all of the data does not fit into voice memory, it is necessary to select which elements to load into memory and which ones to cache. By default, elements are loaded on a first-come, first-served basis.

MMF Lock and Load Example (figure): the vocabulary MMF resides on disk; its most frequently used elements are locked into voice memory and serve about 90% of speech playback, while the remaining elements, accounting for roughly 10% of accesses, are fetched from disk as needed when speaking to the caller.

For example, if 90% of the speech playback comes from 10% of the elements, to save
memory, set the lock flags of the frequently used elements and allow the rest to be
played as needed. (The lock flag is set in PeriStudio. See the PeriStudio User's Guide for more information.) To enable selective loading, place the vmm loadall off command in the vmm.cfg file prior to the command that loads the MMF file.
On the other hand, if there is an MMF file for which all elements should be loaded into
voice memory, use the vmm loadall on command. Once loadall is enabled,
this setting stays in effect until explicitly changed.
When the size of voice memory is less than the total combined size of all audio data, it
is best to lock the most frequently used elements and adjust the vmm preload
value in conjunction with the vmm loadall option. To determine which elements
are spoken frequently, use the vmm refstatus <mmf_name> option.
The vmm loadall parameter may be set from the command line, and changed on an
as-needed basis.
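For instance, from the VSH command line (the MMF name lotto is illustrative):

vmm refstatus lotto
vmm loadall off

The first command reports how many times each element has been referenced; the second ensures that subsequent MMF activations load and lock only the lock-flagged elements.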


Using Hash Tables
To improve access time to vocabulary MMF files, the VMM process creates a hash
table. The following example illustrates this concept:
A Lotto application that runs on four phone lines (1-4) uses the lotto MMF
file. The elements within the file may be located on disk or in voice memory.
The application accesses the file through a hash table.

Basic Hash Table Schematic (figure): the Lotto application running on phone lines 1 through 4 accesses the lotto MMF file (1. Greeting, 2. You Win, 3. You Lose, 4. Play Again) through a single hash table; the elements themselves may reside on disk or in voice memory.

Application-specific hash tables are created using the following command, which
must be issued before those applications are started. If this command is not used, the
VMM process automatically generates the hash tables and sets the hashfirst
sequence to first search the system-wide hash tables.
vmm appinit <app_name>

To change the hash table lookup sequence for an application, enter the following command. local indicates the application looks to its own hash table first; system instructs it to initially use the system-wide hash table. If an element is not found in one, the other is then searched.
vmm hashfirst <app_name>,{system | local}
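For example, for a hypothetical application named lotto:

vmm appinit lotto
vmm hashfirst lotto,local

The first command builds the lotto application hash table (issued before the application starts); the second has lotto search its own hash table before the system-wide table.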


The following is important information about hash tables:
• One hash table can index multiple vocabulary files. However, it cannot
distinguish duplicate element names across index files.
• A hash table can service the entire system or just one application.
• To automate the entire MMF process, the appinit and hashfirst
commands can be added to the vmm-mmf.cfg file, in that order.
System MMF Files
This is the simplest way to organize vocabulary MMF files. System MMF files are
public. They can be accessed by any application on the system.

System Hash Table Schematic (figure): the Mortgage application and the Bank Balance application, running across lines 1 through 8, both resolve elements through a single system hash table. The Mortgage MMF contains 1. Greeting, 2. Interest Rate, 3. Mortgage, and 4. Overdue; the Bank MMF contains 1. Greeting, 2. Your Balance, 3. Checking, and 4. One hundred.

In the illustration above, the two applications use two different vocabulary MMF files that are hashed together into a single system hash table. Typically, system MMF files contain common and frequently accessed elements, such as Dual Tone Multi-Frequency (DTMF) tones (dtmf) and numeric elements (numset).
All vocabulary elements could be hashed into a single system table, as in the illustration above, but this is not recommended if there is more than one application. Large hash tables can impact system performance because of the longer look-up time. Also, a single hash table does not allow duplicate element names (i.e., every element name in every hashed MMF file must be unique).

In the preceding illustration, both MMF files have an element called Greeting.
When these elements are hashed together, the first element hashed (Mortgage’s
Greeting) is the one that is spoken. So, if the Bank application requests to speak its
Greeting, it will get the Mortgage Greeting instead. To overcome this problem,
use application-specific MMF files.
It is recommended that the system hash table be used only for common MMF files that
will be accessed by several applications. Use application-specific hash tables for all
other MMF files. If there is only one online call processing application, all MMF files
should be activated using the system hash table.
Application-Specific MMF Files
The following illustrates how multiple applications can use application-specific MMF
files to avoid element name conflicts:
Application Hash Table Schematic (figure): the Mortgage application and the Bank Balance application (sharing lines 1 through 8) each resolve elements through their own application hash table: the Mortgage MMF contains 1. Greeting, 2. Interest Rate, and 3. Mortgage, while the Bank MMF contains 1. Greeting, 2. Your Balance, and 3. Checking. A shared Money MMF (1. One, 2. One Hundred, 3. One Thousand) is hashed into the system hash table.

In this configuration, there are still two vocabulary elements with the name
Greeting. However, each application has been given its own MMF file, and all
common elements (such as dollar amounts) have been grouped into one system MMF
file. The hashfirst parameter is also set to local, which causes speak requests
to attempt element lookups first in the application-specific MMF file.

This setup works as follows:
• If the Mortgage application attempts to speak its Greeting, the Avaya
Media Processing Server (MPS) first looks at the Mortgage hash table, finds
the correct element, and speaks it.
• If the Bank application attempts to speak its Greeting, the MPS first looks
at the Bank hash table, finds the correct element, and then speaks it.
• If the Mortgage application attempts to speak One Thousand dollars, the MPS
first searches the Mortgage hash table, then proceeds to the system hash table
and finds the element(s) One Thousand dollars.
• If the Bank application attempts to speak One Thousand dollars, the MPS first
searches the Bank hash table, then proceeds to the system hash table and finds
the element One Thousand dollars.
• The Bank application, for example, cannot access Interest Rate, which
is an element specific to the Mortgage application. That is, application-specific elements can only be accessed by the applications for which they
have been activated.
Default Vocabulary and Record MMF Files
MMF files may be set as default MMF play or record files. This is used to emulate
previous generation Avaya systems that use the 24-Byte Header mode. The default
vocabulary is the only vocabulary MMF file that is searched when an application
makes a speak request that specifies an element number (instead of an element name).
To set a default vocabulary file for a specific application or system-wide, add the
MMF to the specific subdirectory in the $MEDIAFILEHOME directory structure (see
Activating MMF Files on page 166).
System-wide Record:
$MEDIAFILEHOME/component/system/record/mmfname
System-wide Vocabulary:
$MEDIAFILEHOME/component/system/default/mmfname
Application-specific Record:
$MEDIAFILEHOME/component/appname/record/mmfname
Application-specific Vocabulary:
$MEDIAFILEHOME/component/appname/default/mmfname


Diagnostics and Reports
The following table explains some useful MMF file diagnostics commands:

Commands for MMF Diagnostics

vmm mmfstatus
    Shows the MMF status report, including MMF files that are currently activated, and the number of elements loaded from each MMF file. Also includes space allocations for each file.

vmm refstatus <mmf_name>
    Displays the elements in <mmf_name>, including their EAP numbers, how many times they have been referenced, and whether or not an item is locked in VDM. You can use this command to verify that all elements were loaded.

vmm hashreport [all]
    Displays a hash table report, indicating which elements have been loaded to the hash tables, along with the lengths of the elements and the MMF files they were loaded from. Use all to display a report for each active application hash table as well as the system hash table.

vmm appstatus
    Displays an application status report, including which MMF files have been activated for application-specific use.

vmm repconfig
    Displays the VMM configuration report, including the parameters used during MMF file activations.

All status reports can be issued with the shorthand "st" if desired (for example, vmm mmfst instead of vmm mmfstatus).
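For example, assuming the same "st" contraction applies to the other status reports (the MMF name is illustrative):

vmm mmfst
vmm refst banking

These are equivalent to vmm mmfstatus and vmm refstatus banking, respectively.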
The MPS allows MMF files to contain both digital and analog versions of an element.
(They may both have the same name.) If a name conflict exists, the first one loaded
will be spoken.

Synchronizing MMF Files Across Nodes
Presently available on Solaris platforms only.

In instances where many nodes utilize the same MMF files, and changes to these files would place an undue burden on network facilities, management, and customer use, the Zero Administration for Prompts (ZAP) utility is used to automate the
process.
This automated MMF file synchronization facility provides a means of administering
updates to and maintaining consistency between all activated instances of an MMF
file which reside on different nodes on a network. It determines whether a set of MMFs contains identical elements and provides the capability to rectify any differences
between files. In addition, reports illustrating the differences between the source and
target MMF files and the results of modifications made to the target MMF files are
generated.
By definition, the ZAP facility requires a master MMF file be designated as the
reference file. This file can exist on any node in the network. All additions, deletions,
and modifications must be made to this designated file only, preferably through the
use of PeriStudio (see the PeriStudio User’s Guide and the Avaya Media Processing
Server Series System Operator’s Guide for further information).
ZAP requires the presence of the /etc/vpsrc.sh file on every node that is
synchronized. This file is usually present as part of the standard MPS installation.
ZAP and MMF files on the MPS
In an MPS system, when ZAP updates any MMF file, it is required that there exists a
copy of that MMF file for each component in the system. It is recommended that a
directory be created for each of the MPS components on the MMF partition and all the
files, that ZAP operates on, be duplicated under these directories. Make sure that the
/opt/vps/mpsN/etc/vmm-mmf.cfg files on the system are updated to reflect
the change in the file locations.
For example:
On an MPS 500 (with components mps1 and mps2), the MMF “myPrompts” needs
to be updated periodically by ZAP. Hence, the following directories must be created:
/mmf/mps1
/mmf/mps2
The MMF “myPrompts” must be copied into each of these directories. The files
$MPSHOME/mpsN/etc/vmm-mmf.cfg must have the following line added:
mmfload /mmf/mpsN/myPrompts
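A minimal shell sketch of that setup; the source location of the master copy (/export/master) is hypothetical:

mkdir -p /mmf/mps1 /mmf/mps2
# copy the MMF (its index/data file pair) into each component directory
cp /export/master/myPrompts.mm* /mmf/mps1/
cp /export/master/myPrompts.mm* /mmf/mps2/
# then add "mmfload /mmf/mps1/myPrompts" to $MPSHOME/mps1/etc/vmm-mmf.cfg
# and "mmfload /mmf/mps2/myPrompts" to $MPSHOME/mps2/etc/vmm-mmf.cfg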


Ensure that any previous references to the MMF in the vmm-mmf.cfg file are removed.
MMF Abbreviated Content (MAC) File
For any form of synchronization, an MMF Abbreviated Content (MAC) file
is created from the designated master file and placed into the
$MPSHOME/common/zap/distribution directory of the reference node and
$MPSHOME/common/zap directory of the target node(s). By default this file uses
the base name of the reference MMF file that is specified. The -m <mac_name>
option allows a pre-existing MAC file to be specified during an update (where
mac_name indicates the path and name of the file).
The MAC file is compressed to reduce the time and load the transfer places on the
network. It uses attributes and a 32-bit Cyclic Redundancy Checking (CRC) value
for each element in the reference MMF file to compare it to the target MMF file.
This 32-bit CRC value represents the elemental data without having to actually store
the data in the MAC file. Thus, the MAC file is much smaller in size than its MMF file
counterpart.
The MAC file is decompressed when verification commences. The verification
process compares each element in the target MMF file against its counterpart in the
MAC file, and consists of a comparison of each element’s attributes followed by its
32-bit CRC value. If either of the comparisons is found to be inconsistent, the element
is flagged as requiring an update: after all comparisons are completed, these elements
are downloaded from the source and updated on the target. Conversely, the target
MMF file is also checked for elements that were not found in the MAC. In this case,
the extraneous elements are deleted from the target file.
If multiple element names exist with the same encoding, ZAP only uses the first
element with the duplicated name and encoding from the source MMF file to update
the target MMF file. This is due to the fact that VMM only uses the first item in the
source MMF file with a particular name and encoding as a reference; therefore, only
this first element needs to be updated and maintained. The element which appears first
in the target MMF file (i.e. the element with the lowest EAP number) is updated;
however, none of the remaining duplicate elements is updated. A warning is placed
into the update results log file indicating that multiple elements with the same
encoding are present in the MMF file (see Log Files on page 188).
If duplicate element names with different encodings exist in an MMF file, only one
copy of an element is added to the target MMF, and this element is the one in the
source that has the highest EAP number. The condition caused by duplicate element
names can be eliminated by assigning unique names to all elements within an MMF
file.
The following paragraphs offer suggestions for running ZAP, though the modes are
not mutually exclusive (that is, either form can be used in either instance).
Basic Implementation (Low Volume/Traffic)
In environments where network traffic saturation is not a concern or there are few MPS’ or only one node in the system, ZAP can be run directly from a command line
without any other intervention. To initiate the facility for all activated instances of an
MMF file, use the command line syntax zap <mmf_name>, where mmf_name
indicates the path and name of the reference MMF file. The facility must be initiated
from a command line of the node on which the reference file resides.
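For example, if the reference copy of an MMF named myPrompts resides under /mmf/mps1 on this node (the path is illustrative):

zap /mmf/mps1/myPrompts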
By default, all nodes and MPS’ listed in the $MPSHOME/common/etc/vpshosts
file on the reference node are addressed. This is called distributed synchronization,
where the synchronization of the target nodes is scheduled in groups of up to ten, with
each group having its synchronization starting one minute apart. This staggered
scheduling helps to limit use of network bandwidth during the data transfers.

Command Line Options
To specify the nodes and MPS’ that are actually zapped, as opposed to all those
located in $MPSHOME/common/etc/vpshosts, a user-defined file is created in
the same format as the vpshosts file and used in place of it. This file can be located
anywhere on the reference node. To use this option, specify the -f switch followed by
the alternate file name (if located in the current directory) or the path and alternate file
name.

Note: The alternate file used with the -f option must be in the same format as the
vpshosts file. As a suggestion, make a copy of the vpshosts file, edit it to
include the desired entries, then save that file with the alternate name. Do not
overwrite the existing vpshosts file!
In addition, because ZAP references the local node’s vpshosts file to determine
which MPS’ are available to update, it is imperative that all MPS’ in the entire
network appear in that file (as well as the corresponding files on all remote nodes).
This file equivalency guarantees that all MPS’ in an alternate file also appear in the
local (reference) node’s vpshosts file.
Selective synchronization causes a specified node or MPS to be synchronized
immediately. This is accomplished by using the -n option to specify a specific node
as the target, where all active instances of the MMF on all MPS’ on that node are
addressed. Use the -v option to specify a specific MPS when only that copy of the
MMF needs updating.
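Illustrative invocations; the node name, MPS number, file name, and paths are hypothetical:

zap -f /home/admin/zaphosts /mmf/mps1/myPrompts
zap -n nodeB /mmf/mps1/myPrompts
zap -v 5 /mmf/mps1/myPrompts

The first limits the update to the MPS’ listed in an alternate hosts file, the second immediately synchronizes every MPS on nodeB, and the third updates only MPS number 5.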
In instances where mixed systems have not had all target nodes updated to use the
latest ZAP release or which have security in place that does not allow remote ZAP
sessions to complete correctly, the -L option must be used to ensure compatibility.
This command line option forces all applicable components on all nodes to be updated
directly from the local (reference) node.
The -L option prevents any remote ZAP processes from occurring, thereby overriding
any zap.network.cfg files that have been defined (see Advanced
Implementation (High Volume/Traffic) below).
Additional command line options are included at “Synchronization (ZAP) Command
Summary” on page 191.

Advanced Implementation (High Volume/Traffic)
By default, ZAP connects from a local (reference) node to all remote (target) nodes
(see Basic Implementation (Low Volume/Traffic) on page 178). Where multiple
LANs exist, which in turn contain multiple nodes that need to be updated by ZAP,
network traffic is further reduced and performance improved by having ZAP function
on a proxy basis. In this case ZAP updates one MPS for a particular node in a group
(LAN): each of the other MPS’ on this node, and one MPS on each of the other nodes
in the group, are updated remotely from this “locally updated” (proxy) server. This
functionality requires the presence of a user-defined zap.networks.cfg file.
The order of nodes in the zap.networks.cfg file determines the order in which
each node acts as a proxy for its group. Analogously, the order of MPS’ in each node’s
vpshosts file determines the order in which each acts as a proxy for that node. If a
node or MPS is unavailable for any reason, ZAP moves to the next one in the
sequence.
The zap.networks.cfg File
The zap.networks.cfg file must contain every node in the network since this
file is used to determine the topography of the network. If a specific series of MPS’
needs to be updated, the update can be instituted through use of the -f option (see
Individual Group Update Option on page 183).
The most commonly suggested format of the zap.networks.cfg file is to have
each LAN defined as a group; however, other arrangements are also possible,
depending on site requirements. In all cases, the following syntax rules must be
followed:
• Groups are defined by using the term [GROUP] on its own line. All nodes
that follow are construed as belonging to that group until ZAP encounters
another [GROUP] tag or the end of the file.
• Only one node is listed per line, and each node must belong to only one group.
• No empty groups are allowed, and no node can appear ahead of the first
group.
• A pound symbol (#) precedes commented data. This symbol must appear at
the beginning of a line (comments entire line) or have at least one space
before it.
• Blank lines are ignored.
With these rules in mind, a sample zap.networks.cfg file might appear as
follows:


# Start of zap.network.cfg file

[GROUP] #Group 1
nodeA
nodeB
nodeC
[GROUP] #Group 2
nodeD
nodeE #this node is in the middle
nodeF
[GROUP] #Group 3
nodeG
nodeH
nodeJ
#EOF

The zap.networks.cfg file must be placed into the
$MPSHOME/common/etc directory. If the file is built so that every LAN is
its own group, only one MPS on one node in each group is updated directly, with the
remainder in that group being updated by this node remotely. Using the sample file
shown above, and given that ZAP was started on nodeA, one MPS on one node in
group 2 and one MPS on one node in group 3 is updated via network traffic; each of
the other nodes in the groups are updated on a localized basis by this initial MPS.
Group 1 contains the local node (nodeA), and so does not require any network-wide
update; instead, all MPS’ in this group are updated by nodeA. Only the MPS’ listed
in the vpshosts file on nodeA are addressed. If any node in any group contains
MPS’ that are not in this file, those servers are not updated.
Though the vpshosts files on remote nodes can in theory have more MPS’ listed
than that on the reference node (these others do not get updated), in practice they
should never have fewer than those of the vpshosts file on the reference node.

Individual Group Update Option
To update all MPS’ on all nodes in a group, use the zap -G <group_number>
option. This causes ZAP to update the MPS’ it finds in the reference node’s
vpshosts file for nodes defined for the group. For instance, if the
zap.network.cfg file contained the following:
# Start of zap.network.cfg file

[GROUP] #Group 1
nodeA
nodeB
nodeC
[GROUP] #Group 2
nodeD
#EOF
and the command zap -G 1 is issued on nodeA, all MPS’ listed in the
vpshosts file on node nodeA, for nodeA, nodeB, and nodeC, are synchronized
in accordance with the guidelines discussed earlier.
To limit the MPS’ within a group that get synchronized, issue the -G option in
combination with the alternate (vpshosts) file option (see Command Line Options
on page 180):
zap -G <group_number> -f <alternate_file>
In this instance, refer to the previous zap.network.cfg file example for
illustrative purposes and assume that each node contains four MPS’. By using an
alternate vpshosts file that contains the following:
#COMP   NODE    TYPE
1       nodeA   VPS
2       nodeA   VPS
5       nodeB   VPS
16      nodeD   VPS

the command zap -G 1 -f alternate only synchronizes MPS numbers 1
and 2 on nodeA and MPS 5 on nodeB. Notice that MPS 16 on nodeD does not
get synchronized because that node does not belong to group 1.

Using Multiple zap.network.cfg Files
In general the zap.network.cfg file exists only on the reference node. This
requires that the initial update for each group travel over network pathways. If slow or
ineffective links exist within these paths, overall system performance can be adversely
affected. To circumvent these deficient links, additional zap.network.cfg files
are defined on the remote nodes.
The additional zap.network.cfg files must be defined differently from those on
the reference node. If the remote nodes contain the exact same file as that of the
reference node, ZAP behaves the same way as if the additional configuration files did
not exist.
This functionality is illustrated in the following example. The network topography
contains three LANs with two slow links between them.

Network topography for this example (figure): three Ethernet LANs. The first LAN contains nodeA (the reference node), nodeB, and nodeC; the second contains nodeD, nodeE, and nodeF; the third contains nodeG, nodeH, and nodeJ. A slow link connects the first LAN to the second, and another slow link connects the second LAN to the third.

In this example there are two zap.network.cfg files: one is located on nodeA,
the reference node, and the other on nodeD, nodeE, and nodeF. There is no file on
the remaining nodes. (This format should not be construed as a requirement; rather,
further customization can be made by using various file location configurations.) The
files are defined as follows:

On nodeA:

[GROUP] #reference node
nodeA
nodeB
nodeC
[GROUP]
nodeD
nodeE
nodeF
nodeG
nodeH
nodeJ

On nodeD, nodeE, and nodeF:

[GROUP] #secondary zap file
nodeD
nodeE
nodeF
[GROUP] #other group
nodeG
nodeH
nodeJ

If the zap.network.cfg file existed only on nodeA, and each LAN were its
own group, the reference node would have to update one MPS in each LAN, requiring
it to travel over a total of three slow links (one to nodeD, nodeE, and nodeF and
two to nodeG, nodeH, and nodeJ). With the example scenario in place, the
reference node updates one MPS on nodeB and nodeC, then tries one MPS on each
of the other nodes in the order they appear in its zap.network.cfg file. ZAP
detects that there is another zap.network.cfg file on nodeD (or nodeE or
nodeF if one of the other nodes fails): instead of nodeD updating one MPS for every
node in its group as defined on nodeA, it updates one MPS for every node in its group
and one from the other group as defined on nodeD (see nodeD, nodeE, and
nodeF on page 184). Initial processing time may be slower because the nodes in the
latter group are not updated until one MPS in nodeD's group completes (as opposed to the parallel processing ZAP normally uses); however, overall processing time and
network congestion are reduced substantially since the number of times ZAP would
have had to travel over the slow links is also reduced. Though this example uses a very
basic model, the savings becomes substantial on systems of greater complexity.
Updating a Specific Element
By default ZAP compares each target MMF with the designated MMF on the
reference node and transmits to each one those elements which are different. In
instances where the element that has changed is known, ZAP can be directed to update
only that element and ignore any other comparison of the file. This significantly increases the speed at which ZAP functions.
In this case, instead of updating one MPS per node and then executing other remote
instances, ZAP copies the file created from selected element(s) to the remote node and
executes a remote ZAP on all MPS’ on that node.
To update a specific element, use the -e option in the following manner from the
node that contains the updated element:
zap -e {@<EAP#> | <"Element Name">} <mmf_name>
If specifying an element name that contains spaces, it must be enclosed in quotes. This
ensures that the variable is passed as one argument to ZAP. If there are no spaces in
the element name, the quotes may be omitted. Multiple element names and/or EAP
numbers are stipulated through multiple -e arguments.
As an example, an MMF file named Talk2Me contains the following elements:
EAP#   Element Name
----   ---------------
1      Welcome Message
2      Salutations
3      Goodbye Message

To update the second and third elements from this reference file to all other nodes in
the vpshosts file of the local machine, issue any of the following commands:
zap -e Salutations -e "Goodbye Message" Talk2Me
zap -e "Goodbye Message" -e @2 Talk2Me
zap -e @3 -e Salutations Talk2Me
zap -e @3 -e @2 Talk2Me

Though the placement of the -e option in the command is imperative, the order in which multiple elements appear is not.
Additional ZAP command line options may be used as well. The following examples
show, in order and on a limited basis, how to update these elements on all the MPS’ on
only the node named womquat; on only MPS 11; on all the nodes in the alternate
vpshosts file named usethisone; and on all the nodes in group 3 of the
zap.network.cfg file defined on the local node. Other options can also be used
and combined depending on the complexity of the situation.
zap -e Salutations -e @3 -n womquat Talk2Me
zap -e @2 -e "Goodbye Message" -v 11 Talk2Me
zap -e @2 -e @3 -f usethisone Talk2Me
zap -e Salutations -e "Goodbye Message" -G 3 Talk2Me

Consolidating Multiple Element Updates
When multiple elements in an MMF file need to be updated, and use of the option
documented above becomes unwieldy, use the -E <file_name> option instead. (If
this file is not located in the current working directory, the path to it must also be
included.) The plain text file must adhere to the following conditions:
• elements may be listed by EAP number, name, or a combination of both
• element numbers must be preceded by the @ sign
• elements containing spaces in the name must not use quotes
• each entry must be listed on a separate line
For instance, zap -E thisfile Talk2Me updates only those elements found
in the file named thisfile, for the MMF file named Talk2Me, on the nodes listed
in the vpshosts file of the reference node. Other options can also be used and
combined with this one depending on the complexity of the situation.
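A sketch of such a file (here named thisfile), listing one entry per line and mixing an EAP number with an element name that contains spaces (no quotes):

@2
Goodbye Message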

Note: Do not use the upper case -E option with the lower case -e option: these two must not be combined.

Exception Processing
If a remote node fails to respond or a MAC file cannot be transferred, an attempt is
made at a later time to retransmit the file. The number of retries is preconfigured at
three, but may be specified otherwise using the -r option. The time interval between
retries is likewise preconfigured and can be changed through use of the -d option: the
default is 30 minutes, but at a minimum this interval must be set to ten minutes. It is
also possible to schedule a date and/or time for the synchronization to take place. This
is accomplished by using the -t option. These options can be used individually or in
combination.
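For example, assuming the -d argument is given in minutes, the following would allow five retries spaced an hour apart (MMF path as in the earlier examples):

zap -r 5 -d 60 /mmf/mps1/myPrompts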
When using ZAP with groups (see Advanced Implementation (High Volume/Traffic)
on page 181), the retry count specifies the maximum number of retry attempts made
directly by the reference node.
Inconsistencies detected during synchronization are resolved by either deleting
extraneous elements from the target MMF file and/or downloading all unreferenced
elements in the MAC file from the reference node and adding them to the target MMF.
Any errors recorded during this process are added to the update results log file. Every
individual update and delete request is processed regardless of whether or not an
unsuccessful operation has occurred during the procedure. The procedure is
performed on all target nodes independently, and can proceed simultaneously or
individually, depending on how the process was initiated (see Basic Implementation
(Low Volume/Traffic) on page 178).
The -A command line option enables ZAP to generate an alarm upon completion of
synchronization on each MPS, regardless of whether an attempt was successful or not.
This alarm contains one of the statuses found in the following table.
Completed            Synchronization was successful.
Failed               Synchronization and all retry attempts were unsuccessful.
NotActive            The MMF file on the particular MPS was not active and therefore could not be synchronized.
Terminated By User   The process was killed by the user, either by pressing CTRL-C, issuing a kill <pid> command, or by some other means.
Each time any MPS finishes ZAP processing with a status of Completed or
NotActive, it generates an alarm message. If processing on every MPS on a node
fails or succeeds, only one alarm message is generated for the entire node. If every
node in a group experiences failed processing, one alarm message is displayed for
each node in the group.
The synchronization alarms instituted by the -A option appear in the following
format:
ZAP: Sync of [<mmf_name>] on [<node_name>.<mps#>] has completed with status [<status>].

where <mmf_name> is the base name of the MMF file that was to be updated, <node_name>.<mps#> identifies the node (as listed in the vpshosts file of the node on which the MMF file update is attempted) and the MPS number on which the synchronization attempt took place, and <status> is the end result in accordance with those listed previously.
If the .mps# portion of the alarm text is omitted, the status message refers to all selected MPS’ on the target node.
For example, a successful synchronization of the MMF file named test_mmf on
MPS number 238 located on the node identified as is29538 appears in the
Alarm Viewer as follows:
Fri Oct 16 15:14:13  02040 Severity 1
ZAP: Sync of [test_mmf] on [is29538.238] has completed with status [Completed].
Log Files
Several log files are generated during ZAP execution. These log files are stored in the
$MPSHOME/common/log directory of the reference node, and can be viewed using
any ASCII text editor. Administration of all ZAP log files (i.e. when the files should
be removed) is left to the discretion of the user. The files are generated on an
individual basis or can be combined (see Consolidation of Log Files on page 190).
After synchronization retries are exhausted, an error message is displayed on the
console and entered into the synchronization distribution log file. This log file is
generated by the node originating the synchronization request with the name
zap.distribute.refnode.mmf_name.selected_elements.MMDDCCYY
, where refnode is the name of the node originating the synchronization request,
mmf_name indicates the base name of the reference MMF file,
selected_elements is the name or EAP number of the element(s) that have been
selected for updates, and MMDDCCYY indicates the date the file was generated. In
addition to errors encountered during the synchronization process, this file contains
information regarding the distribution and completion status for all MMF file
synchronization requests. It also contains information on which nodes were not
notified of the updates and the reason thereof. If a zap.network.cfg file were
present but incorrectly formatted and thus not usable, this error is entered into the log
file as well. This file is appended to and never overwritten. A new file is created on a
daily basis for each unique MMF file and selected elements that are synchronized
during the day, and all log files for previous days are left intact.

If all elements within an MMF have been selected for updating, the
selected_elements portion of the log file name appears as ALL_ELEMENTS.
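For instance, a full-file synchronization of Talk2Me initiated from a reference node named nodeA on October 16, 2015 would produce a distribution log named along these lines (the node name and date are illustrative):

zap.distribute.nodeA.Talk2Me.ALL_ELEMENTS.10162015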

If an elemental comparison finds inconsistencies between the MAC and target
MMF files, the MMF file is considered inconsistent and the errors are logged
to the update results log file. This file is named in the format
zap.results.target_node.mps#.mmf_name.selected_elements,
where target_node is the name of the remote node where the synchronization has
occurred, mps# is the number of the MPS on which the target MMF file is located,
mmf_name indicates the base name of the reference MMF file, and
selected_elements is the name or EAP number of the element(s) that have been
selected for updates. The file also contains information on modifications made to the
MMF file. This log file is generated by the remote (target) node: each target node and
MPS with an active MMF file has its own corresponding log file. The file is appended
to and never overwritten, but is automatically renamed as *.bak (backup) when it
reaches its predetermined size of 100K, and a new file created.
Upon completion of the zap process, all synchronized MMF files contain identical
elements and data, even though the elements may be stored at different positions
within the files. This result is known as logical equivalence. The synchronization
status log file contains the state of the synchronization process for each target node.
The file naming convention exists as
zap.status.refnode.mmf_name.selected_elements.MMDDCCYY,
where refnode is the name of the reference node, mmf_name indicates the base
name of the reference MMF file, selected_elements is the name or EAP
number of the element(s) that have been selected for updates, and MMDDCCYY the date
the file was generated. A new file is created daily for each unique MMF file and
selected elements synchronized during the day, leaving the status of all prior days
intact.
The zap.debug.log file contains a history of each instance of ZAP processing
initiated from the node. This file is most often used by Avaya to troubleshoot
unexpected results that may occur, and can be used for informational purposes by
customers as well. The file is appended to and never overwritten, but is automatically
renamed as *.bak (backup) when it reaches its predetermined size of 1 MB, and a
new file created.
If a ZAP process is terminated during execution by pressing CTRL-C or issuing a
kill <pid> command, ZAP attempts to update the applicable log files and delete
any temporary files that may have been created during processing. If ZAP is
terminated with the kill -9 <pid> command (highly discouraged), these
temporary files are not removed. If, after terminating an instance of ZAP, future
attempts at using the utility fail, all files named /tmp/zap.* must be removed
from the local and all remote nodes (this most often occurs when using the kill -9
command, and is one of the reasons it is highly discouraged).

Consolidation of Log Files
By default log files are created whenever ZAP is used, and are never overwritten.
While administration of these files is left to the discretion of the user, this can
eventually lead to disk saturation if files are not off-loaded or deleted. To reduce this
need for manual intervention, use the -C option to consolidate the files.
Use of the -C option must be consistent: all instances of ZAP must either use it or
leave it out. When instituted, ZAP initially creates the individual log files as it would
without the option: however, when the ZAP process completes, each individual file is
merged into the corresponding consolidated log file. A maximum of seven files are
created if using ZAP on a proxy basis; four files are created if using ZAP without
proxies (these numbers do not include backup files). The zap.debug.log file is
created as usual. The other files are consolidated into the following:
For instances of ZAP started on the local node:
• zap.status.log
• zap.results.log
• zap.distribute.log
For instances of ZAP that use the local node as a proxy:
• zap.status.proxy.log
• zap.results.proxy.log
• zap.distribute.proxy.log
Each consolidated file can reach a maximum size of 1 MB. When this limit is reached,
the file is appended with a .bak extension and a new file created. If this new file
then reaches the maximum size, it too is renamed and the previous backup file is
replaced by it.
If ZAP is used without the -C option, then run later the same day with the option for the same command, the log file generated earlier in the day is merged along with the latest one into the consolidated log file.

Synchronization (ZAP) Command Summary
The following table contains a list of the options available to ZAP. To initiate the
synchronization process, enter the zap command in a VSH window on the reference
node (i.e., the node containing the MMF file that is used to update other instances of
the file across the network).
zap [-A] [-C] [-d <interval>] [-e {@<EAP#> | "Element Name"}]
    [-E <file_name>] [-f <alternate_file>] [-G <group_number>] [-L]
    [-m <mac_name>] [-n <node_name>] [-r <retries>]
    [-t <date/time>] <mmf_name>
