Vortex OpenSplice
Deployment Guide
Release 6.x

Contents
1 Preface
  1.1 About the Deployment Guide
  1.2 Intended Audience
  1.3 Organisation
  1.4 Conventions

2 Overview
  2.1 Vortex OpenSplice Architecture
    2.1.1 Single Process architecture
    2.1.2 Shared Memory architecture
    2.1.3 Comparison of Deployment Architectures
    2.1.4 Configuring and Using the Deployment Architectures
  2.2 Vortex OpenSplice Usage
    2.2.1 Starting Vortex OpenSplice for a Single Process Deployment
    2.2.2 Starting Vortex OpenSplice for a Shared Memory Deployment
    2.2.3 Monitoring
      2.2.3.1 Diagnostic Messages
      2.2.3.2 Vortex OpenSplice Tuner
      2.2.3.3 Vortex OpenSplice Memory Management Statistics Monitor
    2.2.4 Stopping Vortex OpenSplice
      2.2.4.1 Stopping a Single Process deployment
      2.2.4.2 Stopping a Shared Memory deployment
        2.2.4.2.1 Stopping OSPL by using signals
        2.2.4.2.2 Stopping Applications in Shared Memory Mode
    2.2.5 Deploying Vortex OpenSplice on VxWorks 6.x
    2.2.6 Deploying Vortex OpenSplice on Integrity
    2.2.7 Installing/Uninstalling the Vortex OpenSplice C# Assembly to the Global Assembly Cache
      2.2.7.1 Installing the C# Assembly to the Global Assembly Cache
      2.2.7.2 Uninstalling the C# Assembly from the Global Assembly Cache
  2.3 Vortex OpenSplice Configuration
    2.3.1 Configuration Files
    2.3.2 Environment Variables
      2.3.2.1 The OSPL_URI environment variable
    2.3.3 Configuration of Single Process deployment
    2.3.4 Configuration of Shared Memory deployment
    2.3.5 Temporary Files
  2.4 Applications which operate in multiple domains
    2.4.1 Interaction with a Networking Service
  2.5 Time-jumps
    2.5.1 Effect on data
    2.5.2 Effect on processing
    2.5.3 Background information
  2.6 Time stamps and year 2038 limit
    2.6.1 CORBA C++
    2.6.2 CORBA Java
    2.6.3 Migration
    2.6.4 Platform support
    2.6.5 DDS_Time structure change

3 Service Descriptions

4 The Domain Service

5 The Durability Service
  5.1 Durability Service Purpose
  5.2 Durability Service Concepts
    5.2.1 Role and Scope
    5.2.2 Name-spaces
    5.2.3 Name-space policies
      5.2.3.1 Alignment policy
      5.2.3.2 Durability policy
      5.2.3.3 Delayed alignment policy
      5.2.3.4 Merge policy
      5.2.3.5 Prevent aligning equal data sets
      5.2.3.6 Dynamic name-spaces
      5.2.3.7 Master/slave
  5.3 Mechanisms
    5.3.1 Interaction with other durability services
    5.3.2 Interaction with other OpenSplice services
    5.3.3 Interaction with applications
    5.3.4 Parallel alignment
    5.3.5 Tracing
  5.4 Lifecycle
    5.4.1 Determine connectivity
    5.4.2 Determine compatibility
    5.4.3 Master selection
    5.4.4 Persistent data injection
    5.4.5 Discover historical data
    5.4.6 Align historical data
    5.4.7 Provide historical data
    5.4.8 Merge historical data
  5.5 Threads description
    5.5.1 ospl_durability
    5.5.2 conflictResolver
    5.5.3 statusThread
    5.5.4 d_adminActionQueue
    5.5.5 AdminEventDispatcher
    5.5.6 groupCreationThread
    5.5.7 sampleRequestHandler
    5.5.8 resendQueue
    5.5.9 masterMonitor
    5.5.10 groupLocalListenerActionQueue
    5.5.11 d_groupsRequest
    5.5.12 d_nameSpaces
    5.5.13 d_nameSpacesRequest
    5.5.14 d_status
    5.5.15 d_newGroup
    5.5.16 d_sampleChain
    5.5.17 d_sampleRequest
    5.5.18 d_deleteData
    5.5.19 dcpsHeartbeatListener
    5.5.20 d_capability
    5.5.21 remoteReader
    5.5.22 persistentDataListener
    5.5.23 historicalDataRequestHandler
    5.5.24 durabilityStateListener

6 The Networking Service
  6.1 The Native Networking Service
  6.2 The Secure Native Networking Service
    6.2.1 Compression
      6.2.1.1 Availability
      6.2.1.2 How to set the level parameter in zlib
      6.2.1.3 How to switch to other built-in compressors
      6.2.1.4 How to write a plugin for another compression library
    6.2.2 How to configure for a plugin
    6.2.3 Constraints
7 The DDSI2 and DDSI2E Networking Services
  7.1 DDSI Concepts
    7.1.1 Mapping of DCPS domains to DDSI domains
    7.1.2 Mapping of DCPS entities to DDSI entities
    7.1.3 Reliable communication
    7.1.4 DDSI-specific transient-local behaviour
    7.1.5 Discovery of participants & endpoints
  7.2 Vortex OpenSplice DDSI2 specifics
    7.2.1 Translating between Vortex OpenSplice and DDSI
    7.2.2 Federated versus Standalone deployment
    7.2.3 Discovery behaviour
      7.2.3.1 Local discovery and built-in topics
      7.2.3.2 Proxy participants and endpoints
      7.2.3.3 Sharing of discovery information
      7.2.3.4 Lingering writers
      7.2.3.5 Start-up mode
    7.2.4 Writer history QoS and throttling
    7.2.5 Unresponsive readers & head-of-stream blocking
    7.2.6 Handling of multiple partitions and wildcards
      7.2.6.1 Publishing in multiple partitions
      7.2.6.2 Wildcard partitions
  7.3 Network and discovery configuration
    7.3.1 Networking interfaces
      7.3.1.1 Multicasting
      7.3.1.2 Discovery configuration
        7.3.1.2.1 Discovery addresses
        7.3.1.2.2 Asymmetrical discovery
        7.3.1.2.3 Timing of SPDP packets
        7.3.1.2.4 Endpoint discovery
    7.3.2 Combining multiple participants
    7.3.3 Controlling port numbers
    7.3.4 Coexistence with Vortex OpenSplice RTNetworking
  7.4 Data path configuration
    7.4.1 Data path architecture
    7.4.2 Transmit-side configuration
      7.4.2.1 Transmit processing
      7.4.2.2 Retransmit merging
      7.4.2.3 Retransmit backlogs
      7.4.2.4 Controlling fragmentation
    7.4.3 Receive-side configuration
      7.4.3.1 Receive processing
      7.4.3.2 Minimising receive latency
    7.4.4 Direction-independent settings
      7.4.4.1 Maximum sample size
  7.5 DDSI2E Enhanced features
    7.5.1 Introduction to DDSI2E
    7.5.2 Channel configuration
      7.5.2.1 Channel configuration overview
      7.5.2.2 Transmit side
      7.5.2.3 Receive side
      7.5.2.4 Discovery traffic
      7.5.2.5 On interoperability
    7.5.3 Network partition configuration
      7.5.3.1 Network partition configuration overview
      7.5.3.2 Matching rules
      7.5.3.3 Multiple matching mappings
      7.5.3.4 On interoperability
    7.5.4 Encryption configuration
      7.5.4.1 Encryption configuration overview
      7.5.4.2 On interoperability
  7.6 Thread configuration
  7.7 Reporting and tracing
  7.8 Compatibility and conformance
    7.8.1 Conformance modes
      7.8.1.1 Compatibility issues with RTI
      7.8.1.2 Compatibility issues with TwinOaks

8 The NetworkingBridge Service
  8.1 Background
  8.2 Example Configuration

9 The Tuner Service

10 The DbmsConnect Service
  10.1 Usage
  10.2 DDS and DBMS Concepts and Types Mapping
  10.3 General DbmsConnect Concepts
  10.4 DDS to DBMS Use Case
    10.4.1 DDS to DBMS Scenario
    10.4.2 DDS to DBMS Configuration
      10.4.2.1 DDS to DBMS Configuration Explanation
  10.5 DBMS to DDS Use Case
    10.5.1 DBMS to DDS Scenario
    10.5.2 DBMS to DDS Configuration
      10.5.2.1 DBMS to DDS Configuration Explanation
  10.6 Replication Use Case
    10.6.1 Replication Scenario
    10.6.2 Replication Configuration
      10.6.2.1 Replication Configuration Explanation

11 Tools
  11.1 Introduction
  11.2 osplconf: the OpenSplice Configuration editor
  11.3 ospl: the OpenSplice service manager
  11.4 mmstat: Memory Management Statistics
    11.4.1 The memory statistics mode
    11.4.2 The memory statistics difference mode
    11.4.3 The meta-object references mode
    11.4.4 The meta-object references difference mode

12 Configuration
  12.1 OpenSplice
  12.2 Domain
    12.2.1 Name
    12.2.2 Id
    12.2.3 Role
    12.2.4 Lease
      12.2.4.1 ExpiryTime
        12.2.4.1.1 update_factor
    12.2.5 GeneralWatchdog


      12.2.5.1 Scheduling
        12.2.5.1.1 Priority
          12.2.5.1.1.1 priority_kind
        12.2.5.1.2 Class
    12.2.6 ServiceTerminatePeriod
    12.2.7 SingleProcess
    12.2.8 Description
    12.2.9 CPUAffinity
    12.2.10 InProcessExceptionHandling
    12.2.11 Daemon
      12.2.11.1 Locking
      12.2.11.2 Watchdog
        12.2.11.2.1 Scheduling
          12.2.11.2.1.1 Priority
          12.2.11.2.1.2 priority_kind
          12.2.11.2.1.3 Class
        12.2.11.2.2 StackSize
      12.2.11.3 shmMonitor
        12.2.11.3.1 Scheduling
          12.2.11.3.1.1 Priority
          12.2.11.3.1.2 priority_kind
          12.2.11.3.1.3 Class
        12.2.11.3.2 StackSize
      12.2.11.4 KernelManager
        12.2.11.4.1 Scheduling
          12.2.11.4.1.1 Priority
          12.2.11.4.1.2 priority_kind
          12.2.11.4.1.3 Class
        12.2.11.4.2 StackSize
      12.2.11.5 GarbageCollector
        12.2.11.5.1 Scheduling
          12.2.11.5.1.1 Priority
          12.2.11.5.1.2 priority_kind
          12.2.11.5.1.3 Class
        12.2.11.5.2 StackSize
      12.2.11.6 ResendManager
        12.2.11.6.1 Scheduling
          12.2.11.6.1.1 Priority
          12.2.11.6.1.2 priority_kind
          12.2.11.6.1.3 Class
        12.2.11.6.2 StackSize
      12.2.11.7 Heartbeat
        12.2.11.7.1 transport_priority
        12.2.11.7.2 Scheduling
          12.2.11.7.2.1 Priority
          12.2.11.7.2.2 priority_kind
          12.2.11.7.2.3 Class
        12.2.11.7.3 ExpiryTime
          12.2.11.7.3.1 update_factor
        12.2.11.7.4 StackSize
      12.2.11.8 Tracing
        12.2.11.8.1 synchronous
        12.2.11.8.2 OutputFile
        12.2.11.8.3 Timestamps
          12.2.11.8.3.1 absolute
        12.2.11.8.4 Verbosity
    12.2.12 Database
      12.2.12.1 Size


      12.2.12.2 Threshold
      12.2.12.3 Address
      12.2.12.4 Locking
    12.2.13 Listeners
      12.2.13.1 StackSize
    12.2.14 Service
      12.2.14.1 name
      12.2.14.2 enabled
      12.2.14.3 Command
      12.2.14.4 MemoryPoolSize
      12.2.14.5 HeapSize
      12.2.14.6 StackSize
      12.2.14.7 Configuration
      12.2.14.8 Scheduling
        12.2.14.8.1 Priority
          12.2.14.8.1.1 priority_kind
        12.2.14.8.2 Class
      12.2.14.9 Locking
      12.2.14.10 FailureAction
    12.2.15 GIDKey
      12.2.15.1 groups
    12.2.16 Application
      12.2.16.1 name
      12.2.16.2 enabled
      12.2.16.3 Command
      12.2.16.4 Arguments
      12.2.16.5 Library
    12.2.17 BuiltinTopics
      12.2.17.1 enabled
      12.2.17.2 logfile
    12.2.18 PriorityInheritance
      12.2.18.1 enabled
    12.2.19 Report
      12.2.19.1 append
      12.2.19.2 verbosity
    12.2.20 Statistics
      12.2.20.1 Category
        12.2.20.1.1 enabled
        12.2.20.1.2 name
    12.2.21 RetentionPeriod
    12.2.22 ReportPlugin
      12.2.22.1 Library
        12.2.22.1.1 file_name
      12.2.22.2 Initialize
        12.2.22.2.1 symbol_name
        12.2.22.2.2 argument
      12.2.22.3 Report
        12.2.22.3.1 symbol_name
      12.2.22.4 TypedReport
        12.2.22.4.1 symbol_name
      12.2.22.5 Finalize
        12.2.22.5.1 symbol_name
      12.2.22.6 SuppressDefaultLogs
      12.2.22.7 ServicesOnly
    12.2.23 ResourceLimits
      12.2.23.1 MaxSamples
        12.2.23.1.1 WarnAt
      12.2.23.2 MaxInstances


        12.2.23.2.1 WarnAt
      12.2.23.3 MaxSamplesPerInstance
        12.2.23.3.1 WarnAt
    12.2.24 PartitionAccess
      12.2.24.1 partition_expression
      12.2.24.2 access_mode
    12.2.25 SystemId
      12.2.25.1 Range
        12.2.25.1.1 min
        12.2.25.1.2 max
      12.2.25.2 UserEntropy
    12.2.26 TopicAccess
      12.2.26.1 topic_expression
      12.2.26.2 access_mode
    12.2.27 UserClock
      12.2.27.1 y2038Ready
      12.2.27.2 UserClockModule
      12.2.27.3 UserClockStart
        12.2.27.3.1 enabled
      12.2.27.4 UserClockStop
        12.2.27.4.1 enabled
      12.2.27.5 UserClockQuery
        12.2.27.5.1 enabled
    12.2.28 DurablePolicies
      12.2.28.1 Policy
        12.2.28.1.1 obtain
        12.2.28.1.2 cache
    12.2.29 y2038Ready
    12.2.30 Filters
      12.2.30.1 Filter
        12.2.30.1.1 content
        12.2.30.1.2 PartitionTopic
  12.3 DurabilityService
    12.3.1 name
    12.3.2 ClientDurability
      12.3.2.1 enabled
      12.3.2.2 EntityNames
        12.3.2.2.1 Publisher
        12.3.2.2.2 Subscriber
        12.3.2.2.3 Partition
    12.3.3 Watchdog
      12.3.3.1 deadlockDetection
      12.3.3.2 Scheduling
        12.3.3.2.1 Priority
          12.3.3.2.1.1 priority_kind
        12.3.3.2.2 Class
    12.3.4 Network
      12.3.4.1 latency_budget
      12.3.4.2 transport_priority
      12.3.4.3 Heartbeat
        12.3.4.3.1 latency_budget
        12.3.4.3.2 transport_priority
        12.3.4.3.3 Scheduling
          12.3.4.3.3.1 Priority
          12.3.4.3.3.2 priority_kind
          12.3.4.3.3.3 Class
        12.3.4.3.4 ExpiryTime
          12.3.4.3.4.1 update_factor


      12.3.4.4 InitialDiscoveryPeriod
      12.3.4.5 Alignment
        12.3.4.5.1 latency_budget
        12.3.4.5.2 transport_priority
        12.3.4.5.3 TimeAlignment
        12.3.4.5.4 AlignerScheduling
          12.3.4.5.4.1 Priority
          12.3.4.5.4.2 priority_kind
          12.3.4.5.4.3 Class
        12.3.4.5.5 AligneeScheduling
          12.3.4.5.5.1 Priority
          12.3.4.5.5.2 priority_kind
          12.3.4.5.5.3 Class
        12.3.4.5.6 RequestCombinePeriod
          12.3.4.5.6.1 Initial
          12.3.4.5.6.2 Operational
        12.3.4.5.7 Partition
          12.3.4.5.7.1 Name
          12.3.4.5.7.2 alignment_priority
          12.3.4.5.7.3 latency_budget
          12.3.4.5.7.4 transport_priority
        12.3.4.5.8 TimeToWaitForAligner
      12.3.4.6 WaitForAttachment
        12.3.4.6.1 maxWaitCount
        12.3.4.6.2 ServiceName
    12.3.5 MasterElection
      12.3.5.1 WaitTime
    12.3.6 Persistent
      12.3.6.1 StoreDirectory
      12.3.6.2 StoreSessionTime
      12.3.6.3 StoreSleepTime
      12.3.6.4 StoreMode
      12.3.6.5 SmpCount
      12.3.6.6 KeyValueStore
        12.3.6.6.1 type
        12.3.6.6.2 StorageParameters
        12.3.6.6.3 Compression
          12.3.6.6.3.1 algorithm
          12.3.6.6.3.2 enabled
      12.3.6.7 StoreOptimizeInterval
      12.3.6.8 Scheduling
        12.3.6.8.1 Priority
          12.3.6.8.1.1 priority_kind
        12.3.6.8.2 Class
    12.3.7 NameSpaces
      12.3.7.1 NameSpace
        12.3.7.1.1 name
        12.3.7.1.2 Partition
        12.3.7.1.3 PartitionTopic
      12.3.7.2 Policy
        12.3.7.2.1 Merge
          12.3.7.2.1.1 type
          12.3.7.2.1.2 scope
        12.3.7.2.2 nameSpace
        12.3.7.2.3 durability
        12.3.7.2.4 aligner
        12.3.7.2.5 alignee
        12.3.7.2.6 delayedAlignment


        12.3.7.2.7 equalityCheck
        12.3.7.2.8 masterPriority
    12.3.8 EntityNames
      12.3.8.1 Publisher
      12.3.8.2 Subscriber
      12.3.8.3 Partition
    12.3.9 Tracing
      12.3.9.1 synchronous
      12.3.9.2 OutputFile
      12.3.9.3 Timestamps
        12.3.9.3.1 absolute
      12.3.9.4 Verbosity
  12.4 SNetworkService
    12.4.1 name
    12.4.2 Watchdog
      12.4.2.1 Scheduling
        12.4.2.1.1 Priority
          12.4.2.1.1.1 priority_kind
        12.4.2.1.2 Class
    12.4.3 General
      12.4.3.1 NetworkInterfaceAddress
        12.4.3.1.1 forced
        12.4.3.1.2 ipv6
        12.4.3.1.3 bind
        12.4.3.1.4 allowReuse
      12.4.3.2 EnableMulticastLoopback
      12.4.3.3 LegacyCompression
      12.4.3.4 Reconnection
        12.4.3.4.1 allowed
    12.4.4 Partitioning
      12.4.4.1 GlobalPartition
        12.4.4.1.1 Address
        12.4.4.1.2 SecurityProfile
        12.4.4.1.3 MulticastTimeToLive
      12.4.4.2 NetworkPartitions
        12.4.4.2.1 NetworkPartition
          12.4.4.2.1.1 Name
          12.4.4.2.1.2 Address
          12.4.4.2.1.3 Connected
          12.4.4.2.1.4 Compression
          12.4.4.2.1.5 SecurityProfile
          12.4.4.2.1.6 MulticastTimeToLive
      12.4.4.3 IgnoredPartitions
        12.4.4.3.1 IgnoredPartition
          12.4.4.3.1.1 DCPSPartitionTopic
      12.4.4.4 PartitionMappings
        12.4.4.4.1 PartitionMapping
          12.4.4.4.1.1 NetworkPartition
          12.4.4.4.1.2 DCPSPartitionTopic
    12.4.5 Security
      12.4.5.1 enabled
      12.4.5.2 SecurityProfile
        12.4.5.2.1 Name
        12.4.5.2.2 Cipher
        12.4.5.2.3 CipherKey
      12.4.5.3 AccessControl
        12.4.5.3.1 enabled
        12.4.5.3.2 policy


        12.4.5.3.3 AccessControlModule
          12.4.5.3.3.1 enabled
          12.4.5.3.3.2 type
      12.4.5.4 Authentication
        12.4.5.4.1 enabled
        12.4.5.4.2 X509Authentication
          12.4.5.4.2.1 Credentials
          12.4.5.4.2.2 Key
          12.4.5.4.2.3 Cert
          12.4.5.4.2.4 TrustedCertificates
    12.4.6 Channels
      12.4.6.1 Channel
        12.4.6.1.1 name
        12.4.6.1.2 reliable
        12.4.6.1.3 default
        12.4.6.1.4 enabled
        12.4.6.1.5 priority
        12.4.6.1.6 PortNr
        12.4.6.1.7 FragmentSize
        12.4.6.1.8 Resolution
        12.4.6.1.9 AdminQueueSize
        12.4.6.1.10 CompressionBufferSize
        12.4.6.1.11 CompressionThreshold
        12.4.6.1.12 Sending
          12.4.6.1.12.1 CrcCheck
          12.4.6.1.12.2 QueueSize
          12.4.6.1.12.3 MaxBurstSize
          12.4.6.1.12.4 ThrottleLimit
          12.4.6.1.12.5 ThrottleThreshold
          12.4.6.1.12.6 MaxRetries
          12.4.6.1.12.7 RecoveryFactor
          12.4.6.1.12.8 DiffServField
          12.4.6.1.12.9 DontRoute
          12.4.6.1.12.10 DontFragment
          12.4.6.1.12.11 TimeToLive
          12.4.6.1.12.12 Scheduling
          12.4.6.1.12.13 Priority
          12.4.6.1.12.14 priority_kind
          12.4.6.1.12.15 Class
        12.4.6.1.13 Receiving
          12.4.6.1.13.1 CrcCheck
          12.4.6.1.13.2 ReceiveBufferSize
          12.4.6.1.13.3 Scheduling
          12.4.6.1.13.4 Priority
          12.4.6.1.13.5 priority_kind
          12.4.6.1.13.6 Class
          12.4.6.1.13.7 DefragBufferSize
          12.4.6.1.13.8 SMPOptimization
          12.4.6.1.13.9 enabled
          12.4.6.1.13.10 MaxReliabBacklog
          12.4.6.1.13.11 PacketRetentionPeriod
          12.4.6.1.13.12 ReliabilityRecoveryPeriod
        12.4.6.1.14 AllowedPorts
      12.4.6.2 AllowedPorts
    12.4.7 Discovery
      12.4.7.1 enabled
      12.4.7.2 Scope
      12.4.7.3 PortNr


      12.4.7.4 ProbeList
      12.4.7.5 Sending
        12.4.7.5.1 CrcCheck
        12.4.7.5.2 DiffServField
        12.4.7.5.3 DontRoute
        12.4.7.5.4 DontFragment
        12.4.7.5.5 TimeToLive
        12.4.7.5.6 Scheduling
          12.4.7.5.6.1 Priority
          12.4.7.5.6.2 priority_kind
          12.4.7.5.6.3 Class
        12.4.7.5.7 Interval
        12.4.7.5.8 SafetyFactor
        12.4.7.5.9 SalvoSize
      12.4.7.6 Receiving
        12.4.7.6.1 CrcCheck
        12.4.7.6.2 Scheduling
          12.4.7.6.2.1 Priority
          12.4.7.6.2.2 priority_kind
          12.4.7.6.2.3 Class
        12.4.7.6.3 DeathDetectionCount
        12.4.7.6.4 ReceiveBufferSize
    12.4.8 Tracing
      12.4.8.1 enabled
      12.4.8.2 OutputFile
      12.4.8.3 Timestamps
        12.4.8.3.1 absolute
      12.4.8.4 Categories
        12.4.8.4.1 Default
        12.4.8.4.2 Configuration
        12.4.8.4.3 Construction
        12.4.8.4.4 Destruction
        12.4.8.4.5 Mainloop
        12.4.8.4.6 Groups
        12.4.8.4.7 Send
        12.4.8.4.8 Receive
        12.4.8.4.9 Throttling
        12.4.8.4.10 Test
        12.4.8.4.11 Discovery
      12.4.8.5 Verbosity
    12.4.9 Compression
      12.4.9.1 PluginLibrary
      12.4.9.2 PluginInitFunction
      12.4.9.3 PluginParameter
  12.5 NetworkService
    12.5.1 name
    12.5.2 Watchdog
      12.5.2.1 Scheduling
        12.5.2.1.1 Priority
          12.5.2.1.1.1 priority_kind
        12.5.2.1.2 Class
    12.5.3 General
      12.5.3.1 NetworkInterfaceAddress
        12.5.3.1.1 forced
        12.5.3.1.2 ipv6
        12.5.3.1.3 bind
        12.5.3.1.4 allowReuse
      12.5.3.2 EnableMulticastLoopback

173
173
173
174
174
174
175
175
175
175
175
176
176
176
176
176
177
177
177
177
178
178
178
178
178
179
179
179
179
179
180
180
180
180
181
181
181
181
182
182
182
183
183
183
183
184
184
184
184
185
185
185
185
185
186
186
186
187

12.5.3.3 LegacyCompression . . . . . . . . . .
12.5.3.4 Reconnection . . . . . . . . . . . . .
12.5.3.4.1 allowed . . . . . . . . . . .
12.5.4 Partitioning . . . . . . . . . . . . . . . . . . .
12.5.4.1 GlobalPartition . . . . . . . . . . . .
12.5.4.1.1 Address . . . . . . . . . . .
12.5.4.1.2 MulticastTimeToLive . . . .
12.5.4.2 NetworkPartitions . . . . . . . . . . .
12.5.4.2.1 NetworkPartition . . . . . .
12.5.4.2.1.1 Name . . . . . . . .
12.5.4.2.1.2 Address . . . . . . .
12.5.4.2.1.3 Connected . . . . . .
12.5.4.2.1.4 Compression . . . .
12.5.4.2.1.5 SecurityProfile . . .
12.5.4.2.1.6 MulticastTimeToLive
12.5.4.3 IgnoredPartitions . . . . . . . . . . .
12.5.4.3.1 IgnoredPartition . . . . . . .
12.5.4.3.1.1 DCPSPartitionTopic .
12.5.4.4 PartitionMappings . . . . . . . . . . .
12.5.4.4.1 PartitionMapping . . . . . .
12.5.4.4.1.1 NetworkPartition . .
12.5.4.4.1.2 DCPSPartitionTopic .
12.5.5 Channels . . . . . . . . . . . . . . . . . . . .
12.5.5.1 Channel . . . . . . . . . . . . . . . .
12.5.5.1.1 name . . . . . . . . . . . . .
12.5.5.1.2 reliable . . . . . . . . . . . .
12.5.5.1.3 default . . . . . . . . . . . .
12.5.5.1.4 enabled . . . . . . . . . . .
12.5.5.1.5 priority . . . . . . . . . . .
12.5.5.1.6 PortNr . . . . . . . . . . . .
12.5.5.1.7 FragmentSize . . . . . . . .
12.5.5.1.8 Resolution . . . . . . . . . .
12.5.5.1.9 AdminQueueSize . . . . . .
12.5.5.1.10 CompressionBufferSize . . .
12.5.5.1.11 CompressionThreshold . . .
12.5.5.1.12 Sending . . . . . . . . . . .
12.5.5.1.12.1 CrcCheck . . . . . .
12.5.5.1.12.2 QueueSize . . . . . .
12.5.5.1.12.3 MaxBurstSize . . . .
12.5.5.1.12.4 ThrottleLimit . . . .
12.5.5.1.12.5 ThrottleThreshold . .
12.5.5.1.12.6 MaxRetries . . . . .
12.5.5.1.12.7 RecoveryFactor . . .
12.5.5.1.12.8 DiffServField . . . .
12.5.5.1.12.9 DontRoute . . . . . .
12.5.5.1.12.10 DontFragment . . . .
12.5.5.1.12.11 TimeToLive . . . . .
12.5.5.1.12.12 Scheduling . . . . .
12.5.5.1.12.13 Priority . . . . . . .
12.5.5.1.12.14 priority_kind . . . .
12.5.5.1.12.15 Class . . . . . . . . .
12.5.5.1.13 Receiving . . . . . . . . . .
12.5.5.1.13.1 CrcCheck . . . . . .
12.5.5.1.13.2 ReceiveBufferSize . .
12.5.5.1.13.3 Scheduling . . . . .
12.5.5.1.13.4 Priority . . . . . . .
12.5.5.1.13.5 priority_kind . . . .
12.5.5.1.13.6 Class . . . . . . . . .

187
187
187
188
188
188
189
189
189
189
189
190
190
190
190
191
191
191
191
191
192
192
192
192
193
193
193
193
194
194
194
194
195
195
195
195
196
196
196
196
197
197
197
197
198
198
198
198
199
199
199
199
199
200
200
200
200
201

12.5.5.1.13.7 DefragBufferSize . . . . .
12.5.5.1.13.8 SMPOptimization . . . . .
12.5.5.1.13.9 enabled . . . . . . . . . .
12.5.5.1.13.10 MaxReliabBacklog . . . .
12.5.5.1.13.11 PacketRetentionPeriod . .
12.5.5.1.13.12 ReliabilityRecoveryPeriod
12.5.5.1.14 AllowedPorts . . . . . . . . . . .
12.5.5.2 AllowedPorts . . . . . . . . . . . . . . . .
12.5.6 Discovery . . . . . . . . . . . . . . . . . . . . . . .
12.5.6.1 enabled . . . . . . . . . . . . . . . . . . . .
12.5.6.2 Scope . . . . . . . . . . . . . . . . . . . .
12.5.6.3 PortNr . . . . . . . . . . . . . . . . . . . .
12.5.6.4 ProbeList . . . . . . . . . . . . . . . . . . .
12.5.6.5 Sending . . . . . . . . . . . . . . . . . . .
12.5.6.5.1 CrcCheck . . . . . . . . . . . . .
12.5.6.5.2 DiffServField . . . . . . . . . . .
12.5.6.5.3 DontRoute . . . . . . . . . . . . .
12.5.6.5.4 DontFragment . . . . . . . . . . .
12.5.6.5.5 TimeToLive . . . . . . . . . . . .
12.5.6.5.6 Scheduling . . . . . . . . . . . .
12.5.6.5.6.1 Priority . . . . . . . . . .
12.5.6.5.6.2 priority_kind . . . . . . .
12.5.6.5.6.3 Class . . . . . . . . . . . .
12.5.6.5.7 Interval . . . . . . . . . . . . . .
12.5.6.5.8 SafetyFactor . . . . . . . . . . . .
12.5.6.5.9 SalvoSize . . . . . . . . . . . . .
12.5.6.6 Receiving . . . . . . . . . . . . . . . . . .
12.5.6.6.1 CrcCheck . . . . . . . . . . . . .
12.5.6.6.2 Scheduling . . . . . . . . . . . .
12.5.6.6.2.1 Priority . . . . . . . . . .
12.5.6.6.2.2 priority_kind . . . . . . .
12.5.6.6.2.3 Class . . . . . . . . . . . .
12.5.6.6.3 DeathDetectionCount . . . . . . .
12.5.6.6.4 ReceiveBufferSize . . . . . . . . .
12.5.7 Tracing . . . . . . . . . . . . . . . . . . . . . . . .
12.5.7.1 enabled . . . . . . . . . . . . . . . . . . . .
12.5.7.2 OutputFile . . . . . . . . . . . . . . . . . .
12.5.7.3 Timestamps . . . . . . . . . . . . . . . . .
12.5.7.3.1 absolute . . . . . . . . . . . . . .
12.5.7.4 Categories . . . . . . . . . . . . . . . . . .
12.5.7.4.1 Default . . . . . . . . . . . . . . .
12.5.7.4.2 Configuration . . . . . . . . . . .
12.5.7.4.3 Construction . . . . . . . . . . . .
12.5.7.4.4 Destruction . . . . . . . . . . . .
12.5.7.4.5 Mainloop . . . . . . . . . . . . .
12.5.7.4.6 Groups . . . . . . . . . . . . . . .
12.5.7.4.7 Send . . . . . . . . . . . . . . . .
12.5.7.4.8 Receive . . . . . . . . . . . . . .
12.5.7.4.9 Throttling . . . . . . . . . . . . .
12.5.7.4.10 Test . . . . . . . . . . . . . . . .
12.5.7.4.11 Discovery . . . . . . . . . . . . .
12.5.7.5 Verbosity . . . . . . . . . . . . . . . . . . .
12.5.8 Compression . . . . . . . . . . . . . . . . . . . . .
12.5.8.1 PluginLibrary . . . . . . . . . . . . . . . .
12.5.8.2 PluginInitFunction . . . . . . . . . . . . . .
12.5.8.3 PluginParameter . . . . . . . . . . . . . . .
12.6 NetworkingBridgeService . . . . . . . . . . . . . . . . . . .
12.6.1 name . . . . . . . . . . . . . . . . . . . . . . . . .

201
201
201
201
202
202
202
203
203
203
204
204
204
204
205
205
205
206
206
206
206
206
207
207
207
207
208
208
208
208
209
209
209
209
209
210
210
210
210
211
211
211
211
211
212
212
212
212
213
213
213
213
214
214
214
215
215
215

12.6.2 Exclude . . . . . . . . . . . . . . . . . . . . . . . . . .
12.6.2.1 Entry . . . . . . . . . . . . . . . . . . . . . . .
12.6.2.1.1 DCPSPartitionTopic . . . . . . . . . .
12.6.3 Include . . . . . . . . . . . . . . . . . . . . . . . . . .
12.6.3.1 Entry . . . . . . . . . . . . . . . . . . . . . . .
12.6.3.1.1 DCPSPartitionTopic . . . . . . . . . .
12.6.4 Tracing . . . . . . . . . . . . . . . . . . . . . . . . . .
12.6.4.1 AppendToFile . . . . . . . . . . . . . . . . . .
12.6.4.2 EnableCategory . . . . . . . . . . . . . . . . .
12.6.4.3 OutputFile . . . . . . . . . . . . . . . . . . . .
12.6.4.4 Verbosity . . . . . . . . . . . . . . . . . . . . .
12.6.5 Watchdog . . . . . . . . . . . . . . . . . . . . . . . . .
12.6.5.1 Scheduling . . . . . . . . . . . . . . . . . . . .
12.6.5.1.1 Class . . . . . . . . . . . . . . . . . .
12.6.5.1.2 Priority . . . . . . . . . . . . . . . .
12.6.5.1.2.1 priority_kind . . . . . . . . .
12.7 DDSI2EService . . . . . . . . . . . . . . . . . . . . . . . . . .
12.7.1 name . . . . . . . . . . . . . . . . . . . . . . . . . . .
12.7.2 Channels . . . . . . . . . . . . . . . . . . . . . . . . .
12.7.2.1 Channel . . . . . . . . . . . . . . . . . . . . .
12.7.2.1.1 Name . . . . . . . . . . . . . . . . .
12.7.2.1.2 TransportPriority . . . . . . . . . . .
12.7.2.1.3 AuxiliaryBandwidthLimit . . . . . . .
12.7.2.1.4 DataBandwidthLimit . . . . . . . . .
12.7.2.1.5 DiffServField . . . . . . . . . . . . .
12.7.2.1.6 QueueSize . . . . . . . . . . . . . . .
12.7.2.1.7 Resolution . . . . . . . . . . . . . . .
12.7.3 Compatibility . . . . . . . . . . . . . . . . . . . . . . .
12.7.3.1 AckNackNumbitsEmptySet . . . . . . . . . . .
12.7.3.2 ArrivalOfDataAssertsPpAndEpLiveliness . . .
12.7.3.3 AssumeRtiHasPmdEndpoints . . . . . . . . . .
12.7.3.4 ExplicitlyPublishQosSetToDefault . . . . . . .
12.7.3.5 ManySocketsMode . . . . . . . . . . . . . . .
12.7.3.6 RespondToRtiInitZeroAckWithInvalidHeartbeat
12.7.3.7 StandardsConformance . . . . . . . . . . . . .
12.7.4 Discovery . . . . . . . . . . . . . . . . . . . . . . . . .
12.7.4.1 AdvertiseBuiltinTopicWriters . . . . . . . . . .
12.7.4.2 DSGracePeriod . . . . . . . . . . . . . . . . .
12.7.4.3 DefaultMulticastAddress . . . . . . . . . . . .
12.7.4.4 DomainId . . . . . . . . . . . . . . . . . . . .
12.7.4.5 GenerateBuiltinTopics . . . . . . . . . . . . . .
12.7.4.6 LocalDiscoveryPartition . . . . . . . . . . . . .
12.7.4.7 MaxAutoParticipantIndex . . . . . . . . . . . .
12.7.4.8 ParticipantIndex . . . . . . . . . . . . . . . . .
12.7.4.9 Peers . . . . . . . . . . . . . . . . . . . . . . .
12.7.4.9.1 Group . . . . . . . . . . . . . . . . .
12.7.4.9.1.1 Peer . . . . . . . . . . . . . .
12.7.4.9.1.2 Address . . . . . . . . . . . .
12.7.4.9.2 Peer . . . . . . . . . . . . . . . . . .
12.7.4.9.2.1 Address . . . . . . . . . . . .
12.7.4.10 Ports . . . . . . . . . . . . . . . . . . . . . . .
12.7.4.10.1 Base . . . . . . . . . . . . . . . . . .
12.7.4.10.2 DomainGain . . . . . . . . . . . . . .
12.7.4.10.3 MulticastDataOffset . . . . . . . . . .
12.7.4.10.4 MulticastMetaOffset . . . . . . . . .
12.7.4.10.5 ParticipantGain . . . . . . . . . . . .
12.7.4.10.6 UnicastDataOffset . . . . . . . . . . .
12.7.4.10.7 UnicastMetaOffset . . . . . . . . . .

215
215
216
216
216
216
216
217
217
217
217
218
218
218
218
218
219
219
219
219
219
220
220
220
220
221
221
221
222
222
222
222
223
223
223
224
224
224
224
224
225
225
225
225
226
226
226
226
226
226
227
227
227
227
227
228
228
228

12.7.4.11 SPDPInterval . . . . . . . . . . . . . . .
12.7.4.12 SPDPMulticastAddress . . . . . . . . . .
12.7.5 General . . . . . . . . . . . . . . . . . . . . . . .
12.7.5.1 AllowMulticast . . . . . . . . . . . . . .
12.7.5.2 CoexistWithNativeNetworking . . . . . .
12.7.5.3 DontRoute . . . . . . . . . . . . . . . . .
12.7.5.4 EnableMulticastLoopback . . . . . . . . .
12.7.5.5 ExternalNetworkAddress . . . . . . . . .
12.7.5.6 ExternalNetworkMask . . . . . . . . . . .
12.7.5.7 FragmentSize . . . . . . . . . . . . . . .
12.7.5.8 MaxMessageSize . . . . . . . . . . . . .
12.7.5.9 MulticastRecvNetworkInterfaceAddresses
12.7.5.10 MulticastTimeToLive . . . . . . . . . . .
12.7.5.11 NetworkInterfaceAddress . . . . . . . . .
12.7.5.12 StartupModeCoversTransient . . . . . . .
12.7.5.13 StartupModeDuration . . . . . . . . . . .
12.7.5.14 UseIPv6 . . . . . . . . . . . . . . . . . .
12.7.6 Internal . . . . . . . . . . . . . . . . . . . . . . .
12.7.6.1 AccelerateRexmitBlockSize . . . . . . . .
12.7.6.2 AggressiveKeepLastWhc . . . . . . . . .
12.7.6.3 AggressiveKeepLastWhc . . . . . . . . .
12.7.6.4 AssumeMulticastCapable . . . . . . . . .
12.7.6.5 AutoReschedNackDelay . . . . . . . . . .
12.7.6.6 AuxiliaryBandwidthLimit . . . . . . . . .
12.7.6.7 BuiltinEndpointSet . . . . . . . . . . . .
12.7.6.8 ConservativeBuiltinReaderStartup . . . .
12.7.6.9 ControlTopic . . . . . . . . . . . . . . . .
12.7.6.9.1 enable . . . . . . . . . . . . . .
12.7.6.9.2 initialreset . . . . . . . . . . . .
12.7.6.9.3 Deaf . . . . . . . . . . . . . . .
12.7.6.9.4 Mute . . . . . . . . . . . . . . .
12.7.6.10 DDSI2DirectMaxThreads . . . . . . . . .
12.7.6.11 DefragReliableMaxSamples . . . . . . . .
12.7.6.12 DefragUnreliableMaxSamples . . . . . .
12.7.6.13 DeliveryQueueMaxSamples . . . . . . . .
12.7.6.14 ForwardAllMessages . . . . . . . . . . .
12.7.6.15 ForwardRemoteData . . . . . . . . . . . .
12.7.6.16 GenerateKeyhash . . . . . . . . . . . . .
12.7.6.17 HeartbeatInterval . . . . . . . . . . . . .
12.7.6.17.1 max . . . . . . . . . . . . . . .
12.7.6.17.2 min . . . . . . . . . . . . . . .
12.7.6.17.3 minsched . . . . . . . . . . . .
12.7.6.18 LateAckMode . . . . . . . . . . . . . . .
12.7.6.19 LeaseDuration . . . . . . . . . . . . . . .
12.7.6.20 LegacyFragmentation . . . . . . . . . . .
12.7.6.21 LogStackTraces . . . . . . . . . . . . . .
12.7.6.22 MaxParticipants . . . . . . . . . . . . . .
12.7.6.23 MaxQueuedRexmitBytes . . . . . . . . .
12.7.6.24 MaxQueuedRexmitMessages . . . . . . .
12.7.6.25 MaxSampleSize . . . . . . . . . . . . . .
12.7.6.26 MeasureHbToAckLatency . . . . . . . . .
12.7.6.27 MinimumSocketReceiveBufferSize . . . .
12.7.6.28 MinimumSocketSendBufferSize . . . . .
12.7.6.29 MirrorRemoteEntities . . . . . . . . . . .
12.7.6.30 MonitorPort . . . . . . . . . . . . . . . .
12.7.6.31 NackDelay . . . . . . . . . . . . . . . . .
12.7.6.32 PreEmptiveAckDelay . . . . . . . . . . .
12.7.6.33 PrimaryReorderMaxSamples . . . . . . .

228
228
229
229
229
229
230
230
230
230
231
231
231
232
232
232
232
233
233
233
233
234
234
234
234
235
235
235
235
236
236
236
236
237
237
237
237
237
238
238
238
238
239
239
239
239
240
240
240
240
240
241
241
241
241
242
242
242

12.7.6.34 PrioritizeRetransmit . . . . . . . . . .
12.7.6.35 RediscoveryBlacklistDuration . . . . .
12.7.6.35.1 enforce . . . . . . . . . . . .
12.7.6.36 ResponsivenessTimeout . . . . . . . .
12.7.6.37 RetransmitMerging . . . . . . . . . .
12.7.6.38 RetransmitMergingPeriod . . . . . . .
12.7.6.39 RetryOnRejectBestEffort . . . . . . .
12.7.6.40 RetryOnRejectDuration . . . . . . . .
12.7.6.41 SPDPResponseMaxDelay . . . . . . .
12.7.6.42 ScheduleTimeRounding . . . . . . . .
12.7.6.43 SecondaryReorderMaxSamples . . . .
12.7.6.44 SquashParticipants . . . . . . . . . . .
12.7.6.45 SuppressSPDPMulticast . . . . . . . .
12.7.6.46 SynchronousDeliveryLatencyBound .
12.7.6.47 SynchronousDeliveryPriorityThreshold
12.7.6.48 Test . . . . . . . . . . . . . . . . . .
12.7.6.48.1 XmitLossiness . . . . . . . .
12.7.6.49 UnicastResponseToSPDPMessages . .
12.7.6.50 UseMulticastIfMreqn . . . . . . . . .
12.7.6.51 Watermarks . . . . . . . . . . . . . .
12.7.6.51.1 WhcAdaptive . . . . . . . .
12.7.6.51.2 WhcHigh . . . . . . . . . .
12.7.6.51.3 WhcHighInit . . . . . . . .
12.7.6.51.4 WhcLow . . . . . . . . . . .
12.7.6.52 WriterLingerDuration . . . . . . . . .
12.7.7 Partitioning . . . . . . . . . . . . . . . . . . .
12.7.7.1 IgnoredPartitions . . . . . . . . . . .
12.7.7.1.1 IgnoredPartition . . . . . . .
12.7.7.1.1.1 DCPSPartitionTopic .
12.7.7.2 NetworkPartitions . . . . . . . . . . .
12.7.7.2.1 NetworkPartition . . . . . .
12.7.7.2.1.1 Address . . . . . . .
12.7.7.2.1.2 Connected . . . . . .
12.7.7.2.1.3 Name . . . . . . . .
12.7.7.2.1.4 SecurityProfile . . .
12.7.7.3 PartitionMappings . . . . . . . . . . .
12.7.7.3.1 PartitionMapping . . . . . .
12.7.7.3.1.1 DCPSPartitionTopic .
12.7.7.3.1.2 NetworkPartition . .
12.7.8 SSL . . . . . . . . . . . . . . . . . . . . . . .
12.7.8.1 CertificateVerification . . . . . . . . .
12.7.8.2 Ciphers . . . . . . . . . . . . . . . . .
12.7.8.3 Enable . . . . . . . . . . . . . . . . .
12.7.8.4 EntropyFile . . . . . . . . . . . . . .
12.7.8.5 KeyPassphrase . . . . . . . . . . . . .
12.7.8.6 KeystoreFile . . . . . . . . . . . . . .
12.7.8.7 SelfSignedCertificates . . . . . . . . .
12.7.8.8 VerifyClient . . . . . . . . . . . . . .
12.7.9 Security . . . . . . . . . . . . . . . . . . . . .
12.7.9.1 SecurityProfile . . . . . . . . . . . . .
12.7.9.1.1 Cipher . . . . . . . . . . . .
12.7.9.1.2 CipherKey . . . . . . . . . .
12.7.9.1.3 Name . . . . . . . . . . . .
12.7.10 Sizing . . . . . . . . . . . . . . . . . . . . . .
12.7.10.1 EndpointsInSystem . . . . . . . . . .
12.7.10.2 EndpointsInSystem . . . . . . . . . .
12.7.10.3 LocalEndpoints . . . . . . . . . . . .
12.7.10.4 NetworkQueueSize . . . . . . . . . .

242
243
243
243
243
244
244
244
245
245
245
245
246
246
246
246
247
247
247
247
247
248
248
248
248
249
249
249
249
249
249
250
250
250
250
250
250
251
251
251
251
251
252
252
252
252
252
253
253
253
253
254
254
254
254
255
255
255

12.7.10.5 NetworkQueueSize . . . . . . . . . . . . . . .
12.7.10.6 ReceiveBufferChunkSize . . . . . . . . . . . .
12.7.10.7 ReceiveBufferChunkSize . . . . . . . . . . . .
12.7.10.8 Watermarks . . . . . . . . . . . . . . . . . . .
12.7.10.8.1 WhcAdaptive . . . . . . . . . . . . .
12.7.10.8.2 WhcHigh . . . . . . . . . . . . . . .
12.7.10.8.3 WhcHighInit . . . . . . . . . . . . .
12.7.10.8.4 WhcLow . . . . . . . . . . . . . . . .
12.7.11 TCP . . . . . . . . . . . . . . . . . . . . . . . . . . . .
12.7.11.1 Enable . . . . . . . . . . . . . . . . . . . . . .
12.7.11.2 NoDelay . . . . . . . . . . . . . . . . . . . . .
12.7.11.3 Port . . . . . . . . . . . . . . . . . . . . . . .
12.7.11.4 ReadTimeout . . . . . . . . . . . . . . . . . .
12.7.11.5 WriteTimeout . . . . . . . . . . . . . . . . . .
12.7.12 ThreadPool . . . . . . . . . . . . . . . . . . . . . . . .
12.7.12.1 Enable . . . . . . . . . . . . . . . . . . . . . .
12.7.12.2 ThreadMax . . . . . . . . . . . . . . . . . . .
12.7.12.3 Threads . . . . . . . . . . . . . . . . . . . . .
12.7.13 Threads . . . . . . . . . . . . . . . . . . . . . . . . . .
12.7.13.1 Thread . . . . . . . . . . . . . . . . . . . . . .
12.7.13.1.1 Name . . . . . . . . . . . . . . . . .
12.7.13.1.2 Scheduling . . . . . . . . . . . . . .
12.7.13.1.2.1 Class . . . . . . . . . . . . . .
12.7.13.1.2.2 Priority . . . . . . . . . . . .
12.7.13.1.3 StackSize . . . . . . . . . . . . . . .
12.7.14 Tracing . . . . . . . . . . . . . . . . . . . . . . . . . .
12.7.14.1 AppendToFile . . . . . . . . . . . . . . . . . .
12.7.14.2 EnableCategory . . . . . . . . . . . . . . . . .
12.7.14.3 OutputFile . . . . . . . . . . . . . . . . . . . .
12.7.14.4 PacketCaptureFile . . . . . . . . . . . . . . . .
12.7.14.5 Timestamps . . . . . . . . . . . . . . . . . . .
12.7.14.5.1 absolute . . . . . . . . . . . . . . . .
12.7.14.6 Verbosity . . . . . . . . . . . . . . . . . . . . .
12.7.15 Watchdog . . . . . . . . . . . . . . . . . . . . . . . . .
12.7.15.1 Scheduling . . . . . . . . . . . . . . . . . . . .
12.7.15.1.1 Class . . . . . . . . . . . . . . . . . .
12.7.15.1.2 Priority . . . . . . . . . . . . . . . .
12.7.15.1.2.1 priority_kind . . . . . . . . .
12.8 DDSI2Service . . . . . . . . . . . . . . . . . . . . . . . . . . .
12.8.1 name . . . . . . . . . . . . . . . . . . . . . . . . . . .
12.8.2 Compatibility . . . . . . . . . . . . . . . . . . . . . . .
12.8.2.1 AckNackNumbitsEmptySet . . . . . . . . . . .
12.8.2.2 ArrivalOfDataAssertsPpAndEpLiveliness . . .
12.8.2.3 AssumeRtiHasPmdEndpoints . . . . . . . . . .
12.8.2.4 ExplicitlyPublishQosSetToDefault . . . . . . .
12.8.2.5 ManySocketsMode . . . . . . . . . . . . . . .
12.8.2.6 RespondToRtiInitZeroAckWithInvalidHeartbeat
12.8.2.7 StandardsConformance . . . . . . . . . . . . .
12.8.3 Discovery . . . . . . . . . . . . . . . . . . . . . . . . .
12.8.3.1 AdvertiseBuiltinTopicWriters . . . . . . . . . .
12.8.3.2 DSGracePeriod . . . . . . . . . . . . . . . . .
12.8.3.3 DefaultMulticastAddress . . . . . . . . . . . .
12.8.3.4 DomainId . . . . . . . . . . . . . . . . . . . .
12.8.3.5 GenerateBuiltinTopics . . . . . . . . . . . . . .
12.8.3.6 LocalDiscoveryPartition . . . . . . . . . . . . .
12.8.3.7 MaxAutoParticipantIndex . . . . . . . . . . . .
12.8.3.8 ParticipantIndex . . . . . . . . . . . . . . . . .
12.8.3.9 Peers . . . . . . . . . . . . . . . . . . . . . . .

255
256
256
256
256
257
257
257
257
257
258
258
258
258
259
259
259
259
259
259
260
260
260
260
261
261
261
261
262
262
262
262
263
263
263
264
264
264
264
264
265
265
265
265
266
266
266
266
267
267
267
267
268
268
268
268
268
269

12.8.3.9.1 Group . . . . . . . . . . . . . .
12.8.3.9.1.1 Peer . . . . . . . . . . .
12.8.3.9.1.2 Address . . . . . . . . .
12.8.3.9.2 Peer . . . . . . . . . . . . . . .
12.8.3.9.2.1 Address . . . . . . . . .
12.8.3.10 Ports . . . . . . . . . . . . . . . . . . . .
12.8.3.10.1 Base . . . . . . . . . . . . . . .
12.8.3.10.2 DomainGain . . . . . . . . . . .
12.8.3.10.3 MulticastDataOffset . . . . . . .
12.8.3.10.4 MulticastMetaOffset . . . . . .
12.8.3.10.5 ParticipantGain . . . . . . . . .
12.8.3.10.6 UnicastDataOffset . . . . . . . .
12.8.3.10.7 UnicastMetaOffset . . . . . . .
12.8.3.11 SPDPInterval . . . . . . . . . . . . . . .
12.8.3.12 SPDPMulticastAddress . . . . . . . . . .
12.8.4 General . . . . . . . . . . . . . . . . . . . . . . .
12.8.4.1 AllowMulticast . . . . . . . . . . . . . .
12.8.4.2 CoexistWithNativeNetworking . . . . . .
12.8.4.3 DontRoute . . . . . . . . . . . . . . . . .
12.8.4.4 EnableMulticastLoopback . . . . . . . . .
12.8.4.5 ExternalNetworkAddress . . . . . . . . .
12.8.4.6 ExternalNetworkMask . . . . . . . . . . .
12.8.4.7 FragmentSize . . . . . . . . . . . . . . .
12.8.4.8 MaxMessageSize . . . . . . . . . . . . .
12.8.4.9 MulticastRecvNetworkInterfaceAddresses
12.8.4.10 MulticastTimeToLive . . . . . . . . . . .
12.8.4.11 NetworkInterfaceAddress . . . . . . . . .
12.8.4.12 StartupModeCoversTransient . . . . . . .
12.8.4.13 StartupModeDuration . . . . . . . . . . .
12.8.4.14 UseIPv6 . . . . . . . . . . . . . . . . . .
12.8.5 Internal . . . . . . . . . . . . . . . . . . . . . . .
12.8.5.1 AccelerateRexmitBlockSize . . . . . . . .
12.8.5.2 AggressiveKeepLastWhc . . . . . . . . .
12.8.5.3 AssumeMulticastCapable . . . . . . . . .
12.8.5.4 AutoReschedNackDelay . . . . . . . . . .
12.8.5.5 BuiltinEndpointSet . . . . . . . . . . . .
12.8.5.6 ConservativeBuiltinReaderStartup . . . .
12.8.5.7 ControlTopic . . . . . . . . . . . . . . . .
12.8.5.7.1 enable . . . . . . . . . . . . . .
12.8.5.7.2 Deaf . . . . . . . . . . . . . . .
12.8.5.7.3 Mute . . . . . . . . . . . . . . .
12.8.5.8 DDSI2DirectMaxThreads . . . . . . . . .
12.8.5.9 DefragReliableMaxSamples . . . . . . . .
12.8.5.10 DefragUnreliableMaxSamples . . . . . .
12.8.5.11 DeliveryQueueMaxSamples . . . . . . . .
12.8.5.12 ForwardAllMessages . . . . . . . . . . .
12.8.5.13 ForwardRemoteData . . . . . . . . . . . .
12.8.5.14 GenerateKeyhash . . . . . . . . . . . . .
12.8.5.15 HeartbeatInterval . . . . . . . . . . . . .
12.8.5.15.1 max . . . . . . . . . . . . . . .
12.8.5.15.2 min . . . . . . . . . . . . . . .
12.8.5.15.3 minsched . . . . . . . . . . . .
12.8.5.16 LateAckMode . . . . . . . . . . . . . . .
12.8.5.17 LeaseDuration . . . . . . . . . . . . . . .
12.8.5.18 LegacyFragmentation . . . . . . . . . . .
12.8.5.19 LogStackTraces . . . . . . . . . . . . . .
12.8.5.20 MaxParticipants . . . . . . . . . . . . . .
12.8.5.21 MaxQueuedRexmitBytes . . . . . . . . .

269
269
269
270
270
270
270
270
271
271
271
271
271
272
272
272
272
273
273
273
273
273
274
274
274
275
275
275
275
276
276
276
277
277
277
277
278
278
278
278
279
279
279
279
279
280
280
280
280
281
281
281
281
282
282
282
282
282

12.8.5.22 MaxQueuedRexmitMessages . . . . .
12.8.5.23 MaxSampleSize . . . . . . . . . . . .
12.8.5.24 MeasureHbToAckLatency . . . . . . .
12.8.5.25 MinimumSocketReceiveBufferSize . .
12.8.5.26 MinimumSocketSendBufferSize . . .
12.8.5.27 MirrorRemoteEntities . . . . . . . . .
12.8.5.28 MonitorPort . . . . . . . . . . . . . .
12.8.5.29 NackDelay . . . . . . . . . . . . . . .
12.8.5.30 PreEmptiveAckDelay . . . . . . . . .
12.8.5.31 PrimaryReorderMaxSamples . . . . .
12.8.5.32 PrioritizeRetransmit . . . . . . . . . .
12.8.5.33 RediscoveryBlacklistDuration . . . . .
12.8.5.33.1 enforce . . . . . . . . . . . .
12.8.5.34 ResponsivenessTimeout . . . . . . . .
12.8.5.35 RetransmitMerging . . . . . . . . . .
12.8.5.36 RetransmitMergingPeriod . . . . . . .
12.8.5.37 RetryOnRejectBestEffort . . . . . . .
12.8.5.38 RetryOnRejectDuration . . . . . . . .
12.8.5.39 SPDPResponseMaxDelay . . . . . . .
12.8.5.40 ScheduleTimeRounding . . . . . . . .
12.8.5.41 SecondaryReorderMaxSamples . . . .
12.8.5.42 SquashParticipants . . . . . . . . . . .
12.8.5.43 SuppressSPDPMulticast . . . . . . . .
12.8.5.44 SynchronousDeliveryLatencyBound .
12.8.5.45 SynchronousDeliveryPriorityThreshold
12.8.5.46 Test . . . . . . . . . . . . . . . . . .
12.8.5.46.1 XmitLossiness . . . . . . . .
12.8.5.47 UnicastResponseToSPDPMessages . .
12.8.5.48 UseMulticastIfMreqn . . . . . . . . .
12.8.5.49 Watermarks . . . . . . . . . . . . . .
12.8.5.49.1 WhcAdaptive . . . . . . . .
12.8.5.49.2 WhcHigh . . . . . . . . . .
12.8.5.49.3 WhcHighInit . . . . . . . .
12.8.5.49.4 WhcLow . . . . . . . . . . .
12.8.5.50 WriterLingerDuration . . . . . . . . .
12.8.6 SSL . . . . . . . . . . . . . . . . . . . . . . .
12.8.6.1 CertificateVerification . . . . . . . . .
12.8.6.2 Ciphers . . . . . . . . . . . . . . . . .
12.8.6.3 Enable . . . . . . . . . . . . . . . . .
12.8.6.4 EntropyFile . . . . . . . . . . . . . .
12.8.6.5 KeyPassphrase . . . . . . . . . . . . .
12.8.6.6 KeystoreFile . . . . . . . . . . . . . .
12.8.6.7 SelfSignedCertificates . . . . . . . . .
12.8.6.8 VerifyClient . . . . . . . . . . . . . .
12.8.7 Sizing . . . . . . . . . . . . . . . . . . . . . .
12.8.7.1 EndpointsInSystem . . . . . . . . . .
12.8.7.2 LocalEndpoints . . . . . . . . . . . .
12.8.7.3 NetworkQueueSize . . . . . . . . . .
12.8.7.4 ReceiveBufferChunkSize . . . . . . .
12.8.7.5 Watermarks . . . . . . . . . . . . . .
12.8.7.5.1 WhcAdaptive . . . . . . . .
12.8.7.5.2 WhcHigh . . . . . . . . . .
12.8.7.5.3 WhcHighInit . . . . . . . .
12.8.7.5.4 WhcLow . . . . . . . . . . .
12.8.8 TCP . . . . . . . . . . . . . . . . . . . . . . .
12.8.8.1 Enable . . . . . . . . . . . . . . . . .
12.8.8.2 NoDelay . . . . . . . . . . . . . . . .
12.8.8.3 Port . . . . . . . . . . . . . . . . . .

283
283
283
283
284
284
284
284
285
285
285
285
286
286
286
286
287
287
287
287
288
288
288
288
289
289
289
289
289
290
290
290
290
291
291
291
291
291
292
292
292
292
292
293
293
293
293
293
294
294
294
294
294
295
295
295
295
296

12.8.8.4 ReadTimeout . . . . . . . . .
12.8.8.5 WriteTimeout . . . . . . . . .
12.8.9 ThreadPool . . . . . . . . . . . . . . .
12.8.9.1 Enable . . . . . . . . . . . . .
12.8.9.2 ThreadMax . . . . . . . . . .
12.8.9.3 Threads . . . . . . . . . . . .
12.8.10 Threads . . . . . . . . . . . . . . . . .
12.8.10.1 Thread . . . . . . . . . . . . .
12.8.10.1.1 Name . . . . . . . .
12.8.10.1.2 Scheduling . . . . .
12.8.10.1.2.1 Class . . . . .
12.8.10.1.2.2 Priority . . .
12.8.10.1.3 StackSize . . . . . .
12.8.11 Tracing . . . . . . . . . . . . . . . . .
12.8.11.1 AppendToFile . . . . . . . . .
12.8.11.2 EnableCategory . . . . . . . .
12.8.11.3 OutputFile . . . . . . . . . . .
12.8.11.4 PacketCaptureFile . . . . . . .
12.8.11.5 Timestamps . . . . . . . . . .
12.8.11.5.1 absolute . . . . . . .
12.8.11.6 Verbosity . . . . . . . . . . . .
12.8.12 Watchdog . . . . . . . . . . . . . . . .
12.8.12.1 Scheduling . . . . . . . . . . .
12.8.12.1.1 Class . . . . . . . . .
12.8.12.1.2 Priority . . . . . . .
12.8.12.1.2.1 priority_kind
12.9 TunerService . . . . . . . . . . . . . . . . . . .
12.9.1 name . . . . . . . . . . . . . . . . . .
12.9.2 Watchdog . . . . . . . . . . . . . . . .
12.9.2.1 Scheduling . . . . . . . . . . .
12.9.2.1.1 Priority . . . . . . .
12.9.2.1.1.1 priority_kind
12.9.2.1.2 Class . . . . . . . . .
12.9.3 Server . . . . . . . . . . . . . . . . . .
12.9.3.1 PortNr . . . . . . . . . . . . .
12.9.3.2 Backlog . . . . . . . . . . . .
12.9.3.3 Verbosity . . . . . . . . . . . .
12.9.4 Client . . . . . . . . . . . . . . . . . .
12.9.4.1 MaxClients . . . . . . . . . .
12.9.4.2 MaxThreadsPerClient . . . . .
12.9.4.3 LeasePeriod . . . . . . . . . .
12.9.4.4 Scheduling . . . . . . . . . . .
12.9.4.4.1 Priority . . . . . . .
12.9.4.4.1.1 priority_kind
12.9.4.4.2 Class . . . . . . . . .
12.9.5 GarbageCollector . . . . . . . . . . . .
12.9.5.1 Scheduling . . . . . . . . . . .
12.9.5.1.1 Priority . . . . . . .
12.9.5.1.1.1 priority_kind
12.9.5.1.2 Class . . . . . . . . .
12.9.6 LeaseManagement . . . . . . . . . . .
12.9.6.1 Scheduling . . . . . . . . . . .
12.9.6.1.1 Priority . . . . . . .
12.9.6.1.1.1 priority_kind
12.9.6.1.2 Class . . . . . . . . .
12.10 DbmsConnectService . . . . . . . . . . . . . .
12.10.1 name . . . . . . . . . . . . . . . . . .
12.10.2 Watchdog . . . . . . . . . . . . . . . .

296
296
296
297
297
297
297
297
297
298
298
298
298
299
299
299
300
300
300
300
300
301
301
301
302
302
302
302
302
303
303
303
303
303
304
304
304
304
304
305
305
305
305
306
306
306
306
306
307
307
307
307
307
308
308
308
308
309

12.10.2.1 Scheduling . . . . . . . . . . . . . .
12.10.2.1.1 Priority . . . . . . . . . .
12.10.2.1.1.1 priority_kind . . .
12.10.2.1.2 Class . . . . . . . . . . . .
12.10.3 DdsToDbms . . . . . . . . . . . . . . . . . .
12.10.3.1 replication_mode . . . . . . . . . .
12.10.3.2 NameSpace . . . . . . . . . . . . .
12.10.3.2.1 name . . . . . . . . . . . .
12.10.3.2.2 odbc . . . . . . . . . . . .
12.10.3.2.3 partition . . . . . . . . . .
12.10.3.2.4 topic . . . . . . . . . . . .
12.10.3.2.5 update_frequency . . . . .
12.10.3.2.6 dsn . . . . . . . . . . . . .
12.10.3.2.7 usr . . . . . . . . . . . . .
12.10.3.2.8 pwd . . . . . . . . . . . .
12.10.3.2.9 schema . . . . . . . . . . .
12.10.3.2.10 catalog . . . . . . . . . .
12.10.3.2.11 replication_mode . . . .
12.10.3.2.12 Mapping . . . . . . . . .
12.10.3.2.12.1 topic . . . . . . . .
12.10.3.2.12.2 table . . . . . . . .
12.10.3.2.12.3 query . . . . . . .
12.10.3.2.12.4 filter . . . . . . . .
12.10.4 DbmsToDds . . . . . . . . . . . . . . . . . .
12.10.4.1 publish_initial_data . . . . . . . . .
12.10.4.2 event_table_policy . . . . . . . . . .
12.10.4.3 trigger_policy . . . . . . . . . . . .
12.10.4.4 replication_user . . . . . . . . . . .
12.10.4.5 NameSpace . . . . . . . . . . . . .
12.10.4.5.1 name . . . . . . . . . . . .
12.10.4.5.2 odbc . . . . . . . . . . . .
12.10.4.5.3 partition . . . . . . . . . .
12.10.4.5.4 table . . . . . . . . . . . .
12.10.4.5.5 update_frequency . . . . .
12.10.4.5.6 dsn . . . . . . . . . . . . .
12.10.4.5.7 usr . . . . . . . . . . . . .
12.10.4.5.8 pwd . . . . . . . . . . . .
12.10.4.5.9 publish_initial_data . . . .
12.10.4.5.10 force_key_equality . . . .
12.10.4.5.11 event_table_policy . . . .
12.10.4.5.12 trigger_policy . . . . . . .
12.10.4.5.13 schema . . . . . . . . . . .
12.10.4.5.14 catalog . . . . . . . . . . .
12.10.4.5.15 replication_user . . . . . .
12.10.4.5.16 Mapping . . . . . . . . . .
12.10.4.5.16.1 table . . . . . . . .
12.10.4.5.16.2 topic . . . . . . . .
12.10.4.5.16.3 query . . . . . . .
12.10.4.5.16.4 publish_initial_data
12.10.4.5.16.5 force_key_equality
12.10.4.5.16.6 event_table_policy
12.10.4.5.16.7 trigger_policy . . .
12.10.5 Tracing . . . . . . . . . . . . . . . . . . . .
12.10.5.1 OutputFile . . . . . . . . . . . . . .
12.10.5.2 Timestamps . . . . . . . . . . . . .
12.10.5.2.1 absolute . . . . . . . . . .
12.10.5.3 Verbosity . . . . . . . . . . . . . . .
12.11 RnRService . . . . . . . . . . . . . . . . . . . . . . .

309
309
309
309
310
310
310
310
311
311
311
311
312
312
312
312
312
313
313
313
314
314
314
314
314
315
315
316
316
316
316
317
317
317
317
318
318
318
318
319
319
320
320
320
320
321
321
321
321
321
322
322
323
323
323
323
324
324

12.11.1 name . . . . . . . . . . . . . . . . . .
12.11.2 Watchdog . . . . . . . . . . . . . . . .
12.11.2.1 Scheduling . . . . . . . . . . .
12.11.2.1.1 Priority . . . . . . .
12.11.2.1.1.1 priority_kind
12.11.2.1.2 Class . . . . . . . . .
12.11.3 Storage . . . . . . . . . . . . . . . . .
12.11.3.1 name . . . . . . . . . . . . . .
12.11.3.2 rr_storageAttrXML . . . . . .
12.11.3.2.1 filename . . . . . . .
12.11.3.2.2 MaxFileSize . . . . .
12.11.3.3 rr_storageAttrCDR . . . . . .
12.11.3.3.1 filename . . . . . . .
12.11.3.3.2 MaxFileSize . . . . .
12.11.3.4 Statistics . . . . . . . . . . . .
12.11.3.4.1 enabled . . . . . . .
12.11.3.4.2 publish_interval . . .
12.11.3.4.3 reset . . . . . . . . .
12.11.4 Tracing . . . . . . . . . . . . . . . . .
12.11.4.1 OutputFile . . . . . . . . . . .
12.11.4.2 AppendToFile . . . . . . . . .
12.11.4.3 Verbosity . . . . . . . . . . . .
12.11.4.4 EnableCategory . . . . . . . .
12.12 Agent . . . . . . . . . . . . . . . . . . . . . . .
12.12.1 name . . . . . . . . . . . . . . . . . .
12.12.2 Tracing . . . . . . . . . . . . . . . . .
12.12.2.1 EnableCategory . . . . . . . .
12.12.2.2 Verbosity . . . . . . . . . . . .
12.12.2.3 OutputFile . . . . . . . . . . .
12.12.2.4 AppendToFile . . . . . . . . .
12.12.3 Watchdog . . . . . . . . . . . . . . . .
12.12.3.1 Scheduling . . . . . . . . . . .
12.12.3.1.1 Class . . . . . . . . .
12.12.3.1.2 Priority . . . . . . .
12.12.3.1.2.1 priority_kind
13 Example Reference Systems
13.1 Zero Configuration System . . . . . . . . . .
13.2 Single Node System . . . . . . . . . . . . .
13.3 Medium Size Static (Near) Real-time System
13.3.1 High Volumes . . . . . . . . . . . .
13.3.2 Low Latencies . . . . . . . . . . . .
13.3.3 Responsiveness . . . . . . . . . . .
13.3.4 Topology Discovery . . . . . . . . .

14 Logrotate
14.1 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
14.2 Configuration file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
14.3 Example configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
15 Contacts & Notices
15.1 Contacts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
15.2 Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .


1 Preface
1.1 About the Deployment Guide
The Vortex OpenSplice Deployment Guide is intended to provide a complete reference on how to configure the
OpenSplice service and tune it as required.
The Deployment Guide is included with the Vortex OpenSplice Documentation Set.
The Deployment Guide is intended to be used after reading and following the instructions in the Vortex OpenSplice
Getting Started Guide.

1.2 Intended Audience
The Deployment Guide is intended to be used by anyone who wishes to use and configure Vortex OpenSplice.

1.3 Organisation
The Overview gives a general description of the Vortex OpenSplice architecture.
This is followed by Service Descriptions, which explain how Vortex OpenSplice provides integration of real-time
DDS and the non-/near-real-time enterprise DBMS domains.
The Tools section introduces the OpenSplice system management tools.
Full details of the configuration elements and attributes of all Vortex OpenSplice services are given in the Configuration section.

1.4 Conventions
The icons shown below are used in ADLINK product documentation to help readers to quickly identify information relevant to their specific use of Vortex OpenSplice.

Icon    Meaning
Item of special significance or where caution needs to be taken.
Item contains helpful hint or special information.
Information applies to Windows (e.g. XP, 2003, Windows 7) only.
Information applies to Unix-based systems (e.g. Solaris) only.
Information applies to Linux-based systems (e.g. Ubuntu) only.
C language specific.
C++ language specific.
C# language specific.
Java language specific.

2 Overview
This chapter explains the Vortex OpenSplice middleware from a configuration perspective. It shows the different
components running on a single node and briefly explains the role of each entity. Furthermore, it defines a
reference system that will be used throughout the rest of the document as an example.

2.1 Vortex OpenSplice Architecture
Vortex OpenSplice is highly configurable, even allowing the architectural structure of the DDS middleware to be
chosen by the user at deployment time.
Vortex OpenSplice can be configured to run using a so-called ‘federated’ shared memory architecture, where both
the DDS related administration (including the optional pluggable services) and DDS applications interface directly
with shared memory.
Alternatively, Vortex OpenSplice also supports a so-called ‘standalone’ single process architecture, where one or
more DDS applications, together with the OpenSplice administration and services, can all be grouped into a single
operating system process.
Both deployment modes support a configurable and extensible set of services, providing functionality such as:
• networking - providing QoS-driven real-time networking based on multiple reliable multicast ‘channels’
• durability - providing fault-tolerant storage for both real-time state data as well as persistent settings
• remote control and monitoring SOAP service - providing remote web-based access using the SOAP protocol
from various Vortex OpenSplice tools
• dbms service - providing a connection between the real-time and the enterprise domain by bridging data
from DDS to DBMS and vice versa
The Vortex OpenSplice middleware can be easily configured, on the fly, using its pluggable service architecture:
the services that are needed can be specified together with their configuration for the particular application domain
(including networking parameters and durability levels, for example).
There are advantages to both the single process and shared memory deployment architectures, so the most appropriate deployment choice depends on the user’s exact requirements and DDS scenario.

2.1.1 Single Process architecture
This deployment allows the DDS applications and Vortex OpenSplice administration to be contained together
within one single operating system process. This ‘standalone’ single process deployment option is most useful
in environments where shared memory is unavailable or undesirable. As dynamic heap memory is utilized in the
single process deployment environment, there is no need to pre-configure a shared memory segment which in
some use cases is also seen as an advantage of this deployment option.
Each DDS application on a processing node is implemented as an individual, self-contained standalone operating
system process (i.e. all of the DDS administration and necessary services have been linked into the application
process). This is known as a single process application. Communication between multiple single process applications co-located on the same machine node is done via the (loop-back) network, since there is no memory shared
between them. An extension to the single process architecture is the option to co-locate multiple DDS applications
into a single process. This can be done by creating application libraries rather than application executables that can
be ‘linked’ into the single process in a similar way to how the DDS middleware services are linked into the single
process. This is known as a single process application cluster. Communication between clustered applications
(that together form a single process) can still benefit from using the process’s heap memory, which typically is
an order of magnitude faster than using a network, yet the lifecycle of these clustered applications will be tightly
coupled.
The Single Process deployment is the default deployment architecture provided within Vortex OpenSplice and
allows for easy deployment with minimal configuration required for a running DDS system.
The diagram The Vortex OpenSplice Single Process Architecture shows an overview of the single process architecture of Vortex OpenSplice.
The Vortex OpenSplice Single Process Architecture

2.1.2 Shared Memory architecture
In the ‘federated’ shared memory architecture data is physically present only once on any machine but smart
administration still provides each subscriber with his own private view on this data. Both the DDS applications
and Vortex OpenSplice administration interface directly with the shared memory which is created by the Vortex
OpenSplice daemon on start up. This architecture enables a subscriber’s data cache to be seen as an individual
database and the content can be filtered, queried, etc. by using the Vortex OpenSplice content subscription profile.
Typically for advanced DDS users, the shared memory architecture is a more powerful mode of operation and
results in extremely low footprint, excellent scalability and optimal performance when compared to an implementation where each reader/writer is a communication end point with its own storage (i.e. historical data
both at reader and writer) and where the data itself still has to be moved, even within the same platform.
The diagram The Vortex OpenSplice Shared Memory Architecture shows an overview of the shared memory
architecture of Vortex OpenSplice on one computing node. Typically, there are many nodes within a system.
The Vortex OpenSplice Shared Memory Architecture

2.1.3 Comparison of Deployment Architectures
Simple when sufficient, Performant when required
The choice between the 'federated' and 'standalone' deployment architectures is basically about going for out-of-the-box simplicity or for maximum performance:
Federated Application Cluster
• Co-located applications share a common set of pluggable services (daemons)
• Resources (e.g. memory/networking) are managed per ‘federation’
• Added value: performance (scalability and determinism)
Federated Application Cluster

Non-federated, ‘single process’ Applications
• Each application links the required DDS services as libraries into a standalone ‘single process’
• Resources are managed by each individual application
• Added value: Ease-of-use (‘zero-configuration’, middleware lifecycle is simply coupled to that of the application process)
Non-federated, single-process Applications

2.1.4 Configuring and Using the Deployment Architectures
The deployment architecture choice between a shared-memory federation or a standalone ‘single process’ is a
runtime choice driven by a simple single configuration parameter in the domain configuration xml file:
<SingleProcess>true</SingleProcess>
Note that there is absolutely no need to recompile or even re-link an application when selecting or changing the
deployment architecture.
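As an illustration only, a minimal sketch of how this parameter might appear in a domain configuration file is shown below; the domain name is an arbitrary example and the surrounding elements are described in full in the Configuration section.

<OpenSplice>
   <Domain>
      <Name>MyDomain</Name>
      <!-- true selects the standalone 'single process' deployment;
           omitting this element selects the shared memory (federated) deployment -->
      <SingleProcess>true</SingleProcess>
   </Domain>
</OpenSplice>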
NOTE for VxWorks kernel mode builds of OpenSplice the single process feature of the OpenSplice domain must
not be enabled, i.e. "<SingleProcess>true</SingleProcess>" must not be included in the OpenSplice configuration
XML. The model used on VxWorks kernel builds is always that an area of kernel memory is allocated to
store the domain database (the size of which is controlled by the size option in the Database configuration for
OpenSplice, as is used on other platforms for the shared memory model). This can then be accessed by any task
on the same VxWorks node.
The deployment modes can be mixed at will, so even on a single computing node, one could have some applications that are deployed as a federation as well as other applications that are deployed as individual ‘single
processes’.
To facilitate the ‘out-of-the-box’ experience, the default ospl.xml configuration file specifies the standalone
‘single process’ deployment architecture where the middleware is simply linked as libraries into an application:
no need to configure shared-memory, no need to ‘fire up’ Vortex OpenSplice first to start the related services. The
middleware lifecycle (and with that the information lifecycle) is directly coupled to that of the application.
When, with growing system scale, scalability and determinism require efficient sharing of memory and networking resources, the deployment architecture can be switched easily to the federated architecture; thereafter the
middleware and application(s) lifecycles are decoupled and a single set of services facilitate the federation of applications with regard to scheduling data transfers over the wire (based upon the actual importance and urgency
of each published data-sample), maintaining data for late joining applications (on the same or other nodes in the
system) and efficient (single-copy) sharing of all data within the computing node regardless of the number of
applications in the federation.
The Vortex OpenSplice distribution contains multiple example configuration files that exploit both deployment
architectures. Configurations that exploit the single-process architecture start with ospl_sp_ whereas federated-deployment configurations start with ospl_shmem_.

2.2 Vortex OpenSplice Usage
The Vortex OpenSplice environment has to be set up to instruct the node where executables and libraries can be
found in order to be able to start the Domain Service.

On UNIX-like platforms this can be realized by starting a shell and sourcing the release.com file
located in the root directory of the Vortex OpenSplice installation:
% . ./release.com

On the Windows platform the environment must be set up by running release.bat, or else the
Vortex OpenSplice Command Prompt must be used.

2.2.1 Starting Vortex OpenSplice for a Single Process Deployment
For ‘standalone’ single process deployment, there is no need to start the Vortex OpenSplice middleware before
starting the DDS application, since the application itself will implicitly start the library threads of the Vortex
OpenSplice Domain Service and associated services at the point when the DDS create_participant operation
is invoked by the standalone application process.

2.2.2 Starting Vortex OpenSplice for a Shared Memory Deployment
For a shared memory deployment, it is necessary to start the Vortex OpenSplice Domain Service prior to running
a DDS application. The ospl command-tool is provided to manage Vortex OpenSplice for shared memory
deployments. To start Vortex OpenSplice in this way, enter:
% ospl start

This will start the Domain Service using the default configuration.

NOTE: The Integrity version of Vortex OpenSplice does not include the ospl program. Instead
there is a project generator, ospl_projgen, which generates projects containing the required address spaces which will auto-start when loaded. Please refer to the Getting Started Guide for details.

NOTE: The VxWorks version of Vortex OpenSplice does not include the ospl program. Please refer
to the Getting Started Guide for details of how to use VxWorks projects and Real Time Processes to
deploy Vortex OpenSplice applications.

2.2.3 Monitoring
The Vortex OpenSplice Domain Service can be monitored and tuned in numerous ways after it has been started.
The monitoring and tuning capabilities are described in the following subsections.
2.2.3.1 Diagnostic Messages
Vortex OpenSplice outputs diagnostic information. This information is written to the ospl-info.log file
located in the start-up directory, by default. Error messages are written to the ospl-error.log file, by default.
The state of the system can be determined from the information written into these files.
The location where the information and error messages are stored can be overridden by setting the
OSPL_LOGPATH environment variable to a location on disk (by specifying a path), to standard out (by specifying <stdout>) or to standard error (by specifying <stderr>). The names of these log files can also be
changed by setting the OSPL_INFOFILE and OSPL_ERRORFILE variables.
Vortex OpenSplice also accepts the environment properties OSPL_VERBOSITY and OSPL_LOGAPPEND. These
provide an alternate method of specifying values for Attribute append and Attribute verbosity of the
Domain/Report configuration element (see the Configuration section for details).

Values specified in the domain configuration override the environment values.
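For example, these report settings could also be fixed in the configuration file itself; the sketch below is illustrative only, and the attribute values shown (appending disabled and an assumed verbosity level of INFO) are examples rather than prescribed settings.

<OpenSplice>
   <Domain>
      <!-- append and verbosity correspond to OSPL_LOGAPPEND and OSPL_VERBOSITY -->
      <Report append="false" verbosity="INFO"/>
   </Domain>
</OpenSplice>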
2.2.3.2 Vortex OpenSplice Tuner
The intention of Vortex OpenSplice Tuner, ospltun, is to provide facilities for monitoring and controlling Vortex
OpenSplice, as well as the applications that use OpenSplice for the distribution of data. The Vortex OpenSplice
Tuner User Guide specifies the capabilities of Vortex OpenSplice Tuner and describes how they should be used.
Note that the Tuner will only be able to connect to the shared memory of a particular DDS Domain when it is run
on a node that is already running Vortex OpenSplice in the shared memory deployment mode.

The Tuner will also be able to monitor and control a Domain running as a single process if the Tuner itself is
started as the single process application with other DDS applications clustered in the process by being deployed
as a single process application cluster. Please refer to the Vortex OpenSplice Tuner User Guide for a description
of how to cluster applications together in a single process.
2.2.3.3 Vortex OpenSplice Memory Management Statistics Monitor
The Vortex OpenSplice Memory Management Statistics Tool, mmstat, provides a command line interface that
allows monitoring the status of the nodal shared administration (shared memory) used by the middleware and the
applications. Use the help switch (mmstat -h) for usage information. Please refer to the Tools chapter for
detailed information about mmstat.

Please note that mmstat is only suitable for diagnostic purposes, and its use is only applicable in
shared memory mode.

2.2.4 Stopping Vortex OpenSplice
2.2.4.1 Stopping a Single Process deployment
When deployed as a single process, the application can either be terminated naturally when the end of the main
function is reached, or stopped prematurely by means of a signal interrupt, for example Ctrl-C. In either case,
the Vortex OpenSplice middleware running within the process will be stopped and the process will terminate.
2.2.4.2 Stopping a Shared Memory deployment
In shared memory deployment mode, the Vortex OpenSplice Domain Service can be stopped by issuing the following command on the command-line.
% ospl stop

The Vortex OpenSplice Domain Service will react by announcing the shutdown using the shared administration.
Applications will not be able to use DDS functionality anymore and services will terminate elegantly. Once this
has succeeded, the Domain Service will destroy the shared administration and finally terminate itself.
Stopping OSPL by using signals

Alternatively the Vortex OpenSplice domain service can also be stopped by sending a signal to the ospl process,
assuming the process was started using the -f option.

For example, on Unix you could use the following command to send a termination signal to the ospl
tool, where <pid> identifies the ospl tool pid:
% kill -SIGTERM <pid>

Sending such a signal will cause the ospl tool to exit gracefully, properly terminating all services and exiting
with returncode 0.
The following table shows a list of all POSIX signals and what the behavior of OSPL is when that signal is sent
to the ospl tool.

Signal     Default action   OSPL action     Description
SIGHUP     Term.            Graceful exit   Hang up on controlling process
SIGINT     Term.            Graceful exit   Interrupt from keyboard
SIGQUIT    Core             Graceful exit   Quit from keyboard
SIGILL     Core             Graceful exit   Illegal instruction
SIGABRT    Core             Graceful exit   Abort signal from abort function
SIGFPE     Core             Graceful exit   Floating point exception
SIGKILL    Term.            Term.           Kill signal (can't catch, block, ignore)
SIGSEGV    Core             Graceful exit   Invalid memory reference
SIGPIPE    Term.            Graceful exit   Broken pipe: write to pipe with no readers
SIGALRM    Term.            Graceful exit   Timer signal from alarm function
SIGTERM    Term.            Graceful exit   Termination signal
SIGUSR1    Term.            Graceful exit   User defined signal 1
SIGUSR2    Term.            Graceful exit   User defined signal 2
SIGCHLD    Ignore           Ignore          A child process has terminated or stopped
SIGCONT    Ignore           Ignore          Continue if stopped
SIGSTOP    Stop             Stop            Stop process (can't catch, block, ignore)
SIGTSTP    Stop             Graceful exit   Stop typed at tty
SIGTTIN    Stop             Graceful exit   Tty input for background process
SIGTTOU    Stop             Graceful exit   Tty output for background process

Stopping Applications in Shared Memory Mode

Applications that are connected to and use Vortex OpenSplice in shared memory mode must not be terminated
with a KILL signal. This will ensure that Vortex OpenSplice DDS shared memory always remains in a valid,
functional state.
When Vortex OpenSplice applications terminate naturally, a cleanup mechanism is executed that releases any
references held to the shared memory within Vortex OpenSplice which that application was using. This mechanism
will be executed even when an application is terminated by other means (e.g. by terminating with Ctrl+C) or
even if the application crashes in the user code.

The cleanup mechanisms are not executed when an application is terminated with a KILL signal. For
this reason a user must not terminate an application with a kill -9 command (or, on Windows,
must not use TaskManager’s End Process option) because the process will be forcibly removed and
the cleanup mechanisms will be prevented from executing. If an application is killed in this manner,
the shared memory regions of Vortex OpenSplice will become inconsistent and no recovery will then
be possible other than re-starting Vortex OpenSplice and all applications on the node.

2.2.5 Deploying Vortex OpenSplice on VxWorks 6.x
The VxWorks version of Vortex OpenSplice does not include the ospl program. Please refer to the Getting
Started Guide for details of how to use VxWorks projects and Real Time Processes to deploy Vortex OpenSplice
applications.

2.2.6 Deploying Vortex OpenSplice on Integrity
The Integrity version of Vortex OpenSplice does not include the ospl program. Instead there is a project generator, ospl_projgen, which generates projects containing the required address spaces which will auto-start when
loaded. Please refer to the Getting Started Guide for detailed information about Vortex OpenSplice deployment
on Integrity.

2.2.7 Installing/Uninstalling the Vortex OpenSplice C# Assembly to the Global
Assembly Cache
The installer for the commercial distribution of Vortex OpenSplice includes the option to install the C# Assembly
to the Global Assembly Cache during the installation process. If you chose to omit this step, or you are an open
source user, then you should follow the instructions in the next few paragraphs, which describe how to manually
install and uninstall the assembly to the Global Assembly Cache.
2.2.7.1 Installing the C# Assembly to the Global Assembly Cache
To install an assembly to the Global Assembly Cache, you need to use the gacutil.exe tool. Start a Visual
Studio command prompt and type:
% gacutil /i <OpenSplice installation path>\bin\dcpssacsAssembly.dll

where <OpenSplice installation path> is the installation path of the Vortex OpenSplice
distribution. If you are successful you will see a message similar to the following:
% C:\Program Files\Microsoft Visual Studio 9.0\VC>gacutil.exe /i
"C:\Program Files \ADLINK\VortexOpenSplice\V6.6.0\HDE\x86.win32\
bin\dcpssacsAssembly.dll"
%
% Microsoft (R) .NET Global Assembly Cache Utility. Version
3.5.30729.1
% Copyright (c) Microsoft Corporation. All rights reserved.
%
% Assembly successfully added to the cache
%
% C:\Program Files\Microsoft Visual Studio 9.0\VC>

2.2.7.2 Uninstalling the C# Assembly from the Global Assembly Cache
To uninstall an assembly from the Global Assembly Cache, you need to use the gacutil.exe tool. Start a
Visual Studio command prompt and type:
% gacutil /u dcpssacsAssembly,Version=<version>

The version number of the assembly is defined in the <OpenSplice installation path>\etc\RELEASEINFO file, in the CS_DLL_VERSION variable.
If you are successful you will see a message similar to the following:
% C:\Program Files\Microsoft Visual Studio 9.0\VC>gacutil /u
dcpssacsAssembly,Version=5.1.0.14734
% Microsoft (R) .NET Global Assembly Cache Utility. Version
3.5.30729.1
% Copyright (c) Microsoft Corporation. All rights reserved.
%
% Assembly: dcpssacsAssembly, Version=5.1.0.14734,
Culture=neutral, PublicKeyToken=5b9310ab51310fa9,
processorArchitecture=MSIL
% Uninstalled: dcpssacsAssembly, Version=5.1.0.14734,
Culture=neutral, PublicKeyToken=5b9310ab51310fa9,
processorArchitecture=MSIL
% Number of assemblies uninstalled = 1
% Number of failures = 0
%
% C:\Program Files\Microsoft Visual Studio 9.0\VC>

If you do not specify a version to the uninstall option, then all installed Vortex OpenSplice C#
Assemblies in the GAC called dcpssacsAssembly will be removed from the GAC, so take
care with this option as it can adversely affect any deployed applications that rely on other
versions of these assemblies.
We strongly recommend that every time you uninstall a Vortex OpenSplice C# Assembly you
specify the version you want to uninstall.

2.3 Vortex OpenSplice Configuration
Each application domain has its own characteristics; Vortex OpenSplice therefore allows configuring a wide range
of parameters that influence its behaviour to be able to achieve optimal performance in every situation. This
section describes generally how to instruct Vortex OpenSplice to use a configuration that is different from the
default. This requires the creation of a custom configuration file and an instruction to the middleware to use this
custom configuration file.

2.3.1 Configuration Files
Vortex OpenSplice expects the configuration to be defined in the XML format. The expected syntax and semantics
of the configuration parameters will be discussed further on in this document. Within the context of Vortex
OpenSplice, a reference to a configuration is expressed in the form of a Uniform Resource Identifier (URI).
Currently, only file URIs are supported (for example, file:///opt/ospl/config/ospl.xml).
When Vortex OpenSplice is started, the Domain Service parses the configuration file using the provided URI.
According to this configuration, it creates the DDS administration and initialises it. After that, the Domain Service
starts the configured services. The Domain Service passes on its own URI to all services it starts, so they will
also be able to resolve their configuration from this resource as well. (Of course, it is also possible to configure a
different URI for each of the services, but usually one configuration file for all services will be the most convenient
option.) The services will use default values for the parameters that have not been specified in the configuration.

2.3.2 Environment Variables
The Vortex OpenSplice middleware will read several environment variables for different purposes. These variables
are mentioned in this document at several places. To some extent, the user can customize the Vortex OpenSplice
middleware by adapting the environment variables.
When specifying configuration parameter values in a configuration file, environment variables can be referenced
using the notation ${VARIABLE}. When parsing the XML configuration, the Domain Service will replace the
symbol with the variable value found in the environment.
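As a small illustration, a configuration value could be taken from a user-defined environment variable; the variable name MY_DB_SIZE and the Database/Size element used below are illustrative assumptions.

<OpenSplice>
   <Domain>
      <Database>
         <!-- replaced at parse time by the value of the MY_DB_SIZE environment variable -->
         <Size>${MY_DB_SIZE}</Size>
      </Database>
   </Domain>
</OpenSplice>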
2.3.2.1 The OSPL_URI environment variable
The environment variable OSPL_URI is a convenient mechanism to pass the configuration file to the Domain Service and DDS applications. The variable will refer to the default configuration that comes with Vortex OpenSplice
but of course can be overridden to refer to a custom configuration.
For single process mode operation this variable is required; see also Single Process architecture in this Guide,
and the detailed description of the Element //OpenSplice/Domain/SingleProcess in the Configuration
section.

On Linux/Unix-based platforms, this variable can be initialized by sourcing the release.com
script that is created by the Vortex OpenSplice installer.

On Windows platforms, this variable may already be initialized in your environment by the Windows
installer. Alternatively, it can be set by executing the supplied release.bat script or the Vortex
OpenSplice Command Prompt.

2.3.3 Configuration of Single Process deployment
A single process deployment is enabled when the OSPL_URI environment variable refers to
an XML configuration containing the SingleProcess element within the Domain section
(//OpenSplice/Domain/SingleProcess). See the Configuration section for full details. In such
a deployment, each Vortex OpenSplice service including the Domain Service will be started as threads within the
existing application process.
In this case there is no need to start the Vortex OpenSplice administration manually since this is implicitly handled
within the DDS code when the application first invokes the DDS create_participant operation. Since the
OSPL_URI environment variable describes the Vortex OpenSplice system, there is no requirement to pass any
Vortex OpenSplice configuration parameters to the application.

2.3.4 Configuration of Shared Memory deployment
In order to have Vortex OpenSplice start with a custom configuration file, use:
% ospl start <URI>

where <URI> denotes the URI of the Domain Service configuration file.
In order to stop a specific Vortex OpenSplice instance, the same mechanism holds. Use:
% ospl stop <URI>

Several instances of Vortex OpenSplice can run simultaneously, as long as their configurations specify different
domain names. Typically, only one instance of the middleware is needed. Multiple instances of the middleware
are only required when one or more applications on the computing node participate in different or multiple DDS
Domains. At any time, the system can be queried for all running Vortex OpenSplice instances by using the
command:
% ospl list

To stop all active Vortex OpenSplice Domains, use:
% ospl -a stop

Note that the <URI> parameter to the above commands is not required if the OSPL_URI environment variable
refers to the configuration that is intended to be started or stopped.

2.3.5 Temporary Files
Please note that for a shared memory deployment, Vortex OpenSplice uses temporary files to describe
the shared memory that has been created. The exact nature of these files varies according to the operating system;
however, the user does not need to manipulate these files directly.

On Linux systems the location of the temp files is /tmp by default, while on Windows the location
is the value of the TEMP (or TMP if TEMP is not set) environment variable. These locations can be
over-ridden, if required, by setting the OSPL_TEMP variable to a location on disk by specifying a
path. Please note, however, that this must be consistent for all environments on a particular node.

2.4 Applications which operate in multiple domains
Vortex OpenSplice can be configured to allow a DDS application to operate in multiple domains.

Please note that an application operating in multiple domains is currently only supported in shared
memory deployments.
In order to achieve multi-domain operation, the host node for the application must run Vortex OpenSplice instances
for every domain in which applications on that node will interact. For example, if an application A wants to operate
in domains X, Y and Z then the node on which application A operates must run appropriate services for X, Y and
Z.
Vortex OpenSplice utilises shared memory regions for intra-node communication. Each domain running on a
node must have its own shared memory region, and subsequently the shared memory region for each domain
that an application wants to operate within must be mapped into that application’s virtual address space. The
mapping must occur at a virtual address in memory that is common to both the Vortex OpenSplice daemon (and
any services) for that domain and the application itself. This requires some thought when configuring multiple
Vortex OpenSplice domains on a single node. Care must be taken to ensure that the XML configuration files
contain unique and non-overlapping addresses for the shared memory mapping (please also see the description of
the XML element //OpenSplice/Domain/Database/Address in the Configuration section).
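As a hedged illustration, two domain configuration files on the same node could map their shared memory at different, non-overlapping addresses; the domain names and address values below are arbitrary examples and the element layout follows the style of the default configuration files.

<!-- first domain (illustrative) -->
<OpenSplice>
   <Domain>
      <Name>DomainX</Name>
      <Database>
         <Address>0x40000000</Address>
      </Database>
   </Domain>
</OpenSplice>

<!-- second domain (illustrative), mapped at a different, non-overlapping address -->
<OpenSplice>
   <Domain>
      <Name>DomainY</Name>
      <Database>
         <Address>0x60000000</Address>
      </Database>
   </Domain>
</OpenSplice>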
When designing and coding applications, care must also be taken with regard to usage of the default domain. If a
domain is not explicitly identified in the application code, then appropriate steps must be taken at deployment in
order to ensure that applications operate in the domain they were intended to.

2.4.1 Interaction with a Networking Service
Where multiple domains are running on a single node, each domain must run its own instance of a networking
service if that domain is to participate in remote communication.
• Each domain should have its own pre-determined port numbers configured in the XML for that domain.
• These port numbers must be common for that domain across the system.

2.5 Time-jumps
Observed time discontinuities can affect data ordering and processing of middleware actions. Time-jumps can be
caused by adjusting the clock forward or backward. When resuming from being suspended, time will seem to
have jumped forward as if the clock was advanced.

2.5.1 Effect on data
When a sample is published, a time stamp is determined at the source which is attached to the sample before it is
sent. The subscriber stores the time stamp at which the sample is received in the sample as well. In DDS samples
are ordered within the history of an instance based on either the source time stamp or the reception
time stamp. This is controlled by means of the DestinationOrderQosPolicy.
The HistoryQosPolicy controls how many historic samples are stored in a reader. By default, a DataReader
has a KEEP_LAST history with a depth of 1. This means that only the ‘last’ (based on the ordering defined by the
DestinationOrderQosPolicy) sample for each instance is maintained by the middleware. When a sample
is received by the subscriber, it determines whether and where to insert the sample in the history of an instance
based on either the source
time stamp or the reception time stamp, potentially replacing an existing sample of
the instance.

BY_SOURCE_TIMESTAMP If samples are ordered by source time stamp and time is set back 1 hour on the
subscriber node, nothing changes. If it is set back one hour on the publisher node, samples written after the
time has changed have 'older' source time stamps and will therefore not overwrite the samples in the history
from before the time changed.
BY_RECEPTION_TIMESTAMP If samples are ordered by reception time stamp and time is set back
1 hour on the publisher node, nothing changes. If it is set back one hour on the subscriber node, samples
delivered after the time has changed have 'older' reception time stamps and will therefore not overwrite the
samples in the history from before the time changed.

2.5.2 Effect on processing
Processing of relative time actions, actions for which a time contract exists with local entities (e.g., inter-process
leases, wait for attachment of a service) or time contracts involving remote parties (e.g., heartbeats, deadline) may
not behave as expected when time is advanced discontinuously by an operator or when a system is suspended
(e.g., hibernate or standby) and resumed. If the OS doesn’t support alternative clocks that aren’t influenced by
this, the middleware may for example stop working because spliced doesn’t seem to have updated its lease on
time, causing services/applications to conclude that spliced isn’t running anymore.
Also, timed waits may not have the expected duration; too short when time is advanced and too long when time
is turned back. Modern Windows and Linux OS’s provide these alternative clocks. If the clocks are available, the
middleware will use these to prevent the adverse effects of observed time-jumps on its processing.

2.5.3 Background information
The basic clock used for time stamps of data published in DDS is the real-time clock. This time is expressed as the
time since the Unix epoch (00:00:00 on Thursday the 1st of January 1970, UTC). All systems support some form
of a real-time clock. For most distributed systems that use time, the clocks on different nodes should have similar
notions of time progression and because all systems try to keep track of the actual time as accurately as possible,
the real-time clock is typically a very good distributed notion of time. If a machine is not synchronised with the
actual time, correcting the offset will cause the real-time clock to become discontinuous. These discontinuities
make it impossible even to track relative times, so this is where monotonic clocks are needed. However, not all
systems have support for monotonic clocks with near real-time time progression.
The following clock-types are used by the middleware to cope with time discontinuities in processing of data,
local leases and remote time based contracts if supported by the OS.
Real-time clock This is the main clock used for time stamps on data and data-related actions. This time is typically
kept close to the actual time by the OS by means of NTP or the like. This clock can also be provided by
the customer through the ‘UserClock’ functionality (//OpenSplice/Domain/UserClockService
is fully described in the Configuration section).
Monotonic clock This is a clock that never jumps back and which provides a measure for the time a machine has
been running since boot. When the time is adjusted, this clock will not jump forward or backward. This
clock doesn’t include the time spent when a machine was suspended.
Elapsed time clock This is also a monotonic clock, since it measures the elapsed time since some undefined, but
fixed time. This clock is also not affected by adjusting the real-time clock, but it does include time the
machine was suspended.

2.6 Time stamps and year 2038 limit
The DDS_Time_t time stamp definition contains a 32-bit second field with an epoch of 01-01-1970. As a result of this the second field is unable to represent a time after year 2038. From version 6.7 this problem
is addressed by changing the second field to a 64-bit representation. For now this change is only done for
CORBA C++ and CORBA Java and all internal DDS data structures. All other language bindings still use
the 32-bit representation. Version 6.7 is fully compatible with older versions and will communicate by default in the 32-bit time representation with other nodes. If the domain/y2038Ready option is set, the node will
use the new 64-bit second representation which makes it incompatible with older nodes prior to version 6.7.
(//OpenSplice/Domain/y2038Ready is fully described in the Configuration section)
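A minimal sketch of enabling this option is shown below; it assumes a simple true/false value as described for //OpenSplice/Domain/y2038Ready in the Configuration section.

<OpenSplice>
   <Domain>
      <!-- switch this node to the 64-bit second representation;
           only enable once every node runs version 6.7 or later -->
      <y2038Ready>true</y2038Ready>
   </Domain>
</OpenSplice>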

2.6.1 CORBA C++
By default the CORBA C++ library (dcpsccpp) that comes with OpenSplice is built with support for the 32-bit
DDS_Time_t representation. To rebuild this library to get support for the new 64-bit DDS_Time_t representation
please look at the OSPL_HOME/custom_lib/ccpp/README document which explains how to do this.

2.6.2 CORBA Java
By default the CORBA Java library (dcpscj.jar) that comes with OpenSplice is built with support for the 32-bit DDS_Time_t representation. A new library dcpscj_y2038_ready.jar is added which supports the new 64-bit
DDS_Time_t representation. This library can be used when time stamps beyond year 2038 are needed.

2.6.3 Migration
client-durability Users that use client-durability cannot use times beyond 2038. This is because the client durability protocol uses DDS_Time_t in 32-bit. Also, Lite and Cafe do not support 64-bit yet.
DDS_Time_t in user data model Users that currently use DDS_Time_t in their user-defined data structures cannot migrate a running system. If they want to migrate, the complete system must be shut down and all storages
containing the old 32-bit dds_time topics stored by the durability service must be deleted. Rebuild the data models
with the new 64-bit dds_time topics and restart the system. Running a mixed environment with old and new
dds_time structures will result in topic mismatches.
Record and Replay (RnR) service Users that use the Record and Replay service cannot use times beyond 2038.
This is because the RnR service uses 32-bit times in the provided API.
No client durability and no DDS_Time_t usage Customers that do not use DDS_Time_t in their user-defined
data structures AND do not use client durability can migrate in two steps:
• First update all nodes to at least version 6.7 to be compatible with the 64-bit time stamps, but don't
set the domain/y2038Ready option
• Once all nodes are running compatible versions, the nodes can be switched one by one to the 64-bit time stamps
by setting the domain/y2038Ready option to true.

2.6.4 Platform support
• Linux 64-bit: On 64-bit platforms Linux already supports 64-bit time. No action required.
• Linux 32-bit: On 32-bit platforms Linux support for 64-bit time stamps is still in development. To
provide y2038-safe time in GLIBC it is proposed that the user code defines _TIME_BITS=64 to get
64-bit time support. When GLIBC sees _TIME_BITS=64, or when the system is 64-bit, it will set
__USE_TIME_BITS64 to indicate that it will use 64-bit time. Note that this is not yet supported. See:
https://sourceware.org/glibc/wiki/Y2038ProofnessDesign?rev=83
• Windows: 64-bit time stamps are supported
NOTE: Network Time Protocol (this is outside the scope of OpenSplice): When NTP is used there may be
a problem that the time stamp will roll over in 2036. This may not be an issue when version 4 of the NTP protocol
is used, which provides specification of an era number and era offset.

2.6.5 DDS_Time structure change
The new DDS_Time representation which contains a 64-bit second field:

module DDS {
    struct Time_t {
        long long sec;
        unsigned long nanosec;
    };
};

The original DDS_Time representation with a 32-bit second field:
module DDS {
    struct Time_t {
        long sec;
        unsigned long nanosec;
    };
};

3 Service Descriptions
Vortex OpenSplice middleware includes several services; each service has a particular responsibility. All of the
services are described in the following sections.
The Shared Memory architecture shows all of the services included with Vortex OpenSplice.
Each service can be enabled or disabled. The services can be configured or tuned to meet the optimum requirements of a particular application domain (noting that detailed knowledge of the requirement is needed for effective
tuning).
The following sections explain each of the services and their responsibilities.
The Domain Service
The Durability Service
The Networking Service
The DDSI2 and DDSI2E Networking Services
The NetworkingBridge Service
The Tuner Service
The DbmsConnect Service
For the Recording and Replay Service, see its own specific guide.
Vortex OpenSplice middleware and its services can be configured using easy-to-maintain XML files.
Full details of how to use XML files to configure the elements and attributes of all Vortex OpenSplice services are
given in the Configuration section.

4 The Domain Service
The Domain Service is responsible for creating and initialising the database which is used by the administration
to manage the DDS data.
In the single process architecture the Domain Service is started as a new thread within the DDS application. This is
done implicitly when the application invokes the DDS create_participant operation and no such service
currently exists within the process. Without a database size configured, the Domain Service creates the DDS
database within the heap memory of the process and so is limited only by the maximum heap that the operating
system supports. To be able to manage the maximum database size, a database size can also be given in single
process mode. The Domain Service then creates the DDS database within the heap memory of the process with
the given size and will use its own memory manager in this specifically allocated memory.
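A sketch of how a database size might be given for a single process deployment is shown below; the size value is an arbitrary example and the exact elements are documented in the Configuration section.

<OpenSplice>
   <Domain>
      <SingleProcess>true</SingleProcess>
      <Database>
         <!-- pre-allocate a 10 MB heap database managed by the OpenSplice memory manager -->
         <Size>10485760</Size>
      </Database>
   </Domain>
</OpenSplice>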
In the shared memory architecture, the user is responsible for managing the DDS administration separately from
the DDS application. In this mode, the Domain Service is started as a separate process; it then creates and
initialises the database by allocating a particular amount of shared memory as dictated by the configuration.
Without this administration, no other service or application is able to participate in the DDS Domain.
In either deployment mode, once the database has been initialised, the Domain Service starts the set of pluggable
services. In single process mode these services will be started as threads within the existing process, while in
shared memory mode the services will be represented by new separate processes that can interface with the shared
memory segment.
When a shutdown of the OpenSplice Domain Service is requested in shared memory mode, it will react by announcing the shutdown using the shared administration. Applications will not be able to use DDS functionality
anymore and services are requested to terminate elegantly. Once this has succeeded, the Domain Service will
destroy the shared administration and finally terminate itself.
The exact fulfilment of these responsibilities is determined by the configuration of the Domain Service. There are
detailed descriptions of all of the available configuration parameters and their purpose in the Configuration section.

5 The Durability Service
This section provides a description of the most important concepts and mechanisms of the current durability service implementation, starting with a description of the purpose of the service. After that, all its concepts and
The exact fulfilment of the durability responsibilities is determined by the configuration of the Durability Service. There are detailed descriptions of all of the available configuration parameters and their purpose in the
Configuration section.

5.1 Durability Service Purpose
Vortex OpenSplice will make sure data is delivered to all ‘compatible’ subscribers that are available at the time
the data is published using the ‘communication paths’ that are implicitly created by the middleware based on the
interest of applications that participate in the domain. However, subscribers that are created after the data has
been published (called late-joiners) may also be interested in the data that was published before they were created
(called historical data). To facilitate this use case, DDS provides a concept called durability in the form of a
Quality of Service (DurabilityQosPolicy).
The DurabilityQosPolicy prescribes how published data needs to be maintained by the DDS middleware
and comes in four flavours:
VOLATILE Data does not need to be maintained for late-joiners (default).
TRANSIENT_LOCAL Data needs to be maintained for as long as the DataWriter is active.
TRANSIENT Data needs to be maintained for as long as the middleware is running on at least one of the nodes.
PERSISTENT Data needs to outlive system downtime. This implies that it must be kept somewhere on permanent storage in order to be able to make it available again for subscribers after the middleware is restarted.
In Vortex OpenSplice, the realisation of the non-volatile properties is the responsibility of the durability service.
Maintenance and provision of historical data could in theory be done by a single durability service in the domain,
but for fault-tolerance and efficiency one durability service is usually running on every computing node. These
durability services are on the one hand responsible for maintaining the set of historical data and on the other hand
responsible for providing historical data to late-joining subscribers. The configurations of the different services
drive the behaviour on where and when specific data will be maintained and how it will be provided to late-joiners.

5.2 Durability Service Concepts
The following subsections describe the concepts that drive the implementation of the OpenSplice Durability Service.

5.2.1 Role and Scope
Each OpenSplice node can be configured with a so-called role. A role is a logical name and different nodes can
be configured with the same role. The role itself does not impose anything, but multiple OpenSplice services use

19

Deployment Guide, Release 6.x

the role as a mechanism to distinguish behaviour between nodes with the equal and different roles. The durability
service allows configuring a so-called scope, which is an expression that is matches against roles of other nodes.
By using a scope, the durability service can be instructed to apply different behaviour with respect to merging of
historical data sets (see Merge policy) to and from nodes that have equal or different roles.
Please refer to the Configuration section for detailed descriptions of:
• //OpenSplice/Domain/Role
• //OpenSplice/DurabilityService/NameSpaces/Policy/Merge[@scope]
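As a small illustration, a role could be assigned to a node in its Domain configuration; the role name below is an arbitrary example, and the Merge policy scope that is matched against it is configured separately under the DurabilityService element as described in the Configuration section.

<OpenSplice>
   <Domain>
      <!-- logical role of this node; durability Merge policies on other nodes can
           use a scope expression that is matched against this value -->
      <Role>SimulationNode</Role>
   </Domain>
</OpenSplice>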

5.2.2 Name-spaces
A sample published in DDS for a specific topic and instance is bound to one logical partition. This means that in
case a publisher is associated with multiple partitions, a separate sample for each of the associated partitions is
created. Even though they are syntactically equal, they have different semantics (consider for instance the situation
where you have a sample in the ‘simulation’ partition versus one in the ‘real world’ partition).
Because applications might impose semantic relationships between instances published in different partitions, a
mechanism is required to express this relationship and ensure consistency between partitions. For example, an
application might expect a specific instance in partition Y to be available when it reads a specific instance from
partition X.
This implies that the data in both partitions need to be maintained as one single set. For persistent data, this
dependency implies that the durability services in a domain needs to make sure that this data set is re-published
from one single persistent store instead of combining data coming from multiple stores on disk. To express this
semantic relation between instances in different partitions to the durability service, the user can configure so-called
‘name-spaces’ in the durability configuration file.
Each name-space is formed by a collection of partitions and all instances in such a collection are always handled
as an atomic data-set by the durability service. In other words, the data is guaranteed to be stored and reinserted
as a whole.
This atomicity also implies that a name-space is a system-wide concept, meaning that different durability services
need to agree on its definition, i.e. which partitions belong to one name-space. This doesn’t mean that each
durability service needs to know all name-spaces, as long as the name-spaces one does know don’t conflict with
one of the others in the domain. Name-spaces that are completely disjoint can co-exist (their intersection is an
empty set); name-spaces conflict when they intersect. For example: name-spaces {p1, q} and {p2, r} can co-exist,
but name-spaces {s, t} and {s, u} cannot.
Furthermore it is important to know that there is a set of configurable policies for name-spaces, allowing durability
services throughout the domain to take different responsibilities for each name-space with respect to maintaining
and providing of data that belongs to the name-space. The durability name-spaces define the mapping between
logical partitions and the responsibilities that a specific durability service needs to play. In the default configuration
file there is only one name-space by default (holding all partitions).
Next to the capability of associating a semantic relationship for data in one name-space, the need to differentiate
the responsibilities of a particular durability service for a specific data-set is the second purpose of a name-space.
Even though there may not be any relation between instances in different partitions, the choice of grouping specific
partitions in different name-spaces can still be perfectly valid. The need for availability of non-volatile data under
specific conditions (fault-tolerance) on the one hand versus requirements on performance (memory usage, network
bandwidth, CPU usage, etc.) on the other hand may force the user to split up the maintaining of the non-volatile
data-set over multiple durability services in the domain. Illustrative of this balance between fault-tolerance and
performance is the example of maintaining all data in all durability services, which is maximally fault-tolerant, but
also requires the most resources. The name-spaces concept allows the user to divide the total set of non-volatile
data over multiple name-spaces and assign different responsibilities to different durability-services in the form of
so-called name-space policies.
Please refer to the Configuration section for a detailed description of:
• //OpenSplice/DurabilityService/NameSpaces/NameSpace
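A sketch of a possible name-space definition is given below; the name-space name and partition names are illustrative, and the Partition sub-element follows the layout of the default configuration files rather than a definition in this section (see the Configuration section for the exact schema).

<DurabilityService>
   <NameSpaces>
      <!-- all instances in the 'simulation' and 'world' partitions form one atomic data-set -->
      <NameSpace name="scenarioData">
         <Partition>simulation</Partition>
         <Partition>world</Partition>
      </NameSpace>
   </NameSpaces>
</DurabilityService>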

5.2.3 Name-space policies
This section describes the policies that can be configured per name-space giving the user full control over the
fault-tolerance versus performance aspect on a per name-space level.
Please refer to the Configuration section for a detailed description of:
• //OpenSplice/DurabilityService/NameSpaces/Policy
5.2.3.1 Alignment policy
The durability services in a domain are on the one hand responsible for maintaining the set of historical data
between services and on the other hand responsible for providing historical data to late-joining applications. The
configurations of the different services drive the behaviour on where and when specific data will be kept and
how it will be provided to late-joiners. The optimal configuration is driven by fault-tolerance on the one hand
and resource usage (like CPU usage, network bandwidth, disk space and memory usage) on the other hand. One
mechanism to control the behaviour of a specific durability service is the usage of alignment policies that can
be configured in the durability configuration file. This configuration option allows a user to specify if and when
data for a specific name-space (see the section about Name-spaces) will be maintained by the durability service
and whether or not it is allowed to act as an aligner for other durability services when they require (part of) the
information.
The alignment responsibility of a durability service is therefore configurable by means of two configuration options: the aligner and alignee responsibilities of the service:
Aligner policy
TRUE The durability service will align others if needed.
FALSE The durability service will not align others.
Alignee policy
INITIAL Data will be retrieved immediately when the data is available and continuously maintained from that
point forward.
LAZY Data will be retrieved on first arising interest on the local node and continuously maintained from that
point forward.
ON_REQUEST Data will be retrieved only when requested by a subscriber, but not maintained. Therefore each
request will lead to a new alignment action.
Please refer to the Configuration section for detailed descriptions of:
• //OpenSplice/DurabilityService/NameSpaces/Policy[@aligner]
• //OpenSplice/DurabilityService/NameSpaces/Policy[@alignee]
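For example, a policy that lets the local durability service maintain the data of a name-space from the start and allows it to align others could look like the sketch below; attribute capitalisation and the surrounding elements are illustrative, see the Configuration section for the exact schema.

<NameSpaces>
  <NameSpace name="defaultNamespace">
    <Partition>*</Partition>
  </NameSpace>
  <!-- Maintain the data immediately (INITIAL) and act as aligner for other services -->
  <Policy nameSpace="defaultNamespace" aligner="TRUE" alignee="INITIAL"/>
</NameSpaces>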
5.2.3.2 Durability policy
The durability service is capable of maintaining (part of) the set of non-volatile data in a domain. Normally this
means that data written as volatile is not stored, data written as transient is stored in
memory and data that is written as persistent is stored in memory and on disk. However, there are use cases where
the durability service is required to ‘weaken’ the DurabilityQosPolicy associated with the data, for instance by
storing persistent data only in memory as if it were transient. Reasons for this are performance impact (CPU load,
disk I/O) or simply because no permanent storage (in the form of some hard-disk) is available on a node. Be aware
that it is not possible to ‘strengthen’ the durability of the data (Persistent > Transient > Volatile).
The durability service has the following options for maintaining a set of historical data:
PERSISTENT Store persistent data on permanent storage, keep transient data in memory, and don’t maintain
volatile data.
TRANSIENT Keep both persistent and transient data in memory, and don’t maintain volatile data.


VOLATILE Don’t maintain persistent, transient, or volatile data.
This configuration option is called the ‘durability policy’.
Please refer to the Configuration section for a detailed description of:
• //OpenSplice/DurabilityService/NameSpaces/Policy[@durability]
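As an illustrative sketch, the durability policy is set with the durability attribute of the Policy element; the fragment below weakens persistent data to in-memory storage on a node without a disk (attribute values as documented in the Configuration section).

<!-- Keep persistent data in memory only, as if it were transient -->
<Policy nameSpace="defaultNamespace" aligner="TRUE" alignee="INITIAL" durability="TRANSIENT"/>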
5.2.3.3 Delayed alignment policy
The durability service has a mechanism in place to make sure that when multiple services with a persistent dataset
exist, only one set (typically the one with the newest state) will be injected in the system (see Persistent data
injection). This mechanism will, during the startup of the durability service, negotiate with other services which
one has the best set (see Master selection). After negotiation the ‘best’ persistent set (which can be empty) is
restored and aligned to all durability services.
Once persistent data has been re-published in the domain by a durability service for a specific name-space,
other durability services in that domain cannot decide to re-publish their own set for that name-space from disk
any longer. Applications may already have started their processing based on the already-published set, and republishing another set of data may confuse the business logic inside applications. Other durability services will
therefore back-up their own set of data and align and store the set that is already available in the domain.
It is important to realise that an empty set of data is also considered a set. This means that once a durability
service in the domain decides that there is no data (and has triggered applications that the set is complete), other
late-joining durability services will not re-publish any persistent data that they potentially have available.
Some systems however do require re-publishing persistent data from disk if the already re-published set is empty
and no data has been written for the corresponding name-space. The durability service can be instructed to
still re-publish data from disk in this case by means of an additional policy in the configuration called ‘delayed
alignment’. This Boolean policy instructs a late-joining durability service whether or not to re-publish persistent
data for a name-space that has been marked complete already in the domain, but for which no data exists and no
DataWriters have been created. Whatever setting is chosen, it should be consistent between all durability services
in a domain to ensure proper behaviour on the system level.
Please refer to the Configuration section for a detailed description of:
• //OpenSplice/DurabilityService/NameSpaces/Policy[@delayedAlignment]
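An illustrative sketch of enabling this behaviour for a name-space is shown below; as noted above, the same setting should be used by all durability services in the domain.

<!-- Re-publish persistent data from disk even if an empty set was already marked complete -->
<Policy nameSpace="defaultNamespace" aligner="TRUE" alignee="INITIAL" delayedAlignment="TRUE"/>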
5.2.3.4 Merge policy
A ‘split-brain syndrome’ can be described as the situation in which two different nodes (possibly) have a different
perception of (part of) the set of historical data. This split-brain occurs when two nodes or two sets of nodes
(i.e. two systems) that are participating in the same DDS domain have been running separately for some time and
suddenly get connected to each other. This syndrome also arises when nodes re-connect after being disconnected
for some time. Applications on these nodes may have been publishing information for the same topic in the same
partition without this information reaching the other party. Therefore their perception of the set of data will be
different.
In many cases, after this has occurred the exchange of information is no longer allowed, because there is no guarantee that data between the connected systems doesn’t conflict. For example, consider a fault-tolerant (distributed)
global id service: this service will provide globally-unique ids, but this will be guaranteed if and only if there is
no disruption of communication between all services. In such a case a disruption must be considered permanent
and a reconnection must be avoided at any cost.
Some environments nevertheless demand support for (re)connecting two separate systems. One can
think of ad-hoc networks where nodes dynamically connect when they are near each other and disconnect again
when they’re out of range, but also systems where temporal loss of network connections is normal. Another use
case is the deployment of Vortex OpenSplice in a hierarchical network, where higher-level ‘branch’ nodes need
to combine different historical data sets from multiple ‘leaves’ into their own data set. In these environments
there is the same strong need for the availability of data for ‘late-joining’ applications (non-volatile data) as in any
other system.


For these kinds of environments the durability service has additional functionality to support the alignment of
historical data when two nodes get connected. Of course, the basic use case of a newly-started node joining an
existing system is supported, but in contrast to that situation there is no universal truth in determining who
has the best (or the right) information when two already running nodes (re)connect. When this situation occurs,
the durability service provides the following possibilities to handle the situation:
IGNORE Ignore the situation and take no action at all. This means new knowledge is not actively built up.
Durability is passive and will only build up knowledge that is ‘implicitly’ received from that point forward
(simply by receiving updates that are published by applications from that point forward and delivered using
the normal publish-subscribe mechanism).
DELETE Dispose and delete all historical data. This means existing data is disposed and deleted and other data
is not actively aligned. Durability is passive and will only maintain data that is ‘implicitly’ received from
that point forward.
MERGE Merge the historical data with the data set that is available on the connecting node.
REPLACE Dispose and replace all historical data by the data set that is available on the connecting node. Because
all data is disposed first, a side effect is that instances present both before and after the merge operation
transition through NOT_ALIVE_DISPOSED and end up as NEW instances, with corresponding changes to
the instance generation counters.
CATCHUP Updates the historical data to match the historical data on the remote node by disposing those instances available in the local set but not in the remote set, and adding and updating all other instances. The
resulting data set is the same as that for the REPLACE policy, but without the side effects. In particular, the
instance state of instances that are both present on the local node and remote node and for which no updates
have been done will remain unchanged.

Note that REPLACE and CATCHUP result in the same data set, but the instance states of the data
may differ.
From this point forward this set of options will be referred to as ‘merge policies’.
Like the networking service, the durability service also allows configuration of a so-called scope to give the user
full control over what merge policy should be selected based on the role of the re-connecting node. The scope is
a logical expression and every time nodes get physically connected, they match the role of the other party against
the configured scope to see whether communication is allowed and if so, whether a merge action is required.
As part of the merge policy configuration, one can also configure a scope. This scope is matched against the role
of remote durability services to determine what merge policy to apply. Because of this scope, the merge behaviour
for (re-)connections can be configured on a per role basis. It might for instance be necessary to merge data when
re-connecting to a node with the same role, whereas (re-)connecting to a node with a different role requires no
action.
Please refer to the Configuration section for a detailed description of:
• //OpenSplice/DurabilityService/NameSpaces/Policy/Merge
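As an illustration, a merge policy is configured as a child of the Policy element. In the sketch below the attribute names type and scope and the role name are assumptions used for the example only; the exact attribute names are given in the Configuration section.

<Policy nameSpace="defaultNamespace" aligner="TRUE" alignee="INITIAL">
  <!-- When re-connecting to a node whose role matches the scope, merge the data sets -->
  <Merge type="MERGE" scope="GroundStation"/>
</Policy>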
5.2.3.5 Prevent aligning equal data sets
As explained in previous sections, temporary disconnections can cause durability services to get out-of-sync,
meaning that their data sets may diverge. To recover from such situations merge policies have been defined (see
Merge policy) where a user can specify how to combine divergent data sets when they become reconnected. Many
of these situations involve the transfer of data sets from one durability service to the other. This may generate a
considerable amount of traffic for large data sets.
If the data sets do not get out-of-sync during disconnection it is not necessary to transfer data sets from one
durability service to the other. Users can specify whether to compare data sets before alignment using the
equalityCheck attribute. When this check is enabled, hashes of the data sets are calculated and compared;
when they are equal, no data will be aligned. This may save valuable bandwidth during alignment. If the hashes
are different then the complete data sets will be aligned.


Comparing data sets does not come for free as it requires hash calculations over data sets. For large sets this overhead may become significant; for that reason it is not recommended to enable this feature for frequently-changing
data sets. Doing so will impose the penalty of having to calculate hashes when the hashes are likely to differ and
the data sets need to be aligned anyway.
Comparison of data sets using hashes is currently only supported for operational nodes that diverge; no support is
provided during initial startup.
Please refer to the Configuration section for a detailed description of:
• //OpenSplice/DurabilityService/NameSpaces/Policy[@equalityCheck]
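For example, the check can be enabled per name-space policy as sketched below (illustrative only).

<!-- Compare hashes before alignment; skip alignment when the sets are equal -->
<Policy nameSpace="defaultNamespace" aligner="TRUE" alignee="INITIAL" equalityCheck="TRUE"/>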
5.2.3.6 Dynamic name-spaces
As specified in the previous sections, a set of policies can be configured for a (set of) given name-space(s).
One may not know the complete set of name-spaces for the entire domain though, especially when new nodes
dynamically join the domain. However, in case of maximum fault-tolerance, one may still have the need to define
behaviour for a durability service by means of a set of policies for name-spaces that have not been configured on
the current node.
Every name-space in the domain is identified by a logical name. To allow a durability service to fulfil a specific
role for any name-space, each policy needs to be configured with a name-space expression that is matched against
the name of name-spaces in the domain. If the policy matches a name-space, it will be applied by the durability
service, independently of whether or not the name-space itself is configured on the node where this durability
service runs. This concept is referred to as ‘dynamic name-spaces’.
Please refer to the Configuration section for a detailed description of:
• //OpenSplice/DurabilityService/NameSpaces/Policy[@nameSpace]
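For example, assuming the nameSpace attribute accepts wildcard expressions, a single policy can cover every name-space whose name starts with a given prefix, whether or not that name-space is configured locally.

<!-- Applies to any name-space in the domain whose name matches Sensor* -->
<Policy nameSpace="Sensor*" aligner="FALSE" alignee="ON_REQUEST"/>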
5.2.3.7 Master/slave
Each durability service that is responsible for maintaining data in a namespace must maintain the complete set
for that namespace. It can achieve this by either requesting data from a durability service that indicates it has a
complete set or, if none is available, requesting all data from all services for that namespace and combining this into a
single complete set. This is the only way to ensure all available data will be obtained. In a system where all nodes
are started at the same time, none of the durability services will have the complete set, because applications on
some nodes may already have started to publish data. In the worst case every service that starts then needs to ask
every other service for its data. This concept is not very scalable and also leads to a lot of unnecessary network
traffic, because multiple nodes may (partly) have the same data. Besides that, start-up times of such a system will
grow exponentially when adding new nodes. Therefore the so-called ‘master’ concept has been introduced.
Durability services will determine one ‘master’ for every name-space per configured role amongst themselves.
Once the master has been selected, this master is the one that will obtain all historical data first (this also includes
re-publishing its persistent data from disk) and all others wait for that process to complete before asking the master
for the complete set of data. The advantage of this approach is that only the master (potentially) needs to ask all
other durability services for their data and all others only need to ask just the master service for its complete set of
data after that.
Additionally, a durability service is capable of combining alignment requests coming from multiple remote durability services and will align them all at the same time using the internal multicast capabilities. The combination of
the master concept and the capability of aligning multiple durability services at the same time make the alignment
process very scalable and prevent the start-up times from growing when the number of nodes in the system grows.
The timing of the durability protocol can be tweaked by means of configuration in order to increase chances of
combining alignment requests. This is particularly useful in environments where multiple nodes or the entire
system is usually started at the same time and a considerable amount of non-volatile data needs to be aligned.


5.3 Mechanisms
5.3.1 Interaction with other durability services
To be able to obtain or provide historical data, the durability service needs to communicate with other durability
services in the domain. These other durability services that participate in the same domain are called ‘fellows’.
The durability service uses regular DDS to communicate with its fellows. This means all information exchange
between different durability services is done via standard DataWriters and DataReaders (without relying on
non-volatile data properties of course).
Depending on the configured policies, DDS communication is used to determine and monitor the topology, to exchange information about available historical data, and to align actual data with fellow durability services.

5.3.2 Interaction with other OpenSplice services
In order to communicate with fellow durability services through regular DDS DataWriters and DataReaders, the
durability service relies on the availability of a network service. This can be either the interoperable DDSI or
the real-time networking service. It can even be a combination of multiple networking services in more complex
environments. As networking services are pluggable like the durability service itself, they are separate processes
or threads that perform tasks asynchronously next to the tasks that the durability service is performing. Some configuration is required to instruct the durability service to synchronise its activities with the configured networking
service(s). The durability service aligns data separately per partition-topic combination. Before it can start alignment for a specific partition-topic combination it needs to be sure that the networking service(s) have detected the
partition-topic combination and ensure that data published from that point forward is delivered to, or sent over,
the network. The durability service needs to be configured to instruct it which networking service(s) need to be
attached to a partition-topic combination before starting alignment. This principle is called ‘wait-for-attachment’.
Furthermore, the durability service is responsible for periodically announcing its liveliness to the splice-daemon.
This allows the splice-daemon to take corrective measures in case the durability service becomes unresponsive.
The durability service has a separate so-called ‘watch-dog’ thread to perform this task. The configuration file
allows configuring the scheduling class and priority of this watch-dog thread.
Finally, the durability service is also responsible for monitoring the splice-daemon. In case the splice-daemon itself
fails to update its lease or initiates regular termination, the durability service will terminate automatically as well.
Please refer to the Configuration section for a detailed description of:
• //OpenSplice/DurabilityService/Network

5.3.3 Interaction with applications
The durability service is responsible for providing historical data to late-joining subscribers.
Applications can use the DCPS API call wait_for_historical_data on a DataReader to synchronise on
the availability of the complete set of historical data. Depending on whether the historical data is already available
locally, data can be delivered immediately after the DataReader has been created or must be aligned from another
durability service in the domain first. Once all historical data is delivered to the newly-created DataReader, the
durability service will trigger the DataReader unblocking the wait_for_historical_data performed by
the application. If the application does not need to block until the complete set of historical data is available before
it starts processing, there is no need to call wait_for_historical_data. It should be noted that in such a
case historical data still is delivered by the durability service when it becomes available.

5.3.4 Parallel alignment
When a durability service is started and joins an already running domain, it usually obtains historical data from
one or more already running durability services. In case multiple durability services are started around the same
time, each one of them needs to obtain a set of historical data from the already running domain. The set of data that
needs to be obtained by the various durability services is often the same or at least has a large overlap. Instead of
aligning each newly joining durability service separately, aligning all of them at the same time is very beneficial,
especially if the set of historical data is quite big. By using the built-in multi-cast and broadcast capabilities of
DDS, a durability service is able to align as many other durability services as desired in one go. This ability
reduces the CPU, memory and bandwidth usage of the durability service and makes the alignment scale also in
situations where many durability services are started around the same time and a large set of historical data exists.
The concept of aligning multiple durability services at the same time is referred to as ‘parallel alignment’.
To allow this mechanism to work, durability services in a domain determine a master durability service for each
name-space. Every durability service elects the same master for a given name-space based on a set of rules that
will be explained later on in this document. When a durability service needs to be aligned, it will always send its
request for alignment to its selected master. This results in only one durability service being asked for alignment
by any other durability service in the domain for a specific name-space, but also allows the master to combine
similar requests for historical data. To be able to combine alignment requests from different sources, a master will
wait a period of time after receiving a request and before answering a request. This period of time is called the
‘request-combine period’.
The actual amount of time that defines the ‘request-combine period’ for the durability service is configurable.
Increasing this period increases the likelihood of parallel alignment, but also delays the moment at which alignment
of a remote durability service starts; if only one request arrives within the configured period, the delay yields no
benefit. The optimal setting for the request-combine period therefore depends heavily on the anticipated behaviour
of the system and may be different in every use case.
In some systems, all nodes are started simultaneously, but from that point forward new nodes start or stop only
sporadically. In such systems, a different request-combine period is desirable during the start-up phase than during
the operational phase. That is why the configuration of this period is split into two settings: one for the start-up
phase and one for the operational phase.
Please refer to the Configuration section for a detailed description of:
• //OpenSplice/DurabilityService/Network/Alignment/RequestCombinePeriod
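An illustrative sketch of such a configuration is given below, assuming the two phases are expressed as Initial and Operational child elements holding a period in seconds; the exact element names and units are defined in the Configuration section.

<DurabilityService name="durability">
  <Network>
    <Alignment>
      <RequestCombinePeriod>
        <!-- Wait longer for combinable requests while the system is starting up -->
        <Initial>2.5</Initial>
        <!-- Use a short combine window once the system is operational -->
        <Operational>0.1</Operational>
      </RequestCombinePeriod>
    </Alignment>
  </Network>
</DurabilityService>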

5.3.5 Tracing
Configuring durability services throughout a domain and finding out what exactly happens during the lifecycle of
the service can prove difficult.
OpenSplice developers sometimes have a need to get more detailed durability specific state information than is
available in the regular OpenSplice info and error logs to be able to analyse what is happening. To allow retrieval
of more internal information about the service for (off-line) analysis to improve performance or analyse potential
issues, the service can be configured to trace its activities to a specific output file on disk.
By default, this tracing is turned off for performance reasons, but it can be enabled by configuring it in the XML
configuration file.
The durability service supports various tracing verbosity levels. In general, the more verbose the configured
level (FINEST being the most verbose), the more detailed the information in the tracing file will be.
Please refer to the Configuration section for a detailed description of:
• //OpenSplice/DurabilityService/Tracing
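For example, tracing could be enabled with a fragment along the following lines; the Verbosity and OutputFile child elements are shown as an illustration and the exact schema is described in the Configuration section.

<DurabilityService name="durability">
  <Tracing>
    <!-- FINEST is the most verbose level; tracing is disabled by default -->
    <Verbosity>FINEST</Verbosity>
    <OutputFile>durability.log</OutputFile>
  </Tracing>
</DurabilityService>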

5.4 Lifecycle
During its lifecycle, the durability service performs all kinds of activities to be able to live up to the requirements
imposed by the DDS specification with respect to non-volatile properties of published data. This section describes
the various activities that a durability service performs to be able to maintain non-volatile data and provide it to
late-joiners during its lifecycle.


5.4.1 Determine connectivity
Each durability service constantly needs to have knowledge of all other durability services that participate in the
domain to determine the logical topology and changes in that topology (i.e. detect connecting, disconnecting
and re-connecting nodes). This allows the durability service for instance to determine where non-volatile data
potentially is available and whether a remote service will still respond to requests that have been sent to it reliably.
To determine connectivity, each durability service periodically sends out a heartbeat (at a configurable interval)
and checks whether incoming heartbeats have expired. When a heartbeat from a fellow expires, the
durability service considers that fellow disconnected and expects no more answers from it. This means a new
aligner will be selected for any outstanding alignment requests for the disconnected fellow. When a heartbeat
from a newly (re)joining fellow is received, the durability service will assess whether that fellow is compatible
and if so, start exchanging information.
Please refer to the Configuration section for a detailed description of:
• //OpenSplice/DurabilityService/Network/Heartbeat

5.4.2 Determine compatibility
When a durability service detects a remote durability service in the domain it is participating in, it will determine
whether that service has a compatible configuration before it will decide to start communicating with it. The reason
not to start communicating with the newly discovered durability service would be a mismatch in configured namespaces. As explained in the section about the Name-spaces concept, having different name-spaces is not an issue
as long as they do not overlap. In case an overlap is detected, no communication will take place between the two
‘incompatible’ durability services. Such an incompatibility in your system is considered a mis-configuration and
is reported as such in the OpenSplice error log.
Once the durability service determines name-spaces are compatible with the ones of all discovered other durability
services, it will continue with selection of a master for every name-space, which is the next phase in its lifecycle.

5.4.3 Master selection
For each Namespace and Role combination there shall be at most one Master Durability Service. The Master
Durability Service coordinates single-source re-publishing of persistent data, allows parallel alignment after
system start-up, and coordinates recovery from a split-brain syndrome when nodes that have selected a different
Master (indicating that more than one state of the data may exist) become connected.
Therefore, after system start-up as well as after any topology change (i.e. late-joining nodes or a leaving master
node), a master selection process will take place for each affected Namespace/Role combination.
To control the master selection process a masterPriority attribute can be used.
Each Durability Service has a configured masterPriority attribute per namespace: an integer value between 0 and
255 that specifies the eagerness of the Durability Service to become Master for that namespace. The values 0 and
255 have a special meaning. Value 0 indicates that the Durability Service will never become Master. Value 255
indicates that the Durability Service will not use priorities but instead uses the legacy selection algorithm. If not
configured, the default is 255.
During the master selection process each Durability Service exchanges, for each namespace, its masterPriority
and quality. The namespace quality is the timestamp of the latest update of the persistent data set stored on disk;
it only plays a role in master selection initially, when no master has been chosen before and persistent data has
not been injected yet.
Each Durability Service determines the Master based upon the highest non-zero masterPriority; in case of multiple
candidates it further selects based on namespace quality (but only if persistent data has not been injected before),
and if there are still multiple candidates it selects the one with the highest system id. The local system id is an
arbitrary value that uniquely identifies a durability service. After selection, each Durability Service communicates
its determined master and, on agreement, effectuates the selection. On disagreement, which may occur if some
Durability Services temporarily had a different view of the system, the master selection process restarts until all
Durability Services have the same view of the system and have made the same selection. If no durability service
exists with a masterPriority greater than zero, then no master will be selected.
Summarizing, the precedence rules for master selection are (from high to low):
1. The namespace masterPriority
2. The namespace quality, if no data has been injected before.
3. The Durability Service system id, which is unique for each durability service.
Please refer to the Configuration section for a detailed description of:
• //OpenSplice/DurabilityService/Network/InitialDiscoveryPeriod
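As an illustration, and assuming masterPriority is set as an attribute of the name-space Policy element, two nodes could be configured as follows so that the first is preferred as master and the second never becomes master.

<!-- Node A: eager to become master for this name-space -->
<Policy nameSpace="defaultNamespace" aligner="TRUE" alignee="INITIAL" masterPriority="200"/>

<!-- Node B: never becomes master (priority 0) -->
<Policy nameSpace="defaultNamespace" aligner="TRUE" alignee="INITIAL" masterPriority="0"/>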

5.4.4 Persistent data injection
As persistent data needs to outlive system downtime, this data needs to be re-published in DDS once a domain is
started.
If only one node is started, the durability service on that node can simply re-publish the persistent data from its
disk. However, if multiple nodes are started at the same time, things become more difficult. Each one of them
may have a different set available on permanent storage due to the fact that the durability services have been stopped
at different moments in time. Therefore only one of them should be allowed to re-publish its data, to prevent
inconsistencies and duplication of data.
The steps below describe how a durability service currently determines whether or not to inject its data during
start-up:
1. Determine validity of own persistent data — During this step the durability service determines whether its
persistent store has initially been completely filled with all persistent data in the domain in the last run. If
the service was shut down in the last run during initial alignment of the persistent data, the set of data will
be incomplete and the service will restore its back-up of a full set of (older) data if that is available from a
run before that. This is done because it is considered better to re-publish an older but complete set of data
instead of a part of a newer set.
2. Determine quality of own persistent data — If persistence has been configured, the durability service will
inspect the quality of its persistent data on start-up. The quality is determined on a per-name-space level by
looking at the time-stamps of the persistent data on disk. The latest time-stamp of the data on disk is used as
the quality of the name-space. This information is useful when multiple nodes are started at the same time.
Since there can only be one source per name-space that is allowed to actually inject the data from disk into
DDS, this mechanism allows the durability services to select the source that has the latest data, because this
is generally considered the best data. If this is not the case then intervention is required: the data on the node
must be replaced with the correct data, either by a supervisor (human or system management application)
replacing the data files, or by starting the nodes in the desired sequence so that the data is replaced by alignment.
3. Determine topology — During this step, the durability service determines whether there are other durability
services in the domain and what their state is. If this service is the only one, it will select itself as the ‘best’
source for the persistent data.
4. Determine master — During this step the durability service will determine who will inject persistent data
or who has injected persistent data already. The one that will or already has injected persistent data is called
the ‘master’. This process is done on a per name-space level (see previous section).
(a) Find existing master – In case the durability service joins an already-running domain, the master has
already been determined and this one has already injected the persistent data from its disk or is doing
it right now. In this case, the durability service will set its current set of persistent data aside and will
align data from the already existing master node. If there is no master yet, persistent data has not been
injected yet.
(b) Determine new master – If the master has not been determined yet, the durability service determines
the master for itself based on who has the best quality of persistent data. In case there is more than
one service with the ‘best’ quality, the one with the highest system id (unique number) is selected.

Furthermore, a durability service that is marked as not being an aligner for a name-space cannot
become master for that name-space.
5. Inject persistent data — During this final step the durability service injects its persistent data from disk into
the running domain. This is only done when the service has determined that it is the master. In any other
situation the durability service backs up its current persistent store and fills a new store with the data it aligns
from the master durability service in the domain, or postpones alignment until a master becomes available
in the domain.

It is strongly discouraged to re-inject persistent data from a persistent store in a running system after
persistent data has been published. Behaviour of re-injecting persistent stores in a running system is
not specified and may be changed over time.

5.4.5 Discover historical data
During this phase, the durability service finds out what historical data is available in the domain that matches
any of the locally configured name-spaces. All necessary topic definitions and partition information are retrieved
during this phase. This step is performed before the historical data is actually aligned from others. The process
of discovering historical data continues during the entire lifecycle of the service and is based on the reporting of
locally-created partition-topic combinations by each durability service to all others in the domain.

5.4.6 Align historical data
Once all topic and partition information for all configured name-spaces is known, the initial alignment of historical data takes place. Depending on the configuration of the service, data is obtained either immediately after
discovering it or only once local interest in the data arises. The process of aligning historical data continues during
the entire lifecycle of the durability service.

5.4.7 Provide historical data
Once (a part of) the historical data is available in the durability service, it is able to provide historical data to local
DataReaders as well as other durability services.
Providing of historical data to local DataReaders is performed automatically as soon as the data is available.
This may be immediately after the DataReader is created (in case historical data is already available in the local
durability service at that time) or immediately after it has been aligned from a remote durability service.
Providing of historical data to other durability services is done only on request by these services. In case the
durability service has been configured to act as an aligner for others, it will respond to requests for historical data
that are received. The set of locally available data that matches the request will be sent to the durability service
that requested it.

5.4.8 Merge historical data
When a durability service discovers a remote durability service and detects that neither that service nor the service
itself is in start-up phase, it concludes that they have been running separately for a while (or the entire time)
and both may have a different (but potentially complete) set of historical data. When this situation occurs, the
configured merge-policies will determine what actions are performed to recover from this situation. The process
of merging historical data will be performed every time two separately running systems get (re-)connected.


5.5 Threads description
This section contains a short description of each durability thread. When applicable, relevant configuration parameters are mentioned.

5.5.1 ospl_durability
This is the main durability service thread. It starts most of the other threads, e.g. the listener threads that are
used to receive the durability protocol messages, and it initiates initial alignment when necessary. This thread
is responsible for periodically updating the service-lease so that the splice-daemon is aware the service is still
alive. It also periodically (every 10 seconds) checks the state of all other service threads to detect if deadlock has
occurred. If deadlock has occurred the service will indicate which thread didn’t make progress and action can
be taken (e.g. the service refrains from updating its service-lease, causing the splice daemon to execute a failure
action). Most of the time this thread is asleep.

5.5.2 conflictResolver
This thread is responsible for resolving conflicts. If a conflict has been detected and stored in the conflictQueue,
the conflictResolver thread takes the conflict, checks whether the conflict still exists, and if so, starts the procedure
to resolve the conflict (i.e., start to determine a new master, send out sample requests, etc).

5.5.3 statusThread
This thread is responsible for sending status messages to other durability services. These messages are periodically sent between durability services to inform each other about their state (e.g., INITIALIZING or TERMINATING).
Configuration parameters:
• //OpenSplice/DurabilityService/Watchdog/Scheduling

5.5.4 d_adminActionQueue
The durability service maintains a queue to schedule timed actions. The d_adminActionQueue thread periodically checks
(every 100 ms) whether an action is scheduled. Example actions are: electing a new master, detecting new local
groups, and deleting historical data.
Configuration parameters:
• //OpenSplice/DurabilityService/Network/Heartbeat/Scheduling

5.5.5 AdminEventDispatcher
Communication between the splice-daemon and durability service is managed by events. The AdminEventDispatcher thread listens for and acts upon these events. For example, the creation of a new topic is noticed by the
splice-daemon, which generates an event for the durability service, which in turn schedules an action to request historical data for the topic.

5.5.6 groupCreationThread
The groupCreationThread is responsible for the creation of groups that exist in other federations. When a durability service receives a newGroup message from another federation, it must create the group locally in order to
acquire data for it. Creation of a group may fail in case a topic is not yet known. The thread will retry with a 10ms
interval.


5.5.7 sampleRequestHandler
This thread is responsible for the handling of sampleRequests. When a durability service receives a d_sampleRequest message (see the sampleRequestListener thread) it will not immediately answer the request, but wait until the time to combine requests has expired (see //OpenSplice/DurabilityService/Network/Alignment/RequestCombinePeriod). When this time has expired the sampleRequestHandler answers the request by collecting the requested data and sending it as d_sampleChain
messages to the requestor.
Configuration parameters:
• //OpenSplice/DurabilityService/Network/Alignment/AlignerScheduling

5.5.8 resendQueue
This thread is responsible for injecting messages into the group after they have been rejected before. When a
durability service has received historical data from a fellow, the historical data is injected into the group (see
d_sampleChain). Injection of historical data can be rejected, e.g., when resource limits are reached. When
this happens, a new attempt to inject the data is scheduled on the resendQueue thread. This thread will try
to deliver the data 1 second later.
Configuration parameters:
• //OpenSplice/DurabilityService/Network/Alignment/AligneeScheduling

5.5.9 masterMonitor
The masterMonitor is the thread that handles the selection of a new master. This thread is invoked when the
conflict resolver detects that a master conflict has occurred. The masterMonitor is responsible for collecting
master proposals from other fellows and sending out proposals to other fellows.

5.5.10 groupLocalListenerActionQueue
This thread is used to handle historical data requests from specific readers, and to handle delayed alignment (see
//OpenSplice/DurabilityService/NameSpaces/Policy[@delayedAlignment])

5.5.11 d_groupsRequest
The d_groupsRequest thread is responsible for processing incoming d_groupsRequest messages from other fellows. When a durability service receives a d_groupsRequest message from a fellow, it will send information
about its groups to the requestor by means of d_newGroup messages. This thread collects the group information,
packs it in d_newGroup messages and sends them to the requestor. This thread only does something when a
d_groupsRequest message has been received from a fellow; most of the time it will sleep.

5.5.12 d_nameSpaces
This thread is responsible for processing incoming d_nameSpaces messages from other fellows. Durability services send each other their name-space states so that they can detect potential conflicts. The d_nameSpaces thread
processes and administers every incoming d_nameSpace. When a conflict is detected, the conflict is scheduled,
which may cause the conflictResolver thread to kick in.


5.5.13 d_nameSpacesRequest
The d_nameSpacesRequest thread is responsible for processing incoming d_nameSpacesRequest messages from
other fellows. A durability service can request the namespaces from a fellow by sending a d_nameSpacesRequest
message to the fellow. Whenever a durability service receives a d_nameSpacesRequest message it will respond
by sending its set of namespaces to the fellow. This thread handles incoming d_nameSpacesRequest messages. As
a side effect new fellows can be discovered if a d_nameSpacesRequest is received from an unknown fellow.

5.5.14 d_status
The d_status thread is responsible for processing incoming d_status messages from other fellows. Durability services periodically send each other status information (see the statusThread). NOTE: in earlier versions missing
d_status messages could lead to the conclusion that a fellow had been removed. In recent versions this mechanism has been replaced so that the durability service tracks the liveliness of remote federations based on
heartbeats (see the dcpsHeartbeatListener thread). Effectively, the d_status message is no longer used to verify
liveliness of remote federations; it is only used to transfer the durability state of a remote federation.

5.5.15 d_newGroup
The d_newGroup thread is responsible for handling incoming d_newGroup messages from other fellows. Durability services inform each other about groups in the namespaces. They do that by sending d_newGroup messages
to each other (see also thread d_groupsRequest). The d_newGroup thread is responsible for handling incoming
groups.

5.5.16 d_sampleChain
The d_sampleChain thread handles incoming d_sampleChain messages from other fellows. When a durability
service answers a d_sampleRequest, it must collect the requested data and send it to the requestor. The collected
data is packed in d_sampleChain messages. The d_sampleChain thread handles incoming d_sampleChain messages and applies the configured merge policy for the data. For example, in case of a MERGE it injects all the
received data in the local group and delivers the data to the available readers.
Configuration parameters:
• //OpenSplice/DurabilityService/Network/Alignment/AligneeScheduling

5.5.17 d_sampleRequest
The d_sampleRequest thread is responsible for handling incoming d_sampleRequest messages from other fellows. A durability service can request historical data from a fellow by sending a d_sampleRequest message. The
d_sampleRequest thread is used to process d_sampleRequest messages. Because d_sampleRequest messages are
not handled immediately, they are stored in a list and handled later on (see thread sampleRequestHandler).

5.5.18 d_deleteData
The d_deleteData thread is responsible for handling incoming d_deleteData messages from other fellows. An
application can call delete_historical_data(), which causes all historical data up to that point to be deleted. To propagate
deletion of historical data to all available durability services in the system, durability services send each other a
d_deleteData message. The d_deleteData thread handles incoming d_deleteData messages and takes care that the
relevant data is deleted. This thread will only be active after delete_historical_data() is called.


5.5.19 dcpsHeartbeatListener
The dcpsHeartbeatListener is responsible for the liveliness detection of remote federations. This thread listens
to DCPSHeartbeat messages that are sent by federations. It is used to detect new federations or federations that
disconnect. This thread will only do something when there is a change in federation topology. Most of the time it
will be asleep.

5.5.20 d_capability
This thread is responsible for processing d_capability messages from other fellows. As soon as a durability service
detects a fellow it will send its list of capabilities to the fellow. The fellow can use this list to find out what functionality
is supported by the durability service. Similarly, the durability service can receive capabilities from the fellow.
This thread is used to process the capabilities sent by a fellow. It will only do something when a fellow
is detected.

5.5.21 remoteReader
The remoteReader thread is responsible for the detection of remote readers on other federations. The DDSI service
performs discovery and reader-writer matching. This is an asynchronous mechanism. When a durability service
(say A) receives a request from a fellow durability service (say B) and DDSI is used as networking service,
then A cannot be sure that DDSI has already detected the reader on B that should receive the answer to the
request. To ensure that durability services will only answer if all relevant remote readers have been detected, the
remoteReader thread keeps track of the readers that have been discovered by DDSI. Only when all relevant readers
have been discovered are durability services allowed to answer requests. This prevents DDSI from dropping
messages destined for readers that have not been discovered yet.

5.5.22 persistentDataListener
The persistentDataListenerThread is responsible for persisting durable data. When a durability service retrieves
persistent data, the data is stored in a queue. The persistentDataListener thread retrieves the data from the queue
and stores it in the persistent store. For large data sets persisting the data can take quite some time, depending
mostly on the performance of the disk.
Note this thread is only created when persistency is enabled (//OpenSplice/DurabilityService/Persistent/StoreDirectory has a value set).
Configuration parameters:
• //OpenSplice/DurabilityService/Persistent/Scheduling

5.5.23 historicalDataRequestHandler
This thread is responsible for handling incoming historicalDataRequest messages from durability clients. In case
an application does not have the resources to run a durability service but still wants to acquire historical data it can
configure a client. The client sends HistoricalDataRequest messages to the durability service. These messages are
handled by the historicalDataRequestHandler thread.
Note this thread is only created when client durability is enabled (//OpenSplice/DurabilityService/ClientDurability
element exists)

5.5.24 durabilityStateListener
This thread is responsible for handling incoming durabilityStateRequest messages from durability clients.
Note this thread is only created when client durability is enabled (//OpenSplice/DurabilityService/ClientDurability
element exists)


6 The Networking Service
When communication endpoints are located on different computing nodes or on different single processes, the
data produced using the local Domain Service must be communicated to the remote Domain Services and the
other way around. The Networking Service provides a bridge between the local Domain Service and a network
interface. Multiple Networking Services can exist next to each other; each serving one or more physical network
interfaces. The Networking Service is responsible for forwarding data to the network and for receiving data from
the network.
There are two implementations of the networking service: The Native Networking Service and The Secure Native
Networking Service.
There are detailed descriptions of all of the available configuration parameters and their purpose in the Configuration section.

6.1 The Native Networking Service
For large-scale LAN-based systems that demand maximum throughput, the native RTNetworking service is the
optimal implementation of DDS networking for Vortex OpenSplice and is both highly scalable and configurable.
The Native Networking Service can be configured to distinguish multiple communication channels with different
QoS policies. These policies will be used to schedule individual messages to specific channels, which may be
configured to provide optimal performance for a specific application domain.
The exact fulfilment of these responsibilities is determined by the configuration of the Networking Service.
Please refer to the Configuration section for fully-detailed descriptions of how to configure:
• //OpenSplice/NetworkService
• //OpenSplice/SNetworkService

6.2 The Secure Native Networking Service
There is a secure version of the native networking service available.
Please refer to the Configuration section for details.

6.2.1 Compression
This section describes the options available for configuring compression of the data packets sent by the Networking
Service.
In early OpenSplice 6.x releases, the zlib library was used at its default setting whenever the compression option
on a network partition was enabled. Now it is possible to configure zlib for less CPU usage or for more compression
effort, or to select a compressor written specifically for high speed, or to plug in an alternative algorithm.
The configuration for compression in a Networking Service instance is contained in the optional top-level Element
Compression. These settings apply to all partitions in which compression is enabled.

Please refer to the Configuration section for a detailed description of:
• //OpenSplice/NetworkService/Compression
6.2.1.1 Availability
The compression functionality is available on enterprise platforms (i.e. Linux, Windows and Solaris). On embedded platforms there are no built-in compressors included, but plugins may be used.
6.2.1.2 How to set the level parameter in zlib
Set the Attribute PluginParameter to a single digit between 0 (no compression) and 9 (maximum compression, more CPU usage). Leave the Attribute PluginLibrary and Attribute PluginInitFunction blank.
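For example, the following fragment enables maximum zlib compression for all partitions on which compression is switched on.

<NetworkService name="networking">
  <!-- PluginParameter 9 = maximum zlib compression; PluginLibrary and PluginInitFunction are left blank -->
  <Compression PluginParameter="9"/>
</NetworkService>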
6.2.1.3 How to switch to other built-in compressors
Set the Attribute PluginInitFunction to the name of the initialisation function of one of the built-in
compressors. These are ospl_comp_zlib_init, ospl_comp_lzf_init and ospl_comp_snappy_init for
zlib, lzf and snappy respectively. As a convenience, the short names zlib, lzf and snappy are also recognized.

Please note that not all compressors are available on all platforms. In this release zlib is available on
Linux, Windows and Solaris; lzf and snappy are available only on RedHat Linux.
6.2.1.4 How to write a plugin for another compression library
Other compression algorithms may be used by the Networking Service. In order to do this it is necessary to build
a library which maps the OpenSplice compression API onto the algorithm in question. This library may contain
the actual compressor code or be dynamically linked to it.
Definitions for the compression API are provided in the include file plugin/nw_compPlugin.h.
Five functions must be implemented.
The maxsize function. This function is called when sizing a buffer into which to compress a network packet. It
should therefore return the worst-case (largest) possible size of compressed data for a given uncompressed
size. In most cases it is acceptable to return the uncompressed size, as the compress operation is allowed to
fail if the resulting data is larger than the original (in which case the data is sent uncompressed). However,
snappy for example will not attempt compression unless the destination buffer is large enough to take the
worst possible result.
The compress function. This function takes a block of data of a given size and compresses it into a buffer of a
given size. It returns the actual size of the compressed data, or zero if an error occurred (e.g. the destination
buffer was not large enough).
The uncompress function. This function takes a block of compressed data of given size and uncompresses it
into a buffer also of given size. It returns the actual size of the uncompressed data, or zero if an error occurred
(e.g. the data was not in a valid compressed format).
The exit function. This function is called at service shutdown and frees any resources used by the plugin.
The init function. This function is called at service startup. It sets up the plugin by filling in a structure containing pointers to the four functions listed above. It also is passed the value of the Attribute
PluginParameter. The plugin configuration structure includes a pointer to some unspecified state data
which may be used to hold this parameter and/or any storage required by the compressor. This pointer is
passed into the compress and exit functions.


By way of illustration, here is a simplified version of the code for zlib. The implementation is merely a veneer on
the zlib library to present the required API.
#include "nw_compPlugin.h"
#include "os_heap.h"
#include
unsigned long ospl_comp_zlib_maxsize (unsigned long srcsize)
{
/* if the data can’t be compressed into the same size buffer we’ll send
uncompressed instead */
return srcsize;
}
unsigned long ospl_comp_zlib_compress (void *dest, unsigned long destlen,
const void *source, unsigned long srclen, void *param)
{
unsigned long compdsize = destlen;
if (compress2 (dest, &compdsize, source, srclen, *(int *)param) == Z_OK)
{
return compdsize;
}
else
{
return 0;
}
}
unsigned long ospl_comp_zlib_uncompress (void *dest, unsigned long
destlen, const void *source, unsigned long srclen)
{
unsigned long uncompdsize = destlen;
if (uncompress (dest, &uncompdsize, source, srclen) == Z_OK)
{
return uncompdsize;
}
else
{
return 0;
}
}
void ospl_comp_zlib_exit (void *param)
{
os_free (param);
}
void ospl_comp_zlib_init (nw_compressor *config, const char *param)
{
/* param should contain an integer from 0 to 9 */
int *iparam = os_malloc (sizeof (int));
if (strlen (param) == 1)
{
*iparam = atoi (param);
}
else
{
*iparam = Z_DEFAULT_COMPRESSION;
}
config->maxfn = ospl_comp_zlib_maxsize;
config->compfn = ospl_comp_zlib_compress;
config->uncompfn = ospl_comp_zlib_uncompress;
config->exitfn = ospl_comp_zlib_exit;
config->parameter = (void *)iparam;
}


6.2.2 How to configure for a plugin
Step 1: Set Attribute PluginLibrary to the name of the library containing the plugin implementation.
Step 2: Set Attribute PluginInitFunction to the name of the initialisation function within that library.
Step 3: If the compression method is controlled by a parameter, set Attribute PluginParameter to configure
it.
Please refer to the Configuration section for fully-detailed descriptions of how to configure:
• //OpenSplice/NetworkService/Compression[@PluginLibrary]
• //OpenSplice/NetworkService/Compression[@PluginInitFunction]
• //OpenSplice/NetworkService/Compression[@PluginParameter]
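For example, a plugin could be configured along the following lines; the library name, initialisation function and parameter value are placeholders for this illustration.

<NetworkService name="networking">
  <!-- Load a custom compressor; the same configuration must be used on all nodes -->
  <Compression PluginLibrary="mycompressor"
               PluginInitFunction="my_comp_init"
               PluginParameter="fast"/>
</NetworkService>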

6.2.3 Constraints

The Networking Service packet format does not include identification of which compressor is in use.
It is therefore necessary to use the same configuration on all nodes.

7 The DDSI2 and DDSI2E Networking Services
The purpose and scope of the Data-Distribution Service Interoperability Wire Protocol is to ensure that applications based on different vendors’ implementations of DDS can interoperate. The protocol was standardized by the
OMG in 2008, and was designed to meet the specific requirements of data-distribution systems.
Features of the DDSI protocol include:
• Performance and Quality-of-Service properties to enable best-effort and reliable publish-subscribe communications for real-time applications over standard IP networks.
• Fault tolerance to allow the creation of networks without single points of failure.
• Plug-and-play Connectivity so that new applications and services are automatically discovered and applications can join and leave the network at any time without the need for reconfiguration.
• Configurability to allow balancing the requirements for reliability and timeliness for each data delivery.
• Scalability to enable systems to potentially scale to very large networks.
DDSI-Extended (DDSI2E) is an extended version of the DDSI2 networking service, giving extra features for:
• Network partitions: Network partitions provide the ability to use alternative multicast addresses for combinations of DCPS topics and partitions to separate out traffic flows, for example for routing or load reduction.
• Security: Encryption can be configured per network partition. This enables configuring encrypted transmission for subsets of the data.
• Bandwidth limiting and traffic scheduling: Any number of ‘network channels’ can be defined, each with
an associated transport priority. Application data is routed via the network channel with the best matching
priority. For each network channel, outgoing bandwidth limits can be set and the IP ‘differentiated services’
options can be controlled.
The remainder of this section gives background on these two services and describes how the various mechanisms
and their configuration parameters interact.
Please refer to the Configuration section for fully-detailed descriptions of:
• //OpenSplice/DDSI2Service
• //OpenSplice/DDSI2EService

7.1 DDSI Concepts
Both the DDSI 2.1 and 2.2 standards are intimately related to the DDS 1.2 and 1.4 standards, with a clear
correspondence between the entities in DDSI and those in DCPS. However, this correspondence is not one-to-one.
In this section we give a high-level description of the concepts of the DDSI specification, with hardly any reference
to the specifics of the Vortex OpenSplice implementation, DDSI2, which are addressed in subsequent sections.
This division was chosen to aid readers interested in interoperability to understand where the specification ends
and the Vortex OpenSplice implementation begins.

7.1.1 Mapping of DCPS domains to DDSI domains
In DCPS, a domain is uniquely identified by a non-negative integer, the domain id. DDSI maps this domain id to
UDP/IP port numbers to be used for communicating with the peer nodes — these port numbers are particularly
important for the discovery protocol — and this mapping of domain ids to UDP/IP port numbers ensures that
accidental cross-domain communication is impossible with the default mapping.
DDSI does not communicate the DCPS domain id in the discovery protocol; it assumes that each domain id
maps to a unique set of port numbers. While it is unusual to change the mapping, the specification requires this to be
possible, and this means that two different DCPS domain ids can be mapped to a single DDSI domain.

7.1.2 Mapping of DCPS entities to DDSI entities
Each DCPS domain participant in a domain is mirrored in DDSI as a DDSI participant. These DDSI participants
drive the discovery of participants, readers and writers in DDSI via the discovery protocols. By default each DDSI
participant has a unique address on the network in the form of its own UDP/IP socket with a unique port number.
Any data reader or data writer created by a DCPS domain participant is mirrored in DDSI as a DDSI reader or
writer. In this translation, some of the structure of the DCPS domain is lost, because DDSI has no knowledge of
DCPS Subscribers and Publishers. Instead, each DDSI reader is the combination of the corresponding DCPS data
reader and the DCPS subscriber it belongs to; similarly, each DDSI writer is a combination of the corresponding
DCPS data writer and DCPS publisher. This corresponds to the way the DCPS built-in topics describe the DCPS
data readers and data writers, as there are no built-in topics for describing the DCPS subscribers and publishers
either.
In addition to the application-created readers and writers (referred to as ‘endpoints’), DDSI participants have a
number of DDSI built-in endpoints used for discovery and liveliness checking/asserting. The most important
ones are those absolutely required for discovery: readers and writers for the discovery data concerning DDSI
participants, DDSI readers and DDSI writers. Some other ones exist as well, and a DDSI implementation can
leave out some of these if it has no use for them. For example, if a participant has no writers, it doesn’t strictly
need the DDSI built-in endpoints for describing writers, nor the DDSI built-in endpoint for learning of readers of
other participants.

7.1.3 Reliable communication
Best-effort communication is simply a wrapper around UDP/IP: the packet(s) containing a sample are sent to the
addresses at which the readers reside. No state is maintained on the writer. If a packet is lost, the reader will
simply drop the sample and continue with the next one.
When reliable communication is used, the writer does maintain a copy of the sample, in case a reader detects it
has lost packets and requests a retransmission. These copies are stored in the writer history cache (or WHC) of the
DDSI writer. The DDSI writer is required to periodically send Heartbeats to its readers to ensure that all readers
will learn of the presence of new samples in the WHC even when packets get lost.
If a reader receives a Heartbeat and detects it did not receive all samples, it requests a retransmission by sending
an AckNack message to the writer, in which it simultaneously informs the writer up to what sample it has received
everything, and which ones it has not yet received. Whenever the writer indicates it requires a response to a
Heartbeat the readers will send an AckNack message even when no samples are missing. In this case, it becomes
a pure acknowledgement.
The combination of these behaviours in principle allows the writer to remove old samples from its WHC when it
fills up too far, and allows readers to always receive all data. A complication exists in the case of unresponsive
readers: readers that do not respond to a Heartbeat at all, or that for some reason fail to receive some samples
despite repeated retransmissions. The specification leaves the treatment of such readers unspecified.
Note that while this Heartbeat/AckNack mechanism is very straightforward, the specification actually allows
suppressing heartbeats, merging of AckNacks and retransmissions, etc. The use of these techniques is required to
allow for a performant DDSI implementation, whilst avoiding the need for sending redundant messages.

7.1.4 DDSI-specific transient-local behaviour
The above describes the essentials of the mechanism used for samples of the ‘volatile’ durability kind, but the
DCPS specification also provides ‘transient-local’, ‘transient’ and ‘persistent’ data. Of these, the DDSI specification currently only covers transient-local, and this is the only form of durable data available when interoperating
across vendors.
In DDSI, transient-local data is implemented using the WHC that is normally used for reliable communication.
For transient-local data, samples are retained even when all readers have acknowledged them. With the default
history setting of KEEP_LAST with history_depth = 1, this means that late-joining readers can still obtain
the latest sample for each existing instance.
Naturally, once the DCPS writer is deleted (or disappears for whatever reason), the DDSI writer disappears as
well, and with it, its history. For this reason, transient data is generally much to be preferred over transient-local
data. In Vortex OpenSplice the durability service implements all three durability kinds without requiring any
special support from the networking services, ensuring that the full set of durability features is always available
between Vortex OpenSplice nodes.

7.1.5 Discovery of participants & endpoints
DDSI participants discover each other by means of the ‘Simple Participant Discovery Protocol’, or ‘SPDP’ for
short. This protocol is based on periodically sending a message containing the specifics of the participant to a
set of known addresses. By default, this is a standardised multicast address (239.255.0.1; the port number is
derived from the domain id) that all DDSI implementations listen to.
Particularly important in the SPDP message are the unicast and multicast addresses at which the participant can
be reached. Typically, each participant has a unique unicast address, which in practice means all participants on a
node have a different UDP/IP port number in their unicast address. In a multicast-capable network, it doesn’t
matter what the actual address (including port number) is, because all participants will learn them through these
SPDP messages.
The protocol does allow for unicast-based discovery, which requires listing the addresses of machines where
participants may be located, and ensuring each participant uses one of a small set of port numbers. Because of
this, some of the port numbers are derived not only from the domain id, but also from a ‘participant index’, which
is a small non-negative integer, unique to a participant within a node. (The DDSI2 service adds an indirection and
uses at most one participant index regardless of how many DCPS participants it handles.)
Once two participants have discovered each other, and both have matched the DDSI built-in endpoints their peer
is advertising in the SPDP message, the ‘Simple Endpoint Discovery Protocol’ or ‘SEDP’ takes over, exchanging
information on the DCPS data readers and data writers in the two participants.
The SEDP data is handled as reliable, transient-local data. Therefore, the SEDP writers send Heartbeats, the SEDP
readers detect they have not yet received all samples and send AckNacks requesting retransmissions, the writer
responds to these and eventually receives a pure acknowledgement informing it that the reader has now received
the complete set.

Note that the discovery process necessarily creates a burst of traffic each time a participant is added
to the system: all existing participants respond to the SPDP message, following which all start exchanging SEDP data.

7.2 Vortex OpenSplice DDSI2 specifics
7.2.1 Translating between Vortex OpenSplice and DDSI
Given that DDSI is the DDS interoperability specification, that the mapping between DCPS entities and DDSI
entities is straightforward, and that Vortex OpenSplice is a full implementation of the DDS specification, one
might expect that the relationship between Vortex OpenSplice and its DDSI implementation, DDSI2, is trivial. Unfortunately, this is not the case, and it does show in a number of areas. A high-level overview such as this paragraph
is not the place for the details of these cases, but they will be described in due course.
The root cause of these complexities is a difference in design philosophy between Vortex OpenSplice and the more
recent DDSI.
DDSI is very strictly a peer-to-peer protocol at the level of individual endpoints; it requires a large amount of
discovery traffic and (at least when implemented naively) it scales very badly. It is exactly these three problems
that Vortex OpenSplice was designed to avoid, and it does so successfully with its native RTNetworking service.
Because of this design for scalability and the consequent use of service processes rather than forcing everything
into self-contained application processes, there are various ways in which DDSI2 has to translate between the
two worlds. For example, queuing and buffering and, consequently, blocking behaviour are subtly different;
DDSI2 needs to also perform local discovery of DCPS endpoints to gather enough information for faithfully
representing the system in terms of DDSI, it needs to translate between completely different namespaces (native
Vortex OpenSplice identifiers are very different from the GUIDs used by DDSI), and it needs to work around
receiving asynchronous notifications for events one would expect to be synchronous in DDSI.
This Guide aims to not only provide guidance in configuring DDSI2, but also help in understanding the trade-offs
involved.

7.2.2 Federated versus Standalone deployment
As has been described elsewhere (see the Overview in this Guide and also the Getting Started Guide), Vortex
OpenSplice has multiple deployment models selectable in the configuration file (some of these require a license).
For DDSI2, there is no difference between the various models: it simply serves whatever DCPS participants are
in the same ‘instance’, whether that instance be a federation of processes on a single node, all attached to a shared
memory segment managed by a set of Vortex OpenSplice service processes on that node, or a standalone one in
which a single process incorporates the Vortex OpenSplice services as libraries.
This Guide ignores the various deployment modes, using the terminology associated with the federated deployment mode because that mode is the driving force behind several of the user-visible design decisions in DDSI2.
In consequence, for a standalone deployment, the term ‘node’ as used in this Guide refers to a single process.

7.2.3 Discovery behaviour
7.2.3.1 Local discovery and built-in topics
Inside one node, DDSI2 monitors the creation and deletion of local DCPS domain participants, data readers
and data writers. It relies on the DCPS built-in topics to keep track of these events, and hence the use of
DDSI requires that these topics are enabled in the configuration, which is the default (see the description of
//OpenSplice/Domain/BuiltinTopics[@enabled] in the Configuration section).
If the built-in topics must be disabled to reduce network load, then the alternative is to instruct DDSI2 to completely ignore them using the DCPS topic/partition to network partition mapping available in the enhanced version,
DDSI2E.
A separate issue is that of the DCPS built-in topics when interoperating with other implementations. In Vortex
OpenSplice the built-in topics are first-class topics, i.e. the only difference between application topics and the
built-in topics in Vortex OpenSplice is that the built-in topics are pre-defined and that they are published and used
by the Vortex OpenSplice services. This in turn allows the RTNetworking service to avoid discovery of individual
domain participants and endpoints, enabling its excellent scalability.
Conversely, DDSI defines a different and slightly extended representation for the information in the built-in topics
as part of the discovery protocol specification, with a clear intent to locally reconstruct the samples of the built-in
topics. Unfortunately, this also means that the DCPS built-in topics become a special case.
Taken together, DDSI2 is in the unfortunate situation of having to straddle two very different approaches. While
local reconstruction of the DCPS built-in topics by DDSI2 is clearly possible, it would negatively impact the
handling of transient data. Since handling transient data is one of the true strengths of Vortex OpenSplice, DDSI2
currently does not perform this reconstruction, with the unfortunate implication that loss of liveliness will not be
handled fully when interoperating with another DDSI implementation.
7.2.3.2 Proxy participants and endpoints
DDSI2 is what the DDSI specification calls a ‘stateful’ implementation. Writers only send data to discovered
readers and readers only accept data from discovered writers. (There is one exception: the writer may choose to
multicast the data, and anyone listening will be able to receive it; if a reader has already discovered the writer but
not vice versa, it may accept the data even though the connection is not fully established yet.) Consequently, for
each remote participant and reader or writer, DDSI2 internally creates a proxy participant, proxy reader or proxy
writer. In the discovery process, writers are matched with proxy readers, and readers are matched with proxy
writers, based on the topic and type names and the QoS settings.
Proxies have the same natural hierarchy that ‘normal’ DDSI entities have: each proxy endpoint is owned by
some proxy participant, and once the proxy participant is deleted, all of its proxy endpoints are deleted as well.
Participants assert their liveliness periodically, and when nothing has been heard from a participant for the lease
duration published by that participant in its SPDP message, the lease expires, triggering a clean-up.
Under normal circumstances, deleting endpoints simply triggers disposes and unregisters in the SEDP protocol, and,
similarly, deleting a participant also creates special messages that allow the peers to immediately reclaim resources
instead of waiting for the lease to expire.
7.2.3.3 Sharing of discovery information
DDSI2 is designed to service any number of participants, as one would expect for a service capable of being
deployed in a federated system. This obviously means it is aware of all participants, readers and writers on a node.
It also means that the discovery protocol as sketched earlier is rather wasteful: there is no need for each individual
participant serviced by DDSI2 to run the full discovery protocol for itself.
Instead of implementing the protocol as suggested by the standard, DDSI2 shares all discovery activities amongst the participants, allowing one to add participants on a node with only a minimal impact on the system.
It is even possible to have only a single DDSI participant on each node,
which then becomes the virtual owner of all the endpoints serviced by that instance of DDSI2. (See
Combining multiple participants and refer to the Configuration section for a detailed description of
//OpenSplice/DDSI2Service/Internal/SquashParticipants.) In this latter mode, there is no
discovery penalty at all for having many participants, but evidently, any participant-based liveliness monitoring
will be affected.
Because other implementations of the DDSI specification may be written on the assumption that all participants
perform their own discovery, it is possible to simulate that with DDSI2. It will not actually perform the discovery
for each participant independently, but it will generate the network traffic as if it does.
Please refer to the Configuration section for detailed descriptions of:
• //OpenSplice/DDSI2Service/Internal/BuiltinEndpointSet
• //OpenSplice/DDSI2Service/Internal/ConservativeBuiltinReaderStartup
(However, please note that at the time of writing, we are not aware of any DDSI implementation that requires the
use of these settings.)
By sharing the discovery information across all participants in a single node, each new participant or endpoint
is immediately aware of the existing peers and will immediately try to communicate with these peers. This may
generate some redundant network traffic if these peers take a significant amount of time for discovering this new
participant or endpoint.
Another advantage (particularly in a federated deployment) is that the amount of memory required for discovery
and the state of the remote entities is independent of the number of participants that exist locally.

7.2.3.4 Lingering writers
When an application deletes a reliable DCPS data writer, there is no guarantee that all its readers have already
acknowledged the correct receipt of all samples. In such a case, DDSI2 lets the writer (and the owning participant
if necessary) linger in the system for some time, controlled by the Internal/WriterLingerDuration
option. The writer is deleted when all samples have been acknowledged by all readers or the linger duration has
elapsed, whichever comes first.
The writer linger duration setting is currently not applied when DDSI2 is requested to terminate. In a federated
deployment it is unlikely to visibly affect system behaviour, but in a standalone deployment data written just
before terminating the application may be lost.
Please refer to the Configuration section for a detailed description of:
• //OpenSplice/DDSI2Service/Internal/WriterLingerDuration
7.2.3.5 Start-up mode
A similar issue exists when starting DDSI2: DDSI discovery takes time, and when data is written immediately
after DDSI2 has started, it is likely that the discovery process hasn’t completed yet and some remote readers have
not yet been discovered. This would cause the writers to throw away samples for lack of interest, even though
matching readers already existed at the time of starting. For best-effort writers, this is perhaps surprising but still
acceptable; for reliable writers, however, it would be very counter-intuitive.
Hence the existence of the so-called ‘start-up mode’, during which all volatile reliable writers are treated as-if they
are transient-local writers. Transient-local data is meant to ensure samples are available to late-joining readers;
the start-up mode uses this same mechanism to ensure that late-discovered readers will also receive the data. This
treatment of volatile data as-if it were transient-local happens entirely within DDSI2 and is invisible to the outside
world, other than the availability of some samples that would not otherwise be available.
Once DDSI2 has completed its initial discovery, it has built up its view of the network and can locally match new
writers against already existing readers, and consequently keeps any new samples published in a writer history
cache because these existing readers have not acknowledged them yet. This is why the start-up mode is tied to the
start-up of the DDSI2 service, rather than to that of an individual writer.
Unfortunately it is impossible to detect with certainty when the initial discovery process has been completed and
therefore the time DDSI2 remains in this start-up mode is controlled by an option: General/StartupModeDuration.
While in general this start-up mode is beneficial, it is not always so. There are two downsides: the first is that
during the start-up period, the writer history caches can grow significantly larger than one would normally expect;
the second is that it does mean large amounts of historical data may be transferred to readers discovered relatively
late in the process.
In a federated deployment on a local-area network, the likelihood of this behaviour causing problems is negligible,
as in such a configuration the DDSI2 service typically starts seconds before the applications and, in addition, the
discovery times are short. The other extreme is a single-process deployment in a wide-area network, where the
application starts immediately and discovery times may be long.

7.2.4 Writer history QoS and throttling
The DDSI specification heavily relies on the notion of a writer history cache (WHC) within which a sequence
number uniquely identifies each sample. The original Vortex OpenSplice design has a different division of responsibilities between various components than what is assumed by the DDSI specification and this includes the
WHC. Despite the different division, the resulting behaviour is the same.
DDSI2 bridges this divide by constructing its own WHC when needed. This WHC integrates two different indices
on the samples published by a writer: one is on sequence number, which is used for retransmitting lost samples,
and one is on key value and is used for retaining the current state of each instance in the WHC.
The index on key value allows dropping samples from the index on sequence number when the state of an instance
is overwritten by a new sample. For transient-local, it conversely (also) allows retaining the current state of each
instance even when all readers have acknowledged a sample.
The index on sequence number is required for retransmitting old data, and is therefore needed for all reliable
writers. The index on key values is always needed for transient-local data, and can optionally be used for other
writers using a history setting of KEEP_LAST with depth 1. (The Internal/AggressiveKeepLast1Whc
setting controls this behaviour.) The advantage of an index on key value in such a case is that superseded samples
can be dropped aggressively, instead of having to deliver them to all readers; the disadvantage is that it is somewhat
more resource-intensive.
Writer throttling is based on the WHC size using a simple bang-bang controller. Once the WHC contains
Internal/Watermarks/WhcHigh bytes in unacknowledged samples, it stalls the writer until the number
of bytes in unacknowledged samples drops below Internal/Watermarks/WhcLow.
While ideally only the one writer would be stalled, the interface between the Vortex OpenSplice kernel and DDSI2
is such that other outgoing traffic may be stalled as well. See Unresponsive readers & head-of-stream blocking.
Please refer to the Configuration section for detailed descriptions of:
• //OpenSplice/DDSI2Service/Internal/AggressiveKeepLast1Whc
• //OpenSplice/DDSI2Service/Internal/Watermarks/WhcHigh
• //OpenSplice/DDSI2Service/Internal/Watermarks/WhcLow
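For example, a sketch of a configuration that enables the aggressive KEEP_LAST(1) WHC mode and adjusts the throttling watermarks is shown below; the byte values are purely illustrative, and the service name attribute is assumed to match the name used in the Domain section of the configuration:
<DDSI2Service name="ddsi2">
  <Internal>
    <AggressiveKeepLast1Whc>true</AggressiveKeepLast1Whc>
    <Watermarks>
      <WhcLow>1000</WhcLow>
      <WhcHigh>100000</WhcHigh>
    </Watermarks>
  </Internal>
</DDSI2Service>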

7.2.5 Unresponsive readers & head-of-stream blocking
For reliable communications, DDSI2 must retain sent samples in the WHC until they have been acknowledged.
Especially in case of a KEEP_ALL history kind, but also in the default case when the WHC is not aggressively
dropping old samples of instances (Internal/AggressiveKeepLast1Whc), a reader that fails to acknowledge the samples in a timely manner will cause the WHC to run into resource limits.
The correct treatment suggested by the DDS specification is to simply take the writer history QoS setting, apply
this to the DDSI2 WHC, and block the writer up to its ‘max_blocking_time’ QoS setting. However, the scalable architecture of Vortex OpenSplice renders this simple approach infeasible because of the division of labour
between the application processes and the various services. Of course, even if it were a possible approach, the
problem would still not be gone entirely, as one unresponsive (for whatever reason) reader would still be able
to prevent the writer from making progress and thus prevent the system from making progress if the writer is a
critical one.
Because of this, once DDSI2 hits a resource limit on a WHC, it blocks the sequence of outgoing samples for
up to Internal/ResponsivenessTimeout. If this timeout is set larger than roughly the domain expiry
time (//OpenSplice/Domain/Lease/ExpiryTime), it may cause entire nodes to lose liveliness. The
enhanced version, DDSI2E, has the ability to use multiple queues and can avoid this problem; please refer to
Channel configuration.
Any readers that fail to acknowledge samples in time will be marked ‘unresponsive’ and be treated as best-effort
readers until they start acknowledging data again. Readers that are marked unresponsive by a writer may therefore
observe sample loss. The ‘sample lost’ status of the data readers can be used to detect this.
One particular case where this can easily occur is if a reader becomes unreachable, for example because a network
cable is unplugged. While this will eventually cause a lease to expire, allowing the proxy reader to be removed
and the writer to no longer retain data for it, in the meantime the writer can easily run into a WHC limit. This will
then cause the writer to mark the reader as unresponsive, and the system will continue to operate. The presence
of unacknowledged data in a WHC as well as the existence of unresponsive readers will force the publication of
Heartbeats, and so unplugging a network cable will typically induce a stream of Heartbeats from some writers.
Another case where this can occur is with a very fast writer, and a reader on a slow host, and with large buffers
on both sides: then the time needed by the receiving host to process the backlog can become longer than this
responsiveness timeout, causing the writer to mark the reader as unresponsive, in turn causing the backlog to be
dropped. This allows the reader to catch up, at which point it once again acknowledges data promptly and will be
considered responsive again, causing a new backlog to build up, and so on.
Please refer to the Configuration section for detailed descriptions of:
• //OpenSplice/DDSI2Service/Internal/AggressiveKeepLast1Whc
• //OpenSplice/DDSI2Service/Internal/ResponsivenessTimeout
• //OpenSplice/Domain/Lease/ExpiryTime

7.2.6 Handling of multiple partitions and wildcards
7.2.6.1 Publishing in multiple partitions
A variety of design choices allow Vortex OpenSplice in combination with its RTNetworking service to be fully
dynamically discovered, yet without requiring an expensive discovery protocol. A side effect of these choices
is that a DCPS writer publishing a single sample in multiple partitions simultaneously will be translated by the
current version of DDSI2 as a writer publishing multiple identical samples in all these partitions, but with unique
sequence numbers.
When DDSI2 is used to communicate between Vortex OpenSplice nodes, this is not an application-visible issue,
but it is visible when interoperating with other implementations. Fortunately, publishing in multiple partitions is
rarely a wise choice in a system design.
Note that this only concerns publishing in multiple partitions; subscribing in multiple partitions works exactly as
expected, and is also a far more common system design choice.
7.2.6.2 Wildcard partitions
DDSI2 fully implements publishing and subscribing using partition wildcards, but depending on many (deployment time and application design) details, the use of partition wildcards for publishing data can easily lead to the
replication of data as mentioned in the previous subsection (Publishing in multiple partitions).
Secondly, because DDSI2 implements transient-local data internally in a different way from the way the Vortex
OpenSplice durability service does, it is strongly recommended that the combination of transient-local data and
publishing using partition wildcards be avoided completely.

7.3 Network and discovery configuration
7.3.1 Networking interfaces
DDSI2 uses a single network interface, the ‘preferred’ interface, for transmitting its multicast packets and advertises only the address corresponding to this interface in the DDSI discovery protocol.
To determine the default network interface, DDSI2 ranks the eligible interfaces by quality, and then selects the
interface with the highest quality. If multiple interfaces are of the highest quality, it will select the first enumerated
one. Eligible interfaces are those that are up and have the right kind of address family (IPv4 or IPv6). Priority is
then determined as follows:
• interfaces with a non-link-local address are preferred over those with a link-local one;
• multicast-capable interfaces are preferred; if none is available,
• non-multicast-capable interfaces that are not point-to-point; if none is available,
• point-to-point interfaces; if none is available,
• loopback interfaces
If this procedure doesn’t select the desired interface automatically, it can be overridden by setting
General/NetworkInterfaceAddress to either the name of the interface, the IP address of the host on
the desired interface, or the network portion of the IP address of the host on the desired interface. An exact match
on the address is always preferred and is the only option that allows selecting the desired one when multiple
addresses are tied to a single interface.
Please refer to the Configuration section for a detailed description of:
• //OpenSplice/DDSI2Service/General/NetworkInterfaceAddress
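As a sketch, forcing DDSI2 to use a particular interface could look as follows; the address is an arbitrary example, and an interface name or the network portion of the address can be used instead, as described above:
<DDSI2Service name="ddsi2">
  <General>
    <NetworkInterfaceAddress>192.168.1.10</NetworkInterfaceAddress>
  </General>
</DDSI2Service>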
The default address family is IPv4; setting General/UseIPv6 will change this to IPv6. Currently, DDSI2 does not
mix IPv4 and IPv6 addressing. Consequently, all DDSI participants in the network must use the same addressing
mode. When interoperating, this behaviour is the same, i.e. it will look at either IPv4 or IPv6 addresses in the
advertised address information in the SPDP and SEDP discovery protocols.
IPv6 link-local addresses are considered undesirable because they need to be published and received via the
discovery mechanism, but there is in general no way to determine to which interface a received link-local address
is related.
If IPv6 is requested and the preferred interface has a non-link-local address, DDSI2 will operate in a ‘global
addressing’ mode and will only consider discovered non-link-local addresses. In this mode, one can select any set
of interfaces for listening to multicasts. Note that this behaviour is essentially identical to that when using IPv4,
as IPv4 does not have the formal notion of address scopes that IPv6 has. If instead only a link-local address is
available, DDSI2 will run in a ‘link-local addressing’ mode. In this mode it will accept any address in a discovery
packet, assuming that a link-local address is valid on the preferred interface. To minimise the risk involved in this
assumption, it only allows the preferred interface for listening to multicasts.
When a remote participant publishes multiple addresses in its SPDP message (or in SEDP messages, for that
matter), DDSI2 will select a single address to use for communicating with that participant. The address chosen is the
first eligible one on the same network as the locally chosen interface, else one that is on a network corresponding
to any of the other local interfaces, and finally simply the first one. Eligibility is determined in the same way as
for network interfaces.
7.3.1.1 Multicasting
DDSI2 allows configuring to what extent multicast is to be used:
• whether to use multicast for data communications,
• whether to use multicast for participant discovery,
• on which interfaces to listen for multicasts.
It is advised to allow multicasting to be used. However, if there are restrictions on the use of multicasting, or if the
network reliability is dramatically different for multicast than for unicast, it may be attractive to disable multicast
for normal communications. In this case, setting General/AllowMulticast to false will force DDSI2 to
use unicast communications for everything except the periodic distribution of the participant discovery messages.
If at all possible, it is strongly advised to leave multicast-based participant discovery enabled, because that
avoids having to specify a list of nodes to contact, and it furthermore reduces the network load considerably. However, if need be, one can disable the participant discovery from sending multicasts by setting
Internal/SuppressSpdpMulticast to true.
To disable incoming multicasts, or to control from which interfaces multicasts are to be accepted, one can use
the General/MulticastRecvInterfaceAddresses setting. This allows listening on no interface, the
preferred, all or a specific set of interfaces.
Please refer to the Configuration section for detailed descriptions of:
• //OpenSplice/DDSI2Service/General/AllowMulticast
• //OpenSplice/DDSI2Service/Internal/SuppressSpdpMulticast
• //OpenSplice/DDSI2Service/General/MulticastRecvNetworkInterfaceAddress
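By way of illustration, the sketch below forces purely unicast operation: AllowMulticast disables multicast for regular data, and SuppressSpdpMulticast additionally stops the participant discovery from multicasting (omit the latter to keep multicast-based discovery enabled):
<DDSI2Service name="ddsi2">
  <General>
    <AllowMulticast>false</AllowMulticast>
  </General>
  <Internal>
    <SuppressSpdpMulticast>true</SuppressSpdpMulticast>
  </Internal>
</DDSI2Service>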
7.3.1.2 Discovery configuration
Discovery addresses

The DDSI discovery protocols, SPDP for the domain participants and SEDP for their endpoints, usually operate
well without any explicit configuration. Indeed, the SEDP protocol never requires any configuration.

DDSI2 by default uses the domain id as specified in //OpenSplice/Domain/Id but allows overriding it for
special configurations using the Discovery/DomainId setting. The domain id is the basis for all UDP/IP
port number calculations, which can be tweaked when necessary using the configuration settings under Discovery/Ports. It is however rarely necessary to change the standardised defaults.
The SPDP protocol periodically sends, for each domain participant, an SPDP sample to a set of addresses,
which by default contains just the multicast address, which is standardised for IPv4 (239.255.0.1), but
not for IPv6 (it uses ff02::ffff:239.255.0.1). The actual address can be overridden using the
Discovery/SPDPMulticastAddress setting, which requires a valid multicast address.
In addition (or as an alternative) to the multicast-based discovery, any number of unicast addresses can be configured as addresses to be contacted by specifying peers in the Discovery/Peers section. Each time an SPDP
message is sent, it is sent to all of these addresses.
Default behaviour of DDSI2 is to include each IP address several times in the set, each time with a different UDP
port number (corresponding to another participant index), allowing at least several applications to be present on
these hosts.
Obviously, configuring a number of peers in this way causes a large burst of packets to be sent each time an SPDP
message is sent out, and each local DDSI participant causes a burst of its own. Most of the participant indices will
not actually be in use, making this behaviour rather wasteful.
DDSI2 allows explicitly adding a port number to the IP address, formatted as IP:PORT, to avoid this waste, but
this requires manually calculating the port number. In practice it also requires fixing the participant index using
Discovery/ParticipantIndex (see the description of ‘PI’ in Controlling port numbers) to ensure that the
configured port number indeed corresponds to the remote DDSI2 (or other DDSI implementation), and therefore
is really practicable only in a federated deployment.
Please refer to the Configuration section for detailed descriptions of:
• //OpenSplice/Domain/Id
• //OpenSplice/DDSI2Service/Discovery/DomainId
• //OpenSplice/DDSI2Service/Discovery/SPDPMulticastAddress
• //OpenSplice/DDSI2Service/Discovery/Peers
• //OpenSplice/DDSI2Service/Discovery/ParticipantIndex
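As a sketch, a unicast-discovery configuration would typically combine a fixed participant index with a list of peers; only a placeholder is shown for the peer entries, whose exact syntax is given in the Configuration section:
<DDSI2Service name="ddsi2">
  <Discovery>
    <ParticipantIndex>0</ParticipantIndex>
    <Peers>
      <!-- list here the unicast addresses (IP or IP:PORT) of the hosts to
           contact; see the Configuration section for the exact syntax -->
    </Peers>
  </Discovery>
</DDSI2Service>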
Asymmetrical discovery

On reception of an SPDP packet, DDSI2 adds the addresses advertised in that SPDP packet to this set, allowing
asymmetrical discovery. In an extreme example, if SPDP multicasting is disabled entirely, host A has the address
of host B in its peer list and host B has an empty peer list, then B will eventually discover A because of an SPDP
message sent by A, at which point it adds A’s address to its own set and starts sending its own SPDP message to
A, allowing A to discover B. This takes a bit longer than normal multicast based discovery, though.
Timing of SPDP packets

The interval with which the SPDP packets are transmitted is configurable as well, using the Discovery/SPDPInterval setting. A longer interval reduces the network load, but also increases the time discovery takes,
especially in the face of temporary network disconnections.
Endpoint discovery

Although the SEDP protocol never requires any configuration, the network partitioning of Vortex OpenSplice
DDSI2E does interact with it: so-called ‘ignored partitions’ can be used to instruct DDSI2 to completely ignore certain DCPS topic and partition combinations, which will prevent DDSI2 from forwarding data for these
topic/partition combinations to and from the network.

While it is rarely necessary, it is worth mentioning that overriding the domain id used by DDSI, in conjunction
with ignored partitions and unique SPDP multicast addresses, allows partitioning the data and giving each partition
its own instance of DDSI2.

7.3.2 Combining multiple participants
In a Vortex OpenSplice standalone deployment the various configured services, such as spliced and DDSI2, still
retain their identity by creating their own DCPS domain participants. DDSI2 faithfully mirrors all these participants in DDSI, and it will appear at the DDSI level as if there is a large system with many participants, whereas
in reality there are only a few application participants.
The Internal/SquashParticipants option can be used to simulate the existence of only one participant,
the DDSI2 service itself, which owns all endpoints on that node. This reduces the background messages because
far fewer liveliness assertions need to be sent.
Clearly, the liveliness monitoring features that are related to domain participants will be affected if multiple DCPS
domain participants are combined into a single DDSI domain participant. The Vortex OpenSplice services all use
a liveliness QoS setting of AUTOMATIC, which works fine.
In a federated deployment, the effect of this option is to have only a single DDSI domain participant per node.
This is of course much more scalable, but in no way resembles the actual structure of the system if there are in
fact multiple application processes running on that node.
However, in Vortex OpenSplice the built-in topics are not derived from the DDSI discovery, and hence in a Vortex
OpenSplice-only network the use of the Internal/SquashParticipants setting will not result in any loss
of information in the DCPS API or in the Vortex OpenSplice tools such as the Tester.
When interoperability with another vendor is not needed, enabling the SquashParticipants option is often
a good choice.
Please refer to the Configuration section for a detailed description of:
• //OpenSplice/DDSI2Service/Internal/SquashParticipants
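A minimal sketch of enabling this option:
<DDSI2Service name="ddsi2">
  <Internal>
    <SquashParticipants>true</SquashParticipants>
  </Internal>
</DDSI2Service>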

7.3.3 Controlling port numbers
The port numbers used by DDSI2 are determined as follows, where the first two items are given by the DDSI
specification and the third is unique to DDSI2 as a way of serving multiple participants by a single DDSI instance:
• 2 ‘well-known’ multicast ports: B and B+1
• 2 unicast ports at which only this instance of DDSI2 is listening: B+PG*PI+10 and B+PG*PI+11
• 1 unicast port per domain participant it serves, chosen by the kernel from the anonymous ports, i.e. >=
32768
where:
• B is Discovery/Ports/Base (7400) + Discovery/Ports/DomainGain (250) * Domain/Id
• PG is Discovery/Ports/ParticipantGain (2)
• PI is Discovery/ParticipantIndex
The default values, taken from the DDSI specification, are in parentheses. There are actually even more parameters, here simply turned into constants as there is absolutely no point in ever changing these values; however, they
are configurable and the interested reader is referred to the DDSI 2.1 or 2.2 specification, section 9.6.1.
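As a worked example using these defaults, for domain id 0 and participant index PI = 0:
B = 7400 + 250 * 0 = 7400
multicast ports: B = 7400 and B + 1 = 7401
unicast ports: B + 2 * 0 + 10 = 7410 and B + 2 * 0 + 11 = 7411
For domain id 1 the base becomes 7400 + 250 * 1 = 7650 and the whole set shifts up accordingly.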
PI is the most interesting, as it relates to having multiple instances of DDSI2 in the same domain on a single node.
In a federated deployment, this never happens (exceptional cases excluded). Its configured value is either ‘auto’,
‘none’ or a non-negative integer. This setting matters:
• When it is ‘auto’ (which is the default), DDSI2 probes UDP port numbers on start-up, starting with PI =
0, incrementing it by one each time until it finds a pair of available port numbers, or it hits the limit. The
maximum PI it will ever choose is currently still hard-coded at 9 as a way of limiting the cost of unicast
discovery. (It is recognised that this limit can cause issues in a standalone deployment.)
• When it is ‘none’ it simply ignores the ‘participant index’ altogether and asks the kernel to pick two random
ports (>= 32768). This eliminates the limit on the number of standalone deployments on a single machine
and works just fine with multicast discovery while complying with all other parts of the specification for
interoperability. However, it is incompatible with unicast discovery.
• When it is a non-negative integer, it is simply the value of PI in the above calculations. If multiple instances
of DDSI2 on a single machine are needed, they will need unique values for PI, and so for standalone
deployments this particular alternative is hardly useful.
Clearly, to fully control port numbers, setting Discovery/ParticipantIndex (= PI) to a hard-coded value
is the only possibility. In a federated deployment this is an option that has very few downsides, and generally 0
will be a good choice.
By fixing PI, the port numbers needed for unicast discovery are fixed as well. This allows listing peers as IP:PORT
pairs, significantly reducing traffic, as explained in the preceding subsection.
The other non-fixed ports that are used are the per-domain participant ports, the third item in the list. These
are used only because there exist some DDSI implementations that assume each domain participant advertises
a unique port number as part of the discovery protocol, and hence that there is never any need for including
an explicit destination participant id when intending to address a single domain participant by using its unicast
locator. DDSI2 never makes this assumption, instead opting to send a few bytes extra to ensure the contents of a
message are all that is needed. With other implementations, you will need to check.
If all DDSI implementations in the network include full addressing information in the messages, like
DDSI2, then the per-domain participant ports serve no purpose at all. The default false setting of
Compatibility/ManySocketsMode disables the creation of these ports.
This setting has a few other side benefits as well, as there will generally be more participants using the same unicast
locator, improving the chances of needing only a single unicast even when addressing multiple participants in a
node. The obvious case where this is beneficial is when one host has not received a multicast.
Please refer to the Configuration section for detailed descriptions of:
• //OpenSplice/DDSI2Service/Discovery/Ports/Base
• //OpenSplice/DDSI2Service/Discovery/Ports/DomainGain
• //OpenSplice/DDSI2Service/Discovery/Ports/ParticipantGain
• //OpenSplice/DDSI2Service/Discovery/ParticipantIndex
• //OpenSplice/DDSI2Service/Compatibility/ManySocketsMode

7.3.4 Coexistence with Vortex OpenSplice RTNetworking
DDSI2 has a special mode, configured using General/CoexistWithNativeNetworking, to allow it to
operate in conjunction with Vortex OpenSplice RTNetworking: in this mode DDSI2 only handles packets sent by
other vendors’ implementations, allowing all intra-Vortex OpenSplice traffic to be handled by the RTNetworking
service while still providing interoperability with other vendors.
Please refer to the Configuration section for a detailed description of:
• //OpenSplice/DDSI2Service/General/CoexistWithNativeNetworking
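A minimal sketch of such a configuration (only the relevant DDSI2 element is shown; the RTNetworking service is configured as usual elsewhere in the same file):
<DDSI2Service name="ddsi2">
  <General>
    <CoexistWithNativeNetworking>true</CoexistWithNativeNetworking>
  </General>
</DDSI2Service>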

7.4 Data path configuration
7.4.1 Data path architecture
The data path in DDSI2 consists of a transmitting and a receiving side. The main path in the transmit side accepts
data to be transmitted from the Vortex OpenSplice kernel via a network queue and administrates and formats the
data for transmission over the network.
The secondary path handles asynchronous events such as the periodic generation of writer Heartbeats and the
transmitting of acknowledgement messages from readers to writers, in addition to handling the retransmission
of old data on request. These requests can originate in packet loss, but also in requests for historical data from
transient-local readers.
The diagram Data flow using two channels gives an overview of the main data flow and the threads in a configuration using two channels. Configuring multiple channels is an enhanced feature that is available only in DDSI2E,
but the principle is the same in both variants.
Data flow using two channels

7.4.2 Transmit-side configuration
7.4.2.1 Transmit processing
DDSI2E divides the outgoing data stream into prioritised channels. These channels are handled completely independently, effectively allowing DDS transport priorities to be mapped to operating system thread priorities. Although
the ability to define multiple channels is limited to DDSI2E, DDSI2 uses the same mechanisms but is restricted to
what in DDSI2E is the default channel if none are configured explicitly. For details on configuring channels, see
Channel configuration.

Each channel has its own transmit thread, draining a queue with samples to be transmitted from the Vortex OpenSplice kernel. The maximum size of the queue can be configured per channel, and the default for the individual
channels is configured using the Sizing/NetworkQueueSize setting. In DDSI2, this setting simply controls
the queue size, as the default channel of DDSI2E has the default queue size. A larger queue size increases the
potential latency and (shared) memory requirements, but improves the possibilities for smoothing out traffic if the
applications publish it in bursts.
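For example, a sketch that enlarges the network queue to smooth out bursty publication; the value is illustrative and is assumed to be a number of samples:
<DDSI2Service name="ddsi2">
  <Sizing>
    <NetworkQueueSize>2000</NetworkQueueSize>
  </Sizing>
</DDSI2Service>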
Once a networking service has taken a sample from the queue, it takes responsibility for it. Consequently, if it is
to be sent reliably and there are insufficient resources to store it in the WHC, it must wait for resources to become
available. See Unresponsive readers & head-of-stream blocking.
The DDSI control messages (Heartbeat, AckNack, etc.) are sent by a thread dedicated to handling timed events
and asynchronous transmissions, including retransmissions of samples on request of a reader. This thread is known
as the ‘timed-event thread’; there is at least one such thread, but each channel can have its own.
DDSI2E can also perform traffic shaping and bandwidth limiting, configurable per channel, and with independent
limits for data on the one hand and control and retransmissions on the other hand.
7.4.2.2 Retransmit merging
A remote reader can request retransmissions whenever it receives a Heartbeat and detects samples are missing.
If a sample was lost on the network for many or all readers, the next heartbeat is likely to trigger a ‘storm’ of
retransmission requests. Thus, the writer should attempt merging these requests into a multicast retransmission, to
avoid retransmitting the same sample over and over again to many different readers. Similarly, while readers should
try to avoid requesting retransmissions too often, in an interoperable system the writers should be robust against
it.
In DDSI2, upon receiving a Heartbeat that indicates samples are missing, a reader will schedule a retransmission
request to be sent after Internal/NackDelay, or combine it with an already scheduled request if possible.
Any samples received in between receipt of the Heartbeat and the sending of the AckNack will not need to be
retransmitted.
Secondly, a writer attempts to combine retransmit requests in two different ways. The first is to change messages
from unicast to multicast when another retransmit request arrives while the retransmit has not yet taken place. This
is particularly effective when bandwidth limiting causes a backlog of samples to be retransmitted. The behaviour
of the second can be configured using the Internal/RetransmitMerging setting. Based on this setting,
a retransmit request for a sample is either honoured unconditionally, or it may be suppressed (or ‘merged’) if it
comes in shortly after a multicasted retransmission of that very sample, on the assumption that the second reader
will likely receive the retransmit, too. The Internal/RetransmitMergingPeriod controls the length of
this time window.
Please refer to the Configuration section for detailed descriptions of:
• //OpenSplice/DDSI2Service/Internal/NackDelay
• //OpenSplice/DDSI2Service/Internal/RetransmitMerging
• //OpenSplice/DDSI2Service/Internal/RetransmitMergingPeriod
7.4.2.3 Retransmit backlogs
Another issue is that a reader can request retransmission of many samples at once. When the writer simply queues
all these samples for retransmission, it may well result in a huge backlog of samples to be retransmitted. As a
result, the ones near the end of the queue may be delayed by so much that the reader issues another retransmit
request. DDSI2E provides bandwidth limiting, which makes the situation even worse, as it can significantly
increase the time it takes for a sample to be sent out once it has been queued for retransmission.
Therefore, DDSI2 limits the number of samples queued for retransmission and ignores (those parts of) retransmission requests that would cause the retransmit queue to contain too many samples or take too much time to process.
There are two settings governing the size of these queues, and the limits are applied per timed-event thread (i.e.
the global one, and typically one for each configured channel with limited bandwidth when using DDSI2E). The
first is Internal/MaxQueuedRexmitMessages, which limits the number of retransmit messages, the second Internal/MaxQueuedRexmitBytes which limits the number of bytes. The latter is automatically set
based on the combination of the allowed transmit bandwidth and the Internal/NackDelay setting, as an
approximation of the likely time until the next potential retransmit request from the reader.
Please refer to the Configuration section for detailed descriptions of:
• //OpenSplice/DDSI2Service/Internal/MaxQueuedRexmitMessages
• //OpenSplice/DDSI2Service/Internal/MaxQueuedRexmitBytes
• //OpenSplice/DDSI2Service/Internal/NackDelay
7.4.2.4 Controlling fragmentation
Samples in DDS can be arbitrarily large, and will not always fit within a single datagram. DDSI has facilities to
fragment samples so they can fit in UDP datagrams, and similarly IP has facilities to fragment UDP datagrams
into network packets. The DDSI specification states that one must not unnecessarily fragment at the DDSI level,
but DDSI2 simply provides a fully configurable behaviour.
If the serialised form of a sample is at least Internal/FragmentSize, it will be fragmented using the DDSI
fragmentation. All but the last fragment will be exactly this size; the last one may be smaller.
Control messages, non-fragmented samples, and sample fragments are all subject to packing into datagrams before
being sent out on the network, based on various attributes such as the destination address, to reduce the number of
network packets. This packing allows datagram payloads of up to Internal/MaxMessageSize, overshooting this size if the set maximum is too small to contain what must be sent as a single unit. Note that in this case,
there is a real problem anyway, and it no longer matters where the data is rejected, if it is rejected at all. UDP/IP
header sizes are not taken into account in this maximum message size.
The IP layer then takes this UDP datagram, possibly fragmenting it into multiple packets to stay within the maximum size the underlying network supports. A trade-off to be made is that while DDSI fragments can be retransmitted individually, the processing overhead of DDSI fragmentation is larger than that of UDP fragmentation.
Please refer to the Configuration section for detailed descriptions of:
• //OpenSplice/DDSI2Service/Internal/FragmentSize
• //OpenSplice/DDSI2Service/Internal/MaxMessageSize
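As a sketch, both limits can be set explicitly; the byte values below are illustrative only:
<DDSI2Service name="ddsi2">
  <Internal>
    <FragmentSize>1300</FragmentSize>
    <MaxMessageSize>4096</MaxMessageSize>
  </Internal>
</DDSI2Service>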

7.4.3 Receive-side configuration
7.4.3.1 Receive processing
Receiving of data is split into multiple threads, as also depicted in the overall DDSI2 data path diagram Data flow
using two channels:
• A single receive thread responsible for retrieving network packets and running the protocol state machine;
• A delivery thread dedicated to processing DDSI built-in data: participant discovery, endpoint discovery and
liveliness assertions;
• One or more delivery threads dedicated to the handling of application data: deserialisation and delivery to
the DCPS data reader caches.
The receive thread is responsible for retrieving all incoming network packets, running the protocol state machine,
which involves scheduling of AckNack and Heartbeat messages and queueing of samples that must be retransmitted, and for defragmenting and ordering incoming samples.
For a specific proxy writer (the local manifestation of a remote DDSI data writer) with a number of data readers,
the organisation is as shown in the diagram Proxy writer with multiple data readers.
Proxy writer with multiple data readers

Fragmented data first enters the defragmentation stage, which is per proxy writer. The number of samples that can
be defragmented simultaneously is limited, for reliable data to Internal/DefragReliableMaxSamples
and for unreliable data to Internal/DefragUnreliableMaxSamples.
Samples (defragmented if necessary) received out of sequence are buffered, primarily per proxy writer, but, secondarily, per reader catching up on historical (transient-local) data. The size of the first is limited to Internal/PrimaryReorderMaxSamples, the size of the second to Internal/SecondaryReorderMaxSamples.
In between the receive thread and the delivery threads sit queues, of which the maximum size is controlled by the
Internal/DeliveryQueueMaxSamples setting. Generally there is no need for these queues to be very large; their primary function is to smooth out the processing when batches of samples become available at once,
for example following a retransmission.
When any of these receive buffers hit their size limit, DDSI2 will drop incoming (fragments of) samples and/or
buffered (fragments of) samples to ensure the receive thread can continue to make progress. Such dropped samples
will eventually be retransmitted.
Please refer to the Configuration section for detailed descriptions of:
• //OpenSplice/DDSI2Service/Internal/DefragReliableMaxSamples
• //OpenSplice/DDSI2Service/Internal/DefragUnreliableMaxSamples
• //OpenSplice/DDSI2Service/Internal/PrimaryReorderMaxSamples
• //OpenSplice/DDSI2Service/Internal/SecondaryReorderMaxSamples
• //OpenSplice/DDSI2Service/Internal/DeliveryQueueMaxSamples
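The sketch below shows where these receive-side limits sit in the configuration; all numbers are illustrative assumptions to be tuned against the actual data rates and sample sizes.

    <DDSI2Service name="ddsi2">
      <Internal>
        <!-- Illustrative receive-side buffer limits -->
        <DefragReliableMaxSamples>16</DefragReliableMaxSamples>
        <DefragUnreliableMaxSamples>4</DefragUnreliableMaxSamples>
        <PrimaryReorderMaxSamples>64</PrimaryReorderMaxSamples>
        <SecondaryReorderMaxSamples>16</SecondaryReorderMaxSamples>
        <DeliveryQueueMaxSamples>256</DeliveryQueueMaxSamples>
      </Internal>
    </DDSI2Service>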
7.4.3.2 Minimising receive latency
In low-latency environments, a few microseconds can be gained by processing the application data directly in the
receive thread, i.e. synchronously with respect to the incoming network traffic, instead of queueing it for asynchronous processing by a delivery thread. This happens for data whose max_latency QoS setting is at most a configurable bound and whose transport_priority QoS setting is at least a configurable threshold. By default, the bound is 0 and the threshold is the maximum transport priority, effectively disabling synchronous delivery for all but the most important and urgent data.
Please refer to the Configuration section for detailed descriptions of:
• //OpenSplice/DDSI2Service/Internal/SynchronousDeliveryLatencyBound
• //OpenSplice/DDSI2Service/Internal/SynchronousDeliveryPriorityThreshold
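A hedged sketch of enabling synchronous delivery for high-priority, low-latency data is given below; the value formats and thresholds are assumptions, so the Configuration section should be consulted for the exact syntax.

    <DDSI2Service name="ddsi2">
      <Internal>
        <!-- Deliver synchronously when max_latency is at most 10 ms and transport_priority at least 100 (illustrative) -->
        <SynchronousDeliveryLatencyBound>10 ms</SynchronousDeliveryLatencyBound>
        <SynchronousDeliveryPriorityThreshold>100</SynchronousDeliveryPriorityThreshold>
      </Internal>
    </DDSI2Service>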

7.4.4 Direction-independent settings
7.4.4.1 Maximum sample size
DDSI2 provides a setting, Internal/MaxSampleSize, to control the maximum size of samples that the
service is willing to process. The size is the size of the (CDR) serialised payload, and the limit holds both for built-in data and for application data. The (CDR) serialised payload is never larger than the in-memory representation
of the data.
On the transmitting side, samples larger than MaxSampleSize are dropped with a warning in the Vortex OpenSplice info log. DDSI2 behaves as if the sample never existed. The current structure of the interface between
the Vortex OpenSplice kernel and the Vortex OpenSplice networking services unfortunately prevents DDSI2 from
properly reporting this back to the application that wrote the sample, so the only guaranteed way of detecting the
dropping of the sample is by checking the info log.
Similarly, on the receiving side, samples larger than MaxSampleSize are dropped, and this is done as early
as possible, immediately following the reception of a sample or fragment of one, to prevent any resources from
being claimed for longer than strictly necessary. Where the transmitting side completely ignores the sample, on
the receiving side DDSI2 pretends the sample has been correctly received and, at the DDSI2 level, acknowledges
reception to the writer when asked. This allows communication to continue.
When the receiving side drops a sample, readers will get a ‘sample lost’ notification at the next sample that does
get delivered to those readers. This again means that checking the info log is ultimately the only truly
reliable way of determining whether samples have been dropped or not.
While dropping samples (or fragments thereof) as early as possible is beneficial from the point of view of reducing
resource usage, it can make it hard to decide whether or not dropping a particular sample has been recorded in the
log already. Under normal operational circumstances, DDSI2 will report a single event for each sample dropped,
but it may on occasion report multiple events for the same sample.
Finally, it is technically allowed to set MaxSampleSize to very small sizes, even to the point that the discovery
data can’t be communicated anymore. The dropping of the discovery data will be duly reported, but the usefulness
of such a configuration seems doubtful.
Please refer to the Configuration section for a detailed description of:
• //OpenSplice/DDSI2Service/Internal/MaxSampleSize

7.5 DDSI2E Enhanced features
7.5.1 Introduction to DDSI2E
DDSI2E is an enhanced version of the DDSI2 service, adding three major features:
• Channels: parallel processing of independent data streams, with prioritisation based on the transport priority setting of the data writers, and support for traffic shaping of outgoing data;
• Network partitions: use of special multicast addresses for some partition-topic combinations, as well as allowing data to be ignored; and
• Encryption: encrypting all traffic for a certain network partition.
This section provides details on the configuration of these three features.

7.5.2 Channel configuration
7.5.2.1 Channel configuration overview
DDSI2E allows defining channels, which are independent data paths within the DDSI service. Vortex OpenSplice chooses a channel by matching the transport priority QoS setting of the data writer with the thresholds specified for the various channels. Because each channel has a set of dedicated threads to perform the processing
and the thread priorities can all be configured, it is straightforward to guarantee that samples from high-priority
data writers will get precedence over those from low-priority data writers throughout the service stack.
A second aspect of the use of channels is that the head-of-line blocking described in Unresponsive readers & head-of-stream blocking is per channel, guaranteeing that a high-priority channel will not be disrupted by an unresponsive reader of low-priority data.
The channel-specific threads perform essentially all processing (serialisation, writer history cache management,
deserialisation, delivery to DCPS data readers, etc.), but there still is one shared thread involved. This is the receive
thread (‘recv’) that demultiplexes incoming packets and implements the protocol state machine. The receive thread
only performs minimal work on each incoming packet, and never has to wait for the processing of user data.
The existence of the receive thread is the only major difference between DDSI2E channels and those of the Vortex
OpenSplice RTNetworking service: in the RTNetworking service, each thread is truly independent. This change is
the consequence of DDSI2E interoperating with implementations that are not aware of channels and with DDSI2E
nodes that have differently configured channels, unlike the RTNetworking service where all nodes must use exactly
the same channel definitions.
When configuring multiple channels, it is recommended to set the CPU priority of the receive thread to at least
that of the threads of the highest priority channel, to ensure the receive thread will be scheduled promptly.
If no channels are defined explicitly, a single, default channel is used. In DDSI2 (rather than DDSI2E), the
processing is as if only this default channel exists.
7.5.2.2 Transmit side
For each discovered local data writer, DDSI2E determines the channel to use. This is the channel with the lowest
threshold priority of all channels that have a threshold priority that is higher than the writer’s transport priority.
If there is no such channel, i.e. the writer has a transport priority higher than the highest channel threshold, the
channel with the highest threshold is used.
Each channel has its own network queue into which the Vortex OpenSplice kernel writes samples to be transmitted and that DDSI2E reads. The size of this queue can be set for each channel independently by using
Channels/Channel/QueueSize, with the default taken from the global Sizing/NetworkQueueSize.
Bandwidth limiting and traffic shaping are configured per channel as well. The following parameters are configurable:
• bandwidth limit
• auxiliary bandwidth limit
• IP QoS settings
The traffic shaping is based on a ‘leaky bucket’ algorithm: transmit credits are added at a constant rate, the total
transmit credit is capped, and each outgoing packet reduces the available transmit credit. Outgoing packets must
wait until enough transmit credits are available.
Each channel has two separate credits: data and auxiliary. The data credit is used strictly for transmitting fresh
data (i.e. directly corresponding to writes, disposes, etc.) and control messages directly caused by transmitting that
data. This credit is configured using the Channels/Channel/DataBandwidthLimit setting. By default,
a channel is treated as if it has infinite data credit, disabling traffic shaping.
The auxiliary credit is used for everything else: asynchronous control data & retransmissions, and is configured
using the Channels/Channel/AuxiliaryBandwidthLimit setting.
When an auxiliary bandwidth limit has been set explicitly, or when one explicitly sets, e.g. a thread priority for
a thread named ‘tev.channel-name’, an independent event thread handles the generation of auxiliary data for that
channel. But if neither is given, the global event thread instead handles all auxiliary data for the channel.
The global event thread has an auxiliary credit of its own, configured using Internal/AuxiliaryBandwidthLimit. This credit applies to all discovery-related traffic, as well as to all auxiliary data generated by channels without their own event thread.

Generally, it is best to simply specify both the data and the auxiliary bandwidth for each channel separately,
and set Internal/AuxiliaryBandwidthLimit to limit the network bandwidth the discovery & liveliness
protocols can consume.
Please refer to the Configuration section for detailed descriptions of:
• //OpenSplice/DDSI2EService/Channels/Channel/QueueSize
• //OpenSplice/DDSI2Service/Sizing/NetworkQueueSize
• //OpenSplice/DDSI2EService/Channels/Channel/DataBandwidthLimit
• //OpenSplice/DDSI2EService/Channels/Channel/AuxiliaryBandwidthLimit
• //OpenSplice/DDSI2EService/Internal/AuxiliaryBandwidthLimit
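The fragment below sketches how these settings could be combined for a two-channel DDSI2E deployment; the channel attribute names (Name, TransportPriority) and all values, including the bandwidth notation, are assumptions for illustration only and should be checked against the Configuration section.

    <DDSI2EService name="ddsi2e">
      <Channels>
        <!-- Attribute names and values below are illustrative assumptions -->
        <Channel Name="HighPriority" TransportPriority="100">
          <QueueSize>800</QueueSize>
        </Channel>
        <Channel Name="Bulk" TransportPriority="0">
          <DataBandwidthLimit>10Mb/s</DataBandwidthLimit>
          <AuxiliaryBandwidthLimit>1Mb/s</AuxiliaryBandwidthLimit>
        </Channel>
      </Channels>
      <Sizing>
        <!-- Default network queue size for channels without their own QueueSize -->
        <NetworkQueueSize>400</NetworkQueueSize>
      </Sizing>
      <Internal>
        <!-- Caps the discovery and liveliness traffic handled by the global event thread -->
        <AuxiliaryBandwidthLimit>1Mb/s</AuxiliaryBandwidthLimit>
      </Internal>
    </DDSI2EService>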
7.5.2.3 Receive side
On the receive side, the single receive thread accepts incoming data and runs the protocol state machine. Data
ready for delivery to the local data readers is queued on the delivery queue for the channel that best matches the
proxy writer that wrote the data, according to the same criterion used for selecting the outgoing channel for the
data writer.
The delivery queue is emptied by the delivery thread, ‘dq.channel-name’, which deserialises the data and updates
the data readers. Because each channel has its own delivery thread with its own scheduling priority, once the data
leaves the receive thread and is enqueued for the delivery thread, higher priority data once again takes precedence
over lower priority data.
7.5.2.4 Discovery traffic
DDSI discovery data is always transmitted by the global timed-event thread (‘tev’), and always processed by
the special delivery thread for DDSI built-in data (‘dq.builtin’). By explicitly creating a timed-event thread, one
effectively separates application data from all discovery data. One way of creating such a thread is by setting
properties for it (see Thread configuration), another is by setting a bandwidth limit on the auxiliary data of the
channel (see Transmit side).
Please refer to the Configuration section for a detailed description of:
• //OpenSplice/DDSI2EService/Channels/Channel/AuxiliaryBandwidthLimit
7.5.2.5 On interoperability
DDSI2E channels are fully compliant with the wire protocol. One can mix & match DDSI2E with different sets
of channels and with other vendors’ implementations.

7.5.3 Network partition configuration
7.5.3.1 Network partition configuration overview
Network partitions introduce alternative multicast addresses for data. In the DDSI discovery protocol, a reader can
override the default address at which it is reachable, and this feature of the discovery protocol is used to advertise
alternative multicast addresses. The DDSI writers in the network will (also) multicast to such an alternative
multicast address when multicasting samples or control data.
The mapping of a DCPS data reader to a network partition is indirect: DDSI2E first matches the DCPS data reader
partitions and topic against a table of ‘partition mappings’, partition/topic combinations to obtain the name of a
network partition, then looks up the network partition. This makes it easier to map many different partition/topic
combinations to the same multicast address without having to specify the actual multicast address many times
over.
If no match is found, DDSI2E automatically defaults to the standardised DDSI multicast address.

7.5.3.2 Matching rules
Matching of a DCPS partition/topic combination proceeds in the order in which the partition mappings are specified in the configuration. The first matching mapping is the one that will be used. The * and ? wildcards are
available for the DCPS partition/topic combination in the partition mapping.
As mentioned earlier (see Local discovery and built-in topics), DDSI2E can be instructed to ignore all DCPS
data readers and writers for certain DCPS partition/topic combinations through the use of ‘IgnoredPartitions’.
The ignored partitions use the same matching rules as normal mappings, and take precedence over the normal
mappings.
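As an indication of how this fits together, a hedged sketch is given below; the element and attribute names (Partitioning, NetworkPartition, PartitionMapping, IgnoredPartition and their attributes), the multicast address and the expressions are assumptions that should be verified against the Configuration section.

    <DDSI2EService name="ddsi2e">
      <Partitioning>
        <NetworkPartitions>
          <!-- A named network partition with its alternative multicast address (illustrative) -->
          <NetworkPartition Name="SensorData" Address="239.255.0.10"/>
        </NetworkPartitions>
        <IgnoredPartitions>
          <!-- DCPS readers/writers matching this partition/topic expression are ignored entirely -->
          <IgnoredPartition DCPSPartitionTopic="Private.*"/>
        </IgnoredPartitions>
        <PartitionMappings>
          <!-- Maps DCPS partition/topic combinations onto the network partition defined above -->
          <PartitionMapping DCPSPartitionTopic="Sensors.*" NetworkPartition="SensorData"/>
        </PartitionMappings>
      </Partitioning>
    </DDSI2EService>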
7.5.3.3 Multiple matching mappings
A single DCPS data reader can be associated with a set of partitions, and each partition/topic combination can
potentially map to a different network partition. In this case, DDSI2E will use the first matching network partition.
This does not affect what data the reader will receive; it only affects the addressing on the network.
7.5.3.4 On interoperability
DDSI2E network partitions are fully compliant with the wire protocol. One can mix and match DDSI2E with
different sets of network partitions and with other vendors’ implementations.

7.5.4 Encryption configuration
7.5.4.1 Encryption configuration overview
DDSI2E encryption support allows the definition of ‘security profiles’, named combinations of (symmetrical
block) ciphers and keys. These can be associated with subsets of the DCPS data writers via the network partitions: data from a DCPS data writer matching a particular network partition will be encrypted if that network
partition has an associated security profile.
The encrypted data will be tagged with a unique identifier for the network partition, in cleartext. The receiving
nodes use this identifier to look up the network partition and the associated encryption key and cipher.
Clearly, this requires that the definition of the encrypted network partitions must be identical on the transmitting
and the receiving sides. If the network partition cannot be found, or if the associated key or cipher differs, the
receiver will ignore the encrypted data. It is therefore not necessary to share keys with nodes that have no need
for the encrypted data.
The encryption is performed per-packet; there is no chaining from one packet to the next.
7.5.4.2 On interoperability
Encryption is not yet a standardised part of DDSI, but the standard does allow vendor-specific extensions. DDSI2E
encryption relies on a vendor-specific extension to marshal encrypted data into valid DDSI messages, but these messages
cannot be interpreted by implementations that do not recognise this particular extension.

7.6 Thread configuration
DDSI2 creates a number of threads and each of these threads has a number of properties that can be controlled
individually. The threads involved in the data path are shown in the diagram in Data path architecture. The
properties that can be controlled are:
• stack size,
• scheduling class, and

• scheduling priority.
The threads are named and the attribute Threads/Thread[@name] is used to set the properties by thread
name. Any subset of threads can be given special properties; anything not specified explicitly is left at the default
value.
(See the detailed description of OpenSplice/DDSI2Service/Threads/Thread[@name] in the Configuration section)
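For example, giving the receive thread a higher scheduling priority and the default channel’s delivery thread a larger stack could be expressed as in the sketch below; the Scheduling, Class, Priority and StackSize element names and all values are assumptions to be checked against the Configuration section.

    <DDSI2Service name="ddsi2">
      <Threads>
        <!-- Thread names follow the list given below; values are illustrative -->
        <Thread name="recv">
          <Scheduling>
            <Class>Realtime</Class>
            <Priority>80</Priority>
          </Scheduling>
        </Thread>
        <Thread name="dq.user">
          <StackSize>262144</StackSize>
        </Thread>
      </Threads>
    </DDSI2Service>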
The following threads exist:
• gc: garbage collector, which sleeps until garbage collection is requested for an entity, at which point it starts
monitoring the state of DDSI2, pushing the entity through whatever state transitions are needed once it is
safe to do so, ending with the freeing of the memory.
• main: the main thread of DDSI2, which performs start-up and teardown and monitors the creation and
deletion of entities in the local node using the built-in topics.
• recv: accepts incoming network packets from all sockets/ports, performs all protocol processing, queues
(nearly) all protocol messages sent in response for handling by the timed-event thread, and queues data for
delivery or, in special cases, delivers it directly to the data readers.
• dq.builtins: processes all discovery data coming in from the network.
• lease: performs internal liveliness monitoring of DDSI2 and renews the Vortex OpenSplice kernel lease if
the status is satisfactory.
• tev: timed-event handling, used for all kinds of things, such as: periodic transmission of participant discovery and liveliness messages, transmission of control messages for reliable writers and readers (except those
that have their own timed-event thread), retransmitting of reliable data on request (except those that have
their own timed-event thread), and handling of start-up mode to normal mode transition.
and, for each defined channel:
• xmit.channel-name: takes data from the Vortex OpenSplice kernel’s queue for this channel, serialises it and
forwards it to the network.
• dq.channel-name: deserialisation and asynchronous delivery of all user data.
• tev.channel-name: channel-specific ‘timed-event’ handling: transmission of control messages for reliable
writers and readers and retransmission of data on request. Channel-specific threads exist only if the configuration includes an element for it or if an auxiliary bandwidth limit is set for the channel.
For DDSI2, and DDSI2E when no channels are explicitly defined, there is one channel named ‘user’.

7.7 Reporting and tracing
DDSI2 can produce highly detailed traces of all traffic and internal activities. Individual categories of information can be enabled, and there is also a simple verbosity level that enables pre-defined sets of categories; the definition of these levels corresponds to that of the other Vortex OpenSplice services.
The categorisation of tracing output is incomplete and hence most of the verbosity levels and categories are not of
much use in the current release. This is an ongoing process and here we describe the target situation rather than
the current situation.
All ‘fatal’ and ‘error’ messages are written both to the DDSI2 log and to the ospl-error.log file; similarly
all ‘warning’ messages are written to the DDSI2 log and the ospl-info.log file.
The Tracing element has the following sub elements:
• Verbosity: selects a tracing level by enabling a pre-defined set of categories. The list below gives the known
tracing levels, and the categories they enable:
– none
– severe: ‘error’ and ‘fatal’

– warning, info: severe + ‘warning’
– config: info + ‘config’
– fine: config + ‘discovery’
– finer: fine + ‘traffic’, ‘timing’ and ‘info’
– finest: fine + ‘trace’
• EnableCategory: a comma-separated list of keywords, each keyword enabling individual categories. The
following keywords are recognised:
– fatal: all fatal errors, errors causing immediate termination
– error: failures probably impacting correctness but not necessarily causing immediate termination.
– warning: abnormal situations that will likely not impact correctness.
– config: full dump of the configuration
– info: general informational notices
– discovery: all discovery activity
– data: include data content of samples in traces
– radmin: receive buffer administration
– timing: periodic reporting of CPU loads per thread
– traffic: periodic reporting of total outgoing data
In addition, the keyword trace enables all but radmin.
• OutputFile: the file to write the DDSI2 log to
• AppendToFile: boolean, set to true to append to the log instead of replacing the file.
Currently, the useful verbosity settings are config and finest.
Config writes the full configuration to the DDSI2 log file as well as any warnings or errors, which can be a good
way to verify everything is configured and behaving as expected.
Finest provides a detailed trace of everything that occurs and is an indispensable source of information when
analysing problems; however, it also requires a significant amount of time and results in huge log files.
Whether these logging levels are set using the verbosity level or by enabling the corresponding categories is
immaterial.
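For instance, a configuration that writes the config verbosity plus the discovery category to a dedicated, freshly created log file could look like the sketch below (the file name and category choice are illustrative):

    <DDSI2Service name="ddsi2">
      <Tracing>
        <Verbosity>config</Verbosity>
        <!-- Extra categories enabled on top of the verbosity level -->
        <EnableCategory>discovery</EnableCategory>
        <OutputFile>ddsi2.log</OutputFile>
        <AppendToFile>false</AppendToFile>
      </Tracing>
    </DDSI2Service>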

7.8 Compatibility and conformance
7.8.1 Conformance modes
The DDSI2 service operates in one of three modes: pedantic, strict and lax; the mode is configured using the
Compatibility/StandardsConformance setting. The default is lax.
(Please refer to the Configuration section for a detailed description of //OpenSplice/DDSI2Service/Compatibility/StandardsConformance.)

In pedantic mode, it strives very hard to strictly conform to the DDSI 2.1 and 2.2 standards. It even uses a vendor-specific extension for an essential element missing in the specification, used for specifying the GUID of a DCPS
data reader or data writer in the discovery protocol; and it adheres to the specified encoding of the reliability QoS.
This mode is of interest for compliancy testing but not for practical use, even though there is no application-level
observable difference between an all-Vortex OpenSplice system using the DDSI2 service in pedantic mode and
one operating in any of the other modes.
The second mode, strict, instead attempts to follow the intent of the specification while staying close to the letter
of it. The points in which it deviates from the standard are in all probability editing errors that will be rectified in
the next update. When operated in this mode, one would expect it to be fully interoperable with other vendors’
implementations, but this is not the case. The deviations in other vendors’ implementations are not required to
implement DDSI 2.1 (or 2.2), as is proven by the Vortex OpenSplice DDSI2 service, and they cannot rightly be
considered ‘true’ implementations of the DDSI 2.1 (or 2.2) standard.
The default mode, lax, attempts to work around (most of) the deviations of other implementations, and provides
interoperability with (at least) RTI DDS and InterCOM/Gallium DDS. (For compatibility with TwinOaks CoreDX
DDS, additional settings are needed. See the next section for more information.) In lax mode, the Vortex OpenSplice DDSI2 service not only accepts some invalid messages, but will even transmit them. The consequences
for interoperability of not doing this are simply too severe. It should be noted that if one configures two Vortex
OpenSplice nodes with DDSI2 in different compliancy modes, the one in the stricter mode will complain about
messages sent by the one in the less strict mode. Pedantic mode will complain about invalid encodings used in
strict mode, strict mode will complain about illegal messages transmitted by the lax mode. There is nonetheless
interoperability between strict and lax.
7.8.1.1 Compatibility issues with RTI
In lax mode, there should be no major issues with most topic types when working across a network, but within a
single host there is a known problem with the way RTI DDS uses, or attempts to use, its shared memory transport
to communicate with Vortex OpenSplice, which clearly advertises only UDP/IP addresses at which it is reachable.
The result is an inability to reliably establish bidirectional communication between the two.
Disposing data may also cause problems, as RTI DDS leaves out the serialised key value and instead expects the
reader to rely on an embedded hash of the key value. In the strict modes, the DDSI2 service requires a proper key
value to be supplied; in the relaxed mode, it is willing to accept a key hash, provided it is of a form that contains the
key values in an unmangled form.
If an RTI DDS data writer disposes an instance with a key of which the serialised representation may be larger
than 16 bytes, this problem is likely to occur. In practice, the most likely cause is a key that is a string, either
unbounded, or with a maximum length larger than 11 bytes. See the DDSI specification for details.
In strict mode, there is interoperation with RTI DDS, but at the cost of incredibly high CPU and network load,
caused by Heartbeats and AckNacks going back and forth between a reliable RTI DDS data writer and a reliable
Vortex OpenSplice DCPS data reader. The problem is that once the Vortex OpenSplice reader informs the RTI
writer that it has received all data (using a valid AckNack message), the RTI writer immediately publishes a
message listing the range of available sequence numbers and requesting an acknowledgement, which becomes an
endless loop.
The best settings for interoperation appear to be:
• Compatibility/StandardsConformance: lax
• Compatibility/AckNackNumbitsEmptySet: 0
Note that the latter setting causes the DDSI2 service to generate illegal messages, and is the default when in lax
mode.
Please refer to the Configuration section for detailed descriptions of:
• //OpenSplice/DDSI2Service/Compatibility/StandardsConformance
• //OpenSplice/DDSI2Service/Compatibility/AckNackNumbitsEmptySet
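Expressed as a configuration fragment, these recommendations amount to the following (nested under //OpenSplice/DDSI2Service):

    <DDSI2Service name="ddsi2">
      <Compatibility>
        <StandardsConformance>lax</StandardsConformance>
        <AckNackNumbitsEmptySet>0</AckNackNumbitsEmptySet>
      </Compatibility>
    </DDSI2Service>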
7.8.1.2 Compatibility issues with TwinOaks
Interoperability with TwinOaks CoreDX requires:
• Compatibility/ManySocketsMode: true
• Compatibility/StandardsConformance: lax
• Compatibility/AckNackNumbitsEmptySet: 0
• Compatibility/ExplicitlyPublishQosSetToDefault: true

The ManySocketsMode option needs to be changed from the default, to ensure that each domain participant
has a unique locator; this is needed because TwinOaks CoreDX DDS does not include the full GUID of a reader
or writer if it needs to address just one. Note that the behaviour of TwinOaks CoreDX DDS is allowed by the
specification.
The Compatibility/ExplicitlyPublishQosSetToDefault setting works around TwinOaks
CoreDX DDS’ use of incorrect default values for some of the QoS settings if they are not explicitly supplied
during discovery.
Please refer to the Configuration section for detailed descriptions of:
• //OpenSplice/DDSI2Service/Compatibility/ManySocketsMode
• //OpenSplice/DDSI2Service/Compatibility/StandardsConformance
• //OpenSplice/DDSI2Service/Compatibility/AckNackNumbitsEmptySet
• //OpenSplice/DDSI2Service/Compatibility/ExplicitlyPublishQosSetToDefault

8
The NetworkingBridge Service
The OpenSplice NetworkingBridge is a pluggable service that allows bridging of data between networking services. This section gives an overview of the features of the NetworkingBridge.
The configuration parameters that control the behaviour of the NetworkingBridge are described in the Configuration section.

8.1 Background
When a networking service is selected that best suits a specific deployment, sometimes a part of the data needs to
be obtained from or disclosed to a system that is using a different kind of networking service. The NetworkingBridge allows DCPSPublications and DCPSSubscriptions to be matched and the related data forwarded
between an RTNetworking system and a DDSI2 system and vice versa.
The NetworkingBridge employs a fast path in the OpenSplice kernel by directly connecting the network queues
of the bridged services. This also allows full end-to-end flow control mechanisms to be realised across the bridge.
Which publications/subscriptions are bridged can be controlled by means of white- and black-lists.
The NetworkingBridge relies on the discovery of publications and subscriptions by the common means for
the networking services. This means that it relies on the real transient topics, aligned by the Durability service, for the RTNetworking part of the bridge. For the part that connects to DDSI2 the native DDSI2 discovery of end-points is used. In order for DDSI2 to only advertise bridged publications and subscriptions, the
LocalDiscoveryPartition used for regular discovery should be set to a non-existent partition, as can be
seen in the following example. This discovery takes some time and can introduce a short delay before data is
bridged.

8.2 Example Configuration
In order to properly configure the NetworkingBridge for bridging data between RTNetworking and DDSI2, both
networking services (and the Durability service for the alignment of the builtin topics of the RTNetworking side)
have to be configured. Filtering is also configured with the NetworkingBridge.
An example configuration file for bridging of all data (excluding Topic MyLocalTopic) in partition BridgedPartition is shown below.


(Example configuration listing: a federated domain named NetworkingBridgeExample (Id 0) running the networking (RTNetworking), ddsi2e, nwbridge (NetworkingBridge) and durability services; RTNetworking channels using ports 54400, 54410 and 54420; the DDSI2E LocalDiscoveryPartition set to the non-existent partition ThisIsNotAPartition; the Durability alignment settings; and a NetworkingBridge instance that bridges the BridgedPartition data between the networking and ddsi2e services while excluding Topic MyLocalTopic. The accompanying description reads: ‘Federated deployment for extending an RTNetworking-based domain into a DDSI network.’)

9
The Tuner Service
The Tuner Service provides a remote interface to the monitor and control facilities of OpenSplice by means of the
SOAP protocol. This enables the OpenSplice Tuner to remotely monitor and control, from any reachable location,
OpenSplice services as well as the applications that use OpenSplice for the distribution of their data.
The exact fulfilment of these responsibilities is determined by the configuration of the Tuner Service. There is
a detailed description of the available configuration parameters and their purpose in the Configuration section,
starting at the section on //OpenSplice/NetworkService/Tracing.


10
The DbmsConnect Service
The OpenSplice DbmsConnect Module is a pluggable service of OpenSplice that provides a seamless integration of the real-time DDS and the non-/near-real-time enterprise DBMS domains. It complements the advanced
distributed information storage features of the OpenSplice Persistence Module (and vice versa).
Where (relational) databases play an essential role to maintain and deliver typically non- or near-real-time ‘enterprise’ information in mission systems, OpenSplice targets the real-time edge of the spectrum of distributing and
delivering ‘the right information at the right place at the right time’ by providing a Quality-Of-Service (QoS)-aware ‘real-time information backbone’.
Changing expectations about the accessibility of information from remote/non-real-time information-stores and
local/real-time sources lead to the challenge of lifting the boundaries between both domains. The DbmsConnect
module of OpenSplice answers this challenge in the following ways:
• Transparently ‘connects’ the real-time DDS ‘information backbone’ to one or more ‘enterprise’ databases
• Allows both enterprise as well as embedded/real-time applications to access and share data in the most
‘natural’ way
• Allows OpenSplice to fault-tolerantly replicate enterprise information persisted in multiple relational
databases in real-time
• Provides a pure ODBC/JDBC SQL interface towards real-time information via its transparent DbmsConnection
• Overcomes the lack of communication-control (QoS features controlling real-time behavior) of ‘talking’ to
a remote DBMS
• Overcomes the lack of traditional 3GL and 4GL tools and features in processing information directly from
a DDS backbone
• Allows for data-logging and analysis of real-time data persisted in a DBMS
• Aligns multiple and dispersed heterogeneous databases within a distributed system using the QoS-enabled
data-distribution features of OpenSplice
The DbmsConnect module is unique in its dynamic configurability to achieve maximum performance:
• Dynamic DDS Partition/Topic selection and configurable content-filters to exchange exactly ‘the right’ information critical for performance and resource-challenged users
• Dynamic creation and mapping of DBMS database-tables and DDS topics to allow seamless data-exchange,
even with legacy data models
• Selectable update-triggering per table/topic allowing for both real-time responsiveness as well as high-volume ‘batch transfers’
• Works with ANY third-party SQL:1999-compatible DBMS system with an ODBC interface
The DbmsConnect module thus effectively eliminates traditional ‘barriers’ of the standalone technologies by facilitating seamless data-exchange between any ODBC-compliant (SQL) database and the OpenSplice real-time
DDS ‘information-backbone’. Applications in traditionally separated mission-system domains can now exploit
and leverage each other’s information in a highly efficient (based upon ‘current interest’ as supported by the publish/subscribe paradigm of DDS), non-disruptive (obeying the QoS demands as expressed for data-items in DDS)
and distributed service-oriented paradigm.

OpenSplice DbmsConnect is based on SQL:1999 and utilizes ODBC 2.x to interface with third-party DBMS
systems. Interoperability has been verified with MySQL 5.0 and Microsoft SQL Server 2008. Because most RDBMSs conform only loosely to both the SQL and the ODBC standard, support for other customer-chosen DBMS systems may require a porting activity of the DbmsConnect service.
As SQL tables have no support for unbounded sequences and sequences of complex types, mapping such
DDS_Types to tables is not supported.

10.1 Usage
In order to understand the configuration and working of the DbmsConnect service, some basic concepts and use cases will be covered here.

10.2 DDS and DBMS Concepts and Types Mapping
The concepts within DDS and DBMS are related to each other as listed in the table DDS to DBMS mapping:
concepts.
DDS to DBMS mapping: concepts
DDS                       DBMS
Topic                     Table
Type                      Table structure
Instance                  Primary key
Sample                    Row
DataWriter.write()        INSERT or UPDATE
DataWriter.dispose()      DELETE

The primitive types available in both domains map onto each other as listed in the table DDS to DBMS mapping:
primitive types.
DDS to DBMS mapping: primitive types
DDS IDL type              DBMS column type (SQL:1999)
boolean                   BOOLEAN/TINYINT
short                     SMALLINT
unsigned short            SMALLINT
long                      INTEGER
unsigned long             INTEGER
long long                 BIGINT
unsigned long long        BIGINT
float                     REAL
double                    DOUBLE
octet                     BINARY(1)
char                      CHAR(1)
wchar                     CHAR(1)
string                    VARCHAR()
wstring                   VARCHAR()

DDS to DBMS mapping: complex (composite) types

The mapping of complex (composite) types is as follows:
• Struct - Flattened out - Each member maps to a column with fully scoped name
• Union - Flattened out - Additional #DISCRIMINATOR# column

• Enumeration - An INTEGER typed column with fully scoped name
• Array and bounded sequence - Flattened out - [index] appended to fully scoped column name

10.3 General DbmsConnect Concepts
The DbmsConnect service can bridge data from the DDS domain to the DBMS domain and vice versa. In DDS,
data is represented by topics, while in DBMS data is represented by tables. With DbmsConnect, a mapping
between a topic and a table can be defined.
Because not all topic-table mappings have to be defined explicitly (DbmsConnect can do matching when the names
are the same), namespaces can be defined. A namespace specifies or limits the context of interest and allows for
easy configuration of all mappings falling in (or defined in) a namespace. The context of interest for bridging data from DDS to DBMS consists of a partition- and topic-name expression. When bridging data from DBMS to DDS,
the context of interest consists of a table-name expression.
A mapping thus defines the relationship of a table in DBMS with a topic in DDS and can be used to explicitly map
a topic and table with different names, or define settings for a specific mapping only.

10.4 DDS to DBMS Use Case
When data in the DDS domain has to be available in the DBMS domain, the DbmsConnect service can be configured to facilitate that functionality. A topic in DDS will be mapped to a table in DBMS.

10.4.1 DDS to DBMS Scenario
In the DDS domain, we have topics DbmsTopic and DbmsDdsTopic that we want to make available to a database
application. The database application expects the data from topic DbmsTopic to be available in an existing table
with name DbmsTable. Data from the DbmsDdsTopic topic can be just forwarded to a table (that does not yet
exist) with the same name. This is shown in The DDS to DBMS scenario.
The DDS to DBMS scenario

10.4.2 DDS to DBMS Configuration
The configuration for the DbmsConnect service that fulfils the needs of the scenario is given in the listing below.
(Example configuration listing: a DbmsConnectService section containing a DdsToDbms element with a NameSpace whose partition expression is ‘*’ and whose topic expression is ‘Dbms*’, and, within that NameSpace, a Mapping of topic DbmsTopic onto table DbmsTable.)
10.4.2.1 DDS to DBMS Configuration Explanation
A DdsToDbms element is specified in order to configure data bridging from DDS to DBMS. Within it, a NameSpace is defined that has interest in all topics starting with “Dbms” in all partitions. Both the partition- and topic-expression make use of the *-wildcard (matching any sequence of characters). These wildcards match both topics described in the scenario, but will possibly match more. If the mapping should be explicitly limited to both topics, the topic-expression can be changed to DbmsTopic,DbmsDdsTopic.
The DbmsConnect service will implicitly map all matching topics to an equally named table in the DBMS. While this is exactly what we want for the DbmsDdsTopic, the database application expects the data from the DbmsTopic topic to be mapped to table DbmsTable. This is explicitly configured in a Mapping element within the NameSpace.
If the tables already exist and the table-definition matches the topic definition, the service will use that table. If a
table does not exist, it will be created by the service. If a table exists, but doesn’t match the topic definition, the
mapping fails.
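A sketch of the corresponding configuration is given below; the attribute names (name, partition, topic, table) are assumptions for illustration, and the DSN and account settings of the NameSpace are omitted, so the exact syntax should be taken from the Configuration section.

    <DbmsConnectService name="dbmsconnect">
      <DdsToDbms>
        <!-- Partition and topic expressions; attribute names are assumptions -->
        <NameSpace name="dds2dbms" partition="*" topic="Dbms*">
          <!-- Explicit mapping of topic DbmsTopic onto the existing table DbmsTable -->
          <Mapping topic="DbmsTopic" table="DbmsTable"/>
        </NameSpace>
      </DdsToDbms>
    </DbmsConnectService>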

10.5 DBMS to DDS Use Case
When data in the DBMS domain has to become available in the DDS domain, this can be achieved by configuring
the DbmsConnect service to map a table to a topic.

10.5.1 DBMS to DDS Scenario
In the DBMS, we have tables DbmsTable and DbmsDdsTopic that we want to make available in the dbmsPartition partition in DDS. The database application writes the data we want available in topic DbmsTopic to the
table named DbmsTable. Data from the DbmsDdsTopic table can be just forwarded to the identically-named
topic.
When the DbmsConnect service is started, mapped tables may already contain data. For the DbmsDdsTopic
table, we are not interested in that data. For the DbmsTable table however, we would like all data available to the
database application to be available to the DDS applications too. This scenario is the reverse (all arrows reversed)
situation of the scenario shown in The DDS to DBMS scenario.

10.5.2 DBMS to DDS Configuration
The configuration for the DbmsConnect service that fulfils the needs of the scenario is given in the listing below.
(Example configuration listing: the DBMS-to-DDS counterpart of the previous configuration, in which tables matching ‘Dbms*’ are published into the dbmsPartition partition, table DbmsTable is explicitly mapped to topic DbmsTopic, and the existing content of DbmsTable is made available to DDS when the service starts.)

11.2 osplconf: the OpenSplice Configuration editor

The OpenSplice Configuration editor (osplconf) provides a graphical way to create and edit OpenSplice configuration files. An existing configuration file can be opened by using the top menu bar (File > Open) or the keyboard shortcut Ctrl+O.

The appropriate service tab is selected.
If the appropriate service is not configured, and so its tab is not visible on the top, it can be added by using the top
menu-bar (Edit > Add Service).
The hierarchical tree on the left can be used to browse through the settings applicable to the Service and possibly
modify them.
The right pane shows the settings of the currently selected tree node. An item prefixed with a ‘@’ represents an
XML attribute. The other items represent XML elements.
If the appropriate setting is not currently configured, and therefore not visible in the tree, you can add it by
right-clicking anywhere in the tree to open a context-sensitive sub-menu displaying all available settings for that
particular element in the tree.
Adding an element in Configurator

Once the appropriate modifications have been made, and are accepted by the Configurator, the config file can be
saved using the top menu bar (File > Save) or the keyboard shortcut Ctrl+S.
Likewise, a config file can be written from scratch by using the top menu bar (File > New) or the keyboard shortcut
Ctrl+N.

11.3 ospl: the OpenSplice service manager
The OpenSplice service manager (ospl) is a tool that monitors and controls the lifecycle of the OpenSplice
Domain Service (spliced), which in turn monitors and controls all other OpenSplice services. This tool is only
applicable to the Federated Deployment Mode, because the Single Process Deployment Mode doesn’t need to run
external services. Basically you can view the OpenSplice service manager as a controller around the OpenSplice
Domain Service that can be used to pass the following command-line instructions to the Domain Service:
start [URI] — Starts a Domain Service for the specified URI (It looks for the environment variable
OSPL_URI when no URI is explicitly passed.) The Domain Service will in turn parse the config
file indicated by the URI and start all configured services according to their settings.
When done, the OpenSplice service manager will return one of the following exit codes:
0 : normal termination (when the Domain Service has successfully started)
1 : a recoverable error has occurred (e.g. out of resources)
2 : an unrecoverable error has occurred (e.g. config file contains errors).
When also passing the -f flag, the OpenSplice service manager will not return the command prompt, but
remain blocked until the Domain Service successfully terminates. Any termination event sent to the service
manager will in that case be forwarded to the Domain Service it manages.
stop [URI] — Stops the Domain Service for the specified URI (It looks for the environment variable
OSPL_URI when no URI is explicitly passed.) The Domain Service will in turn wait for all the
services it currently monitors to terminate gracefully and will then terminate itself.
When done, the OpenSplice service manager will return one of the following exit codes:
0 : normal termination when the Domain Service has successfully terminated.
2 : an unrecoverable error has occurred (e.g. config file cannot be resolved).
When passing the -a flag instead of a URI, the OpenSplice manager is instructed to terminate all Domain
Services that are currently running on the local node.
status [URI] — Prints the status of the Domain Service for the specified URI (It looks for the environment
variable OSPL_URI when no URI is explicitly passed.) When a Domain with the specified URI cannot
be found, it prints nothing.
list — Lists all Domain Services by name (i.e. the name configured in the OpenSplice/Domain/Name element of the config file). This behaviour is similar to the status option, but for all Domains that are
currently running on the local node.
There are a couple of other flags that can be used to display valuable information:
-v — prints the version number of the current OpenSplice release.
-h — prints help for all command-line options.
Note that the default behaviour of ospl without any command-line arguments is to display help.

11.4 mmstat: Memory Management Statistics
Mmstat is a command-line tool that can display valuable information about the shared memory statistics of
an OpenSplice Domain (this is only applicable to the Federated Deployment Mode, since the Single Process
Deployment Mode does not use shared memory). The Domain to which mmstat must attach can be passed as a
command-line parameter, and consists of a URI to the config file specifying the Domain. When no URI is passed,
mmstat will attach to the Domain specified in the environment variable OSPL_URI.
Basically mmstat can run in four separate modes, which all display their status at regular intervals. This interval
time is by default set to 3 seconds, but can be overruled by passing the -i flag followed by an interval value
specified in milliseconds.
The following modes can be distinguished using the specified flags:
-m — The memory statistics mode (default mode)
-M — The memory statistics difference mode
-t — The meta-object references mode
-T — The meta-object references difference mode
Mmstat will keep on displaying an updated status after every interval until the q key is pressed, or until the total
number of iterations reaches the sample_count limit that can be specified by passing the -s flag followed by
the preferred number of iterations. Intermediate status updates can be enforced by pressing the t key.
The following subsections provide detailed descriptions of the different mmstat modes mentioned above.

11.4.1 The memory statistics mode
In the memory statistics mode mmstat basically displays some general shared memory statistics that can help in
correctly estimating the required size of the shared memory database in the configuration file.
The numbers that will be displayed in this mode are:
• the total amount of shared memory still available (i.e. currently not in use).

• the number of objects currently allocated in the shared memory.
• the amount of shared memory that is currently in use by the allocated objects.
• the worst-case amount of shared memory that has been in use so far.
• the amount of shared memory that is currently marked as reusable. (Reusable memory is memory that
is conceptually available, but it might be fragmented in small chunks that cannot be allocated in bigger
chunks.)
The memory statistics mode is the default mode for mmstat, and it is selected when no explicit mode selection
argument is passed. It can also be selected explicitly by passing the -m flag.
Typical mmstat view

11.4.2 The memory statistics difference mode
The memory statistics difference mode works very similarly to the memory statistics mode, but instead of displaying the current values of each measurement it displays the changes of each value relative to the previous
measurement. This provides a good overview of the dynamics of your shared memory, such as whether it remains
stable, whether it is rapidly being consumed/released, and so on.
Mmstat memory statistics difference mode

The numbers that will be displayed in this mode are:
• the difference in the amount of available shared memory relative to the previous measurement.
• the difference in the number of objects that is allocated in the shared memory relative to the previous
measurement.
• the difference in the amount of shared memory that is in use by the allocated objects relative to the previous
measurement.
• the difference in the worst-case amount of shared memory that has been allocated since the previous measurement. Notice that this value can only go up and so the difference can never be negative.
The memory statistics difference mode can be selected by explicitly passing the -M flag as a command-line parameter.

11.4.3 The meta-object references mode
In the meta-object references mode mmstat basically displays which objects are currently populating the shared
memory.
Mmstat meta-object references mode

For this purpose it will iterate through all datatypes known to the Domain, and for each datatype it will display the
following information:
• the number of objects currently allocated for the indicated type.
• the memory allocation footprint of a single object of the indicated type.
• the combined size taken by all objects of the indicated type.
• The kind of object (e.g. class, collection, etc.).
• The kind of collection (when appropriate).
• The fully scoped typename.
In normal circumstances the reference list will be so long (the bootstrap alone already injects hundreds of types into the Domain) that it will not fit on one screen. For that reason there are several ways to restrict the number of items that are displayed, by filtering out the non-interesting items:
• A filter can be specified by passing the -f flag, followed by a (partial) typename. This restricts the list to only those datatypes that match the filter.
• The maximum number of items that may be displayed can be specified by passing the -n flag, followed by
the maximum value.
This is especially useful when combined with another flag that determines the order in which the items will
be displayed. For example, when the items are sorted by memory footprint, passing -n10 will only display
the top ten datatypes that have the biggest footprint.


The order of the items in the list can be controlled by passing the -o flag, followed by a character specifying the
ordering criterion. The following characters are supported:
C — Sort by object Count (i.e. the number of allocated objects from the indicated datatype).
S — Sort by object Size (i.e. the memory footprint of a single object from the indicated datatype).
T — Sort by Total size (i.e. the combined memory footprint of all objects allocated from the indicated
datatype).

11.4.4 The meta-object references difference mode
The meta-object references difference mode is very similar to the meta-object references mode, but instead of
displaying the current values of each measurement it displays the changes of each value relative to the previous
measurement. This provides a good overview of the dynamics of your shared memory, such as whether the number
of objects remains stable, whether it is rapidly increasing/decreasing, and so on.
The fields that are displayed in this mode are similar to the fields displayed in the meta-object references mode,
except that the numbers displayed in the first and third column are now specifying the changes relative to the
previous measurement.
All the flags that are applicable to the meta-object references mode are also applicable to the meta-object references difference mode, but keep in mind that ordering (when specified) is now based on the absolute value of
the difference between the current and the previous measurement. This way big negative changes will still be
displayed at the top of the list.
Mmstat meta-object references difference mode

12
Configuration
This section describes the various configuration elements and attributes available for Vortex OpenSplice. The
configuration items should be added to an XML file and then the OSPL_URI environment variable should be set
to point to the path of that XML file with the “file://” URI prefix.
• e.g.
– Linux: export OSPL_URI=file://$OSPL_HOME/etc/ospl.xml
– Windows: set OSPL_URI=file://%OSPL_HOME%\etc\ospl.xml
The ospl.xml file supplied with Vortex OpenSplice contains the following:



(Listing: the default ospl.xml defines a Domain named ospl_sp_ddsi with Id 0 and SingleProcess set to true, running the ddsi2, durability and cmsoap services; a DDSI2Service section with General settings (NetworkInterfaceAddress AUTO, multicast enabled); a DurabilityService section with its alignment settings and a namespace covering all partitions (*); a TunerService (cmsoap) section with its port set to Auto; and the description ‘Stand-alone single-process deployment and standard DDSI networking.’)
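As an indication of the overall structure, the Domain section of such a file looks roughly like the sketch below; treat it as an illustration of the nesting rather than an exact copy of the shipped file.

    <OpenSplice>
      <Domain>
        <Name>ospl_sp_ddsi</Name>
        <Id>0</Id>
        <SingleProcess>true</SingleProcess>
        <Service name="ddsi2">
          <Command>ddsi2</Command>
        </Service>
        <Service name="durability">
          <Command>durability</Command>
        </Service>
        <Service name="cmsoap">
          <Command>cmsoap</Command>
        </Service>
      </Domain>
      <!-- DDSI2Service, DurabilityService and TunerService sections follow here -->
    </OpenSplice>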

The tags in the XML file should be nested in the same way as they are in the table of contents in this configuration
section. The nesting and numbering of the tags in the contents of this section allows you to see which elements
are the parent or children of one another. For example, if you wanted to find a description of the NetworkInterfaceAddress attribute, you would first navigate to its parent, the General element, and inside that you would find a
heading for the child NetworkInterfaceAddress attribute along with a description and valid values. Some attributes
may state that they are required and if so these elements must be present when the parent element is included in
the XML file.
If you wanted to add a new element, say to enable security, you would navigate to the Security element of the
section. This has a child element called SecurityProfile which should be nested within the Security element.
Each element lists a number of occurrences; this states how many times the element can appear in your XML file. The SecurityProfile element has three attributes: Name, which is required, and Cipher and CipherKey, which are optional. Attributes are added within the parent element tag in the format name="value". Adding these new
elements and attributes would result in the following XML:



(Listing: the resulting file, extending the example above with a Security element containing the new SecurityProfile, alongside the Domain settings (ospl_sp_ddsi, Id 0), the DDSI2 General settings (NetworkInterfaceAddress AUTO, multicast enabled), a StandardsConformance of lax and a Tracing Verbosity of warning.)
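A sketch of the Security element and its SecurityProfile child is shown below; the profile name, cipher and key are invented values, and the exact placement of the Security element within the service configuration should be verified in the Configuration section.

    <Security>
      <!-- Name is required; Cipher and CipherKey are optional (hypothetical values shown) -->
      <SecurityProfile Name="GlobalProfile"
                       Cipher="aes128"
                       CipherKey="000102030405060708090a0b0c0d0e0f"/>
    </Security>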


