VxWorks Application Programmer's Guide, 6.7

Page Count: 432

VxWorks®
APPLICATION PROGRAMMER'S GUIDE
6.7
VxWorks Application Programmer's Guide, 6.7
Copyright © 2008 Wind River Systems, Inc.
All rights reserved. No part of this publication may be reproduced or transmitted in any
form or by any means without the prior written permission of Wind River Systems, Inc.
Wind River, Tornado, and VxWorks are registered trademarks of Wind River Systems, Inc.
The Wind River logo is a trademark of Wind River Systems, Inc. Any third-party
trademarks referenced are the property of their respective owners. For further information
regarding Wind River trademarks, please see:
www.windriver.com/company/terms/trademark.html
This product may include software licensed to Wind River by third parties. Relevant
notices (if any) are provided in your product installation at the following location:
installDir/product_name/3rd_party_licensor_notice.pdf.
Wind River may refer to third-party documentation by listing publications or providing
links to third-party Web sites for informational purposes. Wind River accepts no
responsibility for the information provided in such third-party documentation.
Corporate Headquarters
Wind River
500 Wind River Way
Alameda, CA 94501-1153
U.S.A.
Toll free (U.S.A.): 800-545-WIND
Telephone: 510-748-4100
Facsimile: 510-749-2010
For additional contact information, see the Wind River Web site:
www.windriver.com
For information on how to contact Customer Support, see:
www.windriver.com/support
VxWorks
Application Programmer's Guide
6.7
17 Nov 08
Part #: DOC-16304-ND-00
Contents
1 Overview ............................................................................................... 1
1.1 Introduction ............................................................................................................. 1
1.2 Related Documentation Resources ..................................................................... 2
1.3 VxWorks Configuration and Build ..................................................................... 3
2 Real-Time Processes ........................................................................... 5
2.1 Introduction ............................................................................................................. 6
2.2 About Real-time Processes ................................................................................... 7
2.2.1 RTPs and Scheduling ............................................................................... 8
2.2.2 RTP Creation ............................................................................................. 8
2.2.3 RTP Termination ...................................................................................... 10
2.2.4 RTPs and Memory ................................................................................... 10
Virtual Memory Models .......................................................................... 11
Memory Protection .................................................................................. 11
2.2.5 RTPs and Tasks ......................................................................................... 11
Numbers of Tasks and RTPs .................................................................. 12
Initial Task in an RTP .............................................................................. 12
RTP Tasks and Memory .......................................................................... 12
2.2.6 RTPs and Inter-Process Communication .............................................. 13
2.2.7 RTPs, Inheritance, Zombies, and Resource Reclamation ................... 13
Inheritance ................................................................................................. 13
Zombie Processes ..................................................................................... 14
Resource Reclamation .............................................................................. 14
2.2.8 RTPs and Environment Variables .......................................................... 15
Setting Environment Variables From Outside a Process .................... 15
Setting Environment Variables From Within a Process ..................... 16
2.2.9 RTPs and POSIX ....................................................................................... 16
POSIX PSE52 Support .............................................................................. 16
2.3 Configuring VxWorks For Real-time Processes .............................................. 17
2.3.1 Basic RTP Support .................................................................................... 17
2.3.2 MMU Support for RTPs .......................................................................... 18
2.3.3 Additional Component Options ............................................................ 19
2.3.4 Configuration and Build Facilities ......................................................... 20
2.4 Using RTPs Without MMU Support .................................................................. 20
Configuration With Process Support and Without an MMU ............. 22
2.5 About VxWorks RTP Virtual Memory Models ............................................... 23
2.5.1 Flat RTP Virtual Memory Model .......................................................... 23
2.5.2 Overlapped RTP Virtual Memory Model ............................................ 24
2.6 Using the Overlapped RTP Virtual Memory Model ...................................... 26
2.6.1 About User Regions and the RTP Code Region ................................. 26
User Regions of Virtual Memory ........................................................... 27
RTP Code Region in Virtual Memory .................................................. 27
2.6.2 Configuring VxWorks for Overlapped RTP Virtual Memory .......... 29
Getting Information About User Regions ............................................ 29
Identifying the RTP Code Region .......................................................... 31
Setting Configuration Parameters for the RTP Code Region ............ 33
2.6.3 Using RTP Applications With Overlapped RTP Virtual Memory ... 35
Building Absolutely-Linked RTP Executables ..................................... 35
Stripping Absolutely-Linked RTP Executables .................................. 36
Executing Absolutely-Linked RTP Executables ................................. 37
Executing Relocatable RTP Executables ............................................... 38
3 RTP Applications ................................................................................. 39
3.1 Introduction ............................................................................................................. 39
3.2 Configuring VxWorks For RTP Applications .................................................. 40
3.3 Developing RTP Applications ............................................................................. 40
RTP Applications With Shared Libraries and Plug-Ins ...................... 41
RTP Applications for the Overlapped Virtual Memory Model ........ 42
RTP Applications for UP and SMP Configurations of VxWorks ...... 42
Migrating Kernel Applications to RTP Applications .......................... 42
3.3.1 RTP Application Structure ...................................................................... 42
3.3.2 VxWorks Header Files ............................................................................. 43
POSIX Header Files .................................................................................. 44
VxWorks Header File: vxWorks.h ......................................................... 44
Other VxWorks Header Files ................................................................. 44
ANSI Header Files ................................................................................... 44
ANSI C++ Header Files ........................................................................... 45
Compiler -I Flag ........................................................................................ 45
VxWorks Nested Header Files ............................................................... 45
VxWorks Private Header Files ............................................................... 46
3.3.3 RTP Application APIs: System Calls and Library Routines .............. 46
VxWorks System Calls ............................................................................ 46
VxWorks Libraries ................................................................................... 47
Dinkum C and C++ Libraries ................................................................. 48
Custom Libraries ...................................................................................... 48
API Documentation ................................................................................. 48
3.3.4 Reducing Executable File Size With the strip Facility ........................ 48
3.3.5 RTP Applications and Multitasking ...................................................... 49
3.3.6 Checking for Required Kernel Support ................................................ 49
3.3.7 Using Hook Routines ............................................................................... 50
3.3.8 Developing C++ Applications ................................................................ 50
3.3.9 Using POSIX Facilities ............................................................................. 50
3.3.10 Building RTP Applications ..................................................................... 50
3.4 Developing Static Libraries, Shared Libraries and Plug-Ins ......................... 50
3.5 Creating and Using Shared Data Regions ......................................................... 51
3.5.1 Configuring VxWorks for Shared Data Regions ................................. 52
3.5.2 Creating Shared Data Regions ............................................................... 52
3.5.3 Accessing Shared Data Regions ............................................................. 53
3.5.4 Deleting Shared Data Regions ................................................................ 53
3.6 Executing RTP Applications ................................................................................ 54
Caveat With Regard to Stripped Executables ...................................... 54
Starting an RTP Application ................................................................... 55
Stopping an RTP Application ................................................................. 55
Storing Application Executables ............................................................ 56
3.6.1 Running Applications Interactively ...................................................... 57
Starting Applications ............................................................................... 57
Terminating Applications ....................................................................... 58
3.6.2 Running Applications Automatically ................................................... 58
Startup Facility Options .......................................................................... 59
Application Startup String Syntax ......................................................... 60
Specifying Applications with a Startup Configuration Parameter ... 61
Specifying Applications with a Boot Loader Parameter .................... 62
Specifying Applications with a VxWorks Shell Script ........................ 63
Specifying Applications with usrRtpAppInit( ) .................................. 64
3.6.3 Spawning Tasks and Executing Routines in an RTP Application .... 65
3.6.4 Applications and Symbol Registration ................................................. 65
3.7 Bundling RTP Applications in a System using ROMFS ................................ 66
3.7.1 Configuring VxWorks with ROMFS ..................................................... 67
3.7.2 Building a System With ROMFS and Applications ............................ 67
3.7.3 Accessing Files in ROMFS ....................................................................... 67
3.7.4 Using ROMFS to Start Applications Automatically ........................... 68
4 Static Libraries, Shared Libraries, and Plug-Ins ............................... 69
4.1 Introduction ............................................................................................................. 70
4.2 About Static Libraries, Shared Libraries, and Plug-ins .................................. 70
Advantages and Disadvantages of Shared Libraries and Plug-Ins .. 71
4.3 Additional Documentation .................................................................................. 73
4.4 Configuring VxWorks for Shared Libraries and Plug-ins ............................. 73
4.5 Common Development Issues: Initialization and Termination ................... 74
4.5.1 Library and Plug-in Initialization .......................................................... 74
4.5.2 C++ Initialization ..................................................................................... 76
4.5.3 Handling Initialization Failures ............................................................. 76
4.5.4 Shared Library and Plug-in Termination ............................................. 77
Using Cleanup Routines ......................................................................... 77
4.6 Common Development Facilities ........................................................................ 78
4.7 Developing Static Libraries .................................................................................. 78
4.7.1 Initialization and Termination ............................................................... 78
4.8 Developing Shared Libraries ............................................................................... 79
4.8.1 About Dynamic Linking ......................................................................... 79
Dynamic Linker ........................................................................................ 79
Position Independent Code: PIC ............................................................ 80
4.8.2 Configuring VxWorks for Shared Libraries ......................................... 80
4.8.3 Initialization and Termination ............................................................... 80
4.8.4 About Shared Library Names and ELF Records ................................. 80
4.8.5 Creating Shared Object Names for Shared Libraries .......................... 81
Options for Defining Shared Object Names and Versions ................ 82
Match Shared Object Names and Shared Library File Names .......... 82
4.8.6 Using Different Versions of Shared Libraries ...................................... 82
4.8.7 Locating and Loading Shared Libraries at Run-time .......................... 83
Specifying Shared Library Locations: Options and Search Order .... 83
Using the LD_LIBRARY_PATH Environment Variable .................... 84
Using the ld.so.conf Configuration File ................................................ 85
Using the ELF RPATH Record ............................................................... 85
Using the Application Directory ............................................................ 86
Pre-loading Shared Libraries .................................................................. 86
4.8.8 Using Lazy Binding With Shared Libraries .......................................... 87
4.8.9 Developing RTP Applications That Use Shared Libraries ................. 88
4.8.10 Getting Runtime Information About Shared Libraries ....................... 88
4.8.11 Debugging Problems With Shared Library Use .................................. 89
Shared Library Not Found ...................................................................... 89
Incorrectly Started Application .............................................................. 90
Using readelf to Examine Dynamic ELF Files ...................................... 90
4.8.12 Working With Shared Libraries From a Windows Host .................... 92
Using NFS .................................................................................................. 93
Installing NFS on Windows .................................................................... 93
Configuring VxWorks With NFS ........................................................... 93
Testing the NFS Connection ................................................................... 94
4.9 Developing Plug-Ins .............................................................................................. 94
4.9.1 Configuring VxWorks for Plug-Ins ....................................................... 95
4.9.2 Initialization and Termination ............................................................... 95
4.9.3 Developing RTP Applications That Use Plug-Ins ............................... 95
Code Requirements .................................................................................. 95
Build Requirements ................................................................................. 96
Locating Plug-Ins at Run-time ............................................................... 96
Using Lazy Binding With Plug-ins ........................................................ 96
Example of Dynamic Linker API Use ................................................... 97
Example Application Using a Plug-In ................................................... 97
Routines for Managing Plug-Ins ............................................................ 99
4.9.4 Debugging Plug-Ins ................................................................................. 99
4.10 Using the VxWorks Run-time C Shared Library libc.so ................................. 100
5 C++ Development ................................................................................. 101
5.1 Introduction ............................................................................................................. 101
5.2 C++ Code Requirements ....................................................................................... 102
5.3 C++ Compiler Differences ................................................................................... 102
5.3.1 Template Instantiation ............................................................................. 103
5.3.2 Run-Time Type Information ................................................................... 104
5.4 Namespaces ............................................................................................................. 104
5.5 C++ Demo Example ............................................................................................... 105
6 Multitasking .......................................................................................... 107
6.1 Introduction ............................................................................................................. 109
6.2 Tasks and Multitasking ....................................................................................... 110
6.2.1 Task States and Transitions .................................................................... 111
Task States and State Symbols .................................................. 112
Illustration of Basic Task State Transitions ........................................... 113
6.3 Task Scheduling ..................................................................................................... 115
6.3.1 Task Priorities ........................................................................................... 115
6.3.2 Task Scheduling Control ......................................................................... 115
Task Priority .............................................................................................. 116
Preemption Locks ..................................................................................... 116
6.3.3 VxWorks Traditional Scheduler ............................................................. 117
Priority-Based Preemptive Scheduling ................................................. 117
Scheduling and the Ready Queue ......................................................... 118
Round-Robin Scheduling ........................................................................ 119
6.4 Task Creation and Management ......................................................................... 121
6.4.1 Task Creation and Activation ................................................................. 121
6.4.2 Task Names and IDs ................................................................................ 122
6.4.3 Inter-Process Communication With Public Tasks ............................... 124
6.4.4 Task Creation Options ............................................................................. 124
6.4.5 Task Stack .................................................................................................. 125
Task Stack Protection ............................................................................... 126
6.4.6 Task Information ...................................................................................... 127
6.4.7 Task Deletion and Deletion Safety ......................................................... 128
6.4.8 Task Execution Control ........................................................................... 129
6.4.9 Tasking Extensions: Hook Routines ...................................................... 131
6.5 Task Error Status: errno ......................................................................................... 132
6.5.1 A Separate errno Value for Each Task .................................................. 133
6.5.2 Error Return Convention ........................................................................ 133
6.5.3 Assignment of Error Status Values ........................................................ 133
6.6 Task Exception Handling ...................................................................................... 134
6.7 Shared Code and Reentrancy ............................................................................... 134
6.7.1 Dynamic Stack Variables ......................................................................... 136
6.7.2 Guarded Global and Static Variables .................................................... 136
6.7.3 Task-Specific Variables ........................................................................... 137
Thread-Local Variables: __thread Storage Class ................................. 137
tlsOldLib and Task Variables ................................................................ 138
6.7.4 Multiple Tasks with the Same Main Routine ....................................... 138
6.8 Intertask and Interprocess Communication ...................................................... 139
6.9 Inter-Process Communication With Public Objects ....................................... 140
Creating and Naming Public and Private Objects ............................... 141
6.10 Object Ownership and Resource Reclamation ................................................. 142
6.11 Shared Data Structures .......................................................................................... 142
6.12 Mutual Exclusion .................................................................................................... 143
6.13 Semaphores ............................................................................................................. 144
6.13.1 Inter-Process Communication With Public Semaphores ................... 145
6.13.2 Semaphore Control .................................................................................. 145
Options for Scalable and Inline Semaphore Routines ........................ 147
Static Instantiation of Semaphores ........................................................ 148
Scalable and Inline Semaphore Take and Give Routines ................... 149
6.13.3 Binary Semaphores .................................................................................. 149
Mutual Exclusion ..................................................................................... 151
Synchronization ........................................................................................ 152
6.13.4 Mutual-Exclusion Semaphores .............................................................. 152
Priority Inversion and Priority Inheritance .......................................... 153
Deletion Safety .......................................................................................... 156
Recursive Resource Access ..................................................................... 156
6.13.5 Counting Semaphores ............................................................................. 157
6.13.6 Read/Write Semaphores ........................................................................ 158
Specification of Read or Write Mode .................................................... 159
Precedence for Write Access Operations .............................................. 160
Read/Write Semaphores and System Performance ........................... 160
6.13.7 Special Semaphore Options .................................................................... 160
Semaphore Timeout ................................................................................. 161
Semaphores and Queueing ..................................................................... 161
Semaphores Interruptible by Signals ................................................... 162
Semaphores and VxWorks Events ......................................................... 162
6.14 Message Queues ..................................................................................................... 162
6.14.1 Inter-Process Communication With Public Message Queues ........... 163
6.14.2 VxWorks Message Queue Routines ...................................................... 164
Message Queue Timeout ......................................................................... 164
Message Queue Urgent Messages ......................................................... 165
Message Queues Interruptible by Signals ............................................ 166
Message Queues and Queuing Options ............................................... 166
6.14.3 Displaying Message Queue Attributes ................................................. 166
6.14.4 Servers and Clients with Message Queues ........................................... 167
6.14.5 Message Queues and VxWorks Events ................................................. 168
6.15 Pipes .......................................................................................................................... 168
6.16 VxWorks Events ...................................................................................................... 169
6.16.1 Preparing a Task to Receive Events ....................................................... 170
6.16.2 Sending Events to a Task ........................................................................ 171
6.16.3 Accessing Event Flags .............................................................................. 173
6.16.4 Events Routines ........................................................................................ 173
6.16.5 Task Events Register ................................................................................ 174
6.16.6 Show Routines and Events ..................................................................... 174
6.17 Message Channels ................................................................................................. 175
6.18 Network Communication ..................................................................................... 175
6.19 Signals ..................................................................................................................... 176
6.19.1 Configuring VxWorks for Signals ......................................................... 178
6.19.2 Basic Signal Routines ............................................................................... 178
6.19.3 Queued Signal Routines ......................................................................... 179
6.19.4 Signal Events ............................................................................................. 184
6.19.5 Signal Handlers ........................................................................................ 185
6.20 Timers ....................................................................................................................... 188
6.20.1 Inter-Process Communication With Public Timers ............................. 188
7 POSIX Facilities .................................................................................... 189
7.1 Introduction ............................................................................................................. 191
7.2 Configuring VxWorks with POSIX Facilities ................................................... 192
7.2.1 POSIX PSE52 Support ............................................................................. 193
7.2.2 VxWorks Components for POSIX Facilities ......................................... 195
7.3 General POSIX Support ........................................................................................ 196
7.4 Standard C Library: libc ........................................................................................ 198
7.5 POSIX Header Files ............................................................................................... 199
7.6 POSIX Namespace .................................................................................................. 201
7.7 POSIX Process Privileges ..................................................................................... 203
7.8 POSIX Process Support ......................................................................................... 203
7.9 POSIX Clocks and Timers .................................................................................... 204
7.10 POSIX Asynchronous I/O ..................................................................................... 208
7.11 POSIX Advisory File Locking .............................................................................. 209
7.12 POSIX Page-Locking Interface ............................................................................ 209
7.13 POSIX Threads ........................................................................................................ 210
7.13.1 POSIX Thread Stack Guard Zones ........................................................ 211
7.13.2 POSIX Thread Attributes ........................................................................ 211
7.13.3 VxWorks-Specific Pthread Attributes ................................................... 212
7.13.4 Specifying Attributes when Creating Pthreads .................................. 213
7.13.5 POSIX Thread Creation and Management ........................................... 215
7.13.6 POSIX Thread Attribute Access ............................................................. 216
7.13.7 POSIX Thread Private Data .................................................................... 217
7.13.8 POSIX Thread Cancellation .................................................................... 218
7.14 POSIX Thread Mutexes and Condition Variables ........................................... 220
7.14.1 Thread Mutexes ........................................................................................ 220
Type Mutex Attribute .............................................................................. 221
Protocol Mutex Attribute ....................................................................... 222
Priority Ceiling Mutex Attribute ........................................................... 222
7.14.2 Condition Variables ................................................................................. 223
7.15 POSIX and VxWorks Scheduling ........................................................................ 225
7.15.1 Differences in POSIX and VxWorks Scheduling ................................. 226
7.15.2 POSIX and VxWorks Priority Numbering ........................................... 227
7.15.3 Default Scheduling Policy ....................................................................... 227
7.15.4 VxWorks Traditional Scheduler ............................................................. 228
7.15.5 POSIX Threads Scheduler ....................................................................... 229
7.15.6 POSIX Scheduling Routines .................................................................... 234
7.15.7 Getting Scheduling Parameters: Priority Limits and Time Slice ....... 234
7.16 POSIX Semaphores ................................................................................................ 235
7.16.1 Comparison of POSIX and VxWorks Semaphores .............................. 237
7.16.2 Using Unnamed Semaphores ................................................................. 237
7.16.3 Using Named Semaphores ..................................................................... 241
7.17 POSIX Message Queues ........................................................................................ 245
7.17.1 Comparison of POSIX and VxWorks Message Queues ...................... 246
7.17.2 POSIX Message Queue Attributes ......................................................... 247
7.17.3 Communicating Through a Message Queue ....................................... 249
7.17.4 Notification of Message Arrival ............................................................ 253
7.18 POSIX Signals ......................................................................................................... 259
7.19 POSIX Memory Management .............................................................................. 259
7.19.1 POSIX Memory Management APIs ....................................................... 259
7.19.2 Anonymous Memory Mapping ............................................................. 261
7.19.3 Shared Memory Objects .......................................................................... 263
7.19.4 Memory Mapped Files ............................................................................ 264
7.19.5 Memory Protection .................................................................................. 264
7.19.6 Memory Locking ...................................................................................... 265
7.20 POSIX Trace ............................................................................................................. 265
Trace Events, Streams, and Logs ............................................................ 265
Trace Operation ........................................................................................ 266
Trace APIs ................................................................................................. 267
Trace Code and Record Example ........................................................... 269
8 Memory Management .......................................................................... 271
8.1 Introduction ............................................................................................................. 271
8.2 Configuring VxWorks With Memory Management ....................................... 272
8.3 Heap and Memory Partition Management ........................................................ 272
8.4 Memory Error Detection ....................................................................................... 274
8.4.1 Heap and Partition Memory Instrumentation ..................................... 275
8.4.2 Compiler Instrumentation ...................................................................... 281
9 I/O System ............................................................................................. 287
9.1 Introduction ............................................................................................................. 287
9.2 Configuring VxWorks With I/O Facilities ........................................................ 289
9.3 Files, Devices, and Drivers ................................................................................... 290
Filenames and the Default Device ......................................................... 290
9.4 Basic I/O ................................................................................................................... 292
9.4.1 File Descriptors ......................................................................................... 292
File Descriptor Table ................................................................................ 293
9.4.2 Standard Input, Standard Output, and Standard Error ..................... 293
9.4.3 Standard I/O Redirection ....................................................................... 294
9.4.4 Open and Close ........................................................................................ 296
9.4.5 Create and Remove .................................................................................. 298
9.4.6 Read and Write ......................................................................................... 299
9.4.7 File Truncation .......................................................................................... 300
9.4.8 I/O Control ............................................................................................... 300
9.4.9 Pending on Multiple File Descriptors with select( ) ............................ 301
9.4.10 POSIX File System Routines ................................................................... 302
9.5 Buffered I/O: stdio .................................................................................................. 303
9.5.1 Using stdio ................................................................................................ 303
9.5.2 Standard Input, Standard Output, and Standard Error ..................... 304
9.6 Other Formatted I/O ............................................................................................... 305
9.7 Asynchronous Input/Output ................................................................................ 305
9.7.1 The POSIX AIO Routines ........................................................................ 305
9.7.2 AIO Control Block .................................................................................... 306
9.7.3 Using AIO .................................................................................................. 307
Alternatives for Testing AIO Completion ............................................ 308
9.8 Devices in VxWorks ............................................................................................... 308
9.8.1 Serial I/O Devices: Terminal and Pseudo-Terminal Devices ............ 309
tty Options ................................................................................................. 309
Raw Mode and Line Mode ..................................................................... 310
tty Special Characters .............................................................................. 311
9.8.2 Pipe Devices .............................................................................................. 312
Creating Pipes ........................................................................................... 312
I/O Control Functions ............................................................................. 313
9.8.3 Pseudo I/O Device ................................................................................... 313
I/O Control Functions ............................................................................. 313
9.8.4 Network File System (NFS) Devices ...................................................... 314
I/O Control Functions for NFS Clients ................................................. 314
9.8.5 Non-NFS Network Devices .................................................................... 315
I/O Control Functions ............................................................................. 316
9.8.6 Null Devices ............................................................................................. 316
9.8.7 Sockets ........................................................................................................ 316
9.8.8 Transaction-Based Reliable File System Facility: TRFS ...................... 317
Configuring VxWorks With TRFS ......................................................... 317
Automatic Instantiation of TRFS ............................................................ 318
Using TRFS in Applications .................................................................... 318
TRFS Code Example ............................................................................... 319
10 Local File Systems ............................................................................... 321
10.1 Introduction ............................................................................................................. 322
10.2 File System Monitor .............................................................................................. 325
10.3 Virtual Root File System: VRFS .......................................................................... 325
10.4 Highly Reliable File System: HRFS .................................................................... 327
10.4.1 Configuring VxWorks for HRFS ............................................................ 327
10.4.2 Configuring HRFS ................................................................................... 328
10.4.3 HRFS and POSIX PSE52 .......................................................................... 329
10.4.4 Creating an HRFS File System .............................................................. 330
10.4.5 Transactional Operations and Commit Policies ................................ 330
10.4.6 Configuring Transaction Points at Runtime ....................................... 332
10.4.7 File Access Time Stamps ......................................................................... 333
10.4.8 Maximum Number of Files and Directories ........................................ 334
10.4.9 Working with Directories ....................................................................... 334
Creating Subdirectories ........................................................................... 334
Removing Subdirectories ........................................................................ 334
Reading Directory Entries ....................................................................... 335
10.4.10 Working with Files ................................................................................... 335
File I/O Routines ...................................................................................... 335
File Linking and Unlinking ..................................................................... 335
File Permissions ........................................................................................ 336
10.4.11 I/O Control Functions Supported by HRFS ........................................ 336
10.4.12 Crash Recovery and Volume Consistency ........................................... 337
10.4.13 File Management and Full Devices ....................................................... 337
10.5 MS-DOS-Compatible File System: dosFs ......................................................... 339
10.5.1 Configuring VxWorks for dosFs ............................................................ 339
10.5.2 Configuring dosFs ................................................................................... 341
10.5.3 Creating a dosFs File System .................................................................. 342
10.5.4 Working with Volumes and Disks ......................................................... 342
Accessing Volume Configuration Information .................................... 342
Synchronizing Volumes .......................................................................... 343
10.5.5 Working with Directories ........................................................................ 343
Creating Subdirectories ........................................................................... 343
Removing Subdirectories ........................................................................ 343
Reading Directory Entries ....................................................................... 344
10.5.6 Working with Files ................................................................................... 344
File I/O Routines ...................................................................................... 344
File Attributes ........................................................................................... 344
10.5.7 Disk Space Allocation Options ............................................................... 347
Choosing an Allocation Method ............................................................ 347
Using Cluster Group Allocation ............................................................ 348
Using Absolutely Contiguous Allocation ............................................. 348
10.5.8 Crash Recovery and Volume Consistency ........................................... 350
10.5.9 I/O Control Functions Supported by dosFsLib ................................... 350
10.5.10 Booting from a Local dosFs File System Using SCSI .......................... 352
10.6 Raw File System: rawFs ......................................................................................... 352
10.6.1 Configuring VxWorks for rawFs ........................................................... 353
10.6.2 Creating a rawFs File System ................................................................. 353
10.6.3 Mounting rawFs Volumes ...................................................................... 353
10.6.4 rawFs File I/O ........................................................................................... 354
10.6.5 I/O Control Functions Supported by rawFsLib .................................. 354
10.7 CD-ROM File System: cdromFs .......................................................................... 355
10.7.1 Configuring VxWorks for cdromFs ....................................................... 357
10.7.2 Creating and Using cdromFs .................................................................. 357
10.7.3 I/O Control Functions Supported by cdromFsLib ............................. 357
10.7.4 Version Numbers ..................................................................................... 358
10.8 Read-Only Memory File System: ROMFS ........................................................ 359
10.8.1 Configuring VxWorks with ROMFS ..................................................... 359
10.8.2 Building a System With ROMFS and Files ........................................... 359
10.8.3 Accessing Files in ROMFS ...................................................................... 360
10.8.4 Using ROMFS to Start Applications Automatically ........................... 361
10.9 Target Server File System: TSFS ......................................................................... 361
Socket Support .......................................................................................... 362
Error Handling ......................................................................................... 362
Configuring VxWorks for TSFS Use ...................................................... 363
Security Considerations .......................................................................... 363
Using the TSFS to Boot a Target ............................................................. 364
11 Error Detection and Reporting ............................................................ 365
11.1 Introduction ............................................................................................................. 365
11.2 Configuring Error Detection and Reporting Facilities ................................... 366
11.2.1 Configuring VxWorks ............................................................................. 366
11.2.2 Configuring the Persistent Memory Region ........................................ 367
11.2.3 Configuring Responses to Fatal Errors ................................................. 368
11.3 Error Records ........................................................................................................... 368
11.4 Displaying and Clearing Error Records ............................................................. 370
11.5 Fatal Error Handling Options .............................................................................. 371
11.5.1 Configuring VxWorks with Error Handling Options ......................... 372
11.5.2 Setting the System Debug Flag ............................................................... 373
Setting the Debug Flag Statically ........................................................... 373
Setting the Debug Flag Interactively ..................................................... 373
11.6 Other Error Handling Options for Processes ................................................... 374
11.7 Using Error Reporting APIs in Application Code ........................................... 374
11.8 Sample Error Record .............................................................................................. 375
A Kernel to RTP Application Migration ................................................. 377
A.1 Introduction ............................................................................................................ 377
A.2 Migrating Kernel Applications to Processes ..................................................... 377
A.2.1 Reducing Library Size ............................................................................. 378
A.2.2 Limiting Process Scope ............................................................................ 378
Communicating Between Applications ................................................ 378
Communicating Between an Application and the Kernel ................ 379
A.2.3 Using C++ Initialization and Finalization Code .................................. 379
A.2.4 Eliminating Hardware Access ................................................................ 380
A.2.5 Eliminating Interrupt Contexts In Processes ........................................ 381
POSIX Signals ........................................................................................... 381
Watchdogs ................................................................................................. 381
Drivers ....................................................................................................... 382
A.2.6 Redirecting I/O ........................................................................................ 382
A.2.7 Process and Task API Differences ......................................................... 384
Task Naming ............................................................................................. 384
Differences in Scope Between Kernel and User Modes ...................... 384
Task Locking and Unlocking .................................................................. 385
Private and Public Objects ...................................................................... 385
A.2.8 Semaphore Differences ............................................................................ 386
A.2.9 POSIX Signal Differences ........................................................................ 386
Signal Generation ..................................................................................... 386
Signal Delivery ......................................................................................... 387
Scope Of Signal Handlers ....................................................................... 387
Default Handling Of Signals .................................................................. 387
Default Signal Mask for New Tasks ...................................................... 388
Signals Sent to Blocked Tasks ................................................................. 388
Signal API Behavior ................................................................................. 388
A.2.10 Networking Issues .................................................................................. 389
Socket APIs ................................................................................................ 389
routeAdd( ) ................................................................................................ 389
A.2.11 Header File Differences ........................................................................... 389
A.3 Differences in Kernel and RTP APIs .................................................................. 390
A.3.1 APIs Not Present in User Mode ............................................................. 390
A.3.2 APIs Added for User Mode Only .......................................................... 391
A.3.3 APIs that Work Differently in Processes .............................................. 391
A.3.4 ANSI and POSIX API Differences ......................................................... 392
A.3.5 Kernel Calls Require Kernel Facilities ................................................... 392
A.3.6 Other API Differences ............................................................................. 393
Index .............................................................................................................. 395
1 Overview
1.1 Introduction 1
1.2 Related Documentation Resources 2
1.3 VxWorks Configuration and Build 3
1.1 Introduction
This guide describes the VxWorks operating system, and how to use VxWorks
facilities in the development of real-time systems and applications. It covers the
following topics:
- real-time processes (RTPs)
- RTP applications
- static libraries, shared libraries, and plug-ins
- C++ development
- multitasking facilities
- POSIX facilities
- memory management
- I/O system
- local file systems
- error detection and reporting
1.2 Related Documentation Resources
The companion volume to this book, the VxWorks Kernel Programmer’s Guide,
provides material specific to kernel features and kernel-based development.
Detailed information about VxWorks libraries and routines is provided in the
VxWorks API references. Information specific to target architectures is provided in
the VxWorks BSP references and in the VxWorks Architecture Supplement.
For information about BSP and driver development, see the VxWorks BSP
Developer’s Guide and the VxWorks Device Driver Guide.
The VxWorks networking facilities are documented in the Wind River Network
Stack Programmer’s Guide and the VxWorks PPP Programmer’s Guide.
For information about migrating applications, BSPs, drivers, and projects from
previous versions of VxWorks and the host development environment, see the
VxWorks Migration Guide and the Wind River Workbench Migration Guide.
The Wind River IDE and command-line tools are documented in the Wind River
Workbench by Example guide, the VxWorks Command-Line Tools User’s Guide, the
Wind River compiler and GNU compiler guides, and the Wind River tools API and
command-line references.
NOTE: This book provides information about facilities available for real-time
processes. For information about facilities available in the VxWorks kernel, see the
VxWorks Kernel Programmer’s Guide.
1.3 VxWorks Configuration and Build
This document describes VxWorks features; it does not go into detail about the
mechanisms by which VxWorks-based systems and applications are configured
and built. The tools and procedures used for configuration and build are described
in the Wind River Workbench by Example guide and the VxWorks Command-Line Tools
User’s Guide.
NOTE: In this guide, as well as in the VxWorks API references, VxWorks
components and their configuration parameters are identified by the names used
in component description files. The names take the form, for example, of
INCLUDE_FOO and NUM_FOO_FILES (for components and parameters,
respectively).
You can use these names directly to configure VxWorks using the command-line
configuration facilities.
Wind River Workbench displays descriptions of components and parameters, as
well as their names, in the Components tab of the Kernel Configuration Editor.
You can use the Find dialog to locate a component or parameter using its name or
description. To access the Find dialog from the Components tab, type CTRL+F, or
right-click and select Find.
2 Real-Time Processes
2.1 Introduction 6
2.2 About Real-time Processes 7
2.3 Configuring VxWorks For Real-time Processes 17
2.4 Using RTPs Without MMU Support 20
2.5 About VxWorks RTP Virtual Memory Models 23
2.6 Using the Overlapped RTP Virtual Memory Model 26
2.1 Introduction
VxWorks real-time processes (RTPs) are in many respects similar to processes in
other operating systems—such as UNIX and Linux—including extensive POSIX
compliance.1 The ways in which they are created, execute applications, and
terminate will be familiar to developers who understand the UNIX process model.
The VxWorks process model is, however, designed for use with real-time
embedded systems. The features that support this model include system-wide
scheduling of tasks (processes themselves are not scheduled), preemption of
processes in kernel mode as well as user mode, process creation in two steps to
separate loading from instantiation, and loading applications in their entirety.
VxWorks real-time processes provide the means for executing applications in user
mode. Each process has its own address space, which contains the executable
program, the program’s data, stacks for each task, the heap, and resources
associated with the management of the process itself (such as memory-allocation
tracking). Many processes may be present in memory at once, and each process
may contain more than one task (sometimes known as a thread in other operating
systems).
VxWorks processes can operate with two different virtual memory models: flat
(the default) or overlapped (optional). With the flat virtual-memory model each
VxWorks process has its own region of virtual memory described by a unique
range of addresses. This model provides advantages in execution speed, in a
programming model that accommodates systems with and without an MMU, and
in debugging applications. With the overlapped virtual-memory model, each
VxWorks process uses the same range of virtual addresses for the area in which its
code (text, data, and bss segments) resides. This model provides more precise
control over the virtual memory space and allows for notably faster application
load time.
For information about developing RTP applications, see 3. RTP Applications.
1. VxWorks can be configured to provide POSIX PSE52 support for individual processes.
2.2 About Real-time Processes
A common definition of a process is “a program in execution,” and VxWorks
processes are no different in this respect. In fact, the life-cycle of VxWorks real-time
processes is largely consistent with the POSIX process model (see 2.2.9 RTPs and
POSIX, p.16).
VxWorks processes, however, are called real-time processes (RTPs) precisely
because they are designed to support the determinism required of real-time
systems. They do so in the following ways:
The VxWorks task-scheduling model is maintained. Processes are not
scheduled—tasks are scheduled globally throughout the system.
Processes can be preempted in kernel mode as well as in user mode. Every task
has both a user mode and a kernel mode stack. (The VxWorks kernel is fully
preemptive.)
Processes are created without the overhead of performing a copy of the
address space for the new process and then performing an exec operation to
load the file. With VxWorks, a new address space is simply created and the file
loaded.
Process creation takes place in two phases that clearly separate instantiation of
the process from loading and executing the application. The first phase is
performed in the context of the task that calls rtpSpawn( ). The second phase
is carried out by a separate task that bears the cost of loading the application
text and data before executing it, and which operates at its own priority level
distinct from the parent task. The parent task, which called rtpSpawn( ), is not
impacted and does not have to wait for the application to begin execution, unless
it has been coded to wait.
Processes load applications in their entirety—there is no demand paging.
All of these differences are designed to make VxWorks particularly suitable for
hard real-time applications by ensuring determinism, as well as providing a
common programming model for systems that run with an MMU and those that
do not. As a result, there are differences between the VxWorks process model and
that of server-style operating systems such as UNIX and Linux. The reasons for
these differences are discussed as the relevant topic arises throughout this chapter.
2.2.1 RTPs and Scheduling
The primary way in which VxWorks processes support determinism is that they
themselves are simply not scheduled. Only tasks are scheduled in VxWorks
systems, using a priority-based, preemptive policy. Based on the strong
preemptibility of the VxWorks kernel, this ensures that at any given time, the
highest priority task in the system that is ready to run will execute, regardless of
whether the task is in the kernel or in any process in the system.
By way of contrast, the scheduling policy for non-real-time systems is based on
time-sharing, as well as a dynamic determination of process priority that ensures
that no process is denied use of the CPU for too long, and that no process
monopolizes the CPU.
VxWorks does provide an optional time-sharing capability—round-robin
scheduling—but it does not interfere with priority-based preemption, and is
therefore deterministic. VxWorks round-robin scheduling simply ensures that
when there is more than one task with the highest priority ready to run at the same
time, the CPU is shared among those tasks, and no single task can monopolize
the processor.
For more information about VxWorks scheduling see 6.3 Task Scheduling, p.115.
2.2.2 RTP Creation
The manner in which real-time processes are created supports the determinism
required of real-time systems. The creation of an RTP takes place in two distinct
phases, and the executable is loaded in its entirety when the process is created. In
the first phase, the rtpSpawn( ) call creates the process object in the system,
allocates virtual and physical memory to it, and creates the initial process task (see
2.2.5 RTPs and Tasks, p.11). In the second phase, the initial process task loads the
entire executable and starts the main routine.
This approach provides for system determinism in two ways:
First, the work of process creation is divided between the rtpSpawn( ) task and
the initial process task—each of which has its own distinct task priority. This
means that the activity of loading applications does not occur at the priority,
or with the CPU time, of the task requesting the creation of the new process.
Therefore, the initial phase of starting a process is discrete and deterministic,
regardless of the application that is going to run in it. And for the second
phase, the developer can assign the task priority appropriate to the
significance of the application, or to take into account necessarily
indeterministic constraints on loading the application (for example, if the
application is loaded from networked host system, or local disk). The
application is loaded with the same task priority as the priority with which it
will run. In a way, this model is analogous to asynchronous I/O, as the task
that calls rtpSpawn( ) just initiates starting the process and can concurrently
perform other activities while the application is being loaded and started.
Second, the entire application executable is loaded when the process is created,
which means that the determinacy of its execution is not compromised by
incremental loading during execution. This feature is obviously useful when
systems are configured to start applications automatically at boot time—all
executables are fully loaded and ready to execute when the system comes up.
The rtpSpawn( ) routine has an option that provides for synchronizing with the
successful loading and instantiation of the new process.
At startup time, the resources internally required for the process (such as the heap)
are allocated on demand. The application's text is guaranteed to be
write-protected, and the application's data readable and writable, as long as an
MMU is present and the operating system is configured to manage it. While
memory protection is provided by MMU-enforced partitions between processes,
there is no mechanism to provide resource protection by limiting memory usage
of processes to a specified amount. For more information, see 8. Memory
Management.
Note that creation of VxWorks processes involves no copying or sharing of the
parent process's page frames (copy-on-write), as is the case with some versions of
UNIX and Linux. The flat virtual memory model provided by VxWorks prohibits
this approach and the overlapped virtual memory model does not currently
support this feature. For information about the issue of inheritance of attributes
from parent processes, see 2.2.7 RTPs, Inheritance, Zombies, and Resource
Reclamation, p.13.
For information about what operations are possible on a process in each phase of
its instantiation, see the VxWorks API reference for rtpLib. Also see 3.3.7 Using
Hook Routines, p.50.
VxWorks processes can be started in the following ways:
interactively from the kernel shell
interactively from the host shell and debugger
automatically at boot time, using a startup facility
programmatically from applications or the kernel
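Started programmatically, process creation is a single rtpSpawn( ) call. The
following sketch is illustrative only (the executable path, priority, and stack size
are assumptions); see the rtpLib API reference for the authoritative signature and
option flags:

```c
#include <rtpLib.h>   /* VxWorks-only; builds for a VxWorks target */

void startMyApp (void)
    {
    const char * argv[] = { "/romfs/myApp.vxe", NULL };  /* assumed path */
    const char * envp[] = { NULL };                      /* no environment */
    RTP_ID       rtpId;

    /* RTP_LOADED_WAIT asks rtpSpawn( ) to return only after the
     * executable has been fully loaded into the new process. */
    rtpId = rtpSpawn (argv[0], argv, envp,
                      100,              /* priority of the initial task */
                      0x10000,          /* user stack size */
                      RTP_LOADED_WAIT,  /* options */
                      0);               /* initial task options */

    if (rtpId == RTP_ID_ERROR)
        {
        /* creation or loading failed */
        }
    }
```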
For more information in this regard, see 3.6 Executing RTP Applications, p.54.
2.2.3 RTP Termination
Processes are terminated under the following circumstances:
When the last task in the process exits.
If any task in the process calls exit( ), regardless of whether or not other tasks
are running in the process.
If the process’ main( ) routine returns.
This is because exit( ) is called implicitly when main( ) returns. An application
in which main( ) spawns tasks can be written to avoid this behavior—and to
allow its other tasks to continue operation—by including a taskExit( ) call as
the last statement in main( ). See 3.3 Developing RTP Applications, p.40.
If the kill( ) routine is used to terminate the process.
If rtpDelete( ) is called on the process—from a program, a kernel module, the
C interpreter of the shell, or from Workbench. Or if the rtp delete command is
used from the shell’s command interpreter.
If a process takes an exception during its execution.
This default behavior can be changed for debugging purposes. When the error
detection and reporting facilities are included in the system, and they are set
to debug mode, processes are not terminated when an exception occurs.
Note that if a process fails while a shell is running, a message is printed to the shell
console. Error messages can be recorded with the VxWorks error detection and
reporting facilities (see 11. Error Detection and Reporting).
For information about attribute inheritance and what happens to a process’
resources when it terminates, see 2.2.7 RTPs, Inheritance, Zombies, and Resource
Reclamation, p.13.
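The taskExit( ) technique for keeping a process alive after main( ) returns can be
sketched as follows (VxWorks-only; the task name, priority, and stack size are
illustrative):

```c
#include <taskLib.h>

void workerEntry (void);   /* the application's long-running task */

int main (void)
    {
    /* Spawn the worker; the arguments after the entry point are the
     * task's ten optional arguments (unused here). */
    taskSpawn ("tWorker", 120, 0, 0x4000, (FUNCPTR) workerEntry,
               0, 0, 0, 0, 0, 0, 0, 0, 0, 0);

    /* Calling taskExit( ) as the last statement terminates only the
     * initial task; returning from main( ) would call exit( ) and
     * terminate the whole process, including tWorker. */
    taskExit (0);
    return 0;   /* not reached */
    }
```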
2.2.4 RTPs and Memory
Each process has its own address space, which contains the executable program,
the program's data, stacks for each task, the heap, and resources associated with
the management of the process itself (such as local heap management). Many
processes may be present in memory at once.
Virtual Memory Models
VxWorks processes can operate with two different virtual memory models: flat
(the default) or overlapped (optional).
The flat virtual-memory model provides advantages in execution speed, in a
programming model that accommodates systems with and without an MMU, and
in debugging applications. In this model each VxWorks process has its own region
of virtual memory described by a unique range of addresses.
The overlapped virtual-memory model provides more precise control over the
virtual memory space and allows for notably faster application load time. With
this model, each VxWorks process uses the same range of virtual addresses for the
area where its code (text, data, and bss segments) resides. The overlapped
virtual-memory model will not work unless VxWorks is configured with MMU
support and the MMU is turned on.
The two virtual memory models are mutually exclusive, and are described in more
detail in 2.5 About VxWorks RTP Virtual Memory Models, p.23.
Memory Protection
Each process is protected from any other process that is running on the system,
whenever the target system has an MMU, and MMU support has been configured
into VxWorks. Operations involving the code, data, and memory of a process are
accessible only to code executing in that process. It is possible, therefore, to run
several instances of the same application in separate processes without any
undesired side effects occurring between them. The name and symbol spaces of
the kernel and processes are isolated.
As processes run a fully linked image without external references, a process cannot
call a routine in another process, or a kernel routine that is not exported as a system
call—whether or not the MMU is enabled. However, if the MMU is not enabled, a
process can read and write memory external to its own address space, and could
cause the system to malfunction.
2.2.5 RTPs and Tasks
VxWorks can run many processes at once, and any number of processes can run
the same application executable. That is, many instances of an application can be
run concurrently.
For general information about tasks, see 6. Multitasking.
Numbers of Tasks and RTPs
Each process can execute one or more tasks. When a process is created, the system
spawns a single task to initiate execution of the application. The application may
then spawn additional tasks to perform various functions. There is no limit to the
number of tasks in a process, other than that imposed by the amount of available
memory. Similarly, there is no limit to the number of processes in the system on
architectures that do not have (or do not use) a hardware mechanism for
managing concurrent address spaces (usually known as an address space
identifier, or ASID). For target architectures that do use ASIDs or equivalent
mechanisms, the number of processes is limited to the number of ASIDs available
(usually 255). For more information, see the VxWorks Architecture Supplement.
Initial Task in an RTP
When a process is created, an initial task is spawned to begin execution of the
application. The name of the process’s initial task is based on the name of the
executable file, with the following modifications:
The letter i is prefixed.
The first letter of the filename is capitalized.
The filename extension is removed.
For example, when foobar.vxe is run, the name of the initial task is iFoobar.
The initial task provides the execution context for the program’s main( ) routine,
which it then calls. The application itself may then spawn additional tasks.
RTP Tasks and Memory
Task creation includes allocation of space for the task's stack from process
memory. As tasks are created, memory is automatically added to the process from
the kernel free memory pool, as needed.
Heap management routines are available in user-level libraries for tasks in
processes. These libraries provide the various ANSI APIs such as malloc( ) and
free( ). The kernel provides a pool of memory for each process in user space for
these routines to manage.
Providing heap management in user space improves performance, because the
application does not incur the overhead of a system call for memory during its
execution. However, if the heap is exhausted, the system
automatically allocates more memory for the process (by default), in which case a
system call is made. Environment variables control whether or not the heap grows
(see 8.3 Heap and Memory Partition Management, p.272).
2.2.6 RTPs and Inter-Process Communication
While the address space of each process is invisible to tasks running in other
processes, tasks can communicate across process boundaries through the use of
various IPC mechanisms (including public semaphores, public message queues,
and message channels) and shared data memory regions. See 6.8 Intertask and
Interprocess Communication, p.139 and 3.5 Creating and Using Shared Data Regions,
p.51 for more information.
2.2.7 RTPs, Inheritance, Zombies, and Resource Reclamation
VxWorks has a process hierarchy made up of parent/child relationships. Any
process spawned from the kernel (whether programmatically, from the shell or
other development tool, or by an automated startup facility) is a child of the kernel.
Any process spawned by another process is a child of that process. As in human
societies, these relationships are critical with regard to what characteristics
children inherit from their parents, and what happens when a parent or child dies.
Inheritance
VxWorks processes inherit certain attributes of their parent. The child process
inherits the file descriptors (stdin, stdout, and stderr) of its parent process, which
means that both can access the same files (if they are open), as well as the signal
mask. If the child process is started by the kernel, however, it inherits only the
three standard file descriptors. Environment variables are not inherited, but
the parent can pass its environment, or a sub-set of it, to the child process (for
information in this regard, see 2.2.8 RTPs and Environment Variables, p.15).
While the signal mask is not actually a property of a process as such—it is a
property of a task—the signal mask for the initial task in the process is inherited
from the task that spawned it (that is, the task that called the rtpSpawn( ) routine).
If the kernel created the initial task, then the signal mask is zero, and all signals are
unblocked.
The getppid( ) routine returns the parent process ID. If the parent is the kernel, or
the parent is dead, it returns NULL.
Zombie Processes
By default, when a process is terminated, and its parent is not the kernel, it
becomes a zombie process.2
In order to respond to a SIGCHLD signal (which is generated whenever a child
process is stopped or exits, or a stopped process is started) and get the exit status
of the child process, the parent process must call wait( ) or waitpid( ) before the
child exits or is stopped. In this case the parent is blocked waiting. Alternatively,
the parent can set a signal handler for SIGCHLD and call wait( ) or waitpid( ) in the
signal handler. In this case the parent is not blocked. After the parent process
receives the exit status of a child process, the zombie entity is deleted
automatically.
The default behavior with regard to zombies can be modified in the following
ways:
By leaving the parent process unaware of the child process’ termination, and
not creating a zombie. This is accomplished by having the parent process
ignore the SIGCHLD signal. To do so, the parent process makes a sigaction( )
call that sets the SIGCHLD signal handler to SIG_IGN.
By not transforming a terminating child process into a zombie when it exits.
This is accomplished by having the parent process make a sigaction( ) call that
sets the sa_flags field to SA_NOCLDWAIT.
Resource Reclamation
When a process terminates, all resources owned by the process (objects, data, and
so on) are returned to the system, as are the resources used internally for
managing the process. All information about that process is eliminated from the
system (with the exception of any temporary zombie process information).
Resource reclamation ensures that all resources that are not in use are
immediately returned to the system and available for other uses.
2. A zombie process is a “process that has terminated and that is deleted when its exit status
has been reported to another process which is waiting for that process to terminate.” (The
Open Group Base Specifications Issue 6, IEEE Std 1003.1, 2004 Edition.)
Note, however, that there are exceptions to this general rule:
Public objects—which may be referenced by tasks running in other processes
that continue to run—must be explicitly deleted.
Socket objects can persist for some time after a process is terminated. They are
reclaimed only when they are closed, which is driven by the nature of the
TCP/IP state machine. Some sockets must remain open until a timeout is
reached.
File descriptors are reclaimed only when all references to them are closed. This
can occur implicitly when all child processes—which inherit the descriptors
from the parent process—terminate. It can also happen explicitly when all
applications with references to the file descriptors close them.
For information about object ownership, and about public and private objects, see
6.9 Inter-Process Communication With Public Objects, p.140.
2.2.8 RTPs and Environment Variables
By default, a process is created without environment variables. In a manner
consistent with the POSIX standard, all tasks in a process share the same
environment variables—unlike kernel tasks, which each have their own set of
environment variables.
Setting Environment Variables From Outside a Process
While a process is created without environment variables by default, they can be
set from outside the process in the following ways:
If the new process is created by a kernel task, the contents of the kernel task’s
environment array can be duplicated in the application’s environment array.
The envGet( ) routine is used to get the kernel task’s environment, which
is then used in the rtpSpawn( ) call.
If the new process is created by a process, the child process can be passed the
parent’s environment if the environment array is used in the rtpSpawn( ) call.
If the new process is created from the kernel shell—using either the rtp exec
command or the rtpSp( ) routine—then all of the shell's environment is passed to
the new process (the process’ envp is set using the shell’s environment
variables). This makes it simple to set environment variables specifically for a
process by first using putenv( ) to set the variable in the shell’s environment
before creating the process. (For example, this method can be used to set the
LD_LIBRARY_PATH variable for the runtime locations of shared libraries; see
4.8.7 Locating and Loading Shared Libraries at Run-time, p.83.)
For more information, see the rtpSpawn( ) API reference and 3.3.1 RTP Application
Structure, p.42.
Setting Environment Variables From Within a Process
A task in a process (or in an application library) can create, reset, and remove
environment variables in a process. The getenv( ) routine can be used to get the
environment variables, and the setenv( ) and unsetenv( ) routines to change or
remove them. The environment array can also be manipulated directly—however,
Wind River recommends that you do not do so, as this bypasses the thread-safe
implementation of getenv( ), setenv( ) and putenv( ) in the RTP environment.
2.2.9 RTPs and POSIX
The overall behavior of the application environment provided by the real-time
process model is close to the POSIX 1003.1 standard, while maintaining the
embedded and real-time characteristics of the VxWorks operating system. The key
areas of deviation from the standard are that VxWorks does not provide the
following:
process creation with fork( ) and exec( )
memory-mapped files
file ownership and file permissions
For information about POSIX support, see 7. POSIX Facilities.
POSIX PSE52 Support
VxWorks can be configured to provide POSIX PSE52 support (for individual
processes, as defined by the profile). For detailed information, see 2.3 Configuring
VxWorks For Real-time Processes, p.17, 7.1 Introduction, p.191, 7.2 Configuring
VxWorks with POSIX Facilities, p.192, and 10.4.3 HRFS and POSIX PSE52, p.329.
2.3 Configuring VxWorks For Real-time Processes
The VxWorks operating system is configured and built independently of any
applications that it might execute. To support RTP applications, VxWorks need
only be configured with the appropriate components for real-time processes and
any other facilities required by the application (for example, message queues). This
independence of operating system from applications allows for development of a
variety of systems, using differing applications, that are based on a single
VxWorks configuration. That is, a single variant of VxWorks can be combined with
different sets of applications to create different systems. The operating system does
not need to be aware of what applications it will run before it is configured and
built, as long as its configuration includes the components required to support the
applications in question.
RTP applications can either be stored separately or bundled with VxWorks in an
additional build step that combines the operating system and applications into a
single system image (using the ROMFS file system). For information in this regard,
see 2.3.3 Additional Component Options, p.19 and 3.7 Bundling RTP Applications in a
System using ROMFS, p.66.
2.3.1 Basic RTP Support
In order to run RTP applications on a hardware target, VxWorks must be
configured with the INCLUDE_RTP component. Doing so automatically includes
other components required for RTP support.
Note that many of the components described in this chapter provide configuration
parameters. While not all are discussed in this chapter, they can be reviewed and
managed with the kernel configuration facilities (either Workbench or the vxprj
command-line tool).
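With the command-line tools, adding the component might look like the
following (the project file name is illustrative; see the VxWorks Command-Line Tools
User's Guide for the exact vxprj syntax):

```shell
# Add RTP support to an existing kernel configuration project
vxprj component add myProject.wpj INCLUDE_RTP

# Rebuild the VxWorks image with the new component
vxprj build myProject.wpj
```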
Information about alternative configurations that support specific RTP
technologies is covered in the context of the general topics themselves. For
example:
Configuration With Process Support and Without an MMU, p.22
Setting Configuration Parameters for the RTP Code Region, p.33
4.4 Configuring VxWorks for Shared Libraries and Plug-ins, p.73.
2.3.2 MMU Support for RTPs
If a system is configured with INCLUDE_RTP, the VxWorks components required
for MMU memory protection are included by default—except for the MIPS
architecture.
To create a system with processes, but without MMU support, the MMU
components must be removed from the VxWorks configuration after the
INCLUDE_RTP component has been added (for information in this regard see
2.4 Using RTPs Without MMU Support, p.20).
NOTE: The default VxWorks configuration for hardware targets does not include
support for running applications in real-time processes (RTPs). VxWorks must be
re-configured and rebuilt to provide these process facilities. The default
configuration of the VxWorks simulator does, however, include full support for
running RTP applications.
The reason that the default configuration of VxWorks (for hardware targets) does
not include process support is that it facilitates migration of VxWorks 5.5
kernel-based applications to VxWorks 6.x by providing functionally the same basic
set of kernel components, and nothing more.
VxWorks 6.x systems can be created with kernel-based applications and without
any process-based applications, or with a combination of the two. Kernel
applications, however, cannot be provided the same level of protection as
process-based applications. When applications run in kernel space, both the kernel
and those applications are subject to any misbehavior on the part of application code.
For more information about kernel-based applications, see the VxWorks Kernel
Programmer’s Guide: Kernel.
!CAUTION: For the MIPS architecture, MMU support is not provided by default. In
contrast to other architectures, this support is not added automatically when
VxWorks is configured with INCLUDE_RTP. For MIPS, VxWorks must be
configured with a mapped kernel by adding the INCLUDE_MAPPED_KERNEL
component. For more information in this regard, see the VxWorks Architecture
Supplement.
2.3.3 Additional Component Options
The following components provide useful facilities for both development and
deployed systems:
INCLUDE_ROMFS for the ROMFS file system.
INCLUDE_RTP_APPL_USER, INCLUDE_RTP_APPL_INIT_STRING,
INCLUDE_RTP_APPL_INIT_BOOTLINE, and
INCLUDE_RTP_APPL_INIT_CMD_SHELL_SCRIPT for various ways of
automatically starting applications at boot time.
INCLUDE_SHARED_DATA for shared data regions.
INCLUDE_SHL for shared libraries.
INCLUDE_RTP_HOOKS for the programmatic hook facility, which allows for
registering kernel routines that are to be executed at various points in a
process’ life-cycle.
INCLUDE_POSIX_PTHREAD_SCHEDULER and INCLUDE_POSIX_CLOCK for
POSIX thread support. This replaces the traditional VxWorks scheduler with
a scheduler handling user threads in a manner consistent with POSIX.1.
VxWorks tasks as well as kernel pthreads are handled as usual. Note that the
INCLUDE_POSIX_PTHREAD_SCHEDULER is required for using pthreads in
processes. For more information, see 7.15 POSIX and VxWorks Scheduling,
p.225.
INCLUDE_PROTECT_TASK_STACK for stack protection. For deployed
systems this component may be omitted to save on memory usage. See
6.4.5 Task Stack, p.125 for more information.
The following components provide facilities used primarily in development
systems, although they can be useful in deployed systems as well:
The various INCLUDE_SHELL_feature components for the kernel shell, which,
although not required for applications and processes, are needed for running
applications from the command line, executing shell scripts, and on-target
debugging.
The INCLUDE_WDB component for using the host tools.
Either the INCLUDE_NET_SYM_TBL or the
INCLUDE_STANDALONE_SYM_TBL component, which specify whether
symbols for the shell are loaded or built-in.
The INCLUDE_DISK_UTIL and INCLUDE_RTP_SHOW components, which
include useful shell routines.
For information about the kernel shell, symbol tables, and show routines, see the
VxWorks Kernel Programmer’s Guide: Target Tools. For information about the host
shell, see the Wind River Workbench Host Shell User’s Guide.
Component Bundles
The VxWorks configuration facilities provide component bundles to simplify the
configuration process for commonly used sets of operating system facilities. The
following component bundles are provided for process support:
BUNDLE_RTP_DEPLOY is designed for deployed systems (final products), and
is composed of INCLUDE_RTP, INCLUDE_RTP_APPL, INCLUDE_RTP_HOOKS,
INCLUDE_SHARED_DATA, and the BUNDLE_SHL components.
BUNDLE_RTP_DEVELOP is designed for the development environment, and is
composed of BUNDLE_RTP_DEPLOY, INCLUDE_RTP_SHELL_CMD,
INCLUDE_RTP_SHOW, INCLUDE_SHARED_DATA_SHOW,
INCLUDE_SHL_SHOW, INCLUDE_RTP_SHOW_SHELL_CMD, and
INCLUDE_SHL_SHELL_CMD components.
BUNDLE_RTP_POSIX_PSE52 provides POSIX PSE52 support for individual
processes (for more information see 7.2 Configuring VxWorks with POSIX
Facilities, p.192). It can be used with either BUNDLE_RTP_DEPLOY or
BUNDLE_RTP_DEVELOP.
2.3.4 Configuration and Build Facilities
For information about configuring and building VxWorks, see the Wind River
Workbench by Example guide and the VxWorks Command-Line Tools User’s Guide.
Note that the VxWorks simulator includes all of the basic components required for
processes by default.
2.4 Using RTPs Without MMU Support
VxWorks can be configured to provide support for real-time processes on a system
based on a processor without an MMU, or based on a processor with an MMU but
with the MMU disabled.
With this configuration, a software simulation-based memory page management
library keeps track of identity mappings only. This means that there is no address
translation, and memory page attributes (protection attributes and cache
attributes) are not supported.
The advantages of a configuration without MMU support are that it:
Enables the process environment on systems without an MMU. It provides a
private namespace for applications, allows applications to be built
independently of the kernel, and simplifies migration from systems without an
MMU to those with one.
Allows application code to be run in non-privileged (user) mode.
Under certain conditions, it may provide increased performance by
eliminating the overhead of TLB misses and reloads. This assumes, however, that
there is no negative impact due to the changed cache conditions.
The limitations of this configuration are:
Depending on the processor type, BSP configuration, drivers and OS facilities
used, disabling the MMU may require disabling the data cache as well.
Disabling the data cache results in a significant performance penalty that is
much greater than the benefit derived from avoiding TLB misses.
There is no memory protection. That is, memory cannot be write-protected,
and neither the kernel nor any process is protected from other processes.
The address space is limited to the available system RAM, which is typically
smaller than would be available on systems with MMU-based address
translation enabled. Because of the smaller address space, a system is more
likely to run out of large contiguous blocks of memory due to fragmentation.
Not all processors and target boards can be used with the MMU disabled. For
the requirements of your system, see the hardware manual of the board and
processor used.
For information about architecture and processor-specific limitations, see the
VxWorks Architecture Supplement.
!CAUTION: VxWorks SMP does not support MMU-less configurations. For
information about VxWorks SMP (which does support RTPs), see the VxWorks
Kernel Programmer’s Guide: VxWorks SMP.
Configuration With Process Support and Without an MMU
There are no special components needed for the process environment with
software-simulated paging. As with any configuration that provides process
support, the INCLUDE_RTP component must be added to the kernel.
The steps required to enable software-simulated paging are:
1. Add the INCLUDE_RTP component to include process support. This
automatically includes all dependent subsystems, among them
INCLUDE_MMU_BASIC.
2. Change the SW_MMU_ENABLE parameter of the INCLUDE_MMU_BASIC
component to TRUE (the default value is FALSE).
In addition, the following optional configuration steps can reduce the footprint of
the system:
3. Change the VM_PAGE_SIZE parameter of the INCLUDE_MMU_BASIC
component. The default is architecture-dependent; usually 4K or 8K. Allowed
values are 1K, 2K, 4K, 8K, 16K, 32K, and 64K. Typically, a smaller page size
results in finer granularity and therefore more efficient use of the memory
space. However, a smaller page size requires more memory for keeping track
of the mapping information.
4. Disable stack guard page protection by changing the
TASK_STACK_OVERFLOW_SIZE and TASK_STACK_UNDERFLOW_SIZE
configuration parameters to zero. Without protection provided by an MMU,
stack overflow and underflow cannot be detected, so the guard pages serve no
purpose.
5. Remove the following components from the VxWorks configuration:
INCLUDE_KERNEL_HARDENING, INCLUDE_PROTECT_TEXT,
INCLUDE_PROTECT_VEC_TABLE, INCLUDE_PROTECT_TASK_STACK,
INCLUDE_TASK_STACK_NO_EXEC, and
INCLUDE_PROTECT_INTERRUPT_STACK. Without an MMU, these features
do not work. Including them only results in unnecessary consumption of
resources.
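Under the same assumptions as before (vxprj command-line tool, illustrative
project file name; see the VxWorks Command-Line Tools User's Guide for the exact
syntax), these steps might be scripted as:

```shell
# 1. Process support (pulls in INCLUDE_MMU_BASIC)
vxprj component add myProject.wpj INCLUDE_RTP

# 2. Enable software-simulated paging
vxprj parameter set myProject.wpj SW_MMU_ENABLE TRUE

# 4. Disable stack guard pages (no MMU to enforce them)
vxprj parameter set myProject.wpj TASK_STACK_OVERFLOW_SIZE 0
vxprj parameter set myProject.wpj TASK_STACK_UNDERFLOW_SIZE 0

# 5. Remove MMU-dependent protection components
vxprj component remove myProject.wpj INCLUDE_KERNEL_HARDENING
vxprj component remove myProject.wpj INCLUDE_PROTECT_TEXT
vxprj component remove myProject.wpj INCLUDE_PROTECT_VEC_TABLE
vxprj component remove myProject.wpj INCLUDE_PROTECT_TASK_STACK
vxprj component remove myProject.wpj INCLUDE_TASK_STACK_NO_EXEC
vxprj component remove myProject.wpj INCLUDE_PROTECT_INTERRUPT_STACK
```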
2.5 About VxWorks RTP Virtual Memory Models
VxWorks can be configured to use either a flat or an overlapped virtual memory
model for RTPs. By default, it uses the flat virtual memory model.
2.5.1 Flat RTP Virtual Memory Model
The flat virtual memory model is the default and requires no specific
configuration. With this model each VxWorks process has its own region of virtual
memory—processes do not overlap in virtual memory. The flat virtual-memory
map provides the following advantages:
Speed—context switching is fast.
Ease of debugging—the addresses for each process are unique.
A flexible programming model that provides the same process-model
regardless of MMU support. VxWorks’ application memory model allows for
running the same applications with and without an MMU. Hard real-time
determinism can be facilitated by using the same programming model, and by
disabling the MMU.
Systems can be developed and debugged on targets with an MMU, and then
shipped on hardware that does not have one, or has one that is not enabled for
deployment. The advantages of being able to do so include facilitating
debugging in development, lower cost of shipped units, as well as footprint
and performance advantages of targets without an MMU, or with one that is
not enabled. (For information in this regard, see 2.4 Using RTPs Without MMU
Support, p.20.)
With the flat virtual memory model, however, executable files must be relocated,
which means that their loading time is slower than with the overlapped model.
In addition, the flat model does not allow for selection of the specific virtual
memory area into which the RTP application's text, data, and bss segments are
installed. Instead a best-fit algorithm automatically selects the most appropriate
area of free virtual memory, based on the memory's current usage, where those
segments are to be loaded.
2.5.2 Overlapped RTP Virtual Memory Model
VxWorks can be configured to use an overlapped virtual memory model instead
of the flat model. With the overlapped model all VxWorks RTP applications share
a common range of virtual memory addresses into which the applications’ text,
data, and bss segments are installed, and all applications share the same execution
address.
In addition to holding the RTP application's text, data, and bss segments, the
overlapped area of virtual memory is automatically used for any elements that are
private to a process, such as its heap and task stacks (as long as there is enough
room; otherwise they are placed outside this area). The overlapped area is not,
however, used for
elements that are designed to be accessible to other RTPs, such as shared data
regions and shared libraries.
To take advantage of the overlapped model, RTP applications must be built as
absolutely-linked executables (for information in this regard, see Building
Absolutely-Linked RTP Executables, p.35). They share the same execution address,
which is within the overlapped area of virtual memory. For simplicity's sake, the
execution address should be defined to be at the base of this area. The relocation
step is therefore unnecessary when they are loaded.
Note that if several instances of the same RTP application are started on a system,
they occupy exactly the same ranges of virtual memory addresses, but each
instance still has physical memory allocated for its text, data, and bss
segments. In other words, there is no sharing of the text segment between multiple
instances of an application.
The overlapped virtual memory model provides the following advantages:
- Faster loading time when applications are built as absolutely-linked
executables. The relocation phase is skipped when absolutely-linked
executables are loaded because they are all installed at the same
virtual memory execution address. The improvement in loading time is on
the order of 10 to 50 percent, depending on the number of relocation
entries in the file.
- More precise control of the system's virtual memory. The user selects the area
in virtual memory into which the RTP application's text, data, and bss
segments are installed. With the flat memory model, on the other hand, a
best-fit algorithm automatically selects the most appropriate area of free
virtual memory into which those segments are loaded, based on the memory's
current usage.
- More efficient usage of the system's virtual memory, by reducing the
possibility of memory fragmentation.
- A smaller memory footprint for absolutely-linked RTP executables stored in
ROMFS: once stripped of their symbols and relocation information, they are
only about half the size of relocatable executables. For information about
using ROMFS and stripping executables, see 3.7 Bundling RTP Applications in
a System using ROMFS, p.66 and Stripping Absolutely-Linked RTP Executables,
p.36.
The overlapped virtual memory model does, however, require that the system
make use of an MMU.
VxWorks Overlapped Virtual Memory Model and Other Operating Systems
The VxWorks overlapped virtual memory model is similar to implementations for
other operating systems (such as UNIX), but differs in the following ways:
- For VxWorks the user-mode virtual address space is not predefined. For
Windows, on the other hand, the operating system ensures that 2 GB (or 3 GB
with some configuration changes) are reserved for a process. For Solaris,
almost the entirety of the 4 GB address space is usable by a process. For Linux,
almost 3 GB of the address space is reserved for a process. A VxWorks process
uses whatever is available, and this can differ from target to target (see
User Regions of Virtual Memory, p.27).
- The text, data, and bss segments of VxWorks applications are allocated
together in memory. In other operating systems (such as UNIX) there is
usually a separate allocation for each. This is significant only in the context of
defining the size of the RTP code region so that it is large enough to hold the
sum of the text, data, and bss segments, and not simply the text segment (see
RTP Code Region in Virtual Memory, p.27).
- For UNIX and Windows the concept of overlapped virtual address space for a
process covers the whole 4 GB address range, and the address space
organization is set in advance. In particular, shared libraries are mapped
differently in each process but usually go in a pre-defined and reserved area
of the address space. In VxWorks only the RTP code region and the private
elements of an RTP are set in address ranges that are overlapped. The shared
libraries and shared data regions are not overlapped and appear at the same
virtual address in all processes; furthermore, they can use any place in the user
regions except for the area covered by the RTP code region (this cannot be
controlled by the user). A process that does not use shared libraries still has a
portion of its address space unavailable if any other process uses shared
libraries or shared data regions.
!CAUTION: For architectures that do not have (or use) a hardware mechanism that
handles concurrent address spaces (usually known as an address space identifier,
or ASID), the overlapped virtual memory model also makes a system susceptible
to performance degradation. This is because each time an RTP task is
switched out and another kernel or RTP task is switched in, the entire cache must
be flushed and reloaded with the new task's context. With the ASID mechanism
this flushing is not necessary (or is needed less often) because the cache is indexed
using the ASID, so the hardware knows whether a flush is required.
2.6 Using the Overlapped RTP Virtual Memory Model
Using the VxWorks overlapped virtual memory model requires an understanding
of user regions and how one of those regions is used for the area of overlapped
virtual memory called the RTP code region. This background information is
provided in 2.6.1 About User Regions and the RTP Code Region, p.26.
The process of implementing the overlapped memory model with an RTP code
region involves getting information about the user regions that are available in a
system, selecting the RTP code region based on the available user regions and RTP
application requirements, and configuring VxWorks accordingly. These steps are
described in 2.6.2 Configuring VxWorks for Overlapped RTP Virtual Memory, p.29.
In order for RTP applications to take advantage of overlapped virtual
memory, they must be built as absolutely-linked executables. The compiler
options required to do so are described in 2.6.3 Using RTP Applications With
Overlapped RTP Virtual Memory, p.35 (along with information about
optimization by stripping, and about application execution).
2.6.1 About User Regions and the RTP Code Region
The VxWorks overlapped virtual memory model depends on using one of the
available user regions of virtual memory for the area of overlapped virtual memory
called an RTP code region.
User Regions of Virtual Memory
The virtual address space of a process in VxWorks does not correspond to a
contiguous range of addresses from 0 to 4 GB. It is generally composed of several
discontinuous blocks of virtual memory.
The blocks of virtual memory that are available for RTPs (that is, not used by the
kernel) are referred to as user regions. User regions are used for the RTP
applications’ text, data, and bss segments, as well as for their heaps and task stacks.
In addition, user regions are used for shared data regions, shared libraries, and so
on. Figure 2-1 illustrates an example of virtual memory layout in VxWorks.
RTP Code Region in Virtual Memory
Only one continuous area of the virtual address space of a process can be used to
overlap RTP application code, and this area must correspond (fully or in part) to
one of the user regions available in the target system's memory. The user region
that is selected for the overlap is referred to as the RTP code region. The base address
and size of the RTP code region are defined by the user when the system is
configured.
Figure 2-1 Virtual Memory and User Regions
[Figure: a virtual memory map showing kernel code, the kernel heap, I/O regions
1 through 3, and user regions 1 through 3 as discontinuous blocks]
RTP Code Region Example
Figure 2-2 illustrates the virtual memory layout of a system running three
applications, with the same three user regions as in Figure 2-1.
VxWorks has been configured to use the largest of the three user regions depicted
in Figure 2-1 (user region 2) for the RTP code region because the others were too
small for the code (text, data, and bss segments) of the largest application (RTP C).
As the heap for RTP C would not fit in the RTP code region, the best-fit algorithm
automatically placed it in User Region 1 in order to leave a larger area for other
purposes in User Region 2 (for example, it might be used for a large shared data
region).
Figure 2-2 Virtual Memory Layout With RTP Code Region
[Figure: the address maps of RTP A, RTP B, and RTP C; User Region 2 serves as
the RTP code region, bounded by RTP_CODE_REGION_START and
RTP_CODE_REGION_START + RTP_CODE_REGION_SIZE, and holds each
process's overlapped code segments (and, where they fit, heaps); the heap of
RTP C is in User Region 1; Shared Data 1 appears at the same address in all
processes, reserved but unused in RTP B; User Region 3 is also shown]
Note that the size of the RTP Code Region is defined to be slightly larger than the
size of the text, data, and bss segments of that application.
The system illustrated in Figure 2-2 includes a shared data region used by RTP A
and RTP C, which map the region into their memory contexts. The location of
shared data regions is determined automatically at runtime on the basis of a
best-fit algorithm when they are created (for information about shared data
regions, see 3.5 Creating and Using Shared Data Regions, p.51). Note that RTP B is
not allowed to make use of the virtual addresses covered by the shared data
region, even though it does not itself use that region.
For information about the configuration parameters that define location and size
of the RTP code region (RTP_CODE_REGION_START and
RTP_CODE_REGION_SIZE), see Setting Configuration Parameters for the RTP Code
Region, p.33.
2.6.2 Configuring VxWorks for Overlapped RTP Virtual Memory
By default VxWorks is not configured for overlapped RTP virtual memory and
does not have an RTP code region defined. In order to use the overlapped RTP
virtual memory model, you must determine which user region is suitable for your
applications, and then reconfigure and rebuild VxWorks. This process involves the
following basic steps:
1. Boot an instance of VxWorks that has been configured with RTP support and
the flat virtual memory model, and get information about the available user
regions.
2. Identify the RTP code region (size and base address), based on the available
user regions and the requirements of your RTP applications.
3. Configure VxWorks for the overlapped memory model, specifying the
RTP code region in that configuration. Then rebuild the system.
These steps are described in detail in the following sections.
Getting Information About User Regions
Before you can configure VxWorks with an RTP code region, you must determine
what range of virtual memory is available for this purpose. To do so, boot an
instance of VxWorks that has been configured with RTP support (and which is by
default configured for the flat memory model), and get a listing of the user regions
that are available on the target.
The adrSpaceShow( ) kernel shell command (in verbose mode) can be used to list
user regions. For example:
-> adrSpaceShow 1
RAM Physical Address Space Info:
-------------------------------
Allocation unit size: 0x1000
Total number of units: 16384 (67108864 bytes)
Number of allocated units: 12150 (49766400 bytes)
Largest contiguous free block: 17342464
Number of free units: 4234 (17342464 bytes)
1 block(s) of 0x0108a000 bytes (0x02f70000-0x03ff9fff)
User Region (RTP/SL/SD) Virtual Space Info:
-------------------------------------------
Allocation unit size: 0x1000
Total number of units: 851968 (3489660928 bytes)
Number of allocated units: 0 (0 bytes)
Largest contiguous free block: 3221225472
Number of free units: 851968 (3489660928 bytes)
1 block(s) of 0xf000000 bytes (0x10000000-0x1effffff)
1 block(s) of 0xc0000000 bytes (0x30000000-0xefffffff)
Kernel Region Virtual Space Info:
---------------------------------
Allocation unit size: 0x1000
Total number of units: 196608 (805306368 bytes)
1 block(s) of 0x04000000 bytes (0x00000000-0x03ffffff)
1 block(s) of 0x0c000000 bytes (0x04000000-0x0fffffff)
1 block(s) of 0x08000000 bytes (0x20000000-0x27ffffff)
1 block(s) of 0x08000000 bytes (0x28000000-0x2fffffff)
1 block(s) of 0x04000000 bytes (0xf0000000-0xf3ffffff)
1 block(s) of 0x01000000 bytes (0xf4000000-0xf4ffffff)
1 block(s) of 0x05000000 bytes (0xf5000000-0xf9ffffff)
1 block(s) of 0x01010000 bytes (0xfa000000-0xfb00ffff)
1 block(s) of 0x00fe0000 bytes (0xfb010000-0xfbfeffff)
1 block(s) of 0x00050000 bytes (0xfbff0000-0xfc03ffff)
1 block(s) of 0x00fc0000 bytes (0xfc040000-0xfcffffff)
1 block(s) of 0x01810000 bytes (0xfd000000-0xfe80ffff)
1 block(s) of 0x00770000 bytes (0xfe810000-0xfef7ffff)
1 block(s) of 0x00010000 bytes (0xfef80000-0xfef8ffff)
1 block(s) of 0x00060000 bytes (0xfef90000-0xfefeffff)
1 block(s) of 0x00010000 bytes (0xfeff0000-0xfeffffff)
1 block(s) of 0x01000000 bytes (0xff000000-0xffffffff)
value = 0 = 0x0
!CAUTION: For the MIPS architecture, in order to get correct information about user
regions, the system must be configured with MMU support. In contrast to other
architectures, this support is not provided automatically when VxWorks is
configured with INCLUDE_RTP. For MIPS, VxWorks must be configured with a
mapped kernel by adding the INCLUDE_MAPPED_KERNEL component. For more
information in this regard, see the VxWorks Architecture Supplement.
This output shows that the following two user regions are available on this target:
from 0x10000000 to 0x1effffff
from 0x30000000 to 0xefffffff
Either region can be used for the RTP code region, depending on the requirements
of the system.
Identifying the RTP Code Region
The following guidelines should be used when defining the RTP code region:
- The RTP code region should be slightly larger than the combined size of the
text, data, and bss segments of the largest RTP application that will run on the
system.
- The RTP code region must fit in one user region—it cannot span multiple user
regions.
- In selecting the user region to use for the RTP code region, take into
consideration the number and size of the shared data regions, shared libraries,
and other public mappings—such as those created by mmap( )—that are going
to be used in the system.
- For simplicity's sake, set the base address of the RTP code region to the base
address of a user region.
RTP Code Region Size
The RTP code region should be slightly larger than the combined size of the text,
data, and bss segments of the largest RTP application that will run on the system.
A bit of extra room accommodates a moderate increase in the size of the
applications that you will run on the system. The readelfarch tool (for example,
readelfppc) can be used to determine the size of the text, data, and bss segments.
In addition, the page alignment of the text and data segments must be taken into
account (the bss segment is already covered by virtue of its being included in the
data segment's size). As a basic guideline for the alignment, use readelfarch -l (for
example, readelfppc -l), and round up to one page each of the segment sizes
displayed in the MemSiz fields.
!CAUTION: If the overlapped virtual memory model is selected, but the base
address and size are not defined, or if the size is too small, absolutely-linked
executables will be relocated. If the executables are stripped, however, they cannot
be relocated and the RTP launch will fail.
For example, for a text segment with a MemSiz of 0x16c38, the round up value
would be 0x17000; for a data segment with a MemSiz field (do not use the FileSiz
field, which is much smaller as it does not account for the bss area) of 0x012a8, the
round up value would be 0x02000. The sum of the rounded values for the two
segments would then be 0x19000.
While it may be tempting to select the largest user region for the RTP code region
in order to accommodate the largest possible RTP application that the system
might run, this may not leave enough room in the other user regions to
accommodate all of the shared data regions, shared libraries, or other public
mappings required by the system.
User Region Choice
The RTP code region cannot span user regions. It must fit in one user region.
The RTP code region and public mappings are mutually exclusive, because the RTP
code region is intended to receive the text, data, and bss segments of
absolutely-linked executables that cannot be relocated, whereas public mappings
appear at the same address in all the RTPs that may want to use them (by design).
Since the location of public mappings in virtual memory is not controlled by
the applications themselves, and since VxWorks applies a best-fit algorithm when
allocating virtual memory for a public mapping, allowing them in the RTP code
region would risk blocking out a range of virtual addresses at a location meant to
be used by an absolutely-linked application's text, data, and bss segments.
RTP Code Region Base Address Choice
For simplicity's sake, set the base address of the RTP code region to the base
address of a user region. If the RTP code region is set elsewhere in the user region,
make sure that its base address is page-aligned. The page size for the target
architecture can be checked via the vmPageSizeGet( ) routine, which can be called
directly from the target shell.
RTP Code Region Size
When determining the size of the RTP code region, make sure that it is
page-aligned. The page size for the target architecture can be checked via the
vmPageSizeGet( ) routine, which can be called directly from the target shell.
Setting Configuration Parameters for the RTP Code Region
By default, VxWorks is configured for the flat virtual memory model. In order to
use the overlapped memory model, the configuration parameters listed below
must be set appropriately. They serve to select the overlapped virtual memory
model itself, and to define the RTP code region in virtual memory.
RTP Code Region Component Parameters
For information about determining the virtual memory addresses that you need
for setting the RTP_CODE_REGION_START and RTP_CODE_REGION_SIZE
parameters, see 2.6 Using the Overlapped RTP Virtual Memory Model, p.26, and
specifically 2.6.1 About User Regions and the RTP Code Region, p.26.
RTP_OVERLAPPED_ADDRESS_SPACE
Set to TRUE to change the virtual memory model from flat to overlapped. By
default it is set to FALSE. The following parameters have no effect unless this
one is set to TRUE.
RTP_CODE_REGION_START
Identifies the virtual memory address within a user region where the RTP
code region will start; that is, the base address of the memory area into which
the text, data, and bss segments of RTP applications will be installed. For
simplicity's sake, set the base address of the RTP code region to the base
address of a user region. If the RTP code region is set elsewhere in the user
region, make sure that its base address is page-aligned. For more information,
see Identifying the RTP Code Region, p.31.
Note that the VxWorks kernel is usually located in low memory addresses
except when required by the architecture (MIPS for instance). In that case, the
user mode virtual address space appears higher in the address range. This is
significant, as changing the configuration of the kernel (adding components or
devices) may have an impact on the base addresses of the available user
regions and may therefore impact the base address and size of the RTP code
region.
!CAUTION: The overlapped virtual memory model requires MMU support, which
is provided by default configurations of VxWorks—except for the MIPS
architecture. For MIPS, MMU protection requires a mapped kernel, which is
provided by the INCLUDE_MAPPED_KERNEL component. This component is not
included by default and must be added to the kernel configuration. For more
information in this regard, see the VxWorks Architecture Supplement.
RTP_CODE_REGION_SIZE
The size (in bytes) of the virtual memory area into which the text, data, and bss
segments of RTP applications will be installed. Select a size that is slightly
larger than the combined size of the text, data, and bss segments of the largest
RTP application that will be run on the system. Make sure that the size is
page-aligned. For more information, see Identifying the RTP Code Region, p.31.
An RTP code region is defined for a system when
RTP_OVERLAPPED_ADDRESS_SPACE is set to TRUE and both
RTP_CODE_REGION_START and RTP_CODE_REGION_SIZE are set to values
other than zero.
Setting RTP Code Region Parameters for Multiple Projects
Once the settings for the RTP_OVERLAPPED_ADDRESS_SPACE,
RTP_CODE_REGION_START and RTP_CODE_REGION_SIZE parameters have
been determined, configuration for multiple projects can be automated by creating
a custom component description file (CDF) containing those settings. Using a
CDF file means that the parameters do not have to be set manually each time a
project is created (either with Workbench or vxprj). If the file is copied into a
project directory before it is built, it applies the settings to that project. If it is copied
into the BSP directory before the project is created, it applies to all projects
subsequently based on that BSP.
!CAUTION: If the overlapped virtual memory model is selected (by setting
RTP_OVERLAPPED_ADDRESS_SPACE to TRUE), then the user should set the
RTP_CODE_REGION_START and RTP_CODE_REGION_SIZE parameters
appropriately. If they are not defined, or if the size of the region is too small,
absolutely-linked executables will be relocated to another user region if there is
sufficient space. If not, the RTP launch will fail. In addition, if the executables are
stripped, they cannot be relocated regardless of the availability of space in other
user regions, and the RTP launch will fail.
The contents of the local CDF file (called, for example, 00rtpCodeRegion.cdf)
would look like this:
Parameter RTP_OVERLAPPED_ADDRESS_SPACE {
    DEFAULT TRUE
}
Parameter RTP_CODE_REGION_START {
    DEFAULT 0xffc00000
}
Parameter RTP_CODE_REGION_SIZE {
    DEFAULT 0x100000
}
For information about CDF files, file naming conventions, precedence of files, and
so on, see the VxWorks Kernel Programmer’s Guide: Kernel Customization.
2.6.3 Using RTP Applications With Overlapped RTP Virtual Memory
In order to take advantage of the efficiencies provided by the overlapped virtual
memory model, RTP applications must be built as absolutely-linked executables.
They are started in the same manner as relocatable RTP executables, and can also
be run on systems configured with the default flat virtual memory model, but will
be relocated. Absolutely-linked executables can be stripped of their symbols to
reduce their footprint, but stripped executables fail to run on a system configured
with the flat virtual memory model.
Building Absolutely-Linked RTP Executables
In order to take advantage of the efficiencies provided by the overlapped virtual
memory model, RTP applications must be built as absolutely-linked executables.
This is done by defining the link address with a special linker option. The option
can be used directly with the Wind River or GNU toolchain, or indirectly with a
Wind River Workbench GUI option or a command-line make macro. For
simplicity's sake, it is useful to use the same link address for all executables
(making sure, of course, that each of them fits within the RTP code region).
Selecting the Link Address
While technically the link address of the RTP executable could be anywhere in the
RTP code region (provided the address is low enough to leave room for the
application's text, data, and bss segments), Wind River recommends that it be set
to the base address of the RTP code region itself (that is, the address specified with
the RTP_CODE_REGION_START configuration parameter; see Setting Configuration
Parameters for the RTP Code Region, p.33).
Linker Options
The following linker options must be used to generate an executable with a
pre-determined link address (the base address of the text segment):
Wind River Compiler (diab): -Wl,-Bt
GNU compiler: -Wl,--defsym,__wrs_rtp_base
These linker options are automatically invoked when the default makefile rules of
the VxWorks cross-development environment are used.
The link address of the data segment cannot be specified separately. By design the
data segment of an application immediately follows the text segment, with
consideration for page alignment constraints. Both segments are installed together
in one block of allocated memory.
Note that Wind River Workbench provides GUI options for defining the base
address and the command-line build environment provides a make macro for
doing the same.
RTP_LINK_ADDR Make Macro
The RTP_LINK_ADDR make macro can be used to set the link address for
absolutely-linked executables when using the VxWorks build environment from
the command line. For example, as in executing the following command from
installDir/vxworks-6.x/target/usr/apps/sample/helloworld:
% make CPU=PENTIUM4 TOOL=gnu RTP_LINK_ADDR=0xe0000000
Stripping Absolutely-Linked RTP Executables
Absolutely-linked RTP executables are generated for execution at a predetermined
address. They do not, therefore, need symbolic information and relocation
information during the load phase. This information can be stripped out of the
executable using the striparch utility with the -s option. The resulting file is notably
smaller (on average 30%-50%). This makes its footprint in ROMFS noticeably
smaller (if the executables are stored in ROMFS, this can reduce overall system
footprint), and also makes its load time somewhat shorter.
Note, however, that stripping symbols and relocation sections from an
absolutely-linked RTP executable means that it cannot be loaded if the
predetermined execution address cannot be granted by the system. This may occur
under the following circumstances:
- The RTP code region is too small for the text, data, and bss segments of the
executable.
- The executable file has been generated for an execution address that does not
correspond to the system's current RTP code region.
- The absolutely-linked executable file is used on a system configured for the flat
virtual memory model.
Note that it may be useful to leave the symbolic and relocation information in the
executable file for situations in which the execution environment may
change. For example, if a deployed system is updated with a new configuration of
VxWorks for which the existing applications' execution addresses are no longer
valid (but the applications cannot be updated at the same time), the applications
suffer the cost of relocation, but still execute. If, however, the applications had been
stripped, they would be unusable.
Executing Absolutely-Linked RTP Executables
Absolutely-linked executables can be started in the same ways as relocatable
executables (for information in this regard, see 3.6 Executing RTP Applications,
p.54). The load time of absolutely-linked executables, however, is noticeably
shorter because they are not relocated. Their text, data, and bss segments are
installed according to the link address defined when they are compiled.
RTP executables can be executed on different target boards of the same
architecture as long as the VxWorks images running on those boards provide the
features that the applications require. However, if the RTP code regions are not at
the same location and of the same size, the RTP executable's text, data, and bss
segments are relocated. This would typically happen if the RTP code region cannot
accommodate the segments, for any of the following reasons:
- The size of the region is not sufficient.
- The base address of the executable is too close to the top address of the RTP
code region (which prevents the segments from fitting in the remaining space).
- The executable's base address does not correspond to an address within a
user region.
Note that if the base address of the executable is completely outside of the RTP
code region, but still corresponds to a user region, then the executable is not
relocated, and is installed outside of the RTP code region. The side effect of this
situation is that it may reduce the memory areas available for public mappings
(in particular shared data regions and shared libraries).
Executing Relocatable RTP Executables
Relocatable executables are supported when the RTP address space is overlapped.
They are relocated as usual, and their text and data segments are installed in the
RTP code region, provided that the RTP code region is big enough to
accommodate them. If the RTP code region is too small for the segments, they are
installed elsewhere, provided again that there is enough room available in the
remaining areas of the user regions.
3
RTP Applications
3.1 Introduction 39
3.2 Configuring VxWorks For RTP Applications 40
3.3 Developing RTP Applications 40
3.4 Developing Static Libraries, Shared Libraries and Plug-Ins 50
3.5 Creating and Using Shared Data Regions 51
3.6 Executing RTP Applications 54
3.7 Bundling RTP Applications in a System using ROMFS 66
3.1 Introduction
Real-time process (RTP) applications are user-mode applications similar to those
used with other operating systems, such as UNIX and Linux. This chapter
provides information about writing RTP application code, using shared data
regions, executing applications, and so on. For information about multitasking,
I/O, file system, and other features of VxWorks that are available to RTP
applications, see the respective chapters in this guide.
Before you begin developing RTP applications, you should understand the
behavior of RTP applications in execution—that is, as processes. For information
about RTP scheduling, creation and termination, memory, tasks and so on, see
2. Real-Time Processes.
For information about using Workbench and the command-line build
environment for developing RTP applications, see the Wind River Workbench by
Example guide and the VxWorks Command-Line Tools User’s Guide, respectively.
For information about developing kernel-mode applications (which execute in
VxWorks kernel space) see the VxWorks Kernel Programmer’s Guide: Kernel
Applications.
3.2 Configuring VxWorks For RTP Applications
RTP applications require VxWorks kernel support. For information about
configuring VxWorks for RTPs, see 2.3 Configuring VxWorks For Real-time Processes,
p.17.
3.3 Developing RTP Applications
Real-time process (RTP) applications have a simple structural requirement that is
common to C programs on other operating systems—they must include a main( )
routine. VxWorks provides C and C++ libraries for application development, and
the kernel provides services for user-mode applications by way of system calls.
RTP applications are built independently of the VxWorks operating system, using
cross-development tools on the host system. When an application is built, user
code is linked to the required VxWorks application API libraries, and a single ELF
executable is produced. By convention, VxWorks RTP executables are named with
a .vxe file-name extension. The extension draws on the vx in VxWorks and the e in
executable to indicate the nature of the file.
!CAUTION: Because RTP applications are built independently of the operating
system, the build process cannot determine if the instance of VxWorks on which
the application will eventually run has been configured with all of the components
that the application requires. It is, therefore, important for application code to
check for errors indicating that kernel facilities are not available and to respond
appropriately. For more information, see 3.3.6 Checking for Required Kernel Support,
p.49.
Applications are created as either fully-linked or partially-linked executables using
cross-development tools on a host system. (Partially-linked executables are used
with shared libraries.) They can be either relocatable or absolutely-linked objects
depending on the virtual memory model in use on the VxWorks system. By default,
RTP applications are created as relocatable executables so that they can be used with
either the flat virtual memory model or the overlapped virtual memory model.
They can also be generated as absolutely-linked executables to take full advantage
of the overlapped virtual memory model. (For information about RTP virtual
memory models, see 2.5 About VxWorks RTP Virtual Memory Models, p.23.)
During development, processes can be spawned to execute applications from the
VxWorks shell or various host tools. Applications can also be started
programmatically, and systems can be configured to start applications
automatically at boot time for deployed systems. In systems with multiple
applications, not all of them need to be started at boot time. Some can be started later by other
applications, or interactively by users. Developers can also implement their own
application startup managers.
A VxWorks application can be loaded from any file system for which the kernel
has support (NFS, ftp, and so on). RTP executables can be stored on disks, in RAM,
flash, or ROM. They can be stored on the target or anywhere else that is accessible
over a network connection.
In addition, applications can be bundled into a single image with the operating
system using the ROMFS file system (see 3.7 Bundling RTP Applications in a System
using ROMFS, p.66). The ROMFS technology is particularly useful for deployed
systems. It allows developers to bundle application executables with the VxWorks
image into a single system image. Unlike other operating systems, no root file
system (on NFS or diskette, for example) is required to hold application binaries,
configuration files, and so on.
RTP Applications With Shared Libraries and Plug-Ins
For information about the special requirements of applications that use shared
libraries and plug-ins, see 4.8.9 Developing RTP Applications That Use Shared
Libraries, p.88 and 4.9.3 Developing RTP Applications That Use Plug-Ins, p.95.
RTP Applications for the Overlapped Virtual Memory Model
Relocatable RTP application executables (the default) can be run on a system that
has been configured for the overlapped virtual memory model. However, to take
advantage of this model, RTP applications must be built as absolutely-linked
executables. For more information, see 2.6.3 Using RTP Applications With
Overlapped RTP Virtual Memory, p.35.
RTP Applications for UP and SMP Configurations of VxWorks
RTP applications can be used for both the uniprocessor (UP) and symmetric
multiprocessing (SMP) configurations of VxWorks. They must, however, only use
the subset of APIs provided by VxWorks SMP and be compiled specifically for the
system in question (SMP or UP).
Among other things, this means that the RTP application must do the following in
order to run on both VxWorks UP and VxWorks SMP systems:
Use semaphores or another mechanism supported for SMP instead of
taskRtpLock( ).
Use the __thread storage class instead of tlsLib routines.
For more information about using the SMP configuration of VxWorks, and about
migrating applications from VxWorks UP to VxWorks SMP, see the VxWorks
Kernel Programmer’s Guide: VxWorks SMP.
Migrating Kernel Applications to RTP Applications
For information about migrating VxWorks kernel applications to RTP
applications, see A. Kernel to RTP Application Migration.
3.3.1 RTP Application Structure
VxWorks RTP applications have a simple structural requirement that is common
to C programs on other operating systems—they must include a main( ) routine.
The main( ) routine can be used with the conventional argc and argv arguments,
as well as two additional optional arguments, envp and auxp:
int main
(
int argc, /* number of arguments */
char * argv[], /* null-terminated array of argument strings */
char * envp[], /* null-terminated array of environment variable strings */
void * auxp /* implementation specific auxiliary vector */
);
The envp and auxp arguments are usually not required by the application code.
The envp argument is used for passing VxWorks environment variables to the
application. These can be set by a user and are typically inherited from the calling
environment. Note that the getenv( ) routine can be used to get the environment
variables programmatically, and the setenv( ) and unsetenv( ) routines to change
or remove them. (For more information about environment variables, see
2.2.8 RTPs and Environment Variables, p.15.)
Environment variables are general properties of the running system, such as the
default path—unlike argv arguments, which are passed to a particular invocation
of the application, and are unique to that application. The system uses the auxp
vector to pass system information to the new process, including page size, cache
alignment size and so on.
The argv[0] argument is typically the relative path to the executable.
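As an illustration of how envp is laid out, the following sketch scans an envp-style array by hand. The envFind( ) helper is hypothetical and shown only to make the "null-terminated array of NAME=value strings" layout concrete; application code would normally call getenv( ) instead:

```c
#include <stddef.h>
#include <string.h>

/* Sketch (hypothetical helper): look up NAME in an envp-style
 * null-terminated array of "NAME=value" strings. Application code
 * would normally call getenv( ) rather than scanning envp directly. */
const char *envFind(char *const envp[], const char *name)
{
    size_t len = strlen(name);
    int i;

    for (i = 0; envp != NULL && envp[i] != NULL; i++)
        if (strncmp(envp[i], name, len) == 0 && envp[i][len] == '=')
            return &envp[i][len + 1];   /* value starts after the '=' */
    return NULL;
}
```

A main( ) declared with the envp argument could pass it straight to such a helper; getenv( ) remains the portable interface.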
3.3.2 VxWorks Header Files
RTP applications often make use of VxWorks operating system facilities or utility
libraries. This usually requires that the source code refer to VxWorks header files.
The following sections discuss the use of VxWorks header files.
VxWorks header files supply ANSI C function prototype declarations for all global
VxWorks routines. VxWorks provides all header files specified by the ANSI
X3.159-1989 standard.
VxWorks system header files for RTP applications are in the directory
installDir/vxworks-6.x/target/usr/h and its subdirectories (different directories are
used for kernel applications).
!CAUTION: Do not reference header files that are for kernel code (which are in and
below installDir/vxworks-6.x/target/h) in application code.
POSIX Header Files
Traditionally, VxWorks has provided many header files that are described by
POSIX.1, although their content only partially complied with that standard. For
user-mode applications, the POSIX header files are more strictly compliant with the
POSIX.1 description, in both their content and their location. See 7.4 Standard
C Library: libc, p.198 for more information.
VxWorks Header File: vxWorks.h
It is often useful to include the header file vxWorks.h in all application modules in
order to take advantage of architecture-specific VxWorks facilities. Many other
VxWorks header files require these definitions. Include vxWorks.h with the
following line:
#include <vxWorks.h>
Other VxWorks Header Files
Applications can include other VxWorks header files as needed to access VxWorks
facilities. For example, an application module that uses the VxWorks linked-list
subroutine library must include the lstLib.h file with the following line:
#include <lstLib.h>
The API reference entry for each library lists all header files necessary to use that
library.
ANSI Header Files
All ANSI-specified header files are included in VxWorks. Those that are
compiler-independent or more VxWorks-specific are provided in
installDir/vxworks-6.x/target/usr/h while a few that are compiler-dependent (for
example stddef.h and stdarg.h) are provided by the compiler installation. Each
toolchain knows how to find its own internal headers; no special compile flags are
needed.
ANSI C++ Header Files
Each compiler has its own C++ libraries and C++ headers (such as iostream and
new). The C++ headers are located in the compiler installation directory rather
than in installDir/vxworks-6.x/target/usr/h. No special flags are required to enable
the compilers to find these headers. For more information about C++
development, see 5. C++ Development.
Compiler -I Flag
By default, the compiler searches for header files first in the directory of the source
code and then in its internal subdirectories. In general,
installDir/vxworks-6.x/target/usr/h should always be searched before the
compilers’ other internal subdirectories; to ensure this, always use the following
flags when compiling for VxWorks:
-I %WIND_BASE%/target/usr/h -I %WIND_BASE%/target/usr/h/wrn/coreip
Some header files are located in subdirectories. To refer to header files in these
subdirectories, be sure to specify the subdirectory name in the include statement,
so that the files can be located with a single -I specifier. For example:
#include <vxWorks.h>
#include <sys/stat.h>
VxWorks Nested Header Files
Some VxWorks facilities make use of other, lower-level VxWorks facilities. For
example, the tty management facility uses the ring buffer subroutine library. The
tty header file tyLib.h uses definitions that are supplied by the ring buffer header
file rngLib.h.
It would be inconvenient to require you to be aware of such include-file
interdependencies and ordering. Instead, all VxWorks header files explicitly
include all prerequisite header files. Thus, tyLib.h itself contains an include of
rngLib.h. (The one exception is the basic VxWorks header file vxWorks.h, which
all other header files assume is already included.)
NOTE: In releases prior to VxWorks 5.5 Wind River recommended the use of the
flag -nostdinc. This flag should not be used with the current release since it prevents
the compilers from finding headers such as stddef.h.
Generally, explicit inclusion of prerequisite header files can pose a problem: a
header file could get included more than once and generate fatal compilation
errors (because the C preprocessor regards duplicate definitions as potential
sources of conflict). However, all VxWorks header files contain conditional
compilation statements and definitions that ensure that their text is included only
once, no matter how many times they are specified by include statements. Thus,
an application can include just those header files it needs directly, without regard
to interdependencies or ordering, and no conflicts will arise.
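The guard idiom can be sketched as follows. The file and macro names here are hypothetical, but VxWorks header files follow the same conditional-compilation pattern:

```c
/* demoLib.h — sketch of the include-guard idiom that makes a header
 * safe to include any number of times. __INCdemoLibh is a
 * hypothetical guard macro name. */
#ifndef __INCdemoLibh
#define __INCdemoLibh

/* Definitions protected by the guard are compiled only once, no
 * matter how many include statements name this file. */
#define DEMO_MAX_NAME_LEN 32

#endif /* __INCdemoLibh */
```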
VxWorks Private Header Files
Some elements of VxWorks are internal details that may change and so should not
be referenced in your application. The only supported uses of VxWorks facilities
are through the public definitions in the header file, and through the public APIs.
Your adherence ensures that your application code is not affected by internal
changes in the implementation of a VxWorks facility.
Some header files mark internal details using HIDDEN comments:
/* HIDDEN */
...
/* END HIDDEN */
Internal details are also hidden with private header files that are stored in the
directory installDir/vxworks-6.x/target/usr/h/private. The naming conventions for
these files parallel those in installDir/vxworks-6.x/target/usr/h with the library
name followed by P.h. For example, the private header file for semLib is
installDir/vxworks-6.x/target/usr/h/private/semLibP.h.
3.3.3 RTP Application APIs: System Calls and Library Routines
VxWorks provides an extensive set of APIs for developing RTP applications. As
with other operating systems, these APIs include both system calls and library
routines. Some library routines include system calls, and others execute entirely in
user space. Note that the user-mode libraries provided for RTP applications are
completely separate from kernel libraries.
Note that a few APIs operate on the process rather than the task level—for
example, kill( ) and exit( ).
VxWorks System Calls
Because kernel mode and user mode run at different CPU privilege levels, with
different MMU settings, RTP applications—which run in user mode—cannot
directly access kernel routines and data structures (as long as the MMU is
enabled). System calls
provide the means by which applications request that the kernel perform a service
on behalf of the application, which usually involves operations on kernel or
hardware resources.
System calls are transparent to the user, but operate as follows: For each system
call, an architecture-specific trap operation is performed to change the CPU
privilege level from user mode to kernel mode. Upon completion of the operation
requested by the trap, the kernel returns from the trap, restoring the CPU to user
mode. Because they involve a trap to the kernel, system calls have higher overhead
than library routines that execute entirely in user mode.
Note that if VxWorks is configured without a component that provides a system
call required by an application, ENOSYS is returned as an errno by the
corresponding user-mode library API.
Also note that if a system call has trapped to the kernel and is waiting on a system
resource when a signal is received, the system call may be aborted. In this case the
errno EINTR may be returned to the caller of the API.
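The EINTR case is commonly handled with a retry loop. The following sketch assumes the standard POSIX read( ) call; readRetry( ) is a hypothetical wrapper, not a VxWorks API:

```c
#include <errno.h>
#include <sys/types.h>
#include <unistd.h>

/* Sketch: retry a system call that was interrupted by a signal.
 * readRetry( ) is a hypothetical wrapper around the standard read( )
 * call; it retries only when errno is EINTR, as described above. */
ssize_t readRetry(int fd, void *buf, size_t n)
{
    ssize_t got;

    do
        got = read(fd, buf, n);
    while (got == -1 && errno == EINTR);
    return got;
}
```

Whether retrying is correct depends on the application; some callers should instead treat EINTR as a request to shut down.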
System calls are identified as such in the VxWorks API references.
The set of system calls provided by VxWorks can be extended by kernel
developers. They can add their own facilities to the operating system, and make
them available to processes by registering new system calls with the VxWorks
system call infrastructure. For more information, see the VxWorks Kernel
Programmer’s Guide: Kernel Customization.
Monitoring System Calls
The VxWorks kernel shell provides facilities for monitoring system calls. For more
information, see the VxWorks Kernel Programmer’s Guide: Target Tools, the
syscall monitor entry in the VxWorks Kernel Shell Command Reference, and the
sysCallMonitor( ) entry in the VxWorks Kernel API Reference.
VxWorks Libraries
VxWorks distributions include libraries of routines that provide APIs for RTP
applications. Some of these routines execute entirely in the process in user mode.
Others are wrapper routines that make one or more system calls, or that add
additional functionality to one or more system calls. For example, printf( ) is a
wrapper that calls the write( ) system call. The printf( ) routine performs a good
deal of formatting, but it ultimately must call write( ) to output the string to a file
descriptor.
Library routines that do not include system calls execute entirely in user mode,
and are therefore more efficient than system calls, which include the overhead of
a trap to the kernel.
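The wrapper pattern can be sketched as follows. The writeLine( ) helper is hypothetical; like printf( ), it does a little work in user mode and then issues the write( ) system call:

```c
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

/* Sketch of the wrapper pattern: writeLine( ) is a hypothetical
 * routine that does a small amount of user-mode work (appending a
 * newline) before issuing the write( ) system call, much as
 * printf( ) ultimately does. */
ssize_t writeLine(int fd, const char *msg)
{
    ssize_t n = write(fd, msg, strlen(msg));

    if (n < 0)
        return n;               /* the system call itself failed */
    return n + write(fd, "\n", 1);
}
```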
Dinkum C and C++ Libraries
Dinkum C and C++ libraries—including embedded (abridged) C++ libraries—are
provided for VxWorks RTP application development. For more information about
these libraries, see the Dinkum API references.
The VxWorks distribution also provides a C run-time shared library feature that is
similar to that of the UNIX C run-time shared library. For information about this
library, see 4.10 Using the VxWorks Run-time C Shared Library libc.so, p.100.
For more information about C++ development, see 5. C++ Development.
Custom Libraries
For information about creating custom user-mode libraries for applications, see
4. Static Libraries, Shared Libraries, and Plug-Ins.
API Documentation
For detailed information about the routines available for use in applications, see
the VxWorks Application API Reference and the Dinkumware library references.
3.3.4 Reducing Executable File Size With the strip Facility
For production systems, it may be useful to strip executables of symbol and debug
information to reduce their size. The striparch utility can be used with the
--strip-unneeded and --strip-debug (or -d) options on any RTP executable.
The --strip-all (or -s) option should only be used with absolutely-linked
executables. For more information in this regard, see Stripping Absolutely-Linked
RTP Executables, p.36 and Caveat With Regard to Stripped Executables, p.54. For
information about absolutely-linked RTP executables and the overlapped virtual
memory model, see 2.5 About VxWorks RTP Virtual Memory Models, p.23 and
2.6 Using the Overlapped RTP Virtual Memory Model, p.26.
3.3.5 RTP Applications and Multitasking
If an application is multi-threaded (that is, it has multiple tasks), the developer
must ensure that the task executing the main( ) routine starts all the other tasks.
VxWorks can run one or more applications simultaneously. Each application can
spawn multiple tasks, as well as other processes. Application tasks are scheduled
by the kernel, independently of the process within which they execute—processes
themselves are not scheduled. In one sense, processes can be viewed as containers
for tasks.
In developing systems in which multiple applications will run, developers should
therefore consider:
the priorities of tasks running in all the different processes
any task synchronization requirements between processes as well as within
processes
For information about task priorities and synchronization, see 6.2 Tasks and
Multitasking, p.110 and 6.8 Intertask and Interprocess Communication, p.139.
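A minimal sketch of an initial task starting and synchronizing with a worker, written here with POSIX threads (which are also available to RTP applications); the startWorker( ) and worker( ) names are hypothetical:

```c
#include <pthread.h>

/* Sketch using POSIX threads: the initial task (running main( ))
 * starts a worker and waits for it to finish. startWorker( ) and
 * worker( ) are hypothetical names, not VxWorks APIs. */
static void *worker(void *arg)
{
    int *counter = arg;

    (*counter)++;              /* stand-in for real application work */
    return NULL;
}

int startWorker(int *counter)
{
    pthread_t tid;

    if (pthread_create(&tid, NULL, worker, counter) != 0)
        return -1;
    return pthread_join(tid, NULL);   /* block until the worker exits */
}
```

In a real multi-process design, the relative priorities of such tasks across all processes must be considered, as noted above.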
3.3.6 Checking for Required Kernel Support
VxWorks is a highly configurable operating system. Because RTP applications are
built independently of the operating system, the build process cannot determine if
the instance of VxWorks on which the application will eventually run has been
configured with all of the components that the application requires (for example,
networking and file systems).
It is, therefore, important for application code to check for errors indicating that
kernel facilities are not available (that is, check the return values of API calls) and
to respond appropriately. If an API requires a facility that is not configured into
the kernel, an errno value of ENOSYS is returned when the API is called.
The syscallPresent( ) routine can also be used to determine whether or not a
particular system call is present in the system.
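A sketch of the recommended error check follows; checkResult( ) and its status values are hypothetical helpers, and ENOSYS is the errno value described above:

```c
#include <errno.h>

/* Sketch: classify the result of an API call so that startup code can
 * distinguish an ordinary failure from a missing kernel component.
 * checkResult( ) and the CALL_STATUS values are hypothetical. */
typedef enum { CALL_OK, CALL_FAILED, CALL_UNSUPPORTED } CALL_STATUS;

CALL_STATUS checkResult(int ret)
{
    if (ret != -1)
        return CALL_OK;
    /* ENOSYS means the required component is not configured into
     * the kernel; any other errno is an ordinary runtime failure. */
    return (errno == ENOSYS) ? CALL_UNSUPPORTED : CALL_FAILED;
}
```

Application startup code could run such checks for each required facility and report a configuration error rather than failing later in normal operation.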
3.3.7 Using Hook Routines
For information about using hook routines, which are called during the execution
of rtpSpawn( ) and rtpDelete( ), see the VxWorks API reference for rtpHookLib
and 6.4.9 Tasking Extensions: Hook Routines, p.131.
3.3.8 Developing C++ Applications
For information about developing C++ applications, see 5. C++ Development.
3.3.9 Using POSIX Facilities
For information about POSIX APIs available with VxWorks, and a comparison of
native VxWorks and POSIX APIs, see 7. POSIX Facilities.
3.3.10 Building RTP Applications
RTP Applications can be built using Wind River Workbench or the command-line
VxWorks development environment. For information about these facilities, see the
Wind River Workbench by Example guide and the VxWorks Command-Line Tools
User’s Guide, respectively.
Note that applications that make use of share libraries or plug-ins must be built as
dynamic executables. See 4.6 Common Development Facilities, p.78 for information
about dynamic executables.
3.4 Developing Static Libraries, Shared Libraries and Plug-Ins
For information about developing libraries and plug-ins for use with RTP
applications, see 4. Static Libraries, Shared Libraries, and Plug-Ins.
3.5 Creating and Using Shared Data Regions
Shared data regions provide a means for RTP applications to share a common area
of memory with each other. Otherwise, processes are fully separated from, and
protected against, one another.
The shared data region facility provides no inherent facility for mutual exclusion.
Applications must use standard mutual exclusion mechanisms—such as public
semaphores—to ensure controlled access to a shared data region’s resources (see
6.8 Intertask and Interprocess Communication, p.139).
For systems without an MMU enabled, shared data regions simply provide a
standard programming model and separation of data for the applications, but
without the protection provided by an MMU.
A shared data region is a single block of contiguous virtual memory. Any type of
memory can be shared, such as RAM, memory-mapped I/O, flash, or VME.
Multiple shared data regions can be created with different characteristics and
different users.
A common use of a shared data region is passing large volumes of data, such as buffered video data, between applications.
The sdLib shared data region library provides the facilities for the following
activities:
Creating a shared data region.
Opening the region.
Mapping the region to a process’ memory context so that it can be accessed.
Changing the protection attributes of a region that has been mapped.
Un-mapping the region when a process no longer needs to access it.
Deleting the region when no processes are attached to it.
Operations on shared data regions are not restricted to applications—kernel tasks
may also perform these operations.
Shared data regions use memory resources from both the kernel’s and the
application’s memory space. The kernel's heap is used to allocate the shared data
object. The physical memory for the shared data region is allocated from the global
physical page pool.
When a shared data region is created, it must be named. The name is global to the
system, and provides the means by which applications identify regions to be
shared.
Shared data regions can be created in systems with and without MMU support.
Also see 6.11 Shared Data Structures, p.142 and 7.19.3 Shared Memory Objects, p.263.
3.5.1 Configuring VxWorks for Shared Data Regions
For applications to be able to use shared data region facilities, the
INCLUDE_SHARED_DATA component must be included in VxWorks.
3.5.2 Creating Shared Data Regions
Shared data regions are created with sdCreate( ). They can be created by an
application, or from a kernel task such as the shell. The region is automatically
mapped into the creator’s memory context. The sdOpen( ) routine also creates and
maps a region—if the region name used in the call does not exist in the system.
The creation routines take parameters that define the name of the region, its size
and physical address, MMU attributes, and two options that govern the region’s
persistence and availability to other processes.
The MMU attribute options define access permissions and the cache option for the
process’ page manager:
read-only
read/write
read/execute
read/write/execute
cache write-through, cache copy-back, or cache off
By default, the creator process always gets read and write permissions for the
region, regardless of the permissions set with the creation call, which affect all
client processes. The creator can, however, change its own permissions with
sdProtect( ). See Changing Shared Data Region Protection Attributes, p.53.
!WARNING: If the shell is used to create shared data regions, the optional physical
address parameter should not be used with architectures for which the
PHYS_ADDRESS type is 64 bits. The shell passes the physical address parameter as
32 bits regardless. If it should actually be 64 bits, the arguments will not be aligned
with the proper registers and unpredictable behavior will result. See the VxWorks
Architecture Supplement for the processor in question for more information.
The SD_LINGER creation option provides for the persistence of the region after all
processes have unmapped from it—the default behavior is for it to cease to exist,
all of its resources being reclaimed by the system. The second option,
SD_PRIVATE, restricts the accessibility of the region to the process that created it.
This can be useful, for example, for restricting memory-mapped I/O to a single
application.
3.5.3 Accessing Shared Data Regions
A shared data region is automatically opened and mapped to the process that
created it, regardless of whether the sdCreate( ) or sdOpen( ) routine was used.
A client process must use the region’s name with sdOpen( ) to access the region.
The region name can be hard-coded into the client process’ application, or
transmitted to the client using IPC mechanisms.
Mutual exclusion mechanisms should be used to ensure that only one application
can access the same shared data region at a time. The sdLib library does not
provide any mechanisms for doing so automatically. For more information about
mutual exclusion, see 6.8 Intertask and Interprocess Communication, p.139.
For information about accessing shared data regions from interrupt service
routines (ISRs), see the VxWorks Kernel Programmer’s Guide: Multitasking.
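The mutual-exclusion discipline can be sketched with a standard POSIX named semaphore, which can be opened by each cooperating process; the semaphore name /sdGuard and the updateSharedCounter( ) helper are hypothetical:

```c
#include <fcntl.h>
#include <semaphore.h>

/* Sketch: serialize access to data in a shared region with a public
 * (named) semaphore. The name "/sdGuard" and this helper are
 * hypothetical; each process that maps the region opens the same
 * semaphore name before touching the shared data. */
int updateSharedCounter(volatile int *counter)
{
    sem_t *guard = sem_open("/sdGuard", O_CREAT, 0600, 1);

    if (guard == SEM_FAILED)
        return -1;
    if (sem_wait(guard) != 0) {        /* enter the critical section */
        sem_close(guard);
        return -1;
    }
    (*counter)++;                      /* stand-in for work on shared data */
    sem_post(guard);                   /* leave the critical section */
    sem_close(guard);
    return 0;
}
```

VxWorks public semaphores (see 6.8 Intertask and Interprocess Communication, p.139) serve the same purpose with the native API.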
Changing Shared Data Region Protection Attributes
The MMU attributes of a shared data region can be changed with sdProtect( ). The
change can only be to a subset of the attributes defined when the region was
created. For example, if a region was created with only read and write permissions,
these can only be changed to read-only and no access, and not expanded to other
permissions. In addition, the changes are made only for the caller’s process; they
do not affect the permissions of other processes.
A set of macros is provided with the library for common sets of MMU attribute
combinations.
3.5.4 Deleting Shared Data Regions
Shared data regions can be deleted explicitly and automatically. However,
deletion of regions is restricted by various conditions, including how the region
was created, and if any processes are attached to it.
If a shared data region was created without the SD_LINGER option, the region is
deleted if:
Only one process is mapped to the region, and its application calls
sdUnmap( ).
Only one process is mapped to the region, and the process exits.
If a shared data region is created with the SD_LINGER option, it is never deleted
implicitly. The region is only deleted if sdDelete( ) is called on it after all clients
have unmapped it.
3.6 Executing RTP Applications
Because a process is an instance of a program in execution, starting and
terminating an application involves creating and deleting a process. A process
must be spawned in order to initiate execution of an application; when the
application exits, the process terminates. Processes may also be terminated
explicitly.
Processes provide the execution environment for applications. They are started
with rtpSpawn( ). The initial task for any application is created automatically in
the create phase of the rtpSpawn( ) call. This initial task provides the context
within which main( ) is called.
Caveat With Regard to Stripped Executables
Executables that have been stripped of their relocation information will not run on
a system configured with the flat virtual memory model (the default). The launch
will fail—silently if initiated from the shell’s C interpreter. The error detection and
reporting facility can be used to display the reason for failure, as follows (with
output abbreviated for purposes of clarity):
-> edrShow
[...]
rtpLoadAndRun(): RTP 0x1415010 Init Task exiting. errno = 0xba006e [...]
-> printErrno 0xba006e
errno = 0xba006e : S_loadRtpLib_NO_RELOCATION_SECTION.
Executables should only be stripped of their relocation information if they are built
as absolutely-linked executables and run on a system that is properly configured
for the overlapped virtual memory model.
For information about these topics, see 2.5 About VxWorks RTP Virtual Memory
Models, p.23 and 2.6 Using the Overlapped RTP Virtual Memory Model, p.26;
3.3.4 Reducing Executable File Size With the strip Facility, p.48; and 11. Error Detection
and Reporting.
Starting an RTP Application
An RTP application can be started and terminated interactively,
programmatically, and automatically with various facilities that act on processes.
An application can be started by:
a user from Workbench
a user from the shell or debugger with rtpSp (for the shell C interpreter) or
rtp exec (for the shell command interpreter)
other applications or from the kernel with rtpSpawn( )
one of the startup facilities that runs applications automatically at boot time
For more information, see 3.6.1 Running Applications Interactively, p.57 and
3.6.2 Running Applications Automatically, p.58.
Stopping an RTP Application
RTP applications terminate automatically when the program’s main( ) routine
returns. They can also be terminated explicitly.
Automatic Termination
By default, a process is terminated when the main( ) routine returns, because the
C compiler automatically inserts an exit( ) call at the end of main( ). This is
undesirable behavior if main( ) spawns other tasks, because terminating the
process deletes all the tasks that were running in it. To prevent this from
happening, any application that uses main( ) to spawn tasks can call taskExit( )
instead of returning as the last statement in the main( ) routine. When main( )
includes taskExit( ) as its last call, the process’ initial task can exit without the
kernel automatically terminating the process.
Explicit Termination
A process can explicitly be terminated when a task does either of the following:
Calls exit( ) to terminate the process in which it is running, regardless of
whether or not other tasks are running in the process.
Calls the kill( ) routine to terminate the specified process (using the process
ID).
Terminating processes—either programmatically or by interactive user
command—can be used as a means to update or replace application code. Once the
process is stopped, the application code can be replaced, and the process started
again using the new executable.
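Explicit termination by process ID can be sketched as follows; stopProcess( ) is a hypothetical helper around the standard kill( ) routine, shown here sending SIGTERM so the target process can run its termination handling:

```c
#include <signal.h>
#include <sys/types.h>
#include <unistd.h>

/* Sketch: terminate another process by ID. stopProcess( ) is a
 * hypothetical helper; kill( ) is the standard routine for sending
 * a signal to the process identified by pid. */
int stopProcess(pid_t pid)
{
    return kill(pid, SIGTERM);
}
```

A management task could use such a helper to stop an application before replacing its executable and spawning it again.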
Storing Application Executables
Application executables can be stored in the VxWorks ROMFS file system on the
target system, on the host development system, or on any other file system
accessible to the target system (another workstation on a network, for example).
Various combinations of startup mechanisms and storage locations can be used for
developing systems and for deployed products. For example, storing application
executables on the host system and using the kernel shell to run them is ideal for
the early phases of development because of the ease of application re-compilation
and of starting applications. Final products, on the other hand, can be configured
and built so that applications are bundled with the operating system, and started
automatically when the system boots, all independently of humans, hosts, and
hard drives.
!CAUTION: The error S_rtp_INVALID_FILE is generated when the path and name of
the RTP executable is not provided, or when the executable cannot be found using
the indicated path. RTP executable files are accessed and loaded from the VxWorks
target. Therefore, the path to the executable file must be valid from the point of
view of the target itself. Correctly specifying the path may involve including the
proper device name as part of the path. For example:
host:d:/my.vxe
3.6.1 Running Applications Interactively
Running applications interactively is most useful in the development
environment, but it can also be used to run special applications on
deployed systems that are otherwise not run as part of normal system operation
(for diagnostic purposes, for example). In the latter case, it might be advantageous
to store auxiliary applications in ROMFS; see 3.7 Bundling RTP Applications in a
System using ROMFS, p.66.
Starting Applications
From the shell, applications can be started with shell command variants of the
rtpSpawn( ) routine.
Using the traditional C interpreter, the rtpSp command is used as follows:
rtpSp "host:c:/myInstallDir/vxworks-6.1/target/usr/root/PPC32diab/bin/myVxApp.vxe first
second third"
In this example, a process is started to run the application myVxApp.vxe, which is
stored on the host system in
c:\myInstallDir\vxworks-6.x\target\usr\root\PPC32diab\bin. The application
takes command-line arguments, and in this case they are first, second, and third.
Additional arguments can also be used to specify the initial task priority, stack
size, and other rtpSpawn( ) options.
Note that some types of connections between the target and host require modifiers
to the pathname (NFS is transparent; FTP requires hostname: before the path if it
is not on the same system from which VxWorks was booted; the VxWorks
simulator requires a host: prefix; and so on).
Using the shell’s command interpreter, the application can be started in two
different ways, either directly specifying the path and name of the executable file
and the arguments (like with a UNIX shell):
host:c:/myInstallDir/vxworks-6.1/target/usr/root/PPC32diab/bin/myVxApp.vxe first second third
Or, the application can be started with the rtp exec command:
rtp exec host:c:/myInstallDir/vxworks-6.1/target/usr/root/PPC32diab/bin/myVxApp.vxe first second third
Note that you must use forward-slashes as path delimiters with the shell, even for
files on Windows hosts. The shell does not work with back-slash delimiters.
Regardless of how the process is spawned, the application runs in exactly the same
manner.
VxWorks
Application Programmer's Guide, 6.7
58
Note that you can switch from the C interpreter to the command interpreter with
the cmd command; and from the command interpreter to the C interpreter with
the C command. The command interpreter rtp exec command has options that
provide more control over the execution of an application.
Terminating Applications
An application can be stopped by terminating the process in which it is running.
Using the shell’s command interpreter, a process can be killed with the full
rtp delete command, or with either of the command shell aliases kill and rtpd. It
can also be killed with CTRL+C if it is running in the foreground (that is, it has not
been started using an ampersand after the rtp exec command and the name of the
executable—which is similar to UNIX shell command syntax for running
applications in the background).
With the shell’s C interpreter, a process can be terminated with kill( ) or
rtpDelete( ).
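Because process termination by signal follows the familiar POSIX pattern, the kill( ) route can be illustrated with a short host-side C sketch. This is not VxWorks code: terminateDemo( ) is a hypothetical name, and the sketch uses fork( ) on a POSIX host where the shell example above would use an RTP's process ID.

```c
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Host-side sketch of signal-based process termination. Spawns a
 * child process that waits indefinitely (as a running application
 * would), terminates it with SIGTERM, and returns the signal number
 * that ended it, or -1 on error. */
int terminateDemo (void)
    {
    pid_t child = fork ();
    int   status;

    if (child == 0)
        {
        pause ();                       /* child: block until signaled */
        _exit (0);
        }
    if (child < 0)
        return -1;

    kill (child, SIGTERM);              /* analogous to kill() from the shell */
    if (waitpid (child, &status, 0) != child)
        return -1;
    return WIFSIGNALED (status) ? WTERMSIG (status) : -1;
    }
```

The child dies by the signal's default action whether or not it has reached pause( ) yet, so the round trip is deterministic.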
For a description of all the ways in which a process can be terminated, see
2.2.3 RTP Termination, p.10.
And, of course, rebooting the system terminates all processes that are not
configured to restart at boot time.
3.6.2 Running Applications Automatically
Running applications automatically—without user intervention—is required for
many deployed systems. VxWorks applications can be started automatically in a
variety of ways. In addition, application executables can be stored either on a host
system—which can be useful during development even when a startup facility is
in use—or they can be stored on the target itself.
The VxWorks application startup facility is designed to serve the needs of both the
development environment and deployed systems.
For the development environment, the startup facility can be used interactively to
specify a variety of applications to be started at boot time. The operating system
does not need to be rebuilt to run different sets of applications, or to run the same
applications with different arguments or process-spawn parameters (such as the
priority of the initial task). That is, as long as VxWorks has been configured with
the appropriate startup components, and with the components required by the
applications themselves, the operating system can be completely independent and
ignorant of the applications that it will run until the moment it boots and starts
them. One might call this a blind-date scenario.
For deployed systems, VxWorks can be configured and built with statically
defined sets of applications to run at boot time (including their arguments and
process-spawn parameters). The applications can also be built into the system
image using the ROMFS file system. And this scenario might be characterized as
most matrimonial.
In this section, use of the startup facility is illustrated with applications that reside
on the host system. For information about using ROMFS to bundle applications
with the operating system, and for examples illustrating how applications in the
ROMFS file system are identified for the startup facility, see 3.7 Bundling RTP
Applications in a System using ROMFS, p.66.
Startup Facility Options
Various means can be used to identify applications to be started, as well as to
provide their arguments and process-spawn parameters for the initial application
task. Applications can be identified and started automatically at boot time using
any of the following:
- an application startup configuration parameter
- a boot loader parameter
- a VxWorks shell script
- the usrRtpAppInit( ) routine
The components that support this functionality are, respectively:
- INCLUDE_RTP_APPL_INIT_STRING
- INCLUDE_RTP_APPL_BOOTLINE
- INCLUDE_RTP_APPL_INIT_CMD_SHELL_SCRIPT (for the command interpreter; the C interpreter can also be used with other components)
- INCLUDE_RTP_APPL_USER
The boot loader parameter and the shell script methods can be used both
interactively (without modifying the operating system) and statically. Therefore,
they are equally useful for application development, and for deployed systems.
The startup configuration parameter and the usrRtpAppInit( ) routine methods
require that the operating system be re-configured and rebuilt if the developer
wants to change the set of applications, application arguments, or process-spawn
parameters.
There are no speed or initialization-order differences between the various means
of automatic application startup. All of the startup facility components provide
much the same performance.
Application Startup String Syntax
A common string syntax is used with both the startup facility configuration
parameter and the boot loader parameter for identifying applications. The basic
syntax is as follows:
#progPathName^arg1^arg2^arg3#progPathName...
This syntax involves only two special characters:
#
A pound sign identifies what immediately follows as the path and name of an
application executable.
^
A caret delimits individual arguments (if any) to the application. A caret is not
required after the final argument.
The carets are not required—spaces can be used instead—with the startup
configuration parameter, but carets must be used with the boot loader
parameter.
The following examples illustrate basic syntax usage:
#c:/apps/myVxApp.vxe
Starts c:\apps\myVxApp.vxe
#c:/apps/myVxApp.vxe^one^two^three
Starts c:\apps\myVxApp.vxe with the arguments one, two, three.
#c:/apps/myOtherVxApp.vxe
Starts c:\apps\myOtherVxApp.vxe without any arguments.
#c:/apps/myVxApp.vxe^one^two^three#c:/apps/myOtherVxApp.vxe
Starts both applications, the first one with its three arguments.
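The two special characters can be exercised with a short host-side sketch. This is illustrative only: startupStringParse( ) is a hypothetical helper name, not the VxWorks parser, and it handles only the first application entry in the string.

```c
#include <string.h>

/* Split a startup string such as "#prog1^a^b#prog2" into the
 * argument vector of its FIRST application entry. Modifies 'str' in
 * place; fills argv[] with up to maxArgs pointers (argv[0] is the
 * program path). Returns the entry count, or 0 if no '#' introducer
 * is present. Host-side illustration of the syntax only. */
int startupStringParse (char *str, char *argv[], int maxArgs)
    {
    char *app;
    char *tok;
    int   argc = 0;

    if ((app = strchr (str, '#')) == NULL)
        return 0;
    app++;                              /* skip the '#' introducer */

    /* a second '#' starts the next application's entry */
    if ((tok = strchr (app, '#')) != NULL)
        *tok = '\0';

    /* carets delimit the program path and its arguments */
    for (tok = strtok (app, "^"); tok != NULL && argc < maxArgs;
         tok = strtok (NULL, "^"))
        argv[argc++] = tok;

    return argc;
    }
```

Applied to the last example above, the helper yields the path plus the three arguments for the first application.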
The startup facility also allows for specification of rtpSpawn( ) routine parameters
with additional syntax elements:
%p=value
Sets the priority of the initial task of the process. Priorities can be in the range
of 0-255.
%s=value
Sets the stack size for the initial task of the process (an integer parameter).
%o=value
Sets the process options parameter.
%t=value
Sets task options for the initial task of the process.
When using the boot loader parameter, the option values must be either decimal
or hexadecimal numbers. When using the startup facility configuration parameter,
the code is preprocessed before compilation, so symbolic constants may be used as
well (for example, VX_FP_TASK).
The following string, for example, specifies starting c:\apps\myVxApp.vxe with
the arguments one, two, three, and an initial task priority of 125; and also starting
c:\apps\myOtherVxApp.vxe with the options value 0x10 (which stops the
process before it runs in user mode):
#c:/apps/myVxApp.vxe%p=125^one^two^three#c:/apps/myOtherVxApp.vxe%o=0x10
If the rtpSpawn( ) options are not set, the following defaults apply: the initial task
priority is 220; the initial task stack size is 64 KB; the options value is zero; and the
initial task option is VX_FP_TASK.
The maximum size of the string used in the assignment is 160 bytes, inclusive of
names, parameters, and delimiters. No spaces can be used in the assignment, so
application files should not be put in host directories for which the path includes
spaces.
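The "%x=value" option syntax can likewise be sketched on the host. The helper below (startupOptGet( ) is a hypothetical name, not a VxWorks routine) extracts one option's value from an application entry, accepting decimal or 0x-prefixed hexadecimal values as the boot loader parameter requires, and falling back to a caller-supplied default.

```c
#include <stdlib.h>
#include <string.h>

/* Given an application entry such as "c:/apps/a.vxe%p=125^one",
 * return the numeric value of option 'opt' ('p', 's', 'o', or 't'),
 * or defaultVal if the option is absent. strtol() with base 0
 * accepts both decimal and 0x-prefixed hexadecimal values.
 * Host-side illustration of the syntax only. */
long startupOptGet (const char *entry, char opt, long defaultVal)
    {
    const char *p = entry;

    while ((p = strchr (p, '%')) != NULL)
        {
        if (p[1] == opt && p[2] == '=')
            return strtol (p + 3, NULL, 0);
        p++;
        }
    return defaultVal;
    }
```

For instance, looking up 'p' in an entry carrying %p=125 yields 125, while looking up 's' in an entry without it yields the stated 64 KB default.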
Specifying Applications with a Startup Configuration Parameter
Applications can be specified with the RTP_APPL_INIT_STRING parameter of the
INCLUDE_RTP_APPL_INIT_STRING component.
The identification string must use the syntax described in Application Startup String
Syntax, p.60. And the operating system must be rebuilt thereafter.
Specifying Applications with a Boot Loader Parameter
The VxWorks boot loader includes a parameter—the s parameter—that can be
used to identify applications that should be started automatically at boot time, as
well as to identify shell scripts to be executed.1 (For information about the boot
loader, see the VxWorks Kernel Programmer’s Guide: Boot Loader.)
Applications can be specified both interactively and statically with the s
parameter. In either case, the parameter is set to the path and name of one or more
executables and their arguments (if any), as well as to the applications’
process-spawn parameters (optionally). The special syntax described above is
used to describe the applications (see Application Startup String Syntax, p.60).
This functionality is provided with the INCLUDE_RTP_APPL_BOOTLINE
component.
Note that the boot loader s parameter serves a dual purpose: to dispatch script file
names to the shell, and to dispatch application startup strings to the startup
facility. Script files used with the s parameter can only contain C interpreter
commands; they cannot include startup facility syntax (also see Specifying
Applications with a VxWorks Shell Script, p.63).
If the boot parameter is used to identify a startup script to be run at boot time as
well as applications, it must be listed before any applications. For example, to run
the startup script file myScript and myVxApp.vxe (with three arguments), the
following sequence would be required:
myScript#c:/apps/myVxApp.vxe^one^two^three
The assignment in the boot console window would look like this:
startup script (s) : myScript#c:/apps/myVxApp.vxe^one^two^three
The interactively-defined boot-loader parameters are saved in the target’s boot
media, so that the application is started automatically with each reboot.
For the VxWorks simulator, the boot parameter assignments are saved in a special
file on the host system, in the same directory as the image that was booted, for
example,
installDir/vxworks-6.x/target/proj/simpc_diab/default/nvram.vxWorks0. The
number appended to the file name is the processor ID number—the default for the
first instance of the simulator is zero.
1. In versions of VxWorks 5.x, the boot loader s parameter was used solely to specify a shell
script.
For a hardware target, applications can be identified statically. The
DEFAULT_BOOT_LINE parameter of the INCLUDE_RTP_APPL_BOOTLINE
component can be set to an identification string using the same syntax as the
interactive method. Of course, the operating system must be rebuilt thereafter.
Specifying Applications with a VxWorks Shell Script
Applications can be started automatically with a VxWorks shell script. Different
methods must be used, however, depending on whether the shell script uses
command interpreter or C interpreter commands.
If the shell script is written for the command interpreter, applications can be
identified statically.
The RTP_APPL_CMD_SCRIPT_FILE parameter of the
INCLUDE_RTP_APPL_INIT_CMD_SHELL_SCRIPT component can be set to the
location of the shell script file.
A startup shell script for the command interpreter might, for example, contain the
following line:
rtp exec c:/apps/myVxApp.vxe first second third
Note that for Windows hosts you must use either forward-slashes or double
back-slashes instead of single back-slashes as path delimiters with the shell.
If a shell script is written for the C interpreter, it can be identified interactively
using the boot loader s parameter—in a manner similar to applications—using a
sub-set of the same string syntax. A shell script for the C interpreter can also be
identified statically with the DEFAULT_BOOT_LINE parameter of the
INCLUDE_RTP_APPL_BOOTLINE component. (See Specifying Applications with a
Boot Loader Parameter, p.62 and Application Startup String Syntax, p.60.)
The operating system must be configured with the kernel shell and the C
interpreter components for use with C interpreter shell scripts (see the VxWorks
Kernel Programmer’s Guide: Target Tools).
A startup shell script file for the C interpreter could contain the following line:
rtpSp "c:/apps/myVxApp.vxe first second third"
With the shell script file c:\scripts\myVxScript, the boot loader s parameter
would be set interactively at the boot console as follows:
startup script (s) : c:/scripts/myVxScript
Note that shell scripts can be stored in ROMFS for use in deployed systems (see
3.7 Bundling RTP Applications in a System using ROMFS, p.66).
Specifying Applications with usrRtpAppInit( )
The VxWorks application startup facility can be used in conjunction with the
usrRtpAppInit( ) initialization routine to start applications automatically when
VxWorks boots. In order to use this method, VxWorks must be configured with the
INCLUDE_RTP_APPL_USER component.
For each application you wish to start, add an rtpSpawn( ) call and associated code
to the usrRtpAppInit( ) routine stub, which is located in
installDir/vxworks-6.x/target/proj/projDir/usrRtpAppInit.c.
The following example starts an application called myVxApp, with three
arguments:
void usrRtpAppInit (void)
    {
    char * vxeName = "c:/vxApps/myVxApp/PPC32diab/myVxApp.vxe";
    char * argv[5];
    RTP_ID rtpId = NULL;

    /* set the application's arguments */

    argv[0] = vxeName;
    argv[1] = "first";
    argv[2] = "second";
    argv[3] = "third";
    argv[4] = NULL;

    /* Spawn the RTP. No environment variables are passed. */

    if ((rtpId = rtpSpawn (vxeName, argv, NULL, 220, 0x10000, 0)) == NULL)
        {
        printErr ("Unable to start myVxApp application (errno = %#x)\n",
                  errno);
        }
    }
Note that in this example, the myVxApp.vxe application executable is stored on
the host system in c:\vxApps\myVxApp\PPC32diab.
The executable could also be stored in ROMFS on the target system, in which case
the assignment statement that identifies the executable would look like this:
char * vxeName = "/romfs/myVxApp.vxe";
For information about bundling applications with the system image in ROMFS,
see 3.7 Bundling RTP Applications in a System using ROMFS, p.66.
3.6.3 Spawning Tasks and Executing Routines in an RTP Application
The VxWorks kernel shell provides facilities for spawning tasks and for calling
application routines in a real-time process. These facilities are particularly useful
for debugging RTP applications.
For more information, see the VxWorks Kernel Programmer’s Guide: Target Tools and
the entries for the task spawn and func call commands in the VxWorks Kernel Shell
Command Reference.
In addition, note that the kernel shell provides facilities for monitoring system
calls. For more information, see the VxWorks Kernel Programmer’s Guide: Target
Tools, the syscall monitor entry in the VxWorks Kernel Shell Command Reference, and
the sysCallMonitor( ) entry in the VxWorks Kernel API Reference.
3.6.4 Applications and Symbol Registration
Symbol registration is the process of storing symbols in a symbol table that is
associated with a given process. Symbol registration depends on how an
application is started:
- When an application is started from the shell, symbols are registered
automatically, as is most convenient for a development environment.
- When an application is started programmatically—that is, with a call to
rtpSpawn( )—symbols are not registered by default. This saves on memory at
startup time, which is useful for deployed systems.
The registration policy for a shared library is, by default, the same as the one for
the application that loads the shared library.
The default symbol-registration policy for a given method of starting an
application can be overridden, whether the application is started interactively or
programmatically.
The shell’s command interpreter provides the rtp exec options -g for global
symbols, -a for all symbols (global and local), and -z for zero symbols. For
example:
rtp exec -a /folk/pad/tmp/myVxApp/ppc/myVxApp.vxe one two three &
The rtp symbols override command has the options -g for global symbols, -a for
all symbols (global and local), and -c to cancel the policy override.
The rtpSpawn( ) options parameter values RTP_GLOBAL_SYMBOLS (0x01) and
RTP_ALL_SYMBOLS (0x03) can be used to load global symbols, or global and local
symbols (respectively).
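Because 0x03 includes the 0x01 bit, the two values combine and test as an ordinary bit mask. A minimal host-side sketch follows; the constants are transcribed from the values quoted above rather than taken from the VxWorks headers, and rtpOptionsRegisterAll( ) is a hypothetical helper name.

```c
/* Values transcribed from the text; illustrative only. */
#define RTP_GLOBAL_SYMBOLS  0x01    /* register global symbols */
#define RTP_ALL_SYMBOLS     0x03    /* register global and local symbols */

/* Return 1 if the given rtpSpawn() options value requests
 * registration of local symbols in addition to globals. */
int rtpOptionsRegisterAll (int options)
    {
    return (options & RTP_ALL_SYMBOLS) == RTP_ALL_SYMBOLS;
    }
```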
The shell’s C interpreter command rtpSp( ) provides the same options with the
rtpSpOptions variable.
Symbols can also be registered and unregistered interactively from the shell,
which is useful for applications that have been started without symbol
registration. For example:
rtp symbols add -a -s 0x10000 -f /romfs/bin/myApp.vxe
rtp symbols remove -l -s 0x10000
rtp symbols help
Note that when the flat virtual memory model is in use, symbols should not be
stripped from executable files (.vxe files) because those files are relocatable. It is
possible to strip symbols from absolutely-linked executable files (intended for use
with the overlapped virtual memory model) because they are not relocated, but
doing so makes debugging them more difficult. The same applies to run-time
shared library files (.so files).
3.7 Bundling RTP Applications in a System using ROMFS
The ROMFS facility provides the ability to bundle RTP applications—or any other
files for that matter—with the operating system. No other file system is required
to store applications; and no storage media is required beyond that used for the
system image itself.
RTP applications do not need to be built in any special way for use with ROMFS.
As always, they are built independently of the operating system and ROMFS itself.
However, when they are added to a ROMFS directory on the host system and
VxWorks is rebuilt, a single system image that includes both VxWorks and the
application executables is created. ROMFS can be used to bundle applications
in either a system image loaded by the boot loader, or in a self-loading image (for
information about VxWorks image types, see the VxWorks Kernel Programmer’s
Guide: Kernel Facilities and Kernel Configuration).
When the system boots, the ROMFS file system and the application executables are
loaded with the kernel. Applications and operating system can therefore be
deployed as a single unit. And coupled with an automated startup facility (see
3.6 Executing RTP Applications, p.54), ROMFS provides the ability to create fully
autonomous, multi-process systems.
This section provides information about using ROMFS to store process-based
applications with the VxWorks operating system in a single system image. For
general information about ROMFS, see 10.8 Read-Only Memory File System:
ROMFS, p.359.
3.7.1 Configuring VxWorks with ROMFS
VxWorks must be configured with the INCLUDE_ROMFS component to provide
ROMFS facilities.
3.7.2 Building a System With ROMFS and Applications
Configuring VxWorks with ROMFS and applications involves several simple
steps. A ROMFS directory must be created in the BSP directory on the host system,
application files must be copied into the directory, and then VxWorks must be
rebuilt. For example:
cd c:\myInstallDir\vxworks-6.1\target\proj\wrSbc8260_diab
mkdir romfs
copy c:\myInstallDir\vxworks-6.1\target\usr\root\PPC32diab\bin\myVxApp.vxe romfs
make TOOL=diab
The contents of the romfs directory are automatically built into a ROMFS file
system and combined with the VxWorks image.
The ROMFS directory does not need to be created in the VxWorks project
directory. It can also be created in any location on (or accessible from) the host
system, and the make ROMFS_DIR macro used to identify where it is in the build
command. For example:
make TOOL=diab ROMFS_DIR="c:\allMyVxAppExes"
Note that any files located in the romfs directory are included in the system image,
regardless of whether or not they are application executables.
3.7.3 Accessing Files in ROMFS
At run time, the ROMFS file system is accessed as /romfs. The content of the
ROMFS directory can be browsed using the traditional ls and cd shell commands,
and accessed programmatically with standard file system routines, such as open( )
and read( ).
For example, if the directory
installDir/vxworks-6.x/target/proj/wrSbc8260_diab/romfs has been created on the
host, myVxApp.vxe copied to it, and the system rebuilt and booted, then using ls
from the shell looks like this:
[vxWorks]# ls /romfs
/romfs/.
/romfs/..
/romfs/myVxApp.vxe
And myVxApp.vxe can also be accessed at run time as /romfs/myVxApp.vxe by
any other applications running on the target, or by kernel modules (kernel-based
applications).
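The open( )/read( ) access pattern can be sketched with portable C. ROMFS itself is read-only, so this host-side sketch (romfsAccessDemo( ) is a hypothetical name) stands a scratch file in for the bundled one: it writes known contents, reads them back the way an application would read a /romfs file, and reports whether the round trip succeeded.

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Create a stand-in file (on the target this step is done by the
 * build, which bundles the file into ROMFS), then read it back with
 * the standard open()/read() routines. Returns 0 on success, -1 on
 * error. */
int romfsAccessDemo (void)
    {
    const char *path = "/tmp/romfsDemo.bin";   /* stand-in for /romfs/... */
    const char *data = "myVxApp payload";
    char        buf[32];
    int         fd;
    ssize_t     n;

    if ((fd = open (path, O_WRONLY | O_CREAT | O_TRUNC, 0644)) < 0)
        return -1;
    write (fd, data, strlen (data));
    close (fd);

    /* read it back as an application would read a bundled file */
    if ((fd = open (path, O_RDONLY, 0)) < 0)
        return -1;
    n = read (fd, buf, sizeof (buf) - 1);
    close (fd);
    unlink (path);
    if (n < 0)
        return -1;
    buf[n] = '\0';
    return strcmp (buf, data) == 0 ? 0 : -1;
    }
```

On the target, only the read half applies, with a path such as /romfs/myVxApp.vxe.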
3.7.4 Using ROMFS to Start Applications Automatically
ROMFS can be used with any of the application startup mechanisms simply by
referencing the local copy of the application executables. See 3.6.2 Running
Applications Automatically, p.58 for information about the various ways in which
applications can be run automatically when VxWorks boots.
4
Static Libraries, Shared Libraries, and Plug-Ins
4.1 Introduction 70
4.2 About Static Libraries, Shared Libraries, and Plug-ins 70
4.3 Additional Documentation 73
4.4 Configuring VxWorks for Shared Libraries and Plug-ins 73
4.5 Common Development Issues: Initialization and Termination 74
4.6 Common Development Facilities 78
4.7 Developing Static Libraries 78
4.8 Developing Shared Libraries 79
4.9 Developing Plug-Ins 94
4.9.4 Debugging Plug-Ins 99
4.1 Introduction
Custom static libraries, shared libraries, and plug-ins can be created for use with
RTP applications. This chapter describes their features, comparative advantages
and uses, development procedures, and debugging methods. It also describes the
C run-time shared library provided with the VxWorks distribution, which can be
used with applications as an alternative to statically linking them to C libraries.
4.2 About Static Libraries, Shared Libraries, and Plug-ins
Static libraries are linked to an application at compile time. They are also referred
to as archives. Shared libraries are dynamically linked to an application when the
application is loaded. They are also referred to as dynamically-linked libraries, or
DLLs. Plug-ins are similar in most ways to shared libraries, except that they are
loaded on demand (programmatically) by the application instead of automatically.
Both shared libraries and plug-ins are referred to generically as dynamic shared
objects.
Static libraries and shared libraries perform essentially the same function. The key
differences in their utility are as follows:
- Only the elements of a static library that are required by an application (that
is, specific .o object files within the archive) are linked with the application.
The entire library does not necessarily become part of the system. If multiple
applications (n number) in a system use the same library elements, however,
those elements are duplicated (n times) in the system—in both the storage
media and system memory.
- The dynamic linker loads the entire shared library when any part of it is
required by an application. (As with a .o object file, a shared library .so file is
an indivisible unit.) If multiple applications in a system need the shared
library, however, they share a single copy. The library code is not duplicated
in the system.
Advantages and Disadvantages of Shared Libraries and Plug-Ins
Both types of dynamic shared object—shared libraries and plug-ins—can provide
advantages of footprint reduction, flexibility, and efficiency, as follows (shared
library is used to refer to both here, except where plug-in is used specifically):
- The storage requirements of a system can be reduced because the applications
that rely on a shared library are smaller than if they were each linked with a
static library. Only one set of the required library routines is needed, and they
are provided by the run-time library file itself. The extent to which shared
libraries make efficient use of mass storage and memory depends primarily on
how many applications are using how much of a shared library, and if the
applications are running at the same time.
- Plug-ins provide flexibility in allowing for dynamic configuration of
applications—they are loaded only when needed by an application
(programmatically on demand).
- Shared libraries are efficient because their code requires fewer relocations than
standard code when loaded into RAM. Moreover, lazy binding (also known as
lazy relocation or deferred binding) allows for linking only those functions that
are required.
At the same time, shared libraries use position-independent code (PIC), which is
slightly larger than standard code, and PIC accesses to data are usually somewhat
slower than non-PIC accesses because of the extra indirection through the global
offset table (GOT). This has more impact on some architectures than on others.
Usually the difference is on the order of a fraction of a percent, but if a
time-sensitive code path in a shared library contains many references to global
functions, global data or constant data, there may be a measurable performance
penalty.
If lazy binding is used with shared libraries, it introduces non-deterministic
behavior. (For information about lazy binding, see 4.8.8 Using Lazy Binding With
Shared Libraries, p.87 and Using Lazy Binding With Plug-ins, p.96.)
!CAUTION: Applications that make use of shared libraries or plug-ins must be built
as dynamic executables to include a dynamic linker in their image. The dynamic
linker carries out the binding of the dynamic shared object and application at run
time. For more information in this regard, see 4.8.9 Developing RTP Applications
That Use Shared Libraries, p.88 and 4.9.3 Developing RTP Applications That Use
Plug-Ins, p.95.
The largest efficiency cost of shared libraries is their startup cost (as is the case
with UNIX): it is greater than for static executables because of more complex
memory setup and more I/O (file accesses).
In summary, shared libraries are most useful when the following are true:
- Many programs require a few libraries.
- Many programs that use libraries run at the same time.
- Libraries are discrete functional units with little unused code.
- Library code represents a substantial amount of total code.
Conversely, it is not advisable to use shared libraries when only one application
runs at a time, or when applications make use of only a small portion of the
routines provided by the library.
Additional Considerations
There are a number of other considerations that may affect whether to use shared
libraries (or plug-ins):
- Assembly code that refers to global functions or data must be converted by
hand into PIC in order to port it to a shared library.
- The relocation process only affects the data section of a shared library.
Read-only data identified with the const C keyword are therefore gathered
with the data section and not with the text section to allow a relocation per
executable. This means that read-only data used in shared libraries are not
protected against erroneous write operations at run-time.
- Code that has not been compiled as PIC will not work in a shared library. Code
that has been compiled as PIC does not work in an executable program, even
if the executable program is dynamic. This is because function prologues in
code compiled as PIC are edited by the dynamic linker in shared objects.
- All constructors in a shared library are executed together; hence a constructor
with high priority in one shared library may be executed after a constructor
with low priority in another shared library loaded later than the first one. All
shared library constructors are executed at the priority level of the dynamic
linker’s constructor from the point of view of the executable program.
- Dynamic shared objects are not cached (they do not linger) if no currently
executing program is using them. There is, therefore, extra processor overhead
if a shared library is loaded and unloaded frequently.
- There is a limit on the number of concurrent shared libraries, which is 1024.
This limit is imposed by the fact that the GOT has a fixed size, so that
indexing can be used to look up GOTs (which makes it fast).
4.3 Additional Documentation
The following articles provide detailed discussions of dynamic shared objects
(including recommendations for optimization) and the dynamic linker in the
context of Linux development:
Drepper, Ulrich. How to Write Shared Libraries. Red Hat, Inc. 2006.
Jelinek, Jakub. Prelink. Red Hat, Inc. 2003.
4.4 Configuring VxWorks for Shared Libraries and Plug-ins
While shared libraries and plug-ins can only be used with RTP (user mode)
applications (and not in the kernel), they do require additional kernel support for
managing their use by different processes.
Shared library support is not provided by VxWorks by default. The operating
system must be configured with the INCLUDE_SHL component.
Doing so automatically includes these components as well:
- INCLUDE_RTP, the main component for real-time process support
- INCLUDE_SHARED_DATA, for storing shared library code
- INCLUDE_RTP_HOOKS, for shared library initialization
- various INCLUDE_SC_XYZ components, for the relevant system calls
!CAUTION: There is no support for so-called far PIC on PowerPC. Some shared
libraries require the global offset table to be larger than 16,384 entries; since this is
greater than the span of a 16-bit displacement, specialized code must be used to
support such libraries.
It can also be useful to include support for relevant show routines with these
components:
- INCLUDE_RTP_SHOW
- INCLUDE_SHL_SHOW
- INCLUDE_SHARED_DATA_SHOW
Note that if you use the INCLUDE_SHOW_ROUTINES component, the three above
are automatically added.
Configuration can be simplified through the use of component bundles.
BUNDLE_RTP_DEVELOP and BUNDLE_RTP_DEPLOY provide support for shared
libraries for the development systems and for deployed systems respectively (for
more information, see Component Bundles, p.20).
For general information about configuring VxWorks for real-time processes, see
2.3 Configuring VxWorks For Real-time Processes, p.17.
4.5 Common Development Issues: Initialization and Termination
Development of static libraries, shared libraries, and plug-ins all share the issues
of initialization and termination, which are covered below. For issues specific to
development of each, see 4.7 Developing Static Libraries, p.78, 4.8 Developing Shared
Libraries, p.79, and 4.9 Developing Plug-Ins, p.94.
4.5.1 Library and Plug-in Initialization
A library or plug-in requires an initialization routine only if its operation requires
that resources be created (such as semaphores, or a data area) before its routines
are called.
If an initialization routine is required for the library (or plug-in), its prototype
should follow this convention:
void fooLibInit (void);
The routine takes no arguments and returns nothing. It can be useful to follow the
naming convention used for VxWorks libraries: nameLibInit( ), where name
is the basename of the feature. For example, fooLibInit( ) would be the
initialization routine for fooLib.
The code that calls the initialization of application libraries is generated by the
compiler. The _WRS_CONSTRUCTOR compiler macro must be used to identify the
library’s (or plug-in’s) initialization routine (or routines), as well as the order in
which they should be called. The macro takes two arguments, the name of the
routine and a rank number. The routine itself makes up the body of the macro. The
syntax is as follows:
_WRS_CONSTRUCTOR (fooLibInit, rankNumInteger)
{
/* body of the routine */
}
The following example is of a routine that creates a mutex semaphore used to
protect a scarce resource, which may be used in a transparent manner by various
features of the application.
_WRS_CONSTRUCTOR (scarceResourceInit, 101)
{
/*
* Note: a FIFO mutex is preferable to a priority-based mutex
* since task priority should not play a role in accessing the scarce
* resource.
*/
if ((scarceResourceMutex = semMCreate (SEM_DELETE_SAFE | SEM_Q_FIFO |
SEM_USER)) == NULL)
EDR_USR_FATAL_INJECT (FALSE,
"Cannot enable task protection on scarce resource\n");
}
(For information about using the error detection and reporting macro
EDR_USR_FATAL_INJECT, see 11.7 Using Error Reporting APIs in Application Code,
p.374.)
The rank number is used by the compiler to order the initialization routines. (The
rank number is referred to as a priority number in the compiler documentation.)
Rank numbers from 100 to 65,535 can be used—numbers below 100 are reserved
for VxWorks libraries. Using a rank number below 100 does not have a detrimental
impact on the kernel, but it may disturb or even prevent the initialization of the
application environment (which involves creating resources such as the heap,
semaphores, and so on).
Initialization routines are called in numerical order (from lowest to highest). When
assigning a rank number, consider whether or not the library (or plug-in) in
question is dependent on any other application libraries that should be called
before it. If so, make sure that its number is greater.
If initialization routines are assigned the same rank number, the order in which
they are run is indeterminate within that rank (that is, indeterminate relative to
each other).
4.5.2 C++ Initialization
Libraries or plug-ins written in C++ may require initialization of static
constructors for any global objects that may be used, in addition to the
initialization required for code written in C (described in 4.5.1 Library and Plug-in
Initialization, p.74).
By default, static constructors are called last, after the library’s (or plug-in’s)
initialization routine. In addition, there is no guarantee that the library’s static
constructors will be called before any static constructors in the associated
application’s code. (Functionally, they both have the default rank of last, and there
is no defined ordering within a rank.)
If you require that the initialization of static constructors be ordered, rank them
explicitly with the _WRS_CONSTRUCTOR macro. However, well-written C++
should not need a specific initialization routine if the objects and methods defined
by the library (or plug-in) are properly designed (using deferred initialization).
4.5.3 Handling Initialization Failures
Libraries and plug-ins should be designed to respond gracefully to initialization
failures. In such cases, they should do the following:
Check whether errno has been set to ENOSYS, and respond appropriately.
For system calls, this errno indicates that the required support component has
not been included in the kernel.
Release all the resources that have been created or obtained by the
initialization routine.
Use the EDR_USR_FATAL_INJECT macro to report the error. If the system has
been configured with the error detection and reporting facility, the error is
recorded in the error log (and the system otherwise responds to the error
depending on how the facility has been configured). If the system has not been
configured with the error detection and reporting facility, it attempts to print
the message to a host console by way of a serial line. For example:
if ((mutex = semMCreate (SEM_Q_PRIORITY | SEM_INVERSION_SAFE)) == NULL)
{
EDR_USR_FATAL_INJECT (FALSE, "myLib: cannot create mutex. Abort.");
}
For more information, see 11.7 Using Error Reporting APIs in Application Code,
p.374.
4.5.4 Shared Library and Plug-in Termination
Shared libraries and plug-ins are removed from memory when the only (last)
process making use of them exits. A plug-in can also be terminated explicitly when
the only application making use of it calls dlclose( ) on it.
Using Cleanup Routines
There is no library (or plug-in) termination routine facility comparable to that for
initialization routines (particularly with regard to ranking). If there is a need to
perform cleanup operations in addition to what occurs automatically with RTP
deletion (such as deleting kernel resources created by the library), then the atexit( )
routine must be used. The call to atexit( ) can be made at any time during the life of
the process, although it is preferably done by the library (or plug-in) initialization
routine. Cleanup routines registered with atexit( ) are called when exit( ) is called.
Note that if a process’ task directly calls the POSIX _exit( ) routine, none of the
cleanup routines registered with atexit( ) will be executed.
If the cleanup is specific to a task or a thread then taskDeleteHookAdd( ) or
pthread_cleanup_push( ) should be used to register a cleanup handler (for a
VxWorks task or pthread, respectively). These routines are executed in reverse
order of their registration when a process is being terminated.
4.6 Common Development Facilities
There are three alternatives for developing static libraries, shared libraries, and
plug-ins, as well as the applications that make use of them. The alternatives are as
follows:
Use Wind River Workbench. All of the build-related elements are created
automatically as part of creating library and application projects. For
information in this regard, see the Wind River Workbench by Example guide.
Use the make build rules and macros provided with the VxWorks installation
to create the appropriate makefiles, and execute the build from the command
line. For information in this regard, see the VxWorks Command-Line Tools User’s
Guide.
Write makefiles and rules from scratch, or make use of a custom or proprietary
build environment. For information in this regard, see the VxWorks
Command-Line Tools User’s Guide.
4.7 Developing Static Libraries
Static libraries (archives) are made up of routines and data that can be used by
applications, just like shared libraries. When an application is linked against a
static library at build time, however, the linker copies object code (in .o files) from
the library into the executable—they are statically linked. With shared libraries, on
the other hand, the linker does not perform this copy operation (instead it adds
information about the name of the shared library and its run-time location into the
application).
The VxWorks development environment provides simple mechanisms for
building static libraries (archives), including a useful set of default makefile rules.
Both Workbench and command line facilities can be used to build libraries. See
4.6 Common Development Facilities, p.78.
4.7.1 Initialization and Termination
For information about initialization and termination of static libraries, see
4.5 Common Development Issues: Initialization and Termination, p.74.
4.8 Developing Shared Libraries
Shared libraries are made up of routines and data that can be used by applications,
just like static libraries. When an application is linked against a shared library at
build time, however, the linker does not copy object code from the library into the
executable—they are not statically linked. Instead it copies information about the
name of the shared library (its shared object name) and its run-time location (if the
appropriate compiler option is used) into the application. This information allows
the dynamic linker to locate and load the shared library for the application
automatically at run-time.
Once loaded into memory by the dynamic linker, shared libraries are held in
sections of memory (shared data areas) that are accessible to all applications. Each
application that uses the shared library gets its own copy of the private data, which
is stored in its own memory space. When the last application that references a
shared library exits, the library is removed from memory.
4.8.1 About Dynamic Linking
The dynamic linking feature in VxWorks is based on equivalent functionality in
UNIX and related operating systems. It uses features of the UNIX-standard ELF
binary executable file format, and it uses many features of the ELF ABI standards,
although it is not completely ABI-compliant for technical reasons. The source code
for the dynamic linker comes from NetBSD, with VxWorks-specific modifications.
It provides dlopen( ) for plug-ins, and other standard features.
Dynamic Linker
An application that is built as a dynamic executable contains a dynamic linker
library that provides code to locate, read and edit dynamic shared objects at
run-time (unlike UNIX, in which the dynamic linker is itself a shared library). The
dynamic linker contains a constructor function that schedules its initialization at a
very early point in the execution of a process (during its instantiation phase). It reads
a list of shared libraries and other information about the executable file and uses
that information to make a list of shared libraries that it will load. As it reads each
shared library, it looks for more of this dynamic information, so that eventually it
has loaded all of the code and data that is required by the program and its libraries.
!CAUTION: Applications that make use of shared libraries must be built as dynamic
executables to include a dynamic linker in their image. The dynamic linker carries
out the binding of the shared library and application at run time. For more
information, see 4.8.9 Developing RTP Applications That Use Shared Libraries, p.88.
The dynamic linker makes special arrangements to share code between processes,
placing shared code in a shared memory region. The dynamic linker allocates its
memory resources from shared data regions and additional pages of memory
allocated on demand—and not from process memory—so that the use of process
memory is predictable.
Position Independent Code: PIC
Dynamic shared objects are compiled in a special way, into position-independent
code (PIC). This type of code is designed so that it requires relatively few changes
to accommodate different load addresses. A table of indirections called a global
offset table (GOT) is used to access all global functions and data. Each process that
uses a given dynamic shared object has a private copy of the library’s GOT, and
that private GOT contains pointers to shared code and data, and to private data.
When PIC must use the value of a variable, it fetches the pointer to that variable
from the GOT and de-references it. This means that when code from a shared
object is shared across processes, the same code can fetch different copies of the
analogous variable. The dynamic linker is responsible for initializing and
maintaining the GOT.
4.8.2 Configuring VxWorks for Shared Libraries
VxWorks must be configured with support for shared libraries. For information in
this regard, see 4.4 Configuring VxWorks for Shared Libraries and Plug-ins, p.73.
4.8.3 Initialization and Termination
For information about initialization and termination of shared libraries, see
4.5 Common Development Issues: Initialization and Termination, p.74.
4.8.4 About Shared Library Names and ELF Records
In order for the dynamic linker to determine that an RTP application requires a
shared library, the application must be built in such a way that the executable
includes the name of the shared library.
The name of a shared library—its shared object name—must initially be defined
when the shared library itself is built. This creates an ELF SONAME record with the
shared object name in the library’s binary file. A shared object name is therefore
often referred to simply as an soname.
The shared object name is added to an application executable when the application
is built as a dynamic object and linked against the shared library at build time. This
creates an ELF NEEDED record, which includes the name originally defined in the
library’s SONAME record. One NEEDED record is created for each shared library
against which the application is linked.
The application’s NEEDED records are used at run-time by the dynamic linker to
identify the shared libraries that it requires. The dynamic linker loads shared
libraries in the order in which it encounters NEEDED records. It executes the
constructors in each shared library in reverse order of loading. (For information
about the order in which the dynamic linker searches for shared libraries, see
Specifying Shared Library Locations: Options and Search Order, p.83.)
Note that dynamic shared objects (libraries and plug-ins) may also have NEEDED
records if they depend on other dynamic shared objects.
For information about the development process, see 4.8.5 Creating Shared Object
Names for Shared Libraries, p.81 and 4.8.9 Developing RTP Applications That Use
Shared Libraries, p.88. For examples of displaying ELF records (including SONAME
and NEEDED), see Using readelf to Examine Dynamic ELF Files, p.90.
4.8.5 Creating Shared Object Names for Shared Libraries
Each shared library must be created with a shared object name, which functions as
the run-time name of the library. The shared object name is used—together with
other mechanisms—to locate the library at run-time, and it can also be used to
identify different versions of a library.
For more information about shared object names, see 4.8.4 About Shared Library
Names and ELF Records, p.80. For information about identifying the run-time
location of shared libraries, see 4.8.7 Locating and Loading Shared Libraries at
Run-time, p.83.
Note that a plug-in does not require a shared object name. For information about
plug-ins, see 4.9 Developing Plug-Ins, p.94.